In particular, for the squared-error loss we have

  min_{α,η} (1/n) ‖y − Kα − Qη‖² + γ αᵀKα.

This is a convex optimization problem, and its solution is found by differentiating with respect to α and η and equating to zero, leading to the following system of n + m linear equations:

  [[KK + nγK, KQ], [QᵀK, QᵀQ]] [α; η] = [Ky; Qᵀy].

As long as [K Q] is of full column rank, the minimizing function is unique.

Example (Ridge Regression (cont.)). We return to the ridge regression example and identify H as the RKHS with linear kernel function κ(x, x') = xᵀx', and H₀ as the linear space of constant functions. In this case H₀ is spanned by the constant function q ≡ 1. Moreover, K = XXᵀ, and if we appeal to the representer theorem directly, the problem becomes

  min_{α,η} (1/n) ‖y − η1 − XXᵀα‖² + γ αᵀXXᵀα.

This is a convex optimization problem, and so the solution follows by taking derivatives and setting them to zero. This gives the equations

  XXᵀ(XXᵀα + η1 − y) + nγ XXᵀα = 0   and   1ᵀ(XXᵀα + η1 − y) = 0.

Note that these are equivalent to the earlier ridge regression equations (once again assuming γ > 0 and that X has full rank p). Equivalently, the solution is found by solving

  [[XXᵀXXᵀ + nγ XXᵀ, XXᵀ1], [1ᵀXXᵀ, 1ᵀ1]] [α; η] = [XXᵀy; 1ᵀy].

This is a system of n + 1 linear equations, typically of much larger dimension than the p + 1 linear equations obtained earlier. As such, one may question the practicality of reformulating the problem in this way. However, the benefit of this formulation is that the problem can be expressed entirely through the Gram matrix K, without having to explicitly compute the feature vectors, in turn permitting the (implicit) use of infinite-dimensional feature spaces.

Example (Estimating the Peaks Function). Figure ... shows the surface plot of the peaks function

  f(x) = 3(1 − x₁)² exp(−x₁² − (x₂ + 1)²) − 10(x₁/5 − x₁³ − x₂⁵) exp(−x₁² − x₂²) − (1/3) exp(−(x₁ + 1)² − x₂²).

The goal is to learn the function y = f(x) based on a small set of training data (pairs of (x, y) values). The red dots in the figure represent the data {(xᵢ, yᵢ)}, where yᵢ = f(xᵢ) and the {xᵢ} have been chosen in a quasi-random way, using Hammersley points (with bases 2 and 3) on the square [−3, 3]². Quasi-random point sets have better space-filling properties than either a regular grid of points or a set of pseudo-random points; we refer to [ ] for details. Note that there is no observation noise in this particular problem.

[Figure: Peaks function sampled at Hammersley points.]

The purpose of this example is to illustrate how, using this small data set, the entire peaks function can be approximated well using kernel methods. In particular, we use the Gaussian kernel on R² and denote by H the unique RKHS corresponding to this kernel. We omit the regularization term, and thus our objective is to find the solution to

  min_{g∈H} (1/n) Σ_{i=1}^n (yᵢ − g(xᵢ))².

By the representer theorem, the optimal function is of the form

  g(x) = Σ_{i=1}^n αᵢ κ(x, xᵢ) = Σ_{i=1}^n αᵢ exp(−‖x − xᵢ‖²/(2σ²)),

where α := [α₁,...,α_n]ᵀ is the solution to the set of linear equations KKα = Ky. Note that we are performing regression over a class of functions with an implicit feature space. Due to the representer theorem, the solution to this problem coincides with the solution to the linear regression problem for which the i-th feature (for i = 1,...,n) is chosen to be the vector [κ(x₁, xᵢ),...,κ(x_n, xᵢ)]ᵀ. The following code performs these calculations and gives the contour plots of g and the peaks function, shown in Figure ...; we see that the two are quite close. Code for the generation of Hammersley points is available from the book's GitHub site as genham.py.

peakskernel.py

from genham import hammersley
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from numpy.linalg import norm

def peaks(x, y):
    # MATLAB-style peaks surface
    return (3*(1 - x)**2 * np.exp(-x**2 - (y + 1)**2)
            - 10*(x/5 - x**3 - y**5) * np.exp(-x**2 - y**2)
            - 1/3*np.exp(-(x + 1)**2 - y**2))

n = 20                                   # number of training points (example value)
x = -3 + 6*hammersley([2, 3], n)         # Hammersley points scaled to [-3,3]^2
z = peaks(x[:, 0], x[:, 1])
xx, yy = np.mgrid[-3:3:150j, -3:3:150j]
zz = peaks(xx, yy)
plt.contour(xx, yy, zz, levels=50)

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(xx, yy, zz, rstride=1, cstride=1, color='c', alpha=0.3, linewidth=0)
ax.scatter(x[:, 0], x[:, 1], z, color='r', s=20)
plt.show()

sig = 0.3                                # kernel bandwidth parameter (example value)
def k(x1, u):
    return np.exp(-norm(x1 - u)**2/(2*sig**2))

K = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        K[i, j] = k(x[i, :], x[j, :])

alpha = np.linalg.solve(K @ K.T, K @ z)  # solve K K alpha = K z

N = xx.flatten().shape[0]
Kx = np.zeros((n, N))
for i in range(n):
    for j in range(N):
        Kx[i, j] = k(x[i, :], np.array([xx.flatten()[j], yy.flatten()[j]]))

g = Kx.T @ alpha                         # fitted values on the grid
dim = int(np.sqrt(N))
yhat = g.reshape(dim, dim)
plt.contour(xx, yy, yhat, levels=50)
plt.show()

Smoothing Cubic Splines

A striking application of kernel methods is to fitting "well-behaved" functions to data. Key examples of "well-behaved" functions are those that do not have a large second-
[Figure: Contour plots for the prediction function g (left) and the peaks function (right).]

order derivative. Consider functions g: [0, 1] → R that are twice differentiable, and define

  ‖g″‖² := ∫₀¹ (g″(x))² dx

as a measure of the size of the second derivative.

Example (Behavior of ‖g″‖²). Intuitively, the larger ‖g″‖² is, the more "wiggly" the function g will be. As an explicit example, consider g(x) = sin(ωx) for x ∈ [0, 1], where ω is a free parameter. We can explicitly compute g″(x) = −ω² sin(ωx), and consequently

  ‖g″‖² = ω⁴ ∫₀¹ sin²(ωx) dx = (ω⁴/2)(1 − sinc(2ω)).

As |ω| → ∞, the frequency of g increases and we have ‖g″‖² → ∞.

Now, in the context of data fitting, consider the following penalized least-squares optimization problem on [0, 1]:

  min_{g∈G} (1/n) Σ_{i=1}^n (yᵢ − g(xᵢ))² + γ ‖g″‖²,

where we will specify G in what follows. In order to apply the kernel machinery, we want to write this in the penalized RKHS form, for some RKHS H and null space H₀. Clearly, the norm on H should be of the form ‖g‖_H = ‖g″‖, and should be well-defined (i.e., finite and ensuring that g and g′ are absolutely continuous). This suggests that we take

  H = {g : [0, 1] → R : ‖g″‖ < ∞; g, g′ absolutely continuous; g(0) = g′(0) = 0},

with inner product

  ⟨f, g⟩_H := ∫₀¹ f″(x) g″(x) dx.

One rationale for imposing the boundary conditions g(0) = g′(0) = 0 is as follows. When expanding g about the point 0, Taylor's theorem (with integral remainder term) states that

  g(x) = g(0) + g′(0) x + ∫₀ˣ g″(s)(x − s) ds.

Imposing the condition that g(0) = g′(0) = 0 for functions in H will ensure that H ∩ H₀ = {0}, where the null space H₀ contains only linear functions, as we will see.

To see that this H is in fact an RKHS, we derive its reproducing kernel. Using integration by parts (or directly from the Taylor expansion above), write

  g(x) = ∫₀ˣ (x − s) g″(s) ds = ∫₀¹ (x − s)₊ g″(s) ds,

where z₊ := max{z, 0}. If κ is a kernel, then by the reproducing property it must hold that

  g(x) = ⟨g, κₓ⟩_H = ∫₀¹ g″(s) κₓ″(s) ds,

so that κₓ must satisfy κₓ″(s) = (x − s)₊. Therefore, noting that κ(x, u) = ⟨κₓ, κᵤ⟩_H, we have (see the exercises)

  κ(x, u) = ∫₀¹ (x − s)₊ (u − s)₊ ds = max{x, u} min{x, u}²/2 − min{x, u}³/6.

The last expression is a cubic function with quadratic and cubic terms that misses the constant and linear monomials. This is not surprising, considering the Taylor's theorem interpretation of a function g ∈ H. If we now take H₀ as the space of functions having zero second derivative,

  h(x) = η₁ + η₂ x, x ∈ [0, 1],

then the penalized problem is exactly of the required RKHS form. As a consequence of the representer theorem, the optimal solution is a linear combination of piecewise cubic functions:

  g(x) = η₁ + η₂ x + Σ_{i=1}^n αᵢ κ(x, xᵢ).

Such a function is called a cubic spline with n knots (one knot at each data point xᵢ), so called because the piecewise cubic functions between knots are required to be "tied together" at the knots. The parameters α, η are determined from the general linear system above, with K = [κ(xᵢ, xⱼ)], i, j = 1,...,n and Q whose i-th row is of the form [1, xᵢ].

Example (Smoothing Spline). Figure ... shows various cubic smoothing splines for a small data set of five points. In the figure we use the reparameterization p = 1/(1 + nγ) for the smoothing parameter; thus p ∈ [0, 1], where p = 0 means an infinite penalty for curvature (leading to the ordinary linear regression solution) and p = 1 does not penalize curvature at all, leading to a perfect fit via the so-called natural spline. Of course, the latter will generally lead to overfitting. For values of p ranging from 0 up to fairly large values the solutions remain close to the simple linear regression line, and only for p very close to 1 does the shape of the curve change significantly.
[Figure: Various cubic smoothing splines for several values of the smoothing parameter p = 1/(1 + nγ). For p = 1 the natural spline through the data points is obtained; for p = 0 the simple linear regression line is found.]

The following code first computes the matrices K and Q, and then solves the linear system for α and η. Finally, the smoothing curve is evaluated via the representer expansion on a fine grid of points, and then plotted. Note that the code plots only a single curve, corresponding to the specified value of the smoothing parameter p.

smoothspline.py

import matplotlib.pyplot as plt
import numpy as np

x = np.array([0.05, 0.2, 0.5, 0.75, 1.0])      # data (illustrative values)
y = np.array([0.4, 0.2, 0.6, 0.7, 1.0])
n = x.shape[0]
p = 0.8                                         # smoothing parameter in [0,1]
ngamma = (1 - p)/p                              # n*gamma

k = lambda x1, x2: np.max((x1, x2))*np.min((x1, x2))**2/2 - np.min((x1, x2))**3/6

K = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        K[i, j] = k(x[i], x[j])

Q = np.column_stack((np.ones(n), x))
M1 = np.hstack((K @ K + ngamma*K, K @ Q))
M2 = np.hstack((Q.T @ K, Q.T @ Q))
M = np.vstack((M1, M2))
c = np.hstack((K @ y, Q.T @ y))
ad = np.linalg.solve(M, c)                      # [alpha; eta]

# plot the curve
xx = np.arange(0, 1.01, 0.01)
N = xx.shape[0]
Qx = np.column_stack((np.ones(N), xx))
Kx = np.zeros((N, n))
for i in range(N):
    for j in range(n):
        Kx[i, j] = k(xx[i], x[j])

g = np.hstack((Kx, Qx)) @ ad
plt.ylim((0, 1.2))
plt.plot(xx, g, label='p = {}'.format(p), linewidth=2)
plt.plot(x, y, 'b.', markersize=15)
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend()
plt.show()

Gaussian Process Regression

Another application of the kernel machinery is to Gaussian process regression. A Gaussian process (GP) on a space X is a stochastic process {G_x, x ∈ X} where, for any choice of indices x₁,...,x_n, the vector [G_{x₁},...,G_{x_n}]ᵀ has a multivariate Gaussian distribution. As such, the distribution of a GP is completely specified by its mean and covariance functions μ: X → R and κ: X × X → R, respectively. The covariance function is a finite positive semidefinite function and hence, in view of the earlier theorem, can be viewed as a reproducing kernel on X.

As for ordinary regression, the objective of GP regression is to learn a regression function g that predicts a response y = g(x) for each feature vector x. This is done in a Bayesian fashion, by establishing (1) a prior pdf for g and (2) the likelihood of the data, for a given g. From these two we then derive, via Bayes' formula, the posterior distribution of g given the data. We refer to the earlier section on the general Bayesian framework.

A simple Bayesian model for GP regression is as follows. First, the prior distribution of g is taken to be the distribution of a GP with some known mean function μ and covariance function (that is, kernel) κ. Most often μ is taken to be a constant, and for simplicity of exposition we take it to be 0. The Gaussian kernel is often used for the covariance function; for radial basis function kernels (including the Gaussian kernel), points that are closer will be more highly correlated or "similar" [ ], independent of translations in space. Second, similar to standard regression, we view the observed feature vectors x₁,...,x_n as fixed and the responses y₁,...,y_n as outcomes of random variables Y₁,...,Y_n. Specifically, given g, we model the {Yᵢ} as

  Yᵢ = g(xᵢ) + εᵢ, i = 1,...,n,
where {εᵢ} ~iid N(0, σ²). To simplify the analysis, let us assume that σ² is known, so no prior needs to be specified for it. Let g := [g(x₁),...,g(x_n)]ᵀ be the (unknown) vector of regression values. Placing a GP prior on the function g is equivalent to placing a multivariate Gaussian prior on the vector g:

  g ~ N(0, K),

where the covariance matrix of g is a Gram matrix (implicitly associated with a feature map through the kernel κ), given by

  K = [κ(xᵢ, xⱼ)], i, j = 1,...,n.

The likelihood of our data given g, denoted p(y | g), is obtained directly from the model: (Y | g = g) ~ N(g, σ² I_n). Solving this Bayesian problem involves deriving the posterior distribution of g | y. To do so, we first note that, since Y has covariance matrix K + σ² I_n, the joint distribution of g and Y is again normal, with mean 0 and covariance matrix

  [[K, K], [K, K + σ² I_n]].

The posterior can then be found by conditioning on Y = y, via the multivariate normal conditioning theorem, giving

  (g | Y = y) ~ N( K(K + σ² I_n)⁻¹ y,  K − K(K + σ² I_n)⁻¹ K ).

This only gives information about g at the observed points x₁,...,x_n. It is more interesting to consider the posterior predictive distribution of Ỹ := g(x̃) for a new input x̃. We can find the corresponding posterior predictive pdf p(ỹ | y) by integrating out the joint posterior pdf p(ỹ, g | y), which is equivalent to taking the expectation of p(ỹ | g) when g is distributed according to the posterior pdf p(g | y); that is,

  p(ỹ | y) = ∫ p(ỹ | g) p(g | y) dg.

To do so more easily than by direct evaluation of this integral, we can begin with the joint distribution of [Yᵀ, Ỹ]ᵀ, which is multivariate normal with mean 0 and covariance matrix

  [[K + σ² I_n, κ̃], [κ̃ᵀ, κ(x̃, x̃)]],

where κ̃ := [κ(x̃, x₁),...,κ(x̃, x_n)]ᵀ. It now follows, again by the conditioning theorem, that (Ỹ | Y = y) has a normal distribution with mean and variance given respectively by

  μ(x̃) = κ̃ᵀ (K + σ² I_n)⁻¹ y,
  σ²(x̃) = κ(x̃, x̃) − κ̃ᵀ (K + σ² I_n)⁻¹ κ̃.

These are sometimes called the predictive mean and variance. It is important to note that we are predicting the expected response E[Ỹ | x̃] here, and not the actual response.
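The predictive equations above amount to a few lines of linear algebra. The following sketch (not the book's code; the function names, toy data, bandwidth b, and noise level sig2 are illustrative choices) computes the predictive mean and variance for a zero-mean GP with a one-dimensional Gaussian kernel.

gp_predict_sketch.py

import numpy as np

def gauss_kernel(x1, x2, b=0.2):
    # Gaussian (RBF) kernel with bandwidth b
    return np.exp(-(x1 - x2)**2 / (2*b**2))

def gp_predict(x, y, xtest, sig2=0.01, b=0.2):
    # predictive mean and variance at the test inputs
    Kn = gauss_kernel(x[:, None], x[None, :], b) + sig2*np.eye(len(x))  # K + sig^2 I
    Ks = gauss_kernel(xtest[:, None], x[None, :], b)                    # cross-kernel
    mean = Ks @ np.linalg.solve(Kn, y)                                  # kappa^T (K+sig^2 I)^{-1} y
    var = gauss_kernel(xtest, xtest, b) - np.sum(Ks * np.linalg.solve(Kn, Ks.T).T, axis=1)
    return mean, var

# toy data (illustrative)
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 30)
y = np.sin(2*np.pi*x) + 0.1*rng.standard_normal(30)
xtest = np.linspace(0, 1, 200)
mean, var = gp_predict(x, y, xtest)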
Example (GP Regression). Suppose the regression function is the sinusoid g(x) = sin(2πx) on [0, 1]. We use GP regression to estimate g, using a Gaussian kernel with a given bandwidth parameter. The explanatory variables were drawn uniformly on the interval [0, 1], and the responses were obtained from the model above with a small noise level σ. Figure ... shows samples from the prior distribution for g, as well as the data points and the true sinusoidal regression function.

[Figure: Left: samples drawn from the GP prior distribution. Right: the true regression function with the data points.]

Again assuming that the variance σ² is known, the predictive distribution, as determined by the predictive mean and variance, is shown in Figure ... for a larger (left) and smaller (right) bandwidth. Clearly, decreasing the bandwidth leads to the covariance between points x and x' decreasing at a faster rate with respect to the squared distance ‖x − x'‖², leading to a predictive mean that is less smooth.

In the above exposition, we have taken the mean function for the prior distribution of g to be identically zero. If instead we have a general mean function μ and write m := [μ(x₁),...,μ(x_n)]ᵀ, then the predictive variance remains unchanged, and the predictive mean is modified to read

  μ(x̃) + κ̃ᵀ (K + σ² I_n)⁻¹ (y − m).

Typically, the variance σ² is not known, and the kernel itself depends on several parameters, for instance a Gaussian kernel with an unknown bandwidth parameter. In the Bayesian framework, one typically specifies a hierarchical model by introducing a prior p(θ) for the vector θ of such hyperparameters. Now the GP prior (g | θ) (equivalently, the specification of p(g | θ)) and the model for the likelihood of the data given g and θ, namely p(y | g, θ), are both dependent on θ. The posterior distribution of g given y and θ is as before.

[Figure: GP regression of a synthetic data set with a larger (left) and smaller (right) bandwidth. The black dots represent the data and the blue curve is the latent function g(x) = sin(2πx). The red curve is the mean of the GP predictive distribution, and the shaded region is the confidence band corresponding to the predictive variance.]

Empirical Bayes

One approach to setting the hyperparameter θ is to determine its posterior p(θ | y) and obtain a point estimate, for instance via its maximum a posteriori estimate. However, this can be computationally demanding (see the exercises). What is frequently done in practice is to consider instead the marginal likelihood p(y | θ) and maximize this with respect to θ. This procedure is called empirical Bayes. Considering again the mean function to be identically zero, (Y | θ) is multivariate normal with mean 0 and covariance matrix K_y := K + σ² I_n, immediately giving an expression for the marginal log-likelihood:

  ln p(y | θ) = −(n/2) ln(2π) − (1/2) ln det(K_y) − (1/2) yᵀ K_y⁻¹ y.

We notice that only the second and third terms depend on θ. Considering the partial derivative of the marginal log-likelihood with respect to a single element θ of the hyperparameter vector yields

  ∂/∂θ ln p(y | θ) = −(1/2) tr[ (K_y⁻¹ − K_y⁻¹ y yᵀ K_y⁻¹) ∂K_y/∂θ ],

where ∂K_y/∂θ is the element-wise derivative of the matrix K_y with respect to θ. If these partial derivatives can be computed for each hyperparameter, gradient information can be used when maximizing the marginal log-likelihood.

Example (GP Regression (cont.)). Continuing the previous example, we plot in Figure ... the marginal log-likelihood as a function of the noise level σ² and the bandwidth parameter. The maximum is attained for parameter values very close to those used in the left panel of the previous figure, for the case where σ² was assumed to be known. We note here that the marginal log-likelihood is extremely flat, perhaps owing to the small number of points.
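As a complement to the formula above, the following sketch (not the book's code; the toy data and the grids of bandwidth and noise values are illustrative) evaluates the marginal log-likelihood over a grid of hyperparameters and picks the maximizer, which is the empirical Bayes estimate.

marginal_likelihood_sketch.py

import numpy as np

def log_marginal(x, y, b, sig2):
    # ln p(y | theta) = -n/2 ln(2 pi) - 1/2 ln det(K_y) - 1/2 y^T K_y^{-1} y
    n = len(x)
    K = np.exp(-(x[:, None] - x[None, :])**2 / (2*b**2))   # Gaussian kernel Gram matrix
    Ky = K + sig2*np.eye(n)
    _, logdet = np.linalg.slogdet(Ky)
    return -0.5*(n*np.log(2*np.pi) + logdet + y @ np.linalg.solve(Ky, y))

rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 30)
y = np.sin(2*np.pi*x) + 0.1*rng.standard_normal(30)

bs = np.linspace(0.05, 1.0, 40)          # candidate bandwidths
sig2s = np.linspace(0.001, 0.1, 40)      # candidate noise variances
L = np.array([[log_marginal(x, y, b, s2) for s2 in sig2s] for b in bs])
i, j = np.unravel_index(np.argmax(L), L.shape)
print('bandwidth:', bs[i], 'noise variance:', sig2s[j])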
[Figure: Contours of the marginal log-likelihood for the GP regression example. The maximum is denoted by a cross.]

Kernel PCA

In its basic form, kernel PCA (principal component analysis) can be thought of as PCA in feature space. The main motivation for PCA, introduced earlier, was as a dimensionality reduction technique. There, the analysis rested on an SVD of the matrix XᵀX, where the data in X was first centered via x_{i,j} ← x_{i,j} − x̄_j, with x̄_j = n⁻¹ Σᵢ x_{i,j}. What we shall do is first re-cast the problem in terms of the Gram matrix K = XXᵀ = [⟨xᵢ, xⱼ⟩] (note the different order of X and Xᵀ), and subsequently replace the inner product ⟨x, x'⟩ with κ(x, x') for a general reproducing kernel κ.

To make the link, let us start with an SVD of Xᵀ:

  Xᵀ = UDVᵀ.

The dimensions of Xᵀ, U, D, and V are d × n, d × d, d × n, and n × n, respectively. Then an SVD of XᵀX is

  XᵀX = (UDVᵀ)(UDVᵀ)ᵀ = U(DDᵀ)Uᵀ,

and an SVD of K is

  K = (UDVᵀ)ᵀ(UDVᵀ) = V(DᵀD)Vᵀ.

Let λ₁ ≥ ··· ≥ λ_r > 0 denote the non-zero eigenvalues of XᵀX (or, equivalently, of K) and denote the corresponding diagonal matrix by Λ. Without loss of generality we can assume that the eigenvector of XᵀX corresponding to λ_k is the k-th column of U, and that the k-th column of V is an eigenvector of K. Similar to the earlier PCA section, let U_k and V_k contain the first k columns of U and V, respectively, and let Λ_k be the corresponding k × k submatrix of Λ. By the SVD we have

  XᵀV_k = UDVᵀV_k = U_k Λ_k^{1/2}.

Next, consider the projection of a point x onto the k-dimensional linear space spanned by the columns of U_k, that is, onto the first k principal components. We saw earlier that this projection is simply the linear mapping x ↦ U_kᵀ x. Using the fact that U_k = XᵀV_k Λ_k^{-1/2}, we find that x is projected to a point z given by

  z = Λ_k^{-1/2} V_kᵀ X x = Λ_k^{-1/2} V_kᵀ κ_x,

where we have (suggestively) defined κ_x := Xx = [⟨x₁, x⟩,...,⟨x_n, x⟩]ᵀ. The important point is that z is completely determined by the vector of inner products κ_x and the k principal eigenvalues and (right) eigenvectors of the Gram matrix K. Note that each component z_m of z is of the form

  z_m = Σ_{i=1}^n a_{m,i} ⟨xᵢ, x⟩ =: g_m(x).

The preceding discussion assumed centering of the columns of X. Consider now an uncentered data matrix X̃. Then the centered data can be written as X = X̃ − (1/n) 1_n 1_nᵀ X̃, where 1_n 1_nᵀ is the n × n matrix of ones. Consequently,

  K = XXᵀ = (X̃ − (1/n) 11ᵀ X̃)(X̃ − (1/n) 11ᵀ X̃)ᵀ = H X̃X̃ᵀ H = H K̃ H,

where H := I_n − (1/n) 1 1ᵀ, I_n is the n × n identity matrix, 1 is the n × 1 vector of ones, and K̃ := X̃X̃ᵀ.

To generalize to the kernel setting, we replace K̃ by [κ(xᵢ, xⱼ)], i, j = 1,...,n, and set κ_x = [κ(x₁, x),...,κ(x_n, x)]ᵀ, so that Λ_k is the diagonal matrix of the k largest eigenvalues of HK̃H and V_k is the corresponding matrix of eigenvectors. Note that the "usual" PCA is recovered when we use the linear kernel κ(x, x') = xᵀx'. However, instead of having only kernels that are explicitly inner products of feature vectors, we are now permitted to implicitly use infinite feature maps (functions) by using kernels.

Example (Kernel PCA). We simulated n points x₁,...,x_n from the uniform distribution on a two-dimensional set with a radial structure, built from disks of the form B_r := {(x₁, x₂): x₁² + x₂² ≤ r²} (an inner disk together with an outer ring). We apply kernel PCA with a Gaussian kernel κ(x, x') = exp(−‖x − x'‖²/(2σ²)) and compute the functions z_m = g_m(x), m = 1, 2,.... Their density plots are shown in Figure ...; the data points are superimposed in each plot. From this we see that the principal components identify the radial structure present in the data. Finally, Figure ... shows the projections [g₁(xᵢ), g₂(xᵢ)]ᵀ of the original data points onto the first two principal components. We see that the projected points can be separated by a straight line, whereas this is not possible for the original data; see also the related clustering example.
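The following sketch (not the book's code; the data generator, bandwidth, and helper names such as kernel_pca and project are illustrative) implements the centered Gram-matrix computation and the projection z = Λ_k^{-1/2} V_kᵀ κ_x described above.

kernel_pca_sketch.py

import numpy as np

def kernel_pca(X, kappa, k):
    n = X.shape[0]
    Kt = np.array([[kappa(X[i], X[j]) for j in range(n)] for i in range(n)])  # uncentered Gram matrix
    H = np.eye(n) - np.ones((n, n))/n            # centering matrix
    K = H @ Kt @ H
    lam, V = np.linalg.eigh(K)                   # eigenvalues in ascending order
    idx = np.argsort(lam)[::-1][:k]              # k largest eigenvalues
    Lk, Vk = lam[idx], V[:, idx]
    def project(x):
        kx = np.array([kappa(xi, x) for xi in X])
        return (Vk.T @ kx) / np.sqrt(Lk)         # z = Lambda_k^{-1/2} V_k^T kappa_x
    return project

# illustrative data with radial structure: inner disk plus outer ring
rng = np.random.default_rng(0)
r = np.concatenate((0.3*np.sqrt(rng.uniform(size=100)), rng.uniform(0.8, 1.0, 100)))
phi = rng.uniform(0, 2*np.pi, 200)
X = np.c_[r*np.cos(phi), r*np.sin(phi)]
kappa = lambda x, u: np.exp(-np.sum((x - u)**2)/(2*0.3**2))
project = kernel_pca(X, kappa, 2)
Z = np.array([project(x) for x in X])            # projections onto the first two components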
[Figure: The first nine eigenfunctions, using a Gaussian kernel, for the two-dimensional data set formed by the red and cyan points.]

[Figure: Projection of the data onto the first two principal components. Observe that the projections of the inner and outer points are already well separated.]
Further Reading

For a good overview of ridge regression and the lasso, we refer the reader to [ ]. For overviews of the theory of RKHSs we refer to [ ], and for in-depth background on splines and their connection to RKHSs we refer to [ ]. For further details on GP regression we refer to [ ], and for kernel PCA in particular to [ ]. Finally, many facts about kernels and their corresponding RKHSs can be found in [ ].

Exercises

Let H be an RKHS with reproducing kernel κ. Show that κ is a positive semidefinite function.

Show that a reproducing kernel, if it exists, is unique.

Let H be a Hilbert space of functions. Recall that the evaluation functional is the map δₓ : g ↦ g(x) for a given x. Show that evaluation functionals are linear operators.

Let G₀ be the pre-RKHS constructed in the proof of the existence theorem. Thus, g ∈ G₀ is of the form g = Σᵢ αᵢ κ_{xᵢ}, and

  ⟨g, κₓ⟩_{G₀} = Σᵢ αᵢ ⟨κ_{xᵢ}, κₓ⟩_{G₀} = Σᵢ αᵢ κ(xᵢ, x) = g(x).

Therefore, we may write the evaluation functional of g ∈ G₀ at x as δₓ g := ⟨g, κₓ⟩_{G₀}. Show that δₓ is bounded on G₀ for every x; that is, |δₓ g| ≤ γ ‖g‖_{G₀} for some γ < ∞.

Continuing the previous exercise, let (f_n) be a Cauchy sequence in G₀ such that f_n(x) → 0 for all x. Show that ‖f_n‖_{G₀} → 0.

Continuing the previous two exercises, to show that the inner product on G is well defined, a number of facts have to be checked: (a) verify that the defining limit converges; (b) verify that the limit is independent of the Cauchy sequences used; (c) verify that the properties of an inner product are satisfied; the only non-trivial property to verify is that ⟨f, f⟩ = 0 if and only if f = 0.

The preceding exercises show that G defined in the proof of the existence theorem is an inner product space. It remains to prove that G is an RKHS. This requires us to prove that the inner product space G is complete (and thus Hilbert), and that its evaluation functionals are bounded and hence continuous. This is done in a number of steps: (a) show that G₀ is dense in G, in the sense that every f ∈ G is a limit point (with respect to the norm on G) of a Cauchy sequence (f_n) in G₀;
15,014 | (bshow that every evaluation functional on is continuous at the function that ise kg ( ) ( continuity of at all functions then follows automatically from linearity (cshow that is completethat isevery cauchy sequence fn converges in the norm ||| if and are kernels on and ythen ((xy)( ): (xx (yy and kx ((xy)( : (xx ) (yy are kernels on the cartesian product prove this an rkhs enjoys the following desirable smoothness propertyif (gn is sequence belonging to rkhs on xand kgn gkg then (xlimn gn (xfor all prove thisusing cauchy-schwarz let be an rd -valued random variable that is symmetric about the origin (that isx and (-xidentically distributeddenote by is its distribution and ps(tr are eit eit (dxfor rd is its characteristic function verify that (xx ps( is real-valued positive semidefinite function suppose an rkhs of functions from (with kernel kis invariant under group of transformations xthat isfor all fg and we have (if and (iih tg ig fgig show that ( xt (xx for all xx and given two hilbert spaces and gwe call mapping hilbert space isomorphism if it is (ia linear mapthat isa( bgaaf ba(gfor any fg and ab (iia surjective mapand (iiian isometrythat isfor all fg hit holds that fgih ha fagig let (equipped with the usual euclidean inner productand construct its (continuousdual space gconsisting of all continuous linear functions from to ras follows(afor each define gb via gb (xhbxi bxfor all (bequip with the inner product hgb gg ig :bg show that defined by (bgb for is hilbert space isomorphism let be an model matrix show that xx gi for is invertible as example clearly illustratesthe pdf of random variable that is symmetric about the origin is not in general valid reproducing kernel take two such iid random variables and with common pdf and define denote by psz and fz the characteristic function and pdf of zrespectively show that if psz is in ( )fz is positive semidefinite function use this to show that (xx fz ( {| }( | |/ is valid reproducing kernel hilbert space isomorphism |
15,015 | for the smoothing cubic spline of section show that (xumax{ , } min{ ,umin{ , } let be an model matrix and let be the unit-length vector with -th entry equal to one (uk kuk suppose that the -th column of is and that it is replaced with new predictor wso that we obtain the new model matrixe ( )ux (adenoting : ( vkw vk show that ex xx udduxx ( )( ( )( dx ex differs from xx by symmetric matrix of rank two in other wordsx (bsuppose that :(xx gi )- is already computed explain how the sherman-morrison formulas in theorem can be applied twice to comex gi in (( )ppute the inverse and log-determinant of the matrix computing timerather than the usual (( )pcomputing time (cwrite python program for updating matrix (xx gi )- when we change the -th column of xas shown in the following pseudo-code algorithm updating via sherman-morrison formula inputmatrices and bindex kand replacement for the -th column of outputupdated matrices and set to be the -th column of set to be the unit-length vector such that uk kuk budb dbu bdub ubd update the -th column of with return xb use algorithm from exercise to write python code that computes the ridge regression coefficient in ( and use it to replicate the results on figure the following pseudo-code (with running cost of (( ) )may help with the writing of the python code this sherman-morrison updating is not always numerically stable more numerically stable method will perform two consecutive rank-one updates of the cholesky decomposition of xx gi |
15,016 | algorithm ridge regression coefficients via sherman-morrison formula inputtraining set {xyand regularization parameter outputsolution ( xx)- xy - set to be an matrix of zeros and ( for do set to be the -th column of update {abvia algorithm with inputs {abjw (xy return consider example with diag( for some nonnegative vector so that twice the negative logarithm of the model evidence can be written as - ln (yl( : ln[ ( xsx)yln |dln |scwhere is constant that depends only on (ause the woodbury identities ( and ( to show that xsx( xdx)- ln |dln |sln | xdxdeduce that (ln ln[ycyln |ccwhere :( xdx)- (blet [ : denote the columns/predictors of show that - =ip lk vk > = explain why setting lk has the effect of excluding the -th predictor from the regression model how can this observation be used for model selection(cprove the following formulas for the gradient and hessian elements of ( )( > cy) vi cvi li cy ( > cy)( > cy ( )(vi cv vi cv li ycy ( (done method to determine which predictors in are important is to compute :argmin (ll> usingfor examplethe interior-point minimization algorithm with gradient and hessian computed from ( write python code to compute land use it to select the best polynomial model in example |
15,017 | (exercise continued consider again example with diag( for some nonnegative model-selection parameter bayesian choice for is the maximizer of the marginal likelihood ( )that isl argmax (bs ldb ds > where ln (bs np ky xbk bd- ln |dln( ps ln to maximize ( )one can use the em algorithm with and acting as latent variables in the complete-data log-likelihood ln (bs ldefine :( - xx)- :sxy :kyk yxb ( (ashow that the conditional density of the latent variables and is such that - ly gamma ls bs (buse theorem to show that the expected complete-data log-likelihood is - tr( - sln |dc where is constant that does not depend on (cuse theorem to simplify the expected complete-data log-likelihood and to show that it is maximized at li sii (bi / ) for hencededuce the following and steps in the em algorithme-step given lupdate (sbb via the formulas ( -step given (sbb )update via li sii (bi / ) (dwrite python code to compute lvia the em algorithmand use it to select the best polynomial model in example possible stopping criterion is to terminate the em iterations when ln ( lt+ ln ( lt for some small where the marginal log-likelihood is ln ( lln(npb ln |dln |sln ( / |
15,018 | in this exercise we explore how the early stopping of the gradient descent iterations (see example )xt+ xt (xt ) is (approximatelyequivalent to the global minimization of ( gkxk for certain values of the ridge regularization parameter (see example we illustrate the early stopping idea on the quadratic function ( ( ) ( )where rnxn is symmetric positive-definite (hessianmatrix with eigenvalues {lk }nk= early stopping (averify that for symmetric matrix rn such that is invertiblewe have at- ( at )( )- (blet qlqbe the diagonalization of as per theorem if show that the formula for xt is xt ( al) qu hencededuce that necessary condition for xt to converge is maxk lk (cshow that the minimizer of ( gkxk can be written as xu ( - )- qu (dfor fixed value of tlet the learning rate using part (band ( )show that if /( aas then xt xin other wordsxt is approximately equal to xfor small aprovided that is inversely proportional to |
Classification

The purpose of this chapter is to explain the mathematical ideas behind well-known classification techniques such as the naive Bayes method, linear and quadratic discriminant analysis, logistic/softmax classification, the K-nearest neighbors method, and support vector machines.

Introduction

Classification methods are supervised learning methods in which a categorical response variable Y takes one of c possible values (for example, whether a person is sick or healthy), which is to be predicted from a vector x of explanatory variables (for example, the blood pressure, age, and smoking status of the person), using a prediction function g. In this sense, g classifies the input x into one of the classes, say in the set {0,...,c−1}. For this reason we will call g a classification function, or simply a classifier. As with any supervised learning technique, the goal is to minimize the expected loss or risk

  ℓ(g) = E loss(Y, g(X)),

for some loss function loss(y, ŷ) that quantifies the impact of classifying a response y via ŷ = g(x). The natural loss function is the zero-one (also written 0-1) or indicator loss: loss(y, ŷ) := 1{y ≠ ŷ}; that is, there is no loss for a correct classification (ŷ = y) and a unit loss for a misclassification (ŷ ≠ y). In this case the optimal classifier is given in the following theorem.

Theorem (Optimal Classifier). For the loss function loss(y, ŷ) = 1{y ≠ ŷ}, an optimal classification function is

  g*(x) = argmax_{y ∈ {0,...,c−1}} P[Y = y | X = x].

Proof: The goal is to minimize ℓ(g) = E 1{Y ≠ g(X)} over all functions g taking values in {0,...,c−1}. Conditioning on X gives, by the tower property, ℓ(g) = E P[Y ≠ g(X) | X], and so minimizing ℓ(g) with respect to g can be accomplished by maximizing P[Y = g(x) | X = x] with respect to g(x), for every fixed x. In other words, take g(x) equal to the class label y for which P[Y = y | X = x] is maximal. ∎

The formulation above allows for "ties", when there is an equal probability between optimal classes for a feature vector x. Assigning one of these tied classes arbitrarily (or randomly) to x does not affect the loss, and so we assume for simplicity that g*(x) is always a scalar value. Note that, as was the case for regression, the optimal prediction function depends on the conditional pdf f(y | x) = P[Y = y | X = x]. However, since we assign x to class y if f(y | x) ≥ f(z | x) for all z, we do not need to learn the entire surface of the function f(y | x); we only need to estimate it well enough near the decision boundary {x : f(y | x) = f(z | x)}, for any choice of classes y and z. This is because the assignment divides the feature space into the regions R_y = {x : f(y | x) = max_z f(z | x)}.

Recall that for any supervised learning problem the smallest possible expected loss (that is, the irreducible risk) is given by ℓ(g*). For the indicator loss, the irreducible risk is equal to P[Y ≠ g*(X)]. This smallest possible probability of misclassification is often called the Bayes error rate.

For a given training set τ, a classifier is often derived from a pre-classifier g_τ, which is a prediction function (learner) that can take any real value, rather than only values in the set of class labels. A typical situation is the case of binary classification with labels −1 and 1, where the prediction function g_τ takes values in the interval [−1, 1] and the actual classifier is given by sign(g_τ). It will be clear from the context whether a prediction function g_τ should be interpreted as a classifier or a pre-classifier.

The indicator loss function may not always be the most appropriate choice for a given classification problem. For example, when diagnosing an illness, misclassifying a person as sick when in fact the person is healthy may be less serious than classifying the person as healthy when in fact the person is sick. Below we therefore also consider various classification metrics.

There are many ways to fit a classifier to a training set τ = {(x₁, y₁),...,(x_n, y_n)}. One approach is to use a Bayesian framework for classification, where the conditional pdf f(y | x) is viewed as a posterior pdf f(y | x) ∝ f(x | y) f(y), for a given class prior f(y) and likelihood f(x | y). Linear and quadratic discriminant analysis assume that the class of approximating functions for the conditional pdf f(x | y) is a parametric class of Gaussian densities; as a result of this choice, the marginal f(x) is approximated via a Gaussian mixture density. In contrast, in logistic or softmax classification the conditional pdf f(y | x) is approximated using a more flexible class of approximating functions, and the resulting approximation to the marginal density f(x) does not belong to a simple parametric class (such as a Gaussian mixture). As in unsupervised learning, the cross-entropy loss is the most common choice for training the learner. The K-nearest neighbors method is yet another approach to classification that makes minimal assumptions on the class G. Here the aim is to directly
estimate the conditional pdf f(y | x) from the training data, using only feature vectors in the neighborhood of x. We then explain the support vector methodology for classification, which is based on the same reproducing kernel Hilbert space ideas that proved successful for regression analysis. Finally, a versatile way to do both classification and regression is to use classification and regression trees, the topic of a later chapter; neural networks provide yet another way to perform classification.

Classification Metrics

The effectiveness of a classifier g is, theoretically, measured in terms of the risk ℓ(g), which depends on the loss function used. Fitting a classifier to iid training data τ = {(xᵢ, yᵢ)}_{i=1}^n is established by minimizing the training loss

  ℓ_τ(g) = (1/n) Σ_{i=1}^n loss(yᵢ, g(xᵢ))

over some class of functions G. As the training loss is often a poor estimator of the risk, the risk is usually estimated in the same way using instead a test set {(x'ᵢ, y'ᵢ)} that is independent of the training set.

To measure the performance of a classifier on a training or test set, it is convenient to introduce the notion of a loss matrix. Consider a classification problem with classifier g, loss function loss, and classes 0,...,c−1. If an input feature vector x is classified as ŷ = g(x) when the observed class is y, the loss incurred is, by definition, loss(y, ŷ). Consequently, we may identify the loss function with a c × c matrix L = [loss(j, k)], j, k ∈ {0,...,c−1}. For the indicator loss function, the matrix L has 0 on the diagonal and 1 everywhere else.

Another useful matrix is the confusion matrix, denoted by M, where the (j, k)-th element of M counts the number of times that, for the training or test data, the actual (observed) class is j whereas the predicted class is k. Table ... shows the confusion matrix of some dog/cat/possum classifier.

[Table: Confusion matrix for three classes (rows: actual class; columns: predicted class), with classes dog, cat, and possum.]

We can now express the classifier performance in terms of L and M as

  (1/n) Σ_{j,k} [L ⊙ M]_{j,k},

where ⊙ is the elementwise product of L and M. Note that for the indicator loss this is simply 1 − tr(M)/n, and is called the misclassification error. The expression makes it clear that both the counts and the loss are important in determining the performance of a classifier.
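As a small sketch of the estimate above (the confusion-matrix counts below are made up for illustration and are not those of the book's table), the estimated risk and the misclassification error can be computed directly from L and M:

risk_sketch.py

import numpy as np

M = np.array([[30, 2, 4],
              [8, 22, 3],
              [7, 4, 25]])             # confusion matrix (illustrative counts)
L = 1 - np.eye(3)                      # indicator (zero-one) loss matrix
n = M.sum()
risk = (L * M).sum() / n               # n^{-1} sum_{j,k} [L (elementwise*) M]_{jk}
misclass_error = 1 - np.trace(M) / n   # equals the risk for the indicator loss
print(risk, misclass_error)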
In the spirit of the table for hypothesis testing, it is sometimes useful to divide the elements of a confusion matrix into four groups. The diagonal elements are the true positive counts, that is, the numbers of correct classifications for each class. The true negative count for a class is the sum of all matrix elements that do not belong to the row or the column of that class. The false positive count for a class is the sum of the corresponding column elements, without the diagonal element. Finally, the false negative count for a class is obtained by summing the corresponding row elements (again without the diagonal element). For the dog/cat/possum classifier of Table ..., each of these counts can be read off directly from the table. In terms of the elements of the confusion matrix M = [m_{jk}], we have the following counts for class j:

  tp_j = m_{jj}                  (true positive),
  fp_j = Σ_{k≠j} m_{kj}          (column sum; false positive),
  fn_j = Σ_{k≠j} m_{jk}          (row sum; false negative),
  tn_j = n − fn_j − fp_j − tp_j  (true negative).

Note that in the binary classification case (c = 2), and using the indicator loss function, the misclassification error can be written as

  error = (fp_j + fn_j)/n,

which does not depend on which of the two classes j is considered, as fp_0 + fn_0 = fp_1 + fn_1. Similarly, the accuracy measures the fraction of correctly classified objects:

  accuracy = (tp_j + tn_j)/n = 1 − error.

In some cases, classification error (or accuracy) alone is not sufficient to adequately describe the effectiveness of a classifier. As an example, consider the following two classification problems based on a fingerprint detection system: (1) identification of authorized personnel in a top-secret military facility, and (2) identification to get an online discount for some retail chain. Both problems are binary classification problems. However, a false positive in the first problem is extremely dangerous, while a false positive in the second problem will make a customer happy.

Let us examine a classifier in the top-secret facility. The corresponding confusion matrix is given in Table ...

[Table: Confusion matrix for authorized-personnel classification (rows: actual class; columns: predicted class), with classes authorized and non-authorized.]

From the counts above we conclude that the accuracy of the classification,

  accuracy = (tp + tn)/(tp + tn + fp + fn),

is very high. However, in this particular case accuracy is a problematic metric, since the algorithm allowed non-authorized personnel to enter the facility. One way to deal with this issue is to modify the loss function, assigning a much higher loss to classifying a non-authorized person as authorized; that is, instead of the indicator loss matrix we take a loss matrix with a large entry for that type of error. An alternative approach is to keep the indicator loss function and consider additional classification metrics. Below we give a list of commonly used metrics. For simplicity, we call an object whose actual class is j a j-object.

The precision (also called positive predictive value) is the fraction of all objects classified as j that are actually j-objects:

  precision_j = tp_j/(tp_j + fp_j).

The recall (also called sensitivity) is the fraction of all j-objects that are correctly classified as such:

  recall_j = tp_j/(tp_j + fn_j).

The specificity measures the fraction of all non-j-objects that are correctly classified as such:

  specificity_j = tn_j/(fp_j + tn_j).

The F_β score is a combination of the precision and the recall and is used as a single measurement of a classifier's performance:

  F_{β,j} = (1 + β²) tp_j / ((1 + β²) tp_j + β² fn_j + fp_j).

For β = 0 we obtain the precision, and as β → ∞ we obtain the recall.

The particular choice of metric is clearly application dependent. For example, in the classification of authorized personnel in a top-secret military facility, suppose we have two classifiers: the first has the confusion matrix given in the previous table and the second has the confusion matrix given in the next table. Various metrics for these two classifiers are shown in the comparison table; in this case we prefer the second classifier.
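The per-class metrics defined above follow directly from the confusion-matrix counts. The sketch below (not the book's code; the function name and example counts are illustrative) computes them for one class of a binary confusion matrix.

metrics_sketch.py

import numpy as np

def class_metrics(M, j, beta=1.0):
    # per-class counts (rows of M: actual class, columns: predicted class)
    n = M.sum()
    tp = M[j, j]
    fp = M[:, j].sum() - tp          # column sum without the diagonal
    fn = M[j, :].sum() - tp          # row sum without the diagonal
    tn = n - tp - fp - fn
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    specificity = tn / (fp + tn)
    fbeta = (1 + beta**2)*tp / ((1 + beta**2)*tp + beta**2*fn + fp)
    return precision, recall, specificity, fbeta

M = np.array([[95, 5],
              [10, 890]])            # illustrative binary confusion matrix
print(class_metrics(M, 0))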
[Table: Confusion matrix for authorized-personnel classification using a different (second) classifier (rows: actual class; columns: predicted class).]

[Table: Comparison of the accuracy, precision, recall, and specificity obtained from the confusion matrices of the two classifiers.]

Remark (Multilabel and Hierarchical Classification). In standard classification the classes are assumed to be mutually exclusive. For example, a satellite image could be classified as "cloudy", "clear", or "foggy". In multilabel classification the classes (often called labels) do not have to be mutually exclusive. In this case the response y is a subset of some collection of labels {0,...,c−1}. Equivalently, the response can be viewed as a binary vector of length c, where the j-th element is 1 if the response belongs to label j and 0 otherwise. Again consider the satellite image example and add two labels, such as "road" and "river", to the previous three labels. Clearly, an image can contain both a road and a river; in addition, the image can be clear, cloudy, or foggy.

In hierarchical classification a hierarchical relation between classes/labels is taken into account during the classification process. Usually, the relations are modeled via a tree or a directed acyclic graph. A visual comparison between the hierarchical and non-hierarchical (flat) classification tasks for satellite image data is presented in Figure ...

[Figure: Hierarchical (left) and non-hierarchical (right) classification schemes, with a root node, rural/urban nodes, and barn/farm/skyscraper leaves. Barns and farms are common in rural areas, while skyscrapers are generally located in cities. While this relation can be clearly observed in the hierarchical scheme, the connection is missing in the non-hierarchical design.]
In multilabel classification, both the prediction ŷ := g(x) and the true response y are subsets of the label set {0,...,c−1}. A reasonable metric is the so-called exact match ratio, defined as

  exact match ratio = (1/n) Σ_{i=1}^n 1{yᵢ = ŷᵢ}.

The exact match ratio is rather stringent, as it requires a full match. In order to consider partial correctness, the following metrics can be used instead.

The accuracy is defined as the ratio of correctly predicted labels to the total number of predicted and actual labels:

  accuracy = (1/n) Σ_{i=1}^n |yᵢ ∩ ŷᵢ| / |yᵢ ∪ ŷᵢ|.

The precision is defined as the ratio of correctly predicted labels to the total number of predicted labels:

  precision = Σ_{i=1}^n |yᵢ ∩ ŷᵢ| / Σ_{i=1}^n |ŷᵢ|.

The recall is defined as the ratio of correctly predicted labels to the total number of actual labels:

  recall = Σ_{i=1}^n |yᵢ ∩ ŷᵢ| / Σ_{i=1}^n |yᵢ|.

The Hamming loss counts the average number of incorrect predictions over all classes, calculated as

  Hamming loss = (1/(nc)) Σ_{i=1}^n Σ_{j=0}^{c−1} ( 1{j ∈ ŷᵢ, j ∉ yᵢ} + 1{j ∉ ŷᵢ, j ∈ yᵢ} ).

Classification via Bayes' Rule

We saw from the optimal-classifier theorem that the optimal classifier for classes 0,...,c−1 divides the feature space into regions, depending on f(y | x), the conditional pdf of the response y given the feature vector x. In particular, if f(y | x) ≥ f(z | x) for all z, the feature vector x is classified as y. Classifying feature vectors on the basis of their conditional class probabilities is a natural thing to do, especially in a Bayesian learning context. Specifically, the conditional probability f(y | x) is interpreted as a posterior probability of the form

  g(y | x) ∝ g(x | y) g(y),

where g(x | y) is the likelihood of obtaining feature vector x from class y and g(y) is the prior probability of class y. (Here we use the Bayesian notation convention of "overloading" the symbol g.) By making various modeling assumptions about the prior (e.g., all classes a priori equally likely) and the likelihood function, one obtains the posterior pdf via Bayes' formula. A class is then assigned to a feature vector x according to the highest posterior probability; that is, we classify according to the Bayes optimal decision rule

  ŷ = argmax_y g(y | x),

which is exactly the optimal classifier above. Since the discrete density f(y | x), y = 0,...,c−1, is usually not known, the aim is to approximate it well with a function g(y | x) from some class of functions G. Note that in this context g(· | x) refers to a discrete density (probability mass function) for a given x.

Naive Bayes

Suppose a feature vector x = [x₁,...,x_p]ᵀ of p features has to be classified into one of the classes 0,...,c−1. For example, the classes could be different people and the features could be various facial measurements, such as the width of the eyes divided by the distance between the eyes, or the ratio of the nose height and mouth width. In the naive Bayes method, the class of approximating functions G is chosen such that g(x | y) = g(x₁ | y) ··· g(x_p | y); that is, conditional on the label y, all features are independent. Assuming a uniform prior for y, the posterior pdf can thus be written as

  g(y | x) ∝ Π_{j=1}^p g(x_j | y),

where the marginal pdfs g(x_j | y), j = 1,...,p, belong to a given class of approximating functions. To classify x, simply take the y that maximizes the unnormalized posterior pdf.

For instance, suppose that the approximating class is such that (x_j | y) ~ N(μ_{yj}, σ²), j = 1,...,p. The corresponding posterior pdf is then

  g(y | θ, x) ∝ exp( −Σ_{j=1}^p (x_j − μ_{yj})²/(2σ²) ) = exp( −‖x − μ_y‖²/(2σ²) ),

where μ_y := [μ_{y1},...,μ_{yp}]ᵀ and θ := {μ_0,...,μ_{c−1}, σ} collects all model parameters. The probability g(y | θ, x) is maximal when ‖x − μ_y‖² is minimal. Thus ŷ = argmin_y ‖x − μ_y‖ is the classifier that maximizes the posterior probability; that is, classify x as y when μ_y is closest to x in Euclidean distance. Of course, the parameters (here the {μ_y} and σ) are unknown and have to be estimated from the training data. We can extend the above idea to the case where also the variance depends on the class y and feature j, as in the next example.

Example (Naive Bayes Classification). Table ... lists the means and standard deviations of three normally distributed features, for three different classes. How should a given feature vector x = [x₁, x₂, x₃]ᵀ be classified? The posterior pdf is

  g(y | θ, x) ∝ (σ_{y1} σ_{y2} σ_{y3})⁻¹ exp( −Σ_{j=1}^3 (x_j − μ_{yj})²/(2σ_{yj}²) ),

where θ := {μ_{yj}, σ_{yj}} again collects all model parameters. Evaluating the (unscaled) values of g(y | θ, x) for y = 1, 2, 3, the feature vector is classified as the class attaining the largest value. The code follows.
[Table: Feature parameters, listing the mean μ_{yj} and standard deviation σ_{yj} of features j = 1, 2, 3 for each of the three classes.]

naivebayes.py

import numpy as np

x = np.array([1.6, 2.0, 4.2]).reshape(1, 3)        # feature vector to classify (example values)
mu = np.array([[1.5, 2.1, 4.0],                     # class means mu_{yj} (example values)
               [2.0, 1.5, 3.5],
               [1.0, 2.5, 4.5]])
sig = np.array([[0.3, 0.4, 0.5],                    # class standard deviations sig_{yj} (example values)
               [0.4, 0.3, 0.6],
               [0.5, 0.5, 0.4]])

f = lambda y: 1/np.prod(sig[y, :]) * np.exp(
        -np.sum((x - mu[y, :])**2 / (2*sig[y, :]**2)))

for y in range(0, 3):
    print('{:3.4f}'.format(f(y)))

Linear and Quadratic Discriminant Analysis

The Bayesian viewpoint for classification of the previous section (not limited to naive Bayes) leads in a natural way to the well-established technique of discriminant analysis. We discuss the binary classification case first, with classes 0 and 1. We consider a class of approximating functions such that, conditional on the class y ∈ {0, 1}, the feature vector X = [X₁,...,X_p]ᵀ has a N(μ_y, Σ_y) distribution:

  g(x | θ, y) = (2π)^{−p/2} |Σ_y|^{−1/2} exp( −½ (x − μ_y)ᵀ Σ_y⁻¹ (x − μ_y) ),  x ∈ Rᵖ, y ∈ {0, 1},

where θ := {μ_y, Σ_y, α_y}_{y=0,1} collects all model parameters, including the probability vector [α₀, α₁]ᵀ (that is, α_y ≥ 0 and α₀ + α₁ = 1), which defines the prior density g(y | θ) = α_y, y ∈ {0, 1}. Then the posterior density is

  g(y | θ, x) ∝ α_y g(x | θ, y),

and, according to the Bayes optimal decision rule, we classify x to come from class 1 if g(1 | θ, x) > g(0 | θ, x); or, equivalently (by taking logarithms), if

  ln α₁ − ½ ln|Σ₁| − ½ (x − μ₁)ᵀ Σ₁⁻¹ (x − μ₁) > ln α₀ − ½ ln|Σ₀| − ½ (x − μ₀)ᵀ Σ₀⁻¹ (x − μ₀).

The function

  δ_y(x) := ln α_y − ½ ln|Σ_y| − ½ (x − μ_y)ᵀ Σ_y⁻¹ (x − μ_y),  x ∈ Rᵖ,

is called the quadratic discriminant function for class y. A point x is classified to the class y for which δ_y(x) is largest. The function is quadratic in x, and so the decision boundary {x : δ₀(x) = δ₁(x)} is quadratic as well.

An important simplification arises for the case where the assumption is made that Σ₀ = Σ₁ = Σ. Now the decision boundary is the set of x for which

  ln α₁ − ½ (x − μ₁)ᵀ Σ⁻¹ (x − μ₁) = ln α₀ − ½ (x − μ₀)ᵀ Σ⁻¹ (x − μ₀).

Expanding the above expression shows that the quadratic term in x is eliminated, giving a linear decision boundary in x:

  ln α₁ + μ₁ᵀ Σ⁻¹ x − ½ μ₁ᵀ Σ⁻¹ μ₁ = ln α₀ + μ₀ᵀ Σ⁻¹ x − ½ μ₀ᵀ Σ⁻¹ μ₀.

The corresponding linear discriminant function for class y is

  δ_y(x) := ln α_y + μ_yᵀ Σ⁻¹ x − ½ μ_yᵀ Σ⁻¹ μ_y,  x ∈ Rᵖ.

Example (Linear Discriminant Analysis). Consider the case where α₀ = α₁ = 1/2 and Σ₀ = Σ₁ = Σ. The distribution of X is a mixture of two bivariate normal distributions; its pdf ½ g(x | θ, 0) + ½ g(x | θ, 1) is depicted in Figure ...

[Figure: Gaussian mixture density where the two mixture components have the same covariance matrix.]
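As a minimal sketch of the discriminant rule above (the function name and the parameter values are illustrative, not the book's), the quadratic discriminant function δ_y(x) can be evaluated for each class and the point assigned to the class with the largest value:

discriminant_sketch.py

import numpy as np

def quad_disc(x, alpha, mu, Sigma):
    # quadratic discriminant function delta_y(x)
    d = x - mu
    _, logdet = np.linalg.slogdet(Sigma)
    return np.log(alpha) - 0.5*logdet - 0.5*d @ np.linalg.solve(Sigma, d)

# illustrative parameters for two classes
alpha = [0.5, 0.5]
mu = [np.array([0.0, 0.0]), np.array([2.0, 2.0])]
Sigma = [np.eye(2), np.array([[2.0, 0.5], [0.5, 1.0]])]

x = np.array([1.0, 1.5])
scores = [quad_disc(x, alpha[y], mu[y], Sigma[y]) for y in range(2)]
yhat = int(np.argmax(scores))        # classify to the class with the largest discriminant
print(scores, yhat)

With equal covariance matrices the quadratic terms cancel and the same rule reduces to the linear discriminant function.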
We used the following Python code to make this figure (the means, common covariance matrix, and plotting ranges are example values).

ldamixture.py

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.colors import LightSource

mu1, mu2 = np.array([0.0, 0.0]), np.array([2.0, 2.0])   # component means (example values)
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])              # common covariance matrix (example values)
x, y = np.mgrid[-3:5:150j, -3:5:150j]
mvn1 = multivariate_normal(mu1, Sigma)
mvn2 = multivariate_normal(mu2, Sigma)
xy = np.hstack((x.reshape(-1, 1), y.reshape(-1, 1)))
z = 0.5*mvn1.pdf(xy).reshape(x.shape) + 0.5*mvn2.pdf(xy).reshape(x.shape)

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ls = LightSource(azdeg=180, altdeg=45)
cols = ls.shade(z, cmap=plt.cm.winter)
ax.plot_surface(x, y, z, rstride=1, cstride=1, linewidth=0, antialiased=False, facecolors=cols)
plt.show()

The following Python code, which imports the previous code, draws a contour plot of the mixture density, simulates 1000 data points from the mixture density, and draws the decision boundary. To compute and display the linear decision boundary, let a = [a₁, a₂]ᵀ = Σ⁻¹(μ₁ − μ₂) and b = ½ (μ₂ᵀ Σ⁻¹ μ₂ − μ₁ᵀ Σ⁻¹ μ₁). Then the decision boundary can be written as aᵀx + b = 0 or, equivalently, x₂ = −(a₁ x₁ + b)/a₂. We see in Figure ... that the decision boundary nicely separates the two modes of the mixture density.

lda.py

from ldamixture import *
from numpy.random import rand
from numpy.linalg import inv

fig = plt.figure()
plt.contourf(x, y, z, cmap=plt.cm.Blues, alpha=0.9, extend='both')
plt.ylim(-2.0, 4.0)
plt.xlim(-2.0, 4.0)

for i in range(1000):
    if rand() < 0.5:
        u = np.random.multivariate_normal(mu1, Sigma, 1)
        plt.plot(u[0][0], u[0][1], '.r', alpha=0.4)
    else:
        u = np.random.multivariate_normal(mu2, Sigma, 1)
        plt.plot(u[0][0], u[0][1], '+k', alpha=0.6)

a = inv(Sigma) @ (mu1 - mu2)
b = 0.5*(mu2.reshape(1, 2) @ inv(Sigma) @ mu2.reshape(2, 1)
         - mu1.reshape(1, 2) @ inv(Sigma) @ mu1.reshape(2, 1))
xx = np.linspace(-2, 4, 100)
yy = (-(a[0]*xx + b)/a[1]).flatten()
plt.plot(xx, yy, 'm')
plt.show()

[Figure: The linear discriminant boundary lies between the two modes of the mixture density and is linear.]

To illustrate the difference between the linear and quadratic cases, we specify different covariance matrices for the mixture components in the next example.

Example (Quadratic Discriminant Analysis). As in the previous example we consider a mixture of two Gaussians, but now with different covariance matrices. Figure ... shows the quadratic decision boundary. The Python code follows.

[Figure: Quadratic decision boundary.]
qda.py

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal

mu1, mu2 = np.array([0.0, 0.0]), np.array([2.0, 2.0])   # component means (example values)
Sigma1 = np.array([[1.0, 0.3], [0.3, 1.0]])             # class-dependent covariances (example values)
Sigma2 = np.array([[2.0, -0.5], [-0.5, 0.5]])
x, y = np.mgrid[-3:5:150j, -3:5:150j]
mvn1 = multivariate_normal(mu1, Sigma1)
mvn2 = multivariate_normal(mu2, Sigma2)
xy = np.hstack((x.reshape(-1, 1), y.reshape(-1, 1)))
z = 0.5*mvn1.pdf(xy).reshape(x.shape) + 0.5*mvn2.pdf(xy).reshape(x.shape)
plt.contour(x, y, z)
# the quadratic boundary is where the two (equally weighted) component densities are equal
zz = mvn1.pdf(xy).reshape(x.shape) - mvn2.pdf(xy).reshape(x.shape)
plt.contour(x, y, zz, levels=[0], linestyles='dashed', linewidths=2, colors='m')
plt.show()

Of course, in practice the true parameter θ = {α_y, μ_y, Σ_y}, y = 0,...,c−1, is not known and must be estimated from the training data, for example by minimizing the cross-entropy training loss

  −(1/n) Σ_{i=1}^n ln g(xᵢ, yᵢ | θ)

with respect to θ, where

  ln g(x, y | θ) = ln α_y − ½ ln|Σ_y| − ½ (x − μ_y)ᵀ Σ_y⁻¹ (x − μ_y) − (p/2) ln(2π).

The corresponding estimates of the model parameters (see the exercises) are

  α̂_y = n_y/n,
  μ̂_y = (1/n_y) Σ_{i: yᵢ = y} xᵢ,
  Σ̂_y = (1/n_y) Σ_{i: yᵢ = y} (xᵢ − μ̂_y)(xᵢ − μ̂_y)ᵀ,

for y = 0,...,c−1, where n_y := Σᵢ 1{yᵢ = y}. For the case where Σ_y = Σ for all y, we have Σ̂ = Σ_y α̂_y Σ̂_y.

When c classes are involved, the classification procedure carries through in exactly the same way, leading to quadratic and linear discriminant functions for each class. The space Rᵖ is now partitioned into c regions, determined by the linear or quadratic boundaries determined by each pair of Gaussians.
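The estimators above translate directly into code. The following sketch (not the book's code; the function names, synthetic data, and parameter values are illustrative) fits α̂_y, μ̂_y, Σ̂_y from labeled data and classifies a new point with the quadratic discriminant rule.

qda_fit_sketch.py

import numpy as np

def fit_qda(X, y, c):
    # estimates alpha_y = n_y/n, mu_y = class mean, Sigma_y = class covariance
    alpha, mu, Sigma = [], [], []
    for k in range(c):
        Xk = X[y == k]
        alpha.append(len(Xk)/len(X))
        mu.append(Xk.mean(axis=0))
        D = Xk - mu[-1]
        Sigma.append(D.T @ D / len(Xk))
    return alpha, mu, Sigma

def classify(x, alpha, mu, Sigma):
    # assign x to the class with the largest quadratic discriminant delta_y(x)
    scores = []
    for a, m, S in zip(alpha, mu, Sigma):
        d = x - m
        _, logdet = np.linalg.slogdet(S)
        scores.append(np.log(a) - 0.5*logdet - 0.5*d @ np.linalg.solve(S, d))
    return int(np.argmax(scores))

# illustrative synthetic data
rng = np.random.default_rng(3)
X0 = rng.multivariate_normal([0, 0], np.eye(2), 100)
X1 = rng.multivariate_normal([2, 2], [[2, 0.5], [0.5, 1]], 100)
X = np.vstack((X0, X1)); y = np.repeat([0, 1], 100)
alpha, mu, Sigma = fit_qda(X, y, 2)
print(classify(np.array([1.0, 1.0]), alpha, mu, Sigma))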
For the linear discriminant case (that is, when Σ_y = Σ for all y), it is convenient to first "whiten" or sphere the data, as follows. Let B be an invertible matrix such that Σ = BBᵀ, obtained, for example, via the Cholesky method. We linearly transform each data point x to x′ := B⁻¹x and each mean μ_y to ν_y := B⁻¹μ_y. Let the random vector X be distributed according to the mixture pdf

  f(x | θ) := Σ_y α_y (2π)^{−p/2} |Σ|^{−1/2} exp( −½ (x − μ_y)ᵀ Σ⁻¹ (x − μ_y) ).

Then, by the transformation theorem, the vector X′ = B⁻¹X has density

  f_{X′}(x′ | θ) = Σ_y α_y (2π)^{−p/2} |Σ|^{−1/2} exp( −½ (Bx′ − μ_y)ᵀ (BBᵀ)⁻¹ (Bx′ − μ_y) ) |det B|
               = Σ_y α_y (2π)^{−p/2} exp( −½ ‖x′ − ν_y‖² ).

This is the pdf of a mixture of standard p-dimensional normal distributions. The name "sphering" derives from the fact that the contours of each mixture component are perfect spheres. Classification of the transformed data is now particularly easy: classify x′ as

  ŷ := argmin_y { ½ ‖x′ − ν_y‖² − ln α_y }.

Note that this rule only depends on the prior probabilities and the distance from x′ to the transformed means {ν_y}.

This procedure can lead to a significant dimensionality reduction of the data. Namely, the data can be projected onto the space spanned by the differences between the mean vectors {ν_y}. When there are c classes, this is a (c − 1)-dimensional space, as opposed to the p-dimensional space of the original data. We explain the precise ideas via an example.

Example (Classification after Data Reduction). Consider an equal mixture of three 3-dimensional Gaussian distributions with identical covariance matrices. After sphering the data, the covariance matrices are all equal to the identity matrix. Denote the mean vectors of the sphered data by ν₁, ν₂, and ν₃ (given in the code below). The left panel of Figure ... shows the 3-dimensional (sphered) data from each of the three classes.

[Figure: Left: original data. Right: projected data.]

The data are stored in three matrices X1, X2, and X3. Here is how the data was generated and plotted (the sample size and means are example values).

datared.py

import numpy as np
from numpy.random import randn
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

n = 100                                            # sample size per class (example value)
mu1 = np.array([1.0, 2.0, -2.0])                   # sphered means (example values)
mu2 = np.array([2.0, -1.0, 1.0])
mu3 = np.array([1.0, 1.0, 1.0])
X1 = randn(n, 3) + mu1
X2 = randn(n, 3) + mu2
X3 = randn(n, 3) + mu3

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot(X1[:, 0], X1[:, 1], X1[:, 2], 'r.', alpha=0.5, markersize=2)
ax.plot(X2[:, 0], X2[:, 1], X2[:, 2], 'b.', alpha=0.5, markersize=2)
ax.plot(X3[:, 0], X3[:, 1], X3[:, 2], 'g.', alpha=0.5, markersize=2)
ax.set_xlim(-4, 5); ax.set_ylim(-4, 5); ax.set_zlim(-5, 4)
plt.show()

Since we have an equal mixture, we classify each data point according to the closest distance to ν₁, ν₂, or ν₃. We can achieve a reduction in the dimensionality of the data by projecting the data onto the two-dimensional affine space spanned by the {νᵢ}; in fact, one may just as well project the data onto the subspace spanned by the vectors ν₂ − ν₁ and ν₃ − ν₁. Let W = [ν₂ − ν₁, ν₃ − ν₁] be the matrix whose columns are these two vectors. The orthogonal projection matrix onto the subspace spanned by the columns of W is

  P = W(WᵀW)⁻¹Wᵀ.

Let W = UDVᵀ be the singular value decomposition of W. Then P can also be written as

  P = UD(DᵀD)⁻¹DᵀUᵀ.

Note that D has dimension 3 × 2, so it is not square. The first two columns of U, say u₁ and u₂, form an orthonormal basis of the subspace. What we want to do is rotate this subspace to the plane, mapping u₁ and u₂ to [1, 0, 0]ᵀ and [0, 1, 0]ᵀ, respectively. This is achieved via the rotation matrix U⁻¹ = Uᵀ, giving the skewed projection matrix UᵀP = D(DᵀD)⁻¹DᵀUᵀ, whose 3rd row only contains zeros. Applying this matrix to all the data points, and ignoring the 3rd component of the projected points (which is 0), gives the right panel of Figure ... We see that the projected points remain well separated. We have achieved a dimensionality reduction of the data while retaining all the information required for classification. Here is the rest of the Python code.

dataproj.py

from datared import *
from numpy.linalg import svd, pinv

W = np.hstack(((mu2 - mu1).reshape(3, 1), (mu3 - mu1).reshape(3, 1)))
U, _, _ = svd(W)                 # we only need U
P = W @ pinv(W)                  # orthogonal projection onto the column space of W
R = U.T @ P                      # skewed projection; its third row is zero
RX1 = (R @ X1.T).T
RX2 = (R @ X2.T).T
RX3 = (R @ X3.T).T
plt.plot(RX1[:, 0], RX1[:, 1], 'r.', alpha=0.5, markersize=2)
plt.plot(RX2[:, 0], RX2[:, 1], 'b.', alpha=0.5, markersize=2)
plt.plot(RX3[:, 0], RX3[:, 1], 'g.', alpha=0.5, markersize=2)
plt.show()

Logistic Regression and Softmax Classification

In an earlier example we introduced the logistic (logit) regression model as a generalized linear model where, conditional on a p-dimensional feature vector x, the random response Y has a Ber(h(xᵀβ)) distribution with h(z) = 1/(1 + e⁻ᶻ). The parameter β was then learned from the training data by maximizing the likelihood of the training responses or, equivalently, by minimizing the supervised version of the cross-entropy training loss

  −(1/n) Σ_{i=1}^n ln g(yᵢ | β, xᵢ),

where g(1 | β, x) = 1/(1 + e^{−xᵀβ}) and g(0 | β, x) = e^{−xᵀβ}/(1 + e^{−xᵀβ}) = 1 − g(1 | β, x). In particular, we have

  ln [ g(1 | β, x)/g(0 | β, x) ] = xᵀβ;

in other words, the log-odds ratio is a linear function of the feature vector. As a consequence, the decision boundary {x : g(1 | β, x) = g(0 | β, x)} is the hyperplane xᵀβ = 0. Note that x typically includes the constant feature 1; if the constant feature is considered separately, the boundary is an affine hyperplane in the space of the remaining features.

Suppose that training on τ = {(xᵢ, yᵢ)} yields the estimate β̂, with the corresponding learner g_τ(x) = 1/(1 + e^{−xᵀβ̂}). The learner can be used as a pre-classifier, from which we obtain the classifier 1{g_τ(x) > 1/2} or, equivalently,

  ŷ := argmax_{y ∈ {0,1}} g_τ(y | x),

in accordance with the fundamental classification rule. The above classification methodology for the logit model can be generalized to the multi-logit model, where the response takes values in the set {0,...,c−1}. The key idea is
to replace the single log-odds ratio with

  ln [ g(y | W, b, x)/g(0 | W, b, x) ] = z_{y+1},  y = 1,...,c−1,

where the matrix W and the vector b ∈ R^{c−1} reparameterize the coefficient vectors of all classes, so that (recall that x contains the constant feature) Wx̃ + b collects the linear terms for classes 1,...,c−1. Observe that the random response Y is assumed to have a conditional probability distribution for which the log-odds ratio with respect to a class y ≠ 0 and the "reference" class (in this case 0) is linear. The separating boundaries between pairs of classes are again affine hyperplanes. The model completely specifies the distribution of Y, namely

  g(y | W, b, x) = exp(z_{y+1}) / Σ_{k=1}^c exp(z_k),

where z₁ is an arbitrary constant, say z₁ = 0, corresponding to the "reference" class 0, and [z₂,...,z_c]ᵀ := Wx̃ + b. Note that g(y | W, b, x) is the (y + 1)-st component of softmax(z), where

  softmax: z ↦ [exp(z₁),...,exp(z_c)]ᵀ / Σ_{k=1}^c exp(z_k)

is the softmax function and z = [z₁,...,z_c]ᵀ. Finally, we can write the classifier as

  ŷ = argmax_{y ∈ {0,...,c−1}} [softmax(z)]_{y+1}.

In summary, we have the following sequence of mappings transforming the input x into the output ŷ:

  x ↦ z ↦ softmax(z) ↦ argmax_{y ∈ {0,...,c−1}} [softmax(z)]_{y+1} = ŷ.

Later we will revisit the multi-logit model and reinterpret this sequence of mappings as a neural network. In the context of neural networks, W is called a weight matrix and b is called a bias vector.

The parameters W and b have to be learned from the training data, which involves minimization of the supervised version of the cross-entropy training loss

  −(1/n) Σ_{i=1}^n ln g(yᵢ | W, b, xᵢ).

Using the softmax function, the cross-entropy loss for a response y and feature vector x simplifies to

  −ln g(y | W, b, x) = −z_{y+1} + ln Σ_{k=1}^c exp(z_k).

The discussion on training is postponed until the neural networks chapter, where we reinterpret the multi-logit model as a neural net, which can be trained using the limited-memory BFGS method. Note that in the binary case (c = 2), where there is only one vector β to be estimated, it was already established earlier that minimization of the cross-entropy training loss is equivalent to likelihood maximization.
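The forward mapping and the simplified cross-entropy loss above can be sketched in a few lines. This is not the book's training code (training is deferred to the neural-network chapter); the function names, the parameter values, and the toy input are illustrative choices.

softmax_sketch.py

import numpy as np

def softmax(z):
    # numerically stable softmax
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(W, b, xt):
    # xt: features without the constant; z_1 = 0 corresponds to the reference class 0
    z = np.concatenate(([0.0], W @ xt + b))
    return softmax(z)

def cross_entropy(W, b, X, y):
    # (1/n) sum_i [ -z_{y_i+1} + ln sum_k exp(z_k) ]
    loss = 0.0
    for xt, yi in zip(X, y):
        z = np.concatenate(([0.0], W @ xt + b))
        loss += -z[yi] + z.max() + np.log(np.exp(z - z.max()).sum())
    return loss / len(y)

# illustrative parameters: c = 3 classes, two non-constant features
W = np.array([[0.5, -1.0], [1.5, 0.3]])
b = np.array([0.1, -0.2])
x = np.array([0.4, 1.2])
probs = predict(W, b, x)
print(probs, int(np.argmax(probs)))   # class probabilities and classified label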
15,036 | -nearest neighbors -nearest neighbors classification let {(xi yi )}ni= be the training setwith yi { }and let be new feature vector define ( ( (nas the feature vectors ordered by closeness to in some distance dist(xxi ) the euclidean distance kxx let ( :{( ( ( ( (ky( )be the subset of that contains feature vectors xi that are closest to then the -nearest neighbors classification rule classifies according to the most frequently occurring class labels in (xif two or more labels receive the same number of votesthe feature vector is classified by selecting one of these labels randomly with equal probability for the case the set (xcontains only one elementsay ( )and is classified as this divides the space into regions ri { dist(xxi dist(xx ) } for feature space with the euclidean distancethis gives voronoi tessellation of the feature spacesimilar to what was done for vector quantization in section example (nearest neighbor classificationthe python program below simulates random points above and below the line points above the line have label and points below this line have label figure shows the voronoi tessellation obtained from the -nearest neighbor classification figure the -nearest neighbor algorithm divides up the space into voronoi cells nearestnb py import numpy as np from numpy random import rand randn import matplotlib pyplot as plt from scipy spatial import voronoi voronoi_plot_ |
15,037 | np random seed ( randn ( , np zeros (mpre allocate list for in range ( )if rand (< [ , [ix[ , np absrandn ()) elsex[ , [ix[ , np absrandn ()) vor voronoi (xplt_options {'show_vertices ':false 'show_points ':false 'line_alpha ': fig voronoi_plot_ (vor *plt_options plt plot( [ == , [ == , 'bo' [ == , [ == , 'rs'markersize = support vector machine suppose we are given the training set {(xi yi )}ni= where each response yi takes either the value - or and we wish to construct classifier taking values in {- as this merely involves relabeling of the - classification problem in section the optimal classification function for the indicator loss { }isby theorem equal to if [ / ( - if [ / it is not difficult to showsee exercise that the function gcan be viewed as the minimizer of the risk for the hinge loss functionloss( , ( yb ):max{ yb }over all prediction functions (not necessarily taking values only in the set {- }that isgargmin ( ( )) ( given the training set twe can approximate the risk `(ge ( ( ))with the training loss ( yi (xi ))` (gn = and minimize this over (smallerclass of functions to obtain the optimal prediction function gt finallyas the prediction function gt generally is not classifier by itself (it usually does not only take values - or )we take the classifier sign gt ( the reason why we use responses - and hereinstead of and is that the notation becomes easier hinge loss |
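As a quick illustration of the hinge loss just defined, the following sketch (not part of the book's code) compares max{0, 1 - y yhat} with the zero-one loss of the induced classifier sign(yhat) on a few made-up predictions. The hinge loss is a convex upper bound on the zero-one loss, which is what makes the optimization developed below tractable.

import numpy as np

def hinge_loss(y, yhat):
    # hinge loss max{0, 1 - y*yhat} for responses y in {-1, 1}
    return np.maximum(0, 1 - y * yhat)

def indicator_loss(y, yhat):
    # zero-one loss of the induced classifier sign(yhat)
    return (y != np.sign(yhat)).astype(float)

y = np.array([1, 1, -1, -1])
yhat = np.array([2.0, 0.3, -0.4, 0.8])   # made-up predictions
print(hinge_loss(y, yhat))      # [0.  0.7 0.6 1.8]
print(indicator_loss(y, yhat))  # [0. 0. 0. 1.]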
15,038 | optimal decision boundary thereforea feature vector is classified according to or - depending on whether gt ( or respectively the optimal decision boundary is given by the set of for which gt ( similar to the cubic smoothing spline or rkhs setting in ( )we can consider finding the best classifiergiven the training datavia the penalized goodness-of-fit optimizationn [ yi (xi )] kgk min ghh = for some regularization parameter it will be convenient to define : ne and to solve the equivalent problem min [ yi (xi )]kgk ghh = we know from the representer theorem that if is the reproducing kernel corresponding to hthen the solution is of the form (assuming that the null space has constant term only) (xa ai (xi ( = substituting into the minimization expression yields the analogue of ( ) min [ yi ( {ka} )]akaa, = ( where is the gram matrix this is convex optimization problemas it is the sum of convex quadratic and piecewise linear term in defining li :gai /yi and :[ ln ]we show in exercise that the optimal and in ( can be obtained by solving the "dualconvex optimization problem max subject ton = li xx li yi (xi = = ( and = ai (xi for any for which ( in view of ( )the optimal prediction function (pre-classifiergt is then given by gt (xa ai (xi xa yi li (xi xg = = ( to mitigate possible numerical problems in the calculation of it is customary to take an overall averagen ( |jjj where : ( ) = |
15,039 | note thatfrom ( )the optimal pre-classifier (xand the classifier sign (xonly depend on vectors xi for which li these vectors are called the support vectors of the support vector machine it is also important to note that the quadratic function in ( depends on the regularization parameter by defining ni :li /gi nwe can rewrite ( as min subject ton ni ni yi (xi ij = ni yi ni / = ( = for perfectly separable datathat isdata for which an affine plane can be drawn to perfectly separate the two classeswe may take as explained below otherwisec needs to be chosen via cross-validation or test data setfor example geometric interpretation for the linear kernel function (xx xx we have gt (xb bxp with and - ni= li yi xi ni= ai xi and so the decision boundary is an affine plane the situation is illustrated in figure the decision boundary is formed by the points such that gt ( the two sets { gt ( - and { gt ( are called the margins the distance from the points on margin to the decision boundary is /kbk figure classifying two classes (red and blueusing svm based on the "multipliers{li }we can divide the training samples {(xi yi )into three categories (see exercise )points for which li ( these are the support vectors on the margins (green encircled in the figureand are correctly classified support vectors |
15,040 | points for which li these pointswhich are also support vectorslie strictly inside the margins (points and in the figuresuch points may or may not be correctly classified points for which li these are the non-support vectorswhich all lie outside the margins every such point is correctly classified if the classes of points {xi yi and {xi yi - are perfectly separable by some affine planethen there will be no points strictly inside the marginsso all support vectors will lie exactly on the margins in this case ( reduces to min kbk , subject toyi ( > ( using the fact that and ka xxa xb we may replace min kbk in ( with max /kbkas this gives the same optimal solution as /kbk is equal to half the margin widththe latter optimization problem has simple interpretationseparate the points via an affine hyperplane such that the margin width is maximized example (support vector machinethe data in figure was uniformly generated on the unit disc class- points (blue dotshave radius less than / ( -values and class- points (red crosseshave radius greater than / ( -values - - - - - - - figure separate the two classes of course it is not possible to separate the two groups of points via straight line in howeverit is possible to separate them in by considering three-dimensional feature vectors [ ][ ]for any the corresponding feature vector lies on quadratic surface in this space it is possible to separate the {zi points into two groups by means of planar surfaceas illustrated in figure |
15,041 | figure in feature space the points can be separated by plane we wish to find separating plane in using the transformed features the following python code uses the svc function of the sklearn module to solve the quadratic optimization problem ( (with the results are summarized in table the data is available from the book' github site as svmcirc csv svmquad py import numpy as np from numpy import genfromtxt from sklearn svm import svc data genfromtxt ('svmcirc csv 'delimiter =',' data [,[ , ]vectors are rows data [,[ ]reshape (len( ,labels tmp np sum(np power ( , ,axis = reshape (len( , np hstack (( ,tmp)clf svc( np inf kernel ='linear 'clf fit( ,yprint (support vectors \ "clf support_vectors_ print (support vector labels ", [clf support_ ]print ("nu",clf dual_coef_ print ("bias",clf intercept_ support vectors [ - - - - - - ]support vector labels - - nu [- - bias [ ] |
15,042 | table optimal support vector machine parameters for the data zy ny - - - - - - - - - - it follows that the normal vector of the plane is bai zi [- - ]is where is the set of indices of the support vectors we see that the plane is almost perpendicular to the plane the bias term can also be found from the table above in particularfor any xand in table we have bz to draw the separating boundary in we need to project the intersection of the separating plane with the quadratic surface onto the plane that iswe need to find all points ( such that ( ( this is the equation of circle with (approximatecenter ( - and radius which is very close to the true circular boundary between the two groupswith center ( and radius this circle is drawn in figure - - figure the circular decision boundary can be viewed equivalently as (athe projection onto the plane of the intersection of the separating plane with the quadratic surface (both in )or (bthe set of points ( for which gt (xb bph( an equivalent way to derive this circular separating boundary is to consider the feature map ph( [ ]on which defines reproducing kernel (xx ph( )ph( ) |
15,043 | on which in turn gives rise to (uniquerkhs the optimal prediction function ( is now of the form gt (xa where and yi li ph(xi )ph(xb bph( ) = ( byi li ph(xi = the decision boundary{ gt ( }is again circle in the following code determines the fitted model parameters and the decision boundary figure shows the optimal decision boundarywhich is identical to ( the function mykernel specifies the custom kernel above svmkern py import numpy as np matplotlib pyplot as plt from numpy import genfromtxt from sklearn svm import svc def mykernel ( , )tmpu np sum(np power ( , ,axis = reshape (len( , np hstack (( ,tmpu)tmpv np sum(np power ( , ,axis = reshape (len( , np hstack (( ,tmpv) print ( shape return read in the data inp genfromtxt ('svmcirc csv 'delimiter =','data inp [,[ , ]vectors are rows inp [,[ ]reshape (len(data,labels clf svc( np inf kernel =mykernel gamma ='auto 'custom kernel clf svc( np inf kernel =rbf"gamma =scale 'inbuilt clf fit(data ,yprint (support vectors \ "clf support_vectors_ print (support vector labels ", [clf support_ ]print ("nu ",clf dual_coef_ print ("bias ",clf intercept_ plot x_min x_max - , y_min y_max - , xx yy np meshgrid (np arange (x_min x_max )np arange (y_min y_max )plt plot(data[clf support_ , data[clf support_ , ,'go'plt plot(data[ == , data[ == , ' 'plt plot(data[ =- , data[ =- , ,'rx ' |
15,044 | clf predict (np c_[xx ravel (yy ravel (] reshape (xx shape plt contour (xx yy zcolors =" "plt show (finallywe illustrate the use of the gaussian kernel (xx - kx- ( where is some tuning constant this is an example of radial basis function kernelwhich are reproducing kernels of the form (xx (kx )for some positive realvalued function each feature vector is now transformed to function ( *we can think of it as the (unnormalizedpdf of gaussian distribution centered around xand gt is (signedmixture of these pdfsplus constantthat isgt (xa ai - kxi -xk = replacing in line of the previous code mykernel with 'rbfproduces the svm parameters given in table figure shows the decision boundarywhich is not exactly circularbut is close to the true (circularboundary { kxk / there are now seven support vectorsrather than the four in figure table optimal support vector machine parameters for the gaussian kernel case xy ( xy ( - - - - - - - - - - - - - - - - - - - - figure leftthe decision boundary { gt ( is roughly circularand separates the two classes well there are seven support vectorsindicated by green circles rightthe graph of gt is scaled mixture of gaussian pdfs plus constant |
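The description of gt as a scaled mixture of Gaussian pdfs plus a constant can be checked directly against a fitted sklearn model. The sketch below assumes the same feature matrix data and labels y as in the code above; it refits an SVC with the 'rbf' kernel, but with an arbitrary numeric bandwidth gamma = 1.0 rather than 'scale', so that the same gamma can be reused in the kernel formula. It then reconstructs the prediction function from the dual_coef_, support_vectors_, and intercept_ attributes and compares it with decision_function.

import numpy as np
from sklearn.svm import SVC

# assumes 'data' (n x 2 feature matrix) and 'y' (+1/-1 labels)
# are available, as read in the listing above
gamma = 1.0                              # arbitrary bandwidth choice
clf = SVC(C=np.inf, kernel='rbf', gamma=gamma)   # C as in the listings above
clf.fit(data, y)

def g_hat(x):
    # g_t(x) = sum_i alpha_i exp(-gamma ||x_i - x||^2) + b, where the
    # alpha_i = y_i lambda_i are stored in dual_coef_ for the support vectors
    sv = clf.support_vectors_
    alpha = clf.dual_coef_.flatten()
    k = np.exp(-gamma * np.sum((sv - x)**2, axis=1))
    return alpha @ k + clf.intercept_[0]

x0 = np.array([0.25, -0.1])              # an arbitrary test point
print(g_hat(x0), clf.decision_function(x0.reshape(1, -1))[0])

The two printed numbers should agree, confirming that the fitted pre-classifier is exactly the signed mixture of Gaussian bumps described above.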
15,045 | remark (scaling and penalty parameterswhen using radial basis function in svc in sklearnthe scaling ( can be set via the parameter gamma note that large values of gamma lead to highly peaked predicted functionsand small values lead to highly smoothed predicted functions the parameter in svc refers / in ( classification with scikit-learn in this section we apply several classification methods to real-world data setusing the python module sklearn (the package name is scikit-learnspecificallythe data is obtained from uci' breast cancer wisconsin data set this data setfirst published and analyzed in [ ]contains the measurements related to images of benign and malignant breast masses the goal is to classify breast mass as benign or malignant based on featuresradiustextureperimeterareasmoothnesscompactnessconcavityconcave pointssymmetryand fractal dimension of each mass the meanstandard errorand "worstof these attributes were computed for each imageresulting in features for instancefeature is mean radiusfeature is radius sefeature is worst radius the following python code reads the dataextracts the response vector and model (featurematrix and divides the data into training and test set skclass py from numpy import genfromtxt from sklearn model_selection import train_test_split url "http :/mlr cs umass edu/ml/machine -learning databases /url "breast -cancer wisconsin /name "wdbc datadata genfromtxt (url url name delimiter =','dtype =stry data [, responses data [, :astype ('float 'features as an ndarray matrix x_train x_test y_train y_test train_test_split xytest_size random_state to visualize the data we create scatterplot for the features mean radiusmean textureand mean concavitywhich correspond to the columns and of the model matrix figure suggests that the malignant and benign breast masses could be well separated using these three features skclass py from skclass import xy import matplotlib pyplot as plt from mpl_toolkits mplot import axes import numpy as np bidx np where ( =' 'midxnp where ( =' 'plot features radius column texture ( concavity ( |
15,046 | fig plt figure (ax fig gcaprojection ' 'ax scatter ( [bidx , [bidx , [bidx , =' 'marker ='^'label ='benign 'ax scatter ( [midx , [midx , [midx , =' 'marker =' 'label ='malignant 'ax legend (ax set_xlabel ('mean radius 'ax set_ylabel ('mean texture 'ax set_zlabel ('mean concavity 'plt show (benign mean co ncavity malignant mea ra dius ea ur xt te figure scatterplot of three features of the benign and malignant breast masses the following code uses various classifiers to predict the category of breast masses (benign or malignantin this case the training set has elements and the test set has elements for each classifier the percentage of correct predictions (that isthe accuracyin the test set is reported we see that in this case quadratic discriminant analysis gives the highest accuracy ( exercise explores the question whether this metric is the most appropriate for these data skclass py from skclass import x_train y_train x_test y_test from sklearn metrics import accuracy_score import sklearn discriminant_analysis as da from sklearn naive_bayes import gaussiannb from sklearn neighbors import kneighborsclassifier from sklearn linear_model import logisticregression from sklearn svm import svc names [logit ",nbayes ""lda""qda""knn""svm" |
15,047 | classifiers logisticregression ( = )gaussiannb (da lineardiscriminantanalysis (da quadraticdiscriminantanalysis (kneighborsclassifier n_neighbors = svckernel ='rbf 'gamma - )print ('name accuracy \ '+ '-'for name clf in zip(names classifiers )clf fit(x_train y_train y_pred clf predict x_test print ('{: {: }format (name accuracy_score (y_test y_pred ))name accuracy logit nbayes lda qda knn svm further reading an excellent source for understanding various pattern recognition techniques is the book [ by duda et al theoretical foundations of classificationincluding the vapnikchernovenkis dimension and the fundamental theorem of learningare discussed in [ popular measure for characterizing the performance of binary classifier is the receiver operating characteristic (roccurve [ the naive bayes classification paradigm can be extended to handle explanatory variable dependency via graphical models such as bayesian networks and markov random fields [ for detailed discussion on bayesian decision theorysee [ exercises let show that the solution to the convex optimization problem min ,pn subject ton- - = pi and ( pi = is given by pi /( ) and pn derive the formulas ( by minimizing the cross-entropy training lossn ln (xi yi th) = |
15,048 | where (xy this such thatln (xy thln ay ln |sy ( uy ) - ln( py ( uy adapt the code in example to plot the estimated decision boundary instead of the true one in figure compare the true and estimated decision boundaries recall from equation ( that the decision boundaries of the multi-logit classifier are linearand that the pre-classifier can be written as conditional pdf of the formexp(zy+ ( wbxpc = exp(zi { }where [ xand we (ashow that the linear discriminant pre-classifier in section can also be written as conditional pdf of the form (th {ay sy uy } - = )exp(zy+ ( thxpc = exp(zi { }where [ xand we find formulas for the corresponding and in terms of the linear discriminant parameters {ay uy sy } - = where sy for all (bexplain which pre-classifier has smaller approximation errorthe linear discriminant or multi-logit onejustify your answer by proving an inequality between the two approximation errors consider binary classification problem where the response takes values in {- show that optimal prediction function for the hinge loss loss( , ( -yb ):max{ yb yis the same as the optimal prediction function gfor the indicator lossif [ / ( - if [ / that isshow that for all functions ( ( )) ( ( ))( in example we applied principal component analysis (pcato the iris databut refrained from classifying the flowers based on their feature vectors implement -nearest neighbor algorithmusing training set of randomly chosen data pairs (xyfrom the iris data set how many of the remaining flowers are correctly classifiednow classify these entries with an off-the-shelf multi-logit classifiere such as can be found in the sklearn and statsmodels packages figure displays two groups of data pointsgiven in table the convex hulls have also been plotted it is possible to separate the two classes of points via straight line |
15,049 | in factmany such lines are possible svm gives the best separationin the sense that the gap (marginbetween the points is maximal - - - - - figure separate the points by straight line so that the separation between the two groups is maximal table data for figure - - - - - - - - - - - - - - - - - (aidentify from the figure the three support vectors (bfor separating boundary (linegiven by bx show that the margin width is /kbk (cshow that the parameters and that solve the convex optimization problem ( provide the maximal width between the margins (dsolve ( using penalty approachsee section in particularminimize the |
15,050 | penalty function (bb kbk = min ( bxi yi for some positive penalty constant (efind the solution the dual optimization problem ( by using sklearn' scv method note thatas the two point sets are separablethe constraint may be removedand the value of can be set to in example we used the feature map ph( [ ]to classify the points an easier way is to map the points into via the feature map ph(xkxk or any monotone function thereof translated back into this yields circular separating boundary find the radius and center of this circleusing the fact that here the sorted norms for the two groups are let { be response variable and let (xbe the regression function ( : [ xp[ xrecall that the bayes classifier is ( { ( / let { be any other classifier function belowwe denote all probabilities and expectations conditional on as [*and [*(ashow that irreducible error } [ (xyp [ (xy+| ( { (xg( )hencededuce that for learner gt constructed from training set we have [ [gt (xy ] [ (xy| ( [gt (xg( )]where the first expectation and last probability operations are with respect to (busing the previous resultdeduce that for the unconditional error (that iswe no longer condition on )we have [ (xy [gt (xy(cshow thatif gt : {ht ( / is classifier function such that as ht ( - ( ( ) ( )for some mean and variance functions (xand ( )respectivelythen sign( ( ))( ( [gt (xg ( )-ph (xwhere ph is the cdf of standard normal random variable |
15,051 | the purpose of this exercise is to derive the dual program ( from the primal program ( the starting point is to introduce vector of auxiliary variables :[ xn ]and write the primal program as xi aka , , = min subject tox yi ( {ka} xi ( (aapply the lagrangian optimization theory from section to obtain the lagrangian function ({ ax}{lu})where and are the lagrange multipliers corresponding to the first and second inequality constraintsrespectively (bshow that the karush-kuhn-tucker (see theorem conditions for optimizing are ly / ( lx li (yi (xi xi yi (xi xi ( here stands for componentwise multiplicatione [ yn ln ]and we have abbreviated {ka} to (xi )in view of ( [hintone of the kkt conditions is uthus we can eliminate (cusing the kkt conditions ( )reduce the lagrange dual function ( :mina , , ({ ax}{ }to (ln = li xx li yi (xi = = ( (das consequence of ( and ( )-( )show that the optimal prediction function gt is given by gt (xa yi li (xi )( = where is the solution to max (ll subject toly and ( pn = yi li (xi for any such that ( consider svm classification as illustrated in figure the goal of this exercise is to classify the training points {(xi yi )based on the value of the multipliers {li in exercise let xi be the auxiliary variable in exercise |
15,052 | (afor li ( show that (xi yi lies exactly on the decision border (bfor li show that (xi yi lies strictly inside the margins (cshow that for li the point (xi yi lies outside the margins and is correctly classified well-known data set is the mnist handwritten digit databasecontaining many thousands of digitalized numbers (from to )each described by matrix of gray scales similar but much smaller data set is described in [ hereeach handwritten digit is summarized by matrix with integer entries from (whiteto (blackfigure shows the first digitized images the data set can be accessed with python using the sklearn package as follows from sklearn import datasets digits datasets load_digits (x_digits digits data explanatory variables y_digits digits target responses figure classify the digitized images (adivide the data into training set and test set (bcompare the effectiveness of the -nearest neighbors and naive bayes method to classify the data (cassess which to use in the -nearest neighbors classification download the winequality-red csv data set from uci' wine-quality website the response here is the wine quality (from to as specified by wine "expertand the explanatory variables are various characteristics such as acidity and sugar content use the svc classifier of sklearn svm with linear kernel and penalty parameter (see remark to fit the data use the method cross_val_score from |
15,053 | sklearn model_selection to obtain five-fold cross-validation score as an estimate of the probability that the predicted class matches the expert' class consider the credit approval data set crx data from uci' credit approval website the data set is concerned with credit card applications the last column in the data set indicates whether the application is approved (+or not (-with the view of preserving data privacyall explanatory variables were anonymized note that some explanatory variables are continuous and some are categorical (aload and prepare the data for analysis with sklearn firsteliminate data rows with missing values nextencode categorical explanatory variables using onehotencoder object from sklearn preprocessing to create model matrix with indicator variables for the categorical variablesas described in section (bthe model matrix should contain rows and columns the response variable should be / variable (reject/approvewe will consider several classification algorithms and test their performance (using zero-one lossvia ten-fold cross validation write function which takes parametersxyand modeland returns the ten-fold cross-validation estimate of the expected generalization risk ii consider the following sklearn classifierskneighborsclassifier ( )logisticregressionand mplclassifier (multilayer perceptronuse the function from (ito identify the best performing classifier consider synthetic data set that was generated in the following fashion the explanatory variable follows standard normal distribution the response label is if the explanatory variable is between the and quantiles of the standard normal distributionand otherwise the data set was generated using the following code import numpy as np import scipy stats generate data np random seed ( np random randn (nq scipy stats norm ppf ( np zeros (ny[ >= [ <=- reshape - , compare the -nearest neighbors classifier with and logistic regression classifier without computationwhich classifier is likely to be better for these dataverify your answer by coding both classifiers and printing the corresponding training - loss consider the digits data set from exercise in this exercisewe would like to train binary classifier for the identification of digit (adivide the data such that the first rows are used as the training set and the rest are used as the test set |
15,054 | (btrain the logisticregression classifier from the sklearn linear_model package ( "traina naive classifier that always returns that isthe naive classifier identifies each instance as being not (dcompare the zero-one test losses of the logistic regression and the naive classifiers (efind the confusion matrixthe precisionand the recall of the logistic regression classifier (ffind the fraction of eights that are correctly detected by the logistic regression classifier repeat exercise with the original mnist data set use the first , rows as the train set and the remaining , rows as the test set the original data set can be obtained using the following code from sklearn datasets import fetch_openml xy fetch_openml ('mnist_ 'version = return_x_y =true for the breast cancer data in section investigate and discuss whether accuracy is the relevant metric to use or if other metrics discussed in section are more appropriate |
15,055 | Decision Trees and Ensemble Methods. Statistical learning methods based on decision trees have gained tremendous popularity due to their simplicity, intuitive representation, and predictive accuracy. This chapter gives an introduction to the construction and use of such trees. We also discuss two key ensemble methods, namely bootstrap aggregation and boosting, which can further improve the efficiency of decision trees and other learning methods. Introduction. Tree-based methods provide a simple, intuitive, and powerful mechanism for both regression and classification. The main idea is to divide a (potentially complicated) feature space into smaller regions and fit a simple prediction function to each region. For example, in a regression setting, one could take the mean of the training responses associated with the training features that fall in that specific region. In the classification setting, a commonly used prediction function takes the majority vote among the corresponding response variables. We start with a simple classification example. Example (decision tree for classification). The left panel of the figure below shows a training set of two-dimensional points (features) falling into two classes (red and blue). How should the new feature vector (black point) be classified? [Figure: left, the training data and a new feature; right, a partition of the feature space.]
15,056 | decision tree it is not possible to linearly separate the training setbut we can partition the feature space into rectangular regions and assign class (colorto each regionas shown in the right panel of figure points in these regions are classified accordingly as blue or red the partition thus defines classifier (prediction functiong that assigns to each feature vector class "redor "bluefor examplefor [- ](solid black point) ( "blue"since it belongs to blue region of the feature space both the classification procedure and the partitioning of the feature space can be conveniently represented by binary decision tree this is tree where each node corresponds to region (subsetrv of the feature space -the root node corresponding to the feature space itself each internal node contains logical condition that dix < vides rv into two disjoint subregions the leaf nodes (the tertrue minal nodes of the treeare not subdividedand their corresfalse ponding regions form partition of xas they are disjoint and <- their union is associated with each leaf node is also false true regional prediction function gw on rw < the partitioning of figure was obtained from the decision tree shown in figure as an illustratrue false tion of the decision procedureconsider again the input < [ ][- ]the classification process starts true from the tree rootwhich contains the condition as false the second component of is the root condition is satisfied <- and we proceed to the left childwhich contains the condition true false - the next step is similar as - the condition is not satisfied and we proceed to the right child such an evaluation of logical conditions along the tree path will eventually bring us to leaf node and its associated region in this case the process terminates in leaf that corresponds to the figure the decisiontree that corresponds to the left blue region in the right-hand panel of figure partition in figure more generallya binary tree will partition the feature space into as many regions as there are leaf nodes denote the set of leaf nodes by the overall prediction function that corresponds to the tree can then be written as (xgw ( { rw }( ww regional prediction functions where denotes the indicator function the representation ( is very general and depends on ( how the regions {rw are constructed via the logical conditions in the decision treeas well as ( how the regional prediction functions of the leaf nodes are defined simple logical conditions of the form split euclidean feature space into rectangles aligned with the axes for examplefigure partitions the feature space into six rectanglestwo blue and four red rectangles in classification settingthe regional prediction function gw corresponding to leaf node takes values in the set of possible class labels in most casesas in example it is taken to be constant on the corresponding region rw in regression settinggw is realvalued and also usually takes only one value that isevery feature vector in rw leads to |
15,057 | the same predicted value of coursedifferent regions will usually have different predicted values constructing tree with training set {(xi yi )}}ni= amounts to minimizing the training loss ` ( loss(yi (xi ) = ( for some loss functionsee with of the form ( )we can write ` (gn xx loss(yi (xi ) {xi rw loss(yi (xi ) = = ww ww = {xi rw loss(yi gw (xi )){ (* ( ( where (*is the contribution by the regional prediction function gw to the overall training loss in the case where all {xi are differentfinding decision tree that gives zero squared-error or zero-one training loss is easysee exercise but such an "overfittedtree will have poor predictive behaviorexpressed in terms of the generalization risk instead we consider restricted class of decision trees and aim to minimize the training loss within that class it is common to use top-down greedy approachwhich can only achieve an approximate minimization of the training loss top-down construction of decision trees let {(xi yi )}ni= be the training set the key to constructing binary decision tree is to specify splitting rule for each node vwhich can be defined as logical function {falsetrueorequivalentlya binary function { for examplein the decision tree of figure the root node has splitting rule { }in correspondence with the logical condition { during the construction of the treeeach node is associated with specific region rv and therefore also the training subset {(xyt rv using splitting rule swe can divide any subset of the training set into two setsst :{(xys (xtrueand sf :{(xys (xfalse( starting from an empty tree and the initial data set ta generic decision tree construction takes the form of the recursive algorithm here we use the notation tv for subtree of starting from node the final tree is thus obtained via construct_subtree( )where is the root of the tree splitting rule |
15,058 | algorithm construct_subtree inputa node and subset of the training datas outputa (subdecision tree tv if termination criterion is met then / is leaf node train regional prediction function using the training data else /split the node find the best splitting rule sv for node create successors vt and vf of st {(xys sv (xtrue sf {(xys sv (xfalse tvt construct_subtree (vt st /left branch tvf construct_subtree (vf sf /right branch return tv the splitting rule sv divides the region rv into two disjoint partssay rvt and rvf the corresponding prediction functionsgt and gf satisfy gv (xgt ( { rvt gf ( { rvf } rv in order to implement the procedure described in algorithm we need to address the construction of the regional prediction functions gv at the leaves (line )the specification of the splitting rule (line )and the termination criterion (line these important aspects are detailed in the following sections and respectively regional prediction functions in generalthere is no restriction on how to choose the prediction function gw for leaf node in line of algorithm in principle we can train any model from the datae via linear regression howeverin practice very simple prediction functions are used belowwe detail popular choice for classificationas well as one for regression in the classification setting with class labels the regional prediction function gw for leaf node is usually chosen to be constant and equal to the most common class label of the training data in the associated region rw (ties can be broken randomlymore preciselylet nw be the number of feature vectors in region rw and let { =zpwz nw {( , ) xr be the proportion of feature vectors in rw that have class label the regional prediction function for node is chosen to be the constant gw (xargmax pwz ( { , - in the regression settinggw is usually chosen as the mean response in the regionthat isx gw (xyrw : ( nw {( , ) xr |
15,059 | where nw is again the number of feature vectors in rw it is not difficult to show that gw (xyrw minimizes the squared-error loss with respect to all constant functionsin the region rw see exercise splitting rules in line in algorithm we divide region rv into two setsusing splitting rule (functionsv consequentlythe data set associated with node (that isthe subset of the original data set whose feature vectors lie in rv )is also split -into st and sf what is the benefit of such split in terms of reduction in the training lossif were set to leaf nodeits contribution to the training loss would be (see ( )) {( , )sloss(yi gv (xi ) = ( if were to be split insteadits contribution to the overall training loss would ben {( , )st loss(yi gt (xi ) {( , )sf loss(yi gf (xi )) = = ( where gt and gf are the prediction functions belonging to the child nodes vt and vf greedy heuristic is to pretend that the tree construction algorithm immediately terminates after the splitin which case vt and vf are leaf nodesand gt and gf are readily evaluated - as in section note that for any splitting rule the contribution ( is always greater than or equal to ( it therefore makes sense to choose the splitting rule such that ( is minimized moreoverthe termination criterion may involve comparing ( with ( if their difference is too small it may not be worth further splitting the feature space as an examplesuppose the feature space is and we consider splitting rules of the form ( { }( for some and rwhere we identify with false and with true due to the computational and interpretative simplicitysuch binary splitting rules are implemented in many software packages and are considered to be the de facto standard as we have seenthese rules divide up the feature space into rectanglesas in figure it is natural to ask how and should be chosen so as to minimize ( for regression problemusing squared-error loss and constant regional prediction function as in ( )the sum ( is given by yt yf ( ( , ) : ( , ) : > where yt and yf are the average responses for the st and sf datarespectively let { , }mk= denote the possible values of within the training subset (with elementsnote thatfor fixed ( is piecewise constant function of xand that its minimal value is attained at some value , as consequenceto minimize ( over all and xit suffices to evaluate ( for each of the values , and then take the minimizing pair jx , |
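The search over the pairs (j, xi_{j,k}) just described can be coded directly. The following standalone sketch, on made-up data, is not the book's implementation (a fuller, tree-building version appears in the basic-implementation section below); it scans every observed value of every feature and returns the pair that minimizes the sum of squared errors around the two region means.

import numpy as np

def best_split(X, y):
    # exhaustive search over features j and thresholds xi taken from the
    # observed feature values, minimizing the within-region squared errors
    best = (None, None, np.inf)
    n, p = X.shape
    for j in range(p):
        for xi in np.unique(X[:, j]):
            left = X[:, j] <= xi
            right = ~left
            loss = 0.0
            for idx in (left, right):
                if idx.any():
                    loss += np.sum((y[idx] - y[idx].mean())**2)
            if loss < best[2]:
                best = (j, xi, loss)
    return best

# made-up data: the response depends on the second feature only
rng = np.random.default_rng(0)
X = rng.uniform(size=(50, 3))
y = np.where(X[:, 1] <= 0.4, 1.0, 3.0) + 0.1 * rng.normal(size=50)
print(best_split(X, y))   # should pick j = 1 and a threshold close to 0.4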
15,060 | For a classification problem, using the indicator loss and a constant regional prediction function as above, the aim is to choose a splitting rule that minimizes
$$\frac{1}{n}\Big[\sum_{(\mathbf{x},y)\in S_T} \mathbb{1}\{y \neq y^*_T\} \;+\; \sum_{(\mathbf{x},y)\in S_F} \mathbb{1}\{y \neq y^*_F\}\Big],$$
where $y^*_T = g_T(\mathbf{x})$ is the most prevalent class (majority vote) in the data set $S_T$ and $y^*_F$ is the most prevalent class in $S_F$. If the feature space is $\mathbb{R}^p$ and the splitting rules are of the form $s(\mathbf{x}) = \mathbb{1}\{x_j \leq \xi\}$, then the optimal splitting rule can be obtained in the same way as described above for the regression case; the only difference is that the squared-error criterion is replaced with the misclassification criterion above. We can view the minimization of this criterion as minimizing a weighted average of "impurities" of the nodes $S_T$ and $S_F$. Namely, for an arbitrary training subset $S$, if $y^*$ is the most prevalent label, then
$$\frac{1}{|S|}\sum_{(\mathbf{x},y)\in S} \mathbb{1}\{y \neq y^*\} \;=\; 1 - p_{y^*} \;=\; 1 - \max_{z\in\{0,\ldots,c-1\}} p_z,$$
where $p_z$ is the proportion of data points in $S$ that have class label $z$, $z = 0,\ldots,c-1$. The quantity $1 - \max_z p_z$ measures the diversity of the labels in $S$ and is called the misclassification impurity. Consequently, the splitting criterion above is the weighted sum of the misclassification impurities of $S_T$ and $S_F$, with weights $|S_T|/n$ and $|S_F|/n$, respectively. Note that the misclassification impurity only depends on the label proportions, rather than on the individual responses. Instead of using the misclassification impurity to decide if and how to split a data set $S$, we can use other impurity measures that only depend on the label proportions. Two popular choices are the entropy impurity
$$-\sum_{z=0}^{c-1} p_z \log_2(p_z)$$
and the Gini impurity
$$1 - \sum_{z=0}^{c-1} p_z^2.$$
All of these impurities are maximal when the label proportions are equal to $1/c$. Typical shapes of the above impurity measures are illustrated in the figure below for the two-label case, with class probabilities $p$ and $1-p$; we see here the similarity of the different impurity measures. Note that impurities can be arbitrarily scaled, and so using $\ln(p_z) = \log_2(p_z)\,\ln(2)$ instead of $\log_2(p_z)$ above gives an equivalent entropy impurity. Termination Criterion. When building a tree, one can define various types of termination conditions. For example, we might stop when the number of data points in a tree node (the size of the input set in the construction algorithm) is less than or equal to some predefined number, or we might choose the maximal depth of the tree in advance. Another possibility is to stop when there is no
15,061 | [Figure: entropy, Gini, and misclassification impurities for binary classification, with class frequencies $p$ and $1-p$. The entropy impurity was normalized (divided by 2) to ensure that all impurity measures attain the same maximum value of $1/2$ at $p = 1/2$.]
significant advantage, in terms of training loss, to split regions. Ultimately, the quality of a tree is determined by its predictive performance (generalization risk), and the termination condition should aim to strike a balance between minimizing the approximation error and minimizing the statistical error, as discussed earlier in the book. Example (fixed tree depth). To illustrate how the tree depth impacts on the generalization risk, consider the figure below, which shows the typical behavior of the cross-validation loss as a function of the tree depth. Recall that the cross-validation loss is an estimate of the expected generalization risk. Complicated (deep) trees tend to overfit the training data by producing many divisions of the feature space. As we have seen in earlier chapters, this overfitting problem is typical of all learning methods. To conclude, increasing the maximal depth does not necessarily result in better performance.
[Figure: the ten-fold cross-validation loss as a function of the maximal tree depth for a classification problem. The optimal maximal tree depth corresponds to the minimum of the curve.]
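Before turning to the code that produced this figure, here is a small standalone sketch (not from the book) of the three impurity measures of the previous section, evaluated at a given vector of label proportions.

import numpy as np

def misclassification(p):
    return 1 - np.max(p)

def entropy(p):
    p = p[p > 0]                     # avoid log2(0)
    return -np.sum(p * np.log2(p))

def gini(p):
    return 1 - np.sum(p**2)

# label proportions of a (made-up) training subset with two classes
p = np.array([0.3, 0.7])
print(misclassification(p), entropy(p), gini(p))
# all three impurities are maximal when p = [0.5, 0.5]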
15,062 | to create figure we used the python method make_blobs from the sklearn module to produce training set of size with ten-dimensional feature vectors (thusp and )each of which is classified into one of classes the full code is given below treedepthcv py import numpy as np from sklearn datasets import make_blobs from sklearn model_selection import cross_val_score from sklearn tree import decisiontreeclassifier from sklearn metrics import zero_one_loss import matplotlib pyplot as plt def zeroonescore (clf xy)y_pred clf predict (xreturn zero_one_loss (yy_pred construct the training set xy make_blobs n_samples = n_features = centers = random_state = cluster_std = construct decision tree classifier clf decisiontreeclassifier random_state = cross validation loss as function of tree depth ( to xdepthlist [cvlist [tree_depth range ( , for in tree_depth xdepthlist append (dclf max_depth = cv np meancross_val_score (clf xycv = scoring zeroonescore )cvlist append (cvplt xlabel ('tree depth 'fontsize = color ='black 'plt ylabel ('loss 'fontsize = color ='black 'plt plotxdepthlist cvlist ,-*linewidth = the code above relies heavily on sklearn and hides the implementation details to show how decision trees are actually constructed using the previous theorywe proceed with very basic implementation basic implementation in this section we implement regression treestep by step to run the programamalgamate the code snippets below into one filein the order presented firstwe import various packages and define function to generate the training and test data the data used for figure was produced in similar way |
15,063 | basictree py import numpy as np from sklearn datasets import make_friedman from sklearn model_selection import train_test_split def makedata ()n_points number of samples xy make_friedman n_samples =n_points n_features = noise = random_state = return train_test_split (xytest_size = random_state = the "mainmethod calls the makedata methoduses the training data to build regression treeand then predicts the responses of the test set and reports the mean squared-error loss def main ()x_train x_test y_train y_test makedata (maxdepth maximum tree depth create tree root at depth treeroot tnode ( x_train y_train build the regression tree with maximal depth equal to max_depth construct_subtree (treeroot maxdepth predict y_hat np zeros (lenx_test )for in range (lenx_test ))y_hat [ipredict x_test [ ]treeroot mse np mean(np power y_hat y_test , )print (basic treetree loss "msethe next step is to specify tree node as python class each node has number of attributesincluding the features and the response data ( and yand the depth at which the node is placed in the tree the root node has depth each node can calculate its contribution to the squared-error training loss ni= {xi rw }(yi gw (xi )) note that we have omitted the constant / term when training the treewhich simply scales the loss ( class tnode def __init__ (self depth xy)self depth depth self matrix of features self vector of response variables initialize optimal split parameters self none self xi none initialize children to be none self left none self right none initialize the regional predictor |
15,064 | self none def calculateloss (self)if(len(self )== )return return np sum(np power (self self mean (, )the function below implements the training (tree-buildingalgorithm def construct_subtree (node max_depth )if(node depth =max_depth or len(node = )node node mean (elsejxi calculateoptimalsplit (nodenode node xi xi xt yt xf yf datasplit (node xnode yjxiif(len(yt> )node left tnode (node depth + ,xt ,ytconstruct_subtree (node left max_depth if(len(yf> )node right tnode (node depth + xf ,yfconstruct_subtree (node right max_depth return node this requires an implementation of the calculateoptimalsplit function to startwe implement function datasplit that splits the data according to ( { xdef datasplit ( , , ,xi)ids [:, ]<=xi xt [ids =true ,:xf [ids =false ,:yt [ids =trueyf [ids =false return xt yt xf yf the calculateoptimalsplit method runs through the possible splitting thresholds from the set { , and finds the optimal split def calculateoptimalsplit (node) node node best_var best_xi [ best_var best_split_val node calculateloss (mn shape for in range ( , ) |
15,065 | for in range ( , )xi [ ,jxt yt xf yf datasplit ( , , ,xitmpt tnode ( xt yttmpf tnode ( xf yfloss_t tmpt calculateloss (loss_f tmpf calculateloss (curr_val loss_t loss_f if curr_val best_split_val )best_split_val curr_val best_var best_xi xi return best_var best_xi finallywe implement the recursive method for prediction def predict ( ,node)if(node right =none and node left !none)return predict ( ,node leftif(node right !none and node left =none)return predict ( ,node right if(node right =none and node left =none)return node elseif( [node <node xi)return predict ( ,node leftelsereturn predict ( ,node right running the main function defined above gives similar result to what one would achieve with the sklearn packageusing the decisiontreeregressor method main (run the main program compare with sklearn from sklearn tree import decisiontreeregressor x_train x_test y_train y_test makedata (use the same data regtree decisiontreeregressor max_depth random_state = regtree fit(x_train y_train y_hat regtree predict x_test mse np mean(np power y_hat y_test , )print (decisiontreeregressor tree loss "mse basic treetree loss decisiontreeregressor tree loss after establishing best split , sklearn assigns the corresponding feature vector randomly to one of the two child nodesrather than to the true child |
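The fitted tree can also be inspected directly. The helper below is not part of the original listing; it walks a trained tree node and prints each internal node's splitting rule and each leaf's regional prediction, relying only on the attributes depth, j, xi, left, right, and g defined in the class above. The names follow the listings as printed here; adjust capitalization to match your own version of the code.

def print_tree(node):
    indent = '  ' * node.depth
    if node.left is None and node.right is None:
        # leaf node: print the regional prediction (the mean response)
        print('{}predict {:.3f}'.format(indent, node.g))
    else:
        # internal node: print its splitting rule
        print('{}if x[{}] <= {:.3f}:'.format(indent, node.j, node.xi))
        if node.left is not None:
            print_tree(node.left)
        print('{}else:'.format(indent))
        if node.right is not None:
            print_tree(node.right)

# build and print a shallow tree for readability
X_train, X_test, y_train, y_test = makedata()
root = tnode(0, X_train, y_train)
construct_subtree(root, 3)
print_tree(root)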
15,066 | additional considerations binary versus non-binary trees while it is possible to split tree node into more than two groups (multiway splits)it generally produces inferior results compared to the simple binary split the major reason is that multiway splits can lead to too many nodes near the tree root that have only few data pointsthus leaving insufficient data for later splits as multiway splits can be represented as several binary splitsthe latter is preferred [ data preprocessing sometimesit can be beneficial to preprocess the data prior to the tree construction for examplepca can be used with view to identify the most important dimensionswhich in turn will lead to simpler and possibly more informative splitting rules in the internal nodes alternative splitting rules we restricted our attention to splitting rules of the type ( { }where { pand these types of rules may not always result in simple partition of the feature spaceas illustrated by the binary data in figure in this casethe feature space could have been partitioned into just two regionsseparated by straight line figure the two groups of points can here be separated by straight line insteadthe classification tree divides up the space into many rectanglesleading to an unnecessarily complicated classification procedure in this case many classification methods discussed in such as linear discriminant analysis (section )will work very wellwhereas the classification tree is rather elaboratedividing the feature set into too many regions an obvious remedy is to use splitting rules of the form ( {ax |
15,067 | in some casessuch as the one just discussedit may be useful to use splitting rule that involves several variablesas opposed to single one the decision regarding the split type clearly depends on the problem domain for examplefor logical (binaryvariables our domain knowledge may indicate that different behavior is expected when both xi and ( jare true in this casewe will naturally introduce decision rule of the forms( {xi true and truecategorical variables when an explanatory variable is categorical with labels (levelssay { }the splitting rule is generally defined via partition of the label set { kinto two subsets specificallylet and be partition of { kthenthe splitting rule is defined via ( { lfor the general supervised learning casefinding the optimal partition in the sense of minimal loss requires one to consider subsets of { kconsequentlyfinding good splitting rule for categorical variables can be challenging when the number of labels is large missing values missing data is present in many real-life problems generallywhen working with incomplete feature vectorswhere one or more values are missingit is typical to either completely delete the feature vector from the data (which may distort the dataor to impute (guessits missing values from the available datasee [ tree methodshoweverallow an elegant approach for handling missing data specificallyin the general casethe missing data problem can be handled via surrogate splitting rules [ when dealing with categorical (factorfeatureswe can introduce an additional category "missingfor the absent data the main idea of surrogate rules is as follows firstwe construct decision (regression or classificationtree via algorithm during this construction processthe solution of the optimization problem ( is calculated only over the observations that are not missing particular variable suppose that tree node has splitting rule ( { xfor some and threshold xfor the node we can introduce set of alternative splitting rules that resemble the original splitting rulesometimes called the primary splitting ruleusing different variables and thresholds namelywe look for binary splitting rule ( jx) jsuch that the data split introduced by will be similar to the original data split from sthe similarity is generally measured via binary misclassification losswhere the true classes of observations are determined by the primary splitting rule and the surrogate splitting rules serve as classifiers considerfor examplethe data in table and suppose that the primary splitting rule at node is {age that isthe five data points are split such that the left and the right child of contains two and three data pointsrespectively nextthe following surrogate splitting rules can be considered |
15,068 | {salary }and {height table example data with three variables (ageheightand salaryid age height salary the {salary surrogate rule completely mimics the primary rulein the sense that the data splits induced by these rules are identical namelyboth rules partition the data into two sets (by id{ and { on the other handthe {height rule is less similar to the primary rulesince it causes the different partition { and { it is up to the user to define the number of surrogate rules for each tree node as soon as these surrogate rules are availablewe can use them to handle new data pointeven if the main rule cannot be applied due to missing value of the primary variable jspecificallyif the observation is missing the primary split variablewe apply the first (bestsurrogate rule if the first surrogate variable is also missingwe apply the second best surrogate ruleand so on controlling the tree shape eventuallywe are interested in getting the right-size tree namelya tree that shows good generalization properties it was already discussed in section (figure that shallow trees tend to underfit and deep trees tend to overfit the data basicallya shallow tree does not produce sufficient number of splits and deep tree will produce many partitions and thus many leaf nodes if we grow the tree to sufficient deptheach training sample will occupy separate leaf and we will observe zero loss with respect to the training data the above phenomenon is illustrated in figure which presents the cross-validation loss and the training loss as function of the tree depth in order to overcome the underand the overfitting problembreiman et al [ examined the possibility of stopping the tree from growing as soon as the decrease in loss due to split of node vas expressed in the difference of ( and ( )is smaller than some predefined parameter under this settingthe tree construction process will terminate when no leaf node can be split such that the contribution to the training loss after this split is greater than the authors found that this approach was unsatisfactory specificallyit was noted that very small leads to an excessive amount of splitting and thus causes overfitting increasing did not work either the problem is that the nature of the proposed rule is one-step-lookahead to see thisconsider tree node for which the best possible decrease in loss is |
15,069 | smaller than that predefined parameter. According to the proposed procedure, this node will not be split further. This may, however, be sub-optimal, because it could happen that one of the node's descendants, if split, could lead to a major decrease in loss.
[Figure: the cross-validation loss and the training loss as a function of the tree depth for a binary classification problem.]
To address these issues, a so-called pruning routine can be employed. The idea is as follows. We first grow a very deep tree and then prune (remove nodes) it upwards until we reach the root node. Consequently, the pruning process causes the number of tree nodes to decrease. While the tree is being pruned, the generalization risk gradually decreases, up to the point where it starts increasing again, at which point the pruning is stopped. This decreasing/increasing behavior is due to the bias-variance tradeoff. Tree Pruning. We next describe the details. To start with, let v and w be tree nodes. We say that w is a descendant of v if there is a path down the tree which leads from v to w. If such a path exists, we also say that v is an ancestor of w. To formally define pruning, we will require the following definition; an example of pruning is demonstrated in the figures below.
Definition (branches and pruning). A tree branch T_v of the tree T is a sub-tree of T rooted at node v. The pruning of a branch T_v from a tree T is performed via deletion of the entire branch T_v from T, except the branch's root node v. The resulting pruned tree is denoted by T − T_v. A sub-tree T′ of T that can be obtained in this way is called a pruned sub-tree of T; we indicate this with the notation T′ ≼ T or T ≽ T′.
A basic decision tree pruning procedure is summarized in the algorithm below.
15,070 | [Figure: an illustration of descendant and ancestor nodes in a tree.]
[Figure: (a) an original tree T; (b) a branch T_v of T; (c) the pruned tree T − T_v, obtained by pruning the branch T_v in (b) from the original tree T in (a).]
Algorithm: Decision Tree Pruning.
Input: training set τ.
Output: a sequence of decision trees T_0 ≽ T_1 ≽ · · ·
Build a large decision tree T_0 via the tree-construction algorithm given earlier. [A possible termination criterion for that algorithm is to have some small predetermined number of data points at each terminal node of T_0.]
k ← 0
while T_k has more than one node do
    k ← k + 1
    choose a node v of T_{k−1} and prune the branch rooted at v: T_k ← T_{k−1} − T_v
return T_0, . . . , T_K
15,071 | let be the initial (deeptree and let tk be the tree obtained after the -th pruning operationfor as soon as the sequence of trees tk is availk ableone can choose the best tree of {tk } = according to the smallest generalization risk specificallywe can split the data into training and validation sets in this casealgorithm is executed using the training set and the generalization risks of {tk } = are estimated via the validation set while algorithm and the corresponding best tree selection process look appealingthere is still an important question to considernamelyhow to choose the node and the corresponding branch tv in line of the algorithm in order to overcome this problembreiman proposed method called cost complexity pruningwhich we discuss next cost-complexity pruning let be tree obtained via pruning of tree denote the set of leaf (terminalnodes of by the number of leaves |wis measure for the complexity of the treerecall that |wis the number of regions {rw in the partition of corresponding to each tree is prediction function gas in ( in cost-complexity pruning the objective is to find prediction function (orequivalentlytree tthat minimizes the training loss ` (gwhile taking into account the complexity of the tree the idea is to regularize the training losssimilar to what was done in by adding penalty term for the complexity of the tree this leads to the following definition cost-complexity pruning definition cost-complexity measure let {(xi yi )}ni= be data set and be real number for given tree tthe cost-complexity measure ct (gtis defined asn ct (gt: {xi rw loss(yi (xi ) |wn ww = ( ` (gg | |where ` (gis the training loss ( small values of result in small penalty for the tree complexity | |and thus large trees (that fit the entire training data wellwill minimize the measure ct (gtin particularfor will be the minimizer of ct (gton the other handlarge values of will prefer smaller trees ormore preciselytrees with fewer leaves for sufficiently large gthe solution will collapse to single (rootnode it can be shown thatfor every value of gthere exists smallest minimizing subtree with respect to the cost-complexity measure in practicea suitable is selected via observing the performance of the learner on the validation set or by cross-validation these advantages and the corresponding limitations are detailed next cost-complexity measure |
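In practice, the selection of γ by cross-validation is directly supported in sklearn, where minimal cost-complexity pruning is controlled by the ccp_alpha parameter of DecisionTreeClassifier (playing the role of γ), and cost_complexity_pruning_path returns the sequence of effective γ values generated by pruning. The sketch below, on made-up data, is an illustration of this idea rather than a reproduction of the pruning algorithm above.

import numpy as np
from sklearn.datasets import make_blobs
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# made-up classification data
X, y = make_blobs(n_samples=500, n_features=5, centers=4,
                  random_state=10, cluster_std=3.0)

# the sequence of effective gamma (ccp_alpha) values along the pruning path
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X, y)

best_alpha, best_score = 0.0, -np.inf
for alpha in path.ccp_alphas:
    clf = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha)
    score = np.mean(cross_val_score(clf, X, y, cv=10))
    if score > best_score:
        best_alpha, best_score = alpha, score

print(best_alpha, best_score)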
15,072 | advantages and limitations of decision trees we list number of advantages and disadvantages of decision treesas compared with other supervised learning methods such as were discussed in and advantages the tree structure can handle both categorical and numerical features in natural and straightforward way specificallythere is no need to pre-process categorical featuressay via the introduction of dummy variables the final tree obtained after the training phase can be compactly stored for the purpose of making predictions for new feature vectors the prediction process only involves single tree traversal from the tree root to leaf the hierarchical nature of decision trees allows for an efficient encoding of the feature' conditional information specificallyafter an internal split of feature via the standard splitting rule ( )algorithm will only consider such subsets of data that were constructed based on this splitthus implicitly exploiting the corresponding conditional information from the initial split of the tree structure can be easily understood and interpreted by domain experts with little statistical knowledgesince it is essentially logical decision flow diagram the sequential decision tree growth procedure in algorithm and in particular the fact that the tree has been split using the most important featuresprovides an implicit step-wise variable elimination procedure in additionthe partition of the variable space into smaller regions results in simpler prediction problems in these regions decision trees are invariant under monotone transformations of the data to see thisconsider the (optimalsplitting rule ( { }where is positive feature suppose that is transformed to nowthe optimal splitting rule will take the form ( { in the classification settingit is common to report not only the predicted value of feature vectore as in ( )but also the respective class probabilities decision trees handle this task without any additional effort specificallyconsider new feature vector during the estimation processwe will perform tree traversal and the point will end up in certain leaf the probability of this feature vector lying in class can be estimated as the proportion of training points in that are in class as each training point is treated equally in the construction of treetheir structure of the tree will be relatively robust to outliers in waytrees exhibit similar kind of robustness as the sample median does for real-valued data |
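The class-probability estimate mentioned in the list of advantages above (the proportion of training points of each class in the leaf that the new feature vector falls into) is exactly what a fitted scikit-learn tree reports via predict_proba; a small sketch with an illustrative data set:

from sklearn.datasets import make_blobs
from sklearn.tree import DecisionTreeClassifier

# illustrative classification data (placeholder)
X, y = make_blobs(n_samples=300, centers=3, random_state=1)

tree = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X, y)

# the tree traversal for a new point ends in one leaf; the reported class
# probabilities are the class proportions of the training points in that leaf
x_new = [[0.0, 0.0]]
print("predicted class:", tree.predict(x_new))
print("estimated class probabilities:", tree.predict_proba(x_new))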
15,073 | limitations despite the fact that the decision trees are extremely interpretablethe predictive accuracy is generally inferior to other established statistical learning methods in additiondecision treesand in particular very deep trees that were not subject to pruningare heavily reliant on their training set small change in the training set can result in dramatic change of the resulting decision tree their inferior predictive accuracyhoweveris direct consequence of the bias-variance tradeoff specificallya decision tree model generally exhibits high variance to overcome the above limitationsseveral promising approaches such as baggingrandom forestand boosting are introduced below the bagging approach was initially introduced in the context of an ensemble of decision trees howeverboth the bagging and the boosting methods can be applied to improve the accuracy of general prediction functions bootstrap aggregation the major idea of the bootstrap aggregation or bagging method is to combine prediction functions learned from multiple data setswith view to improving overall prediction accuracy bagging is especially beneficial when dealing with predictors that tend to overfit the datasuch as in decision treeswhere the (unprunedtree structure is very sensitive to small changes in the training set [ to start withconsider an idealized setting for regression treewhere we have access to iid copies tb of training set thenwe can train separate regression models ( different decision treesusing these setsgiving learners gt gtb and take their averageb gt ( ( gavg (xb = by the law of large numbersas the average prediction function converges to the expected prediction function :egt the following result shows that using gas prediction function (if it were knownwould result in an expected squared-error generalization risk that is less than or equal to the expected generalization risk for general prediction function gt it thus suggests that taking an average of prediction functions may lead to better expected squared-error generalization risk theorem expected squared-error generalization risk let be random training set and let xy be random feature vector and response that are independent of then gt (xe ( in this section tk means the -th training setnot training set of size bagging |
15,074 | proofwe have " gt (xxy [ xye[gt (xxyy ( where the inequality follows from eu (eu) for any (conditionalexpectation consequentlyby the tower property ii gt (xe gt (xxy ( bagged estimator unfortunatelymultiple independent data sets are rarely available but we can substitute them by bootstrapped ones specificallyinstead of the tb setswe can obtain random training sets tbby resampling them from single (fixedtraining set tsimilar to algorithm and use them to train separate models by model averaging as in ( we obtain the bootstrapped aggregated estimator or bagged estimator of the formb gbag (xgt ( ( = algorithm bootstrap aggregation sampling inputtraining set {(xi yi )}ni= and resample size outputbootstrapped data sets for to do tb for to do draw ( dnue /select random index tbtb{(xi yi ) return tbb remark (bootstrap aggregation for classification problemsnote that ( is suitable for handling regression problems howeverthe bagging idea can be readily extended to handle classification settings as well for examplegbag can take the majority vote among {gtb} bthat isto accept the most frequent class among predictors while bagging can be applied for any statistical model (such as decision treesneural networkslinear regressionk-nearest neighborsand so on)it is most effective for predictors that are sensitive to small changes in the training set the reason becomes clear when we decompose the expected generalization risk as `(gt ` ( [gt (xxg( )) [var[gt (xx]]{ { expected squared bias expected variance ( |
15,075 | similar to ( compare this with the same decomposition for the average prediction function gbag in ( as egbag (xegt ( )we see that any possible improvement in the generalization risk must be due to the expected variance term averaging and bagging are thus only useful for predictors with large expected variancerelative to the other two terms examples of such "unstablepredictors include decision treesneural networksand subset selection in linear regression [ on the other hand"stablepredictors are insensitive to small data changesan example being the -nearest neighbors method note that for independent training sets tb reduction of the variance by factor is achievedvar gbag (xb- var gt (xagainit depends on the squared bias and irreducible loss how significant this reduction is for the generalization risk remark (limitations of baggingit is important to remember that gbag is not exactly equal to gavg which in turn is not exactly gspecificallygbag is constructed from the bootstrap approximation of the sampling pdf as consequencefor stable predictorsit can happen that gbag will perform worse than gt in addition to the deterioration of the bagging performance for stable proceduresit can also happen that gt has already achieved near optimal predictive accuracy given the available training data in this casebagging will not introduce significant improvement the bagging process provides an opportunity to estimate the generalization risk of the bagged model without an additional test set specificallyrecall that we obtain the tbsets from single training set by sampling via algorithm and use them to train separate models it can be shown (see exercise thatfor large sample sizeson average about third (more preciselya fraction - of the original sample points are not included in bootstrapped set tbfor thereforethese samples can be used for the loss estimation these samples are called out-of-bag (oobobservations specificallyfor each sample from the original data setwe calculate the oob loss using predictors that were trained without this particular sample the estimation procedure is summarized in algorithm hastie et al [ observe thatunder certain conditionsthe oob loss is almost identical to the -fold cross-validation loss in additionthe oob loss can be used to determine the number of trees required specificallywe can train predictors until the oob loss stops changing namelydecision trees are added until the oob loss stabilizes algorithm out-of-bag loss estimation inputthe original data set {( )(xnn yn )}the bootstrapped data sets { tb }and the trained predictors gt gtb outputout-of-bag loss for the averaged model for to do ci /indices of predictors not depending on (xi yi for to do if (xi yi tbthen ci ci {bp yi |ci |- bci gtb(xi li loss yi yi pn loob = li return loob out-of-bag |
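The resampling step of the bootstrap aggregation algorithm above can be wrapped into a small stand-alone helper; the worked example that follows does the same thing inline. This is only a sketch and assumes X and y are NumPy arrays.

import numpy as np

def bootstrap_datasets(X, y, B, rng=None):
    # return B data sets obtained by resampling (X, y) with replacement
    rng = np.random.default_rng() if rng is None else rng
    n = len(X)
    datasets = []
    for _ in range(B):
        idx = rng.integers(0, n, size=n)   # n indices drawn uniformly with replacement
        datasets.append((X[idx], y[idx]))
    return datasets

Each of the B pairs returned by this helper can then be used to train a separate predictor, and the resulting predictions are averaged to give the bagged estimator.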
15,076 | example (bagging for regression treewe next proceed with basic bagging example for regression treein which we compare the decision tree estimator with the corresponding bagged estimator we use the metric (coefficient of determinationfor comparison baggingexample py import numpy as np from sklearn datasets import make_friedman from sklearn tree import decisiontreeregressor from sklearn model_selection import train_test_split from sklearn metrics import _score np random seed ( create regression problem n_points points xy make_friedman n_samples =n_points n_features = noise = random_state = split to train /test set x_train x_test y_train y_test train_test_split (xytest_size = random_state = training regtree decisiontreeregressor random_state = regtree fit(x_train y_train test yhat regtree predict x_test bagging construction n_estimators = bag np empty (n_estimators )dtype object bootstrap_ds_arr np empty (n_estimators )dtype object for in range n_estimators )sample bootstrapped data set ids np random choice range ( lenx_train )),size=lenx_train )replace =truex_boot x_train [idsy_boot y_train [idsbootstrap_ds_arr [inp unique (idsbag[idecisiontreeregressor (bag[ifit(x_boot y_boot bagging prediction yhatbag np zeros (leny_test )for in range n_estimators )yhatbag yhatbag bag[ipredict x_test yhatbag yhatbag n_estimators out of bag loss estimation |
15,077 | oob_pred_arr np zeros (lenx_train )for in range (lenx_train )) x_train [ireshape ( - [for in range n_estimators )if(np isin(ibootstrap_ds_arr [ ]=false ) append (bfor pred in bag[ ]oob_pred_arr [ioob_pred_arr [ (pred predict ( )/len( )l_oob _score (y_train oob_pred_arr print (decisiontreeregressor ^ score ", _score (y_test yhat)"nbagging ^ score " _score (y_test yhatbag )"nbagging oob ^ score ",l_oob decisiontreeregressor ^ score bagging ^ score bagging oob ^ score the decision tree bagging improves the test-set score by about (from to moreoverthe oob score ( is very close to the true generalization risk ( of the bagged estimator the bagging procedure can be further enhanced by introducing random forestswhich is discussed next random forests in section we discussed the intuition behind the prediction averaging procedure specificallyfor some feature vector let zb gtb ( ) be iid prediction valuesobtained from independent training sets tb suppose that var zb for all then the variance of the average prediction value is equal to / howeverif bootstrapped data sets {tbare used insteadthe corresponding random variables {zb will be correlated in particularzb gtb(xfor are identically distributed (but not independentwith some positive pairwise correlation it then holds that (see exercise var ( % ( while the second term of ( goes to zero as the number of observation increasesthe first term remains constant this issue is particularly relevant for bagging with decision trees for exampleconsider situation in which there exists feature that provides very good split of the data such feature will be selected and split for every {gtb} = at the root level and we will consequently end up with highly correlated predictions in such situationprediction averaging will not introduce the desired improvement in the performance of the bagged predictor |
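A quick empirical check of the variance formula above: we simulate B identically distributed prediction values with common variance sigma^2 and pairwise correlation rho, and compare the sample variance of their average with rho*sigma^2 + (1 - rho)*sigma^2 / B. The numbers are purely synthetic.

import numpy as np

rng = np.random.default_rng(0)
B, rho, sigma = 50, 0.6, 2.0

# equicorrelated covariance matrix of (Z_1, ..., Z_B)
cov = sigma**2 * (rho * np.ones((B, B)) + (1 - rho) * np.eye(B))

# many replications of the averaged prediction value
Z = rng.multivariate_normal(np.zeros(B), cov, size=100000)
avg = Z.mean(axis=1)

print("simulated variance             :", avg.var())
print("rho*sigma^2 + (1-rho)*sigma^2/B:", rho * sigma**2 + (1 - rho) * sigma**2 / B)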
15,078 | the major idea of random forests is to perform bagging in combination with "decorrelationof the trees by including only subset of features during the tree construction for each bootstrapped training set tbwe build decision tree using randomly selected subset of features for the splitting rules this simple but powerful idea will decorrelate the treessince strong predictors will have smaller chance to be considered at the root levels consequentiallywe can expect to improve the predictive performance of the bagged estimator the resulting predictor (random forestconstruction is summarized in algorithm algorithm random forest construction inputtraining set {(xi yi )}ni= the number of trees in the forest band the number of features to be includedwhere is the total number of features in outputensemble of trees generate bootstrapped training sets { via algorithm for to do train decision tree gtbvia algorithm where each split is performed using randomly selected features out of return {gt } = for regression problemsthe output of algorithm is combined to yield the random forest prediction functionb gt (xgrf (xb = in the classification settingsimilar to remark we take instead the majority vote from the {gtbexample (random forest for regression treewe continue with the basic bagging example for regression treein which we compared the decision tree estimator with the corresponding bagged estimator herehoweverwe use the random forest with trees and subset size it can be seen that the random forest' score is outperforming that of the bagged estimator baggingexamplerf py from from from from sklearn datasets import make_friedman sklearn model_selection import train_test_split sklearn metrics import _score sklearn ensemble import randomforestregressor create regression problem n_points points xy make_friedman n_samples =n_points n_features = noise = random_state = split to train /test set x_train x_test y_train y_test train_test_split (xytest_size = random_state = rf randomforestregressor n_estimators = oob_score true |
15,079 | max_features = random_state = rf fit(x_train y_train yhatrf rf predict x_test print ("rf ^ score " _score (y_test yhatrf )"\nrf oob ^ score "rf oob_score_ rf ^ score rf oob ^ score remark (the optimal number of subset features mthe default values for are bp/ and for regression and classification settingrespectively howeverthe standard practice is to treat as hyperparameter that requires tuningdepending on the specific problem at hand [ note that the procedure of bagging decision trees is special case of random forest construction (see exercise consequentlythe oob loss is readily available for random forests while the advantage of bagging in the sense of enhanced accuracy is clearwe should also consider its negative aspects andin particularthe loss of interpretability specifically random forest consists of many treesthus making the prediction process both hard to visualize and interpret for examplegiven random forestit is not easy to determine subset of features that are essential for accurate prediction the feature importance measure intends to address this issue the idea is as follows each internal node of decision tree induces certain decrease in the training losssee ( let us denote this decrease in the training loss by loss ( )where is not leaf node of in additionrecall that for splitting rules of the type { ( )each node is associated with feature that determines the split using the above definitionswe can define the feature importance of as it ( loss ( { is associated with } ( internal while ( is defined for single treeit can be readily extended to random forests specificallythe feature importance in that case will be averaged over all trees of the forestthat isfor forest consisting of trees { tb }the feature importance measure isb irf ( it ( ) = ( example (feature importancewe consider classification problem with features the data is specifically designed to contain only informative features out of in the code belowwe apply the random forest procedure and calculate the corresponding feature importance measureswhich are summarized in figure feature importance |
15,080 | varimportance py import numpy as np from sklearn datasets import make_classification from sklearn ensemble import randomforestclassifier import matplotlib pyplot as plt pylab n_points create regression data with data points xy make_classification n_samples =n_points n_features = n_informative = n_redundant = n_repeated = random_state = shuffle false rf randomforestclassifier n_estimators = max_features ="log "rf fit( ,yimportances rf feature_importances_ indices np argsort importances )[:- for in range ( )print (feature % (% )indices [ ]+ importances indices [ ]])importance std np std (rf feature_importances_ for tree in rf estimators_ ]axis = plt figure (plt barrange ( shape [ ]importances indices ]color =" "yerr=stdindices ]align =center "plt xticks range ( shape [ ]indices + plt xlim ([- shape [ ]]pylab xlabel (feature index "pylab ylabel (importance "plt show ( feature index figure importance measure for the -feature data set with only informative features and |
15,081 | clearlyit is hard to visualize and understand the prediction process based on trees howeverfigure shows that the features and were correctly identified as being important boosting boosting is powerful idea that aims to improve the accuracy of any learning algorithmespecially when involving weak learners -simple prediction functions that exhibit performance slightly better than random guessing shallow decision trees typically yield weak learners originallyboosting was developed for binary classification tasksbut it can be readily extended to handle general classification and regression problems the boosting approach has some similarity with the bagging method in the sense that boosting uses an ensemble of prediction functions despite this similaritythere exists fundamental difference between these methods specificallywhile bagging involves the fitting of prediction functions to bootstrapped datathe predicting functions in boosting are learned sequentially that iseach learner uses information from previous learners the idea is to start with simple model (weak learnerg for the data {(xi yi )}ni= and then to improve or "boostthis learner to learner : herethe function is found by minimizing the training loss for over all functions in some class of functions for exampleh could be the set of prediction functions that can be obtained via decision tree of maximal depth given loss function lossthe function is thus obtained as the solution to the optimization problem loss (yi (xi (xi ) argmin = hh ( this process can be repeated for to obtain and so onyielding the boosted prediction function gb (xg (xhb ( ( = instead of using the updating step gb gb- hb one prefers to use the smooth updating step gb gb- hb for some suitably chosen step-size parameter as we shall see shortlythis helps reduce overfitting boosting can be used for regression and classification problems we start with simple regression settingusing the squared-error lossthusloss( , ( ) in this caseit is common to start with (xn- ni= yi and each hb for isnchosenoas learner for the data set tb of residuals corresponding to gb- that istb :xi (bi = with (bi :yi gb- (xi ( this leads to the following boosting procedure for regression with squared-error loss weak learners |
15,082 | algorithm regression boosting with squared-error loss inputtraining set {(xi yi )}ni= the number of boosting rounds band shrinkage step-size parameter outputboosted prediction function - pn set (xn = yi for to do (bn set (by ( for nand let - = fit prediction function hb on the training data tb set gb (xgb- (xg hb ( step-size parameter return gb the step-size parameter introduced in algorithm controls the speed of the fitting process specificallyfor small values of gboosting takes smaller steps towards the training loss minimization the step-size is of great practical importancesince it helps the boosting algorithm to avoid overfitting this phenomenon is demonstrated in figure train data - - - - train data - - figure the left and the right panels show the fitted boosting regression model with and respectively note the overfitting on the left very basic implementation of algorithm which reproduces figure is provided below regressionboosting py import numpy as np from sklearn tree import decisiontreeregressor from sklearn model_selection import train_test_split from sklearn datasets import make_regression import matplotlib pyplot as plt def trainboost (alpha boostingrounds , , )g_ np mean( |
15,083 | residuals yalpha *g_ list of basic regressor g_boost [for in range boostingrounds )h_i decisiontreeregressor max_depth = h_i fit(xresiduals residuals residuals alpha *h_i predict (xg_boost append (h_ireturn g_ g_boost def predict (g_ g_boost ,alpha )yhat alpha *g_ *np ones(len( )for in range (leng_boost ))yhat yhatalpha g_boost [jpredict (xreturn yhat np random seed ( sz create data set , make_regression n_samples =sz n_features = n_informative = noise = boosting algorithm boostingrounds alphas [ for alpha in alphas g_ g_boost trainboost (alpha boostingrounds , ,yyhat predict (g_ g_boost alpha xplot tmpx np reshape (np linspace - , , ,( , )yhatx predict (g_ g_boost alpha tmpxf plt figure (plt plot( , ,'*'plt plot(tmpx yhatx plt show (the parameter can be viewed as step size made in the direction of the negative gradient of the squared-error training loss to see thisnote that the negative gradient loss (yi (yi ) = (yi gb- (xi ) =gb- (xi =gb- (xi is two times the residual (bi given in ( that is used in algorithm to fit the prediction function hb in factone of the major advances in the theory of boosting was the recognition that one can use similar gradient descent method for any differentiable loss function the |
15,084 | gradient boosting resulting algorithm is called gradient boosting the general gradient boosting algorithm is summarized in algorithm the main idea is to mimic gradient descent algorithm in the following sense at each stage of the boosting procedurewe calculate negative gradient on training points xn (lines - thenwe fit simple model (such as shallow decision treeto approximate the gradient (line for any feature finallysimilar to the gradient descent methodwe make -sized step in the direction of the negative gradient (line algorithm gradient boosting inputtraining set {(xi yi )}ni= the number of boosting rounds ba differentiable loss function loss( , )and gradient step-size parameter outputgradient boosted prediction function set ( for to do for to do evaluate the negative gradient of the loss at (xi yi via ri( loss (yi zz =gb- (xi approximate the negative gradient by solving (bri (xi hb argmin = hh ( set gb (xgb- (xg hb (xreturn gb example (gradient boosting for regression treelet us continue with the basic bagging and random forest examples for regression tree (examples and )where we compared the standard decision tree estimator with the corresponding bagging and random forest estimators nowwe use the gradient boosting estimator from algorithm as implemented in sklearn we use and perform boosting rounds as prediction function hb for we use small regression trees of depth at most note that such individual trees do not usually give good performancethat isthey are weak prediction functions we can see that the resulting boosting prediction function gives the score equal to which is better than scores of simple decision tree ( )the bagged tree ( )and the random forest ( gradientboostingregression py import numpy as np from sklearn datasets import make_friedman from sklearn tree import decisiontreeregressor from sklearn model_selection import train_test_split from sklearn metrics import _score |
15,085 | create regression problem n_points points xy make_friedman n_samples =n_points n_features = noise = random_state = split to train /test set x_train x_test y_train y_test train_test_split (xytest_size = random_state = boosting sklearn from sklearn ensemble import gradientboostingregressor breg gradientboostingregressor learning_rate = n_estimators = max_depth = random_state = breg fit(x_train y_train yhat breg predict x_test print (gradient boosting ^ score ", _score (y_test yhat)gradient boosting ^ score we proceed with the classification setting and consider the original boosting algorithmadaboost the inventors of the adaboost method considered binary classification problemwhere the response variable belongs to the {- set the idea of adaboost is similar to the one presented in the regression settingthat isadaboost fits sequence of prediction functions with final prediction function gb (xg (xb hb ( )( = where each function hb is of the form hb (xab cb ( )with ab rand where cb is proper (but weakclassifier in some class thuscb ( {- exactly as in ( )we solve at each boosting iteration the optimization problem loss (yi gb- (xi (xi )(ab cb argmin > cc = ( howeverin this case the loss function is defined as loss( , ye-yby the algorithm starts with simple model : and for each successive iteration solves ( thusn gb- (xi -yi (xi -yi (xi (ab cb argmin |-yi{ argmin (bi > cc > cc = = wi(bwhere (bi :exp{-yi gb- (xi )does not depend on or it follows that - (ab cb argmin > cc = (bi { (xi yi - argmin ( > cc ` ( (ce- = (bi { (xi yi ( adaboost |
15,086 | where ` ( ( :(bi= wi { (xi yi pn (bi= wi pn can be interpreted as the weighted zero-one training loss at iteration for any the program ( is minimized by classifier that minimizes this weighted training lossthat iscb (xargmin ` ( ( cc substituting ( into ( and solving for the optimal gives ` ( (cb ab ln ` ( (cb ( this gives the adaboost algorithmsummarized below algorithm adaboost inputtraining set {(xi yi )}ni= and the number of boosting rounds outputadaboost prediction function set ( for to do ( / for to do fit classifier cb on the training set by solving cb argmin ` ( (cargmin cc cc set ab ln ` ( (cb ` ( (cb (bi= wi { (xi yi pn (bi= wi pn /update weights for to do ( + (bi exp{-yi ab cb (xi )pb return ( : = ab cb ( algorithm is quite intuitive at the first step ( )adaboost assigns an equal weight ( / to each training sample (xi yi in the set {(xi yi )}ni= note thatin this casethe weighted zero-one training loss is equal to the regular zero-one training loss at each successive step the weights of observations that were incorrectly classified by the previous boosting prediction function gb are increasedand the weights of correctly classified observations are decreased due to the use of the weighted zero-one lossthe set of incorrectly classified training samples will receive an extra weight and thus have better chance of being classified correctly by the next classifier cb+ as soon as the adaboost algorithm finds the prediction function gb the final classification is delivered via sign ab cb (xb= |
15,087 | the step-size parameter ab found by the adaboost algorithm in line can be viewed as an optimal step-size in the sense of training loss minimization howeversimilar to the regression settingone can slow down the adaboost algorithm by setting ab to be fixed (smallvalue ab as usualwhen the latter is done in practiceit is tackling the problem of overfitting we consider an implementation of algorithm for binary classification problem specificallyduring all boosting roundswe use simple decision trees of depth (also called decision tree stumpsas weak learners the exponential and zero-one training losses as function of the number of boosting rounds are presented in figure adaboost py from sklearn datasets import make_blobs from sklearn tree import decisiontreeclassifier from sklearn model_selection import train_test_split from sklearn metrics import zero_one_loss import numpy as np def exponentialloss ( ,yhat) len(yloss for in range ( )loss loss+np exp(- [ ]yhat[ ]loss loss/ return loss create binary classification problem np random seed ( n_points points xy make_blobs n_samples =n_points n_features = centers = cluster_std = random_state = [ == ]- adaboost implementation boostingrounds len(xw / *np ones(nlearner [alpha_b_arr [for in range boostingrounds )clf decisiontreeclassifier max_depth = clf fit( ,ysample_weight =wlearner append (clftrain_pred clf predict (xerr_b stumps |
15,088 | for in range ( )iftrain_pred [ ]!= [ ])err_b err_b + [ierr_b err_b /np sum(walpha_b np log (( err_b )err_b alpha_b_arr append alpha_b for in range ( ) [iw[ ]np exp(- [ ]alpha_b train_pred [ ]yhat_boost np zeros (len( )for in range boostingrounds )yhat_boost yhat_boost alpha_b_arr [ ]learner [jpredict (xyhat np zeros (nyhatyhat_boost >= yhatyhat_boost < - print (adaboost classifier exponential loss "exponentialloss (yyhat_boost )print (adaboost classifier zero --one loss ",zero_one_loss ( ,yhat)adaboost classifier exponential loss adaboost classifier zero --one loss exponential loss zero-one loss loss , figure exponential and zero-one training loss as function of the number of boosting rounds for binary classification problem |
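As a cross-check of the implementation above, one can fit scikit-learn's AdaBoostClassifier on the same kind of data; its default base learner is a depth-1 tree (a stump). This is only a sketch: the reported loss will differ slightly from our own implementation, and the exact boosting variant used by default depends on the scikit-learn version.

import numpy as np
from sklearn.datasets import make_blobs
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import zero_one_loss

# the same kind of binary classification data as in the example above
X, y = make_blobs(n_samples=5000, n_features=10, centers=2,
                  cluster_std=10, random_state=100)
y = np.where(y == 0, -1, 1)

# note: older scikit-learn versions may default to the "SAMME.R" variant;
# passing algorithm="SAMME" there gives the discrete AdaBoost scheme above
clf = AdaBoostClassifier(n_estimators=500).fit(X, y)
print("zero-one training loss:", zero_one_loss(y, clf.predict(X)))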
15,089 | further reading breiman' book on decision trees[ ]serves as great starting point some additional advances can be found in [ from the computational point of viewthere exists an efficient recursive procedure for tree pruningsee and in [ several advantages and disadvantages of using decision trees are debated in [ detailed discussion on bagging and random forests can be found in [ and [ ]respectively freund and schapire [ provide the first boosting algorithmthe adaboost while adaboost was developed in the context of the computational complexity of learningit was later discovered by friedman [ that adaboost is special case of an additive model in additionit was shown that for any differentiable loss functionthere exists an efficient boosting procedure which mimics the gradient descent algorithm the foundation of the resulting gradient boosting method is detailed in [ python packages that implement gradient boosting include xgboost and lightgbm exercises show that any training set {(xyi ) ncan be fitted via tree with zero training loss suppose during the construction of decision tree we wish to specify constant regional prediction function gw on the region rw based on the training data in rw say {( )(xk yk )show that gw ( : - ki= yi minimizes the squared-error loss using the program from section write basic implementation of decision tree for binary classification problem implement the misclassificationgini indexand entropy impurity criteria to split nodes compare the results suppose in the decision tree of example there are blue and red data points in certain tree region calculate the misclassification impuritythe gini impurityand the entropy impurity repeat these calculations for blue and red data points consider the procedure of finding the best splitting rule for categorical variable with labels from section show that one needs to consider subsets of { kto find the optimal partition of labels reproduce figure using the following classification data from sklearn datasets import make_blobs xy make_blobs n_samples = n_features = centers = random_state = cluster_std = prove ( )that isshow that { loss( gw ( ) (gi ww = |
15,090 | suppose is training set with elements and talso of size nis obtained from by bootstrappingthat isresampling with replacement show that for large ntdoes not contain fraction of about - of the points from prove equation ( consider the following training/test split of the data construct random forest regressor and identify the optimal subset size in the sense of score (see remark import numpy as np from sklearn datasets import make_friedman from sklearn tree import decisiontreeregressor from sklearn model_selection import train_test_split from sklearn metrics import _score create regression problem n_points points xy make_friedman n_samples =n_points n_features = noise = random_state = split to train /test set x_train x_test y_train y_test train_test_split (xytest_size = random_state = explain why bagging decision trees are special case of random forests show that ( holds consider the following classification data and module importsfrom sklearn datasets import make_blobs from sklearn metrics import zero_one_loss from sklearn model_selection import train_test_split import numpy as np import matplotlib pyplot as plt from sklearn ensemble import gradientboostingclassifier x_train y_train make_blobs n_samples = n_features = centers = random_state = cluster_std = using the gradient boosting algorithm with roundsplot the training loss as function of gfor what is your conclusion regarding the relation between and |
15,091 | Deep Learning. In this chapter we show how one can construct a rich class of approximating functions called neural networks. The learners belonging to the neural-network class of functions have attractive properties that have made them ubiquitous in modern machine learning applications: their training is computationally feasible and their complexity is easy to control and fine-tune. Introduction. Earlier in the book we described the basic supervised learning task: namely, we wish to predict a random output from a random input x, using a prediction function g that belongs to a suitably chosen class G of approximating functions. More generally, we may wish to predict a vector-valued output using a prediction function g from a class G. In this chapter, g(x) denotes the vector-valued output for a given input x; this differs from our previous use (e.g., in an earlier table), where g denotes a vector of scalar outputs. In the machine learning context, the class G is sometimes referred to as the hypothesis space or the universe of possible models, and the representational capacity of a hypothesis space is simply its complexity. [margin notes: hypothesis space, representational capacity] Suppose that we have a class of functions G_L indexed by a parameter L that controls the complexity of the class, so that G_L ⊆ G_{L+1} ⊆ G_{L+2} ⊆ ... . In selecting a suitable class of functions, we have to be mindful of the approximation-estimation tradeoff. On the one hand, the class G_L must be complex (rich) enough to accurately represent the optimal unknown prediction function g*, which may require a very large L. On the other hand, the learners in the class G_L must be simple enough to train, with small estimation error and with minimal demands on computer memory, which may necessitate a small L. In balancing these competing objectives, it helps if the more complex class G_{L+1} is easily constructed from an already existing and simpler G_L; the simpler class of functions G_L may itself be constructed by modifying an even simpler class G_{L-1}, and so on. A class of functions that permits such a natural hierarchical construction is the class of neural networks. [margin note: neural networks] Conceptually, a neural network with L layers is a nonlinear parametric regression model whose representational capacity can easily be controlled by |
15,092 | alternativelyin ( we will define the output of neural network as the repeated composition of linear and (componentwisenonlinear functions as we shall seethis representation of the output will provide flexible class of nonlinear functions that can be easily differentiated as resultthe training of learners via gradient optimization methods involves mostly standard matrix operations that can be performed very efficiently historicallyneural networks were originally intended to mimic the workings of the human brainwith the network nodes modeling neurons and the network links modeling the axons connecting neurons for this reasonrather than using the terminology of the regression models in we prefer to use nomenclature inspired by the apparent resemblance of neural networks to structures in the human brain we notehoweverthat the attempts at building efficient machine learning algorithms by mimicking the functioning of the human brain have been as unsuccessful as the attempts at building flying aircraft by mimicking the flapping of birdswings insteadmany effective machine algorithms have been inspired by age-old mathematical ideas for function approximation one such idea is the following fundamental result (see [ for prooftheorem kolmogorov ( every continuous function [ with can be written as ( + = hi (xi = where { hi is set of univariate continuous functions that depend on gthis result tells us that any continuous high-dimensional map can be represented as the function composition of much simpler (one-dimensionalmaps the composition of the maps needed to compute the output (xfor given input are depicted in figure showing directed graph or neural network with three layersdenoted as xi hi xp hj zj aj (xhq pq zq aq figure every continuous function [ can be represented by neural network with one hidden layer ( )an input layer ( )and an output layer ( |
15,093 | in particulareach of the components of the input is represented as node in the input layer ( in the hidden layer ( there are : nodeseach of which is associated with pair of variables (zawith values :hi (xi and : ( hidden layer = link between nodes ( and xi with weight hi signifies that the value of depends on the value of xi via the function hi finallythe output layer ( represents the value (xqj= note that the arrows on the graph remind us that the sequence of the computations is executed from left to rightor from the input layer through to the output layer in practicewe do not know the collection of functions { hi }because they depend on the unknown gin the unlikely event that gis linearthen all of the ( )( one-dimensional functions will be linear as well howeverin generalwe should expect that each of the functions in { hi is nonlinear unfortunatelytheorem only asserts the existence of { hi }and does not tell us how to construct these nonlinear functions one way out of this predicament is to replace these ( )( unknown functions with much larger number of known nonlinear functions called activation functions for examplea logistic activation function is - ( ( exp(- )we then hope that such networkbuilt from sufficiently large number of activation functionswill have similar representational capacity as the neural network in figure with ( )( functions in generalwe wish to use the simplest activation functions that will allow us to build learner with large representational capacity and low training cost the logistic function is merely one possible choice for an activation function from among infinite possibilities figure shows small selection of activation functions with different regularity or smoothness properties heaviside or unit step rectified linear unit (relu logistic - { - { - - ( exp(- )figure some common activation functions (zwith their defining formulas and plots the logistic function is an example of sigmoid (that isan -shapedfunction some books define the logistic function as ( (in terms of our definitionin addition to choosing the type and number of activation functions in neural networkwe can improve its representational capacity in another important wayintroduce more hidden layers in the next section we explore this possibility in detail activation functions derive their name from models of neuron' response when exposed to chemical or electric stimuli activation functions |
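The three activation functions of the figure above are easily written down in vectorized NumPy form, so that they can later be applied componentwise to a whole layer; a minimal sketch:

import numpy as np

def heaviside(z):
    return np.where(z > 0, 1.0, 0.0)        # unit step 1{z > 0}

def relu(z):
    return np.maximum(z, 0.0)               # rectified linear unit z 1{z > 0}

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))         # 1 / (1 + e^{-z})

z = np.linspace(-4, 4, 9)
print(relu(z))
print(logistic(z))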
15,094 | feed-forward weight matrix bias vector feed-forward neural networks in neural network with + layersthe zero or input layer ( encodes the input feature vector xand the last or output layer ( lencodes the (multivaluedoutput function (xthe remaining layers are called hidden layers each layer has number of nodessay pl nodes for layer in this notationp is the dimension of the input feature vector andfor examplepl signifies that (xis scalar output all nodes in the hidden layers ( are associated with pair of variables (za)which we gather into pl -dimensional column vectors zl and al in the so-called feed-forward networksthe variables in any layer are simple functions of the variables in the preceding layer in particularzl and al- are related via the linear relation zl wl al- bl for some weight matrix wl and bias vector bl within any hidden layer the components of the vectors zl and al are related via al sl (zl )where sl pl pl is nonlinear multivalued function all of these multivalued functions are typically of the form sl ( [ ( ) (zdim( )] ( where is an activation function common to all hidden layers the function sl pl- pl in the output layer is more general and its specification dependsfor exampleon whether the network is used for classification or for the prediction of continuous output four-layer ( network is illustrated in figure input layer output layer hidden layers bias xi ji weight , , , , , ,mk , (xfigure neural network with the layer is the input layerfollowed by two hidden layersand the output layer hidden layers may have different numbers of nodes the output of this neural network is determined by the input vector (nonlinearfunctions {sl }as well as weight matrices wl [wl, and bias vectors bl [blj for |
15,095 | herethe (ij)-th element of the weight matrix wl [wl, is the weight that connects the -th node in the ( )-st layer with the -th node in the -th layer the name given to (the number of layers without the input layeris the network depth and maxl pl is called the network width while we mostly study networks that have an equal number of nodes in the hidden layers ( pl- )in general there can be different numbers of nodes in each hidden layer the output (xof multiple-layer neural network is obtained from the input via the following sequence of computationsx - ( ( |{ { { |{ |{za - - bl sl (zl (xl { |{zzl ( al denoting the function wl bl by ml the output (xcan thus be written as the function composition (xsl ( ( the algorithm for computing the output (xfor an input is summarized next note that we leave open the possibility that the activation functions {sl have different definitions for each layer in some casessl may even depend on some or all of the already computed and algorithm feed-forward propagation for neural network inputfeature vector xweights {wl, }biases {bl, for each layer outputthe value of the prediction function ( /the zero or input layer for to do compute the hidden variable zl, for each node in layer lzl wl al- bl compute the activation function al, for each node in layer lal sl (zl return (xal /the output layer network depth network width |
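A minimal NumPy sketch of the feed-forward pass in the algorithm above, for a network with one hidden layer of ReLU units and an identity output function; the weight matrices and bias vectors here are random placeholders rather than trained values.

import numpy as np

def feed_forward(x, weights, biases, activations):
    # propagate x through the layers: a_0 = x, z_l = W_l a_{l-1} + b_l, a_l = S_l(z_l)
    a = x
    for W, b, S in zip(weights, biases, activations):
        z = W @ a + b
        a = S(z)
    return a

relu = lambda z: np.maximum(z, 0.0)
identity = lambda z: z

rng = np.random.default_rng(0)
# p_0 = 3 inputs, p_1 = 4 hidden nodes, p_2 = 2 outputs (placeholder parameters)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

x = np.array([0.5, -1.0, 2.0])
print(feed_forward(x, [W1, W2], [b1, b2], [relu, identity]))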
15,096 | example (nonlinear multi-output regressiongiven the input and an activation function rthe output ( :[ ( ) ( )]of nonlinear multioutput regression model can be computed via neural network withz where xp , ( , ) (xw where xp which is neural network with one hidden layer and output function (zz in the special case where and we collect all parameters into the vector th[ + the neural network can be interpreted as generalized linear model with [ xh([ xthfor some activation function example (multi-logit classificationsuppose thatfor classification probleman input has to be classified into one of classeslabeled we can perform the classification via neural network with one hidden layerwith nodes in particularwe have ( )softmax where is the softmax functionexp(zsoftmax exp(zk for the outputwe take ( [ ( )gc ( )] which can then be used as pre-classifier of the actual classifier of into one of the categories is then argmax gk+ (xk { , - this is equivalent to the multi-logit classifier in section notehoweverthat there we used slightly different notationwith instead of and we have reference classsee exercise in practical implementationsthe softmax function can cause numerical overand under-flow errors when either one of the exp(zk happens to be extremely large or exp(zk happens to be very small in such cases we can exploit the invariance property (exercise )softmax(zsoftmax( for any constant using this propertywe can compute softmax(zwith greater numerical stability via softmax( maxk {zk when neural networks are used for classification into classes and the number of output nodes is then the gi (xmay be viewed as nonlinear discriminant functions |
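The numerical trick mentioned in the multi-logit example above, namely subtracting max_k z_k before exponentiating, takes one line in code; a small sketch:

import numpy as np

def softmax(z):
    # numerically stable softmax, using softmax(z) = softmax(z + c 1) with c = -max_k z_k
    e = np.exp(z - np.max(z))
    return e / e.sum()

z = np.array([1000.0, 1001.0, 1002.0])   # naive exp(z) would overflow here
print(softmax(z))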
15,097 | example (density estimationestimating the density of some random feature is the prototypical unsupervised learning taskwhich we tackled in section using gaussian mixture models we can view gaussian mixture model with components and common scale parameter as neural network with two hidden layerssimilar to the one on figure in particularif the activation function in the first hidden layer is of the form ( with ( :exp(- /( )) ps then the density value (xis computed viaz (xa> ( ) ( )where is column vector of onesw is matrix of zerosand is the softmax function we identify the column vector with the location parameters[ ]of the gaussian mixture and with the weights of the mixture note the unusual activation function of the output layer -it requires the value of from the first hidden layer and from the second hidden layer there are number of key design characteristics of feed-forward network firstwe need to choose the activation function(ssecondwe need to choose the loss function for the training of the network as we shall explain in the next sectionthe most common choices are the relu activation function and the cross-entropy loss cruciallywe need to carefully construct the network architecture -the number of connections among the nodes in different layers and the overall number of layers of the network for exampleif the connections from one layer to the next are pruned (called sparse connectivityand the links share the same weight values {wl, (called parameter sharingfor all {(ij| }then the weight matrices will be sparse and toeplitz intuitivelythe parameter sharing and sparse connectivity can speed up the training of the networkbecause there are fewer parameters to learnand the toeplitz structure permits quick computation of the matrix-vector products in algorithm an important example of such network is the convolution neural network (cnn)in which some or all of the network layers encode the linear operation of convolutionnetwork architecture convolution neural network wl al- wl al- where [ ] : xk yi- + as discussed in example convolution matrix is special type of sparse toeplitz matrixand its action on vector of learning parameters can be evaluated quickly via the fast fourier transform cnns are particularly suited to image processing problemsbecause their convolution layers closely mimic the neurological properties of the visual cortex in particularthe cortex partitions the visual field into many small regions and assigns group of neurons to every such region moreoversome of these groups of neurons respond only to the presence of particular features (for exampleedgesthis neurological property is naturally modeled via convolution layers in the neural network specificallysuppose that the input image is given by an matrix of pixels nowdefine matrix (sometimes called kernelwhere is generally taken to be or thenthe convolution layer output can be calculated using the discrete convolution |
15,098 | of all possible input matrix regions and the kernel matrix(see example in particularby noting that there are ( ( possible regions in the original imagewe conclude that the convolution layer output size is ( ( in practicewe frequently define several kernel matricesgiving an output layer of size ( ( (the number of kernelsfigure shows input image and kernel with output matrix an example of using cnn for image classification is given in section figure an example input image and kernel the kernel is applied to every region of the original image deep learning back-propagation the training of neural networks is major challenge that requires both ingenuity and much experimentation the algorithms for training neural networks with great depth are collectively referred to as deep learning methods one of the simplest and most effective methods for training is via steepest descent and its variations steepest descent requires computation of the gradient with respect to all bias vectors and weight matrices given the potentially large number of parameters (weight and bias termsin neural networkwe need to find an efficient method to calculate this gradient to illustrate the nature of the gradient computationslet th {wl bl be column vecpl tor of length dim(thl= (pl- pl pl that collects all the weight parameters (numberpl pl ing = pl- pl and bias parameters (numbering = pl of multiple-layer network with training lossn loss(yi (xi th)` ( (th): = writing ci (th:loss(yi (xi th)for short (using for cost)we have ` ( (th)ci (th) = ( so that obtaining the gradient of ` requires computation of ci /th for every for activation functions of the form ( )define dl as the diagonal matrix with the vector of derivatives ( :[ (zl, ) (zl,pl )]down its main diagonalthat isdl :diag( (zl, ) (zl,pl )) |
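For the activation functions shown earlier, the diagonal entries of D_l have simple closed forms, so that D_l can be computed directly from z_l during the forward pass; a small sketch (the ReLU derivative at zero is set to 0 by convention here):

import numpy as np

def relu_deriv(z):
    return np.where(z > 0, 1.0, 0.0)       # derivative of max(z, 0)

def logistic_deriv(z):
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1.0 - s)                   # sigma'(z) = sigma(z) (1 - sigma(z))

z_l = np.array([-1.5, 0.2, 3.0])
D_l = np.diag(logistic_deriv(z_l))         # D_l = diag(S'(z_l))
print(D_l)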
15,099 | the following theorem provides us with the formulas needed to compute the gradient of typical ci (ththeorem gradient of training loss for given (inputoutputpair (xy)let ( thbe the output of algorithm and let (thloss(yg( th)be an almost-everywhere differentiable loss funcl tion suppose {zl al } = are the vectors obtained during the feed-forward propagation ( xal ( th)thenwe have for lc dl > - wl dl bl and where dl : /zl is computed recursively for dl- dl- > dl with dl sl zl ( proofthe scalar value is obtained from the transitions ( )followed by the mapping ( th loss(yg( th)using the chain rule (see appendix )we have dl sl (xc zl zl (xzl recall that the vector/vector derivative of linear mapping wz is given by wsee ( it follows thatsince zl wl al- bl and al (zl )the chain rule gives zl al- zl dl- > zl- zl- al- hencethe recursive formula ( )dl- zl dl- > dl zl- zl- zl using the {dl }we can now compute the derivatives with respect to the weight matrices and the biases in particularapplying the "scalar/matrixdifferentiation rule ( to zl wl al- bl givesc zl dl > - wl zl wl and zl dl bl bl zl from the theorem we can see that for each pair (xyin the training setwe can compute the gradient /th in sequential mannerby computing dl this procedure is called back-propagation since back-propagation mostly involves simple matrix multiplicationit backpropagation |