Let's try to extract the first three principal components from our breast cancer feature set using SVD. We first center our feature matrix and then use SVD and subsetting to extract the first three PCs using the following code.

In [ ]: # center the feature set
   ...: bc_XC = bc_X - bc_X.mean(axis=0)
   ...:
   ...: # decompose using SVD
   ...: U, S, VT = np.linalg.svd(bc_XC)
   ...:
   ...: # get principal components
   ...: PC = VT.T
   ...:
   ...: # get first three principal components
   ...: PC = PC[:, 0:3]
   ...: PC.shape
Out[ ]: (30, 3)

We can now get the reduced feature set of three features by using the dot product operation we discussed earlier. The following snippet gives us the final reduced feature set that can be used for modeling; the output is a 569 x 3 array containing the three derived features for each observation.

In [ ]: # reduce feature set dimensionality
   ...: np.round(bc_XC.dot(PC), 2)

Thus you can see how powerful SVD and PCA can be in helping us reduce dimensionality by extracting the necessary features. Of course, in machine learning systems and pipelines you can use utilities from scikit-learn instead of writing unnecessary code and equations. The following code enables us to perform PCA on our breast cancer feature set by leveraging scikit-learn's APIs.

In [ ]: from sklearn.decomposition import PCA
   ...: pca = PCA(n_components=3)
   ...: pca.fit(bc_X)
Out[ ]: PCA(copy=True, iterated_power='auto', n_components=3, random_state=None,
          svd_solver='auto', tol=0.0, whiten=False)

To understand how much of the variance is explained by each of these principal components, you can use the following code.

In [ ]: pca.explained_variance_ratio_
Running this code shows, as expected, that the maximum variance is explained by the first principal component. To obtain the reduced feature set, we can use the following snippet.

In [ ]: bc_pca = pca.transform(bc_X)
   ...: np.round(bc_pca, 2)

If you compare the values of this reduced feature set with the values obtained in our mathematical implementation based code, you will see they are exactly the same, except for sign inversions in some cases. The reason for the sign inversion in some of the values is that the direction of these principal components is unstable; the sign indicates direction. Hence, even if the principal components point in opposite directions, they still lie on the same plane and hence this shouldn't have an effect when modeling with this data. Let's now quickly build a logistic regression model as before and use model accuracy and five-fold cross validation to evaluate the model quality using these three features.

In [ ]: np.average(cross_val_score(lr, bc_pca, bc_y, scoring='accuracy', cv=5))

Thus, even though we used only three features derived from the principal components instead of the original 30 features, we still obtain a model accuracy that is quite decent.

Summary

This chapter was packed with content, with a lot of hands-on examples based on real-world datasets. The main intent of this chapter is to get you familiar with the essential concepts, tools, techniques, and strategies used for feature extraction, engineering, scaling, and selection. One of the toughest tasks that data scientists face day in and day out is data processing and feature engineering. Hence it is of paramount importance that you understand the various aspects involved in deriving features from raw data. This chapter is intended to be used both as a starting ground and as a reference guide for understanding which techniques and strategies should be applied when trying to engineer features on your own datasets. We covered the basic concepts of feature engineering, scaling, and selection, and also the importance behind each of these processes. Feature engineering techniques were covered extensively for diverse data types, including numerical, categorical, text,
temporal, and image data. Multiple feature scaling techniques were also covered, which are useful for toning down the scale and magnitude of features before modeling. Finally, we covered feature selection techniques in detail, with emphasis on the three different strategies of feature selection, namely filter, wrapper, and embedded methods. Special sections on dimensionality reduction and automated feature extraction using deep learning were also included, since they have gained a lot of prominence in both research and industry. We want to conclude this chapter by leaving you with the following quote by Peter Norvig, renowned computer scientist and director at Google, which should reinforce the importance of feature engineering.

"More data beats clever algorithms, but better data beats more data."
--Peter Norvig
Building, Tuning, and Deploying Models

A popular saying in the machine learning community is that the bulk of machine learning is data processing, and going by the structure of this book, the saying seems quite apt. In the preceding chapters you saw how you can extract, process, and transform data to convert it into a form suitable for learning with machine learning algorithms. This chapter deals with the most important part of using that processed data: learning a model that you can then use to solve real-world problems. You also learned about the CRISP-DM methodology for developing data solutions and projects; the step involving building and tuning these models is the final step in the iterative cycle of machine learning. If you followed all the steps prescribed in the earlier chapters, by now you should have a cleaned and processed dataset\feature set. This data will mostly be numeric, in the form of arrays or dataframes (feature sets). Most machine learning algorithms require the data to be in numeric format because, at the heart of any machine learning algorithm, we have some mathematical equations and an optimization problem to either minimize error\loss or maximize profit; hence machine learning algorithms always work on numeric data. Check out the earlier chapters for feature engineering techniques to convert structured as well as unstructured data into ready-to-use numeric formats.

We start this chapter by learning about the different types of algorithms you can use. Then you will learn how to choose a relevant algorithm for the data that you have. You will then be introduced to the concept of hyperparameters and learn how to tune the hyperparameters of any algorithm. The chapter also covers a novel approach to interpreting models using open source frameworks. Besides this, you will learn about persisting and deploying the developed models so you can start using them for your own needs and benefits. Based on the preceding topics, the chapter is divided into the following five major sections: building models, model evaluation techniques, model tuning, model interpretation, and deploying models in action. You should be fully acquainted with the material of the earlier chapters, since it will help in a better understanding of the various aspects of this chapter. All the code snippets and examples used in this chapter are available in the GitHub repository for this book, in the file named model_build_tune_deploy.py, so you can try the examples as you read. You can also refer to the Jupyter notebook named Building, Tuning and Deploying Models.ipynb for a more interactive experience.
Building Models

Before we get on with the process of building models, we should try to understand what a model represents. In the simplest of terms, a model can be described as a relationship between output or response variables and their corresponding input or independent variables in a dataset. Sometimes this relationship can just be among input variables (in the case of datasets with no defined output or dependent variables). This relationship among variables can be expressed in terms of mathematical equations, functions, and rules, which link the output of the model to its inputs. Consider the case of linear regression analysis: the output in that case is a set of parameters, also known as weights or coefficients (we explore this later in the chapter), and those parameters define the relationship between the input and output variables. The idea is to build a model using a learning process, such that you can learn the necessary parameters (coefficients) that help translate the input (independent) variables into the corresponding output (dependent) variable with the least error for a dataset (leveraging validation metrics like mean squared error). The idea is not to predict the correct output value for every input data point (that leads to model over-fitting), but to generalize well over lots of data points such that the error is minimal and remains so when you use the model on new data points. This is done by learning the right values of the coefficients\parameters during the model building process. So when we say we are learning a linear regression model, these are the important considerations implicit in that statement. See the figure below.

Figure: High-level representation of model building

When we specify linear regression as the candidate model, we define the nature of the relationship between our dependent and independent variables. The candidate model then becomes all the possible combinations of parameters for our model (more on this later). The learning algorithm is the way to determine the most optimal values of those parameters using some optimization process, validating the performance with some metric (such as mean squared error) to reduce the overall error. The final model is nothing but the most optimal values of our parameters as selected by our learning algorithm. So in the case of simple linear regression, the final model is nothing but a tuple containing the values of our two parameters, the slope and the intercept. A point to remember here is that the term parameter is analogous to coefficients or weights in a model. There are some other types of parameters called hyperparameters, which represent higher-level meta-parameters of the model and do not depend on the underlying data. They usually need to be set before the
building or learning process starts. Usually these hyperparameters are tuned to their optimal values as part of the model-tuning phase (a part of the learning phase itself). Another important point to remember is that the output model is generally dependent on the learning algorithm we choose for our data.

Model Types

Models can be differentiated according to a variety of categories and nomenclatures. A lot of this is based on the learning algorithm or method itself, which is used to build the model. Examples include: is the model linear or nonlinear, what is the output of the model, is it a parametric or non-parametric model, is it supervised, unsupervised, or semi-supervised, and is it an ensemble model or even a deep learning based model. Refer to the section "Machine Learning Methods" earlier in the book to refresh your memory of the possible machine learning methods used for building models on datasets. In this section, we focus on some of the most popular models from the supervised and unsupervised learning methods.

Classification Models

Classification is one of the most readily recognizable machine learning tasks, and it is covered in detail in an earlier chapter. It is a subset of a broader class of machine learning problems known as supervised learning. Supervised learning is the set of machine learning problems\tasks in which we have a labeled dataset with input attributes and corresponding output labels or classes (discrete). These inputs and corresponding outputs are then used to learn a generalized system, which can be used to predict results (output class labels) for previously unseen data points. Classification is one major part of the overall supervised learning domain. The output of a classification model is normally a label or category to which the input data point may belong. The task of solving a classification (or, in general, any supervised) problem involves a training set of data in which the data points are labeled with their correct classes/categories. We then use supervised machine learning algorithms specific to classification problems to generalize something similar to a classification function for our problem. The input to this classification function is exactly similar to the data that we used to train our model; this input is typically the data attributes or features that are generated in the feature engineering step. Typical classification models include the following major types of methods, although the list is not exhaustive. A quick sketch instantiating these families follows the list.

- Linear models like logistic regression, naive Bayes, and support vector machines
- Non-parametric models like k-nearest neighbors
- Tree based methods like decision trees
- Ensemble methods like random forests (bagging) and gradient boosted machines (boosting)
- Neural networks (MLPs)
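To make this list concrete, the following snippet is a small sketch (not code from the book's repository) showing how representative estimators from each of these families can be instantiated and fit with scikit-learn; the synthetic dataset and the default hyperparameters are purely illustrative.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier

# illustrative synthetic binary classification dataset
X_demo, y_demo = make_classification(n_samples=500, n_features=10, random_state=42)

models = {
    'Logistic Regression': LogisticRegression(),
    'Naive Bayes': GaussianNB(),
    'Support Vector Machine': SVC(),
    'K-Nearest Neighbors': KNeighborsClassifier(),
    'Decision Tree': DecisionTreeClassifier(),
    'Random Forest (bagging)': RandomForestClassifier(),
    'Gradient Boosted Machine (boosting)': GradientBoostingClassifier(),
    'Neural Network (MLP)': MLPClassifier(max_iter=1000)
}

# fit each model and print its accuracy on the (illustrative) training data
for name, model in models.items():
    model.fit(X_demo, y_demo)
    print(name, '->', round(model.score(X_demo, y_demo), 3))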
Classification models can be further broken down based on the type and number of output variables they produce. This nomenclature is extremely important for understanding the type of classification problem you are dealing with, by looking at the dataset attributes and the objective to be solved.

- Binary classification: When we have a total of two categories to differentiate between in the output response variable, the problem is termed a binary classification problem; hence you would need an appropriate model that performs binary classification (known as a binary classification model). A popular binary classification problem is the "email classification problem", in which candidate e-mails need to be classified and labeled into either of two categories: "spam" or "non-spam" (also known as "ham").
- Multi-class classification: This is an extension of the binary classification problem. In this case we have more than two categories or classes into which our data can be classified. An example is predicting handwritten digits, where the response variable can have any value ranging from 0 to 9; this becomes a 10-class classification problem. Multi-class classification is a tough problem to solve, and the general scheme for solving it mostly involves some modification of the binary classification problem.
- Multi-label classification: These classification problems typically involve data where the output variable is not always a single value but a vector having multiple values or labels. A simple example is predicting categories of news articles, where each news article can have multiple labels, like science, politics, religion, and so on.

Classification models output either the actual class labels or probabilities for each possible class label, which give a confidence level for each class in the prediction. The following are the major output formats from classification models.

- Category classification output: In some classification models, the output for any unknown data point is the predicted category or class label. These models usually calculate the probabilities of all the categories, but report only the one class label having the maximum probability or confidence.
- Category probability classification output: In these classification models, the output is the probability value of each possible class label. These models are important when we want to use the output produced by our classification model for further detailed analysis or to make complex decisions. A very simple example is the typical marketing candidate selection problem: by getting the probability output of a potential conversion, we can narrow down our marketing expenses.

Regression Models

In classification models, we saw that the output variable predicted by the model was a discrete value; even when we got the output as a probability value, those probability values were tied to the discrete class label values of the possible categories. Regression models are another subset of the supervised learning family of models. In these models, the input data is generally labeled with a real valued output variable (continuous instead of discrete). Regression analysis is an important part of statistical learning and it has a very similar utility in the field of machine learning. In statistical learning, regression analysis is used to find relationships between the dependent and the independent variables (of which there can be one or more than one). In the case of regression models, when we feed new data points to our learned\trained regression model, the output of the model is a continuous value.
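To make the contrast with classification concrete, here is a minimal sketch (with made-up data, not part of the book's examples) showing that a fitted regression model returns continuous values rather than discrete class labels.

import numpy as np
from sklearn.linear_model import LinearRegression

# illustrative data: y is roughly 3*x + 5 plus some noise
rng = np.random.RandomState(42)
X_reg = rng.rand(100, 1) * 10
y_reg = 3 * X_reg.ravel() + 5 + rng.randn(100)

lr_demo = LinearRegression()
lr_demo.fit(X_reg, y_reg)
print('learned coefficient:', lr_demo.coef_, 'intercept:', lr_demo.intercept_)
# the predictions are real numbers, not class labels
print('continuous predictions:', lr_demo.predict([[2.0], [4.5], [7.3]]))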
Based on the number of variables, the probability distribution of the output variable, and the form of the relationship (linear versus nonlinear), we have different types of regression models. The following are some of the major categories.

- Simple linear regression: This is the simplest of all the regression models, but it is very effective and widely used for practical purposes. In this case, we have only a single independent variable and a single dependent variable. The dependent variable is a real value and assumed to follow a normal distribution. While developing a linear regression model, we assume a linear relationship between the independent and the dependent variable.
- Multiple linear regression: This is the extension of the simple linear regression model to include more than one independent variable. The other assumptions remain the same, i.e., the dependent variable is still a real value and follows a normal distribution.
- Nonlinear regression: A regression model in which the dependent variable depends on a nonlinear transformation of the parameters\coefficients is termed a nonlinear regression model. This is slightly different from models in which we use a nonlinear transformation of the independent variables. Let's consider an example to make this point clear. Consider the model y = β0 + β1x + β2x². In this model we have used the square of the independent variable, but the parameters of the model (the betas, or coefficients) still enter linearly; hence this model is still a linear regression model, or to be more specific, a polynomial regression model (a short sketch of this idea appears at the end of this section). A model in which a coefficient itself appears inside a nonlinear function, for example inside a logarithm or an exponent, is a model that can be termed a nonlinear regression model. Such models are quite hard to learn and hence not as widely used in practice; in most cases, a linear model with nonlinear transformations applied to the input variables suffices.

Regression models are a very important part of both statistics and machine learning, and we encourage you to refresh your memory by checking out the "Regression" section earlier in the book, as well as to read some standard literature on regression models to dive deeper into the detailed concepts as necessary. We will be looking at regression in a future chapter dealing with a real-world case study.

Clustering Models

We briefly talked about clustering earlier in the book, in case you want to refresh your memory. The simplest definition of clustering is the process of grouping similar data points together when the data points do not have any pre-labeled classes or categories. The output of a typical clustering process is a set of segregated groups of data points, such that the data points in the same group are similar to each other but dissimilar from the members (data points) of other groups. The major difference from supervised learning is that, in clustering, we don't have a pre-labeled set of data that we can use to train and build our model; the input for unsupervised learning problems is generally the whole dataset itself. Another important hallmark of unsupervised learning problems is that they are quite hard to evaluate, as we will see in the later part of this chapter.
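Before moving on to the different clustering algorithms, here is a minimal sketch (with illustrative data of our own) of the polynomial regression idea described above: the input variable is transformed nonlinearly, yet the model stays linear in its coefficients, so an ordinary linear regression can still fit it.

import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# illustrative data following y = 2 + 0.5*x + 1.5*x^2 plus noise
rng = np.random.RandomState(0)
x = rng.uniform(-3, 3, size=(200, 1))
y_poly = 2 + 0.5 * x.ravel() + 1.5 * x.ravel() ** 2 + rng.randn(200) * 0.5

# degree-2 polynomial features feed an ordinary linear regression,
# so the model remains linear in its parameters (the betas)
poly_model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
poly_model.fit(x, y_poly)
print(poly_model.named_steps['linearregression'].coef_)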
Clustering models can be of different types on the basis of the clustering methodologies and principles used. We will briefly introduce the different types of clustering algorithms, which are as follows.

- Partition based clustering: A partition based clustering method is the most natural way to imagine the process of clustering. Such a method defines a notion of similarity; this can be any measure that can be derived from the attributes of the data points by applying mathematical functions on those attributes (features). Then, on the basis of this similarity measure, we group data points that are similar to each other into a single group and separate the ones that are different. A partition based clustering model is usually developed using a recursive technique, i.e., we start with some arbitrary partition of the data and, based on the similarity measure, keep reassigning data points until we reach a stable stopping criterion. Examples include techniques like k-means, k-medoids, CLARANS, and so on.
- Hierarchical clustering: A hierarchical clustering model is different from the partition based clustering model in the way it is developed and the way it works. In the hierarchical clustering paradigm, we either start with all the data points in one group (divisive clustering) or all the data points in different groups (agglomerative clustering). Based on the starting point, we either keep dividing the big group into smaller groups or clusters based on some accepted similarity criterion, or we keep merging different groups or clusters into bigger ones based on the same criterion. This process is normally stopped when a decided stopping condition is achieved. The similarity criterion could be the inter-data-point distance within a cluster as compared to data points in other clusters. Examples include Ward's minimum variance criterion based agglomerative hierarchical clustering.
- Density based clustering: Both the clustering models mentioned previously are quite dependent on the notion of distance. This leads to these algorithms primarily finding spherical clusters in the data, which can become a problem when we have arbitrarily shaped clusters. This limitation can be addressed by doing away with distance-metric-based clustering: we can define a notion of "density" of data and use that to develop our clusters. The cluster development methodology then changes from finding points in the vicinity of some points to finding areas where data points are dense. This approach is not as straightforward to interpret as the distance metric approach, but it leads to clusters that need not necessarily be spherical. This is a very desirable trait, as it is unlikely that all the clusters of interest will be spherical in shape. Examples include DBSCAN and OPTICS.

Learning a Model

We have been talking about building models, learning parameters, and so on since the very start of this chapter. In this section, we explain what we actually mean by the term building a model from the perspective of machine learning. In the following section, we briefly discuss the mathematical aspects of learning a model by taking a specific model as an example to make things clearer. We try to go light on the math in this section, so that you don't get overwhelmed with excess information. However, interested readers are recommended to check out any standard book on the theoretical and conceptual details of machine learning models and their implementations (we recommend An Introduction to Statistical Learning by Tibshirani et al.).
Three Stages of Machine Learning

Machine learning can often be a complex field. We have different types of problems and tasks, and different algorithms to solve them. We also have complex math, stats, and logic that form the very backbone of this diverse field. If you remember, you learned in the first chapter that machine learning is a combination of statistics, mathematics, optimization, linear algebra, and a bunch of other topics. But despair not, you do not need to start learning all of them right away. These diverse sets of machine learning practices can mostly be unified by a simple three-stage paradigm. The three stages are:

- Representation
- Evaluation
- Optimization

Let's now discuss each of these steps separately to understand how almost all machine learning algorithms or methods work.

Representation

The first stage in any machine learning problem is the representation of the problem in a formal language. This is where we usually define the machine learning task to be performed, based on the data and the business objective or problem to be solved. Usually this stage of the problem is masked as another stage, which is the selection of the ML algorithm or algorithms (you might have multiple possible model representations at this phase). When we select a target algorithm, we are implicitly deciding on the representation that we want to use for our problem. This stage is akin to deciding on the set of hypothesis models, any of which can be the solution to our problem. For example, suppose we decide, looking at our dataset, that the machine learning task to be performed is regression, and we then select linear regression as our regression model. We have then decided on a linear-combination-based relationship between the dependent and the independent variables. Another implicit selection made in this stage is deciding on the parameters/weights/coefficients of the model that we need to learn.

Evaluation

Once we decide on the representation of our problem and a possible set of models, we need some judging criterion or criteria that will help us choose one model over the others, or the best model from a set of candidate models. The idea is to define a metric for evaluation, or a scoring function\loss function, that enables this. This evaluation metric is generally provided in terms of an objective or evaluation function (it can also be called a loss function). What these objective functions normally do is provide a numerical performance value that helps us decide on the effectiveness of any candidate model. The objective function depends on the type of problem we are solving, the representation we selected, and other factors. A simple example: the lower the loss or error rate, the better the model is performing.

Optimization

The final stage in the learning process is optimization. Optimization in this case can be simply described as searching through all the hypothesis model space representations to find the one that gives us the most optimal value of our evaluation function. While this description hides the vast majority of complexities involved in the process, it is a good way to understand the core principles. The optimization method that we normally use is dependent on the choice of representation and the evaluation function or functions. Fortunately, we already have a huge set of robust optimizers we can use once we have decided
on the representation and the evaluation aspects. Optimization methods can be methods like gradient descent, and even meta-heuristic methods like genetic algorithms.

The Three Stages of Logistic Regression

The best way to understand the nuances of a complex process is to explain it using an example. In this section, we trace the three stages of the machine learning process using the logistic regression model. Logistic regression is an extension of linear regression to solve classification problems. We will see how a simple logistic regression problem is solved using gradient descent based optimization, which is one of the most popular optimization methods.

Representation

The representation of logistic regression is obtained by applying the logit (sigmoid) function to the representation of a linear regression model. The linear regression representation is given by this hypothesis function:

hθ(x) = θᵀx

Here, θ represents the parameters of the model and x is the input vector. The logit function is given by:

S(t) = 1 / (1 + e⁻ᵗ)

Applying the logit function to the representation of linear regression gives us the representation of logistic regression:

hθ(x) = 1 / (1 + e^(−θᵀx))

This is our representation for the logistic regression model. As the value of the logit function ranges between 0 and 1, we can decide between the two categories by supplying an input vector x and a set of parameters θ and calculating the value of hθ(x); if it is less than 0.5 then typically the label is 0, otherwise the label is 1 (binary classification problems leverage this).

Evaluation

The next step in the process is specifying an evaluation or cost function. The cost function in our case depends on the actual class of the data point. Suppose the output of the logit function is 1 for a data point whose actual class is 1; then the error or loss for that case is zero. But if that data point actually belongs to class 0, the error is maximal. Using this intuition, we can define the cost for one data point as follows:

cost(hθ(x), y) = log(hθ(x))        if y = 1
cost(hθ(x), y) = log(1 − hθ(x))    if y = 0

Leveraging the previous logic, the cost function over the whole dataset of m points is given by:

J(θ) = (1/m) Σᵢ₌₁ᵐ [ y⁽ⁱ⁾ log(hθ(x⁽ⁱ⁾)) + (1 − y⁽ⁱ⁾) log(1 − hθ(x⁽ⁱ⁾)) ]
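As a quick illustration (our own sketch, not code from the book's repository), the representation and the evaluation function just described can be written in a few lines of NumPy; hypothesis is the sigmoid of the linear combination and log_likelihood is the average objective we will want to maximize.

import numpy as np

def hypothesis(theta, X):
    # logistic regression representation: sigmoid of the linear combination theta^T x
    return 1.0 / (1.0 + np.exp(-X.dot(theta)))

def log_likelihood(theta, X, y):
    # average log-likelihood over the dataset (the evaluation function to maximize)
    h = hypothesis(theta, X)
    return np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))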
Optimization

The cost function we described earlier is a function of θ, and hence we need to maximize it and find the set of θ that gives us its maximum value (normally we would minimize a cost function, but here we have taken the log-likelihood and hence we maximize it). The value of θ that we obtain represents the model (parameters) that we wanted to learn. The basic idea behind maximizing or minimizing a function is that you differentiate the function and find the point where the gradient is zero; that is the point where the function takes either its minimum or maximum value. But we have to keep in mind that the function we have is a nonlinear function of the parameters θ, and hence we won't be able to solve directly for the optimal values of θ. This is where we introduce the gradient descent method. In the simplest terms, gradient descent is the process in which we calculate the gradient of the function we want to optimize at each point and then keep moving in the direction indicated by the gradient. Here, by moving, we mean updating the values of θ according to the gradient that we calculate. We can calculate the gradient of the cost function with respect to each component of the parameter vector as follows:

∂J(θ)/∂θⱼ = (1/m) Σᵢ₌₁ᵐ (y⁽ⁱ⁾ − hθ(x⁽ⁱ⁾)) xⱼ⁽ⁱ⁾

By repeating this calculation for each component of the parameter vector, we obtain the gradient of the function with respect to the whole parameter vector. Once we have the gradient, the next step is to update the parameter vector values using this equation:

θⱼ := θⱼ + α ∂J(θ)/∂θⱼ

Here, α represents the small step we want to take in the direction of the gradient. It is a hyperparameter of the optimization process (you can think of it as a learning rate or learning step size), and its value can determine whether we reach a global optimum or a local one. If we keep reiterating the process, we reach a point where our cost function does not change much irrespective of any small update that we make to the values of θ. Using this method, we can obtain the optimal set of parameter values. Keep in mind that this is a simplified description of gradient descent, meant to make things easy to understand and interpret; usually there are many other considerations involved in solving an optimization problem and a vast set of challenges. The main intent of this section is to make you aware of how optimization is an essential part of any machine learning problem.

Model Building Examples

The future chapters of this book are dedicated to building and tuning models on real-world datasets, so we will be doing a lot of model building, tuning, and evaluation in general. In this section, we want to depict some examples of each category of models that we discussed in the previous section. This will serve as a ready reckoner and starting guide for our model building exploits in the future.
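Before diving into the examples, here is a small self-contained sketch (our own illustration with synthetic data, not code from the book's repository) that ties the three stages together for logistic regression: the sigmoid representation, the log-likelihood evaluation, and a simple gradient ascent loop that nudges the parameters in the direction of the gradient. The data, learning rate, and iteration count are arbitrary choices.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# illustrative, roughly linearly separable data
rng = np.random.RandomState(42)
X_toy = rng.randn(200, 2)
y_toy = (X_toy[:, 0] + X_toy[:, 1] > 0).astype(float)
X_toy = np.hstack([np.ones((200, 1)), X_toy])   # add an intercept column

theta = np.zeros(X_toy.shape[1])                # initial parameters
alpha = 0.1                                     # learning rate (a hyperparameter)

for _ in range(1000):
    h = sigmoid(X_toy.dot(theta))               # representation: h_theta(x)
    gradient = X_toy.T.dot(y_toy - h) / len(y_toy)   # gradient of the average log-likelihood
    theta = theta + alpha * gradient            # move in the direction of the gradient

print('learned parameters:', np.round(theta, 3))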
Classification

In all classification (or supervised learning) problems, the first step after preparing the whole dataset is to segregate the data into a testing and a training set, and optionally a validation set. The idea is to make the model learn by training it on the training dataset, evaluate and tune it on the validation dataset or use techniques like cross validation, and finally check its performance on the test dataset. You will learn in the model evaluation section of this chapter that evaluating a model is a critical part of any machine learning solution. Hence, as a rule of thumb, we must always remember that the actual evaluation of a machine learning algorithm is always on data that it has not previously seen (even cross validation on the training dataset will use part of the training data for model building and the rest for evaluation). Sometimes people use the whole dataset to train a model and then use some subset of it as the test set; this is a common mistake. To accurately analyze a model, it must generalize well and perform well on data that it has never seen before. A good evaluation metric on the training data but bad performance on unseen (validation or test) data means that the algorithm has failed to produce a generalized solution for the problem (more on this later).

For our classification example, we will use a popular multi-class classification problem we talked about earlier: handwritten digit recognition. The data is available as part of the scikit-learn library. The problem here is to predict the actual digit value from a handwritten image of a digit. In its original form, the problem belongs to the domain of image based classification and computer vision. In the dataset, what we have is a feature vector that represents an 8x8 grey scale image of the handwritten digit. Before we proceed to building any model, let's first see how both the data and the image we intend to analyze look. The following code loads the data for the first image and plots it.

In [ ]: from sklearn import datasets
   ...: import matplotlib.pyplot as plt
   ...: %matplotlib inline
   ...:
   ...: digits = datasets.load_digits()
   ...: plt.figure(figsize=(3, 3))
   ...: plt.imshow(digits.images[0], cmap=plt.cm.gray_r)

The image generated by the code is depicted in the figure below. Any guesses as to which number it represents?

Figure: Handwritten digit data representing the digit zero
We can look at the raw pixel matrix, the flattened feature vector representation, and the number (class label) represented by the image using the following code.

In [ ]: # actual image pixel matrix
   ...: digits.images[0]

In [ ]: # flattened vector
   ...: digits.data[0]

In [ ]: # image class label
   ...: digits.target[0]
Out[ ]: 0

We will later see that we can frame this problem in a variety of ways, but for this tutorial we will use a logistic regression model to do the classification. Before we proceed to model building, we split the dataset into separate test and train sets. The size of the test set is generally dependent on the total amount of data available; in our example, we hold out roughly 30% of the overall dataset as the test set. The total number of data points in each dataset is printed for ease of understanding.

In [ ]: X_digits = digits.data
   ...: y_digits = digits.target
   ...:
   ...: num_data_points = len(X_digits)
   ...: X_train = X_digits[:int(.7 * num_data_points)]
   ...: y_train = y_digits[:int(.7 * num_data_points)]
   ...: X_test = X_digits[int(.7 * num_data_points):]
   ...: y_test = y_digits[int(.7 * num_data_points):]
   ...: print(X_train.shape, X_test.shape)

From the preceding output, we can see how many data points ended up in our train and test datasets respectively. The next step in the process is specifying the model that we will use and the hyperparameter values that we want to use. The values of these hyperparameters do not depend on the underlying data; they are usually set prior to model training and are fine-tuned to extract the best model. You will learn about tuning later on in this chapter.
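As a side note, instead of slicing the arrays by hand, the same kind of split can be obtained with scikit-learn's train_test_split utility. The following is a small sketch of that alternative, reusing the X_digits and y_digits arrays defined above; the test_size and random_state values here are arbitrary, and note that this utility shuffles the data by default, unlike the ordered slicing above.

from sklearn.model_selection import train_test_split

# hold out 30% of the digits data as the test set; shuffling is on by default
X_train, X_test, y_train, y_test = train_test_split(X_digits, y_digits,
                                                    test_size=0.3,
                                                    random_state=42)
print(X_train.shape, X_test.shape)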
For the time being, we will use the default values, as depicted when we initialize the model estimator and fit our model on the training set.

In [ ]: from sklearn import linear_model
   ...: logistic = linear_model.LogisticRegression()
   ...: logistic.fit(X_train, y_train)
Out[ ]: LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
          intercept_scaling=1, max_iter=100, multi_class='ovr', n_jobs=1,
          penalty='l2', random_state=None, solver='liblinear', tol=0.0001,
          verbose=0, warm_start=False)

You can see the various hyperparameters and parameters of the model depicted in the preceding output. Let's now test the accuracy of this model on the test dataset.

In [ ]: print('Logistic Regression mean accuracy: %f' % logistic.score(X_test, y_test))

This is all it takes in scikit-learn to fit a model like logistic regression. In the first step, we identified the model that we wanted to use, which in our case was a linear model called logistic regression. Then we called the fit method of that object with our training data and its output labels. The fit method updates the model object with the learned parameters of the model. We then used the score method of the object to determine the accuracy of the fitted model on our test set. So the model we developed, without any intensive tuning, is already quite accurate at predicting handwritten digits. This concludes our very basic example of fitting a classification model. Note that our dataset was in a fully processed and cleaned format; you need to ensure your data is prepared in the same way before you fit any models when solving your own problems.

Clustering

In this section, you will learn how we can fit clustering models on another dataset. In the example we pick, we will use a labeled dataset to help us see the results of the clustering model and compare them with the actual labels. A point to remember here is that, usually, labeled data is not available in the real world, which is why we choose to go for unsupervised methods like clustering. We will cover two different algorithms, one each from partition based clustering and hierarchical clustering. The data that we will use for our clustering example is the very popular Wisconsin Diagnostic Breast Cancer dataset, which we covered in detail earlier in the book in the section "Feature Selection and Dimensionality Reduction"; do check out that section to refresh your memory. This dataset has 30 attributes or features and a corresponding label for each data point (breast mass) depicting whether it has cancer (malignant, label value 0) or no cancer (benign, label value 1). Let's load the data using the following code.

import numpy as np
from sklearn.datasets import load_breast_cancer

# load data
data = load_breast_cancer()
X = data.data
y = data.target
print(X.shape, data.feature_names)

(569, 30) ['mean radius' 'mean texture' 'mean perimeter' ... 'worst fractal dimension']

It is evident that we have a total of 569 observations and 30 attributes or features for each observation.
Partition Based Clustering

We will choose the simplest yet most popular partition based clustering model for our example: the K-means algorithm. This is a centroid based clustering algorithm, which starts with an assumption about the total number of clusters in the data and with random centers assigned to each of the clusters. It then reassigns each data point to the center closest to it, using Euclidean distance as the distance metric. After each reassignment, it recalculates the center of that cluster. The whole process is repeated iteratively and stopped when the reassignment of data points no longer changes the cluster centers. Variants include algorithms like k-medoids. Since we already know from the data labels that we have two possible categories, either 0 or 1, the following code tries to determine these two clusters from the data by leveraging K-means clustering. In the real world, this is not always the case, since we will not know the likely number of clusters; this is one of the most important downsides of K-means clustering.

from sklearn.cluster import KMeans

km = KMeans(n_clusters=2)
km.fit(X)

labels = km.labels_
centers = km.cluster_centers_
print(labels[:10])

Once the fit process is complete, we can get the centers and labels of our two clusters in the dataset by using the preceding attributes. The centers here refer to numerical values along the dimensions of the data (the attributes in the dataset) around which the data is clustered. But can we visualize and compare the clusters with the actual labels? Remember, we are dealing with 30 features, and visualizing the clusters in a 30-dimensional feature space would be impossible to interpret or even perform. Hence, we will leverage PCA to reduce the input dimensions to two principal components and visualize the clusters on top of that. Refer to the previous chapter to learn more about principal component analysis.

from sklearn.decomposition import PCA

pca = PCA(n_components=2)
bc_pca = pca.fit_transform(X)

The following code helps visualize the clusters on the reduced feature space for the actual labels as well as the clustered output labels.

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 6))
fig.suptitle('Visualizing breast cancer clusters')
fig.subplots_adjust(top=0.85, wspace=0.5)
ax1.set_title('Actual Labels')
ax2.set_title('Clustered Labels')

for i in range(len(y)):
    if y[i] == 0:
        ax1.scatter(bc_pca[i, 0], bc_pca[i, 1], c='g', marker='.')
    if y[i] == 1:
        ax1.scatter(bc_pca[i, 0], bc_pca[i, 1], c='r', marker='.')
    if labels[i] == 0:
        ax2.scatter(bc_pca[i, 0], bc_pca[i, 1], c='g', marker='.')
    if labels[i] == 1:
        ax2.scatter(bc_pca[i, 0], bc_pca[i, 1], c='r', marker='.')

ax1.legend(['label 0', 'label 1'])
ax2.legend(['cluster 0', 'cluster 1'])

Figure: Visualizing clusters in the breast cancer dataset

From the resulting figure, you can clearly see that the clustering has worked quite well; it shows a distinct separation between the clusters with labels 0 and 1 and is quite similar to the actual labels. However, we do have some overlap where some instances are mislabeled, which is evident in the plot on the right. Remember, in an actual real-world scenario you will not have the actual labels to compare with, and the main idea is to find structures or patterns in your data in the form of these clusters. Another very important point to remember is that the cluster label values have no significance; the labels 0 and 1 are just values to distinguish the cluster data points from each other. If you run this process again, you could easily obtain the same plot with the labels reversed. Hence, even when dealing with labeled data, after running clustering do not compare the clustered label values with the actual labels and try to measure accuracy. Also note that if we had asked for more than two clusters, the algorithm would have readily supplied more clusters, but they would have been hard to interpret and many of them would not make sense. Hence, one of the caveats of using the K-means algorithm is to use it in cases where we have some idea about the total number of clusters that may exist in the data.
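One common way to get that rough idea about the number of clusters is the so-called elbow method: fit K-means for a range of K values and look for the point where the within-cluster sum of squared distances (inertia) stops dropping sharply. The snippet below is a minimal sketch of this on the same feature matrix X; the range of K values is an arbitrary choice.

import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

inertias = []
k_values = range(1, 11)
for k in k_values:
    km_k = KMeans(n_clusters=k, random_state=42).fit(X)
    inertias.append(km_k.inertia_)   # within-cluster sum of squared distances

plt.plot(list(k_values), inertias, marker='o')
plt.xlabel('number of clusters (K)')
plt.ylabel('inertia')
plt.title('Elbow method for choosing K')
plt.show()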
Hierarchical Clustering

We can use the same data to perform hierarchical clustering and see whether the results change much as compared to K-means clustering and the actual labels. In scikit-learn we have a multitude of interfaces, like the AgglomerativeClustering class, to perform hierarchical clustering. Based on what we discussed earlier, agglomerative clustering is hierarchical clustering using a bottom-up approach: each observation starts in its own cluster and clusters are successively merged together. The merging criterion is chosen from a candidate set of linkages; the selection of linkage governs the merge strategy. Some examples of linkage criteria are Ward, complete linkage, average linkage, and so on. We will leverage low-level functions from SciPy instead, because the AgglomerativeClustering interface requires us to specify the number of clusters, which we want to avoid. Since we already have the breast cancer feature set in variable X, the following code computes the linkage matrix using Ward's minimum variance criterion.

from scipy.cluster.hierarchy import dendrogram, linkage
import numpy as np

np.set_printoptions(suppress=True)
Z = linkage(X, 'ward')
print(Z)

On seeing the preceding output, you might wonder what this linkage matrix indicates. You can think of the linkage matrix as a complete historical map, keeping track of which data points were merged into which cluster during each iteration. If you have n data points, the linkage matrix Z will have a shape of (n - 1) x 4, where Z[i] tells us which clusters were merged at the i-th iteration. Each row has four elements: the first two elements are either data point identifiers or cluster labels (in the later parts of the matrix, once multiple data points have been merged), the third element is the cluster distance between the first two elements (either data points or clusters), and the last element is the total number of elements\data points in the cluster once the merge is complete. We recommend you refer to https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.linkage.html, which explains this in detail. The best way to visualize these distance-based merges is to use a dendrogram, as shown in the figure below.

plt.figure(figsize=(8, 3))
plt.title('Hierarchical Clustering Dendrogram')
plt.xlabel('Data point')
plt.ylabel('Distance')
dendrogram(Z)
plt.axhline(y=..., c='k', ls='--', lw=0.5)   # cut-off line at the chosen distance threshold
plt.show()
Figure: Visualizing the hierarchical clustering dendrogram

In the dendrogram depicted in the figure, we can see how each data point starts as an individual cluster and slowly gets merged with other data points to form clusters. On a high level, from the colors and the dendrogram, you can see that the model has correctly identified two major clusters if you consider a suitably large distance cut-off. Leveraging this distance, we can get the cluster labels using the following code.

from scipy.cluster.hierarchy import fcluster

max_dist = ...   # distance threshold chosen by inspecting the dendrogram
hc_labels = fcluster(Z, max_dist, criterion='distance')

Let's compare how the cluster outputs look on the PCA-reduced dimensions as compared to the original label distribution (the detailed plotting code is in the notebook); see the figure below.

Figure: Visualizing hierarchical clusters in the breast cancer dataset
We definitely see two distinct clusters, but there is more overlap between them as compared to the K-means method, and we have more mislabeled instances. However, do take note of the label numbers; here we have 1 and 2 as the label values. This is just to reinforce the fact that the label values exist only to distinguish the clusters and don't mean anything. The advantage of this method is that you do not need to input the number of clusters beforehand; the model tries to find it from the underlying data.

Model Evaluation

We have seen the processes of data retrieval, processing, wrangling, and modeling based on various requirements. A logical question that follows is how we can judge whether a model is good or bad. Just because we have developed something fancy using a renowned algorithm doesn't guarantee its performance will be great. Model evaluation is the answer to these questions and is an essential part of the whole machine learning pipeline. We have mentioned quite a number of times in the past how model development is an iterative process; model evaluation is the defining part that makes it iterative in nature. Based on model evaluation and subsequent comparisons, we can take a call on whether to continue our efforts in model enhancement or cease them, and on which model should be selected as the final model to be used\deployed. Model evaluation also helps us in the very important process of tuning the hyperparameters of the model, and in deciding scenarios like whether the intelligent feature that we just developed adds any value to our model or not. Combining all these arguments makes a compelling case for having a defined process for model evaluation and a set of metrics that can be used for measuring and evaluating models.

So how can we evaluate a model? How can we decide whether model A is better or model B performs better? The ideal way is to have some numerical measure or metric of a model's effectiveness and use that measure to rank and select models. This will be one of our primary ways to evaluate models, but we should also keep in mind that, a lot of the time, these evaluation metrics may not capture the required success criteria of the problem we are trying to solve. In such cases, we have to get imaginative and adapt these metrics to our problem, using things like business constraints and objectives. Model evaluation metrics are highly dependent on the type of model we have, so metrics for regression models are different from those for classification models or clustering models. Considering this dependency, we break this section down into sub-sections covering the major model evaluation metrics for the different categories of models.

Evaluating Classification Models

Classification models are among the most popular models used by machine learning practitioners. Due to their popularity, it is essential to know how to build good quality, generalized models. There is a varied set of metrics that can be used to evaluate classification models; in this section, we target a small subset of those metrics that are essential. We will use the models developed in the previous section to illustrate them in detail. For this, let's first prepare train and test datasets to build our classification models. We will be leveraging the X and y variables from before, which hold the data and labels for the breast cancer dataset observations.

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
print(X_train.shape, X_test.shape)
From the preceding output, it is clear how many observations we have in our train dataset and how many in our test dataset. We will be leveraging a nifty module we have created for model evaluation, named model_evaluation_utils; you can find it along with the code files and notebooks for this chapter. We recommend you check out its code, which leverages the scikit-learn metrics module to compute most of the evaluation metrics and plots.

Confusion Matrix

A confusion matrix is one of the most popular ways to evaluate a classification model. Although the matrix by itself is not a metric, the matrix representation can be used to define a variety of metrics, all of which become important in some specific case or scenario. A confusion matrix can be created for a binary classification as well as a multi-class classification model. It is created by comparing the predicted class label of each data point with its actual class label. This comparison is repeated for the whole dataset and the results are compiled in a matrix or tabular format; this resultant matrix is our confusion matrix. Before we go any further, let's build a logistic regression model on our breast cancer dataset and look at the confusion matrix for the model predictions on the test dataset.

from sklearn import linear_model

# train and build the model
logistic = linear_model.LogisticRegression()
logistic.fit(X_train, y_train)

# predict on test data and view confusion matrix
import model_evaluation_utils as meu
y_pred = logistic.predict(X_test)
meu.display_confusion_matrix(true_labels=y_test, predicted_labels=y_pred, classes=[0, 1])

The preceding code displays the confusion matrix with the necessary annotations: the rows show counts based on the actual labels and the columns show counts based on the predicted labels. From it, we can see how many of the observations with label 0 (malignant) and how many with label 1 (benign) our model has predicted correctly, and where it has made mistakes. A more detailed analysis is coming right up.

Understanding the Confusion Matrix

While the name itself sounds pretty overwhelming, understanding the confusion matrix is not that confusing once you have the basics right. To reiterate what you learned in the previous section, the confusion matrix is a tabular structure that keeps track of correct classifications as well as misclassifications. It is useful for evaluating the performance of a classification model for which we know the true data labels and can compare them with the predicted data labels. Each column in the confusion matrix represents classified instance counts based on predictions from the model, and each row represents instance counts based on the actual\true class labels. This structure can also be reversed, i.e., predictions depicted by rows and true labels by columns. In a typical binary classification problem, we usually have a class label that is defined as the positive class, which is basically the class of our interest. For instance, in our breast cancer dataset, let's say we
are interested in detecting or predicting when a patient does not have breast cancer (benign); then label 1 is our positive class. However, suppose our class of interest is detecting cancer (malignant); then we could have chosen label 0 as our positive class. The figure below shows a typical confusion matrix for a binary classification problem, where p denotes the positive class and n denotes the negative class.

Figure: Typical structure of a confusion matrix

The figure should make things clearer with regard to the structure of confusion matrices. In general, we usually have a positive class, as discussed earlier, and the other class is the negative class. Based on this structure, we can clearly see four terms of importance.

- True Positive (TP): This is the count of the total number of instances from the positive class where the true class label was equal to the predicted class label, i.e., the total instances where we correctly predicted the positive class label with our model.
- False Positive (FP): This is the count of the total number of instances from the negative class that our model misclassified by predicting them as positive, hence the name false positive.
- True Negative (TN): This is the count of the total number of instances from the negative class where the true class label was equal to the predicted class label, i.e., the total instances where we correctly predicted the negative class label with our model.
- False Negative (FN): This is the count of the total number of instances from the positive class that our model misclassified by predicting them as negative, hence the name false negative.
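As a cross-check, one programmatic way to obtain these four counts (a small sketch reusing the y_test and y_pred variables from the earlier example) is to compute the confusion matrix with scikit-learn and unravel it; with the label ordering [0, 1] and label 1 as the positive class, ravel() returns the entries as TN, FP, FN, TP.

from sklearn.metrics import confusion_matrix

# rows are true labels, columns are predicted labels; label 1 (benign) is the positive class
cm = confusion_matrix(y_true=y_test, y_pred=y_pred, labels=[0, 1])
tn, fp, fn, tp = cm.ravel()
print('TP:', tp, 'FP:', fp, 'TN:', tn, 'FN:', fn)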
Thus, based on this information, can you compute these four values for our confusion matrix built from the model predictions on the breast cancer test data?

positive_class = 1
TP = ...
FP = ...
TN = ...
FN = ...

Performance Metrics

The confusion matrix by itself is not a performance measure for classification models, but it can be used to calculate several metrics that are useful measures in different scenarios. We will describe how the major metrics can be calculated from the confusion matrix, compute them manually using the necessary formulae, compare the results with the functions provided by scikit-learn on our predicted results, and give an intuition of the scenarios where each metric can be used.

Accuracy: This is one of the most popular measures of classifier performance. It is defined as the overall proportion of correct predictions made by the model. The formula for computing accuracy from the confusion matrix is:

Accuracy = (TP + TN) / (TP + FP + TN + FN)

The accuracy measure is normally used when our classes are almost balanced and correct predictions of those classes are equally important. The following code computes accuracy on our model predictions.

from sklearn import metrics

fw_acc = round(metrics.accuracy_score(y_true=y_test, y_pred=y_pred), 5)
mc_acc = round((TP + TN) / (TP + TN + FP + FN), 5)
print('Framework Accuracy:', fw_acc)
print('Manually Computed Accuracy:', mc_acc)

Precision: Precision, also known as positive predictive value, is another metric that can be derived from the confusion matrix. It is defined as the fraction of predictions for the positive class that are actually correct or relevant. The formula for precision is:

Precision = TP / (TP + FP)

A model with high precision identifies a higher fraction of the positive class correctly as compared to a model with lower precision. Precision becomes important in cases where we are more concerned about the correctness of our positive predictions, even if the total accuracy reduces. The following code computes precision on our model predictions.

fw_prec = round(metrics.precision_score(y_true=y_test, y_pred=y_pred), 5)
mc_prec = round((TP / (TP + FP)), 5)
print('Framework Precision:', fw_prec)
print('Manually Computed Precision:', mc_prec)
Recall: Recall, also known as sensitivity, is a measure of the model's ability to identify the relevant data points. It is defined as the fraction of instances of the positive class that were correctly predicted. This is also known as hit rate, coverage, or sensitivity. The formula for recall is:

Recall = TP / (TP + FN)

Recall becomes an important measure of classifier performance in scenarios where we want to catch the largest possible number of instances of a particular class, even when this increases our false positives. For example, consider the case of bank fraud: a model with high recall will flag a higher number of potential fraud cases and will help us raise alarms for most of the suspicious cases. The following code computes recall on our model predictions.

fw_rec = round(metrics.recall_score(y_true=y_test, y_pred=y_pred), 5)
mc_rec = round((TP / (TP + FN)), 5)
print('Framework Recall:', fw_rec)
print('Manually Computed Recall:', mc_rec)

F1 Score: There are some cases in which we want a balanced optimization of both precision and recall. The F1 score is the harmonic mean of precision and recall and helps us optimize a classifier for balanced precision and recall performance. The formula for the F1 score is:

F1 Score = 2 * (Precision * Recall) / (Precision + Recall)

Let's compute the F1 score on the predictions made by our model using the following code.

fw_f1 = round(metrics.f1_score(y_true=y_test, y_pred=y_pred), 5)
mc_f1 = round((2 * mc_prec * mc_rec) / (mc_prec + mc_rec), 5)
print('Framework F1-Score:', fw_f1)
print('Manually Computed F1-Score:', mc_f1)

In each case, the manually computed metrics match the results obtained from the scikit-learn functions. This should give you a good idea of how to evaluate classification models with these metrics.
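For a consolidated per-class view of precision, recall, and F1, scikit-learn also provides a classification_report utility; the following is a small sketch using the same test labels and predictions as above (the display names for the two classes are our own).

from sklearn.metrics import classification_report

# precision, recall, F1, and support for each class in one tabular report
print(classification_report(y_true=y_test, y_pred=y_pred,
                            target_names=['malignant (0)', 'benign (1)']))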
Receiver Operating Characteristic Curve

ROC, which stands for Receiver Operating Characteristic, is a concept from the early days of radar. The concept extends to the evaluation of binary classifiers as well as multi-class classifiers (note that to adapt the ROC curve for multi-class classifiers we have to use a one-vs-all scheme and averaging techniques like macro and micro averaging). It can be interpreted as the effectiveness with which the model can distinguish between the actual signal and the noise in the data. The ROC curve is created by plotting the fraction of true positives versus the fraction of false positives, i.e., it is a plot of the true positive rate (TPR) versus the false positive rate (FPR). It is applicable mostly to scoring classifiers, which are classifiers that return a probability value or score for each class label, from which a class label can be deduced (based on the maximum probability value). TPR, known as sensitivity or recall, is the fraction of correct positive results among all the positive samples in the dataset. FPR, known as the false alarm rate or (1 - specificity), is the fraction of incorrect positive predictions among all the negative samples in the dataset.

Although we will rarely plot an ROC curve manually, it is always a good idea to understand how it can be plotted. The following steps can be followed to plot an ROC curve, given the class scores of each data point and their correct or true labels.

1. Order the outputs of the classifier by their scores (or the probability of being the positive class).
2. Start at the (0, 0) coordinate.
3. For each example x in the sorted order: if x is positive, move up by 1/pos; if x is negative, move right by 1/neg. Here pos and neg are the total numbers of positive and negative examples respectively.

The idea is that, typically, the ROC space lies between the points (0, 0) and (1, 1). Each prediction result from the confusion matrix occupies one point in this ROC space. Ideally, the best prediction model would give a point in the top left corner (0, 1), indicating perfect classification (100% sensitivity and 100% specificity). The diagonal line depicts a classifier that guesses at random. If your ROC curve occurs in the top half of the graph, you have a decent classifier, better than average. You can always leverage the roc_curve function provided by scikit-learn to generate the necessary data for an ROC curve; refer to http://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html for further details.
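As a concrete illustration (a minimal sketch using the logistic regression model and test data from the earlier breast cancer example, rather than the book's own plotting utility), the ROC curve data and the corresponding area under the curve can be computed as follows.

from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt

# probability scores for the positive class (label 1)
y_scores = logistic.predict_proba(X_test)[:, 1]
fpr, tpr, thresholds = roc_curve(y_test, y_scores, pos_label=1)
roc_auc = auc(fpr, tpr)

plt.plot(fpr, tpr, label='ROC curve (AUC = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], linestyle='--')   # the random-guess diagonal
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend(loc='lower right')
plt.show()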
Figure: Sample ROC curve (Source: http://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html#roc-metrics)

The preceding figure depicts a sample ROC curve. In general, the ROC curve is an important tool for visually interpreting classification models, but it doesn't directly provide a numerical value that we can use to compare models. The metric that does that job is the area under the curve, popularly known as AUC. In the sample plot, the area under the orange line is the area under that classifier's ROC curve; the ideal classifier has unit area under the curve. Based on this value we can compare two models: generally, the model with the higher AUC score is the better one.

We have built a generic function for plotting ROC curves with AUC scores for binary as well as multi-class classification problems in our model_evaluation_utils module; check out the plot_model_roc_curve function to learn more about it. The following code plots the ROC curve for our breast cancer logistic regression model leveraging that function.

meu.plot_model_roc_curve(clf=logistic, features=X_test, true_labels=y_test)
Figure: ROC curve for our logistic regression model

Considering our model has very high accuracy and F1 score, the near-perfect ROC curve in the preceding figure makes sense. You can also refer to the scikit-learn example mentioned earlier to see a multi-class classifier ROC curve in action.

Evaluating Clustering Models

We discussed some of the popular ways to evaluate classification models in the previous section; the confusion matrix alone provided us with a bunch of metrics that we can use to compare classification models. The tables are turned drastically when it comes to evaluating clustering models (or unsupervised models in general). This difficulty arises from the lack of a validated ground truth in the case of unsupervised models, i.e., the absence of true labels in the data. In this section, you learn about some of the methods and metrics we can use to evaluate the performance of clustering models.

To illustrate the evaluation metrics with a real-world example, we will leverage the breast cancer dataset, available in the variables X for the data and y for the observation labels. We will also use the K-means algorithm to fit two models on this data--one with two clusters and a second one with five clusters--and then evaluate their performance.

from sklearn.cluster import KMeans

km2 = KMeans(n_clusters=2, random_state=42).fit(X)
km2_labels = km2.labels_

km5 = KMeans(n_clusters=5, random_state=42).fit(X)
km5_labels = km5.labels_
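Before computing any formal metric, a quick way to see how the cluster assignments line up with the true diagnosis labels is a simple cross-tabulation. This is a minimal exploratory sketch, assuming pandas is available and using the y and km2_labels variables from above; it is an aid for intuition, not one of the formal evaluation metrics discussed next.

import pandas as pd

# rows: true class labels, columns: cluster labels assigned by K-means
print(pd.crosstab(y, km2_labels, rownames=['true label'], colnames=['cluster label']))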
If you remember our earlier discussion on cluster labels, they are just indicators used to distinguish data points from each other based on which cluster or group they fall into. Hence we cannot compare a cluster label directly with a true class label: it is entirely possible that all data points with a true class label of 0 were clustered together under label 1 during the clustering process. Keeping this in mind, we can leverage several metrics to validate clustering performance when the true labels are available. Three popular metrics can be used in this scenario:

- Homogeneity: A clustering result satisfies homogeneity if each of its clusters contains only data points that are members of a single class (based on the true class labels).
- Completeness: A clustering result satisfies completeness if all the data points of a specific ground truth class label are also elements of the same cluster.
- V-measure: The harmonic mean of the homogeneity and completeness scores gives us the V-measure value.

These values are typically bounded between 0 and 1, and higher values are usually better. Let's compute these metrics on our two K-means clustering models.

km2_hcv = np.round(metrics.homogeneity_completeness_v_measure(y, km2_labels), 3)
km5_hcv = np.round(metrics.homogeneity_completeness_v_measure(y, km5_labels), 3)

print('Homogeneity, Completeness, V-measure metrics for num clusters=2:', km2_hcv)
print('Homogeneity, Completeness, V-measure metrics for num clusters=5:', km5_hcv)

We can see that the V-measure for the first model with two clusters is better than that of the model with five clusters, and the reason is its higher completeness score. Another metric you can try out is the Fowlkes-Mallows score.

Internal Validation

Internal validation means validating a clustering model by defining metrics that capture the expected behavior of a good clustering model. A good clustering model can be identified by two very desirable traits:

- Compact groups, i.e., the data points in one cluster occur close to each other.
- Well-separated groups, i.e., any two groups/clusters have as large a distance between them as possible.

We can define metrics that mathematically capture these two major traits and use them to evaluate clustering models. Most such metrics rely on some notion of distance between data points, which can be defined using any candidate distance measure, ranging from Euclidean distance and Manhattan distance to any measure that meets the criteria for being a distance metric.
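To make the notion of distance concrete, the following minimal sketch computes pairwise distances between a handful of observations of X under two different metrics; it assumes the X matrix from above and uses scikit-learn's pairwise_distances utility purely for illustration.

import numpy as np
from sklearn.metrics import pairwise_distances

sample = X[:5]  # a few observations are enough to illustrate

# the same points can look closer or farther apart depending on the chosen metric
print(np.round(pairwise_distances(sample, metric='euclidean'), 2))
print(np.round(pairwise_distances(sample, metric='manhattan'), 2))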
Silhouette Coefficient

The silhouette coefficient is a metric that tries to combine the two requirements of a good clustering model. It is defined for each sample and is a combination of its similarity to the data points in its own cluster and its dissimilarity to the data points not in its cluster. For a clustering model over a dataset with N data points, the overall silhouette coefficient is the mean of the per-sample values:

SC = (1/N) x Σ sample_SC_i

Here, sample_SC_i is the silhouette coefficient for each of the N samples. The formula for a single sample's silhouette coefficient is:

sample_SC = (b - a) / max(a, b)

where
- a = mean distance between the sample and all other points in the same cluster
- b = mean distance between the sample and all other points in the next nearest cluster

The silhouette coefficient is bounded between -1 (incorrect clustering) and +1 (excellent-quality, dense clustering). A higher silhouette coefficient generally means that the clustering model leads to clusters that are dense, well separated, and distinguishable from each other; lower scores indicate overlapping clusters. In scikit-learn, we can compute the silhouette coefficient by using the silhouette_score function, which also allows different options for the distance metric.

from sklearn import metrics

km2_silc = metrics.silhouette_score(X, km2_labels, metric='euclidean')
km5_silc = metrics.silhouette_score(X, km5_labels, metric='euclidean')

print('Silhouette Coefficient for num clusters=2:', km2_silc)
print('Silhouette Coefficient for num clusters=5:', km5_silc)

Based on the preceding output, it seems that we have better cluster quality with two clusters as compared to five clusters.

Calinski-Harabaz Index

The Calinski-Harabaz index is another metric that we can use to evaluate clustering models when the ground truth is not known. The Calinski-Harabaz score is given as the ratio of the between-cluster dispersion mean to the within-cluster dispersion. The mathematical formula of the score for k clusters is:

s(k) = [Tr(B_k) / Tr(W_k)] x [(N - k) / (k - 1)]
Here,

W_k = Σ_{q=1}^{k} Σ_{x ∈ C_q} (x - c_q)(x - c_q)^T
B_k = Σ_{q=1}^{k} n_q (c_q - c)(c_q - c)^T

with Tr being the trace-of-matrix operator, N the number of data points in our data, C_q the set of points in cluster q, c_q the center of cluster q, c the center of the whole dataset, and n_q the number of points in cluster q. Thankfully, we can calculate this index without working through this complex formula ourselves by leveraging scikit-learn. A higher score normally indicates that the clusters are dense and well separated, which relates to the general principles of clustering models.

km2_chi = metrics.calinski_harabaz_score(X, km2_labels)
km5_chi = metrics.calinski_harabaz_score(X, km5_labels)

print('Calinski-Harabaz Index for num clusters=2:', km2_chi)
print('Calinski-Harabaz Index for num clusters=5:', km5_chi)

We can see that both scores are quite high, with the result for five clusters being even higher. This goes to show that relying on a single metric alone is not sufficient; you must try multiple evaluation methods, coupled with feedback from data scientists as well as domain experts.

Evaluating Regression Models

Regression models are an example of supervised learning methods and, owing to the availability of the correct measures (real-valued numeric response variables), their evaluation is relatively easier than that of unsupervised models. Usually in the case of supervised models we are spoilt for choice of metrics, and the important decision is choosing the right one for our use case. Regression models, like classification models, have a varied set of metrics that can be used to evaluate them. In this section, we go through a small but essential subset of these metrics.

Coefficient of Determination or R²

The coefficient of determination (R²) measures the proportion of variance in the dependent variable that is explained by the independent variables. A coefficient of determination score of 1.0 denotes a perfect regression model, indicating that all of the variance is explained by the independent variables. It also provides a measure of how well future samples are likely to be predicted by the model. The mathematical formula for R² is given as follows, where ȳ is the mean of the dependent variable, y_i indicates the actual true response values, and ŷ_i indicates the model-predicted outputs:

R² = 1 - [ Σ_{i=1}^{n_samples} (y_i - ŷ_i)² ] / [ Σ_{i=1}^{n_samples} (y_i - ȳ)² ]
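Before we look at scikit-learn's helper functions, here is a minimal sketch of computing these regression metrics; the y_true and y_pred arrays below are small made-up values used purely to demonstrate the API, and the sketch also previews the mean squared error and its root, which are discussed next.

import numpy as np
from sklearn.metrics import r2_score, mean_squared_error

# hypothetical true and predicted values of a response variable
y_true = np.array([3.1, 2.4, 5.6, 7.8, 4.2])
y_pred = np.array([2.9, 2.7, 5.1, 8.0, 4.6])

print('R-squared:', round(r2_score(y_true, y_pred), 3))
print('MSE:', round(mean_squared_error(y_true, y_pred), 3))
print('RMSE:', round(np.sqrt(mean_squared_error(y_true, y_pred)), 3))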
In the scikit-learn package, R² can be calculated with the r2_score function by supplying it the true values and the predicted values of the output/response variable, as sketched above.

Mean Squared Error

The mean squared error calculates the average of the squares of the errors, or deviations, between the actual values and the values predicted by a regression model. The mean squared error, or MSE, can be used to evaluate a regression model, with lower values meaning better regression models with fewer errors. Taking the square root of the MSE yields the root mean squared error, or RMSE, which can also be used as an evaluation metric for regression models. The mathematical formula for MSE is quite simple:

MSE = (1 / n_samples) x Σ_{i=1}^{n_samples} (y_i - ŷ_i)²

and RMSE is simply √MSE. In the scikit-learn package, the MSE can be calculated by invoking the mean_squared_error function from the metrics module.

Regression models have many more evaluation metrics, including the median absolute error, mean absolute error, explained variance score, and so on. They are easy to calculate using the functions provided by the scikit-learn library, and their mathematical formulae are easy to interpret with an intuitive understanding attached to them. We have only introduced two of them here, but you are encouraged to explore the other metrics that can be used to evaluate regression models. We will look at regression models in more detail in a later chapter.

Model Tuning

In the first two sections of this chapter, you learned how to fit models on our processed data and how to evaluate those models. We will now build further upon the concepts introduced so far. In this section, you will learn about an important characteristic of all machine learning algorithms (which we have been glossing over until now), its importance, and how to find optimal values for these entities. Model tuning is one of the most important concepts in machine learning, and it does require some knowledge of the underlying math and logic of the algorithm in focus. Although we cannot dive deep into the extensive theoretical aspects of the algorithms we discuss, we will try to give some intuition about them so that you are empowered to tune them better and learn the essential concepts needed to do so.

The models we have developed so far were mostly the default models provided by the scikit-learn package; by default we mean models with the default configurations and settings, as you may remember seeing in some of the model estimator object parameters. Since the datasets we were analyzing were not particularly tough to analyze, even models with default configurations produced decent solutions. The situation is not so rosy when it comes to real-world datasets that have a lot of features, noise, and missing data. You will see in subsequent chapters how real datasets are often tough to process and wrangle, and even harder to model. Hence, it is unlikely that we will always use default-configured models out of the box. Instead, we delve deeper into the models we are targeting and look at the knobs that can be tuned to extract the best performance from any given model. This iterative experimentation with the dataset, model parameters, and features is the very core of the model tuning process. We start this section by introducing these so-called hyperparameters associated with ML algorithms, then we try to justify why it is hard to have a perfect model, and finally we discuss some strategies that we can pursue to tune our models.
Introduction to Hyperparameters

What are hyperparameters? The simplest definition is that hyperparameters are meta-parameters associated with a machine learning algorithm that are usually set before the model training and building process. We can do this because model hyperparameters do not have to be derived from the underlying dataset on which a model is trained. Hyperparameters are extremely important for tuning the performance of learning algorithms. They are often confused with model parameters, but we must keep in mind that hyperparameters are different from model parameters precisely because they do not depend on the data. In simple terms, model hyperparameters represent high-level concepts or knobs that a data scientist can tweak and tune during the model training and building process to improve its performance. Let's take an example to illustrate this in case you still have difficulty interpreting them.

Decision Trees

Decision trees are one of the simplest and easiest to interpret classification algorithms (also used in regression sometimes; check out CART models). First, you will learn how a decision tree is created, because hyperparameters are often tightly coupled with the actual intricacies of the algorithm. The decision tree algorithm is based on greedy recursive partitioning of the initial dataset (features); it leverages a tree-based structure to decide how to perform the partitions. The steps involved in learning a decision tree are as follows.

1. Start with the whole dataset and find the attribute (feature) that best differentiates between the classes. This best attribute is found using metrics such as information gain or Gini impurity.
2. Once the best attribute is found, separate the dataset into two (or more) parts based on the values of that attribute.
3. If any one part of the dataset contains only labels of one class, stop the process for that part and label it as a leaf node of that class.
4. Repeat the whole process until only leaf nodes remain, each of which contains data points of a single class.

The final model returned by the decision tree algorithm can be represented as a flow chart (the core decision tree structure). A minimal sketch of fitting such a tree and inspecting its tunable hyperparameters follows; after that, consider the sample decision tree for the Titanic survival prediction problem depicted in the figure below.
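The following sketch is not from the original discussion; it simply shows where hyperparameters appear when fitting a decision tree in scikit-learn. It reuses the breast cancer data from earlier in the chapter (reloaded here so the snippet is self-contained), and the specific hyperparameter values chosen are purely illustrative.

from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

data = load_breast_cancer()
X, y = data.data, data.target

# hyperparameters are set up front, before the tree ever sees the data
dt = DecisionTreeClassifier(criterion='gini', max_depth=4,
                            min_samples_leaf=10, random_state=42)
dt.fit(X, y)

# the hyperparameters we chose are visible via get_params(),
# while the learned split rules (the model parameters) live inside tree_
print(dt.get_params())
print('nodes in the learned tree:', dt.tree_.node_count)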
Figure: Sample decision tree model

A decision tree is easy to interpret: by following the path matching the attribute values of an unknown data point, the leaf node where you end up gives the predicted class for that data point. The model parameters in this case are the attributes on which we split (here sex, age, and sibsp) and the threshold values of those attributes. For example, if a person was female, she is likely to have survived based on this model, whereas male infants below a certain age are predicted to have perished.

In the algorithm, the decision of whether to continue splitting the dataset at a node or to stop the splitting process is governed by one of the hyperparameters of the algorithm, named min_samples_leaf. The default value of this parameter is 1, which means we can potentially keep splitting the data until we have leaf nodes with a single data point (with a unique class label). This leads to a lot of overfitting, as potentially each data point can end up in its own leaf node and the model will not learn anything useful. Suppose we instead want to stop the splitting process once a leaf node holds some minimum fraction of the whole dataset, and label that node with its majority class. This can be achieved by setting a different value for this hyperparameter, which allows us to control overfitting and helps us develop a more generalized model.

This is just one of the hyperparameters associated with the algorithm; there are many more, like the splitting criterion (criterion), the maximum depth of the tree (max_depth), the number of features considered (max_features), and so on, each of which can have a different effect on the quality of the overall model. Similar hyperparameters exist for every learning algorithm; examples include the learning rate in logistic regression, the kernel in SVMs, and the dropout rate in neural networks. Hyperparameters are generally closely related to the learning algorithm, so we need some understanding of the algorithm to have intuition about setting the value of a particular hyperparameter. In the later sections of this chapter and the book, we deal with datasets and models that will require some level of hyperparameter tuning.

The Bias-Variance Tradeoff

So far, we have learned about the concepts needed for tuning our models. But before we go into the process of putting it all together and actually tuning our models, we must understand a fundamental tradeoff that restricts the best model we can develop. This tradeoff is called the bias versus variance tradeoff. The obvious question that arises is: what are bias and variance in the context of machine learning models?
Bias: This is the error that arises due to the model (learning algorithm) making wrong assumptions about the patterns in the underlying data. The bias error is the difference between the expected or predicted value of the model estimator and the true value we are trying to predict. Recall that model building is an iterative process: if you imagine building a model multiple times over a dataset, each time with some new observations, then due to the underlying noise and randomness in the data, predictions will not always be what is expected, and bias measures this difference or error between actual and predicted values. It can also be described as the average approximation error that the model has over all possible training datasets.

The last part, "all possible training datasets", needs some explanation. The dataset that we observe and develop our models on is only one of the possible combinations of data that exist; all the possible combinations of each of the attributes/features in our data would give rise to different datasets. For example, with n binary (categorical) features, the space of all possible data points has size 2^n. The dataset we model on is obviously a small subset of this huge space, so bias is the average approximation error we can expect over subsets of the entire space.

Bias is mostly affected by our assumptions (or the model's assumptions) about the underlying data and patterns. For example, a simple linear regression model assumes that the dependent variable is linearly dependent on the independent variables, whereas a decision tree model makes no such assumption about the structure of the data and purely learns patterns from it. Hence, in a relative sense, a linear model may tend to have higher bias than a decision tree model. High bias makes a model miss relevant relationships between the features and the output variable.

Variance: This error arises due to model sensitivity to fluctuations in the dataset, which can come from new data points, features, randomness, noise, and so on. It is the variance of our approximation function over all possible datasets, and it represents the sensitivity of the model's prediction results to the particular set of data points it was trained on. Suppose you could learn the model on different subsets of all possible datasets; variance quantifies how much the results of the model change with the change in the dataset. If the results stay quite stable, the model is said to have low variance, but if the results vary considerably each time, the model is said to have high variance.

Consider the same example contrasting a linear model against a decision tree model, under the assumption that a clear linear relationship exists between the dependent and independent variables. For a sufficiently large dataset, a linear model will always capture that relationship, whereas the capability of a decision tree model depends on the dataset: if we get a dataset with a lot of outliers, we are likely to get a bad decision tree model. Hence we can say that the decision tree model will have higher variance than the linear regression model, based on the data and the underlying noise and randomness. High variance makes a model too sensitive to outliers or random noise instead of generalizing well.

An effective way to get a clearer picture of this somewhat confusing concept is through a visual representation of bias and variance, as depicted in the following figure.
Figure: The bias-variance tradeoff

In the figure, the inner red circle represents the perfect model that we could have considering all possible combinations of the data, and each blue dot (x) marks a model learned on a particular combination of dataset and features.

- Models with low bias and low variance, represented by the top-left image, learn the general structure of the underlying data patterns and relationships. They end up close to the hypothetical perfect model, and their predictions are consistent and hit the bull's eye.
- Models with low bias and high variance, represented by the top-right image, generalize to some extent (they learn the proper relationships/patterns) and perform decently on average due to their low bias, but they are sensitive to the data they were trained on, leading to high variance, and hence their predictions keep fluctuating.
- Models with high bias and low variance tend to make consistent predictions irrespective of the dataset on which they were built, leading to low variance; but due to their high bias they fail to learn the patterns and relationships in the data required for correct predictions, and hence miss the mark on average, as depicted in the bottom-left image.
- Models with high bias and high variance are the worst sort of models possible: they fail to learn the feature-to-response relationships essential for correct predictions, and they are also extremely sensitive to data, outliers, and noise, leading to highly fluctuating predictions, as depicted in the bottom-right image.

Extreme Cases of Bias-Variance

In real-world modeling, we always face a tradeoff when trying to decrease bias and variance simultaneously. To understand why this tradeoff exists, we must first consider the two possible extreme cases of bias and variance.

Underfitting

Consider a linear model that is lazy and always predicts a constant value. This model will have extremely low variance (in fact, zero variance), as it does not depend at all on which subset of the data it gets; it will always predict a constant and hence have stable performance. On the other hand, it will have extremely high bias, as it has learned nothing from the data and has made a very rigid and erroneous assumption about it. This is the case of model underfitting, in which we fail to learn anything about the data, its underlying patterns, and its relationships.

Overfitting

Consider the opposite case, in which we have a model that attempts to fit every data point it encounters (the closest example would be fitting an nth-order polynomial curve to an n-observation dataset so that the curve passes through every point). In this case, we get a model with low bias, as no assumption about the structure of the data was made (even when there was some structure), but the variance will be very high, as we have tightly fit the model to one particular subset of data (focusing too much on the training data); any subset different from the training set will lead to a lot of error. This is the case of overfitting, where we have built a model so specific to the data at hand that it fails to generalize to other subsets of data.

The Tradeoff

The total generalization error of any model is the sum of its bias error, variance error, and irreducible error:

Generalization Error = Bias Error + Variance Error + Irreducible Error

The irreducible error is the error introduced by noise in the training data itself, something that is common in real-world datasets and about which not much can be done. The idea is to focus on the other two errors. Every model must make a tradeoff between two choices: making assumptions about the structure of the data, or fitting itself too closely to the data at hand. Either choice, taken to its extreme, leads to one of the extreme cases above. The idea is to balance model complexity by making an optimal tradeoff between bias and variance. The following minimal sketch makes this tradeoff visible empirically, and the figure after it shows the classic picture of test and train errors as a function of model complexity.
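This sketch is not from the original text; it is a self-contained illustration on synthetic data (generated inside the code) of how training error keeps dropping as model complexity grows while validation error eventually rises again. The polynomial degrees chosen are arbitrary examples.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# synthetic noisy data: a smooth underlying function plus random noise
rng = np.random.RandomState(42)
X = np.sort(rng.uniform(0, 1, 60)).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel() + rng.normal(scale=0.2, size=60)

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=42)

# low degree -> high bias (underfitting); very high degree -> high variance (overfitting)
for degree in [1, 3, 15]:
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_tr, y_tr)
    train_err = mean_squared_error(y_tr, model.predict(X_tr))
    val_err = mean_squared_error(y_val, model.predict(X_val))
    print('degree={:2d}  train MSE={:.4f}  validation MSE={:.4f}'.format(degree, train_err, val_err))

In a typical run, the lowest-degree model shows high error on both splits (underfitting), while the highest-degree model drives the training error down but pushes the validation error back up (overfitting), mirroring the curves in the figure below.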
Figure: Test and train errors as a function of model complexity (Source: The Elements of Statistical Learning, Tibshirani et al., Springer)

This figure should give you more clarity on the tradeoff that needs to be made to prevent an increase in model error. We need to make some assumptions about the underlying structure in the data, but they must be reasonable; at the same time, the model must learn from the data at hand and generalize well instead of overfitting to each and every data point. This tradeoff can be controlled by making sure our model is not overly complex and by ensuring reasonable performance on unseen validation data. We cover more on cross-validation in the next section, and we recommend checking out the sections on model selection and the bias-variance tradeoff in The Elements of Statistical Learning (Tibshirani et al., Springer).

Cross Validation

In the initial sections of this chapter, when we were learning to fit different models, we followed the practice of partitioning the data into a training set and a test set: we built the model on the training set and reported its performance on the test set. Although that way of building models works, when tuning models intensively we need to consider other strategies around validation datasets. In this section, we discuss how we can use the same data to build different models and also tune their hyperparameters using a simple data partitioning strategy. This strategy is one of the most prevalent practices in the data science domain, irrespective of the type of model, and it is called cross-validation, or just CV. It is extremely useful when you have few data observations and cannot set aside a specific partition of the data as a validation set (more on this shortly!). You can then leverage a cross-validation strategy that uses parts of the training data itself for validation, in such a way that you don't end up overfitting the model.

The main intention of any model-building activity is to develop a generalized model on the available data that will perform well on unseen data. But to estimate a model's performance on unseen data, we need to simulate that unseen data using the data we have available. This is achieved by splitting the available data into training and testing sets; by following this simple principle, we ensure that we never evaluate the model on data it has already seen and been trained on. The story would end here if we were completely satisfied with the model we developed, but initial models are seldom satisfactory enough for deployment.
In theory, we could extend the same principle to tuning our algorithms: evaluate the performance of particular hyperparameter values on the test set, retrain the model with a different partition of training and test data and different hyperparameter values, keep the new values if they perform better than the old ones, and repeat until we have the optimal hyperparameter values. This scheme is simple, but it suffers from a serious flaw: it induces bias into the model development process. Although the test set changes in every iteration, its data is still being used to make choices during model development (as we tune and build the model). Hence, the models we develop end up biased and not well generalized, and their measured performance may or may not reflect their performance on truly unseen data.

A simple change in the data-splitting process helps us avoid this leakage of unseen data. Suppose we initially make three subsets of the data instead of the original two: one is the usual training set, the second is the test set, and the third is called the validation set. We train our models on the training data, evaluate their performance on the validation data to tune model hyperparameters (or even to select among different models), and once we are done with the tuning process, we evaluate the final model on the truly unseen test set and report that performance as the approximate performance of the model on real-world unseen data. In essence, this is the basic principle behind cross-validation, as depicted in the following figure.

Figure: Building toward the cross-validation process for model building and tuning

The figure gives an idea of how the whole process works. We divide the original dataset into a train set and a test set; the test set is completely set aside from the learning process. The train set so obtained is again split into an actual train set and a validation set, and we then learn different models on the train set. A point worth noting here is that the models are general, i.e., all of them can be of a single type (for example, logistic regression with different hyperparameters), or they can use different algorithms such as tree-based methods, support vector machines, and so on. The process of model selection is the same whether we are assessing completely different models or trying different hyperparameter values for the same type of model. Once the models are developed, we assess their performance on the validation set and select the model with the best performance as the final model, leveraging model evaluation metrics appropriate to the type of model (accuracy, F1 score, RMSE, silhouette coefficient, and so on).
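To make the three-way split concrete, here is a minimal sketch that holds out a test set, carves a validation set out of the remainder, and compares two candidate hyperparameter settings on the validation set only. It reloads the breast cancer data so it is self-contained; the split proportions and the candidate C values are illustrative choices, not prescriptions from the original text.

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)

# hold out a truly unseen test set first
X_trainval, X_test, y_trainval, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# carve a validation set out of what remains, for tuning and model selection
X_tr, X_val, y_tr, y_val = train_test_split(X_trainval, y_trainval, test_size=0.25, random_state=42)

# compare two candidate hyperparameter settings using the validation set only
for C in [0.1, 1.0]:
    model = LogisticRegression(C=C, max_iter=10000).fit(X_tr, y_tr)
    print('C={}  validation accuracy: {:.3f}'.format(C, accuracy_score(y_val, model.predict(X_val))))

# only the finally chosen model is evaluated on the untouched test set
final_model = LogisticRegression(C=1.0, max_iter=10000).fit(X_tr, y_tr)
print('test accuracy: {:.3f}'.format(accuracy_score(y_test, final_model.predict(X_test))))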
The previously described process seems good: we have described the "validation" part of the process, but we haven't touched on the "cross" part of it. So where is the cross in cross-validation? To understand that intricacy of the CV process, we first have to discuss why we need it at all. The need arises from the fact that by dividing the data into test and validation sets, we have lost a decent amount of data that we could have used to further refine our modeling process. Another important point is that if we take a model's error from a single iteration to be its overall error, we are making a serious mistake; instead, we want an averaged measure of error obtained by building multiple iterations of the same model. But if we keep rebuilding the model on the same dataset, we will not see much difference in model performance.

We address these two issues by introducing the "cross" concept of cross-validation. The idea is to generate different splits of train and validation sets (with different observations in each set) each time, using some strategy (elaborated on shortly), and then build multiple iterations of each model on these different splits. The average error over these splits is then reported as the error of the model in question, and the final decision is made on this averaged error metric. This strategy has a brilliant effect on the estimated error of each model, as it ensures that the averaged error is a close approximation of the model's error on truly unseen data (here, our test set), while also letting us leverage the complete training dataset for building the model. This process is explained pictorially in the following figure.

Figure: The final cross-validation process for model building and tuning

The various strategies by which these different train and validation sets can be generated give rise to different kinds of cross-validation strategies. The common idea in each of these strategies remains the same; the only difference is in the way the original train set is split into train and validation sets for each iteration of model building.

Cross-Validation Strategies

We explained the basic principle of cross-validation in the previous section. In this section, we look at the different strategies by which we can split the training data into training and validation data. Apart from the way of this split, as mentioned before, the process for each of these strategies remains the same. The major types of cross-validation strategies are described as follows.
Leave-One-Out CV

In this cross-validation strategy, we select a single random data point from the initial training dataset, and that point alone becomes our validation set; the remaining observations become our training set. This means that if we have n data points in the training set, we develop n iterations of each model, with a different training and validation set each time, such that the validation set has one observation and the remaining n - 1 observations go into the training set. This may become infeasible if the dataset is large, although in practice the error can be estimated by performing only a small number of iterations. Due to its computational cost, this strategy is mostly suitable for small datasets and is rarely used in practice.

K-Fold CV

The other strategy for cross-validation is to split the training dataset into k equal subsets (folds). Out of these k subsets, we train the model on k - 1 subsets and keep one subset as the validation set. This process is repeated k times, and the error is averaged over the k models obtained from the k iterations. We keep changing the validation set in each of these iterations, which ensures that in each iteration the model is trained on a different subset of data. This practice of cross-validation is quite effective, both for model selection and for hyperparameter optimization. A natural question for this strategy is how to select the appropriate number of folds, since k controls both the quality of the error approximation and the computational runtime of the CV process. There are mathematical ways to select the most appropriate k, but in practice a good choice of k ranges from 5 to 10, so in most cases we can do five-fold or ten-fold cross-validation and be reasonably confident of the results we obtain.

Hyperparameter Tuning Strategies

Based on our discussion so far, we have all the prerequisites for tuning our models: we know what hyperparameters are, how the performance of a model can be evaluated, and how we can use cross-validation to search through the parameter space for optimal values of our algorithm's hyperparameters. In this section, we discuss two major strategies that tie all of this together to determine the most optimal hyperparameters. Fortunately, the scikit-learn library has excellent built-in support for performing hyperparameter search with cross-validation. There are two major ways in which we can search the parameter space for an optimal model, and they differ in how the search is carried out: systematic versus random. We discuss both methods along with hands-on examples; the takeaway is to understand the processes so that you can start leveraging them on your own datasets. Also note that, even when we don't mention it explicitly, we always use cross-validation to perform these searches.

Grid Search

This is the simplest of the hyperparameter optimization methods. In this method, we specify a grid of hyperparameter values that we want to try out in order to find the best parameter combination. We then build models on each of those combinations of parameter values--using cross-validation, of course--and report the best parameter combination in the whole grid. The output is the model using the best combination from the grid. Although it is quite simple, grid search suffers from one serious drawback: the user has to manually supply the candidate parameter values, which may or may not contain the most optimal parameters.
In scikit-learn, grid search can be done using the GridSearchCV class. We go through an example by performing grid search on a support vector machine (SVM) model using the breast cancer dataset from earlier. The SVM is another supervised machine learning algorithm that can be used for classification. It is an example of a maximum-margin classifier, which tries to learn a representation of all the data points such that the separate categories/labels are divided by a clear gap that is as large as possible. We won't go into extensive detail here, since the intent is to run a grid search, but we recommend checking out some standard literature on SVMs if you are interested.

Let's first split the breast cancer dataset variables X and y into train and test datasets and build an SVM model with default parameters. Then we'll evaluate its performance on the test dataset by leveraging our model_evaluation_utils module.

from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# prepare datasets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# build default SVM model
def_svc = SVC(random_state=42)
def_svc.fit(X_train, y_train)

# predict and evaluate performance
def_y_pred = def_svc.predict(X_test)
print('Default Model Stats:')
meu.display_model_performance_metrics(true_labels=y_test, predicted_labels=def_y_pred, classes=[0, 1])

Figure: Model performance metrics for the default SVM model on the breast cancer dataset

Would you look at that! Our default model yields a poor F1 score and an accuracy that merely reflects the proportion of the majority class, as depicted in the figure. Looking at the confusion matrix, you can clearly see that it predicts every data point as benign (label 1); basically, our model has learned nothing! Let's try tuning this model to see if we get something better.

Since we have chosen an SVM model, we specify some hyperparameters specific to it, including the C parameter (which controls the margin penalty of the SVM), the kernel function (used for transforming data into a higher-dimensional feature space), and gamma (which determines the influence a single training data point has). There are many other hyperparameters to tune, which you can check out in the scikit-learn documentation for SVC. We build a grid by supplying some preset values, then choose the score or metric we want to maximize--here, the accuracy of the model. Once that is done, we use five-fold cross-validation to build multiple models over this grid and evaluate them to get the best model. The detailed code and outputs follow.

from sklearn.model_selection import GridSearchCV

# setting the parameter grid
grid_parameters = {'kernel': ['linear', 'rbf'],
                   'gamma': [1e-3, 1e-4],
                   'C': [1, 10, 50, 100]}

# perform hyperparameter tuning
print("# Tuning hyper-parameters for accuracy\n")
clf = GridSearchCV(SVC(random_state=42), grid_parameters, cv=5, scoring='accuracy')
clf.fit(X_train, y_train)

# view accuracy scores for all the models
print("Grid scores for all the models based on CV:\n")
means = clf.cv_results_['mean_test_score']
stds = clf.cv_results_['std_test_score']
for mean, std, params in zip(means, stds, clf.cv_results_['params']):
    print("%0.3f (+/-%0.03f) for %r" % (mean, std * 2, params))

# check out best model performance
print("\nBest parameters set found on development set:", clf.best_params_)
print("Best model validation accuracy:", clf.best_score_)

Running this prints the mean cross-validated accuracy (with its spread) for each parameter combination in the grid, followed by the best combination and its validation accuracy. In our run, the best parameter set used a linear kernel and achieved a very high cross-validation accuracy. Let's take this optimized and tuned model and put it to the test on our test dataset.

gs_best = clf.best_estimator_
tuned_y_pred = gs_best.predict(X_test)

print('\n\nTuned Model Stats:')
meu.display_model_performance_metrics(true_labels=y_test, predicted_labels=tuned_y_pred, classes=[0, 1])
Figure: Model performance metrics for the tuned SVM model on the breast cancer dataset

Things are certainly looking great now! As depicted in the figure, the tuned model gives a dramatically better F1 score and accuracy on the test dataset. This should give you a clear indication of the power of hyperparameter tuning! The same scheme can be extended to different models and their respective hyperparameters. We can also play around with the evaluation measure we want to optimize; the scikit-learn framework provides many different scorers, such as adjusted_rand_score, average_precision, f1, recall, and so on.

Randomized Search

Grid search is a very popular method for optimizing hyperparameters in practice, due to its simplicity and the fact that it is embarrassingly parallelizable, which becomes important when the dataset we are dealing with is large. But it suffers from some major shortcomings, the most important one being the need to manually specify the grid, which brings a human element into a process that could benefit from a purely automatic mechanism.

Randomized parameter search is a modification of traditional grid search. It takes grid elements as input just like normal grid search, but it can also take distributions as input. For example, instead of explicitly supplying the values of the gamma parameter as we did in the last section, we can supply a distribution from which to sample gamma. The efficacy of randomized parameter search is based on the empirically and mathematically proven result that hyperparameter optimization functions normally have low effective dimensionality, i.e., certain parameters matter much more than others. We control the number of random parameter samples by specifying the number of iterations to run (n_iter); a higher number of iterations means a more granular parameter search but a higher computation time.

To illustrate the use of randomized parameter search, we reuse the earlier example but replace the gamma and C values with distributions. The results in this example may not be very different from the grid search, but we establish the process for future reference.

import scipy.stats
from sklearn.model_selection import RandomizedSearchCV

param_grid = {'C': scipy.stats.expon(scale=10),
              'gamma': scipy.stats.expon(scale=0.1),
              'kernel': ['rbf', 'linear']}

random_search = RandomizedSearchCV(SVC(random_state=42), param_distributions=param_grid,
                                   n_iter=50, cv=5)
random_search.fit(X_train, y_train)
print("Best parameters set found on development set:", random_search.best_params_)

In our run, the best parameter combination found again used a linear kernel.
# get best model, predict and evaluate performance
rs_best = random_search.best_estimator_
rs_y_pred = rs_best.predict(X_test)
meu.get_metrics(true_labels=y_test, predicted_labels=rs_y_pred)

In this example, we sample the values of the parameters C and gamma from exponential distributions, and we control the number of iterations of the model search with the n_iter parameter. While the overall model performance is similar to that of the grid search, the intent is to be aware of the different strategies available for model tuning.

Model Interpretation

The objective of data science and machine learning is to solve real-world problems, automate complex tasks, and make our lives easier and better. While data scientists spend a huge amount of time building, tuning, and deploying models, one must ask the questions "What is this going to be used for?" and "How does this really work?", and the most important question, "Why should I trust your model?". A business or organization is more concerned with the business objectives, generating profits, and minimizing losses by leveraging analytics and machine learning. Hence there is often a disconnect between analytics teams and key stakeholders, customers, clients, or management when trying to explain how models really work. Most of the time, explaining complex theoretical and mathematical concepts is really difficult for non-experts who may not have an idea of, or worse, may not be interested in knowing, all the gory details.

This brings us back to the main objective: can we explain and interpret machine learning models in an easy-to-understand way, such that anyone, even without thorough knowledge of machine learning, can understand them? The benefit of this would be twofold: machine learning models would not stop at being research projects or proofs-of-concept, and it would pave the way for higher adoption of machine learning based solutions in enterprises.

Some machine learning models use interpretable algorithms; for example, a decision tree gives you the importance of all the variables as an output, and the prediction path of any new data point can be analyzed using the tree, so we can learn which variables played a crucial role in a prediction. Unfortunately, the same cannot be said for a lot of models, especially for ones that have no notion of variable importance. Some machine learning models are interpretable by default, such as generative models like the Bayesian Rule List (Letham et al.), and models such as simple decision trees can be made interpretable by using feature importances as an output; the prediction path of a single tree, from its root to its leaves, can also be visualized, capturing the contribution of each feature to the estimator's decision policies. But this intuitiveness is not possible for complex non-linear models like random forests and deep neural networks, including convolutional neural networks (CNNs) and recurrent neural networks (RNNs). The lack of understanding of the complex nature of machine-learned decision policies means predictive models are still often viewed as black boxes.

Model interpretation can help data scientists and end users in a variety of ways. It helps bridge the gap that often exists between the technology teams and the business. For example, it can help identify the reason why a particular prediction is being made, which can then be verified against the domain knowledge of the end user through an easy-to-understand interpretation. It can also help data scientists understand the interactions among features, which can lead to better feature engineering and enhanced performance. Finally, it can help in model comparisons and in explaining the results better to business stakeholders.
While the simplest approach to having interpretable models is to use algorithms that lead to interpretable models, like decision trees and logistic regression, we have no guarantee that an interpretable model will provide the best performance, so we cannot always resort to such models. A recent, much better approach is to explain model predictions in an easy-to-interpret manner by learning an interpretable model locally around each prediction. This topic has in fact gained extensive attention very recently (2016); refer to the original research paper by M. T. Ribeiro, S. Singh, and C. Guestrin, "Why Should I Trust You?": Explaining the Predictions of Any Classifier, to understand more about model interpretation and the LIME framework, which proposes to solve this problem. The LIME framework attempts to explain any black-box model locally (hence the need to define the scope of interpretation, globally versus locally), and you can check out its GitHub repository for the implementation.

We will be leveraging another library named Skater, an open-sourced Python library designed to demystify the inner workings of predictive models. Skater defines the scope of interpreting models both globally (on the basis of a complete dataset) and locally (on the basis of an individual prediction). For global explanations, Skater makes use of model-agnostic feature importance and partial dependence plots to judge the bias of a model and understand its general behavior. To validate a model's decision policies for a single prediction, on the other hand, the library currently embraces a novel technique called Local Interpretable Model-agnostic Explanation (LIME, Ribeiro et al.), which uses local surrogate models to assess performance. The library is authored by Aaron Kramer, Pramit Choudhary, and the DataScience.com team; Skater is now a mainstream project and an excellent framework for model interpretation. We would like to acknowledge and thank the folks at DataScience.com--Ian Swanson, Pramit Choudhary, and Aaron Kramer--for developing this amazing framework, and especially Pramit for taking the time to explain to us in detail the features and vision of the Skater project.

Some advantages of leveraging Skater are mentioned as follows; some of them are still actively being worked on and improved.

- Production-ready code using a functional style of programming (a declarative programming paradigm).
- Interpretation for both classification and regression based models for supervised learning problems to start with, gradually extending support to interpretation for unsupervised learning problems as well. This includes computationally efficient partial dependence plots and model-independent feature importance plots.
- Workflow abstraction: a common interface to perform local interpretation for in-memory models (a model under development) as well as deployed models (a model that has been deployed in production).
- Extending LIME: added support for interpreting regression-based models, better sampling distributions for generating samples around a local prediction, and research into including non-linear models for local evaluation.
- Enabling support for rule-based interpretable models (Letham et al.).
- Better support for model evaluation of NLP-based models, e.g., layer-wise relevance propagation (Bach et al.).
- Better support for image interpretability, e.g., gradient-weighted class activation maps (Grad-CAM, Batra et al.).
Besides this, since the time the project started, the team has committed some improvements--namely support for regression--back into the original LIME repository, and they have other aspects of interpretation on their roadmap, along with further improvements to LIME in the future. You can easily install Skater by running the pip install -U skater command from your prompt or terminal. For more information, you can check out the project's GitHub repository and chat group.

Understanding Skater

Skater is an open source Python framework that aims to provide model-agnostic interpretation of predictive models. It is an active project on GitHub, with the previously mentioned features being actively worked on. The idea of Skater is to understand black-box machine learning models by querying them and interpreting their learned decision policies. The philosophy of Skater is that all models should be evaluated as black boxes, with the decision criteria of the models inferred and interpreted based on input perturbations and the corresponding output predictions. The scope of model interpretation offered by Skater enables us to do both global and local interpretation, as depicted in the following figure.

Figure: Scope of model interpretation (Source: DataScience.com)

Using the Skater library, we can explore feature importances, partial dependence plots for features, and the global and local fidelity of the predictions made by a model. The fidelity of a model can be described as the reasons on the basis of which the model calculated and predicted a particular class. For example, suppose we have a model that predicts whether a particular user transaction should be tagged as fraudulent or not. The output of the model will be much more trustworthy if we can identify, interpret, and depict that the reason the model flagged the prediction as fraud is that the amount is larger than the user's maximum transaction over the last six months and the location of the transaction is far away from the user's normal transaction areas. Contrast this with the case where we are only given a prediction label without any justifying explanation.
The general workflow within the Skater package is to create an interpretation object, create a model object, and run interpretation algorithms. An interpretation object takes as input a dataset and, optionally, some metadata like feature names and row identifiers; internally, the interpretation object generates a DataManager to handle data requests and sampling. While we can interpret any model by leveraging the model estimator object, for ensuring consistency and proper functionality across all of Skater's interfaces, model objects need to be wrapped in Skater's model objects, which can either be an InMemoryModel object over an actual model or a DeployedModel object for a model behind an API or web service. The following figure depicts a standard machine learning workflow and how Skater can be leveraged for interpreting these two different types of models.

Figure: Model interpretation in a standard machine learning workflow (Source: DataScience.com)

Model Interpretation in Action

Let's use our logistic regression model from earlier to do some model interpretation on the breast cancer dataset. For consistency, we will use the X_train and X_test variables as well as the logistic model object (our logistic regression model) created previously, and run some model interpretation algorithms on this model object. The standard workflow for model interpretation is to create a Skater interpretation object and a model object.

from skater.core.explanations import Interpretation
from skater.model import InMemoryModel

interpreter = Interpretation(X_test, feature_names=data.feature_names)
model = InMemoryModel(logistic.predict_proba, examples=X_train, target_names=logistic.classes_)
Once this is complete, we are ready to run model interpretation algorithms. We will start by generating feature importances, which give us an idea of the degree to which our predictive model relies on particular features. The Skater framework's feature importance implementation is based on an information-theoretic criterion: it measures the entropy in the change of predictions given a perturbation of a specific feature. The idea is that the more a model's decision-making criteria depend on a feature, the more the predictions will change as a function of perturbing that feature.

plots = interpreter.feature_importance.plot_feature_importance(model, ascending=False)

Figure: Feature importances obtained from our logistic regression model

We can clearly observe from the figure that the most important feature in our model is worst area, followed by mean perimeter and area error. Let's now consider the most important feature, worst area, and think about the ways it might influence the model's decision-making process during predictions. Partial dependence plots are an excellent tool for visualizing this. In general, a partial dependence plot describes the marginal impact of a specific feature on model predictions while holding the other features in the model constant; the derivative of the partial dependence describes the impact of that feature. The following code builds the partial dependence plot for the worst area feature in our model.

interpreter.partial_dependence.plot_partial_dependence(['worst area'], model,
                                                        grid_resolution=50,
                                                        with_variance=True,
                                                        figsize=(10, 5))
figure - one-way partial dependence plot for our logistic regression model predictor based on worst area from the plot in figure - we can see that the worst area feature has strong influence on the model decision making process based on the plotif the worst area value decreases from the model is more prone to classify the data point as benign (label which indicates no cancer this is definitely interestinglet' try to interpret some actual predictions now we will predict two data pointsone not having cancer (label and one having cancer (label )and try to interpret the prediction making process from skater core local_interpretation lime lime_tabular import limetabularexplainer exp limetabularexplainer(x_trainfeature_names=data feature_namesdiscretize_continuous=trueclass_names=[' '' ']explain prediction for data point having no canceri label exp explain_instance(x_test[ ]logistic predict_probashow_in_notebook(
|
16,049 |
figure - model interpretation for our logistic regression model' prediction for data point having no cancer (benignthe results depicted in figure - show the features that were primarily responsible for the model to predict the data point as label having no cancer we can also see the feature that was the most influential in this decision was worst arealet' run similar interpretation on data point with malignant cancer explain prediction for data point having malignant canceri label exp explain_instance(x_test[ ]logistic predict_probashow_in_notebook(figure - model interpretation for our logistic regression model' prediction for data point having cancer (malignant
|
16,050 |
the results depicted in figure - once again show us the features that were primarily responsible for the model to predict the data point as label having malignant cancer the feature worst area was again the most influential one and you can notice the stark difference in its value as compared to the previous data point hopefully this should give you some insight into how model interpretation works point to remember here is that we are just getting started with model interpretation based on the recent interest since but it is going to be good and worthwhile journey toward making models easy to understand for anyonem odel deployment the tough part of the whole modeling process is mostly the iterative process of feature engineeringmodel buildingtuningand evaluation once we are done with this iterative process of model developmentwe can breathe sigh of relief--but not for longthe final piece of the machine learning modeling puzzle is that of deploying the model in production so that we actually start using it in this sectionyou learn of the various ways you can deploy your models in action and the necessary dependencies that must be taken care of in this process model persistence model persistence is the simplest way of deploying model in this scheme of things we will persist our final model on permanent media like our hard drive and use this persisted version for making predictions in the future this simple scheme is good way to deploy models with minimal effort model development is generally done on static data source but once deployedtypically the model is used on constant stream of data either in realtime\near-realtime or in batches for exampleconsider bank fraud detection modelat the time of model developmentwe will have data collected over some historical time span we will use this data for the model development process and come up with model with good performancei model that is very good at flagging potential fraud transactions the model then needs to be deployed over all of the future transactions that the bank (or any other financial entityconducts it means that for all the transactionswe need to extract the data required for our model and feed that data to our model the model prediction is attached to the transaction and on the basis of it the transaction is flagged as fraud transaction or clean transaction in the simplest scheme of thingswe can write standalone python script that is given the new data as soon as it arrives it performs the necessary data transformations on the raw data and then reads our model from the permanent data store once we have the data and the model we can make prediction and this prediction communication can be integrated with the required operations these required operations are often tied to the business needs of the model in our case of tagging fraudulent transactionsit can involve notifying the fraud department or simply denying the transaction most of the steps involved in this process like data acquisition\retrievalextractionfeature engineering and actions to be taken upon prediction are related to the software or data engineering process and require custom software development and tinkering with data engineering processes like etl (extract-transform-loadfor persisting our model to diskwe can leverage libraries like pickle or joblibwhich is also available with scikit-learn this allows us to deploy and use the model in the futurewithout having to retrain it each time we want to use it from sklearn externals import joblib joblib dump(logistic'lr_model 
pkl'
|
16,051 |
this code will persist our model on the disk as file named lr_model pkl so whenever we will load this object in memory again we will get the logistic regression model object lr joblib load('lr_model pkl'lr logisticregression( = class_weight=nonedual=falsefit_intercept=trueintercept_scaling= max_iter= multi_class='ovr'n_jobs= penalty=' 'random_state=nonesolver='liblinear'tol= verbose= warm_start=falsewe can now use this lr objectwhich is our model loaded from the diskand make predictions sample is depicted as follows print(lr predict(x_test[ : ])y_test[ : ][ [ remember that once you have persisted modelyou can easily integrate it with python based script or application that can be scheduled to predict in realtime or batches of new data howeverproper engineering of the solution is necessary to ensure the right data reaches the model and the prediction output should also be broadcasted to the right channels custom development another option to deploy model is by developing the implementation of model prediction method separately the output of most machine learning algorithms is just the values of parameters that were learned once we have extracted these parameter valuesthe prediction process is pretty straightforward for examplethe prediction of logistic regression can be done by multiplying the coefficient vector with the input data vector this simple calculation will give us the score for the data vector that we can feed to the sigmoid\logistic function and extract the prediction for our input data this method has more roots in the software development process as the developed model is reduced to set of configurations and parameters and the main focus would be on engineering the data and the necessary mathematical computations using some programming language this configuration can be used to develop custom implementation pipeline in which the prediction process is just simple mathematical operation in-house model deployment lot of enterprises and organizations will not want to expose their private and confidential data on which models need to be built and deployed hence they will be leveraging their own software and data science expertise to build and deploy custom solutions on their own infrastructure this can involve leveraging commercial-off-the-shelf tools to deploy models or using custom open source tools and frameworks python based models can be easily integrated with frameworks like flask or django to create rest apis or micro-services on top of the prediction models and these api endpoints can then be exposed and integrated with any other solutions or applications that might need it
|
16,052 |
model deployment as service the computational world is seeing surge of the cloud and the xaas (anything as servicemodel in all areas this is also true for model development and deployment major providers like googlemicrosoftand amazon web services (awsprovide the facility of developing machine learning models using their cloud services and also the facility of deploying those models as service on the cloud this is very beneficial to the end users due to the reliability and ease of scaling offered by these service providers major downside to custom development or deploying models in-house is the extra work and maintenance required the scalability of the solution is also another problem that may exist for some kind of models like fraud predictiondue to the sheer number of prediction volumes required model deployment as service takes care of these issues as in most cases the model prediction can be accessed via request made to cloud based api endpoint (by supplying in the necessary data of coursethis capability frees the burden of maintaining an extra system for the developers of the application that will be consuming the outputs of our model in most casesif the developers can take care of passing the required data to the model deployment apisthey don' have to deal with the computational requirement of the prediction system and dealing with its maintenance another advantage of cloud deployment comes from how easy it is to update the models model development is an iterative process and the deployed models need to be updated from time to time to maintain their relevance by maintaining the models at single end point in the cloudwe simplify the process of model updating as only single replacement is requiredwhich can actually happen with the push of buttonwhich also syncs with all downstream applications summary this concludes the second part of this bookwhich focused on the machine learning pipeline we learned the most important aspects of the model building processwhich include model trainingtuningevaluationinterpretationand deployment details of various types of models was discussed in the model building section including classificationregressionand clustering models we also covered the three vital stages of any machine learning process with an example of the logistic regression model and how gradient descent is an important optimization process hands-on examples of classification and clustering model building processes were depicted on real datasets various strategies of evaluating classificationregressionand clustering models were also covered with detailed metrics for each of themwhich were depicted with real examples section of this book has been completely dedicated to tuning of models that include strategies for hyperparameter tuning and cross validation with detailed depiction of tuning on real models nascent field in machine learning is model interpretationwhere we try to understand and explain how model predictions really work detailed coverage on various aspects of model interpretation have also been coveredincluding feature importancespartial dependence plotsand prediction explanations finallywe also looked at some aspects pertaining to model deployment and the various options for deploying models this should give you good idea of how to start building and tuning models we will reinforce these concepts and methodologies in the third part of this book where we will be working on real-world case studies
|
16,053 |
real-world case studies
|
16,054 |
analyzing bike sharing trends "all work and no playis well-known proverb and we certainly do not want to be dull so farwe have covered the theoretical conceptsframeworksworkflowsand tools required to solve data science problems the use case driven theme begins with this in this section of the bookwe cover wide range of machine learning/data science concepts through real life case studies through this and subsequent we will discuss and apply concepts learned so far to solve some exciting real-world problems this discusses regression based models to analyze data and predict outcomes in particularwe will utilize the capital bike sharing dataset from the uci machine learning repository to understand regression models to predict bike usage demand through this we cover the following topicsthe bike sharing dataset to understand the dataset available from the uci machine learning repository problem statement to formally define the problem to be solved exploratory data analysis to explore and understand the dataset at hand regression analysis to understand regression modeling concepts and apply them to solve the problem the bike sharing dataset the crisp-dm model introduced in the initial talks about typical workflow associated with data science problem/project the workflow diagram has data at its center for reason before we get started on different techniques to understand and play with the datalet' understand its origins the bike sharing dataset is available from the uci machine learning repository it is one of the largest and probably also the longest standing online repository of datasets used in all sorts of studies and research from across the world the dataset we will be utilizing is one such dataset from among hundreds available on the web site the dataset was donated by university of portoportugal in more information is available at ##note we encourage you to check out the uci machine learning repository and particularly the bike sharing data set page we thank fanaee et al for their work and sharing the dataset through the uci machine learning repository fanaee-thadiand gamajoaoevent labeling combining ensemble detectors and background knowledgeprogress in artificial intelligence ( )pp - springer berlin heidelberg (cdipanjan sarkarraghav bali and tushar sharma sarkar et al practical machine learning with python
|
16,055 |
problem statement with environmental issues and health becoming trending topicsusage of bicycles as mode of transportation has gained traction in recent years to encourage bike usagecities across the world have successfully rolled out bike sharing programs under such schemesriders can rent bicycles using manualautomated kiosks spread across the city for defined periods in most casesriders can pick up bikes from one location and return them to any other designated place the bike sharing platforms from across the world are hotspots of all sorts of dataranging from travel timestart and end locationdemographics of ridersand so on this data along with alternate sources of information such as weathertrafficterrainand so on makes it an attractive proposition for different research areas the capital bike sharing dataset contains information related to one such bike sharing program underway in washington dc given this augmented (bike sharing details along with weather informationdatasetcan we forecast bike rental demand for this programexploratory data analysis now that we have an overview of the business case and formal problem statementthe very next stage is to explore and understand the data this is also called the exploratory data analysis (edastep in this sectionwe will load the data into our analysis environment and explore its properties it is worth mentioning again that eda is one of the most important phases in the whole workflow and can help with not just understanding the datasetbut also in presenting certain fine points that can be useful in the coming steps ##note the bike sharing dataset contains day level and hour level data we will be concentrating only on hourly data available in hour csv preprocessing the eda process begins with loading the data into the environmentgetting quick look at it along with count of records and number of attributes we will be making heavy use of pandas and numpy to perform data manipulation and related tasks for visualization purposeswe will use matplotlib and seaborn along with pandasvisualization capabilities wherever possible we begin with loading the hour csv and checking the shape of the loaded dataframe the following snippet does the same in [ ]hour_df pd read_csv('hour csv'print("shape of dataset::{}format(hour_df shape)shape of dataset::( the dataset contains more than records with attributes let' check the top few rows to see how the data looks we use the head(utility from pandas for the same to get the output in figure -
|
16,056 |
figure - sample rows from bike sharing dataset the data seems to have loaded correctly nextwe need to check what data types pandas has inferred and if any of the attributes require type conversions the following snippet helps us check the data types of all attributes in [ ]hour_df dtypes out[ ]instant int dteday object season int yr int mnth int hr int holiday int weekday int workingday int weathersit int temp float atemp float hum float windspeed float casual int registered int cnt int dtypeobject as mentioned in the documentation for the datasetthere are bike sharing as well as weather attributes available the attribute dteday would require type conversion from object (or string typeto timestamp attributes like seasonholidayweekdayand so on are inferred as integers by pandasand they would require conversion to categoricals for proper understanding before jumping into type casting attributesthe following snippet cleans up the attribute names to make them more understandable and pythonic in [ ]hour_df rename(columns={'instant':'rec_id''dteday':'datetime''holiday':'is_holiday''workingday':'is_workingday''weathersit':'weather_condition''hum':'humidity''mnth':'month''cnt':'total_count''hr':'hour''yr':'year'},inplace=true
|
16,057 |
now that we have attribute names cleaned upwe perform type-casting of attributes using utilities like pd to_datetime(and astype(the following snippet gets the attributes into proper data types in [ ]date time conversion hour_df['datetime'pd to_datetime(hour_df datetimecategorical variables hour_df['season'hour_df season astype('category'hour_df['is_holiday'hour_df is_holiday astype('category'hour_df['weekday'hour_df weekday astype('category'hour_df['weather_condition'hour_df weather_condition astype('category'hour_df['is_workingday'hour_df is_workingday astype('category'hour_df['month'hour_df month astype('category'hour_df['year'hour_df year astype('category'hour_df['hour'hour_df hour astype('category'distribution and trends the dataset after preprocessing (which we performed in the previous stepis ready for some visual inspection we begin with visualizing hourly ridership counts across the seasons the following snippet uses seaborn' pointplot to visualize the same in [ ]fig,ax plt subplots(sn pointplot(data=hour_df[['hour''total_count''season']] ='hour', ='total_count'hue='season',ax=axax set(title="season wise hourly distribution of counts"figure - season wise hourly data distribution the plot in figure - shows similar trends for all seasons with counts peaking in the morning between - am and in the evening between - pmpossibly due to high movement during start and end of office hours the counts are lowest for the spring seasonwhile fall sees highest riders across all hours
|
16,058 |
similarlydistribution of ridership across days of the week also presents interesting trends of higher usage during afternoon hours over weekendswhile weekdays see higher usage during mornings and evenings the code for the same is available in the jupyter notebook bike_sharing_eda ipynb the plot is as shown in figure - figure - day-wise hourly data distribution having observed hourly distribution of data across different categoricalslet' see if there are any aggregated trends the following snippet helps us visualize monthly ridership trends using seaborn' barplot(in [ ]fig,ax plt subplots(sn barplot(data=hour_df[['month''total_count']] ="month", ="total_count"ax set(title="monthly distribution of counts"the generated barplot showcases definite trend in ridership based on month of the year the months june-september see highest ridership looks like fall is good season for bike sharing programs in washingtond the plot is shown in figure - figure - month-wise ridership distribution
|
16,059 |
we encourage you to try and plot the four seasons across different subplots as an exercise to employ plotting concepts and see the trends for each season separately moving up the aggregation levellet' look at the distribution at year level our dataset contains year value of representing and representing we use violin plot to understand multiple facets of this distribution in crisp format ##note violin plots are similar to boxplots like boxplotsviolin plots also visualize inter-quartile range and other summary statistics like mean/median yet these plots are more powerful than standard boxplots due to their ability to visualize probability density of data this is particularly helpful if data is multimodal the following snippet plots yearly distribution on violin plots in [ ]sn violinplot(data=hour_df[['year''total_count']] ="year", ="total_count"figure - clearly helps us understand the multimodal distribution in both and ridership counts with having peaks at lower values as compared to the spread of counts is also much more for although the max density for both the years is between - rides figure - violin plot showcasing year-wise ridership distribution outliers while exploring and learning about any datasetit is imperative that we check for extreme and unlikely values though we handle missing and incorrect information while preprocessing the datasetoutliers are usually caught during eda outliers can severely and adversely impact the downstream steps like modeling and the results we usually utilize boxplots to check for outliers in the data in the following snippetwe analyze outliers for numeric attributes like total_counttemperatureand wind_speed
|
16,060 |
in [ ]fig,(ax ,ax )plt subplots(ncols= sn boxplot(data=hour_df[['total_count''casual','registered']],ax=ax sn boxplot(data=hour_df[['temp','windspeed']],ax=ax the generated plot is shown in figure - we can easily mark out that for the three count related attributesall of them seem to have sizable number of outlier values the casual rider distribution has overall lower numbers though for weather attributes of temperature and wind speedwe find outliers only in the case of wind speed figure - outliers in the dataset we can similarly try to check outliers at different granularity levels like hourlymonthlyand so on the visualization in figure - showcases boxplots at hourly level (the code is available in the bike_sharing_ eda ipynb jupyter notebookfigure - outliers in hourly distribution of ridership
|
16,061 |
correlations correlation helps us understand relationships between different attributes of the data since this focuses on forecastingcorrelations can help us understand and exploit relationships to build better models ##note it is important to understand that correlation does not imply causation we strongly encourage you to explore more on the same the following snippet first prepares correlational matrix using the pandas utility function corr(it then uses heat map to plot the correlation matrix in [ ]corrmatt hour_df[["temp","atemp""humidity","windspeed""casual","registered""total_count"]corr(mask np array(corrmattmask[np tril_indices_from(mask)false sn heatmap(corrmattmask=maskvmax square=true,annot=truefigure - shows the output correlational matrix (heat mapshowing values in the lower triangular form on blue to red gradient (negative to positive correlationfigure - correlational matrix
|
16,062 |
the two count variablesregistered and casualshow obvious strong correlation to total_count similarlytemp and atemp show high correlation wind_speed and humidity have slight negative correlation overallnone of the attributes show high correlational statistics regression analysis regression analysis is statistical modeling technique used by statisticians and data scientists alike it is the process of investigating relationships between dependent and independent variables regression itself includes variety of techniques for modeling and analyzing relationships between variables it is widely used for predictive analysisforecastingand time series analysis the dependent or target variable is estimated as function of independent or predictor variables the estimation function is called the regression function ##note in very abstract senseregression is referred to estimation of continuous response/target variables as opposed to classificationwhich estimates discrete targets the height-weight relationship is classic example to get started with regression analysis the example states that weight of person is dependent on his/her height thuswe can formulate regression function to estimate the weight (dependent variablegiven height (independent variableof personprovided we have enough training examples we discuss more on this in the coming section regression analysis models the relationship between dependent and independent variables it should be kept in mind that correlation between dependent and independent variables does not imply causationtypes of regression there are multiple techniques that have evolved over the years and that help us perform regression analysis in generalall regression modeling techniques involve the followingthe independent variable the dependent or target variable unknown parameter( )denoted as thusa regression function relates these entities asy , the function (needs to be specified or learned from the dataset available depending upon the data and use case at handthe following are commonly used regression techniqueslinear regressionas the name suggestsit maps linear relationships between dependent and independent variables the regression line is straight line in this technique the aim here is to minimize the error (sum of squared error for instancelogistic regressionin cases where the dependent variable is binary ( / or yes/no)this technique is utilized it helps us determine the probability of the binary target variable it derives its name from the logit function used by this technique the aim here is to maximize the likelihood of observed values this technique has more in common with classification techniques than regression
|
16,063 |
non-linear regressionin cases where dependent variable is related polynomially to independent variablei the regression function has independent variablespower of more than it is also termed as polynomial regression regression techniques may also be classified as non-parametric assumptions regression analysis has few general assumptions while specific analysis techniques have added (or reducedassumptions as well the following are important general assumptions for regression analysisthe training dataset needs to be representative of the population being modeled the independent variables are linearly independenti one independent variable cannot be explained as linear combination of others in other wordsthere should be no multicollinearity homoscedasticity of errori the variance of erroris consistent across the sample evaluation criteria evaluation of model performance is an important aspect of data science use cases we should be able to not just understand the outcomes but also evaluate how models compare to each other or whether the performance is acceptable or not in generalevaluation metrics and performance guidelines are pretty use case and domain specificregression analysis often uses few standard metrics esidual analysis regression is an estimation of target variable using the regression function on explanatory variables since the output is an approximationthere will be some difference between the predicted value of target and the observed value residual is the difference between the observed and the predicted (output of the regression functionmathematicallythe residual or difference between the observed and the predicted value of the ith data point is given asei regression model that has nicely fit the data will have its residuals display randomness ( lack of any patternthis comes from the homoscedasticity assumption of regression modeling typically scatter plots between residuals and predictors are used to confirm the assumption any patternresults in violation of this property and points toward poor fitting model normality test ( - plotthis is visual/graphical test to check for normality of the data this test helps us identify outliersskewnessand so on the test is performed by plotting the data verses theoretical quartiles the same data is also plotted on histogram to confirm normality the following are sample plots showcasing data confirming the normality test (see figure -
|
16,064 |
figure - normal plot ( - ploton the left and histogram to confirm normality on the right any deviation from the straight line in normal plot or skewness/multi-modality in histogram shows that the data does not pass the normality test -squaredgoodness of fit -squared or the coefficient of determination is another measure used to check for goodness of fit for regression analysis it is measure used to determine if the regression line is able to indicate the variance in dependent variable as explained by the independent variables(sr-squared is numeric value between and with pointing to the fact that the independent variable(sare able to explain the variance in dependent variable values closer to are indicative of poor fitting models cross validation as discussed in model generalization is also an important aspect of working on data science problems model which overfits its training set may perform poorly on unseen data and lead to all sorts of problems and business impacts hencewe employ -fold cross validation on regression models as well to make sure there is no overfitting happening modeling the stage is now set to start modeling our bike sharing dataset and solve the business problem of predicting bike demand for given date time we will utilize the concepts of regression analysis discussed in the previous section to model and evaluate the performance of our models the dataset was analyzed and certain transformations like renaming attributes and type casting were performed earlier in the since the dataset contains multiple categorical variablesit is imperative that we encode the nominal ones before we use them in our modeling process
|
16,065 |
the following snippet showcases the function to one hot encode categorical variablesbased on methodologies we discussed in detail in feature engineering and selection def fit_transform_ohe(df,col_name)"""this function performs one hot encoding for the specified column argsdf(pandas dataframe)the data frame containing the mentioned column name col_namethe column to be one hot encoded returnstuplelabel_encoderone_hot_encodertransformed column as pandas series ""label encode the column le preprocessing labelencoder(le_labels le fit_transform(df[col_name]df[col_name+'_label'le_labels one hot encoding ohe preprocessing onehotencoder(feature_arr ohe fit_transform(df[[col_name+'_label']]toarray(feature_labels [col_name+' '+str(cls_labelfor cls_label in le classes_features_df pd dataframe(feature_arrcolumns=feature_labelsreturn le,ohe,features_df we use the fit_transform_ohe(function along with transform_ohe(to encode the categoricals the label and one hot encoders are available as part of scikit-learn' preprocessing module ##note we will be using scikit and sklearn interchangeably in the coming sections as discussed in the earlier we usually divide the dataset at hand into training and testing sets to evaluate the performance of our models in this case as wellwe use scikit-learn' train_test_split(function available through model_selection module we split our dataset into and as train and testrespectively the following snippet showcases the same in [ ]xx_testyy_test train_test_split(hour_df iloc[:, :- ]hour_df iloc[:,- ]test_size= random_state= reset_index(inplace=truey reset_index(x_test reset_index(inplace=truey_test y_test reset_index(
|
16,066 |
the following snippet loops through the list of categorical variables to transform and prepare list of encoded attributes in [ ]cat_attr_list ['season','is_holiday''weather_condition','is_workingday''hour','weekday','month','year'encoded_attr_list [for col in cat_attr_listreturn_obj fit_transform_ohe( ,colencoded_attr_list append({'label_enc':return_obj[ ]'ohe_enc':return_obj[ ]'feature_df':return_obj[ ]'col_name':col}##note though we have transformed all categoricals into their one-hot encodingsnote that ordinal attributes such as hourweekdayand so on do not require such encoding nextwe merge the numeric and one hot encoded categoricals into dataframe that we will use for our modeling purposes the following snippet helps us prepare the required dataset in [ ]feature_df_list [ [numeric_feature_cols]feature_df_list extend([enc['feature_df'for enc in encoded_attr_list if enc['col_name'in subset_cat_features]train_df_new pd concat(feature_df_listaxis= print("shape::{}format(train_df_new shape)we prepared new dataframe using numeric and one hot encoded categorical attributes from the original training dataframe the original dataframe had such attributes (including both numeric and categoricalspost this transformationthe new dataframe has attributes due to one hot encoding of the categoricals linear regression one of the simplest regression analysis techniques is linear regression as discussed earlier in the linear regression is the analysis of relationship between the dependent and independent variables linear regression assumes linear relationship between the two variables extending on the general regression analysis notationlinear regression takes the following formy bx in this equationy is the dependent variablex is the independent variable the symbol denotes the intercept of the regression line and is the slope of it numerous lines can be fitted to given dataset based on different combinations of the intercept ( aand slope ( bthe aim is to find the best fitting line to model our data
|
16,067 |
if we think for secondwhat would best fitting line look likesuch line would invariably have the least error/residuali the difference between the predicted and observed would be least for such line the ordinary least squares criterion is one such technique to identify the best fitting line the algorithm tries to minimize the error with respect to slope and intercept it uses the squared error formshown as followsq yobserved predicted whereq is the total squared error we minimize the total error to get the slope and intercept of the best fitting line training now that we have the background on linear regression and olswe'll get started with the model building the linear regression model is exposed through scikit-learn' linear_model module like all machine learning algorithms in scikitthis also works on the familiar fit(and predict(theme the following snippet prepares the linear regression object for us in [ ] train_df_new yy total_count values reshape(- , lin_reg linear_model linearregression(one simple way of proceeding would be call the fit(function to build our linear regression model and then call the predict(function on the test dataset to get the predictions for evaluation we also want to keep in mind the aspects of overfitting and reduce its affects and obtain generalizable model as discussed in the previous section and earlier cross validation is one method to keep overfitting in check we thus use the -fold cross validation (specifically -foldas shown in the following snippet in [ ]predicted cross_val_predict(lin_regxycv= the function cross_val_predict(is exposed through model_selection module of sklearn this function takes the model objectpredictorsand targets as inputs we specify the in -fold using the cv parameter in our examplewe use -fold cross validation this function returns cross validated prediction values as fitted by the model object we use scatter plot to analyze our predictions the following snippet uses matplotlib to generate scatter plot between residuals and observed values in [ ]figax plt subplots(ax scatter(yy-predictedax axhline(lw= ,color='black'ax set_xlabel('observed'ax set_ylabel('residual'plt show(
|
16,068 |
figure - residual plot the plot in figure - clearly violates the homoscedasticity assumptionwhich is about residuals being random and not following any pattern to further quantify our findings related to the modelwe plot the cross-validation scores we use the cross_val_score(function available again as part of the model_selection modulewhich is shown in the visualization in figure - figure - cross validation scores the -squared or the coefficient of determination is on an average for -fold cross validation this points to the fact that the predictor is only able to explain of the variance in the target variable you are encouraged to plot and confirm the normality of data it is important to understand if the data can be modeled by linear model or not this is being left as an exercise for you to explore testing the linear regression model prepared and evaluated in the training phase needs to be checked for its performance on completely un-seen datasetthe testing dataset at the beginning of this sectionwe used the train_test_split(function to keep dataset specifically for testing purposes
|
16,069 |
but before we can use the test dataset on the learned regression linewe need to make sure the attributes have been through the same preprocessing in both training and testing sets since we transformed categorical variables into their one hot encodings in the train datasetin the following snippet we perform the same actions on the test dataset as well in [ ]test_encoded_attr_list [for enc in encoded_attr_listcol_name enc['col_name'le enc['label_enc'ohe enc['ohe_enc'test_encoded_attr_list append({'feature_df':transform_ohe(x_testle,ohecol_name)'col_name':col_name}test_feature_df_list [x_test[numeric_feature_cols]test_feature_df_list extend([enc['feature_df'for enc in test_encoded_attr_list if enc['col_name'in subset_cat_features]test_df_new pd concat(test_feature_df_listaxis= print("shape::{}format(test_df_new shape)the transformed test dataset is shown in figure - figure - test dataset after transformations the final piece of the puzzle is to use the predict(function of the linearregression object and compare our results/predictions the following snippet performs the said actions in [ ]x_test test_df_new y_test y_test total_count values reshape(- , y_pred lin_reg predict(x_testresiduals y_test-y_pred we also calculate the residuals and use them to prepare the residual plotsimilar to the one we created during training step the following snippet plots the residual plot on the test dataset in [ ]figax plt subplots(ax scatter(y_testresiduals
|
16,070 |
ax axhline(lw= ,color='black'ax set_xlabel('observed'ax set_ylabel('residuals'ax title set_text("residual plot with -squared={}format(np average( _score))plt show(the generated plot shows an -squared that is comparable to training performance the plot is shown in figure - figure - residual plot for test dataset it is clearly evident from our evaluation that the linear regression model is unable to model the data to generate decent results though it should be noted that the model is performing equally on both training and testing datasets it seems like case where we would need to model this data using methods that can model non-linear relationships exercise in this sectionwe used training and testing datasets with attributes (including both numeric and one hot encoded categoricalsthe performance is dismal due to non-linearity and other factors experiment with different combination of attributes (use only subset or use only numerical attributes or any combination of themand prepare different linear regression models follow the same steps as outlined in this section check the performance against the model prepared in this section and analyze if better performing model could be possible decision tree based regression decision trees are supervised learning algorithms used for both regression and classification problems they are simple yet powerful in modeling non-linear relationships being non-parametric modelthe aim of this algorithm is to learn model that can predict outcomes based on simple decision rules (for instanceif-else conditionsbased on features the interpretability of decision trees makes them even more lucrativeas we can visualize the rules it has inferred from the data
|
16,071 |
we explain the concepts and terminologies related to decision trees using an example suppose we have hypothetical dataset of car models from different manufacturers assume each data point has features like fuel_capacityengine_capacitypriceyear_of_purchasemiles_drivenand mileage given this datawe need model that can predict the mileage given other attributes since decision trees are supervised learning algorithmswe have certain number of data points with actual mileage values available decision tree starts off at the root and divides the dataset into two or more non-overlapping subsetseach represented as child node of the root it divides the root into subsets based on specific attribute it goes on performing the split at every node until leaf node is achieved where the target value is available there might be lot many questions on how it all happensand we will get to each of them in bit for better understandingassume figure - is the structure of the decision tree inferred from the dataset at hand year of purchase >= miles driven price > cc cc mi/gal fuel capacity mi/gal engine capacity mi/gal > miles miles >$ $ gal mi/gal mi/gal > gal mi/gal figure - sample decision tree the visualization depicted in figure - showcases sample decision tree with leaf nodes pointing toward target values the tree starts off by splitting the dataset at root based on year of purchase with left child representing purchases before and right child for purchases after and similarly for other nodes when presented with new/unseen data pointwe simply traverse the tree and arrive at leaf node which determines the target value even though the previous example is simpleit clearly brings out the interpretability of the model as well as its ability to learn simple rules ode splitting decision trees work in top-down manner and node splitting is an important concept for any decision tree algorithm most algorithms follow greedy approach to divide the input space into subsets the basic process in simple terms is to try and split data points using different attributes/features and test against cost function the split resulting in least cost is selected at every step classification and regression problems use different set of cost functions some of the most common ones are as follows mean squared error (mse)used mainly for regression treesit is calculated as the square of difference between observed and predicted values
|
16,072 |
mean absolute errorused for regression treesit is similar to mse though we only use the difference between the observed and predicted values variance reductionthis was first introduced with cart algorithmand it uses the standard formula of variance and we choose the split that results in least variance gini impurity/indexmainly used by classification treesit is measure of randomly chosen data point to have an incorrect label given it was labeled randomly information gainagain used mainly for classification problemsit is also termed as entropy we choose the splits based on the amount of information gain the higher information gainthe better it is ##note these are some of the most commonly used cost functions there are many more which are used under specific scenarios topping criteria as mentioneddecision trees follow greedy recursive splitting of nodesbut how or when do they stopthere are many strategies applied to define the stopping criteria the most common being the minimum count of data points node splitting is stopped if further splitting would violate this constraint another constraint used is the depth of the tree the stopping criteria together with other parameters help us achieve trees that can generalize well tree that is very deep or has too many non-leaf nodes often results in overfitting hyperparameters hyperparameters are the knobs and controls we set with an aim to optimize the model' performance on unseen data these hyperparameters are different from parameters which are learned by our learning algorithm over the course of training process hyperparameters help us achieve objectives of avoiding overfitting and so on decision trees provide us with quite few hyperparameters to play withsome of which we discussed in maximum depthminimum samples for leaf nodesminimum samples to split internal nodesmaximum leaf nodesand so on are some of the hyperparameters actively used to improve performance of decision trees we will use techniques like grid search (refresh your memory from to identify optimal values for these hyperparameters in the coming sections decision tree algorithms decision trees have been around for quite some time now they have evolved with improvements in algorithms based on different techniques over the years some of the most commonly used algorithms are listed as followscart or classification and regression tree id or iterative dichotomizer now that we have decent understanding of decision treeslet' see if we can achieve improvements by using them for our regression problem of predicting the bike sharing demand
|
16,073 |
training similar to the process with linear regressionwe will use the same preprocessed dataframe train_df_new with categoricals transformed into one hot encoded form along with other numerical attributes we also split the dataset into train and test again using the train_test_split(utility from scikit-learn the training process for decision trees is bit involved and different as compared to linear regression even though we performed cross validation while training our linear regression modelwe did not have any hyperparameters to tune in the case of decision treeswe have quite handful of them (some of which we even discussed in the previous sectionbefore we get into the specifics of obtaining optimal hyperparameterswe will look at the decisiontreeregressor from sklearn' tree module we do so by instantiating regressor object with some of the hyperparameters set as follows in [ ]dtr decisiontreeregressor(max_depth= min_samples_split= max_leaf_nodes= this code snippet prepares decisiontreeregressor object that is set to have maximum depth of maximum leaf nodes as and minimum number of samples required to split node as though there can be many morethis example outlines how hyperparameters are utilized in algorithms ##note you are encouraged to try and fit the default decision tree regressor on the training data and observe its performance on testing dataset as mentioneddecision trees have an added advantage of being interpretable we can visualize the model object using graphviz and pydot librariesas shown in the following snippet in [ ]dot_data tree export_graphviz(dtrout_file=nonegraph pydotplus graph_from_dot_data(dot_datagraph write_pdf("bikeshare pdf"the output is pdf file showcasing decision tree with hyperparameters as set in the previous step the following plot as depicted in figure - shows the root node being split on attribute and then going on until depth of is achieved there are some leaves at depth lesser than as well each node clearly marks out the attributes associated with it
|
16,074 |
figure - decision tree with defined hyperparameters on bike sharing dataset now we start with the actual training process as must be evident from our workflow so farwe would train our regressor using -fold cross validation since we have hyperparameters as well in case of decision trees to worrywe need method to fine-tune them as well there are many ways of fine-tuning the hyperparametersthe most common ones are grid search and random searchwith grid search being the more popular one as the name suggestsrandom search randomly searches the combinations of hyperparameters to find the best combinationgrid search on the other hand is more systematic approach where all combinations are tried before the best is identified to make our lives easiersklearn provides utility to grid search the hyperparameters while cross validating the model using the gridsearchcv(method from model_selection module the gridsearchcv(method takes the regression/classifier object as input parameter along with dictionary of hyperparametersnumber of cross validations requiredand few more we use the following dictionary to define our grid of hyperparameters in [ ]param_grid {"criterion"["mse""mae"]"min_samples_split"[ ]"max_depth"[ ]"min_samples_leaf"[ ]"max_leaf_nodes"[ ]the dictionary basically provides list of feasible values for each of the hyperparameters that we want to fine-tune the hyperparameters are the keyswhile the values are presented as list of possible values of these hyperparameters for instanceour dictionary provides max_depth with possible values of and levels the gridsearchcv(function will in turn search in this defined list of possible values to arrive at the best one value the following snippet prepares gridsearchcv object and fits our training dataset to it in [ ]grid_cv_dtr gridsearchcv(dtrparam_gridcv=
|
16,075 |
the grid search of hyperparameters with -fold cross validation is an iterative process wrappedoptimizedand standardized by gridsearchcv(function the training process takes time due to the same and results in quite few useful attributes to analyze the best_score_ attribute helps us get the best cross validation score our decision tree regressor could achieve we can view the hyperparameters for the model that generates the best score using best_params_ we can view the detailed information of each iteration of gridsearchcv(using the cv_results_ attribute the following snippet showcases some of these attributes in [ ]print(" -squared::{}format(grid_cv_dtr best_score_)print("best hyperparameters::\ {}format(grid_cv_dtr best_params_) -squared:: best hyperparameters:{'min_samples_split' 'max_depth' 'max_leaf_nodes' 'min_samples_leaf' 'criterion''mse'the results are decent and show dramatic improvement over our linear regression model let' first try to understand the learning/model fitting results across different settings of this model fitting to get to different models prepared during our grid searchwe use the cv_results_ attribute of our gridsearchcv object the cv_results_ attribute is numpy array that we can easily convert to pandas dataframe the dataframe is shown in figure - figure - dataframe showcasing tuning results with few attributes of grid search with cv it important to understand that grid search with cross validation was optimizing on finding the best set of hyperparameters that can help prepare generalizable decision tree regressor it may be possible that there are further optimizations possible we use seaborn to plot the impact of depth of the tree on the overall score along with number of leaf nodes the following snippet uses the same dataframe we prepared using cv_results_ attribute of gridsearchcv object discussed previously in [ ]fig,ax plt subplots(sn pointplot(data=df[['mean_test_score''param_max_leaf_nodes''param_max_depth']] ='mean_test_score', ='param_max_depth'hue='param_max_leaf_nodes',ax=axax set(title="affect of depth and leaf nodes on model performance"
|
16,076 |
the output shows sudden improvement in score as depth increases from to while gradual improvement as we reach from the impact of number of leaf nodes is rather interesting the difference in scores between and leaf nodes is strikingly not much this is clear indicator that further finetuning is possible figure - depicts the visualization showcasing these results figure - mean test score and impact of tree depth and count of leaf nodes as mentionedthere is still scope of fine-tuning to further improve the results it is therefore decision which is use case and cost dependent cost can be in terms of efforttime and corresponding improvements achieved for nowwe'll proceed with the best model our gridsearchcv object has helped us identify testing once we have model trained with optimized hyperparameterswe can begin with our usual workflow to test performance on an unseen dataset we will utilize the same preprocessing as discussed while preparing test set for linear regression (you may refer to the "testingsection of linear regression and/or the jupyter notebook decision_tree_regression ipynbthe following snippet predicts the output values for the test dataset using the best estimator achieved during training phase in [ ]y_pred best_dtr_model predict(x_testresiduals y_test flatten(y_pred the final step is to view the -squared score on this dataset well-fit model should have comparable performance on this set as wellthe same is evident from the following snippet in [ ]print(" -squared::{}format( _score) -squared:: as is evident from the -squared valuethe performance is quite comparable to our training performance we can conclude by saying decision tree regressor was better at forecasting bike demand as compared to linear regression
|
16,077 |
next steps decision trees helped us achieve better performance over linear regression based modelsyet there are improvements possible the following are few next steps to ponder and keep in mindmodel fine-tuningwe achieved drastic improvement from using decision treesyet this can be improved further by analyzing the results of trainingcross validationand so on acceptable model performance is something that is use case dependent and is usually discussed upon while formalizing the problem statement in our casean -squared of might be very good or missing the markhence the results need to be discussed with all stakeholders involved other models and ensemblingif our current models do not achieve the performance criteriawe need to evaluate other algorithms and even ensembles there many other regression models to explore ensembles are also quite useful and are used extensively machine learning pipelinethe workflow shared in this was verbose to assist in understanding and exploring concepts and techniques once sequence of preprocessing and modeling steps have stabilizedstandardized pipelines are used to maintain consistency and validity of the entire process you are encouraged to explore sklearn' pipeline module for further details summary this introduced the bike sharing dataset from the uci machine learning repository we followed the machine learning workflow discussed in detail in the previous section of the book we started off with brief discussion about the dataset followed by formally defining the problem statement once we had mission to forecast the bike demandthe next was to get started with exploratory data analysis to understand and uncover properties and patterns in the data we utilized pandasnumpyand seaborn/matplotlib to manipulatetransformand visualize the dataset at hand the line plotsbar chartsbox plotsviolin plotsetc all helped us understand various aspects of the dataset we then took detour and explored the regression analysis the important conceptsassumptionsand types of regression analysis techniques were discussed brieflywe touched on various performanceevaluation criteria typically used for regression problems like residual plotsnormal plotsand coefficient of determination equipped with better understanding of dataset and regression itselfwe started off with simple algorithm called linear regression it is not only simple algorithm but one of the most well studied and extensively used algorithms for regression use cases we utilized sklearn' linear_model to build and test our dataset for the problem at hand we also utilized the model_selection module to split our dataset and cross validate our model the next step saw us graduating to decision tree based regression model to improve on the performance of linear regression we touched upon the concepts and important aspects related to decision trees before using them for modeling our dataset the same set of preprocessing steps was used for both linear regression and decision trees finallywe concluded the by briefly discussing about next steps for improvement and enhancements this sets the flow and context for coming which will build on the concepts and workflows from here on
|
16,078 |
analyzing movie reviews sentiment in this we continue with our focus on case-study oriented where we will focus on specific real-world problems and scenarios and how we can use machine learning to solve them we will cover aspects pertaining to natural language processing (nlp)text analyticsand machine learning in this the problem at hand is sentiment analysis or opinion miningwhere we want to analyze some textual documents and predict their sentiment or opinion based on the content of these documents sentiment analysis is perhaps one of the most popular applications of natural language processing and text analytics with vast number of websitesbooks and tutorials on this subject typically sentiment analysis seems to work best on subjective textwhere people express opinionsfeelingsand their mood from real-world industry standpointsentiment analysis is widely used to analyze corporate surveysfeedback surveyssocial media dataand reviews for moviesplacescommoditiesand many more the idea is to analyze and understand the reactions of people toward specific entity and take insightful actions based on their sentiment text corpus consists of multiple text documents and each document can be as simple as single sentence to complete document with multiple paragraphs textual datain spite of being highly unstructuredcan be classified into two major types of documents factual documents that typically depict some form of statements or facts with no specific feelings or emotion attached to them these are also known as objective documents subjective documents on the other hand have text that expresses feelingsmoodsemotionsand opinions sentiment analysis is also popularly known as opinion analysis or opinion mining the key idea is to use techniques from text analyticsnlpmachine learningand linguistics to extract important information or data points from unstructured text this in turn can help us derive qualitative outputs like the overall sentiment being on positiveneutralor negative scale and quantitative outputs like the sentiment polaritysubjectivityand objectivity proportions sentiment polarity is typically numeric score that' assigned to both the positive and negative aspects of text document based on subjective parameters like specific words and phrases expressing feelings and emotion neutral sentiment typically has polarity since it does not express and specific sentimentpositive sentiment will have polarity and negative of courseyou can always change these thresholds based on the type of text you are dealing withthere are no hard constraints on this in this we focus on trying to analyze large corpus of movie reviews and derive the sentiment we cover wide variety of techniques for analyzing sentimentwhich include the following unsupervised lexicon-based models traditional supervised machine learning models newer supervised deep learning models advanced supervised deep learning models (cdipanjan sarkarraghav bali and tushar sharma sarkar et al practical machine learning with python
Besides looking at various approaches and models, we also focus on important aspects in the machine learning pipeline, including text pre-processing, normalization, and in-depth analysis of models, including model interpretation and topic models. The key idea here is to understand how we tackle a problem like sentiment analysis on unstructured text, learn various techniques and models, and understand how to interpret the results. This will enable you to use these methodologies in the future on your own datasets. Let's get started!

Problem Statement

The main objective in this chapter is to predict the sentiment for a number of movie reviews obtained from the Internet Movie Database (IMDb). This dataset contains 50,000 movie reviews that have been pre-labeled with "positive" and "negative" sentiment class labels based on the review content. Besides this, there are an additional 50,000 movie reviews that are unlabeled. The dataset can be obtained from http://ai.stanford.edu/~amaas/data/sentiment/, courtesy of Stanford University and Andrew Maas, Raymond Daly, Peter Pham, Dan Huang, Andrew Ng, and Christopher Potts. This dataset was also used in their famous paper, Learning Word Vectors for Sentiment Analysis, Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics (ACL 2011). They provide datasets in the form of raw text as well as an already processed bag of words format. We will only be using the raw labeled movie reviews for our analyses in this chapter. Hence, our task will be to predict the sentiment of 15,000 labeled movie reviews and use the remaining 35,000 reviews for training our supervised models. We will still predict sentiments for only 15,000 reviews in the case of unsupervised models, to maintain consistency and enable ease of comparison.

Setting Up Dependencies

We will be using several Python libraries and frameworks specific to text analytics, NLP, and machine learning. While most of them will be mentioned in each section, you need to make sure you have pandas, numpy, scipy, and scikit-learn installed, which will be used for data processing and machine learning. Deep learning frameworks used in this chapter include Keras with the TensorFlow backend, but you can also use Theano as the backend if you choose to do so. NLP libraries that will be used include spaCy, nltk, and gensim. Do remember to check that your installed nltk version is recent enough; otherwise, the ToktokTokenizer class may not be present. If you want to use a lower nltk version for some reason, you can use any other tokenizer, like the default word_tokenize() based on the TreebankWordTokenizer. You should also be on reasonably recent releases of gensim and spaCy; we recommend using the latest version of spaCy, which was recently released (the 2.x line), as it has fixed several bugs and added several improvements. You also need to download the necessary dependencies and corpora for spaCy and nltk, in case you are installing them for the first time. The following snippets should get this done.

For nltk, you need to type the following code from a Python or IPython shell after installing nltk using either pip or conda.

import nltk
nltk.download('all', halt_on_error=False)

For spaCy, you need to type the following code in a Unix shell/Windows command prompt, to install the library (use pip install spacy if you don't want to use conda) and also get the English model dependency.

conda config --add channels conda-forge
conda install spacy
python -m spacy download en
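Since several of these libraries are version sensitive, a quick sanity check of what is installed can save debugging time later; this is just a generic snippet, not something specific to this book's setup.

import nltk
import gensim
import spacy
import sklearn
import keras

# print the installed versions of the key dependencies
print('nltk:', nltk.__version__)
print('gensim:', gensim.__version__)
print('spacy:', spacy.__version__)
print('scikit-learn:', sklearn.__version__)
print('keras:', keras.__version__)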
We also use our custom developed text pre-processing and normalization module, which you will find in the files named contractions.py and text_normalizer.py. Utilities related to supervised model fitting, prediction, and evaluation are present in model_evaluation_utils.py, so make sure you have these modules in the same directory as the other Python files and Jupyter notebooks for this chapter.

Getting the Data

The dataset will be available along with the code files for this chapter in the GitHub repository for this book, as movie_reviews.csv, containing 50,000 labeled IMDb movie reviews. You can also download the same data directly and easily load it in Python using the read_csv() utility function from pandas.

Text Pre-Processing and Normalization

One of the key steps before diving into the process of feature engineering and modeling involves cleaning, pre-processing, and normalizing text to bring text components like phrases and words to some standard format. This enables standardization across a document corpus, which helps build meaningful features and helps reduce noise that can be introduced due to many factors like irrelevant symbols, special characters, XML and HTML tags, and so on. The file named text_normalizer.py has all the necessary utilities we will be using for our text normalization needs. You can also refer to the Jupyter notebook named Text Normalization Demo.ipynb for a more interactive experience. The main components in our text normalization pipeline are described in this section; rough sketches of a few of these helper functions also appear right after the normalization demo later in this section.

Cleaning text: Our text often contains unnecessary content like HTML tags, which do not add much value when analyzing sentiment. Hence we need to make sure we remove them before extracting features. The BeautifulSoup library does an excellent job in providing the necessary functions for this. Our strip_html_tags() function enables cleaning and stripping out HTML code.

Removing accented characters: In our dataset, we are dealing with reviews in the English language, so we need to make sure that characters in any other format, especially accented characters, are converted and standardized into ASCII characters. A simple example would be converting é to e. Our remove_accented_chars() function helps us in this respect.

Expanding contractions: In the English language, contractions are basically shortened versions of words or syllables. These shortened versions of existing words or phrases are created by removing specific letters and sounds; more often than not, vowels are removed from the words. Examples would be "do not" to "don't" and "I would" to "I'd". Contractions pose a problem in text normalization because we have to deal with special characters like the apostrophe, and we also have to convert each contraction to its expanded, original form. Our expand_contractions() function uses regular expressions and the various contractions mapped in our contractions.py module to expand all contractions in our text corpus.

Removing special characters: Another important task in text cleaning and normalization is to remove special characters and symbols that often add to the extra noise in unstructured text. Simple regexes can be used to achieve this. Our function remove_special_characters() helps us remove special characters. In our code, we have retained numbers, but you can also remove numbers if you do not want them in your normalized corpus.
Stemming and lemmatization: Word stems are usually the base form of possible words that can be created by attaching affixes like prefixes and suffixes to the stem to create new words; this is known as inflection. The reverse process of obtaining the base form of a word is known as stemming. A simple example are the words watches, watching, and watched, which have the word root stem watch as the base form. The nltk package offers a wide range of stemmers like the PorterStemmer and LancasterStemmer. Lemmatization is very similar to stemming, where we remove word affixes to get to the base form of a word. However, the base form in this case is known as the root word, not the root stem. The difference is that the root word is always a lexicographically correct word (present in the dictionary), but the root stem may not be. We will be using lemmatization only in our normalization pipeline to retain lexicographically correct words. The function lemmatize_text() helps us with this aspect.

Removing stopwords: Words that have little or no significance, especially when constructing meaningful features from text, are known as stopwords or stop words. These are usually words that end up having the maximum frequency if you do a simple term or word frequency over a document corpus. Words like a, an, the, and so on are considered to be stopwords. There is no universal stopword list, but we use a standard English language stopwords list from nltk. You can also add your own domain specific stopwords if needed. The function remove_stopwords() helps us remove stopwords and retain the words having the most significance and context in a corpus.

We use all these components and tie them together in the following function called normalize_corpus(), which can be used to take a document corpus as input and return the same corpus with cleaned and normalized text documents.

def normalize_corpus(corpus, html_stripping=True, contraction_expansion=True,
                     accented_char_removal=True, text_lower_case=True,
                     text_lemmatization=True, special_char_removal=True,
                     stopword_removal=True):

    normalized_corpus = []
    # normalize each document in the corpus
    for doc in corpus:
        # strip HTML
        if html_stripping:
            doc = strip_html_tags(doc)
        # remove accented characters
        if accented_char_removal:
            doc = remove_accented_chars(doc)
        # expand contractions
        if contraction_expansion:
            doc = expand_contractions(doc)
        # lowercase the text
        if text_lower_case:
            doc = doc.lower()
        # remove extra newlines
        doc = re.sub(r'[\r|\n|\r\n]+', ' ', doc)
        # insert spaces between special characters to isolate them
        special_char_pattern = re.compile(r'([{.(-)!}])')
        doc = special_char_pattern.sub(" \\1 ", doc)
        # lemmatize text
        if text_lemmatization:
            doc = lemmatize_text(doc)
        # remove special characters
        if special_char_removal:
            doc = remove_special_characters(doc)
        # remove extra whitespace
        doc = re.sub(' +', ' ', doc)
        # remove stopwords
        if stopword_removal:
            doc = remove_stopwords(doc, is_lower_case=text_lower_case)

        normalized_corpus.append(doc)

    return normalized_corpus

The following snippet depicts a small demo of text normalization on a sample document using our normalization module.

In [ ]: from text_normalizer import normalize_corpus

In [ ]: document = """<p>Héllo! Héllo! can you hear me? I just heard about <b>Python</b>!<br/>\r\n
                      It's an amazing language which can be used for Scripting, Web development,\r\n\r\n
                      Information Retrieval, Natural Language Processing, Machine Learning & Artificial Intelligence!\n
                      What are you waiting for? Go and get started.<br/> He's learning, she's learning, they've already\n\n
                      got a headstart!</p>
                   """

In [ ]: document
Out[ ]: "<p>Héllo! Héllo! can you hear me? I just heard about <b>Python</b>!\r\n It's an amazing language which can be used for Scripting, Web development,\r\n\r\n Information Retrieval, Natural Language Processing, Machine Learning & Artificial Intelligence!\n What are you waiting for? Go and get started. He's learning, she's learning, they've already\n\n got a headstart!</p> "

In [ ]: normalize_corpus([document], text_lemmatization=False, stopword_removal=False,
                         text_lower_case=False)
Out[ ]: ['Hello Hello can you hear me I just heard about Python It is an amazing language which can be used for Scripting Web development Information Retrieval Natural Language Processing Machine Learning Artificial Intelligence What are you waiting for Go and get started He is learning she is learning they have already got a headstart ']

In [ ]: normalize_corpus([document])
Out[ ]: ['hello hello hear hear python amazing language use scripting web development information retrieval natural language processing machine learning artificial intelligence wait go get start learn learn already get headstart']
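For reference, the sketches below show what a few of the helpers used in normalize_corpus() could look like; the actual implementations live in text_normalizer.py, so treat these as illustrative assumptions rather than the module's exact code.

from bs4 import BeautifulSoup
import unicodedata
import nltk
from nltk.tokenize.toktok import ToktokTokenizer

tokenizer = ToktokTokenizer()
stopword_list = nltk.corpus.stopwords.words('english')

def strip_html_tags(text):
    # parse the document and keep only the visible text
    soup = BeautifulSoup(text, 'html.parser')
    return soup.get_text()

def remove_accented_chars(text):
    # decompose accented characters and drop the non-ASCII marks
    return unicodedata.normalize('NFKD', text).encode('ascii', 'ignore').decode('utf-8', 'ignore')

def remove_stopwords(text, is_lower_case=False):
    # drop tokens that appear in the standard English stopword list
    tokens = [token.strip() for token in tokenizer.tokenize(text)]
    if is_lower_case:
        filtered_tokens = [token for token in tokens if token not in stopword_list]
    else:
        filtered_tokens = [token for token in tokens if token.lower() not in stopword_list]
    return ' '.join(filtered_tokens)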
Now that we have our normalization module ready, we can start modeling and analyzing our corpus. NLP and text analytics enthusiasts who might be interested in more in-depth details of text normalization can refer to the section "Text Normalization" in Text Analytics with Python (Apress; Dipanjan Sarkar).

Unsupervised Lexicon-Based Models

We have talked about unsupervised learning methods in the past, which refer to specific modeling methods that can be applied directly on data features without the presence of labeled data. One of the major challenges in any organization is getting labeled datasets, due to the lack of time as well as resources to do this tedious task. Unsupervised methods are very useful in this scenario, and we will be looking at some of these methods in this section. Even though we have labeled data, this section should give you a good idea of how lexicon-based models work, and you can apply the same approach to your own datasets when you do not have labeled data.

Unsupervised sentiment analysis models use well curated knowledgebases, ontologies, lexicons, and databases that have detailed information pertaining to subjective words and phrases, including sentiment, mood, polarity, objectivity, subjectivity, and so on. A lexicon model typically uses a lexicon, also known as a dictionary or vocabulary of words, specifically aligned toward sentiment analysis. Usually these lexicons contain a list of words associated with positive and negative sentiment, polarity (magnitude of the negative or positive score), parts of speech (POS) tags, subjectivity classifiers (strong, weak, neutral), mood, modality, and so on. You can use these lexicons and compute the sentiment of a text document by matching the presence of specific words from the lexicon, look at additional factors like the presence of negation parameters, surrounding words, overall context and phrases, and aggregate overall sentiment polarity scores to decide the final sentiment score. There are several popular lexicon models used for sentiment analysis. Some of them are mentioned as follows.

Bing Liu's lexicon
MPQA subjectivity lexicon
Pattern lexicon
AFINN lexicon
SentiWordNet lexicon
VADER lexicon

This is not an exhaustive list of lexicon models, but it definitely lists among the most popular ones available today. We will be covering the last three lexicon models in more detail, with hands-on code and examples using our movie review dataset. We will be using the last 15,000 reviews, predict their sentiment, and see how well our model performs based on model evaluation metrics like accuracy, precision, recall, and F1-score, which we covered in detail earlier in the book. Since we have labeled data, it will be easy for us to see how well our actual sentiment values for these movie reviews match our lexicon-model based predicted sentiment values. You can refer to the Python file titled unsupervised_sentiment_analysis.py for all the code used in this section, or use the Jupyter notebook titled Sentiment Analysis - Unsupervised Lexical.ipynb for a more interactive experience. Before we start our analysis, let's load the necessary dependencies and configuration settings using the following snippet.

In [ ]: import pandas as pd
        import numpy as np
        import text_normalizer as tn
        import model_evaluation_utils as meu

        np.set_printoptions(precision=2, linewidth=80)
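As a quick aside, the evaluation metrics that our model_evaluation_utils module reports can also be computed directly with scikit-learn; the toy labels below are purely illustrative and are not results from our dataset.

from sklearn import metrics

# illustrative only: tiny lists of true and predicted sentiment labels
true_labels = ['positive', 'negative', 'positive', 'negative', 'positive']
predicted_labels = ['positive', 'positive', 'positive', 'negative', 'negative']

print('Accuracy:', metrics.accuracy_score(true_labels, predicted_labels))
print(metrics.classification_report(true_labels, predicted_labels))
print(metrics.confusion_matrix(true_labels, predicted_labels))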
nowwe can load our imdb review datasetsubset out the last , reviews which will be used for our analysisand normalize them using the following snippet in [ ]dataset pd read_csv( 'movie_reviews csv'reviews np array(dataset['review']sentiments np array(dataset['sentiment']extract data for model evaluation test_reviews reviews[ :test_sentiments sentiments[ :sample_review_ids [ normalize dataset norm_test_reviews tn normalize_corpus(test_reviewswe also extract out some sample reviews so that we can run our models on them and interpret their results in detail bing liu' lexicon this lexicon contains over , words which have been divided into two files named positive-words txtcontaining around , words/phrases and negative-words txtwhich contains , words/phrases the lexicon has been developed and curated by bing liu over several years and has also been explained in detail in his original paper by nitin jindal and bing liu"identifying comparative sentences in text documentsproceedings of the th annual international acm sigirseattle if you want to use this lexiconyou can get it from which also includes link to download it as an archive (rar formatmpqa subjectivity lexicon the term mpqa stands for multi-perspective question answering and it contains diverse set of resources pertaining to opinion corporasubjectivity lexiconsubjectivity sense annotationsargument lexicondebate corporaopinion finderand many more this is developed and maintained by the university of pittsburgh and their official web site subjectivity lexicon is part of their opinion finder framework and contains subjectivity clues and contextual polarity details on this can be found in the paper by theresa wilsonjanyce wiebeand paul hoffmann"recognizing contextual polarity in phrase-level sentiment analysisproceeding of hlt-emnlp- you can download the subjectivity lexicon from their official web site at lexicons/subj_lexicon/contains subjectivity clues present in the dataset named subjclueslen hltemnlp tff the following snippet shows some sample lines from the lexicon type=weaksubj len= word =abandonment pos =noun stemmed = priorpolarity=negative type=weaksubj len= word =abandon pos =verb stemmed = priorpolarity=negative type=strongsubj len= word =zenith pos =noun stemmed = priorpolarity=positive type=strongsubj len= word =zest pos =noun stemmed = priorpolarity=positive each line consists of specific word and its associated polaritypos tag informationlength (right now only words of length are present)subjective contextand stem information
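To make the general lexicon-matching idea concrete, the following is a minimal, purely illustrative sketch of scoring text against Bing Liu's two word lists; it assumes positive-words.txt and negative-words.txt have been downloaded locally, and it is not the approach we use for the rest of this chapter.

# load one of the lexicon files, skipping the ';' comment header it ships with
def load_lexicon(path):
    with open(path, encoding='iso-8859-1') as f:
        return set(line.strip() for line in f
                   if line.strip() and not line.startswith(';'))

positive_words = load_lexicon('positive-words.txt')
negative_words = load_lexicon('negative-words.txt')

def simple_lexicon_sentiment(review):
    # naive scoring: count positive and negative word matches and compare
    tokens = review.lower().split()
    score = sum(1 for token in tokens if token in positive_words) - \
            sum(1 for token in tokens if token in negative_words)
    return 'positive' if score >= 0 else 'negative'

print(simple_lexicon_sentiment('a wonderful heartwarming and beautiful film'))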
pattern lexicon the pattern package is complete natural language processing framework available in python which can be used for text processingsentiment analysis and more this has been developed by clips (computational linguistics psycholinguistics) research center associated with the linguistics department of the faculty of arts of the university of antwerp pattern uses its own sentiment module which internally uses lexicon which you can access from their official github repository at master/pattern/text/en/en-sentiment xml and this contains the complete subjectivity based lexicon database each line in the lexicon typically looks like the following sample <word form="absurdwordnet_id=" - pos="jjsense="incongruouspolarity="- subjectivity=" intensity=" confidence=" /thus you get important metadata information like wordnet corpus identifierspolarity scoresword sensepos tagsintensitysubjectivity scoresand so on these can in turn be used to compute sentiment over text document based on polarity and subjectivity score unfortunatelypattern has still not been ported officially for python and it works on python howeveryou can still load this lexicon and do your own modeling as needed afinn lexicon the afinn lexicon is perhaps one of the simplest and most popular lexicons that can be used extensively for sentiment analysis developed and curated by finn arup nielsenyou can find more details on this lexicon in the paper by finn arup nielsen" new anewevaluation of word list for sentiment analysis in microblogs"proceedings of the eswc workshop the current version of the lexicon is afinn-en- txt and it contains over , words with polarity score associated with each word you can find this lexicon at the author' official github repository along with previous versions of this lexicon including afinn- at created nice wrapper library on top of this in python called afinn which we will be using for our analysis needs you can import the library and instantiate an object using the following code in [ ]from afinn import afinn afn afinn(emoticons=truewe can now use this object and compute the polarity of our chosen four sample reviews using the following snippet in [ ]for reviewsentiment in zip(test_reviews[sample_review_ids]test_ sentiments[sample_review_ids])print('review:'reviewprint('actual sentiment:'sentimentprint('predicted sentiment polarity:'afn score(review)print('-'* reviewno comment stupid movieacting average or worse screenplay no sense at all skip itactual sentimentnegative
predicted sentiment polarity- reviewi don' care if some people voted this movie to be bad if you want the truth this is very good movieit has every thing movie should have you really should get this one actual sentimentpositive predicted sentiment polarity reviewworst horror film ever but funniest film ever rolled in one you have got to see this film it is so cheap it is unbelievable but you have to see it really!!! watch the carrot actual sentimentpositive predicted sentiment polarity- we can compare the actual sentiment label for each review and also check out the predicted sentiment polarity score negative polarity typically denotes negative sentiment to predict sentiment on our complete test dataset of , reviews ( used the raw text documents because afinn takes into account other aspects like emoticons and exclamations)we can now use the following snippet used threshold of > to determine if the overall sentiment is positive else negative you can choose your own threshold based on analyzing your own corpora in the future in [ ]sentiment_polarity [afn score(reviewfor review in test_reviewspredicted_sentiments ['positiveif score > else 'negativefor score in sentiment_polaritynow that we have our predicted sentiment labelswe can evaluate our model performance based on standard performance metrics using our utility function see figure - in [ ]meu display_model_performance_metrics(true_labels=test_sentimentspredicted_labels=predicted_sentimentsclasses=['positive''negative']figure - model performance metrics for afinn lexicon based model we get an overall -score of %which is quite decent considering it' an unsupervised model looking at the confusion matrix we can clearly see that quite number of negative sentiment based reviews have been misclassified as positive ( , and this leads to the lower recall of for the negative sentiment class performance for positive class is better with regard to recall or hit-ratewhere we correctly predicted , out of , positive reviewsbut precision is because of the many wrong positive predictions made in case of negative sentiment reviews
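If you want to dig into these misclassifications yourself, a small snippet like the following can surface a few of the negative reviews that were predicted as positive; it simply reuses the variables created above and pandas for convenient filtering.

import pandas as pd

results_df = pd.DataFrame({'review': test_reviews,
                           'actual': test_sentiments,
                           'predicted': predicted_sentiments})
# negative reviews that the lexicon model labeled as positive
false_positives = results_df[(results_df['actual'] == 'negative') &
                             (results_df['predicted'] == 'positive')]
print(false_positives.shape[0], 'negative reviews were misclassified as positive')
print(false_positives['review'].head(3).values)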
sentiwordnet lexicon the wordnet corpus is definitely one of the most popular corpora for the english language used extensively in natural language processing and semantic analysis wordnet gave us the concept of synsets or synonym sets the sentiwordnet lexicon is based on wordnet synsets and can be used for sentiment analysis and opinion mining the sentiwordnet lexicon typically assigns three sentiment scores for each wordnet synset these include positive polarity scorea negative polarity score and an objectivity score further details are available on the official web site download links for the lexicon we will be using the nltk librarywhich provides pythonic interface into sentiwordnet consider we have the adjective awesome we can get the sentiment scores associated with the synset for this word using the following snippet in [ ]from nltk corpus import sentiwordnet as swn awesome list(swn senti_synsets('awesome'' '))[ print('positive polarity score:'awesome pos_score()print('negative polarity score:'awesome neg_score()print('objective score:'awesome obj_score()positive polarity score negative polarity score objective score let' now build generic function to extract and aggregate sentiment scores for complete textual document based on matched synsets in that document def analyze_sentiment_sentiwordnet_lexicon(reviewverbose=false)tokenize and pos tag text tokens tagged_text [(token texttoken tag_for token in tn nlp(review)pos_score neg_score token_count obj_score get wordnet synsets based on pos tags get sentiment scores if synsets are found for wordtag in tagged_textss_set none if 'nnin tag and list(swn senti_synsets(word' '))ss_set list(swn senti_synsets(word' '))[ elif 'vbin tag and list(swn senti_synsets(word' '))ss_set list(swn senti_synsets(word' '))[ elif 'jjin tag and list(swn senti_synsets(word' '))ss_set list(swn senti_synsets(word' '))[ elif 'rbin tag and list(swn senti_synsets(word' '))ss_set list(swn senti_synsets(word' '))[ if senti-synset is found if ss_setadd scores for all found synsets pos_score +ss_set pos_score(neg_score +ss_set neg_score(obj_score +ss_set obj_score(token_count +
aggregate final scores final_score pos_score neg_score norm_final_score round(float(final_scoretoken_count final_sentiment 'positiveif norm_final_score > else 'negativeif verbosenorm_obj_score round(float(obj_scoretoken_count norm_pos_score round(float(pos_scoretoken_count norm_neg_score round(float(neg_scoretoken_count to display results in nice table sentiment_frame pd dataframe([[final_sentimentnorm_obj_scorenorm_pos_scorenorm_neg_scorenorm_final_score]]columns=pd multiindex(levels=[['sentiment stats:']['predicted sentiment''objectivity''positive''negative''overall']]labels=[[ , , , , ],[ , , , , ]])print(sentiment_framereturn final_sentiment our function basically takes in movie reviewtags each word with its corresponding pos tagextracts out sentiment scores for any matched synset token based on its pos tagand finally aggregates the scores this will be clearer when we run it on our sample documents in [ ]for reviewsentiment in zip(test_reviews[sample_review_ids]test_ sentiments[sample_review_ids])print('review:'reviewprint('actual sentiment:'sentimentpred analyze_sentiment_sentiwordnet_lexicon(reviewverbose=trueprint('-'* reviewno comment stupid movieacting average or worse screenplay no sense at all skip itactual sentimentnegative sentiment statspredicted sentiment objectivity positive negative overall negative - reviewi don' care if some people voted this movie to be bad if you want the truth this is very good movieit has every thing movie should have you really should get this one actual sentimentpositive sentiment statspredicted sentiment objectivity positive negative overall positive reviewworst horror film ever but funniest film ever rolled in one you have got to see this film it is so cheap it is unbelievable but you have to see it really!!! watch the carrot
actual sentimentpositive sentiment statspredicted sentiment objectivity positive negative overall positive we can clearly see the predicted sentiment along with sentiment polarity scores and an objectivity score for each sample movie review depicted in formatted dataframes let' use this model now to predict the sentiment of all our test reviews and evaluate its performance threshold of >= has been used for the overall sentiment polarity to be classified as positive and for negative sentiment see figure - in [ ]predicted_sentiments [analyze_sentiment_sentiwordnet_lexicon(reviewverbose=falsefor review in norm_test_reviewsmeu display_model_performance_metrics(true_labels=test_sentimentspredicted_labels=predicted_sentimentsclasses=['positive''negative']figure - model performance metrics for sentiwordnet lexicon based model we get an overall -score of %which is definitely step down from our afinn based model while we have lesser number of negative sentiment based reviews being misclassified as positivethe other aspects of the model performance have been affected vader lexicon the vader lexicondeveloped by huttois lexicon that is based on rule-based sentiment analysis frameworkspecifically tuned to analyze sentiments in social media vader stands for valence aware dictionary and sentiment reasoner details about this framework can be read in the original paper by huttoc gilberte ( titled "vadera parsimonious rule-based model for sentiment analysis of social media text"proceedings of the eighth international conference on weblogs and social media (icwsm- you can use the library based on nltk' interface under the nltk sentiment vader module besides thisyou can also download the actual lexicon or install the framework from vadersentimentwhich also contains detailed information about vader this lexiconpresent in the file titled vader_lexicon txt contains necessary sentiment scores associated with wordsemoticons and slangs (like wtflolnahand so onthere were total of over lexical features from which over curated lexical features were finally selected in the lexicon with proper validated valence scores each feature was rated on scale from "[- extremely negativeto "[ extremely positive"with allowance for "[ neutral (or neithern/ )the process of selecting lexical features was done by keeping all features that had non-zero mean rating and whose standard deviation was less than which was determined by the aggregate of ten independent raters we depict sample from the vader lexicon as follows
:- [- - - - - - - - - : [ terrorizing - [- - - - - - - - - - thankful [ each line in the preceding lexicon sample depicts unique termwhich can either be an emoticon or word the first token indicates the word/emoticonthe second token indicates the mean sentiment polarity scorethe third token indicates the standard deviationand the final token indicates list of scores given by ten independent scorers now let' use vader to analyze our movie reviewswe build our own modeling function as follows from nltk sentiment vader import sentimentintensityanalyzer def analyze_sentiment_vader_lexicon(reviewthreshold= verbose=false)pre-process text review tn strip_html_tags(reviewreview tn remove_accented_chars(reviewreview tn expand_contractions(reviewanalyze the sentiment for review analyzer sentimentintensityanalyzer(scores analyzer polarity_scores(reviewget aggregate scores and final sentiment agg_score scores['compound'final_sentiment 'positiveif agg_score >thresholdelse 'negativeif verbosedisplay detailed sentiment statistics positive str(round(scores['pos'] )* )+'%final round(agg_score negative str(round(scores['neg'] )* )+'%neutral str(round(scores['neu'] )* )+'%sentiment_frame pd dataframe([[final_sentimentfinalpositivenegativeneutral]]columns=pd multiindex(levels=[['sentiment stats:']['predicted sentiment''polarity score''positive''negative''neutral']]labels=[[ , , , , ],[ , , , , ]])print(sentiment_framereturn final_sentiment in our modeling functionwe do some basic pre-processing but keep the punctuations and emoticons intact besides thiswe use vader to get the sentiment polarity and also proportion of the review text with regard to positiveneutral and negative sentiment we also predict the final sentiment based on user-input threshold for the aggregated sentiment polarity typicallyvader recommends using positive sentiment
for aggregated polarity > neutral between [- ]and negative for polarity - we use threshold of > for positive and for negative in our corpus the following is the analysis of our sample reviews in [ ]for reviewsentiment in zip(test_reviews[sample_review_ids]test_ sentiments[sample_review_ids])print('review:'reviewprint('actual sentiment:'sentimentpred analyze_sentiment_vader_lexicon(reviewthreshold= verbose=trueprint('-'* reviewno comment stupid movieacting average or worse screenplay no sense at all skip itactual sentimentnegative sentiment statspredicted sentiment polarity score positive negative neutral negative - reviewi don' care if some people voted this movie to be bad if you want the truth this is very good movieit has every thing movie should have you really should get this one actual sentimentpositive sentiment statspredicted sentiment polarity score positive negative neutral negative - reviewworst horror film ever but funniest film ever rolled in one you have got to see this film it is so cheap it is unbelievable but you have to see it really!!! watch the carrot actual sentimentpositive sentiment statspredicted sentiment polarity score positive negative neutral positive we can see the details statistics pertaining to the sentiment and polarity for each sample movie review let' try out our model on the complete test movie review corpus now and evaluate the model performance see figure - in [ ]predicted_sentiments [analyze_sentiment_vader_lexicon(reviewthreshold= verbose=falsefor review in test_ reviewsmeu display_model_performance_metrics(true_labels=test_sentimentspredicted_labels=predicted_sentimentsclasses=['positive''negative']figure - model performance metrics for vader lexicon based model
We get an overall F1-score and model accuracy that are quite similar to those of the AFINN based model. The AFINN based model only wins out marginally on the average precision; otherwise, both models have similar performance.

Classifying Sentiment with Supervised Learning

Another way to build a model to understand the text content and predict the sentiment of the text based reviews is to use supervised machine learning. To be more specific, we will be using classification models for solving this problem. We have already covered the concepts relevant to supervised learning and classification earlier in the book, under the section "Supervised Learning"; with regard to details on building and evaluating classification models, you can head back there and refresh your memory if needed. We will be building an automated sentiment text classification system in the subsequent sections. The major steps to achieve this are mentioned as follows.

1. Prepare train and test datasets (optionally a validation dataset)
2. Pre-process and normalize text documents
3. Feature engineering
4. Model training
5. Model prediction and evaluation

These are the major steps for building our system. Optionally, the last step would be to deploy the model on your server or on the cloud. The figure below shows a detailed workflow for building a standard text classification system with supervised learning (classification) models, and a minimal end-to-end code sketch of the five steps follows right after it.

Figure: Blueprint for building an automated text classification system (source: Text Analytics with Python, Apress)
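Before building out each stage in detail, the following is a minimal, end-to-end sketch of the five steps above using scikit-learn on the normalized review data we prepare in the next snippet; it is illustrative only, and the vectorizer and model settings are deliberately left at simple defaults.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# steps 1 and 2: train/test reviews and labels, already normalized with normalize_corpus()
# step 3: feature engineering
tfidf = TfidfVectorizer(ngram_range=(1, 2))
train_features = tfidf.fit_transform(norm_train_reviews)
test_features = tfidf.transform(norm_test_reviews)

# step 4: model training
clf = LogisticRegression()
clf.fit(train_features, train_sentiments)

# step 5: model prediction and evaluation
predictions = clf.predict(test_features)
print('Accuracy:', accuracy_score(test_sentiments, predictions))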
In our scenario, documents indicate the movie reviews and classes indicate the review sentiments, which can either be positive or negative, making it a binary classification problem. We will build models using both traditional machine learning methods and newer deep learning methods in the subsequent sections. You can refer to the Python file titled supervised_sentiment_analysis.py for all the code used in this section, or use the Jupyter notebook titled Sentiment Analysis - Supervised.ipynb for a more interactive experience. Let's load the necessary dependencies and settings before getting started.

In [ ]: import pandas as pd
        import numpy as np
        import text_normalizer as tn
        import model_evaluation_utils as meu

        np.set_printoptions(precision=2, linewidth=80)

We can now load our IMDb movie reviews dataset, use the first 35,000 reviews for training models and the remaining 15,000 reviews as the test dataset to evaluate model performance. Besides this, we will also use our normalization module to normalize our review datasets (steps 1 and 2 in our workflow).

In [ ]: dataset = pd.read_csv(r'movie_reviews.csv')

        # take a peek at the data
        print(dataset.head())
        reviews = np.array(dataset['review'])
        sentiments = np.array(dataset['sentiment'])

        # build train and test datasets
        train_reviews = reviews[:35000]
        train_sentiments = sentiments[:35000]
        test_reviews = reviews[35000:]
        test_sentiments = sentiments[35000:]

        # normalize datasets
        norm_train_reviews = tn.normalize_corpus(train_reviews)
        norm_test_reviews = tn.normalize_corpus(test_reviews)

                                              review sentiment
0  One of the other reviewers has mentioned that ...  positive
1  A wonderful little production. <br /><br />The...  positive
2  I thought this was a wonderful way to spend ti...  positive
3  Basically there's a family where a little boy ...  negative
4  Petter Mattei's "Love in the Time of Money" is...  positive

Our datasets are now prepared and normalized, so we can proceed from step 3 in our text classification workflow described earlier to build our classification system.

Traditional Supervised Machine Learning Models

We will be using traditional classification models in this section to classify the sentiment of our movie reviews. Our feature engineering techniques (step 3) will be based on the bag of words model and the TF-IDF model, which we discussed extensively earlier in the book in the section titled "Feature Engineering on Text Data". The following snippet helps us engineer features using both these models on our train and test datasets.
In [ ]: from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

        # build BOW features on train reviews
        cv = CountVectorizer(binary=False, min_df=0.0, max_df=1.0, ngram_range=(1, 2))
        cv_train_features = cv.fit_transform(norm_train_reviews)
        # build TF-IDF features on train reviews
        tv = TfidfVectorizer(use_idf=True, min_df=0.0, max_df=1.0, ngram_range=(1, 2),
                             sublinear_tf=True)
        tv_train_features = tv.fit_transform(norm_train_reviews)

        # transform test reviews into features
        cv_test_features = cv.transform(norm_test_reviews)
        tv_test_features = tv.transform(norm_test_reviews)

        print('BOW model: Train features shape:', cv_train_features.shape,
              ' Test features shape:', cv_test_features.shape)
        print('TFIDF model: Train features shape:', tv_train_features.shape,
              ' Test features shape:', tv_test_features.shape)

BOW model: Train features shape: (35000, ...)  Test features shape: (15000, ...)
TFIDF model: Train features shape: (35000, ...)  Test features shape: (15000, ...)

We take into account single words as well as bi-grams for our feature sets. We can now use some traditional supervised machine learning algorithms that work very well on text classification. We recommend using logistic regression, support vector machines, and multinomial naive Bayes models when you work on your own datasets in the future. In this chapter, we built models using logistic regression as well as SVM. The following snippet helps initialize these classification model estimators.

In [ ]: from sklearn.linear_model import SGDClassifier, LogisticRegression

        lr = LogisticRegression(penalty='l2', max_iter=100, C=1)
        svm = SGDClassifier(loss='hinge', n_iter=100)

Without going into too many theoretical complexities, the logistic regression model is a supervised linear machine learning model used for classification, regardless of its name. In this model, we try to predict the probability that a given movie review will belong to one of the discrete classes (binary classes in our scenario). The function used by the model for learning is represented here:

P(y = positive | X) = σ(θᵀX)
P(y = negative | X) = 1 − σ(θᵀX)

where the model tries to predict the sentiment class using the feature vector X and σ(z) = 1 / (1 + e^(−z)), which is popularly known as the sigmoid function, logistic function, or the logit function. The main objective of this model is to search for an optimal value of θ such that the probability of the positive sentiment class is maximum when the feature vector X is for a positive movie review and small when it is for a negative movie review. The logistic function helps model this probability to describe the final prediction class. The optimal value of θ can be obtained by minimizing an appropriate cost/loss function using standard methods like gradient descent (refer to the section "The Three Stages of Logistic Regression" earlier in the book if you are interested in more details). Logistic regression is also popularly known as logit regression or a MaxEnt (maximum entropy) classifier.
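The prediction rule itself can be sketched in a few lines of numpy; this is purely illustrative, since scikit-learn estimates θ and handles the whole process internally.

import numpy as np

def sigmoid(z):
    # the logistic function squashes any real value into the (0, 1) range
    return 1.0 / (1.0 + np.exp(-z))

def predict_sentiment(theta, x):
    # probability of the positive class for a feature vector x
    prob_positive = sigmoid(np.dot(theta, x))
    return 'positive' if prob_positive >= 0.5 else 'negative'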
We will now use our utility function train_predict_model() from our model_evaluation_utils module to build a logistic regression model on our training features and evaluate the model performance on our test features (steps 4 and 5).

In [ ]: # Logistic Regression model on BOW features
        lr_bow_predictions = meu.train_predict_model(classifier=lr,
                                                     train_features=cv_train_features,
                                                     train_labels=train_sentiments,
                                                     test_features=cv_test_features,
                                                     test_labels=test_sentiments)
        meu.display_model_performance_metrics(true_labels=test_sentiments,
                                              predicted_labels=lr_bow_predictions,
                                              classes=['positive', 'negative'])

Figure: Model performance metrics for logistic regression on bag of words features

We get a really excellent overall F1-score and model accuracy, as depicted in the preceding figure. We can now build a logistic regression model similarly on our TF-IDF features using the following snippet.

In [ ]: # Logistic Regression model on TF-IDF features
        lr_tfidf_predictions = meu.train_predict_model(classifier=lr,
                                                       train_features=tv_train_features,
                                                       train_labels=train_sentiments,
                                                       test_features=tv_test_features,
                                                       test_labels=test_sentiments)
        meu.display_model_performance_metrics(true_labels=test_sentiments,
                                              predicted_labels=lr_tfidf_predictions,
                                              classes=['positive', 'negative'])

Figure: Model performance metrics for logistic regression on TF-IDF features
We again get a strong overall F1-score and model accuracy, as depicted in the preceding figure, but our previous model is still slightly better. You can similarly use the support vector machine model estimator object svm, which we created earlier, and use the same snippet to train and predict using an SVM model. We obtained our maximum accuracy and F1-score with the SVM model (refer to the Jupyter notebook for the step-by-step code snippets). Thus you can see how effective and accurate these supervised machine learning classification algorithms are in building a text sentiment classifier.

Newer Supervised Deep Learning Models

We have already mentioned multiple times in previous chapters how deep learning has revolutionized the machine learning landscape over the last decade. In this section, we will be building some deep neural networks and training them on advanced text features based on word embeddings, to build a text sentiment classification system similar to what we did in the previous section. Let's load the following necessary dependencies before we start our analysis.

In [ ]: import gensim
        import keras
        from keras.models import Sequential
        from keras.layers import Dropout, Activation, Dense
        from sklearn.preprocessing import LabelEncoder

Using TensorFlow backend.

If you remember, earlier in the book we talked about encoding categorical class labels and also the one-hot encoding scheme. So far, our models in scikit-learn directly accepted the sentiment class labels as positive and negative and internally performed these operations. However, for our deep learning models, we need to do this explicitly. The following snippet helps us tokenize our movie reviews and also converts the text-based sentiment class labels into one-hot encoded vectors (this forms part of the pre-processing and feature engineering steps).

In [ ]: le = LabelEncoder()
        num_classes = 2
        # tokenize train reviews & encode train labels
        tokenized_train = [tn.tokenizer.tokenize(text) for text in norm_train_reviews]
        y_tr = le.fit_transform(train_sentiments)
        y_train = keras.utils.to_categorical(y_tr, num_classes)
        # tokenize test reviews & encode test labels
        tokenized_test = [tn.tokenizer.tokenize(text) for text in norm_test_reviews]
        y_ts = le.fit_transform(test_sentiments)
        y_test = keras.utils.to_categorical(y_ts, num_classes)

        # print class label encoding map and encoded labels
        print('Sentiment class label map:', dict(zip(le.classes_, le.transform(le.classes_))))
        print('Sample test label transformation:\n'+'-'*35,
              '\nActual Labels:', test_sentiments[:3],
              '\nEncoded Labels:', y_ts[:3],
              '\nOne hot encoded Labels:\n', y_test[:3])

Sentiment class label map: {'positive': 1, 'negative': 0}
sample test label transformationactual labels['negative'positive'negative'encoded labels[ one hot encoded labels[ ]thuswe can see from the preceding sample outputs how our sentiment class labels have been encoded into numeric representationswhich in turn have been converted into one-hot encoded vectors the feature engineering techniques we will be using in this section (step are slightly more advanced word vectorization techniques that are based on the concept of word embeddings we will be using the word vec and glove models to generate embeddings the word vec model was built by google and we have covered this in detail in under the section "word embeddingswe will be choosing the size parameter to be in this scenario representing feature vector size to be for each word in [ ]build word vec model v_num_features v_model gensim models word vec(tokenized_trainsize= v_num_featureswindow= min_count= sample= - we will be using the document word vector averaging scheme on this model from to represent each movie review as an averaged vector of all the word vector representations for the different words in the review the following function helps us compute averaged word vector representations for any corpus of text documents def averaged_word vec_vectorizer(corpusmodelnum_features)vocabulary set(model wv index worddef average_word_vectors(wordsmodelvocabularynum_features)feature_vector np zeros((num_features,)dtype="float "nwords for word in wordsif word in vocabularynwords nwords feature_vector np add(feature_vectormodel[word]if nwordsfeature_vector np divide(feature_vectornwordsreturn feature_vector features [average_word_vectors(tokenized_sentencemodelvocabularynum_featuresfor tokenized_sentence in corpusreturn np array(featureswe can now use the previous function to generate averaged word vector representations on our two movie review datasets
In [ ]: # generate averaged word vector features from word2vec model
        avg_wv_train_features = averaged_word2vec_vectorizer(corpus=tokenized_train,
                                                             model=w2v_model,
                                                             num_features=w2v_num_features)
        avg_wv_test_features = averaged_word2vec_vectorizer(corpus=tokenized_test,
                                                            model=w2v_model,
                                                            num_features=w2v_num_features)

The GloVe model, which stands for Global Vectors, is an unsupervised model for obtaining word vector representations. Created at Stanford University, this model is trained on various corpora like Wikipedia, Common Crawl, and Twitter, and corresponding pre-trained word vectors are available that can be used for our analysis needs. You can refer to the original paper by Jeffrey Pennington, Richard Socher, and Christopher Manning, called GloVe: Global Vectors for Word Representation, for more details. The spaCy library provides 300-dimensional word vectors trained on the Common Crawl corpus using the GloVe model. It offers a simple standard interface to get feature vectors of size 300 for each word, as well as the averaged feature vector of a complete text document. The following snippet leverages spaCy to get the GloVe embeddings for our two datasets. Do note that you can also build your own GloVe model by leveraging other pre-trained models, or by building a model on your own corpus using the resources available on the official GloVe project page.

In [ ]: # feature engineering with GloVe model
        train_nlp = [tn.nlp(item) for item in norm_train_reviews]
        train_glove_features = np.array([item.vector for item in train_nlp])

        test_nlp = [tn.nlp(item) for item in norm_test_reviews]
        test_glove_features = np.array([item.vector for item in test_nlp])

You can check the feature vector dimensions for our datasets based on each of the previous models using the following code.

In [ ]: print('Word2Vec model: Train features shape:', avg_wv_train_features.shape,
              ' Test features shape:', avg_wv_test_features.shape)
        print('GloVe model: Train features shape:', train_glove_features.shape,
              ' Test features shape:', test_glove_features.shape)

Word2Vec model: Train features shape: (35000, ...)  Test features shape: (15000, ...)
GloVe model: Train features shape: (35000, 300)  Test features shape: (15000, 300)

We can see from the preceding output that, as expected, the word2vec model features have the dimensionality we chose when building the embedding model and the GloVe features are of size 300. We can now proceed to step 4 of our classification system workflow, where we will build and train a deep neural network on these features. We have already briefly covered the various aspects and architectures with regard to deep neural networks earlier in the book, under the section "Deep Learning". We will be using a fully-connected four layer deep neural network (multi-layer perceptron, or deep ANN) for our model. We do not usually count the input layer in any deep architecture; hence our model will consist of three hidden layers of neurons (units) and one output layer with two units, which will be used to predict either positive or negative sentiment based on the input layer features. The following figure depicts our deep neural network model for sentiment classification.
Figure: Fully connected deep neural network model for sentiment classification

We call this a fully connected deep neural network (DNN) because neurons or units in each pair of adjacent layers are fully pairwise connected. These networks are also known as deep artificial neural networks (ANNs) or multi-layer perceptrons (MLPs), since they have more than one hidden layer. The following function leverages Keras on top of TensorFlow to build the desired DNN model.

def construct_deepnn_architecture(num_input_features):
    dnn_model = Sequential()
    dnn_model.add(Dense(512, activation='relu', input_shape=(num_input_features,)))
    dnn_model.add(Dropout(0.2))
    dnn_model.add(Dense(512, activation='relu'))
    dnn_model.add(Dropout(0.2))
    dnn_model.add(Dense(512, activation='relu'))
    dnn_model.add(Dropout(0.2))

    dnn_model.add(Dense(2))
    dnn_model.add(Activation('softmax'))

    dnn_model.compile(loss='categorical_crossentropy', optimizer='adam',
                      metrics=['accuracy'])
    return dnn_model

From the preceding function, you can see that we accept a parameter num_input_features, which decides the number of units needed in the input layer (the word2vec feature dimensionality for the word2vec features and 300 for the GloVe features). We build a Sequential model, which helps us linearly stack our hidden and output layers.

We use 512 units for all our hidden layers, and the activation function relu indicates a rectified linear unit. This function is typically defined as

relu(x) = max(0, x)

where x is typically the input to a neuron. This is also popularly known as the ramp function in electronics and electrical engineering. This function is now preferred over the previously popular sigmoid function because it tries to solve the vanishing gradient problem. This problem occurs because, as the input to a sigmoid grows, its gradient becomes really small (almost vanishing), but relu prevents this from happening. Besides this, it also helps with faster convergence of gradient descent. We also use regularization in the network in the form of Dropout layers. By adding a dropout rate of 0.2, each Dropout layer randomly sets 20% of its input feature units to 0 at each update during training. This form of regularization helps prevent overfitting the model.
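To make the training step concrete, here is a minimal, hedged sketch of how this architecture is typically fit and evaluated on the averaged word2vec features; the epoch count and batch size below are illustrative assumptions rather than tuned settings.

# a minimal sketch, assuming the averaged word2vec features and encoded labels built earlier;
# epochs and batch_size are illustrative assumptions
w2v_dnn = construct_deepnn_architecture(num_input_features=avg_wv_train_features.shape[1])
w2v_dnn.fit(avg_wv_train_features, y_train, epochs=5, batch_size=100,
            shuffle=True, validation_split=0.1, verbose=1)

# predict class indices on the test features and map them back to text labels
pred_classes = np.argmax(w2v_dnn.predict(avg_wv_test_features), axis=1)
predictions = le.inverse_transform(pred_classes)
meu.display_model_performance_metrics(true_labels=test_sentiments,
                                      predicted_labels=predictions,
                                      classes=['positive', 'negative'])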