Machine-learning algorithms aren't usually creative in finding these transformations; they're merely searching through a predefined set of operations, called a hypothesis space. So that's what machine learning is, technically: searching for useful representations of some input data, within a predefined space of possibilities, using guidance from a feedback signal. This simple idea allows for solving a remarkably broad range of intellectual tasks, from speech recognition to autonomous car driving.

Now that you understand what we mean by learning, let's take a look at what makes deep learning special.

The "deep" in deep learning

Deep learning is a specific subfield of machine learning: a new take on learning representations from data that puts an emphasis on learning successive layers of increasingly meaningful representations. The "deep" in "deep learning" isn't a reference to any kind of deeper understanding achieved by the approach; rather, it stands for this idea of successive layers of representations. How many layers contribute to a model of the data is called the depth of the model. Other appropriate names for the field could have been layered representations learning and hierarchical representations learning. Modern deep learning often involves tens or even hundreds of successive layers of representations, and they're all learned automatically from exposure to training data. Meanwhile, other approaches to machine learning tend to focus on learning only one or two layers of representations of the data; hence, they're sometimes called shallow learning.

In deep learning, these layered representations are (almost always) learned via models called neural networks, structured in literal layers stacked on top of each other. The term neural network is a reference to neurobiology, but although some of the central concepts in deep learning were developed in part by drawing inspiration from our understanding of the brain, deep-learning models are not models of the brain. There's no evidence that the brain implements anything like the learning mechanisms used in modern deep-learning models. You may come across pop-science articles proclaiming that deep learning works like the brain or was modeled after the brain, but that isn't the case. It would be confusing and counterproductive for newcomers to the field to think of deep learning as being in any way related to neurobiology; you don't need that shroud of "just like our minds" mystique and mystery, and you may as well forget anything you may have read about hypothetical links between deep learning and biology. For our purposes, deep learning is a mathematical framework for learning representations from data.
What do the representations learned by a deep-learning algorithm look like? Let's examine how a network several layers deep (see figure 1.5) transforms an image of a digit in order to recognize what digit it is.

Figure 1.5 A deep neural network for digit classification

As you can see in figure 1.6, the network transforms the digit image into representations that are increasingly different from the original image and increasingly informative about the final result. You can think of a deep network as a multistage information-distillation operation, where information goes through successive filters and comes out increasingly purified (that is, useful with regard to some task).

Figure 1.6 Deep representations learned by a digit-classification model

So that's what deep learning is, technically: a multistage way to learn data representations. It's a simple idea, but, as it turns out, very simple mechanisms, sufficiently scaled, can end up looking like magic.

Understanding how deep learning works, in three figures

At this point, you know that machine learning is about mapping inputs (such as images) to targets (such as the label "cat"), which is done by observing many examples of inputs and targets.
You also know that deep neural networks do this input-to-target mapping via a deep sequence of simple data transformations (layers), and that these data transformations are learned by exposure to examples. Now let's look at how this learning happens, concretely.

The specification of what a layer does to its input data is stored in the layer's weights, which in essence are a bunch of numbers. In technical terms, we'd say that the transformation implemented by a layer is parameterized by its weights (see figure 1.7). (Weights are also sometimes called the parameters of a layer.) In this context, learning means finding a set of values for the weights of all layers in a network, such that the network will correctly map example inputs to their associated targets. But here's the thing: a deep neural network can contain tens of millions of parameters. Finding the correct values for all of them may seem like a daunting task, especially given that modifying the value of one parameter will affect the behavior of all the others!

Figure 1.7 A neural network is parameterized by its weights

To control something, first you need to be able to observe it. To control the output of a neural network, you need to be able to measure how far this output is from what you expected. This is the job of the loss function of the network, also called the objective function. The loss function takes the predictions of the network and the true target (what you wanted the network to output) and computes a distance score, capturing how well the network has done on this specific example (see figure 1.8).

Figure 1.8 A loss function measures the quality of the network's output
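To make the notion of a distance score concrete, here is a minimal NumPy sketch (ours, not the book's); the function name mse_loss and the toy values are our own, and mean squared error is just one of many possible loss functions:

import numpy as np

def mse_loss(predictions, targets):
    # Mean squared error: the average squared distance between what the
    # network predicted and what it should have predicted.
    return np.mean((predictions - targets) ** 2)

predictions = np.array([0.9, 0.1, 0.2])
targets = np.array([1.0, 0.0, 0.0])
print(mse_loss(predictions, targets))  # a small score: the predictions are close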
The fundamental trick in deep learning is to use this score as a feedback signal to adjust the value of the weights a little, in a direction that will lower the loss score for the current example (see figure 1.9). This adjustment is the job of the optimizer, which implements what's called the backpropagation algorithm: the central algorithm in deep learning. The next chapter explains in more detail how backpropagation works.

Figure 1.9 The loss score is used as a feedback signal to adjust the weights

Initially, the weights of the network are assigned random values, so the network merely implements a series of random transformations. Naturally, its output is far from what it should ideally be, and the loss score is accordingly very high. But with every example the network processes, the weights are adjusted a little in the correct direction, and the loss score decreases. This is the training loop, which, repeated a sufficient number of times (typically tens of iterations over thousands of examples), yields weight values that minimize the loss function. A network with a minimal loss is one for which the outputs are as close as they can be to the targets: a trained network. Once again, it's a simple mechanism that, once scaled, ends up looking like magic.
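As an illustration (ours, not from the book), here is a minimal training loop for a single weight, using an analytically computed gradient in place of full backpropagation; the data and names are toy assumptions:

import numpy as np

# Toy problem: learn y = 2 * x from examples.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])

w = np.random.randn()  # the weight starts out random
learning_rate = 0.01

for step in range(100):
    predictions = w * x                        # forward pass
    loss = np.mean((predictions - y) ** 2)     # loss score
    grad = np.mean(2 * (predictions - y) * x)  # gradient of the loss w.r.t. w
    w -= learning_rate * grad                  # adjust the weight a little

print(w)  # close to 2.0: the loop has minimized the loss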
What deep learning has achieved so far

Although deep learning is a fairly old subfield of machine learning, it only rose to prominence in the early 2010s. In the few years since, it has achieved nothing short of a revolution in the field, with remarkable results on perceptual problems such as seeing and hearing: problems involving skills that seem natural and intuitive to humans but have long been elusive for machines.

In particular, deep learning has achieved the following breakthroughs, all in historically difficult areas of machine learning:

- Near-human-level image classification
- Near-human-level speech recognition
- Near-human-level handwriting transcription
- Improved machine translation
- Improved text-to-speech conversion
- Digital assistants such as Google Now and Amazon Alexa
- Near-human-level autonomous driving
- Improved ad targeting, as used by Google, Baidu, and Bing
- Improved search results on the web
- Ability to answer natural-language questions
- Superhuman Go playing

We're still exploring the full extent of what deep learning can do. We've started applying it to a wide variety of problems outside of machine perception and natural-language understanding, such as formal reasoning. If successful, this may herald an age where deep learning assists humans in science, software development, and more.

Don't believe the short-term hype

Although deep learning has led to remarkable achievements in recent years, expectations for what the field will be able to achieve in the next decade tend to run much higher than what will likely be possible. Although some world-changing applications like autonomous cars are already within reach, many more are likely to remain elusive for a long time, such as believable dialogue systems, human-level machine translation across arbitrary languages, and human-level natural-language understanding. In particular, talk of human-level general intelligence shouldn't be taken too seriously. The risk with high expectations for the short term is that, as the technology fails to deliver, research investment will dry up, slowing progress for a long time.

This has happened before. Twice in the past, AI went through a cycle of intense optimism followed by disappointment and skepticism, with a dearth of funding as a result. It started with symbolic AI in the 1960s. In those early days, projections about AI were flying high. One of the best-known pioneers and proponents of the symbolic AI approach was Marvin Minsky, who claimed in 1967, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved." Three years later, in 1970, he made a more precisely quantified prediction: "In from three to eight years we will have a machine with the general intelligence of an average human being." Such an achievement still appears to be far in the future, so far that we have no way to predict how long it will take, but in the 1960s and early 1970s, several experts believed it to be right around the corner (as do many people today). A few years later, as these high expectations failed to materialize, researchers and government funds turned away from the field, marking the start of the first AI winter (a reference to a nuclear winter, because this was shortly after the height of the Cold War).

It wouldn't be the last one. In the 1980s, a new take on symbolic AI, expert systems, started gathering steam among large companies. A few initial success stories triggered a wave of investment, with corporations around the world starting their own in-house AI departments to develop expert systems. Around 1985, companies were spending over $1 billion each year on the technology; but by the early 1990s, these systems had proven expensive to maintain, difficult to scale, and limited in scope, and interest died down. Thus began the second AI winter.
We may be currently witnessing the third cycle of AI hype and disappointment, and we're still in the phase of intense optimism. It's best to moderate our expectations for the short term and make sure people less familiar with the technical side of the field have a clear idea of what deep learning can and can't deliver.

The promise of AI

Although we may have unrealistic short-term expectations for AI, the long-term picture is looking bright. We're only getting started in applying deep learning to many important problems for which it could prove transformative, from medical diagnoses to digital assistants. AI research has been moving forward amazingly quickly in the past five years, in large part due to a level of funding never before seen in the short history of AI, but so far relatively little of this progress has made its way into the products and processes that form our world. Most of the research findings of deep learning aren't yet applied, or at least not applied to the full range of problems they can solve across all industries. Your doctor doesn't yet use AI, and neither does your accountant. You probably don't use AI technologies in your day-to-day life. Of course, you can ask your smartphone simple questions and get reasonable answers, you can get fairly useful product recommendations on Amazon.com, and you can search for "birthday" on Google Photos and instantly find those pictures of your daughter's birthday party from last month. That's a far cry from where such technologies used to stand. But such tools are still only accessories to our daily lives. AI has yet to transition to being central to the way we work, think, and live.

Right now, it may seem hard to believe that AI could have a large impact on our world, because it isn't yet widely deployed, much as, back in 1995, it would have been difficult to believe in the future impact of the internet. Back then, most people didn't see how the internet was relevant to them and how it was going to change their lives. The same is true for deep learning and AI today. But make no mistake: AI is coming. In a not-so-distant future, AI will be your assistant, even your friend; it will answer your questions, help educate your kids, and watch over your health. It will deliver your groceries to your door and drive you from point A to point B. It will be your interface to an increasingly complex and information-intensive world. And, even more important, AI will help humanity as a whole move forward, by assisting human scientists in new breakthrough discoveries across all scientific fields, from genomics to mathematics.

On the way, we may face a few setbacks and maybe a new AI winter, in much the same way the internet industry was overhyped in 1998–1999 and suffered from a crash that dried up investment throughout the early 2000s. But we'll get there eventually. AI will end up being applied to nearly every process that makes up our society and our daily lives, much like the internet is today.

Don't believe the short-term hype, but do believe in the long-term vision. It may take a while for AI to be deployed to its true potential, a potential the full extent of which no one has yet dared to dream, but AI is coming, and it will transform our world in a fantastic way.
Before deep learning: a brief history of machine learning

Deep learning has reached a level of public attention and industry investment never before seen in the history of AI, but it isn't the first successful form of machine learning. It's safe to say that most of the machine-learning algorithms used in the industry today aren't deep-learning algorithms. Deep learning isn't always the right tool for the job: sometimes there isn't enough data for deep learning to be applicable, and sometimes the problem is better solved by a different algorithm. If deep learning is your first contact with machine learning, then you may find yourself in a situation where all you have is the deep-learning hammer, and every machine-learning problem starts to look like a nail. The only way not to fall into this trap is to be familiar with other approaches and practice them when appropriate.

A detailed discussion of classical machine-learning approaches is outside the scope of this book, but we'll briefly go over them and describe the historical context in which they were developed. This will allow us to place deep learning in the broader context of machine learning and better understand where deep learning comes from and why it matters.

Probabilistic modeling

Probabilistic modeling is the application of the principles of statistics to data analysis. It was one of the earliest forms of machine learning, and it's still widely used to this day. One of the best-known algorithms in this category is the Naive Bayes algorithm.

Naive Bayes is a type of machine-learning classifier based on applying Bayes' theorem while assuming that the features in the input data are all independent (a strong, or "naive," assumption, which is where the name comes from). This form of data analysis predates computers and was applied by hand decades before its first computer implementation (most likely dating back to the 1950s). Bayes' theorem and the foundations of statistics date back to the eighteenth century, and these are all you need to start using Naive Bayes classifiers.

A closely related model is logistic regression (logreg for short), which is sometimes considered to be the "hello world" of modern machine learning. Don't be misled by its name: logreg is a classification algorithm rather than a regression algorithm. Much like Naive Bayes, logreg predates computing by a long time, yet it's still useful to this day, thanks to its simple and versatile nature. It's often the first thing a data scientist will try on a dataset to get a feel for the classification task at hand.
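As a concrete taste (our sketch, not from the book; it assumes the scikit-learn package and its bundled Iris dataset), here are both classifiers applied to the same classification task:

from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# Naive Bayes: applies Bayes' theorem, treating the features as independent.
print(GaussianNB().fit(X, y).score(X, y))

# Logistic regression: a classification algorithm, despite its name.
print(LogisticRegression(max_iter=200).fit(X, y).score(X, y))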
Early neural networks

Early iterations of neural networks have been completely supplanted by the modern variants covered in these pages, but it's helpful to be aware of how deep learning originated. Although the core ideas of neural networks were investigated in toy forms as early as the 1950s, the approach took decades to get started. For a long time, the missing piece was an efficient way to train large neural networks. This changed in the mid-1980s, when multiple people independently rediscovered the backpropagation algorithm, a way to train chains of parametric operations using gradient-descent optimization (later in the book, we'll precisely define these concepts), and started applying it to neural networks.

The first successful practical application of neural nets came in 1989 from Bell Labs, when Yann LeCun combined the earlier ideas of convolutional neural networks and backpropagation, and applied them to the problem of classifying handwritten digits. The resulting network, dubbed LeNet, was used by the United States Postal Service in the 1990s to automate the reading of ZIP codes on mail envelopes.

Kernel methods

As neural networks started to gain some respect among researchers in the 1990s, thanks to this first success, a new approach to machine learning rose to fame and quickly sent neural nets back to oblivion: kernel methods. Kernel methods are a group of classification algorithms, the best known of which is the support vector machine (SVM). The modern formulation of an SVM was developed by Vladimir Vapnik and Corinna Cortes in the early 1990s at Bell Labs and published in 1995,[1] although an older linear formulation was published by Vapnik and Alexey Chervonenkis as early as 1963.[2]

SVMs aim at solving classification problems by finding good decision boundaries (see figure 1.10) between two sets of points belonging to two different categories. A decision boundary can be thought of as a line or surface separating your training data into two spaces corresponding to two categories. To classify new data points, you just need to check which side of the decision boundary they fall on.

Figure 1.10 A decision boundary

SVMs proceed to find these boundaries in two steps:

1. The data is mapped to a new high-dimensional representation where the decision boundary can be expressed as a hyperplane (if the data was two-dimensional, as in figure 1.10, a hyperplane would be a straight line).
2. A good decision boundary (a separation hyperplane) is computed by trying to maximize the distance between the hyperplane and the closest data points from each class, a step called maximizing the margin. This allows the boundary to generalize well to new samples outside of the training dataset.

The technique of mapping data to a high-dimensional representation where a classification problem becomes simpler may look good on paper, but in practice it's often computationally intractable. That's where the kernel trick comes in (the key idea that kernel methods are named after). Here's the gist of it: to find good decision hyperplanes in the new representation space, you don't have to explicitly compute the coordinates of your points in the new space; you just need to compute the distance between pairs of points in that space, which can be done efficiently using a kernel function. A kernel function is a computationally tractable operation that maps any two points in your initial space to the distance between these points in your target representation space, completely bypassing the explicit computation of the new representation. Kernel functions are typically crafted by hand rather than learned from data; in the case of an SVM, only the separation hyperplane is learned.

At the time they were developed, SVMs exhibited state-of-the-art performance on simple classification problems and were one of the few machine-learning methods backed by extensive theory and amenable to serious mathematical analysis, making them well understood and easily interpretable. Because of these useful properties, SVMs became extremely popular in the field for a long time. But SVMs proved hard to scale to large datasets and didn't provide good results for perceptual problems such as image classification. Because an SVM is a shallow method, applying an SVM to perceptual problems requires first extracting useful representations manually (a step called feature engineering), which is difficult and brittle.

[1] Vladimir Vapnik and Corinna Cortes, "Support-Vector Networks," Machine Learning 20, no. 3 (1995): 273–297.
[2] Vladimir Vapnik and Alexey Chervonenkis, "A Note on One Class of Perceptrons," Automation and Remote Control 25 (1964).
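To make the kernel trick concrete, here is a minimal scikit-learn sketch (ours, not the book's; the dataset is a toy two-circles problem from sklearn.datasets) of an SVM that uses an RBF kernel function to find a nonlinear decision boundary:

from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two classes arranged as concentric circles: no straight line separates them.
X, y = make_circles(noise=0.1, factor=0.3, random_state=0)

model = SVC(kernel='rbf')  # the kernel function is applied to pairs of points
model.fit(X, y)
print(model.score(X, y))   # close to 1.0: a good decision boundary was found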
Decision trees, random forests, and gradient boosting machines

Decision trees are flowchart-like structures that let you classify input data points or predict output values given inputs (see figure 1.11). They're easy to visualize and interpret. Decision trees learned from data began to receive significant research interest in the 1990s, and by 2010 they were often preferred to kernel methods.

Figure 1.11 A decision tree: the parameters that are learned are the questions about the data. A question could be, for instance, "Is coefficient 2 in the data greater than 3.5?"

In particular, the Random Forest algorithm introduced a robust, practical take on decision-tree learning that involves building a large number of specialized decision trees and then ensembling their outputs. Random forests are applicable to a wide range of problems; you could say that they're almost always the second-best algorithm for any shallow machine-learning task. When the popular machine-learning competition website Kaggle (kaggle.com) got started in 2010, random forests quickly became a favorite on the platform, until 2014, when gradient boosting machines took over.
A gradient boosting machine, much like a random forest, is a machine-learning technique based on ensembling weak prediction models, generally decision trees. It uses gradient boosting, a way to improve any machine-learning model by iteratively training new models that specialize in addressing the weak points of the previous models. Applied to decision trees, the use of the gradient boosting technique results in models that strictly outperform random forests most of the time, while having similar properties. It may be one of the best, if not the best, algorithms for dealing with nonperceptual data today. Alongside deep learning, it's one of the most commonly used techniques in Kaggle competitions.
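Here is a minimal sketch (ours, not the book's; it assumes scikit-learn and its bundled breast-cancer dataset) of a gradient boosting machine in action; each new tree is trained to correct the mistakes of the ensemble built so far:

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An ensemble of 100 small decision trees, each one fit to the
# residual errors of the trees that came before it.
model = GradientBoostingClassifier(n_estimators=100)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))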
Back to neural networks

Around 2010, although neural networks were almost completely shunned by the scientific community at large, a number of people still working on neural networks started to make important breakthroughs: the groups of Geoffrey Hinton at the University of Toronto, Yoshua Bengio at the University of Montreal, Yann LeCun at New York University, and IDSIA in Switzerland.

In 2011, Dan Ciresan from IDSIA began to win academic image-classification competitions with GPU-trained deep neural networks: the first practical success of modern deep learning. But the watershed moment came in 2012, with the entry of Hinton's group in the yearly large-scale image-classification challenge ImageNet. The ImageNet challenge was notoriously difficult at the time, consisting of classifying high-resolution color images into 1,000 different categories after training on 1.4 million images. In 2011, the top-five accuracy of the winning model, based on classical approaches to computer vision, was only 74.3%. Then, in 2012, a team led by Alex Krizhevsky and advised by Geoffrey Hinton was able to achieve a top-five accuracy of 83.6%, a significant breakthrough. The competition has been dominated by deep convolutional neural networks every year since. By 2015, the winner reached an accuracy of 96.4%, and the classification task on ImageNet was considered to be a completely solved problem.

Since 2012, deep convolutional neural networks (convnets) have become the go-to algorithm for all computer vision tasks; more generally, they work on all perceptual tasks. At major computer vision conferences in 2015 and 2016, it was nearly impossible to find presentations that didn't involve convnets in some form. At the same time, deep learning has also found applications in many other types of problems, such as natural-language processing. It has completely replaced SVMs and decision trees in a wide range of applications. For instance, for several years, the European Organization for Nuclear Research, CERN, used decision tree-based methods for analysis of particle data from the ATLAS detector at the Large Hadron Collider (LHC); but CERN eventually switched to Keras-based deep neural networks due to their higher performance and ease of training on large datasets.

What makes deep learning different

The primary reason deep learning took off so quickly is that it offered better performance on many problems. But that's not the only reason. Deep learning also makes problem-solving much easier, because it completely automates what used to be the most crucial step in a machine-learning workflow: feature engineering.

Previous machine-learning techniques, shallow learning, only involved transforming the input data into one or two successive representation spaces, usually via simple transformations such as high-dimensional non-linear projections (SVMs) or decision trees. But the refined representations required by complex problems generally can't be attained by such techniques. As such, humans had to go to great lengths to make the initial input data more amenable to processing by these methods: they had to manually engineer good layers of representations for their data. This is called feature engineering. Deep learning, on the other hand, completely automates this step: with deep learning, you learn all features in one pass rather than having to engineer them yourself. This has greatly simplified machine-learning workflows, often replacing sophisticated multistage pipelines with a single, simple, end-to-end deep-learning model.

You may ask, if the crux of the issue is to have multiple successive layers of representations, could shallow methods be applied repeatedly to emulate the effects of deep learning? In practice, there are fast-diminishing returns to successive applications of shallow-learning methods, because the optimal first representation layer in a three-layer model isn't the optimal first layer in a one-layer or two-layer model. What is transformative about deep learning is that it allows a model to learn all layers of representation jointly, at the same time, rather than in succession (greedily, as it's called). With joint feature learning, whenever the model adjusts one of its internal features, all other features that depend on it automatically adapt to the change, without requiring human intervention: everything is supervised by a single feedback signal. Every change in the model serves the end goal. This is much more powerful than greedily stacking shallow models, because it allows for complex, abstract representations to be learned by breaking them down into long series of intermediate spaces (layers); each space is only a simple transformation away from the previous one.

These are the two essential characteristics of how deep learning learns from data: the incremental, layer-by-layer way in which increasingly complex representations are developed, and the fact that these intermediate incremental representations are learned jointly, each layer being updated to follow both the representational needs of the layer above and the needs of the layer below. Together, these two properties have made deep learning vastly more successful than previous approaches to machine learning.

The modern machine-learning landscape

A great way to get a sense of the current landscape of machine-learning algorithms and tools is to look at machine-learning competitions on Kaggle. Due to its highly competitive environment (some contests have thousands of entrants and million-dollar prizes) and to the wide variety of machine-learning problems covered, Kaggle offers a realistic way to assess what works and what doesn't. So, what kind of algorithm is reliably winning competitions? What tools do top entrants use?
In 2016 and 2017, Kaggle was dominated by two approaches: gradient boosting machines and deep learning. Specifically, gradient boosting is used for problems where structured data is available, whereas deep learning is used for perceptual problems such as image classification. Practitioners of the former almost always use the excellent XGBoost library, which offers support for the two most popular languages of data science: Python and R. Meanwhile, most of the Kaggle entrants using deep learning use the Keras library, due to its ease of use, flexibility, and support of Python.

These are the two techniques you should be the most familiar with in order to be successful in applied machine learning today: gradient boosting machines, for shallow-learning problems; and deep learning, for perceptual problems. In technical terms, this means you'll need to be familiar with XGBoost and Keras, the two libraries that currently dominate Kaggle competitions. With this book in hand, you're already one big step closer.
Why deep learning? Why now?

The two key ideas of deep learning for computer vision, convolutional neural networks and backpropagation, were already well understood in 1989. The Long Short-Term Memory (LSTM) algorithm, which is fundamental to deep learning for timeseries, was developed in 1997 and has barely changed since. So why did deep learning only take off after 2012? What changed in these two decades?

In general, three technical forces are driving advances in machine learning:

- Hardware
- Datasets and benchmarks
- Algorithmic advances

Because the field is guided by experimental findings rather than by theory, algorithmic advances only become possible when appropriate data and hardware are available to try new ideas (or scale up old ideas, as is often the case). Machine learning isn't mathematics or physics, where major advances can be done with a pen and a piece of paper. It's an engineering science.

The real bottlenecks throughout the 1990s and 2000s were data and hardware. But here's what happened during that time: the internet took off, and high-performance graphics chips were developed for the needs of the gaming market.

Hardware

Between 1990 and 2010, off-the-shelf CPUs became faster by a factor of approximately 5,000. As a result, nowadays it's possible to run small deep-learning models on your laptop, whereas this would have been intractable 25 years ago.

But typical deep-learning models used in computer vision or speech recognition require orders of magnitude more computational power than what your laptop can deliver. Throughout the 2000s, companies like NVIDIA and AMD have been investing billions of dollars in developing fast, massively parallel chips (graphical processing units [GPUs]) to power the graphics of increasingly photorealistic video games: cheap, single-purpose supercomputers designed to render complex scenes on your screen in real time. This investment came to benefit the scientific community when, in 2007, NVIDIA launched CUDA, a programming interface for its line of GPUs. A small number of GPUs started replacing massive clusters of CPUs in various highly parallelizable applications, beginning with physics modeling. Deep neural networks, consisting mostly of many small matrix multiplications, are also highly parallelizable; and around 2011, some researchers began to write CUDA implementations of neural nets. Dan Ciresan[3] and Alex Krizhevsky[4] were among the first.

[3] See "Flexible, High Performance Convolutional Neural Networks for Image Classification," Proceedings of the 22nd International Joint Conference on Artificial Intelligence (2011), www.ijcai.org.
[4] See "ImageNet Classification with Deep Convolutional Neural Networks," Advances in Neural Information Processing Systems 25 (2012).
What happened is that the gaming market subsidized supercomputing for the next generation of artificial-intelligence applications. Sometimes, big things begin as games. Today, the NVIDIA TITAN X, a gaming GPU that cost $1,000 at the end of 2015, can deliver a peak of 6.6 TFLOPS in single precision: 6.6 trillion float32 operations per second. That's about 350 times more than what you can get out of a modern laptop. On a TITAN X, it takes only a couple of days to train an ImageNet model of the sort that would have won the ILSVRC competition a few years ago.[5] Meanwhile, large companies train deep-learning models on clusters of hundreds of GPUs of a type developed specifically for the needs of deep learning, such as the NVIDIA Tesla K80. The sheer computational power of such clusters is something that would never have been possible without modern GPUs.

What's more, the deep-learning industry is starting to go beyond GPUs and is investing in increasingly specialized, efficient chips for deep learning. In 2016, at its annual I/O convention, Google revealed its Tensor Processing Unit (TPU) project: a new chip design developed from the ground up to run deep neural networks, which is reportedly 10 times faster and far more energy efficient than top-of-the-line GPUs.

Data

AI is sometimes heralded as the new industrial revolution. If deep learning is the steam engine of this revolution, then data is its coal: the raw material that powers our intelligent machines, without which nothing would be possible. When it comes to data, in addition to the exponential progress in storage hardware over the past 20 years (following Moore's law), the game changer has been the rise of the internet, making it feasible to collect and distribute very large datasets for machine learning. Today, large companies work with image datasets, video datasets, and natural-language datasets that couldn't have been collected without the internet. User-generated image tags on Flickr, for instance, have been a treasure trove of data for computer vision. So are YouTube videos. And Wikipedia is a key dataset for natural-language processing.

If there's one dataset that has been a catalyst for the rise of deep learning, it's the ImageNet dataset, consisting of 1.4 million images that have been hand annotated with 1,000 image categories (1 category per image). But what makes ImageNet special isn't just its large size, but also the yearly competition associated with it. As Kaggle has been demonstrating since 2010, public competitions are an excellent way to motivate researchers and engineers to push the envelope. Having common benchmarks that researchers compete to beat has greatly helped the recent rise of deep learning.

Algorithms

In addition to hardware and data, until the late 2000s, we were missing a reliable way to train very deep neural networks. As a result, neural networks were still fairly shallow, using only one or two layers of representations; thus, they weren't able to shine against more-refined shallow methods such as SVMs and random forests.

[5] The ImageNet Large Scale Visual Recognition Challenge (ILSVRC), www.image-net.org/challenges/LSVRC.
The key issue was that of gradient propagation through deep stacks of layers. The feedback signal used to train neural networks would fade away as the number of layers increased. This changed around 2009–2010, with the advent of several simple but important algorithmic improvements that allowed for better gradient propagation:

- Better activation functions for neural layers
- Better weight-initialization schemes, starting with layer-wise pretraining, which was quickly abandoned
- Better optimization schemes, such as RMSProp and Adam

Only when these improvements began to allow for training models with 10 or more layers did deep learning start to shine. Finally, in 2014, 2015, and 2016, even more advanced ways to help gradient propagation were discovered, such as batch normalization, residual connections, and depthwise separable convolutions. Today we can train from scratch models that are thousands of layers deep.
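In a modern library, these improvements are one-line choices. Here is a minimal Keras sketch (ours, not the book's; the layer sizes are arbitrary) that picks a relu activation, a principled weight-initialization scheme, and the rmsprop optimizer:

from keras import models, layers

model = models.Sequential()
# Better activation function: relu instead of a saturating sigmoid or tanh.
# Better weight initialization: the 'he_normal' scheme.
model.add(layers.Dense(64, activation='relu',
                       kernel_initializer='he_normal',
                       input_shape=(784,)))
model.add(layers.Dense(10, activation='softmax'))
# Better optimization scheme: rmsprop (adam would also work).
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])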
A new wave of investment

As deep learning became the new state of the art for computer vision in 2012–2013, and eventually for all perceptual tasks, industry leaders took note. What followed was a gradual wave of industry investment far beyond anything previously seen in the history of AI.

In 2011, right before deep learning took the spotlight, the total venture capital investment in AI was around $19 million, which went almost entirely to practical applications of shallow machine-learning approaches. By 2014, it had risen to a staggering $394 million. Dozens of startups launched in these three years, trying to capitalize on the deep-learning hype. Meanwhile, large tech companies such as Google, Facebook, Baidu, and Microsoft have invested in internal research departments in amounts that would most likely dwarf the flow of venture-capital money. Only a few numbers have surfaced: in 2013, Google acquired the deep-learning startup DeepMind for a reported $500 million, the largest acquisition of an AI company in history. In 2013, Baidu started a deep-learning research center in Silicon Valley, investing $300 million in the project. The deep-learning hardware startup Nervana Systems was acquired by Intel in 2016 for over $400 million.

Machine learning, in particular deep learning, has become central to the product strategy of these tech giants. In late 2015, Google CEO Sundar Pichai stated: "Machine learning is a core, transformative way by which we're rethinking how we're doing everything. We're thoughtfully applying it across all our products, be it search, ads, YouTube, or Play. And we're in early days, but you'll see us, in a systematic way, apply machine learning in all these areas."[6]

As a result of this wave of investment, the number of people working on deep learning went in just five years from a few hundred to tens of thousands, and research progress has reached a frenetic pace. There are currently no signs that this trend will slow any time soon.

The democratization of deep learning

One of the key factors driving this inflow of new faces in deep learning has been the democratization of the toolsets used in the field. In the early days, doing deep learning required significant C++ and CUDA expertise, which few people possessed. Nowadays, basic Python scripting skills suffice to do advanced deep-learning research. This has been driven most notably by the development of Theano and then TensorFlow, two symbolic tensor-manipulation frameworks for Python that support autodifferentiation, greatly simplifying the implementation of new models, and by the rise of user-friendly libraries such as Keras, which makes deep learning as easy as manipulating LEGO bricks. After its release in early 2015, Keras quickly became the go-to deep-learning solution for large numbers of new startups, graduate students, and researchers pivoting into the field.

Will it last?

Is there anything special about deep neural networks that makes them the "right" approach for companies to be investing in and for researchers to flock to? Or is deep learning just a fad that may not last? Will we still be using deep neural networks in 20 years?

Deep learning has several properties that justify its status as an AI revolution, and it's here to stay. We may not be using neural networks two decades from now, but whatever we use will directly inherit from modern deep learning and its core concepts. These important properties can be broadly sorted into three categories:

- Simplicity: Deep learning removes the need for feature engineering, replacing complex, brittle, engineering-heavy pipelines with simple, end-to-end trainable models that are typically built using only five or six different tensor operations.
- Scalability: Deep learning is highly amenable to parallelization on GPUs or TPUs, so it can take full advantage of Moore's law. In addition, deep-learning models are trained by iterating over small batches of data, allowing them to be trained on datasets of arbitrary size. (The only bottleneck is the amount of parallel computational power available, which, thanks to Moore's law, is a fast-moving barrier.)
- Versatility and reusability: Unlike many prior machine-learning approaches, deep-learning models can be trained on additional data without restarting from scratch, making them viable for continuous online learning, an important property for very large production models. Furthermore, trained deep-learning models are repurposable and thus reusable: for instance, it's possible to take a deep-learning model trained for image classification and drop it into a video-processing pipeline (see the sketch at the end of this section).

[6] Sundar Pichai, Alphabet earnings call, Oct. 22, 2015.
This allows us to reinvest previous work into increasingly complex and powerful models. It also makes deep learning applicable to fairly small datasets.

Deep learning has only been in the spotlight for a few years, and we haven't yet established the full scope of what it can do. With every passing month, we learn about new use cases and engineering improvements that lift previous limitations. Following a scientific revolution, progress generally follows a sigmoid curve: it starts with a period of fast progress, which gradually stabilizes as researchers hit hard limitations, and then further improvements become incremental. Deep learning, in 2017, seems to be in the first half of that sigmoid, with much more progress to come in the next few years.
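To make the reusability property concrete, here is a minimal Keras sketch (ours, not the book's): 'image_classifier.h5' is a hypothetical saved model, and the video tensor is a placeholder. A trained image classifier is applied, unchanged, to every frame of a video:

from keras.models import load_model
import numpy as np

model = load_model('image_classifier.h5')  # a hypothetical trained classifier
video = np.zeros((240, 224, 224, 3))       # 240 frames, treated as a batch of images
frame_predictions = model.predict(video)   # one prediction per frame
print(frame_predictions.shape)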
Before we begin: the mathematical building blocks of neural networks

This chapter covers:
- A first example of a neural network
- Tensors and tensor operations
- How neural networks learn via backpropagation and gradient descent

Understanding deep learning requires familiarity with many simple mathematical concepts: tensors, tensor operations, differentiation, gradient descent, and so on. Our goal in this chapter will be to build your intuition about these notions without getting overly technical. In particular, we'll steer away from mathematical notation, which can be off-putting for those without any mathematics background and isn't strictly necessary to explain things well.

To add some context for tensors and gradient descent, we'll begin the chapter with a practical example of a neural network. Then we'll go over every new concept that's been introduced, point by point.
Keep in mind that these concepts will be essential for you to understand the practical examples that will come in the following chapters. After reading this chapter, you'll have an intuitive understanding of how neural networks work, and you'll be able to move on to practical applications, which will start with chapter 3.
A first look at a neural network

Let's look at a concrete example of a neural network that uses the Python library Keras to learn to classify handwritten digits. Unless you already have experience with Keras or similar libraries, you won't understand everything about this first example right away. You probably haven't even installed Keras yet; that's fine. In the following sections, we'll review each element in the example and explain them in detail. So don't worry if some steps seem arbitrary or look like magic to you! We've got to start somewhere.

The problem we're trying to solve here is to classify grayscale images of handwritten digits (28 × 28 pixels) into their 10 categories (0 through 9). We'll use the MNIST dataset, a classic in the machine-learning community, which has been around almost as long as the field itself and has been intensively studied. It's a set of 60,000 training images, plus 10,000 test images, assembled by the National Institute of Standards and Technology (the NIST in MNIST) in the 1980s. You can think of "solving" MNIST as the "Hello World" of deep learning: it's what you do to verify that your algorithms are working as expected. As you become a machine-learning practitioner, you'll see MNIST come up over and over again, in scientific papers, blog posts, and so on. You can see some MNIST samples in figure 2.1.

Note on classes and labels: In machine learning, a category in a classification problem is called a class. Data points are called samples. The class associated with a specific sample is called a label.

Figure 2.1 MNIST sample digits

You don't need to try to reproduce this example on your machine just now. If you wish to, you'll first need to set up Keras, which is covered in section 3.3.

The MNIST dataset comes preloaded in Keras, in the form of a set of four Numpy arrays.

Listing 2.1 Loading the MNIST dataset in Keras

from keras.datasets import mnist

(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

train_images and train_labels form the training set: the data that the model will learn from. The model will then be tested on the test set: test_images and test_labels.
The images are encoded as Numpy arrays, and the labels are an array of digits, ranging from 0 to 9. The images and labels have a one-to-one correspondence.

Let's look at the training data:

>>> train_images.shape
(60000, 28, 28)
>>> len(train_labels)
60000
>>> train_labels
array([5, 0, 4, ..., 5, 6, 8], dtype=uint8)

And here's the test data:

>>> test_images.shape
(10000, 28, 28)
>>> len(test_labels)
10000
>>> test_labels
array([7, 2, 1, ..., 4, 5, 6], dtype=uint8)

The workflow will be as follows: first, we'll feed the neural network the training data, train_images and train_labels. The network will then learn to associate images and labels. Finally, we'll ask the network to produce predictions for test_images, and we'll verify whether these predictions match the labels from test_labels.

Let's build the network. Again, remember that you aren't expected to understand everything about this example yet.

Listing 2.2 The network architecture

from keras import models
from keras import layers

network = models.Sequential()
network.add(layers.Dense(512, activation='relu', input_shape=(28 * 28,)))
network.add(layers.Dense(10, activation='softmax'))

The core building block of neural networks is the layer, a data-processing module that you can think of as a filter for data. Some data goes in, and it comes out in a more useful form. Specifically, layers extract representations out of the data fed into them: hopefully, representations that are more meaningful for the problem at hand. Most of deep learning consists of chaining together simple layers that will implement a form of progressive data distillation. A deep-learning model is like a sieve for data processing, made of a succession of increasingly refined data filters: the layers.

Here, our network consists of a sequence of two Dense layers, which are densely connected (also called fully connected) neural layers. The second (and last) layer is a 10-way softmax layer, which means it will return an array of 10 probability scores (summing to 1). Each score will be the probability that the current digit image belongs to one of our 10 digit classes.
To make the network ready for training, we need to pick three more things, as part of the compilation step:

- A loss function: How the network will be able to measure its performance on the training data, and thus how it will be able to steer itself in the right direction.
- An optimizer: The mechanism through which the network will update itself based on the data it sees and its loss function.
- Metrics to monitor during training and testing: Here, we'll only care about accuracy (the fraction of the images that were correctly classified).

The exact purpose of the loss function and the optimizer will be made clear throughout the next two chapters.

Listing 2.3 The compilation step

network.compile(optimizer='rmsprop',
                loss='categorical_crossentropy',
                metrics=['accuracy'])

Before training, we'll preprocess the data by reshaping it into the shape the network expects and scaling it so that all values are in the [0, 1] interval. Previously, our training images, for instance, were stored in an array of shape (60000, 28, 28) of type uint8 with values in the [0, 255] interval. We transform it into a float32 array of shape (60000, 28 * 28) with values between 0 and 1.

Listing 2.4 Preparing the image data

train_images = train_images.reshape((60000, 28 * 28))
train_images = train_images.astype('float32') / 255

test_images = test_images.reshape((10000, 28 * 28))
test_images = test_images.astype('float32') / 255

We also need to categorically encode the labels, a step that's explained in chapter 3.

Listing 2.5 Preparing the labels

from keras.utils import to_categorical

train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
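To see what this encoding does, here is a minimal sketch (ours, not the book's; the helper name one_hot is our own) that mimics to_categorical: each integer label becomes an all-zeros vector with a 1 at the label's index:

import numpy as np

def one_hot(labels, num_classes=10):
    # Equivalent in spirit to keras.utils.to_categorical.
    encoded = np.zeros((len(labels), num_classes))
    for i, label in enumerate(labels):
        encoded[i, label] = 1.
    return encoded

print(one_hot([3, 0]))  # row 0 has a 1 at index 3; row 1 has a 1 at index 0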
We're now ready to train the network, which in Keras is done via a call to the network's fit method: we fit the model to its training data.

network.fit(train_images, train_labels, epochs=5, batch_size=128)

Epoch 1/5
60000/60000 [==============================] - 9s - loss: 0.2524 - acc: 0.9273
Epoch 2/5
51328/60000 [========================>.....] - ETA: 1s - loss: 0.1035 - acc: 0.9692

Two quantities are displayed during training: the loss of the network over the training data, and the accuracy of the network over the training data. We quickly reach an accuracy of 0.989 (98.9%) on the training data. Now let's check that the model performs well on the test set, too:

>>> test_loss, test_acc = network.evaluate(test_images, test_labels)
>>> print('test_acc:', test_acc)
test_acc: 0.9785

The test-set accuracy turns out to be 97.8%; that's quite a bit lower than the training-set accuracy. This gap between training accuracy and test accuracy is an example of overfitting: the fact that machine-learning models tend to perform worse on new data than on their training data. Overfitting is a central topic in chapter 3.

This concludes our first example: you just saw how you can build and train a neural network to classify handwritten digits in less than 20 lines of Python code. In the next sections, we'll go into detail about every moving piece we just previewed and clarify what's going on behind the scenes. You'll learn about tensors, the data-storing objects going into the network; tensor operations, which layers are made of; and gradient descent, which allows your network to learn from its training examples.
Data representations for neural networks

In the previous example, we started from data stored in multidimensional Numpy arrays, also called tensors. In general, all current machine-learning systems use tensors as their basic data structure. Tensors are fundamental to the field, so fundamental that Google's TensorFlow was named after them. So what's a tensor?

At its core, a tensor is a container for data, almost always numerical data. So, it's a container for numbers. You may be already familiar with matrices, which are 2D tensors: tensors are a generalization of matrices to an arbitrary number of dimensions (note that in the context of tensors, a dimension is often called an axis).

Scalars (0D tensors)

A tensor that contains only one number is called a scalar (or scalar tensor, or 0-dimensional tensor, or 0D tensor). In Numpy, a float32 or float64 number is a scalar tensor (or scalar array). You can display the number of axes of a Numpy tensor via the ndim attribute; a scalar tensor has 0 axes (ndim == 0). The number of axes of a tensor is also called its rank. Here's a Numpy scalar:

>>> import numpy as np
>>> x = np.array(12)
>>> x
array(12)
>>> x.ndim
0

Vectors (1D tensors)

An array of numbers is called a vector, or 1D tensor. A 1D tensor is said to have exactly one axis. Following is a Numpy vector:

>>> x = np.array([12, 3, 6, 14, 7])
>>> x
array([12, 3, 6, 14, 7])
>>> x.ndim
1

This vector has five entries and so is called a 5-dimensional vector. Don't confuse a 5D vector with a 5D tensor! A 5D vector has only one axis and has five dimensions along its axis, whereas a 5D tensor has five axes (and may have any number of dimensions along each axis). Dimensionality can denote either the number of entries along a specific axis (as in the case of our 5D vector) or the number of axes in a tensor (such as a 5D tensor), which can be confusing at times. In the latter case, it's technically more correct to talk about a tensor of rank 5 (the rank of a tensor being the number of axes), but the ambiguous notation 5D tensor is common regardless.

Matrices (2D tensors)

An array of vectors is a matrix, or 2D tensor. A matrix has two axes (often referred to as rows and columns). You can visually interpret a matrix as a rectangular grid of numbers. This is a Numpy matrix:
>>> x = np.array([[5, 78, 2, 34, 0],
...               [6, 79, 3, 35, 1],
...               [7, 80, 4, 36, 2]])
>>> x.ndim
2

The entries from the first axis are called the rows, and the entries from the second axis are called the columns. In the previous example, [5, 78, 2, 34, 0] is the first row of x, and [5, 6, 7] is the first column.

3D tensors and higher-dimensional tensors

If you pack such matrices in a new array, you obtain a 3D tensor, which you can visually interpret as a cube of numbers. Following is a Numpy 3D tensor:

>>> x = np.array([[[5, 78, 2, 34, 0],
...                [6, 79, 3, 35, 1],
...                [7, 80, 4, 36, 2]],
...               [[5, 78, 2, 34, 0],
...                [6, 79, 3, 35, 1],
...                [7, 80, 4, 36, 2]],
...               [[5, 78, 2, 34, 0],
...                [6, 79, 3, 35, 1],
...                [7, 80, 4, 36, 2]]])
>>> x.ndim
3

By packing 3D tensors in an array, you can create a 4D tensor, and so on. In deep learning, you'll generally manipulate tensors that are 0D to 4D, although you may go up to 5D if you process video data.

Key attributes

A tensor is defined by three key attributes:

- Number of axes (rank): For instance, a 3D tensor has three axes, and a matrix has two axes. This is also called the tensor's ndim in Python libraries such as Numpy.
- Shape: This is a tuple of integers that describes how many dimensions the tensor has along each axis. For instance, the previous matrix example has shape (3, 5), and the 3D tensor example has shape (3, 3, 5). A vector has a shape with a single element, such as (5,), whereas a scalar has an empty shape, ().
- Data type (usually called dtype in Python libraries): This is the type of the data contained in the tensor; for instance, a tensor's type could be float32, uint8, float64, and so on. On rare occasions, you may see a char tensor. Note that string tensors don't exist in Numpy (or in most other libraries), because tensors live in preallocated, contiguous memory segments, and strings, being variable length, would preclude the use of this implementation.
To make this more concrete, let's look back at the data we processed in the MNIST example. First, we load the MNIST dataset:

from keras.datasets import mnist

(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

Next, we display the number of axes of the tensor train_images, the ndim attribute:

>>> print(train_images.ndim)
3

Here's its shape:

>>> print(train_images.shape)
(60000, 28, 28)

And this is its data type, the dtype attribute:

>>> print(train_images.dtype)
uint8

So what we have here is a 3D tensor of 8-bit integers. More precisely, it's an array of 60,000 matrices of 28 × 28 integers. Each such matrix is a grayscale image, with coefficients between 0 and 255.

Let's display the fourth digit in this 3D tensor, using the library Matplotlib (part of the standard scientific Python suite); see figure 2.2.

Listing 2.6 Displaying the fourth digit

digit = train_images[4]

import matplotlib.pyplot as plt
plt.imshow(digit, cmap=plt.cm.binary)
plt.show()

Figure 2.2 The fourth sample in our dataset
Manipulating tensors in Numpy

In the previous example, we selected a specific digit alongside the first axis using the syntax train_images[i]. Selecting specific elements in a tensor is called tensor slicing. Let's look at the tensor-slicing operations you can do on Numpy arrays.

The following example selects digits #10 to #100 (#100 isn't included) and puts them in an array of shape (90, 28, 28):

>>> my_slice = train_images[10:100]
>>> print(my_slice.shape)
(90, 28, 28)

It's equivalent to this more detailed notation, which specifies a start index and stop index for the slice along each tensor axis. Note that : is equivalent to selecting the entire axis:

>>> my_slice = train_images[10:100, :, :]        # Equivalent to the previous example
>>> my_slice.shape
(90, 28, 28)
>>> my_slice = train_images[10:100, 0:28, 0:28]  # Also equivalent to the previous example
>>> my_slice.shape
(90, 28, 28)

In general, you may select between any two indices along each tensor axis. For instance, in order to select 14 × 14 pixels in the bottom-right corner of all images, you do this:

my_slice = train_images[:, 14:, 14:]

It's also possible to use negative indices. Much like negative indices in Python lists, they indicate a position relative to the end of the current axis. In order to crop the images to patches of 14 × 14 pixels centered in the middle, you do this:

my_slice = train_images[:, 7:-7, 7:-7]

The notion of data batches

In general, the first axis (axis 0, because indexing starts at 0) in all data tensors you'll come across in deep learning will be the samples axis (sometimes called the samples dimension). In the MNIST example, samples are images of digits.

In addition, deep-learning models don't process an entire dataset at once; rather, they break the data into small batches. Concretely, here's one batch of our MNIST digits, with batch size 128:

batch = train_images[:128]

And here's the next batch:

batch = train_images[128:256]

And the n-th batch:

batch = train_images[128 * n:128 * (n + 1)]
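This n-th-batch pattern generalizes into a small helper; here is a minimal sketch (ours, not the book's; get_batch is a name we introduce for illustration):

def get_batch(data, n, batch_size=128):
    # Returns the n-th batch along the samples axis (axis 0).
    return data[batch_size * n:batch_size * (n + 1)]

batch = get_batch(train_images, 10)  # same as train_images[1280:1408]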
When considering such a batch tensor, the first axis (axis 0) is called the batch axis or batch dimension. This is a term you'll frequently encounter when using Keras and other deep-learning libraries.

Real-world examples of data tensors

Let's make data tensors more concrete with a few examples similar to what you'll encounter later. The data you'll manipulate will almost always fall into one of the following categories:

- Vector data: 2D tensors of shape (samples, features)
- Timeseries data or sequence data: 3D tensors of shape (samples, timesteps, features)
- Images: 4D tensors of shape (samples, height, width, channels) or (samples, channels, height, width)
- Video: 5D tensors of shape (samples, frames, height, width, channels) or (samples, frames, channels, height, width)

Vector data

This is the most common case. In such a dataset, each single data point can be encoded as a vector, and thus a batch of data will be encoded as a 2D tensor (that is, an array of vectors), where the first axis is the samples axis and the second axis is the features axis.

Let's take a look at two examples:

- An actuarial dataset of people, where we consider each person's age, ZIP code, and income. Each person can be characterized as a vector of 3 values, and thus an entire dataset of 100,000 people can be stored in a 2D tensor of shape (100000, 3).
- A dataset of text documents, where we represent each document by the counts of how many times each word appears in it (out of a dictionary of 20,000 common words). Each document can be encoded as a vector of 20,000 values (one count per word in the dictionary), and thus an entire dataset of 500 documents can be stored in a tensor of shape (500, 20000).

Timeseries data or sequence data

Whenever time matters in your data (or the notion of sequence order), it makes sense to store it in a 3D tensor with an explicit time axis. Each sample can be encoded as a sequence of vectors (a 2D tensor), and thus a batch of data will be encoded as a 3D tensor (see figure 2.3).

Figure 2.3 A 3D timeseries data tensor
The time axis is always the second axis (axis of index 1), by convention. Let's look at a few examples:

- A dataset of stock prices. Every minute, we store the current price of the stock, the highest price in the past minute, and the lowest price in the past minute. Thus every minute is encoded as a 3D vector, an entire day of trading is encoded as a 2D tensor of shape (390, 3) (there are 390 minutes in a trading day), and 250 days' worth of data can be stored in a 3D tensor of shape (250, 390, 3). Here, each sample would be one day's worth of data.
- A dataset of tweets, where we encode each tweet as a sequence of 280 characters out of an alphabet of 128 unique characters. In this setting, each character can be encoded as a binary vector of size 128 (an all-zeros vector except for a 1 entry at the index corresponding to the character). Then each tweet can be encoded as a 2D tensor of shape (280, 128), and a dataset of 1 million tweets can be stored in a tensor of shape (1000000, 280, 128).

Image data

Images typically have three dimensions: height, width, and color depth. Although grayscale images (like our MNIST digits) have only a single color channel and could thus be stored in 2D tensors, by convention image tensors are always 3D, with a one-dimensional color channel for grayscale images. A batch of 128 grayscale images of size 256 x 256 could thus be stored in a tensor of shape (128, 256, 256, 1), and a batch of 128 color images could be stored in a tensor of shape (128, 256, 256, 3) (see figure 2.4).

(Figure 2.4: a 4D image data tensor, channels-first convention.)

There are two conventions for shapes of image tensors: the channels-last convention (used by TensorFlow) and the channels-first convention (used by Theano). The TensorFlow machine-learning framework, from Google, places the color-depth axis at the end: (samples, height, width, color_depth). Meanwhile, Theano places the color-depth axis right after the batch axis: (samples, color_depth, height, width). With
the Theano convention, the previous examples would become (128, 1, 256, 256) for the grayscale images and (128, 3, 256, 256) for the color images. The Keras framework provides support for both formats.

Video data

Video data is one of the few types of real-world data for which you'll need 5D tensors. A video can be understood as a sequence of frames, each frame being a color image. Because each frame can be stored in a 3D tensor (height, width, color_depth), a sequence of frames can be stored in a 4D tensor (frames, height, width, color_depth), and thus a batch of different videos can be stored in a 5D tensor of shape (samples, frames, height, width, color_depth).

For instance, a 60-second, 144 x 256 YouTube video clip sampled at 4 frames per second would have 240 frames. A batch of four such video clips would be stored in a tensor of shape (4, 240, 144, 256, 3). That's a total of 106,168,320 values! If the dtype of the tensor was float32, then each value would be stored in 32 bits, so the tensor would represent 405 MB. Heavy! Videos you encounter in real life are much lighter, because they aren't stored in float32, and they're typically compressed by a large factor (such as in the MPEG format).
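You can sanity-check those numbers with a few lines of NumPy; this is just arithmetic on the shape described above:

import numpy as np

video_batch_shape = (4, 240, 144, 256, 3)   # (samples, frames, height, width, channels)
n_values = int(np.prod(video_batch_shape))
print(n_values)                             # 106168320
print(n_values * 4 / 1024 ** 2)             # 405.0 MB at 4 bytes (32 bits) per value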
The gears of neural networks: tensor operations

Much as any computer program can be ultimately reduced to a small set of binary operations on binary inputs (AND, OR, NOR, and so on), all transformations learned by deep neural networks can be reduced to a handful of tensor operations applied to tensors of numeric data. For instance, it's possible to add tensors, multiply tensors, and so on.

In our initial example, we were building our network by stacking Dense layers on top of each other. A Keras layer instance looks like this:

keras.layers.Dense(512, activation='relu')

This layer can be interpreted as a function, which takes as input a 2D tensor and returns another 2D tensor: a new representation for the input tensor. Specifically, the function is as follows (where W is a 2D tensor and b is a vector, both attributes of the layer):

output = relu(dot(W, input) + b)

Let's unpack this. We have three tensor operations here: a dot product (dot) between the input tensor and a tensor named W; an addition (+) between the resulting 2D tensor and a vector b; and, finally, a relu operation, where relu(x) is max(x, 0).

NOTE: Although this section deals entirely with linear algebra expressions, you won't find any mathematical notation here. I've found that mathematical concepts can be more readily mastered by programmers with no mathematical background if they're expressed as short Python snippets instead of mathematical equations. So we'll use NumPy code throughout.

Element-wise operations

The relu operation and addition are element-wise operations: operations that are applied independently to each entry in the tensors being considered. This means these operations are highly amenable to massively parallel implementations (vectorized implementations, a term that comes from the vector processor supercomputer architecture from the 1970-1990 period). If you want to write a naive Python implementation of an element-wise operation, you use a for loop, as in this naive implementation of an element-wise relu operation:

def naive_relu(x):
    assert len(x.shape) == 2              # x is a 2D NumPy tensor.
    x = x.copy()                          # Avoid overwriting the input tensor.
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            x[i, j] = max(x[i, j], 0)
    return x
You do the same for addition:

def naive_add(x, y):
    assert len(x.shape) == 2              # x and y are 2D NumPy tensors.
    assert x.shape == y.shape
    x = x.copy()                          # Avoid overwriting the input tensor.
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            x[i, j] += y[i, j]
    return x

On the same principle, you can do element-wise multiplication, subtraction, and so on.

In practice, when dealing with NumPy arrays, these operations are available as well-optimized built-in NumPy functions, which themselves delegate the heavy lifting to a Basic Linear Algebra Subprograms (BLAS) implementation if you have one installed (which you should). BLAS are low-level, highly parallel, efficient tensor-manipulation routines that are typically implemented in Fortran or C.

So, in NumPy, you can do the following element-wise operations, and they will be blazing fast:

import numpy as np

z = x + y                 # Element-wise addition
z = np.maximum(z, 0.)     # Element-wise relu

Broadcasting

Our earlier naive implementation of naive_add only supports the addition of 2D tensors with identical shapes. But in the Dense layer introduced earlier, we added a 2D tensor with a vector. What happens with addition when the shapes of the two tensors being added differ?

When possible, and if there's no ambiguity, the smaller tensor will be broadcast to match the shape of the larger tensor. Broadcasting consists of two steps:

1. Axes (called broadcast axes) are added to the smaller tensor to match the ndim of the larger tensor.
2. The smaller tensor is repeated alongside these new axes to match the full shape of the larger tensor.

Let's look at a concrete example. Consider X with shape (32, 10) and y with shape (10,). First, we add an empty first axis to y, whose shape becomes (1, 10). Then, we repeat y 32 times alongside this new axis, so that we end up with a tensor Y with shape (32, 10), where Y[i, :] == y for i in range(0, 32). At this point, we can proceed to add X and Y, because they have the same shape.

In terms of implementation, no new 2D tensor is created, because that would be terribly inefficient. The repetition operation is entirely virtual: it happens at the algorithmic level rather than at the memory level. But thinking of the vector being
repeated 32 times alongside a new axis is a helpful mental model. Here's what a naive implementation would look like:

def naive_add_matrix_and_vector(x, y):
    assert len(x.shape) == 2              # x is a 2D NumPy tensor.
    assert len(y.shape) == 1              # y is a NumPy vector.
    assert x.shape[1] == y.shape[0]
    x = x.copy()                          # Avoid overwriting the input tensor.
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            x[i, j] += y[j]
    return x

With broadcasting, you can generally apply two-tensor element-wise operations if one tensor has shape (a, b, ..., n, n + 1, ..., m) and the other has shape (n, n + 1, ..., m). The broadcasting will then happen automatically for axes a through n - 1.

The following example applies the element-wise maximum operation to two tensors of different shapes via broadcasting:

import numpy as np

x = np.random.random((64, 3, 32, 10))    # x is a random tensor with shape (64, 3, 32, 10).
y = np.random.random((32, 10))           # y is a random tensor with shape (32, 10).
z = np.maximum(x, y)                     # The output z has shape (64, 3, 32, 10), like x.

Tensor dot

The dot operation, also called tensor product (not to be confused with an element-wise product), is the most common, most useful tensor operation. Contrary to element-wise operations, it combines entries in the input tensors.

An element-wise product is done with the * operator in NumPy, Keras, Theano, and TensorFlow. dot uses a different syntax in TensorFlow, but in both NumPy and Keras it's done using the standard dot operator:

import numpy as np

z = np.dot(x, y)

In mathematical notation, you'd note the operation with a dot (.): z = x . y.

Mathematically, what does the dot operation do? Let's start with the dot product of two vectors x and y. It's computed as follows:

def naive_vector_dot(x, y):
    assert len(x.shape) == 1              # x and y are NumPy vectors.
    assert len(y.shape) == 1
    assert x.shape[0] == y.shape[0]
    z = 0.
    for i in range(x.shape[0]):
        z += x[i] * y[i]
    return z

You'll have noticed that the dot product between two vectors is a scalar and that only vectors with the same number of elements are compatible for a dot product.

You can also take the dot product between a matrix x and a vector y, which returns a vector where the coefficients are the dot products between y and the rows of x. You implement it as follows:

import numpy as np

def naive_matrix_vector_dot(x, y):
    assert len(x.shape) == 2              # x is a NumPy matrix.
    assert len(y.shape) == 1              # y is a NumPy vector.
    assert x.shape[1] == y.shape[0]       # The first dimension of x must be the same as the 0th dimension of y!
    z = np.zeros(x.shape[0])              # This operation returns a vector of 0s with x.shape[0] entries.
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            z[i] += x[i, j] * y[j]
    return z

You could also reuse the code we wrote previously, which highlights the relationship between a matrix-vector product and a vector product:

def naive_matrix_vector_dot(x, y):
    z = np.zeros(x.shape[0])
    for i in range(x.shape[0]):
        z[i] = naive_vector_dot(x[i, :], y)
    return z

Note that as soon as one of the two tensors has an ndim greater than 1, dot is no longer symmetric, which is to say that dot(x, y) isn't the same as dot(y, x).

Of course, a dot product generalizes to tensors with an arbitrary number of axes. The most common applications may be the dot product between two matrices. You can take the dot product of two matrices x and y (dot(x, y)) if and only if x.shape[1] == y.shape[0]. The result is a matrix with shape (x.shape[0], y.shape[1]), where the coefficients are the vector products between the rows of x and the columns of y. Here's the naive implementation:

def naive_matrix_dot(x, y):
    assert len(x.shape) == 2                  # x and y are NumPy matrices.
    assert len(y.shape) == 2
    assert x.shape[1] == y.shape[0]           # The first dimension of x must be the same as the 0th dimension of y!
    z = np.zeros((x.shape[0], y.shape[1]))    # This operation returns a matrix of 0s with shape (x.shape[0], y.shape[1]).
    for i in range(x.shape[0]):               # Iterates over the rows of x...
        for j in range(y.shape[1]):           # ...and over the columns of y.
            row_x = x[i, :]
            column_y = y[:, j]
            z[i, j] = naive_vector_dot(row_x, column_y)
    return z
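In practice you'd use np.dot rather than these naive loops. Here's a quick self-contained check (with shapes of my own choosing) that the shape rule and the row-by-column definition hold:

import numpy as np

x = np.random.random((3, 4))
y = np.random.random((4, 2))
z = np.dot(x, y)
print(z.shape)                                          # (3, 2): (x.shape[0], y.shape[1])
print(np.allclose(z[0, 0], np.dot(x[0, :], y[:, 0])))   # True: each coefficient is a row-by-column vector product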
To understand dot-product shape compatibility, it helps to visualize the input and output tensors by aligning them as shown in figure 2.5.

(Figure 2.5: matrix dot-product box diagram. x, y, and z are pictured as rectangles, literal boxes of coefficients: x with shape (a, b), y with shape (b, c), and z = x . y with shape (a, c), where z[i, j] combines a row of x with a column of y.)

Because the rows of x and the columns of y must have the same size, it follows that the width of x must match the height of y. If you go on to develop new machine-learning algorithms, you'll likely be drawing such diagrams often.

More generally, you can take the dot product between higher-dimensional tensors, following the same rules for shape compatibility as outlined earlier for the 2D case:

(a, b, c, d) . (d,) -> (a, b, c)
(a, b, c, d) . (d, e) -> (a, b, c, e)

And so on.

Tensor reshaping

A third type of tensor operation that's essential to understand is tensor reshaping. Although it wasn't used in the Dense layers in our first neural network example, we used it when we preprocessed the digits data before feeding it into our network:

train_images = train_images.reshape((60000, 28 * 28))

Reshaping a tensor means rearranging its rows and columns to match a target shape. Naturally, the reshaped tensor has the same total number of coefficients as the initial tensor. Reshaping is best understood via simple examples:

>>> x = np.array([[0., 1.],
...               [2., 3.],
...               [4., 5.]])
>>> print(x.shape)
(3, 2)
>>> x = x.reshape((6, 1))
>>> x
array([[ 0.],
       [ 1.],
       [ 2.],
       [ 3.],
       [ 4.],
       [ 5.]])
>>> x = x.reshape((2, 3))
>>> x
array([[ 0.,  1.,  2.],
       [ 3.,  4.,  5.]])

A special case of reshaping that's commonly encountered is transposition. Transposing a matrix means exchanging its rows and its columns, so that x[i, :] becomes x[:, i]:

>>> x = np.zeros((300, 20))     # Creates an all-zeros matrix of shape (300, 20)
>>> x = np.transpose(x)
>>> print(x.shape)
(20, 300)

Geometric interpretation of tensor operations

Because the contents of the tensors manipulated by tensor operations can be interpreted as coordinates of points in some geometric space, all tensor operations have a geometric interpretation. For instance, let's consider addition. We'll start with the following vector:

A = [0.5, 1]

It's a point in a 2D space (see figure 2.6). It's common to picture a vector as an arrow linking the origin to the point, as shown in figure 2.7.

(Figures 2.6 and 2.7: a point in a 2D space, first as a point, then pictured as an arrow from the origin.)
Let's consider a new point, B = [1, 0.25], which we'll add to the previous one. This is done geometrically by chaining together the vector arrows, with the resulting location being the vector representing the sum of the previous two vectors (see figure 2.8).

(Figure 2.8: geometric interpretation of the sum of two vectors.)

In general, elementary geometric operations such as affine transformations, rotations, scaling, and so on can be expressed as tensor operations. For instance, a rotation of a 2D vector by an angle theta can be achieved via a dot product with a 2 x 2 matrix R = [u, v], where u and v are both vectors of the plane: u = [cos(theta), sin(theta)] and v = [-sin(theta), cos(theta)] (see the short sketch at the end of this section).

A geometric interpretation of deep learning

You just learned that neural networks consist entirely of chains of tensor operations and that all of these tensor operations are just geometric transformations of the input data. It follows that you can interpret a neural network as a very complex geometric transformation in a high-dimensional space, implemented via a long series of simple steps.

In 3D, the following mental image may prove useful. Imagine two sheets of colored paper: one red and one blue. Put one on top of the other. Now crumple them together into a small ball. That crumpled paper ball is your input data, and each sheet of paper is a class of data in a classification problem. What a neural network (or any other machine-learning model) is meant to do is figure out a transformation of the paper ball that would uncrumple it, so as to make the two classes cleanly separable again. With deep learning, this would be implemented as a series of simple transformations of the 3D space, such as those you could apply on the paper ball with your fingers, one movement at a time.

(Figure 2.9: uncrumpling a complicated manifold of data.)
Uncrumpling paper balls is what machine learning is about: finding neat representations for complex, highly folded data manifolds. At this point, you should have a pretty good intuition as to why deep learning excels at this: it takes the approach of incrementally decomposing a complicated geometric transformation into a long chain of elementary ones, which is pretty much the strategy a human would follow to uncrumple a paper ball. Each layer in a deep network applies a transformation that disentangles the data a little, and a deep stack of layers makes tractable an extremely complicated disentanglement process.
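Coming back to the rotation example from earlier in this section, here's a minimal NumPy sketch (the angle and the point are my own choices, not from the book) of rotating a 2D point via a dot product:

import numpy as np

theta = np.pi / 2                                  # Rotate by 90 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],     # Columns are u = [cos, sin] and v = [-sin, cos]
              [np.sin(theta),  np.cos(theta)]])
point = np.array([1., 0.])
print(np.dot(R, point))                            # ~[0., 1.]: the point rotated counterclockwise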
The engine of neural networks: gradient-based optimization

As you saw in the previous section, each neural layer from our first network example transforms its input data as follows:

output = relu(dot(W, input) + b)

In this expression, W and b are tensors that are attributes of the layer. They're called the weights or trainable parameters of the layer (the kernel and bias attributes, respectively). These weights contain the information learned by the network from exposure to training data.

Initially, these weight matrices are filled with small random values (a step called random initialization). Of course, there's no reason to expect that relu(dot(W, input) + b), when W and b are random, will yield any useful representations. The resulting representations are meaningless, but they're a starting point. What comes next is to gradually adjust these weights, based on a feedback signal. This gradual adjustment, also called training, is basically the learning that machine learning is all about.

This happens within what's called a training loop, which works as follows. Repeat these steps in a loop, as long as necessary:

1. Draw a batch of training samples x and corresponding targets y.
2. Run the network on x (a step called the forward pass) to obtain predictions y_pred.
3. Compute the loss of the network on the batch, a measure of the mismatch between y_pred and y.
4. Update all weights of the network in a way that slightly reduces the loss on this batch.

You'll eventually end up with a network that has a very low loss on its training data: a low mismatch between predictions y_pred and expected targets y. The network has "learned" to map its inputs to correct targets. From afar, it may look like magic, but when you reduce it to elementary steps, it turns out to be simple.

Step 1 sounds easy enough: just I/O code. Steps 2 and 3 are merely the application of a handful of tensor operations, so you could implement these steps purely from what you learned in the previous section. The difficult part is step 4: updating the network's weights. Given an individual weight coefficient in the network, how can you compute whether the coefficient should be increased or decreased, and by how much?

One naive solution would be to freeze all weights in the network except the one scalar coefficient being considered, and try different values for this coefficient. Let's say the initial value of the coefficient is 0.3. After the forward pass on a batch of data, the loss of the network on the batch is 0.5. If you change the coefficient's value to 0.35 and rerun the forward pass, the loss increases to 0.6. But if you lower the coefficient to 0.25, the loss falls to 0.4. In this case, it seems that updating the coefficient by -0.05
would contribute to minimizing the loss. This would have to be repeated for all coefficients in the network.

But such an approach would be horribly inefficient, because you'd need to compute two forward passes (which are expensive) for every individual coefficient (of which there are many, usually thousands and sometimes up to millions). A much better approach is to take advantage of the fact that all operations used in the network are differentiable, and compute the gradient of the loss with regard to the network's coefficients. You can then move the coefficients in the opposite direction from the gradient, thus decreasing the loss.

If you already know what differentiable means and what a gradient is, you can skip to section 2.4.3. Otherwise, the following two sections will help you understand these concepts.

What's a derivative?

Consider a continuous, smooth function f(x) = y, mapping a real number x to a new real number y. Because the function is continuous, a small change in x can only result in a small change in y; that's the intuition behind continuity. Let's say you increase x by a small factor epsilon_x: this results in a small epsilon_y change to y:

f(x + epsilon_x) = y + epsilon_y

In addition, because the function is smooth (its curve doesn't have any abrupt angles), when epsilon_x is small enough, around a certain point p, it's possible to approximate f as a linear function of slope a, so that epsilon_y becomes a * epsilon_x:

f(x + epsilon_x) = y + a * epsilon_x

Obviously, this linear approximation is valid only when x is close enough to p.

The slope a is called the derivative of f in p. If a is negative, it means a small change of x around p will result in a decrease of f(x) (as shown in figure 2.10); and if a is positive, a small change in x will result in an increase of f(x). Further, the absolute value of a (the magnitude of the derivative) tells you how quickly this increase or decrease will happen.

(Figure 2.10: the derivative of f in p: the local linear approximation of f, with slope a.)

For every differentiable function f(x) (differentiable means "can be derived": for example, smooth, continuous functions can be derived), there exists a derivative function f'(x) that maps values of x to the slope of the local linear approximation of f in those
points. For instance, the derivative of cos(x) is -sin(x), the derivative of f(x) = a * x is f'(x) = a, and so on.

If you're trying to update x by a factor epsilon_x in order to minimize f(x), and you know the derivative of f, then your job is done: the derivative completely describes how f(x) evolves as you change x. If you want to reduce the value of f(x), you just need to move x a little in the opposite direction from the derivative.

Derivative of a tensor operation: the gradient

A gradient is the derivative of a tensor operation. It's the generalization of the concept of derivatives to functions of multidimensional inputs: that is, to functions that take tensors as inputs.

Consider an input vector x, a matrix W, a target y, and a loss function loss. You can use W to compute a target candidate y_pred, and compute the loss, or mismatch, between the target candidate y_pred and the target y:

y_pred = dot(W, x)
loss_value = loss(y_pred, y)

If the data inputs x and y are frozen, then this can be interpreted as a function mapping values of W to loss values:

loss_value = f(W)

Let's say the current value of W is W0. Then the derivative of f in the point W0 is a tensor gradient(f)(W0), with the same shape as W, where each coefficient gradient(f)(W0)[i, j] indicates the direction and magnitude of the change in loss_value you observe when modifying W0[i, j]. That tensor gradient(f)(W0) is the gradient of the function f(W) = loss_value in W0.

You saw earlier that the derivative of a function f(x) of a single coefficient can be interpreted as the slope of the curve of f. Likewise, gradient(f)(W0) can be interpreted as the tensor describing the curvature of f(W) around W0.

For this reason, in much the same way that, for a function f(x), you can reduce the value of f(x) by moving x a little in the opposite direction from the derivative, with a function f(W) of a tensor, you can reduce f(W) by moving W in the opposite direction from the gradient: for example, W1 = W0 - step * gradient(f)(W0) (where step is a small scaling factor). That means going against the curvature, which intuitively should put you lower on the curve. Note that the scaling factor step is needed because gradient(f)(W0) only approximates the curvature when you're close to W0, so you don't want to get too far from W0.

Stochastic gradient descent

Given a differentiable function, it's theoretically possible to find its minimum analytically: it's known that a function's minimum is a point where the derivative is 0, so all you have to do is find all the points where the derivative goes to 0 and check for which of these points the function has the lowest value.
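Before we continue, here's a small finite-difference sketch (entirely mine, not the book's; real frameworks compute gradients analytically, not numerically) that makes the tensor-gradient idea concrete: nudge each coefficient of W, watch how the loss changes, then step against the resulting gradient:

import numpy as np

x = np.random.random(4)                   # Frozen data input
y = np.random.random(2)                   # Frozen target
W0 = np.random.random((2, 4))

def f(W):                                 # loss_value = f(W), with x and y frozen
    y_pred = np.dot(W, x)
    return np.sum((y_pred - y) ** 2)

eps = 1e-6
grad = np.zeros_like(W0)
for i in range(W0.shape[0]):              # Each coefficient of grad estimates how
    for j in range(W0.shape[1]):          # loss_value changes when W0[i, j] is nudged.
        W_eps = W0.copy()
        W_eps[i, j] += eps
        grad[i, j] = (f(W_eps) - f(W0)) / eps

W1 = W0 - 0.1 * grad                      # Move a little against the gradient...
print(f(W0), f(W1))                       # ...and the loss goes down.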
Applied to a neural network, finding the minimum analytically means finding the combination of weight values that yields the smallest possible loss. This can be done by solving the equation gradient(f)(W) = 0 for W. This is a polynomial equation of N variables, where N is the number of coefficients in the network. Although it would be possible to solve such an equation for N = 2 or N = 3, doing so is intractable for real neural networks, where the number of parameters is never less than a few thousand and can often be several tens of millions.

Instead, you can use the four-step algorithm outlined at the beginning of this section: modify the parameters little by little based on the current loss value on a random batch of data. Because you're dealing with a differentiable function, you can compute its gradient, which gives you an efficient way to implement step 4. If you update the weights in the opposite direction from the gradient, the loss will be a little less every time:

1. Draw a batch of training samples x and corresponding targets y.
2. Run the network on x to obtain predictions y_pred.
3. Compute the loss of the network on the batch, a measure of the mismatch between y_pred and y.
4. Compute the gradient of the loss with regard to the network's parameters (a backward pass).
5. Move the parameters a little in the opposite direction from the gradient, for example W -= step * gradient, thus reducing the loss on the batch a bit.

Easy enough! What I just described is called mini-batch stochastic gradient descent (mini-batch SGD). The term stochastic refers to the fact that each batch of data is drawn at random (stochastic is a scientific synonym of random). Figure 2.11 illustrates what happens in 1D, when the network has only one parameter and you have only one training sample.

(Figure 2.11: SGD down a 1D loss curve, one learnable parameter. Starting from a random point, the parameter is moved at each iteration by a step, also called the learning rate, down the curve.)
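Here's a minimal runnable version of this loop for a toy one-parameter problem (the data, the model, and the learning rate are all my own choices, not the book's):

import numpy as np

# Fit a single parameter w so that w * x approximates y = 3 * x.
x = np.random.random(100)
y = 3.0 * x

w = 0.0                                       # Naive initialization
step = 0.1                                    # Learning rate

for iteration in range(200):
    y_pred = w * x                            # Step 2: forward pass
    loss = np.mean((y_pred - y) ** 2)         # Step 3: mean squared error on the batch
    gradient = np.mean(2 * (y_pred - y) * x)  # Step 4: dloss/dw, derived analytically
    w -= step * gradient                      # Step 5: move against the gradient

print(w)                                      # ~3.0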
As you can see in figure 2.11, intuitively it's important to pick a reasonable value for the step factor. If it's too small, the descent down the curve will take many iterations, and it could get stuck in a local minimum. If step is too large, your updates may end up taking you to completely random locations on the curve.

Note that a variant of the mini-batch SGD algorithm would be to draw a single sample and target at each iteration, rather than drawing a batch of data. This would be true SGD (as opposed to mini-batch SGD). Alternatively, going to the opposite extreme, you could run every step on all data available, which is called batch SGD. Each update would then be more accurate, but far more expensive. The efficient compromise between these two extremes is to use mini-batches of reasonable size.

Although figure 2.11 illustrates gradient descent in a 1D parameter space, in practice you'll use gradient descent in highly dimensional spaces: every weight coefficient in a neural network is a free dimension in the space, and there may be tens of thousands or even millions of them. To help you build intuition about loss surfaces, you can also visualize gradient descent along a 2D loss surface, as shown in figure 2.12. But you can't possibly visualize what the actual process of training a neural network looks like; you can't represent a 1,000,000-dimensional space in a way that makes sense to humans. As such, it's good to keep in mind that the intuitions you develop through these low-dimensional representations may not always be accurate in practice. This has historically been a source of issues in the world of deep-learning research.

(Figure 2.12: gradient descent down a 2D loss surface, two learnable parameters, from a starting point to a final point.)

Additionally, there exist multiple variants of SGD that differ by taking into account previous weight updates when computing the next weight update, rather than just looking at the current value of the gradients. There is, for instance, SGD with momentum, as well as Adagrad, RMSProp, and several others. Such variants are known as optimization methods or optimizers. In particular, the concept of momentum, which is used in many of these variants, deserves your attention. Momentum addresses two issues with SGD: convergence speed and local minima. Consider figure 2.13, which shows the curve of a loss as a function of a network parameter.
(Figure 2.13: a local minimum and a global minimum along a loss curve.)

As you can see, around a certain parameter value, there is a local minimum: around that point, moving left would result in the loss increasing, but so would moving right. If the parameter under consideration were being optimized via SGD with a small learning rate, then the optimization process would get stuck at the local minimum instead of making its way to the global minimum.

You can avoid such issues by using momentum, which draws inspiration from physics. A useful mental image here is to think of the optimization process as a small ball rolling down the loss curve. If it has enough momentum, the ball won't get stuck in a ravine and will end up at the global minimum. Momentum is implemented by moving the ball at each step based not only on the current slope value (current acceleration) but also on the current velocity (resulting from past acceleration). In practice, this means updating the parameter w based not only on the current gradient value but also on the previous parameter update, such as in this naive implementation:

past_velocity = 0.
momentum = 0.1                    # Constant momentum factor
while loss > 0.01:                # Optimization loop
    w, loss, gradient = get_current_parameters()
    velocity = past_velocity * momentum + learning_rate * gradient
    w = w + momentum * velocity - learning_rate * gradient
    past_velocity = velocity
    update_parameter(w)

Chaining derivatives: the Backpropagation algorithm

In the previous algorithm, we casually assumed that because a function is differentiable, we can explicitly compute its derivative. In practice, a neural-network function consists of many tensor operations chained together, each of which has a simple, known derivative. For instance, this is a network f composed of three tensor operations, a, b, and c, with weight matrices W1, W2, and W3:

f(W1, W2, W3) = a(W1, b(W2, c(W3)))

Calculus tells us that such a chain of functions can be derived using the following identity, called the chain rule: f(g(x))' = f'(g(x)) * g'(x). Applying the chain rule to the computation of the gradient values of a neural network gives rise to an algorithm
called Backpropagation (also sometimes called reverse-mode differentiation). Backpropagation starts with the final loss value and works backward from the top layers to the bottom layers, applying the chain rule to compute the contribution that each parameter had in the loss value.

Nowadays, and for years to come, people will implement networks in modern frameworks that are capable of symbolic differentiation, such as TensorFlow. This means that, given a chain of operations with a known derivative, they can compute a gradient function for the chain (by applying the chain rule) that maps network parameter values to gradient values. When you have access to such a function, the backward pass is reduced to a call to this gradient function. Thanks to symbolic differentiation, you'll never have to implement the Backpropagation algorithm by hand. For this reason, we won't waste your time and your focus on deriving the exact formulation of the Backpropagation algorithm in these pages. All you need is a good understanding of how gradient-based optimization works.
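If you want to convince yourself that the chain rule works, here's a short numeric check (a sketch of mine, not from the book):

import numpy as np

# h(x) = f(g(x)) with f(u) = u ** 2 and g(x) = sin(x),
# so the chain rule gives h'(x) = f'(g(x)) * g'(x) = 2 * sin(x) * cos(x).
def h(x):
    return np.sin(x) ** 2

x = 0.7
analytic = 2 * np.sin(x) * np.cos(x)              # Chain-rule derivative
eps = 1e-6
numeric = (h(x + eps) - h(x - eps)) / (2 * eps)   # Finite-difference estimate
print(analytic, numeric)                          # The two values agree closely.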
Looking back at our first example

You've reached the end of this chapter, and you should now have a general understanding of what's going on behind the scenes in a neural network. Let's go back to the first example and review each piece of it in the light of what you've learned in the previous three sections.

This was the input data:

(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = train_images.reshape((60000, 28 * 28))
train_images = train_images.astype('float32') / 255
test_images = test_images.reshape((10000, 28 * 28))
test_images = test_images.astype('float32') / 255

Now you understand that the input images are stored in NumPy tensors, which are here formatted as float32 tensors of shape (60000, 784) (training data) and (10000, 784) (test data), respectively.

This was our network:

network = models.Sequential()
network.add(layers.Dense(512, activation='relu', input_shape=(28 * 28,)))
network.add(layers.Dense(10, activation='softmax'))

Now you understand that this network consists of a chain of two Dense layers, that each layer applies a few simple tensor operations to the input data, and that these operations involve weight tensors. Weight tensors, which are attributes of the layers, are where the knowledge of the network persists.

This was the network-compilation step:

network.compile(optimizer='rmsprop',
                loss='categorical_crossentropy',
                metrics=['accuracy'])

Now you understand that categorical_crossentropy is the loss function that's used as a feedback signal for learning the weight tensors, and which the training phase will attempt to minimize. You also know that this reduction of the loss happens via mini-batch stochastic gradient descent. The exact rules governing a specific use of gradient descent are defined by the rmsprop optimizer passed as the first argument.

Finally, this was the training loop:

network.fit(train_images, train_labels, epochs=5, batch_size=128)

Now you understand what happens when you call fit: the network will start to iterate on the training data in mini-batches of 128 samples, 5 times over (each iteration over all the training data is called an epoch). At each iteration, the network will compute the gradients of the weights with regard to the loss on the batch, and update the weights
accordingly. After these 5 epochs, the network will have performed 2,345 gradient updates (469 per epoch), and the loss of the network will be sufficiently low that the network will be capable of classifying handwritten digits with high accuracy.

At this point, you already know most of what there is to know about neural networks.
Chapter summary

- Learning means finding a combination of model parameters that minimizes a loss function for a given set of training data samples and their corresponding targets.
- Learning happens by drawing random batches of data samples and their targets, and computing the gradient of the network parameters with respect to the loss on the batch. The network parameters are then moved a bit (the magnitude of the move is defined by the learning rate) in the opposite direction from the gradient.
- The entire learning process is made possible by the fact that neural networks are chains of differentiable tensor operations, and thus it's possible to apply the chain rule of derivation to find the gradient function mapping the current parameters and current batch of data to a gradient value.
- Two key concepts you'll see frequently in future chapters are loss and optimizers. These are the two things you need to define before you begin feeding data into a network.
- The loss is the quantity you'll attempt to minimize during training, so it should represent a measure of success for the task you're trying to solve.
- The optimizer specifies the exact way in which the gradient of the loss will be used to update parameters: for instance, it could be the RMSProp optimizer, SGD with momentum, and so on.
Getting started with neural networks

This chapter covers:
- Core components of neural networks
- An introduction to Keras
- Setting up a deep-learning workstation
- Using neural networks to solve basic classification and regression problems

This chapter is designed to get you started with using neural networks to solve real problems. You'll consolidate the knowledge you gained from our first practical example in chapter 2, and you'll apply what you've learned to three new problems covering the three most common use cases of neural networks: binary classification, multiclass classification, and scalar regression.

In this chapter, we'll take a closer look at the core components of neural networks that we introduced in chapter 2: layers, networks, objective functions, and optimizers. We'll give you a quick introduction to Keras, the Python deep-learning library that we'll use throughout the book. You'll set up a deep-learning workstation, with
TensorFlow, Keras, and GPU support. We'll dive into three introductory examples of how to use neural networks to address real problems:

- Classifying movie reviews as positive or negative (binary classification)
- Classifying news wires by topic (multiclass classification)
- Estimating the price of a house, given real-estate data (regression)

By the end of this chapter, you'll be able to use neural networks to solve simple machine-learning problems such as classification and regression over vector data. You'll then be ready to start building a more principled, theory-driven understanding of machine learning in chapter 4.
Anatomy of a neural network

As you saw in the previous chapter, training a neural network revolves around the following objects:

- Layers, which are combined into a network (or model)
- The input data and corresponding targets
- The loss function, which defines the feedback signal used for learning
- The optimizer, which determines how learning proceeds

You can visualize their interaction as illustrated in figure 3.1: the network, composed of layers that are chained together, maps the input data to predictions. The loss function then compares these predictions to the targets, producing a loss value: a measure of how well the network's predictions match what was expected. The optimizer uses this loss value to update the network's weights.

(Figure 3.1: relationship between the network, layers, loss function, and optimizer. The input X flows through the weighted layers (data transformations) to produce predictions Y'; the loss function compares Y' with the true targets Y to produce a loss score, which the optimizer uses to update the weights.)

Let's take a closer look at layers, networks, loss functions, and optimizers.

Layers: the building blocks of deep learning

The fundamental data structure in neural networks is the layer, to which you were introduced in chapter 2. A layer is a data-processing module that takes as input one or more tensors and that outputs one or more tensors. Some layers are stateless, but more frequently layers have a state: the layer's weights, one or several tensors learned with stochastic gradient descent, which together contain the network's knowledge.

Different layers are appropriate for different tensor formats and different types of data processing. For instance, simple vector data, stored in 2D tensors of shape (samples, features), is often processed by densely connected layers, also called fully connected or dense layers (the Dense class in Keras). Sequence data, stored in 3D tensors of shape (samples, timesteps, features), is typically processed by recurrent layers such as an LSTM layer. Image data, stored in 4D tensors, is usually processed by 2D convolution layers (Conv2D).
You can think of layers as the LEGO bricks of deep learning, a metaphor that is made explicit by frameworks like Keras. Building deep-learning models in Keras is done by clipping together compatible layers to form useful data-transformation pipelines. The notion of layer compatibility here refers specifically to the fact that every layer will only accept input tensors of a certain shape and will return output tensors of a certain shape. Consider the following example:

from keras import layers

layer = layers.Dense(32, input_shape=(784,))    # A dense layer with 32 output units

We're creating a layer that will only accept as input 2D tensors where the first dimension is 784 (axis 0, the batch dimension, is unspecified, and thus any value would be accepted). This layer will return a tensor where the first dimension has been transformed to be 32. Thus this layer can only be connected to a downstream layer that expects 32-dimensional vectors as its input.

When using Keras, you don't have to worry about compatibility, because the layers you add to your models are dynamically built to match the shape of the incoming layer. For instance, suppose you write the following:

from keras import models
from keras import layers

model = models.Sequential()
model.add(layers.Dense(32, input_shape=(784,)))
model.add(layers.Dense(32))

The second layer didn't receive an input shape argument; instead, it automatically inferred its input shape as being the output shape of the layer that came before.

Models: networks of layers

A deep-learning model is a directed, acyclic graph of layers. The most common instance is a linear stack of layers, mapping a single input to a single output. But as you move forward, you'll be exposed to a much broader variety of network topologies. Some common ones include the following:

- Two-branch networks
- Multihead networks
- Inception blocks

The topology of a network defines a hypothesis space. You may remember that in chapter 1 we defined machine learning as "searching for useful representations of some input data, within a predefined space of possibilities, using guidance from a feedback signal." By choosing a network topology, you constrain your space of possibilities (hypothesis space) to a specific series of tensor operations, mapping input data to output data. What you'll then be searching for is a good set of values for the weight tensors involved in these tensor operations.
Picking the right network architecture is more an art than a science, and although there are some best practices and principles you can rely on, only practice can help you become a proper neural-network architect. The next few chapters will both teach you explicit principles for building neural networks and help you develop intuition as to what works or doesn't work for specific problems.

Loss functions and optimizers: keys to configuring the learning process

Once the network architecture is defined, you still have to choose two more things:

- Loss function (objective function): the quantity that will be minimized during training. It represents a measure of success for the task at hand.
- Optimizer: determines how the network will be updated based on the loss function. It implements a specific variant of stochastic gradient descent (SGD).

A neural network that has multiple outputs may have multiple loss functions (one per output). But the gradient-descent process must be based on a single scalar loss value; so, for multiloss networks, all losses are combined (via averaging) into a single scalar quantity.

Choosing the right objective function for the right problem is extremely important: your network will take any shortcut it can to minimize the loss, so if the objective doesn't fully correlate with success for the task at hand, your network will end up doing things you may not have wanted. Imagine a stupid, omnipotent AI trained via SGD with this poorly chosen objective function: "maximizing the average well-being of all humans alive." To make its job easier, this AI might choose to kill all humans except a few and focus on the well-being of the remaining ones, because average well-being isn't affected by how many humans are left. That might not be what you intended! Just remember that all neural networks you build will be just as ruthless in lowering their loss function, so choose the objective wisely, or you'll have to face unintended side effects.

Fortunately, when it comes to common problems such as classification, regression, and sequence prediction, there are simple guidelines you can follow to choose the correct loss. For instance, you'll use binary crossentropy for a two-class classification problem, categorical crossentropy for a many-class classification problem, mean squared error for a regression problem, connectionist temporal classification (CTC) for a sequence-learning problem, and so on. Only when you're working on truly new research problems will you have to develop your own objective functions. In the next few chapters, we'll detail explicitly which loss functions to choose for a wide range of common tasks.
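To make these guidelines concrete, here's a minimal sketch of how such a choice appears in a Keras compile() call (the toy model is my own; the loss names are the standard Keras identifiers):

from keras import models, layers

model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(8,)))
model.add(layers.Dense(1, activation='sigmoid'))

# Two-class classification: binary crossentropy.
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])

# For a regression model you'd use loss='mse' instead, and for a many-class
# classifier (with a softmax output), loss='categorical_crossentropy'.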
Introduction to Keras

Throughout this book, the code examples use Keras, a deep-learning framework for Python that provides a convenient way to define and train almost any kind of deep-learning model. Keras was initially developed for researchers, with the aim of enabling fast experimentation.

Keras has the following key features:

- It allows the same code to run seamlessly on CPU or GPU.
- It has a user-friendly API that makes it easy to quickly prototype deep-learning models.
- It has built-in support for convolutional networks (for computer vision), recurrent networks (for sequence processing), and any combination of both.
- It supports arbitrary network architectures: multi-input or multi-output models, layer sharing, model sharing, and so on. This means Keras is appropriate for building essentially any deep-learning model, from a generative adversarial network to a neural Turing machine.

Keras is distributed under the permissive MIT license, which means it can be freely used in commercial projects. It's compatible with any version of Python from 2.7 to 3.6 (as of mid-2017).

Keras has well over 200,000 users, ranging from academic researchers and engineers at both startups and large companies to graduate students and hobbyists. Keras is used at Google, Netflix, Uber, CERN, Yelp, Square, and hundreds of startups working on a wide range of problems. Keras is also a popular framework on Kaggle, the machine-learning competition website, where almost every recent deep-learning competition has been won using Keras models.

(Figure 3.2: Google web search interest for different deep-learning frameworks over time.)
Keras, TensorFlow, Theano, and CNTK

Keras is a model-level library, providing high-level building blocks for developing deep-learning models. It doesn't handle low-level operations such as tensor manipulation and differentiation. Instead, it relies on a specialized, well-optimized tensor library to do so, serving as the backend engine of Keras. Rather than choosing a single tensor library and tying the implementation of Keras to that library, Keras handles the problem in a modular way (see figure 3.3); thus several different backend engines can be plugged seamlessly into Keras. Currently, the three existing backend implementations are the TensorFlow backend, the Theano backend, and the Microsoft Cognitive Toolkit (CNTK) backend. In the future, it's likely that Keras will be extended to work with even more deep-learning execution engines.

(Figure 3.3: the deep-learning software and hardware stack.)

TensorFlow, CNTK, and Theano are some of the primary platforms for deep learning today. Theano is developed by the MILA lab at Université de Montréal, TensorFlow (www.tensorflow.org) is developed by Google, and CNTK is developed by Microsoft. Any piece of code that you write with Keras can be run with any of these backends without having to change anything in the code: you can seamlessly switch between them during development, which often proves useful; for instance, if one of these backends proves to be faster for a specific task. We recommend using the TensorFlow backend as the default for most of your deep-learning needs, because it's the most widely adopted, scalable, and production ready.

Via TensorFlow (or Theano, or CNTK), Keras is able to run seamlessly on both CPUs and GPUs. When running on CPU, TensorFlow is itself wrapping a low-level library for tensor operations called Eigen. When running on GPU, TensorFlow wraps a library of well-optimized deep-learning operations called the NVIDIA CUDA Deep Neural Network library (cuDNN).

Developing with Keras: a quick overview

You've already seen one example of a Keras model: the MNIST example. The typical Keras workflow looks just like that example:

1. Define your training data: input tensors and target tensors.
2. Define a network of layers (or model) that maps your inputs to your targets.
3. Configure the learning process by choosing a loss function, an optimizer, and some metrics to monitor.
4. Iterate on your training data by calling the fit() method of your model.

There are two ways to define a model: using the Sequential class (only for linear stacks of layers, which is the most common network architecture by far) or the functional API (for directed acyclic graphs of layers, which lets you build completely arbitrary architectures).

As a refresher, here's a two-layer model defined using the Sequential class (note that we're passing the expected shape of the input data to the first layer):

from keras import models
from keras import layers

model = models.Sequential()
model.add(layers.Dense(32, activation='relu', input_shape=(784,)))
model.add(layers.Dense(10, activation='softmax'))

And here's the same model defined using the functional API:

input_tensor = layers.Input(shape=(784,))
x = layers.Dense(32, activation='relu')(input_tensor)
output_tensor = layers.Dense(10, activation='softmax')(x)

model = models.Model(inputs=input_tensor, outputs=output_tensor)

With the functional API, you're manipulating the data tensors that the model processes and applying layers to this tensor as if they were functions. A detailed guide to what you can do with the functional API can be found in chapter 7. Until chapter 7, we'll only be using the Sequential class in our code examples.

NOTE: Once your model architecture is defined, it doesn't matter whether you used a Sequential model or the functional API. All of the following steps are the same.

The learning process is configured in the compilation step, where you specify the optimizer and loss function(s) that the model should use, as well as the metrics you want to monitor during training. Here's an example with a single loss function, which is by far the most common case:

from keras import optimizers

model.compile(optimizer=optimizers.RMSprop(lr=0.001),
              loss='mse',
              metrics=['accuracy'])

Finally, the learning process consists of passing NumPy arrays of input data (and the corresponding target data) to the model via the fit() method, similar to what you would do in Scikit-Learn and several other machine-learning libraries:

model.fit(input_tensor, target_tensor, batch_size=128, epochs=10)
Over the next few chapters, you'll build a solid intuition about what types of network architectures work for different kinds of problems, how to pick the right learning configuration, and how to tweak a model until it gives the results you want to see. We'll look at three basic examples in sections 3.4, 3.5, and 3.6: a two-class classification example, a many-class classification example, and a regression example.
Setting up a deep-learning workstation

Before you can get started developing deep-learning applications, you need to set up your workstation. It's highly recommended, although not strictly necessary, that you run deep-learning code on a modern NVIDIA GPU. Some applications, in particular image processing with convolutional networks and sequence processing with recurrent neural networks, will be excruciatingly slow on CPU, even a fast multicore CPU. And even for applications that can realistically be run on CPU, you'll generally see the speed increase by a factor of 5 or 10 by using a modern GPU. If you don't want to install a GPU on your machine, you can alternatively consider running your experiments on an AWS EC2 GPU instance or on Google Cloud Platform. But note that cloud GPU instances can become expensive over time.

Whether you're running locally or in the cloud, it's better to be using a Unix workstation. Although it's technically possible to use Keras on Windows (all three Keras backends support Windows), we don't recommend it. In the installation instructions in appendix A, we'll consider an Ubuntu machine. If you're a Windows user, the simplest solution to get everything running is to set up an Ubuntu dual boot on your machine. It may seem like a hassle, but using Ubuntu will save you a lot of time and trouble in the long run.

Note that in order to use Keras, you need to install TensorFlow or CNTK or Theano (or all of them, if you want to be able to switch back and forth among the three backends). In this book, we'll focus on TensorFlow, with some light instructions relative to Theano. We won't cover CNTK.

Jupyter notebooks: the preferred way to run deep-learning experiments

Jupyter notebooks are a great way to run deep-learning experiments, in particular, the many code examples in this book. They're widely used in the data-science and machine-learning communities. A notebook is a file generated by the Jupyter Notebook app that you can edit in your browser, and it mixes the ability to execute Python code with rich text-editing capabilities for annotating what you're doing. A notebook also allows you to break up long experiments into smaller pieces that can be executed independently, which makes development interactive and means you don't have to rerun all of your previous code if something goes wrong late in an experiment.

We recommend using Jupyter notebooks to get started with Keras, although that isn't a requirement: you can also run standalone Python scripts or run code from within an IDE such as PyCharm. All the code examples in this book are available as open source notebooks; you can download them from the book's website at www.manning.com/books/deep-learning-with-python.
Getting Keras running: two options

To get started in practice, we recommend one of the following two options:

- Use the official EC2 Deep Learning AMI, and run Keras experiments as Jupyter notebooks on EC2. Do this if you don't already have a GPU on your local machine. Appendix B provides a step-by-step guide.
- Install everything from scratch on a local Unix workstation. You can then run either local Jupyter notebooks or a regular Python codebase. Do this if you already have a high-end NVIDIA GPU. Appendix A provides an Ubuntu-specific, step-by-step guide.

Let's take a closer look at some of the compromises involved in picking one option over the other.

Running deep-learning jobs in the cloud: pros and cons

If you don't already have a GPU that you can use for deep learning (a recent, high-end NVIDIA GPU), then running deep-learning experiments in the cloud is a simple, low-cost way for you to get started without having to buy any additional hardware. If you're using Jupyter notebooks, the experience of running in the cloud is no different from running locally. As of mid-2017, the cloud offering that makes it easiest to get started with deep learning is definitely AWS EC2. Appendix B provides a step-by-step guide to running Jupyter notebooks on an EC2 GPU instance.

But if you're a heavy user of deep learning, this setup isn't sustainable in the long term, or even for more than a few weeks. EC2 instances are expensive: the instance type recommended in appendix B (the p2.xlarge instance, which won't provide you with much power) costs $0.90 per hour as of mid-2017. Meanwhile, a solid consumer-class GPU will cost you somewhere between $1,000 and $1,500, a price that has been fairly stable over time, even as the specs of these GPUs keep improving. If you're serious about deep learning, you should set up a local workstation with one or more GPUs.

In short, EC2 is a great way to get started. You could follow the code examples in this book entirely on an EC2 GPU instance. But if you're going to be a power user of deep learning, get your own GPUs.

What is the best GPU for deep learning?

If you're going to buy a GPU, which one should you choose? The first thing to note is that it must be an NVIDIA GPU. NVIDIA is the only graphics computing company that has invested heavily in deep learning so far, and modern deep-learning frameworks can only run on NVIDIA cards.

As of mid-2017, we recommend the NVIDIA TITAN Xp as the best card on the market for deep learning. For lower budgets, you may want to consider the GTX 1060. If you're reading these pages in 2018 or later, take the time to look online for fresher recommendations, because new models come out every year.
From this section onward, we'll assume that you have access to a machine with Keras and its dependencies installed, preferably with GPU support. Make sure you finish this step before you proceed. Go through the step-by-step guides in the appendixes, and look online if you need further help: there is no shortage of tutorials on how to install Keras and common deep-learning dependencies.

We can now dive into practical Keras examples.
Classifying movie reviews: a binary classification example

Two-class classification, or binary classification, may be the most widely applied kind of machine-learning problem. In this example, you'll learn to classify movie reviews as positive or negative, based on the text content of the reviews.

The IMDB dataset

You'll work with the IMDB dataset: a set of 50,000 highly polarized reviews from the Internet Movie Database. They're split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting of 50% negative and 50% positive reviews.

Why use separate training and test sets? Because you should never test a machine-learning model on the same data that you used to train it! Just because a model performs well on its training data doesn't mean it will perform well on data it has never seen, and what you care about is your model's performance on new data (because you already know the labels of your training data; obviously you don't need your model to predict those). For instance, it's possible that your model could end up merely memorizing a mapping between your training samples and their targets, which would be useless for the task of predicting targets for data the model has never seen before. We'll go over this point in much more detail in the next chapter.

Just like the MNIST dataset, the IMDB dataset comes packaged with Keras. It has already been preprocessed: the reviews (sequences of words) have been turned into sequences of integers, where each integer stands for a specific word in a dictionary.

The following code will load the dataset (when you run it the first time, about 80 MB of data will be downloaded to your machine).

Listing 3.1 Loading the IMDB dataset

from keras.datasets import imdb

(train_data, train_labels), (test_data, test_labels) = imdb.load_data(
    num_words=10000)

The argument num_words=10000 means you'll only keep the top 10,000 most frequently occurring words in the training data. Rare words will be discarded. This allows you to work with vector data of manageable size.

The variables train_data and test_data are lists of reviews; each review is a list of word indices (encoding a sequence of words). train_labels and test_labels are lists of 0s and 1s, where 0 stands for negative and 1 stands for positive:

>>> train_data[0]
[1, 14, 22, 16, ... 178, 32]
>>> train_labels[0]
1
Because you're restricting yourself to the top 10,000 most frequent words, no word index will exceed 10,000:

>>> max([max(sequence) for sequence in train_data])
9999

For kicks, here's how you can quickly decode one of these reviews back to English words:

word_index = imdb.get_word_index()            # word_index is a dictionary mapping words to an integer index.
reverse_word_index = dict(
    [(value, key) for (key, value) in word_index.items()])       # Reverses it, mapping integer indices to words.
decoded_review = ' '.join(
    [reverse_word_index.get(i - 3, '?') for i in train_data[0]])
# Decodes the review. Note that the indices are offset by 3, because 0, 1, and 2
# are reserved indices for "padding," "start of sequence," and "unknown."

Preparing the data

You can't feed lists of integers into a neural network. You have to turn your lists into tensors. There are two ways to do that:

- Pad your lists so that they all have the same length, turn them into an integer tensor of shape (samples, word_indices), and then use as the first layer in your network a layer capable of handling such integer tensors (the Embedding layer, which we'll cover in detail later in the book).
- One-hot encode your lists to turn them into vectors of 0s and 1s. This would mean, for instance, turning the sequence [3, 5] into a 10,000-dimensional vector that would be all 0s except for indices 3 and 5, which would be 1s. Then you could use as the first layer in your network a Dense layer, capable of handling floating-point vector data.

Let's go with the latter solution to vectorize the data, which you'll do manually for maximum clarity.

Listing 3.2 Encoding the integer sequences into a binary matrix

import numpy as np

def vectorize_sequences(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))    # Creates an all-zero matrix of shape (len(sequences), dimension)
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.                      # Sets specific indices of results[i] to 1s
    return results

x_train = vectorize_sequences(train_data)              # Vectorized training data
x_test = vectorize_sequences(test_data)                # Vectorized test data
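To see exactly what this encoding does, here's the same function applied to the toy sequence [3, 5] from the bullet above, with a small dimension (my own choice) so the output stays readable:

import numpy as np

def vectorize_sequences(sequences, dimension=8):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.
    return results

print(vectorize_sequences([[3, 5]]))
# [[0. 0. 0. 1. 0. 1. 0. 0.]]  (all 0s except indices 3 and 5)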
11,462
Here's what the samples look like now:

>>> x_train[0]
array([ 0.,  1.,  1., ...,  0.,  0.,  0.])

You should also vectorize your labels, which is straightforward:

y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')

Now the data is ready to be fed into a neural network.

Building your network

The input data is vectors, and the labels are scalars (1s and 0s): this is the easiest setup you'll ever encounter. A type of network that performs well on such a problem is a simple stack of fully connected (Dense) layers with relu activations: Dense(16, activation='relu').

The argument being passed to each Dense layer (16) is the number of hidden units of the layer. A hidden unit is a dimension in the representation space of the layer. You may remember from chapter 2 that each such Dense layer with a relu activation implements the following chain of tensor operations:

output = relu(dot(W, input) + b)

Having 16 hidden units means the weight matrix W will have shape (input_dimension, 16): the dot product with W will project the input data onto a 16-dimensional representation space (and then you'll add the bias vector b and apply the relu operation). You can intuitively understand the dimensionality of your representation space as "how much freedom you're allowing the network to have when learning internal representations." Having more hidden units (a higher-dimensional representation space) allows your network to learn more-complex representations, but it makes the network more computationally expensive and may lead to learning unwanted patterns (patterns that will improve performance on the training data but not on the test data).

There are two key architecture decisions to be made about such a stack of Dense layers:

- How many layers to use
- How many hidden units to choose for each layer

In chapter 4, you'll learn formal principles to guide you in making these choices. For the time being, you'll have to trust me with the following architecture choice:

- Two intermediate layers with 16 hidden units each
- A third layer that will output the scalar prediction regarding the sentiment of the current review

The intermediate layers will use relu as their activation function, and the final layer will use a sigmoid activation so as to output a probability:
a score between 0 and 1, indicating how likely the sample is to have the target "1", that is, how likely the review is to be positive.

A relu (rectified linear unit) is a function meant to zero out negative values, whereas a sigmoid "squashes" arbitrary values into the [0, 1] interval, outputting something that can be interpreted as a probability.

Figure: The rectified linear unit function
Figure: The sigmoid function
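To make these operations concrete, here's a minimal NumPy sketch (an illustration, not Keras code) of relu, sigmoid, and the chain of tensor operations a Dense layer implements. The book writes dot(W, input); with batch-first arrays, the equivalent is np.dot(x, W). The input values here are random placeholders:

import numpy as np

def relu(x):
    # Element-wise max(x, 0): negative values are zeroed out.
    return np.maximum(x, 0.)

def sigmoid(x):
    # Squashes arbitrary values into the (0, 1) interval.
    return 1. / (1. + np.exp(-x))

# A toy Dense-layer forward pass: output = relu(dot(x, W) + b).
x = np.random.random((1, 10000))     # One vectorized review (placeholder input)
W = np.random.random((10000, 16))    # Weight matrix of a 16-unit layer
b = np.zeros((16,))                  # Bias vector

output = relu(np.dot(x, W) + b)
print(output.shape)                  # (1, 16): a 16-dimensional representation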
Figure: The three-layer network (input: vectorized text; Dense(units=16); Dense(units=16); Dense(units=1); output: probability)

The figure shows what the network looks like. And here's the Keras implementation, similar to the MNIST example you saw previously.

Listing: The model definition

from keras import models
from keras import layers

model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))

What are activation functions, and why are they necessary?

Without an activation function like relu (also called a non-linearity), the Dense layer would consist of two linear operations, a dot product and an addition:

output = dot(W, input) + b

So the layer could only learn linear transformations (affine transformations) of the input data: the hypothesis space of the layer would be the set of all possible linear transformations of the input data into a 16-dimensional space. Such a hypothesis space is too restricted and wouldn't benefit from multiple layers of representations, because a deep stack of linear layers would still implement a linear operation: adding more layers wouldn't extend the hypothesis space.

In order to get access to a much richer hypothesis space that would benefit from deep representations, you need a non-linearity, or activation function. relu is the most popular activation function in deep learning, but there are many other candidates, which all come with similarly strange names: prelu, elu, and so on.

Finally, you need to choose a loss function and an optimizer. Because you're facing a binary classification problem and the output of your network is a probability (you end your network with a single-unit layer with a sigmoid activation), it's best to use the binary_crossentropy loss.
It isn't the only viable choice: you could use, for instance, mean_squared_error. But crossentropy is usually the best choice when you're dealing with models that output probabilities. Crossentropy is a quantity from the field of information theory that measures the distance between probability distributions or, in this case, between the ground-truth distribution and your predictions.

Here's the step where you configure the model with the rmsprop optimizer and the binary_crossentropy loss function. Note that you'll also monitor accuracy during training.

Listing: Compiling the model

model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])

You're passing your optimizer, loss function, and metrics as strings, which is possible because rmsprop, binary_crossentropy, and accuracy are packaged as part of Keras. Sometimes you may want to configure the parameters of your optimizer, or pass a custom loss function or metric function. The former can be done by passing an optimizer class instance as the optimizer argument, as shown in the next listing; the latter can be done by passing function objects as the loss and/or metrics arguments, as shown in the listing after that.

Listing: Configuring the optimizer

from keras import optimizers

model.compile(optimizer=optimizers.RMSprop(lr=0.001),
              loss='binary_crossentropy',
              metrics=['accuracy'])

Listing: Using custom losses and metrics

from keras import losses
from keras import metrics

model.compile(optimizer=optimizers.RMSprop(lr=0.001),
              loss=losses.binary_crossentropy,
              metrics=[metrics.binary_accuracy])
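To give a feel for what the binary_crossentropy loss described above actually computes, here's a minimal NumPy sketch. This is an illustration, not the Keras implementation, and the prediction values are invented for the example:

import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    # Average negative log-likelihood of the true labels under the
    # predicted probabilities. eps guards against log(0).
    y_pred = np.clip(y_pred, eps, 1. - eps)
    return -np.mean(y_true * np.log(y_pred)
                    + (1. - y_true) * np.log(1. - y_pred))

y_true = np.array([1., 0., 1., 1.])        # Ground-truth labels
good   = np.array([0.9, 0.1, 0.8, 0.7])    # Confident, mostly correct predictions
bad    = np.array([0.4, 0.6, 0.5, 0.3])    # Poor predictions

print(binary_crossentropy(y_true, good))   # Small loss (about 0.2)
print(binary_crossentropy(y_true, bad))    # Larger loss (about 0.93)

The closer the predicted probabilities are to the true labels, the smaller the distance between the two distributions, and the smaller the loss.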
Validating your approach

In order to monitor during training the accuracy of the model on data it has never seen before, you'll create a validation set by setting apart 10,000 samples from the original training data.

Listing: Setting aside a validation set

x_val = x_train[:10000]
partial_x_train = x_train[10000:]

y_val = y_train[:10000]
partial_y_train = y_train[10000:]

You'll now train the model for 20 epochs (20 iterations over all samples in the x_train and y_train tensors), in mini-batches of 512 samples. At the same time, you'll monitor loss and accuracy on the 10,000 samples that you set apart. You do so by passing the validation data as the validation_data argument.

Listing: Training your model

model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['acc'])

history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=20,
                    batch_size=512,
                    validation_data=(x_val, y_val))

On CPU, this will take less than 2 seconds per epoch: training is over in 20 seconds. At the end of every epoch, there is a slight pause as the model computes its loss and accuracy on the 10,000 samples of the validation data.

Note that the call to model.fit() returns a History object. This object has a member history, which is a dictionary containing data about everything that happened during training. Let's look at it:

>>> history_dict = history.history
>>> history_dict.keys()
['acc', 'loss', 'val_acc', 'val_loss']

The dictionary contains four entries: one per metric that was being monitored during training and during validation. In the following two listings, let's use Matplotlib to plot the training and validation loss side by side, as well as the training and validation accuracy. Note that your own results may vary slightly due to a different random initialization of your network.

Listing: Plotting the training and validation loss

import matplotlib.pyplot as plt

history_dict = history.history
loss_values = history_dict['loss']
val_loss_values = history_dict['val_loss']

epochs = range(1, len(loss_values) + 1)

plt.plot(epochs, loss_values, 'bo', label='Training loss')       # 'bo' is for "blue dot"
plt.plot(epochs, val_loss_values, 'b', label='Validation loss')  # 'b' is for "solid blue line"
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()

plt.show()
Figure: Training and validation loss

Listing: Plotting the training and validation accuracy

plt.clf()    # Clears the figure

acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']

plt.plot(epochs, acc_values, 'bo', label='Training acc')
plt.plot(epochs, val_acc_values, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()

plt.show()

Figure: Training and validation accuracy
As you can see, the training loss decreases with every epoch, and the training accuracy increases with every epoch. That's what you would expect when running gradient-descent optimization: the quantity you're trying to minimize should be less with every iteration. But that isn't the case for the validation loss and accuracy: they seem to peak at the fourth epoch. This is an example of what we warned against earlier: a model that performs better on the training data isn't necessarily a model that will do better on data it has never seen before. In precise terms, what you're seeing is overfitting: after the fourth epoch, you're overoptimizing on the training data, and you end up learning representations that are specific to the training data and don't generalize to data outside of the training set.

In this case, to prevent overfitting, you could stop training after four epochs. In general, you can use a range of techniques to mitigate overfitting, which we'll cover in the next chapter.

Let's train a new network from scratch for four epochs and then evaluate it on the test data.

Listing: Retraining a model from scratch

model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))

model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)

The final results are as follows:

>>> results
[0.2929..., 0.8832...]

This fairly naive approach achieves an accuracy of 88%. With state-of-the-art approaches, you should be able to get close to 95%.

Using a trained network to generate predictions on new data

After having trained a network, you'll want to use it in a practical setting. You can generate the likelihood of reviews being positive by using the predict method:

>>> model.predict(x_test)
array([[ 0.98...],
       [ 0.99...],
       ...,
       [ 0.65...]], dtype=float32)

As you can see, the network is confident for some samples (0.99 or more, or 0.01 or less) but less confident for others (0.6, 0.4).
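If you need hard class labels rather than probabilities, a common convention (a sketch, not something this example strictly requires) is to threshold the predicted probability at 0.5:

probabilities = model.predict(x_test)    # Shape (25000, 1), values in [0, 1]
predicted_labels = (probabilities > 0.5).astype('int32')    # 1 = positive, 0 = negative

print(predicted_labels.mean())    # Fraction of test reviews the model calls positive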
Further experiments

The following experiments will help convince you that the architecture choices you've made are all fairly reasonable, although there's still room for improvement (a sketch of the first experiment follows this list):

- You used two hidden layers. Try using one or three hidden layers, and see how doing so affects validation and test accuracy.
- Try using layers with more hidden units or fewer hidden units: 32 units, 64 units, and so on.
- Try using the mse loss function instead of binary_crossentropy.
- Try using the tanh activation (an activation that was popular in the early days of neural networks) instead of relu.
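For instance, here's what the one-hidden-layer variant from the first experiment might look like; this is a sketch to adapt, not a recommended final architecture:

# One hidden layer instead of two; everything else unchanged.
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])
history = model.fit(partial_x_train, partial_y_train,
                    epochs=20, batch_size=512,
                    validation_data=(x_val, y_val))

Comparing the resulting validation curves against the two-layer version tells you how much the extra layer is actually buying you.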
Wrapping up

Here's what you should take away from this example:

- You usually need to do quite a bit of preprocessing on your raw data in order to be able to feed it, as tensors, into a neural network. Sequences of words can be encoded as binary vectors, but there are other encoding options, too.
- Stacks of Dense layers with relu activations can solve a wide range of problems (including sentiment classification), and you'll likely use them frequently.
- In a binary classification problem (two output classes), your network should end with a Dense layer with one unit and a sigmoid activation: the output of your network should be a scalar between 0 and 1, encoding a probability.
- With such a scalar sigmoid output on a binary classification problem, the loss function you should use is binary_crossentropy.
- The rmsprop optimizer is generally a good enough choice, whatever your problem. That's one less thing for you to worry about.
- As they get better on their training data, neural networks eventually start overfitting and end up obtaining increasingly worse results on data they've never seen before. Be sure to always monitor performance on data that is outside of the training set.

Classifying newswires: a multiclass classification example

In the previous section, you saw how to classify vector inputs into two mutually exclusive classes using a densely connected neural network. But what happens when you have more than two classes?

In this section, you'll build a network to classify Reuters newswires into 46 mutually exclusive topics. Because you have many classes, this problem is an instance of multiclass classification; and because each data point should be classified into only one category, the problem is more specifically an instance of single-label, multiclass classification. If each data point could belong to multiple categories (in this case, topics), you'd be facing a multilabel, multiclass classification problem.

The Reuters dataset

You'll work with the Reuters dataset, a set of short newswires and their topics, published by Reuters in 1986. It's a simple, widely used toy dataset for text classification. There are 46 different topics; some topics are more represented than others, but each topic has at least 10 examples in the training set.

Like IMDB and MNIST, the Reuters dataset comes packaged as part of Keras. Let's take a look.

Listing: Loading the Reuters dataset

from keras.datasets import reuters

(train_data, train_labels), (test_data, test_labels) = reuters.load_data(
    num_words=10000)

As with the IMDB dataset, the argument num_words=10000 restricts the data to the 10,000 most frequently occurring words found in the data.

You have 8,982 training examples and 2,246 test examples:

>>> len(train_data)
8982
>>> len(test_data)
2246

As with the IMDB reviews, each example is a list of integers (word indices):

>>> train_data[10]
[1, 245, 273, 207, 156, ...]

Here's how you can decode it back to words, in case you're curious.

Listing: Decoding newswires back to text

word_index = reuters.get_word_index()
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
decoded_newswire = ' '.join([reverse_word_index.get(i - 3, '?')
    for i in train_data[10]])
# Note that the indices are offset by 3, because 0, 1, and 2 are reserved
# indices for "padding," "start of sequence," and "unknown."
The label associated with an example is an integer between 0 and 45: a topic index.

>>> train_labels[10]
3

Preparing the data

You can vectorize the data with the exact same code as in the previous example.

Listing: Encoding the data

import numpy as np

def vectorize_sequences(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.
    return results

x_train = vectorize_sequences(train_data)    # Vectorized training data
x_test = vectorize_sequences(test_data)      # Vectorized test data

To vectorize the labels, there are two possibilities: you can cast the label list as an integer tensor, or you can use one-hot encoding. One-hot encoding is a widely used format for categorical data, also called categorical encoding. For a more detailed explanation of one-hot encoding, see section 6.1. In this case, one-hot encoding of the labels consists of embedding each label as an all-zero vector with a 1 in the place of the label index. Here's an example:

def to_one_hot(labels, dimension=46):
    results = np.zeros((len(labels), dimension))
    for i, label in enumerate(labels):
        results[i, label] = 1.
    return results

one_hot_train_labels = to_one_hot(train_labels)    # Vectorized training labels
one_hot_test_labels = to_one_hot(test_labels)      # Vectorized test labels

Note that there is a built-in way to do this in Keras, which you've already seen in action in the MNIST example:

from keras.utils.np_utils import to_categorical

one_hot_train_labels = to_categorical(train_labels)
one_hot_test_labels = to_categorical(test_labels)
Building your network

This topic-classification problem looks similar to the previous movie-review classification problem: in both cases, you're trying to classify short snippets of text. But there is a new constraint here: the number of output classes has gone from 2 to 46. The dimensionality of the output space is much larger.

In a stack of Dense layers like that you've been using, each layer can only access information present in the output of the previous layer. If one layer drops some information relevant to the classification problem, this information can never be recovered by later layers: each layer can potentially become an information bottleneck. In the previous example, you used 16-dimensional intermediate layers, but a 16-dimensional space may be too limited to learn to separate 46 different classes: such small layers may act as information bottlenecks, permanently dropping relevant information.

For this reason you'll use larger layers. Let's go with 64 units.

Listing: Model definition

from keras import models
from keras import layers

model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))

There are two other things you should note about this architecture:

- You end the network with a Dense layer of size 46. This means for each input sample, the network will output a 46-dimensional vector. Each entry in this vector (each dimension) will encode a different output class.
- The last layer uses a softmax activation. You saw this pattern in the MNIST example. It means the network will output a probability distribution over the 46 different output classes: for every input sample, the network will produce a 46-dimensional output vector, where output[i] is the probability that the sample belongs to class i. The 46 scores will sum to 1.

The best loss function to use in this case is categorical_crossentropy. It measures the distance between two probability distributions: here, between the probability distribution output by the network and the true distribution of the labels. By minimizing the distance between these two distributions, you train the network to output something as close as possible to the true labels.

Listing: Compiling the model

model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
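As with the binary crossentropy sketch earlier, a minimal NumPy sketch can make this loss concrete. The distributions below are invented for illustration:

import numpy as np

def categorical_crossentropy(y_true, y_pred, eps=1e-7):
    # y_true is a one-hot vector; y_pred is a probability distribution.
    # The loss is the negative log-probability assigned to the true class.
    return -np.sum(y_true * np.log(np.clip(y_pred, eps, 1.)))

y_true = np.zeros((46,))
y_true[3] = 1.                       # True class is topic 3

confident = np.full((46,), 0.01)     # Hypothetical network output...
confident[3] = 0.55                  # ...that puts most mass on the true class
print(categorical_crossentropy(y_true, confident))    # About 0.6: low loss

uniform = np.full((46,), 1. / 46)    # A clueless uniform prediction
print(categorical_crossentropy(y_true, uniform))      # About 3.8: high loss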
Validating your approach

Let's set apart 1,000 samples in the training data to use as a validation set.

Listing: Setting aside a validation set

x_val = x_train[:1000]
partial_x_train = x_train[1000:]

y_val = one_hot_train_labels[:1000]
partial_y_train = one_hot_train_labels[1000:]

Now, let's train the network for 20 epochs.

Listing: Training the model

history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=20,
                    batch_size=512,
                    validation_data=(x_val, y_val))

And finally, let's display its loss and accuracy curves.

Listing: Plotting the training and validation loss

import matplotlib.pyplot as plt

loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(1, len(loss) + 1)

plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()

plt.show()

Listing: Plotting the training and validation accuracy

plt.clf()    # Clears the figure

acc = history.history['acc']
val_acc = history.history['val_acc']

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()

plt.show()

Figure: Training and validation loss
Figure: Training and validation accuracy

The network begins to overfit after nine epochs. Let's train a new network from scratch for nine epochs and then evaluate it on the test set.

Listing: Retraining a model from scratch

model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))

model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

model.fit(partial_x_train,
          partial_y_train,
          epochs=9,
          batch_size=512,
          validation_data=(x_val, y_val))
results = model.evaluate(x_test, one_hot_test_labels)

Here are the final results:

>>> results
[0.95..., 0.79...]

This approach reaches an accuracy of ~80%. With a balanced binary classification problem, the accuracy reached by a purely random classifier would be 50%. But in this case it's closer to 19%, so the results seem pretty good, at least when compared to a random baseline:

>>> import copy
>>> test_labels_copy = copy.copy(test_labels)
>>> np.random.shuffle(test_labels_copy)
>>> hits_array = np.array(test_labels) == np.array(test_labels_copy)
>>> float(np.sum(hits_array)) / len(test_labels)
0.186...
Generating predictions on new data

You can verify that the predict method of the model instance returns a probability distribution over all 46 topics. Let's generate topic predictions for all of the test data.

Listing: Generating predictions for new data

predictions = model.predict(x_test)

Each entry in predictions is a vector of length 46:

>>> predictions[0].shape
(46,)

The coefficients in this vector sum to 1:

>>> np.sum(predictions[0])
1.0

The largest entry is the predicted class: the class with the highest probability:

>>> np.argmax(predictions[0])
4

A different way to handle the labels and the loss

We mentioned earlier that another way to encode the labels would be to cast them as an integer tensor, like this:

y_train = np.array(train_labels)
y_test = np.array(test_labels)

The only thing this approach would change is the choice of the loss function. The loss function used previously, categorical_crossentropy, expects the labels to follow a categorical encoding. With integer labels, you should use sparse_categorical_crossentropy:

model.compile(optimizer='rmsprop',
              loss='sparse_categorical_crossentropy',
              metrics=['acc'])

This new loss function is still mathematically the same as categorical_crossentropy; it just has a different interface.

The importance of having sufficiently large intermediate layers

We mentioned earlier that because the final outputs are 46-dimensional, you should avoid intermediate layers with many fewer than 46 hidden units. Now let's see what happens when you introduce an information bottleneck by having intermediate layers that are significantly less than 46-dimensional: for example, 4-dimensional.
Listing: A model with an information bottleneck

model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(4, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))

model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

model.fit(partial_x_train,
          partial_y_train,
          epochs=20,
          batch_size=128,
          validation_data=(x_val, y_val))

The network now peaks at ~71% validation accuracy, an 8% absolute drop. This drop is mostly due to the fact that you're trying to compress a lot of information (enough information to recover the separation hyperplanes of 46 classes) into an intermediate space that is too low-dimensional. The network is able to cram most of the necessary information into these four-dimensional representations, but not all of it.

Further experiments

- Try using larger or smaller layers: 32 units, 128 units, and so on.
- You used two hidden layers. Now try using a single hidden layer, or three hidden layers.

Wrapping up

Here's what you should take away from this example:

- If you're trying to classify data points among N classes, your network should end with a Dense layer of size N.
- In a single-label, multiclass classification problem, your network should end with a softmax activation so that it will output a probability distribution over the N output classes.
- Categorical crossentropy is almost always the loss function you should use for such problems. It minimizes the distance between the probability distributions output by the network and the true distribution of the targets.
- There are two ways to handle labels in multiclass classification: encoding the labels via categorical encoding (also known as one-hot encoding) and using categorical_crossentropy as a loss function, or encoding the labels as integers and using the sparse_categorical_crossentropy loss function.
- If you need to classify data into a large number of categories, you should avoid creating information bottlenecks in your network due to intermediate layers that are too small.
Predicting house prices: a regression example

The two previous examples were considered classification problems, where the goal was to predict a single discrete label of an input data point. Another common type of machine-learning problem is regression, which consists of predicting a continuous value instead of a discrete label: for instance, predicting the temperature tomorrow, given meteorological data; or predicting the time that a software project will take to complete, given its specifications.

NOTE: Don't confuse regression and the algorithm logistic regression. Confusingly, logistic regression isn't a regression algorithm; it's a classification algorithm.

The Boston Housing Price dataset

You'll attempt to predict the median price of homes in a given Boston suburb in the mid-1970s, given data points about the suburb at the time, such as the crime rate, the local property tax rate, and so on. The dataset you'll use has an interesting difference from the two previous examples. It has relatively few data points: only 506, split between 404 training samples and 102 test samples. And each feature in the input data (for example, the crime rate) has a different scale. For instance, some values are proportions, which take values between 0 and 1; others take values between 1 and 12; others between 0 and 100; and so on.

Listing: Loading the Boston housing dataset

from keras.datasets import boston_housing

(train_data, train_targets), (test_data, test_targets) = boston_housing.load_data()

Let's look at the data:

>>> train_data.shape
(404, 13)
>>> test_data.shape
(102, 13)

As you can see, you have 404 training samples and 102 test samples, each with 13 numerical features, such as per capita crime rate, average number of rooms per dwelling, accessibility to highways, and so on.

The targets are the median values of owner-occupied homes, in thousands of dollars:

>>> train_targets
array([ 15.2,  42.3,  50. , ...,  19.4,  19.4,  29.1])

The prices are typically between $10,000 and $50,000. If that sounds cheap, remember that this was the mid-1970s, and these prices aren't adjusted for inflation.
Preparing the data

It would be problematic to feed into a neural network values that all take wildly different ranges. The network might be able to automatically adapt to such heterogeneous data, but it would definitely make learning more difficult. A widespread best practice to deal with such data is to do feature-wise normalization: for each feature in the input data (a column in the input data matrix), you subtract the mean of the feature and divide by the standard deviation, so that the feature is centered around 0 and has a unit standard deviation. This is easily done in NumPy.

Listing: Normalizing the data

mean = train_data.mean(axis=0)
train_data -= mean
std = train_data.std(axis=0)
train_data /= std

test_data -= mean
test_data /= std

Note that the quantities used for normalizing the test data are computed using the training data. You should never use in your workflow any quantity computed on the test data, even for something as simple as data normalization.

Building your network

Because so few samples are available, you'll use a very small network with two hidden layers, each with 64 units. In general, the less training data you have, the worse overfitting will be, and using a small network is one way to mitigate overfitting.

Listing: Model definition

from keras import models
from keras import layers

def build_model():    # Because you'll need to instantiate the same model multiple times, you use a function to construct it.
    model = models.Sequential()
    model.add(layers.Dense(64, activation='relu',
                           input_shape=(train_data.shape[1],)))
    model.add(layers.Dense(64, activation='relu'))
    model.add(layers.Dense(1))
    model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
    return model

The network ends with a single unit and no activation (it will be a linear layer). This is a typical setup for scalar regression (a regression where you're trying to predict a single continuous value). Applying an activation function would constrain the range the output can take; for instance, if you applied a sigmoid activation function to the last layer, the network could only learn to predict values between 0 and 1. Here, because the last layer is purely linear, the network is free to learn to predict values in any range.

Note that you compile the network with the mse loss function: mean squared error, the square of the difference between the predictions and the targets. This is a widely used loss function for regression problems. You're also monitoring a new metric during training: mean absolute error (MAE). It's the absolute value of the difference between the predictions and the targets. For instance, an MAE of 0.5 on this problem would mean your predictions are off by $500 on average.
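For reference, here's a minimal NumPy sketch of both quantities; the prediction and target values are invented for the example:

import numpy as np

def mse(y_true, y_pred):
    # Mean of squared differences; heavily penalizes large errors.
    return np.mean((y_true - y_pred) ** 2)

def mae(y_true, y_pred):
    # Mean of absolute differences; directly interpretable in target units
    # (here, thousands of dollars).
    return np.mean(np.abs(y_true - y_pred))

targets     = np.array([15.2, 42.3, 50.0])    # Prices in thousands of dollars
predictions = np.array([14.7, 40.3, 51.0])

print(mse(targets, predictions))    # 1.75
print(mae(targets, predictions))    # About 1.17: off by roughly $1,170 on average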
Validating your approach using K-fold validation

To evaluate your network while you keep adjusting its parameters (such as the number of epochs used for training), you could split the data into a training set and a validation set, as you did in the previous examples. But because you have so few data points, the validation set would end up being very small (for instance, about 100 examples). As a consequence, the validation scores might change a lot depending on which data points you chose to use for validation and which you chose for training: the validation scores might have a high variance with regard to the validation split. This would prevent you from reliably evaluating your model.

The best practice in such situations is to use K-fold cross-validation. It consists of splitting the available data into K partitions (typically K = 4 or 5), instantiating K identical models, and training each one on K - 1 partitions while evaluating on the remaining partition. The validation score for the model used is then the average of the K validation scores obtained.

Figure: 3-fold cross-validation. The data is split into 3 partitions; for each fold, a model is trained on the other two partitions and scored on the held-out one; the final score is the average of the 3 validation scores.

In terms of code, this is straightforward.
Listing: K-fold validation

import numpy as np

k = 4
num_val_samples = len(train_data) // k
num_epochs = 100
all_scores = []

for i in range(k):
    print('processing fold #', i)
    # Prepares the validation data: data from partition #i
    val_data = train_data[i * num_val_samples: (i + 1) * num_val_samples]
    val_targets = train_targets[i * num_val_samples: (i + 1) * num_val_samples]

    # Prepares the training data: data from all other partitions
    partial_train_data = np.concatenate(
        [train_data[:i * num_val_samples],
         train_data[(i + 1) * num_val_samples:]],
        axis=0)
    partial_train_targets = np.concatenate(
        [train_targets[:i * num_val_samples],
         train_targets[(i + 1) * num_val_samples:]],
        axis=0)

    model = build_model()    # Builds the Keras model (already compiled)
    model.fit(partial_train_data, partial_train_targets,    # Trains the model (in silent mode, verbose=0)
              epochs=num_epochs, batch_size=1, verbose=0)
    val_mse, val_mae = model.evaluate(val_data, val_targets, verbose=0)    # Evaluates the model on the validation data
    all_scores.append(val_mae)

Running this with num_epochs = 100 yields four per-fold MAE scores in all_scores that differ noticeably from one another (the exact values vary from run to run). Their average, np.mean(all_scores), is a much more reliable metric than any single score; that's the entire point of K-fold cross-validation. In this case, the average MAE comes out between 2 and 3: predictions that are off by $2,000-$3,000 on average, which is significant considering that the prices range from $10,000 to $50,000.

Let's try training the network a bit longer: 500 epochs. To keep a record of how well the model does at each epoch, you'll modify the training loop to save the per-epoch validation score log.
Listing: Saving the validation logs at each fold

num_epochs = 500
all_mae_histories = []

for i in range(k):
    print('processing fold #', i)
    # Prepares the validation data: data from partition #i
    val_data = train_data[i * num_val_samples: (i + 1) * num_val_samples]
    val_targets = train_targets[i * num_val_samples: (i + 1) * num_val_samples]

    # Prepares the training data: data from all other partitions
    partial_train_data = np.concatenate(
        [train_data[:i * num_val_samples],
         train_data[(i + 1) * num_val_samples:]],
        axis=0)
    partial_train_targets = np.concatenate(
        [train_targets[:i * num_val_samples],
         train_targets[(i + 1) * num_val_samples:]],
        axis=0)

    model = build_model()    # Builds the Keras model (already compiled)
    history = model.fit(partial_train_data, partial_train_targets,    # Trains the model (in silent mode, verbose=0)
                        validation_data=(val_data, val_targets),
                        epochs=num_epochs, batch_size=1, verbose=0)
    mae_history = history.history['val_mean_absolute_error']
    all_mae_histories.append(mae_history)

You can then compute the average of the per-epoch MAE scores for all folds.

Listing: Building the history of successive mean K-fold validation scores

average_mae_history = [
    np.mean([x[i] for x in all_mae_histories]) for i in range(num_epochs)]

Let's plot this.

Listing: Plotting validation scores

import matplotlib.pyplot as plt

plt.plot(range(1, len(average_mae_history) + 1), average_mae_history)
plt.xlabel('Epochs')
plt.ylabel('Validation MAE')
plt.show()

Figure: Validation MAE by epoch

It may be a little difficult to see the plot, due to scaling issues and relatively high variance. Let's do the following:

- Omit the first 10 data points, which are on a different scale than the rest of the curve.
- Replace each point with an exponential moving average of the previous points, to obtain a smooth curve.
The result is shown in the next figure.

Listing: Plotting validation scores, excluding the first 10 data points

def smooth_curve(points, factor=0.9):
    smoothed_points = []
    for point in points:
        if smoothed_points:
            previous = smoothed_points[-1]
            smoothed_points.append(previous * factor + point * (1 - factor))
        else:
            smoothed_points.append(point)
    return smoothed_points

smooth_mae_history = smooth_curve(average_mae_history[10:])

plt.plot(range(1, len(smooth_mae_history) + 1), smooth_mae_history)
plt.xlabel('Epochs')
plt.ylabel('Validation MAE')
plt.show()

Figure: Validation MAE by epoch, excluding the first 10 data points

According to this plot, validation MAE stops improving significantly after 80 epochs. Past that point, you start overfitting.

Once you're finished tuning other parameters of the model (in addition to the number of epochs, you could also adjust the size of the hidden layers), you can train a final production model on all of the training data, with the best parameters, and then look at its performance on the test data.

Listing: Training the final model

model = build_model()                    # Gets a fresh, compiled model
model.fit(train_data, train_targets,     # Trains it on the entirety of the data
          epochs=80, batch_size=16, verbose=0)
test_mse_score, test_mae_score = model.evaluate(test_data, test_targets)
Here's the final result: a test_mae_score of about 2.5, meaning you're still off by roughly $2,500 (the exact value will vary from run to run).

Wrapping up

Here's what you should take away from this example:

- Regression is done using different loss functions than what we used for classification. Mean squared error (MSE) is a loss function commonly used for regression.
- Similarly, evaluation metrics to be used for regression differ from those used for classification; naturally, the concept of accuracy doesn't apply for regression. A common regression metric is mean absolute error (MAE).
- When features in the input data have values in different ranges, each feature should be scaled independently as a preprocessing step.
- When there is little data available, using K-fold validation is a great way to reliably evaluate a model.
- When little training data is available, it's preferable to use a small network with few hidden layers (typically only one or two), in order to avoid severe overfitting.
Summary

- You're now able to handle the most common kinds of machine-learning tasks on vector data: binary classification, multiclass classification, and scalar regression. The "Wrapping up" sections earlier in this chapter summarize the important points you've learned regarding these types of tasks.
- You'll usually need to preprocess raw data before feeding it into a neural network.
- When your data has features with different ranges, scale each feature independently as part of preprocessing.
- As training progresses, neural networks eventually begin to overfit and obtain worse results on never-before-seen data.
- If you don't have much training data, use a small network with only one or two hidden layers, to avoid severe overfitting.
- If your data is divided into many categories, you may cause information bottlenecks if you make the intermediate layers too small.
- Regression uses different loss functions and different evaluation metrics than classification.
- When you're working with little data, K-fold validation can help reliably evaluate your model.
Fundamentals of machine learning

This chapter covers:
- Forms of machine learning beyond classification and regression
- Formal evaluation procedures for machine-learning models
- Preparing data for deep learning
- Feature engineering
- Tackling overfitting
- The universal workflow for approaching machine-learning problems

After the three practical examples in chapter 3, you should be starting to feel familiar with how to approach classification and regression problems using neural networks, and you've witnessed the central problem of machine learning: overfitting. This chapter will formalize some of your new intuition into a solid conceptual framework for attacking and solving deep-learning problems. We'll consolidate all of these concepts (model evaluation, data preprocessing and feature engineering, and tackling overfitting) into a detailed seven-step workflow for tackling any machine-learning task.
Four branches of machine learning

In our previous examples, you've become familiar with three specific types of machine-learning problems: binary classification, multiclass classification, and scalar regression. All three are instances of supervised learning, where the goal is to learn the relationship between training inputs and training targets.

Supervised learning is just the tip of the iceberg: machine learning is a vast field with a complex subfield taxonomy. Machine-learning algorithms generally fall into four broad categories, described in the following sections.

Supervised learning

This is by far the most common case. It consists of learning to map input data to known targets (also called annotations), given a set of examples (often annotated by humans). All four examples you've encountered in this book so far were canonical examples of supervised learning. Generally, almost all applications of deep learning that are in the spotlight these days belong in this category, such as optical character recognition, speech recognition, image classification, and language translation.

Although supervised learning mostly consists of classification and regression, there are more exotic variants as well, including the following (with examples):

- Sequence generation: given a picture, predict a caption describing it. Sequence generation can sometimes be reformulated as a series of classification problems (such as repeatedly predicting a word or token in a sequence).
- Syntax tree prediction: given a sentence, predict its decomposition into a syntax tree.
- Object detection: given a picture, draw a bounding box around certain objects inside the picture. This can also be expressed as a classification problem (given many candidate bounding boxes, classify the contents of each one) or as a joint classification and regression problem, where the bounding-box coordinates are predicted via vector regression.
- Image segmentation: given a picture, draw a pixel-level mask on a specific object.

Unsupervised learning

This branch of machine learning consists of finding interesting transformations of the input data without the help of any targets, for the purposes of data visualization, data compression, or data denoising, or to better understand the correlations present in the data at hand. Unsupervised learning is the bread and butter of data analytics, and it's often a necessary step in better understanding a dataset before attempting to solve a supervised-learning problem. Dimensionality reduction and clustering are well-known categories of unsupervised learning.

Self-supervised learning

This is a specific instance of supervised learning, but it's different enough that it deserves its own category.
Self-supervised learning is supervised learning without human-annotated labels; you can think of it as supervised learning without any humans in the loop. There are still labels involved (because the learning has to be supervised by something), but they're generated from the input data, typically using a heuristic algorithm.

For instance, autoencoders are a well-known instance of self-supervised learning, where the generated targets are the input, unmodified. In the same way, trying to predict the next frame in a video, given past frames, or the next word in a text, given previous words, are instances of self-supervised learning (temporally supervised learning, in this case: supervision comes from future input data).

Note that the distinction between supervised, self-supervised, and unsupervised learning can be blurry sometimes; these categories are more of a continuum without solid borders. Self-supervised learning can be reinterpreted as either supervised or unsupervised learning, depending on whether you pay attention to the learning mechanism or to the context of its application.

NOTE: In this book, we'll focus specifically on supervised learning, because it's by far the dominant form of deep learning today, with a wide range of industry applications. We'll also take a briefer look at self-supervised learning in later chapters.

Reinforcement learning

Long overlooked, this branch of machine learning recently started to get a lot of attention after Google DeepMind successfully applied it to learning to play Atari games (and, later, learning to play Go at the highest level). In reinforcement learning, an agent receives information about its environment and learns to choose actions that will maximize some reward. For instance, a neural network that "looks" at a videogame screen and outputs game actions in order to maximize its score can be trained via reinforcement learning.

Currently, reinforcement learning is mostly a research area and hasn't yet had significant practical successes beyond games. In time, however, we expect to see reinforcement learning take over an increasingly large range of real-world applications: self-driving cars, robotics, resource management, education, and so on. It's an idea whose time has come, or will come soon.

Classification and regression glossary

Classification and regression involve many specialized terms. You've come across some of them in earlier examples, and you'll see more of them in future chapters. They have precise, machine-learning-specific definitions, and you should be familiar with them:

- Sample or input: one data point that goes into your model.
- Prediction or output: what comes out of your model.
- Target: the truth; what your model should ideally have predicted, according to an external source of data.
- Prediction error or loss value: a measure of the distance between your model's prediction and the target.
- Classes: a set of possible labels to choose from in a classification problem. For example, when classifying cat and dog pictures, "dog" and "cat" are the two classes.
- Label: a specific instance of a class annotation in a classification problem. For instance, if a picture is annotated as containing the class "dog," then "dog" is a label of that picture.
- Ground-truth or annotations: all targets for a dataset, typically collected by humans.
- Binary classification: a classification task where each input sample should be categorized into two exclusive categories.
- Multiclass classification: a classification task where each input sample should be categorized into more than two categories; for instance, classifying handwritten digits.
- Multilabel classification: a classification task where each input sample can be assigned multiple labels. For instance, a given image may contain both a cat and a dog and should be annotated both with the "cat" label and the "dog" label. The number of labels per image is usually variable.
- Scalar regression: a task where the target is a continuous scalar value. Predicting house prices is a good example: the different target prices form a continuous space.
- Vector regression: a task where the target is a set of continuous values: for example, a continuous vector. If you're doing regression against multiple values (such as the coordinates of a bounding box in an image), then you're doing vector regression.
- Mini-batch or batch: a small set of samples (typically between 8 and 128) that are processed simultaneously by the model. The number of samples is often a power of 2, to facilitate memory allocation on GPU. When training, a mini-batch is used to compute a single gradient-descent update applied to the weights of the model.
Evaluating machine-learning models

In the three examples presented in chapter 3, we split the data into a training set, a validation set, and a test set. The reason not to evaluate the models on the same data they were trained on quickly became evident: after just a few epochs, all three models began to overfit. That is, their performance on never-before-seen data started stalling (or worsening) compared to their performance on the training data, which always improves as training progresses.

In machine learning, the goal is to achieve models that generalize, that is, perform well on never-before-seen data, and overfitting is the central obstacle. You can only control that which you can observe, so it's crucial to be able to reliably measure the generalization power of your model. The following sections look at strategies for mitigating overfitting and maximizing generalization. In this section, we'll focus on how to measure generalization: how to evaluate machine-learning models.

Training, validation, and test sets

Evaluating a model always boils down to splitting the available data into three sets: training, validation, and test. You train on the training data and evaluate your model on the validation data. Once your model is ready for prime time, you test it one final time on the test data.

You may ask, why not have two sets: a training set and a test set? You'd train on the training data and evaluate on the test data. Much simpler!

The reason is that developing a model always involves tuning its configuration: for example, choosing the number of layers or the size of the layers (called the hyperparameters of the model, to distinguish them from the parameters, which are the network's weights). You do this tuning by using as a feedback signal the performance of the model on the validation data. In essence, this tuning is a form of learning: a search for a good configuration in some parameter space. As a result, tuning the configuration of the model based on its performance on the validation set can quickly result in overfitting to the validation set, even though your model is never directly trained on it.

Central to this phenomenon is the notion of information leaks. Every time you tune a hyperparameter of your model based on the model's performance on the validation set, some information about the validation data leaks into the model. If you do this only once, for one parameter, then very few bits of information will leak, and your validation set will remain reliable to evaluate the model. But if you repeat this many times (running one experiment, evaluating on the validation set, and modifying your model as a result), then you'll leak an increasingly significant amount of information about the validation set into the model.

At the end of the day, you'll end up with a model that performs artificially well on the validation data, because that's what you optimized it for. You care about performance on completely new data, not the validation data, so you need to use a completely different, never-before-seen dataset to evaluate the model: the test dataset. Your model shouldn't have had access to any information about the test set, even indirectly.
If anything about the model has been tuned based on test set performance, then your measure of generalization will be flawed.

Splitting your data into training, validation, and test sets may seem straightforward, but there are a few advanced ways to do it that can come in handy when little data is available. Let's review three classic evaluation recipes: simple hold-out validation, K-fold validation, and iterated K-fold validation with shuffling.

Simple hold-out validation

Set apart some fraction of your data as your test set. Train on the remaining data, and evaluate on the test set. As you saw in the previous sections, in order to prevent information leaks, you shouldn't tune your model based on the test set, and therefore you should also reserve a validation set.

Figure: Simple hold-out validation split. The total available labeled data is divided into a training set ("train on this") and a held-out validation set ("evaluate on this").

The following listing shows a simple implementation.

Listing: Hold-out validation

num_validation_samples = 10000

np.random.shuffle(data)    # Shuffling the data is usually appropriate.

validation_data = data[:num_validation_samples]    # Defines the validation set
data = data[num_validation_samples:]

training_data = data[:]    # Defines the training set

model = get_model()    # Trains a model on the training data, and evaluates it on the validation data
model.train(training_data)
validation_score = model.evaluate(validation_data)

# At this point you can tune your model,
# retrain it, evaluate it, tune it again...

model = get_model()    # Once you've tuned your hyperparameters, it's common to train your
model.train(np.concatenate([training_data,    # final model from scratch on all non-test data available.
                            validation_data]))
test_score = model.evaluate(test_data)
This is the simplest evaluation protocol, and it suffers from one flaw: if little data is available, then your validation and test sets may contain too few samples to be statistically representative of the data at hand. This is easy to recognize: if different random shuffling rounds of the data before splitting end up yielding very different measures of model performance, then you're having this issue. K-fold validation and iterated K-fold validation are two ways to address this, as discussed next.

K-fold validation

With this approach, you split your data into K partitions of equal size. For each partition i, train a model on the remaining K - 1 partitions, and evaluate it on partition i. Your final score is then the average of the K scores obtained. This method is helpful when the performance of your model shows significant variance based on your train-test split. Like hold-out validation, this method doesn't exempt you from using a distinct validation set for model calibration.

Figure: Three-fold validation. The data is split into 3 partitions; each fold trains a model on two partitions and validates on the third; the final score is the average of the 3 validation scores.

Listing: K-fold cross-validation

k = 4
num_validation_samples = len(data) // k

np.random.shuffle(data)

validation_scores = []
for fold in range(k):
    # Selects the validation-data partition
    validation_data = data[num_validation_samples * fold:
                           num_validation_samples * (fold + 1)]
    # Uses the remainder of the data as training data. Note that the +
    # operator is list concatenation, not summation.
    training_data = data[:num_validation_samples * fold] + \
                    data[num_validation_samples * (fold + 1):]

    model = get_model()    # Creates a brand-new instance of the model (untrained)
    model.train(training_data)
    validation_score = model.evaluate(validation_data)
    validation_scores.append(validation_score)

validation_score = np.average(validation_scores)    # Validation score: average of the validation scores of the k folds

model = get_model()    # Trains the final model on all non-test data available
model.train(data)
test_score = model.evaluate(test_data)
Iterated K-fold validation with shuffling

This one is for situations in which you have relatively little data available and you need to evaluate your model as precisely as possible. I've found it to be extremely helpful in Kaggle competitions. It consists of applying K-fold validation multiple times, shuffling the data every time before splitting it K ways. The final score is the average of the scores obtained at each run of K-fold validation. Note that you end up training and evaluating P x K models (where P is the number of iterations you use), which can be very expensive. (A short sketch of this procedure follows at the end of this section.)

Things to keep in mind

Keep an eye out for the following when you're choosing an evaluation protocol:

- Data representativeness: you want both your training set and test set to be representative of the data at hand. For instance, if you're trying to classify images of digits, and you're starting from an array of samples where the samples are ordered by their class, taking the first 80% of the array as your training set and the remaining 20% as your test set will result in your training set containing only classes 0-7, whereas your test set contains only classes 8-9. This seems like a ridiculous mistake, but it's surprisingly common. For this reason, you usually should randomly shuffle your data before splitting it into training and test sets.
- The arrow of time: if you're trying to predict the future given the past (for example, tomorrow's weather, stock movements, and so on), you should not randomly shuffle your data before splitting it, because doing so will create a temporal leak: your model will effectively be trained on data from the future. In such situations, you should always make sure all data in your test set is posterior to the data in the training set.
- Redundancy in your data: if some data points in your data appear twice (fairly common with real-world data), then shuffling the data and splitting it into a training set and a validation set will result in redundancy between the training and validation sets. In effect, you'll be testing on part of your training data, which is the worst thing you can do! Make sure your training set and validation set are disjoint.
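The text gives no listing for iterated K-fold validation, so here's a minimal sketch in the same schematic style as the previous listings. get_model, model.train, and model.evaluate are the same placeholder functions used above; the values of p and k are free parameters:

p = 3    # Number of iterations of K-fold validation
k = 4
num_validation_samples = len(data) // k

all_run_scores = []
for run in range(p):
    np.random.shuffle(data)    # Re-shuffle before each K-fold split
    validation_scores = []
    for fold in range(k):
        validation_data = data[num_validation_samples * fold:
                               num_validation_samples * (fold + 1)]
        training_data = data[:num_validation_samples * fold] + \
                        data[num_validation_samples * (fold + 1):]
        model = get_model()    # p * k models are trained in total
        model.train(training_data)
        validation_scores.append(model.evaluate(validation_data))
    all_run_scores.append(np.average(validation_scores))

final_score = np.average(all_run_scores)    # Average over all runs of K-fold validation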
Data preprocessing, feature engineering, and feature learning

In addition to model evaluation, an important question we must tackle before we dive deeper into model development is the following: how do you prepare the input data and targets before feeding them into a neural network? Many data-preprocessing and feature-engineering techniques are domain specific (for example, specific to text data or image data); we'll cover those in the following chapters as we encounter them in practical examples. For now, we'll review the basics that are common to all data domains.

Data preprocessing for neural networks

Data preprocessing aims at making the raw data at hand more amenable to neural networks. This includes vectorization, normalization, handling missing values, and feature extraction.

VECTORIZATION

All inputs and targets in a neural network must be tensors of floating-point data (or, in specific cases, tensors of integers). Whatever data you need to process (sound, images, text), you must first turn into tensors, a step called data vectorization. For instance, in the two previous text-classification examples, we started from text represented as lists of integers (standing for sequences of words), and we used one-hot encoding to turn them into a tensor of float32 data. In the examples of classifying digits and predicting house prices, the data already came in vectorized form, so you were able to skip this step.

VALUE NORMALIZATION

In the digit-classification example, you started from image data encoded as integers in the 0-255 range, encoding grayscale values. Before you fed this data into your network, you had to cast it to float32 and divide by 255 so you'd end up with floating-point values in the 0-1 range. Similarly, when predicting house prices, you started from features that took a variety of ranges: some features had small floating-point values, others had fairly large integer values. Before you fed this data into your network, you had to normalize each feature independently so that it had a standard deviation of 1 and a mean of 0.

In general, it isn't safe to feed into a neural network data that takes relatively large values (for example, multidigit integers, which are much larger than the initial values taken by the weights of a network) or data that is heterogeneous (for example, data where one feature is in the range 0-1 and another is in the range 100-200). Doing so can trigger large gradient updates that will prevent the network from converging. To make learning easier for your network, your data should have the following characteristics:

- Take small values: typically, most values should be in the 0-1 range.
- Be homogenous: that is, all features should take values in roughly the same range.
fundamentals of machine learning additionallythe following stricter normalization practice is common and can helpalthough it isn' always necessary (for exampleyou didn' do this in the digit-classification example)normalize each feature independently to have mean of normalize each feature independently to have standard deviation of this is easy to do with numpy arraysx - mean(axis= / std(axis= assuming is data matrix of shape (samplesfeatureshandling missing values you may sometimes have missing values in your data for instancein the house-price examplethe first feature (the column of index in the datawas the per capita crime rate what if this feature wasn' available for all samplesyou' then have missing values in the training or test data in generalwith neural networksit' safe to input missing values as with the condition that isn' already meaningful value the network will learn from exposure to the data that the value means missing data and will start ignoring the value note that if you're expecting missing values in the test databut the network was trained on data without any missing valuesthe network won' have learned to ignore missing valuesin this situationyou should artificially generate training samples with missing entriescopy some training samples several timesand drop some of the features that you expect are likely to be missing in the test data feature engineering feature engineering is the process of using your own knowledge about the data and about the machine-learning algorithm at hand (in this casea neural networkto make the algorithm work better by applying hardcoded (nonlearnedtransformations to the data before it goes raw datapixel grid into the model in many casesit isn' reasonable to expect machinelearning model to be able to learn from completely arbitrary data the better { { featuresy data needs to be presented to the clock hands{ { - coordinates model in way that will make the model' job easier let' look at an intuitive example theta theta even better theta theta suppose you're trying to develop featuresangles of model that can take as input an clock hands image of clock and can output the figure feature engineering for reading he ime on time of day (see figure clock licensed to
Feature engineering

Feature engineering is the process of using your own knowledge about the data and about the machine-learning algorithm at hand (in this case, a neural network) to make the algorithm work better by applying hardcoded (nonlearned) transformations to the data before it goes into the model. In many cases, it isn't reasonable to expect a machine-learning model to be able to learn from completely arbitrary data: the data needs to be presented to the model in a way that will make the model's job easier.

Let's look at an intuitive example. Suppose you're trying to develop a model that can take as input an image of a clock and can output the time of day (see the figure below).

[Figure: Feature engineering for reading the time on a clock. Raw data: the pixel grid. Better features: the (x, y) coordinates of the clock hands. Even better features: the angles theta1 and theta2 of the clock hands.]

If you choose to use the raw pixels of the image as input data, then you have a difficult machine-learning problem on your hands. You'll need a convolutional neural network to solve it, and you'll have to expend quite a bit of computational resources to train the network.

But if you already understand the problem at a high level (you understand how humans read the time on a clock face), then you can come up with much better input features for a machine-learning algorithm: for instance, it's easy to write a five-line Python script to follow the black pixels of the clock hands and output the (x, y) coordinates of the tip of each hand. Then a simple machine-learning algorithm can learn to associate these coordinates with the appropriate time of day.

You can go even further: do a coordinate change, and express the (x, y) coordinates as polar coordinates with regard to the center of the image. Your input will become the angle theta of each clock hand. At this point, your features are making the problem so easy that no machine learning is required; a simple rounding operation and dictionary lookup are enough to recover the approximate time of day (a minimal sketch of this final step appears at the end of this section).

That's the essence of feature engineering: making a problem easier by expressing it in a simpler way. It usually requires understanding the problem in depth.

Before deep learning, feature engineering used to be critical, because classical shallow algorithms didn't have hypothesis spaces rich enough to learn useful features by themselves. The way you presented the data to the algorithm was essential to its success. For instance, before convolutional neural networks became successful on the MNIST digit-classification problem, solutions were typically based on hardcoded features such as the number of loops in a digit image, the height of each digit in an image, a histogram of pixel values, and so on.

Fortunately, modern deep learning removes the need for most feature engineering, because neural networks are capable of automatically extracting useful features from raw data. Does this mean you don't have to worry about feature engineering as long as you're using deep neural networks? No, for two reasons:

- Good features still allow you to solve problems more elegantly while using fewer resources. For instance, it would be ridiculous to solve the problem of reading a clock face using a convolutional neural network.
- Good features let you solve a problem with far less data. The ability of deep-learning models to learn features on their own relies on having lots of training data available; if you have only a few samples, then the information value in their features becomes critical.
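Here is the promised sketch of the final step of the clock example, assuming the hand-tip coordinates have already been extracted. The helper names hand_angle and approximate_time are hypothetical, and image coordinates are assumed to have y increasing downward:

import math

def hand_angle(x, y, cx, cy):
    # Angle of a hand, measured clockwise from 12 o'clock, given the
    # (x, y) coordinate of its tip and the clock center (cx, cy).
    return math.atan2(x - cx, cy - y) % (2 * math.pi)

def approximate_time(hour_angle, minute_angle):
    # With angle features, a rounding operation is all that's left.
    hour = int(hour_angle / (2 * math.pi) * 12) % 12
    minute = int(minute_angle / (2 * math.pi) * 60) % 60
    return hour, minute

# Example: hour hand pointing right (3 o'clock), minute hand straight up.
h = hand_angle(110, 100, 100, 100)   # pi / 2
m = hand_angle(100, 90, 100, 100)    # 0.0
print(approximate_time(h, m))        # (3, 0)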
Overfitting and underfitting

In all three examples in the previous chapter--predicting movie reviews, topic classification, and house-price regression--the performance of the model on the held-out validation data always peaked after a few epochs and then began to degrade: the model quickly started to overfit to the training data. Overfitting happens in every machine-learning problem. Learning how to deal with overfitting is essential to mastering machine learning.

The fundamental issue in machine learning is the tension between optimization and generalization. Optimization refers to the process of adjusting a model to get the best performance possible on the training data (the learning in machine learning), whereas generalization refers to how well the trained model performs on data it has never seen before. The goal of the game is to get good generalization, of course, but you don't control generalization; you can only adjust the model based on its training data.

At the beginning of training, optimization and generalization are correlated: the lower the loss on training data, the lower the loss on test data. While this is happening, your model is said to be underfit: there is still progress to be made; the network hasn't yet modeled all relevant patterns in the training data. But after a certain number of iterations on the training data, generalization stops improving, and validation metrics stall and then begin to degrade: the model is starting to overfit. That is, it's beginning to learn patterns that are specific to the training data but that are misleading or irrelevant when it comes to new data.

To prevent a model from learning misleading or irrelevant patterns found in the training data, the best solution is to get more training data. A model trained on more data will naturally generalize better. When that isn't possible, the next-best solution is to modulate the quantity of information that your model is allowed to store, or to add constraints on what information it's allowed to store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.

The process of fighting overfitting this way is called regularization. Let's review some of the most common regularization techniques and apply them in practice to improve the movie-review classification model from the previous chapter.

Reducing the network's size

The simplest way to prevent overfitting is to reduce the size of the model: the number of learnable parameters in the model (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's capacity. Intuitively, a model with more parameters has more memorization capacity and therefore can easily learn a perfect dictionary-like mapping between training samples and their targets--a mapping without any generalization power. For instance, a model with 500,000 binary parameters could easily be made to learn the class of every digit in the MNIST training set:
we'd need only 10 binary parameters for each of the 50,000 digits. But such a model would be useless for classifying new digit samples. Always keep this in mind: deep-learning models tend to be good at fitting to the training data, but the real challenge is generalization, not fitting.

On the other hand, if the network has limited memorization resources, it won't be able to learn this mapping as easily; thus, in order to minimize its loss, it will have to resort to learning compressed representations that have predictive power regarding the targets--precisely the type of representations we're interested in. At the same time, keep in mind that you should use models that have enough parameters that they don't underfit: your model shouldn't be starved for memorization resources. There is a compromise to be found between too much capacity and not enough capacity.

Unfortunately, there is no magical formula to determine the right number of layers or the right size for each layer. You must evaluate an array of different architectures (on your validation set, not on your test set, of course) in order to find the correct model size for your data. The general workflow to find an appropriate model size is to start with relatively few layers and parameters, and increase the size of the layers or add new layers until you see diminishing returns with regard to validation loss.

Let's try this on the movie-review classification network. The original network is shown next.

Listing: Original model

from keras import models
from keras import layers

model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))

Now let's try to replace it with this smaller network.

Listing: Version of the model with lower capacity

model = models.Sequential()
model.add(layers.Dense(4, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(4, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
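For a concrete sense of the capacity gap, the original network has 10,000 x 16 + 16 + 16 x 16 + 16 + 16 x 1 + 1 = 160,305 learnable parameters, versus 40,029 for the smaller one. The validation histories compared below can be produced with a sketch like the following; the variable names original_model and smaller_model, and the vectorized IMDB arrays x_train, y_train, x_val, and y_val, are assumptions carried over from the previous chapter:

original_model.compile(optimizer='rmsprop',
                       loss='binary_crossentropy',
                       metrics=['acc'])
smaller_model.compile(optimizer='rmsprop',
                      loss='binary_crossentropy',
                      metrics=['acc'])

# Train both networks on the same data, so their validation losses
# can be compared epoch by epoch.
original_hist = original_model.fit(x_train, y_train,
                                   epochs=20, batch_size=512,
                                   validation_data=(x_val, y_val))
smaller_hist = smaller_model.fit(x_train, y_train,
                                 epochs=20, batch_size=512,
                                 validation_data=(x_val, y_val))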
The figure below shows a comparison of the validation losses of the original network and the smaller network. The dots are the validation loss values of the smaller network, and the crosses are the initial network. (Remember: a lower validation loss signals a better model.)

[Figure: Effect of model capacity on validation loss: trying a smaller model]

As you can see, the smaller network starts overfitting later than the reference network (after six epochs rather than four), and its performance degrades more slowly once it starts overfitting.

Now, for kicks, let's add to this benchmark a network that has much more capacity--far more than the problem warrants.

Listing: Version of the model with higher capacity

model = models.Sequential()
model.add(layers.Dense(512, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))

The next figure shows how the bigger network fares compared to the reference network. The dots are the validation loss values of the bigger network, and the crosses are the initial network.

[Figure: Effect of model capacity on validation loss: trying a bigger model]
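Comparison plots like the ones in these figures can be drawn with a short Matplotlib sketch; original_hist and smaller_hist are the History objects from the training sketch above, and a bigger-model history would be plotted the same way:

import matplotlib.pyplot as plt

# Plot validation loss per epoch for the two networks.
epochs = range(1, len(original_hist.history['val_loss']) + 1)
plt.plot(epochs, original_hist.history['val_loss'], 'b+', label='original model')
plt.plot(epochs, smaller_hist.history['val_loss'], 'bo', label='smaller model')
plt.xlabel('Epochs')
plt.ylabel('Validation loss')
plt.legend()
plt.show()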
The bigger network starts overfitting almost immediately, after just one epoch, and it overfits much more severely. Its validation loss is also noisier.

Meanwhile, the next figure shows the training losses for the two networks. As you can see, the bigger network gets its training loss near zero very quickly. The more capacity the network has, the more quickly it can model the training data (resulting in a low training loss), but the more susceptible it is to overfitting (resulting in a large difference between the training and validation loss).

[Figure: Effect of model capacity on training loss: trying a bigger model]

Adding weight regularization

You may be familiar with the principle of Occam's razor: given two explanations for something, the explanation most likely to be correct is the simplest one--the one that makes fewer assumptions. This idea also applies to the models learned by neural networks: given some training data and a network architecture, multiple sets of weight values (multiple models) could explain the data. Simpler models are less likely to overfit than complex ones.

A simple model in this context is a model where the distribution of parameter values has less entropy (or a model with fewer parameters, as you saw in the previous section). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights to take only small values, which makes the distribution of weight values more regular. This is called weight regularization, and it's done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:

- L1 regularization--The cost added is proportional to the absolute value of the weight coefficients (the L1 norm of the weights).
- L2 regularization--The cost added is proportional to the square of the value of the weight coefficients (the L2 norm of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically the same as L2 regularization.
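In Keras, weight regularization is added by passing weight-regularizer instances to layers as keyword arguments. A sketch of adding L2 regularization to the movie-review model (the 0.001 coefficient is an illustrative value):

from keras import models
from keras import layers
from keras import regularizers

model = models.Sequential()
model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001),
                       activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001),
                       activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))

# l2(0.001) means every coefficient in the layer's weight matrix adds
# 0.001 * weight_coefficient_value ** 2 to the total loss of the network.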