Dataset columns:

| Column | Type | Values / lengths |
| --- | --- | --- |
| prompt | string | lengths 501 to 4.98M |
| target | string (categorical) | 1 value |
| chunk_prompt | bool | 1 class |
| kind | string (categorical) | 2 values |
| prob | float64 | 0.2 to 0.97 |
| path | string | lengths 10 to 394 |
| quality_prob | float64 | 0.4 to 0.99 |
| learning_prob | float64 | 0.15 to 1 |
| filename | string | lengths 4 to 221 |
## Learning Objectives

The goal of this notebook is to describe data and to see and practice how to:

- Load raw data
- View the loaded data
- Formulate an exploratory data description question
- Describe the raw data tables
- See and practice data science research tools and practices

### 1 Practical Data Science Research

Can we find good online learning strategies? We need to define:

- "good": what is good in the context of learning?
- "learning": what is learning, and how do you measure it?
- "strategy": what is a strategy for learning, and how would you observe it?

Then we need to ask:

- Is it possible to observe any learning in the data?
- Are there any observable strategies?

**Note:** Without a clear goal you can get lost in the data; however, there is also a need to explore. So how do you balance broad exploration for knowledge against in-depth exploitation of that knowledge?

#### Data used

```
Kuzilek J., Hlosta M., Zdrahal Z. Open University Learning Analytics dataset Sci. Data 4:170171 doi: 10.1038/sdata.2017.171 (2017).
```

See [https://analyse.kmi.open.ac.uk/open_dataset#about](https://analyse.kmi.open.ac.uk/open_dataset#about)

#### What do the tables in the data look like?

**Note:** If you are getting ahead, explore the other tables as well.

### 2 Import software libraries

There are data science libraries in `Python` for data structures, analysis and plotting.

**Note:**

- Using the correct path to the library is important.
- Do not ask me how many times I have had import errors. I usually have a "standard" layout for projects to avoid spending too much time on software configuration.
- Using an appropriate data structure is important for usability and computational performance.

```
# Import python standard library for operating system functionality.
# This improves portability
import os
# Library for random numbers
import random

# Blank line convention to separate the standard libraries and the other libraries.
# Data structure and analysis library
import pandas as pd
# Data visualization based on `matplotlib`
import seaborn as sns
# Plotting library
import matplotlib.pyplot as plt
```

### 3 Notebook variables and settings

Declaring variables at the top means that they are in scope for the entire `Jupyter` notebook.

**Note:**

- The scope of variables in a `Jupyter` notebook can be confusing if your programming experience is in a different environment.
- A consistent code structure reduces confusion and improves reproducibility.
- Sensible variable names provide better readability.
- Editors with auto-complete maintain typing efficiency.

```
# Where am I
print(os.getcwd())

# Declare constants
DATA_FOLDER = '../data'

# Set visualization styles with a predefined `seaborn` style
sns.set_context("notebook", font_scale=1.5, rc={"lines.linewidth": 2.5})
```

### 4 Load raw data

We want to work with data in the `DATA_FOLDER`.

- What is the size of the data you want to work with?
- Can it fit in memory on the hardware you are using?
  - **Note:** Data that is too large can reduce performance (see the chunked-reading sketch below).
- Is all the data needed at this point?

**Note:** When writing output, remember to be consistent; that makes it easier to search the output, e.g. files or content.
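If a table turns out to be too large for memory, one common workaround is to stream it in chunks. The sketch below is not part of the original notebook; it assumes the `studentVle.csv` file and its `sum_click` column from the OULAD documentation, and reuses the imports and `DATA_FOLDER` constant defined above.

```
# Sketch (not in the original notebook): stream a CSV that may not fit in memory.
# Assumes the `studentVle.csv` table and its `sum_click` column from the OULAD
# documentation; reuses `os`, `pandas` and `DATA_FOLDER` from the cells above.
vle_path = os.path.join(DATA_FOLDER, 'studentVle.csv')

total_clicks = 0
# `chunksize` makes `read_csv` yield DataFrames of at most that many rows.
for chunk in pd.read_csv(vle_path, chunksize=100000):
    total_clicks += chunk['sum_click'].sum()

print('Total clicks summed over all chunks: {}'.format(total_clicks))
```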
```
print('File name: File size')

# List files in the `DATA_FOLDER`
files = os.listdir(DATA_FOLDER)

# Iterate over each file in the folder
for file_name in files:
    # Get the file stats
    file_stat = os.stat(os.path.join(DATA_FOLDER, file_name))
    # Convert the file size in bytes to MB
    size_MB = file_stat.st_size / 1024**2
    # Print the file name and size
    print('{}: {:.3f} MB'.format(file_name, size_MB))
```

What is in the data that can help answer the research question?

- Break down the exploration into small steps and sub-questions.
- `courses.csv` seems promising and not too large. An initial sub-question is: how many courses are there in the data set?

```
# Load "raw" data regarding courses.
# Loading data in a separate cell can avoid computationally expensive file IO.

# Declare the path to the data file with course info
courses_path = os.path.join(DATA_FOLDER, 'courses.csv')

# Use `pandas` `read_csv` to read the CSV data file in as a `pandas` `DataFrame`
courses = pd.read_csv(courses_path)
```

When can we work with a subset of the data, and why would we?

- Early in development, to speed up the parts that do not depend on all of the data, by reducing both computation time and the scope of the analysis.

**Note:** avoid drawing too strong conclusions from data subsets.

```
# Define functions in separate cells for code separation and structure.
def count_lines(file_path):
    """
    Count total number of lines in a file. Not optimized for speed.
    See e.g. https://stackoverflow.com/questions/845058/how-to-get-line-count-cheaply-in-python

    :param file_path: Path to file
    :type file_path: str
    :return: Total number of lines in the file
    :rtype int:
    """
    with open(file_path, 'r') as file_descriptor:
        _n_lines = 0
        for _ in file_descriptor:
            _n_lines += 1
    return _n_lines
```

Get a data sample from the `courses.csv` data.

```
# Ratio of lines to sample
RATIO_OF_SAMPLES_FROM_DATA_FILE = 0.2

# Get number of lines in the file
n_lines = count_lines(courses_path)

# Get number of lines to sample. Cast it to an integer
n_samples_course = int(n_lines * RATIO_OF_SAMPLES_FROM_DATA_FILE)

# `pandas` `read_csv` API specifies number of lines (rows) to skip
n_lines_to_skip = n_lines - n_samples_course

# Uniformly randomly sample which lines to skip. They need to be ordered
skip_lines = sorted(random.sample(range(1, n_lines), n_lines_to_skip))

# Read the sampled lines of the file
sample_courses = pd.read_csv(courses_path, skiprows=skip_lines)

# Print sampled data. Note the difference in size compared with the complete data frame
print("Rows and columns in the data sample: {}".format(sample_courses.shape))

# Assert that the sample has fewer (or equal) lines than the original.
# `assert` is a very convenient keyword to check an assumption: execution stops
# with an AssertionError as soon as the condition is false.
assert n_lines >= sample_courses.shape[0]
```

What does the data look like?

```
print("Information about the data frame")
print(courses.info())

print("First row in dataframe")
print(courses.head(1))

# What are the names of the columns
print("Columns: {}".format(courses.columns))

# There are often prettier ways to display things.
import IPython.display as display

# Use the `IPython` display function
display.display(courses.head(1))
```

### 5 Explore the data

The data is now loaded, so we can see whether we can answer some questions.

- How many unique modules (`code_module`) are there?
  - **Note:** It can be useful to formulate questions to guide the exploration and avoid getting lost.
- How many unique module presentations (`code_presentation`) are there?
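The next cell answers these questions with an explicit loop; for comparison, `pandas` can also report the counts directly with `nunique`. This is only a sketch of an alternative, not part of the original notebook:

```
# Sketch: `nunique` returns the number of distinct values per column as a Series.
print(courses[['code_module', 'code_presentation']].nunique())
```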
```
# Columns of interest
cols = ['code_module', 'code_presentation']

# Iterate over columns in the data frame
for col in cols:
    # Get unique column values
    unique_values = courses[col].unique()
    # Get number of unique values
    n_unique_values = len(unique_values)
    print('Categories for {}: {}; n categories: {}'.format(col, unique_values, n_unique_values))
```

What is the presentation length for each `code_module`?

- What could the difference depend on?

```
# Column name for x-axis values
xs = 'code_module'
# Column name for y-axis values
ys = 'module_presentation_length'
# Column name for the colors in the plot
group_name = 'code_presentation'

# Plot the data points
ax = sns.stripplot(x=xs, y=ys, hue=group_name, data=courses)

# Set the legend. We call `matplotlib` instead of using the `seaborn` API
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)

# Set plot title
ax.set_title('Presentation Length of Modules')

# Lazy programmer showoff. Create an anonymous function (lambda) that splits the function argument on `_` into a list and
# capitalizes each element. Then the elements of the list are joined as a string separated by ` `
pretty_label = lambda x: ' '.join([_.capitalize() for _ in x.split('_')])

# Set x label
ax.set_xlabel(pretty_label(xs))
# Set y label
ax.set_ylabel(pretty_label(ys))
```

Tabular description of the groups based on descriptive statistics:

```
# Get the groups
code_modules = courses.groupby(by=xs)

# Iterate over the groups
for name, group in code_modules:
    print('Descriptive stats for {} grouped by {}: {}'.format(ys, xs, name))
    print(group[ys].describe())
```

#### What are the descriptive statistics and visualizations of the other tables in the data?

### 6 Explore the concept of learning as measured by final grade

- What is the final grade?
- In this data it is `final_result` in the `studentInfo` table

```
# Load data
student_info_path = os.path.join(DATA_FOLDER, 'studentInfo.csv')
student_info = pd.read_csv(student_info_path)
print(student_info.info())
```

**Note:** With unfamiliar APIs and dynamically typed languages such as `python` it can be difficult to know which variable operations are syntactically correct.

```
# Get final results column values
# Get the unique values
print('Final result values (categories): {}'.format(student_info['final_result'].unique()))

# Get the number of values for each category
final_results = student_info['final_result'].value_counts()
print(final_results.head())

# These values are now a `pandas.Series` indexed by the category
print(type(final_results), final_results.index)

# Normalize the value counts
final_results = final_results.div(final_results.sum(), axis=0)

# Plot the final results categories
ax = final_results.plot(kind='bar')
ax.set_title('Final result ratios for all students')
ax.set_xlabel('Final Result')
ax.set_ylabel('Ratio')
```

What is the completion rate?

```
# Sum the values for non-completion.
# Find the rows that have Fail or Withdrawn and sum the values
def get_completion_rate(df):
    not_completed = df.loc[['Fail', 'Withdrawn']].sum()
    completed = 1.0 - not_completed
    print("Final result:")
    print("not completed: {:.3f}".format(not_completed))
    print("completed: {:.3f}".format(completed))

get_completion_rate(final_results)
```

What is the final grade for module `AAA`?
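Before filtering by module, one aside on the `get_completion_rate` helper above: selecting `['Fail', 'Withdrawn']` with `.loc` raises a `KeyError` in recent `pandas` releases if either label is missing from the value counts, which can happen on a small or filtered subset. A more defensive variant (a sketch, not part of the original notebook) uses `reindex`:

```
def get_completion_rate_safe(result_ratios):
    """Sketch: like get_completion_rate, but missing categories count as 0."""
    not_completed = result_ratios.reindex(['Fail', 'Withdrawn'], fill_value=0.0).sum()
    completed = 1.0 - not_completed
    print("not completed: {:.3f}".format(not_completed))
    print("completed: {:.3f}".format(completed))


get_completion_rate_safe(final_results)
```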
- Now we need to filter the data based on the module we are interested in.

**Note:** The code seems repetitive (a refactoring sketch follows at the end of this section).

```
# Define code module
code_module = 'AAA'

# Get the number of values for each category for the code module
student_info_f = student_info.loc[student_info['code_module'] == code_module]
final_results_f = student_info_f['final_result'].value_counts()
print(final_results_f.head())

# Normalize the value counts
final_results_f = final_results_f.div(final_results_f.sum(), axis=0)

# Plot the final results categories
ax = final_results_f.plot(kind='bar')
ax.set_title('Final result ratios for all students on {}'.format(code_module))
ax.set_xlabel('Final Result')
ax.set_ylabel('Ratio')
```

What is the completion rate for `AAA`?

```
get_completion_rate(final_results_f)
```

#### What are the final results for each `code_module`?
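To answer the closing question without copy-pasting the cell above for every module, the repeated steps can be wrapped in a function and looped over the unique `code_module` values. This is a sketch, and the helper name `plot_final_results` is not part of the original notebook:

```
def plot_final_results(df, code_module):
    """Sketch: plot normalized final-result ratios for one code_module."""
    subset = df.loc[df['code_module'] == code_module]
    # `normalize=True` divides the counts by their total, like the manual `div` above
    ratios = subset['final_result'].value_counts(normalize=True)
    ax = ratios.plot(kind='bar')
    ax.set_title('Final result ratios for {}'.format(code_module))
    ax.set_xlabel('Final Result')
    ax.set_ylabel('Ratio')
    plt.show()
    return ratios


for module in student_info['code_module'].unique():
    plot_final_results(student_info, module)
```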
*Row metadata:* true, code, 0.640214, null, null, null, null
# Session 1: Introduction to Tensorflow <p class='lead'> Creative Applications of Deep Learning with Tensorflow<br /> Parag K. Mital<br /> Kadenze, Inc.<br /> </p> <a name="learning-goals"></a> # Learning Goals * Learn the basic idea behind machine learning: learning from data and discovering representations * Learn how to preprocess a dataset using its mean and standard deviation * Learn the basic components of a Tensorflow Graph # Table of Contents <!-- MarkdownTOC autolink=true autoanchor=true bracket=round --> - [Introduction](#introduction) - [Promo](#promo) - [Session Overview](#session-overview) - [Learning From Data](#learning-from-data) - [Deep Learning vs. Machine Learning](#deep-learning-vs-machine-learning) - [Invariances](#invariances) - [Scope of Learning](#scope-of-learning) - [Existing datasets](#existing-datasets) - [Preprocessing Data](#preprocessing-data) - [Understanding Image Shapes](#understanding-image-shapes) - [The Batch Dimension](#the-batch-dimension) - [Mean/Deviation of Images](#meandeviation-of-images) - [Dataset Preprocessing](#dataset-preprocessing) - [Histograms](#histograms) - [Histogram Equalization](#histogram-equalization) - [Tensorflow Basics](#tensorflow-basics) - [Variables](#variables) - [Tensors](#tensors) - [Graphs](#graphs) - [Operations](#operations) - [Tensor](#tensor) - [Sessions](#sessions) - [Tensor Shapes](#tensor-shapes) - [Many Operations](#many-operations) - [Convolution](#convolution) - [Creating a 2-D Gaussian Kernel](#creating-a-2-d-gaussian-kernel) - [Convolving an Image with a Gaussian](#convolving-an-image-with-a-gaussian) - [Convolve/Filter an image using a Gaussian Kernel](#convolvefilter-an-image-using-a-gaussian-kernel) - [Modulating the Gaussian with a Sine Wave to create Gabor Kernel](#modulating-the-gaussian-with-a-sine-wave-to-create-gabor-kernel) - [Manipulating an image with this Gabor](#manipulating-an-image-with-this-gabor) - [Homework](#homework) - [Next Session](#next-session) - [Reading Material](#reading-material) <!-- /MarkdownTOC --> <a name="introduction"></a> # Introduction This course introduces you to deep learning: the state-of-the-art approach to building artificial intelligence algorithms. We cover the basic components of deep learning, what it means, how it works, and develop code necessary to build various algorithms such as deep convolutional networks, variational autoencoders, generative adversarial networks, and recurrent neural networks. A major focus of this course will be to not only understand how to build the necessary components of these algorithms, but also how to apply them for exploring creative applications. We'll see how to train a computer to recognize objects in an image and use this knowledge to drive new and interesting behaviors, from understanding the similarities and differences in large datasets and using them to self-organize, to understanding how to infinitely generate entirely new content or match the aesthetics or contents of another image. Deep learning offers enormous potential for creative applications and in this course we interrogate what's possible. 
Through practical applications and guided homework assignments, you'll be expected to create datasets, develop and train neural networks, explore your own media collections using existing state-of-the-art deep nets, synthesize new content from generative algorithms, and understand deep learning's potential for creating entirely new aesthetics and new ways of interacting with large amounts of data.​​ <a name="promo"></a> ## Promo Deep learning has emerged at the forefront of nearly every major computational breakthrough in the last 4 years. It is no wonder that it is already in many of the products we use today, from netflix or amazon's personalized recommendations; to the filters that block our spam; to ways that we interact with personal assistants like Apple's Siri or Microsoft Cortana, even to the very ways our personal health is monitored. And sure deep learning algorithms are capable of some amazing things. But it's not just science applications that are benefiting from this research. Artists too are starting to explore how Deep Learning can be used in their own practice. Photographers are starting to explore different ways of exploring visual media. Generative artists are writing algorithms to create entirely new aesthetics. Filmmakers are exploring virtual worlds ripe with potential for procedural content. In this course, we're going straight to the state of the art. And we're going to learn it all. We'll see how to make an algorithm paint an image, or hallucinate objects in a photograph. We'll see how to train a computer to recognize objects in an image and use this knowledge to drive new and interesting behaviors, from understanding the similarities and differences in large datasets to using them to self organize, to understanding how to infinitely generate entirely new content or match the aesthetics or contents of other images. We'll even see how to teach a computer to read and synthesize new phrases. But we won't just be using other peoples code to do all of this. We're going to develop everything ourselves using Tensorflow and I'm going to show you how to do it. This course isn't just for artists nor is it just for programmers. It's for people that want to learn more about how to apply deep learning with a hands on approach, straight into the python console, and learn what it all means through creative thinking and interaction. I'm Parag Mital, artist, researcher and Director of Machine Intelligence at Kadenze. For the last 10 years, I've been exploring creative uses of computational models making use of machine and deep learning, film datasets, eye-tracking, EEG, and fMRI recordings exploring applications such as generative film experiences, augmented reality hallucinations, and expressive control of large audiovisual corpora. But this course isn't just about me. It's about bringing all of you together. It's about bringing together different backgrounds, different practices, and sticking all of you in the same virtual room, giving you access to state of the art methods in deep learning, some really amazing stuff, and then letting you go wild on the Kadenze platform. We've been working very hard to build a platform for learning that rivals anything else out there for learning this stuff. You'll be able to share your content, upload videos, comment and exchange code and ideas, all led by the course I've developed for us. But before we get there we're going to have to cover a lot of groundwork. The basics that we'll use to develop state of the art algorithms in deep learning. 
And that's really so we can better interrogate what's possible, ask the bigger questions, and be able to explore just where all this is heading in more depth. With all of that in mind, Let's get started> Join me as we learn all about Creative Applications of Deep Learning with Tensorflow. <a name="session-overview"></a> ## Session Overview We're first going to talk about Deep Learning, what it is, and how it relates to other branches of learning. We'll then talk about the major components of Deep Learning, the importance of datasets, and the nature of representation, which is at the heart of deep learning. If you've never used Python before, we'll be jumping straight into using libraries like numpy, matplotlib, and scipy. Before starting this session, please check the resources section for a notebook introducing some fundamentals of python programming. When you feel comfortable with loading images from a directory, resizing, cropping, how to change an image datatype from unsigned int to float32, and what the range of each data type should be, then come back here and pick up where you left off. We'll then get our hands dirty with Tensorflow, Google's library for machine intelligence. We'll learn the basic components of creating a computational graph with Tensorflow, including how to convolve an image to detect interesting features at different scales. This groundwork will finally lead us towards automatically learning our handcrafted features/algorithms. <a name="learning-from-data"></a> # Learning From Data <a name="deep-learning-vs-machine-learning"></a> ## Deep Learning vs. Machine Learning So what is this word I keep using, Deep Learning. And how is it different to Machine Learning? Well Deep Learning is a *type* of Machine Learning algorithm that uses Neural Networks to learn. The type of learning is "Deep" because it is composed of many layers of Neural Networks. In this course we're really going to focus on supervised and unsupervised Deep Learning. But there are many other incredibly valuable branches of Machine Learning such as Reinforcement Learning, Dictionary Learning, Probabilistic Graphical Models and Bayesian Methods (Bishop), or Genetic and Evolutionary Algorithms. And any of these branches could certainly even be combined with each other or with Deep Networks as well. We won't really be able to get into these other branches of learning in this course. Instead, we'll focus more on building "networks", short for neural networks, and how they can do some really amazing things. Before we can get into all that, we're going to need to understand a bit more about data and its importance in deep learning. <a name="invariances"></a> ## Invariances Deep Learning requires data. A lot of it. It's really one of the major reasons as to why Deep Learning has been so successful. Having many examples of the thing we are trying to learn is the first thing you'll need before even thinking about Deep Learning. Often, it is the biggest blocker to learning about something in the world. Even as a child, we need a lot of experience with something before we begin to understand it. I find I spend most of my time just finding the right data for a network to learn. Getting it from various sources, making sure it all looks right and is labeled. That is a lot of work. The rest of it is easy as we'll see by the end of this course. Let's say we would like build a network that is capable of looking at an image and saying what object is in the image. 
There are so many possible ways that an object could be manifested in an image. It's rare to ever see just a single object in isolation. In order to teach a computer about an object, we would have to be able to give it an image of an object in every possible way that it could exist. We generally call these ways of existing "invariances". That just means we are trying not to vary based on some factor. We are invariant to it. For instance, an object could appear to one side of an image, or another. We call that translation invariance. Or it could be from one angle or another. That's called rotation invariance. Or it could be closer to the camera, or farther. and That would be scale invariance. There are plenty of other types of invariances, such as perspective or brightness or exposure to give a few more examples for photographic images. <a name="scope-of-learning"></a> ## Scope of Learning With Deep Learning, you will always need a dataset that will teach the algorithm about the world. But you aren't really teaching it everything. You are only teaching it what is in your dataset! That is a very important distinction. If I show my algorithm only faces of people which are always placed in the center of an image, it will not be able to understand anything about faces that are not in the center of the image! Well at least that's mostly true. That's not to say that a network is incapable of transfering what it has learned to learn new concepts more easily. Or to learn things that might be necessary for it to learn other representations. For instance, a network that has been trained to learn about birds, probably knows a good bit about trees, branches, and other bird-like hangouts, depending on the dataset. But, in general, we are limited to learning what our dataset has access to. So if you're thinking about creating a dataset, you're going to have to think about what it is that you want to teach your network. What sort of images will it see? What representations do you think your network could learn given the data you've shown it? One of the major contributions to the success of Deep Learning algorithms is the amount of data out there. Datasets have grown from orders of hundreds to thousands to many millions. The more data you have, the more capable your network will be at determining whatever its objective is. <a name="existing-datasets"></a> ## Existing datasets With that in mind, let's try to find a dataset that we can work with. There are a ton of datasets out there that current machine learning researchers use. For instance if I do a quick Google search for Deep Learning Datasets, i can see for instance a link on deeplearning.net, listing a few interesting ones e.g. http://deeplearning.net/datasets/, including MNIST, CalTech, CelebNet, LFW, CIFAR, MS Coco, Illustration2Vec, and there are ton more. And these are primarily image based. But if you are interested in finding more, just do a quick search or drop a quick message on the forums if you're looking for something in particular. * MNIST * CalTech * CelebNet * ImageNet: http://www.image-net.org/ * LFW * CIFAR10 * CIFAR100 * MS Coco: http://mscoco.org/home/ * WLFDB: http://wlfdb.stevenhoi.com/ * Flickr 8k: http://nlp.cs.illinois.edu/HockenmaierGroup/Framing_Image_Description/KCCA.html * Flickr 30k <a name="preprocessing-data"></a> # Preprocessing Data In this section, we're going to learn a bit about working with an image based dataset. 
We'll see how image dimensions are formatted as a single image and how they're represented as a collection using a 4-d array. We'll then look at how we can perform dataset normalization. If you're comfortable with all of this, please feel free to skip to the next video. We're first going to load some libraries that we'll be making use of. ``` %matplotlib inline import numpy as np import matplotlib.pyplot as plt plt.style.use('ggplot') ``` I'll be using a popular image dataset for faces called the CelebFaces dataset. I've provided some helper functions which you can find on the resources page, which will just help us with manipulating images and loading this dataset. ``` from libs import utils # utils.<tab> files = utils.get_celeb_files() ``` Let's get the 50th image in this list of files, and then read the file at that location as an image, setting the result to a variable, `img`, and inspect a bit further what's going on: ``` img = plt.imread(files[50]) # img.<tab> print(img) ``` When I print out this image, I can see all the numbers that represent this image. We can use the function `imshow` to see this: ``` # If nothing is drawn and you are using notebook, try uncommenting the next line: #%matplotlib inline plt.imshow(img) ``` <a name="understanding-image-shapes"></a> ## Understanding Image Shapes Let's break this data down a bit more. We can see the dimensions of the data using the `shape` accessor: ``` img.shape # (218, 178, 3) ``` This means that the image has 218 rows, 178 columns, and 3 color channels corresponding to the Red, Green, and Blue channels of the image, or RGB. Let's try looking at just one of the color channels. ``` plt.imshow(img[:, :, 0], cmap='gray') plt.imshow(img[:, :, 1], cmap='gray') plt.imshow(img[:, :, 2], cmap='gray') ``` We use the special colon operator to say take every value in this dimension. This is saying, give me every row, every column, and the 0th dimension of the color channels. What we're seeing is the amount of Red, Green, or Blue contributing to the overall color image. Let's use another helper function which will load every image file in the celeb dataset rather than just give us the filenames like before. By default, this will just return the first 1000 images because loading the entire dataset is a bit cumbersome. In one of the later sessions, I'll show you how tensorflow can handle loading images using a pipeline so we can load this same dataset. For now, let's stick with this: ``` imgs = utils.get_celeb_imgs() ``` We now have a list containing our images. Each index of the `imgs` list is another image which we can access using the square brackets: ``` plt.imshow(imgs[0]) ``` <a name="the-batch-dimension"></a> ## The Batch Dimension Remember that an image has a shape describing the height, width, channels: ``` imgs[0].shape ``` It turns out we'll often use another convention for storing many images in an array using a new dimension called the batch dimension. The resulting image shape will be exactly the same, except we'll stick on a new dimension on the beginning... giving us number of images x the height x the width x the number of color channels. N x H x W x C A Color image should have 3 color channels, RGB. We can combine all of our images to have these 4 dimensions by telling numpy to give us an array of all the images. ``` data = np.array(imgs) data.shape ``` This will only work if every image in our list is exactly the same size. So if you have a wide image, short image, long image, forget about it. 
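One quick sanity check before batching (a sketch, not part of the original notebook) is to confirm that every loaded image really does have the same shape:

```
# Collect the distinct shapes in the list of images loaded above.
shapes = set(im.shape for im in imgs)
print(shapes)
# With more than one shape, np.array(imgs) cannot produce a clean N x H x W x C array.
assert len(shapes) == 1, "Images differ in shape; crop/resize them first."
```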
You'll need them all to be the same size. If you are unsure of how to get all of your images into the same size, then please please refer to the online resources for the notebook I've provided which shows you exactly how to take a bunch of images of different sizes, and crop and resize them the best we can to make them all the same size. <a name="meandeviation-of-images"></a> ## Mean/Deviation of Images Now that we have our data in a single numpy variable, we can do alot of cool stuff. Let's look at the mean of the batch channel: ``` mean_img = np.mean(data, axis=0) plt.imshow(mean_img.astype(np.uint8)) ``` This is the first step towards building our robot overlords. We've reduced down our entire dataset to a single representation which describes what most of our dataset looks like. There is one other very useful statistic which we can look at very easily: ``` std_img = np.std(data, axis=0) plt.imshow(std_img.astype(np.uint8)) ``` So this is incredibly cool. We've just shown where changes are likely to be in our dataset of images. Or put another way, we're showing where and how much variance there is in our previous mean image representation. We're looking at this per color channel. So we'll see variance for each color channel represented separately, and then combined as a color image. We can try to look at the average variance over all color channels by taking their mean: ``` plt.imshow(np.mean(std_img, axis=2).astype(np.uint8)) ``` This is showing us on average, how every color channel will vary as a heatmap. The more red, the more likely that our mean image is not the best representation. The more blue, the less likely that our mean image is far off from any other possible image. <a name="dataset-preprocessing"></a> ## Dataset Preprocessing Think back to when I described what we're trying to accomplish when we build a model for machine learning? We're trying to build a model that understands invariances. We need our model to be able to express *all* of the things that can possibly change in our data. Well, this is the first step in understanding what can change. If we are looking to use deep learning to learn something complex about our data, it will often start by modeling both the mean and standard deviation of our dataset. We can help speed things up by "preprocessing" our dataset by removing the mean and standard deviation. What does this mean? Subtracting the mean, and dividing by the standard deviation. Another word for that is "normalization". <a name="histograms"></a> ## Histograms Let's have a look at our dataset another way to see why this might be a useful thing to do. We're first going to convert our `batch` x `height` x `width` x `channels` array into a 1 dimensional array. Instead of having 4 dimensions, we'll now just have 1 dimension of every pixel value stretched out in a long vector, or 1 dimensional array. ``` flattened = data.ravel() print(data[:1]) print(flattened[:10]) ``` We first convert our N x H x W x C dimensional array into a 1 dimensional array. The values of this array will be based on the last dimensions order. So we'll have: [<font color='red'>251</font>, <font color='green'>238</font>, <font color='blue'>205</font>, <font color='red'>251</font>, <font color='green'>238</font>, <font color='blue'>206</font>, <font color='red'>253</font>, <font color='green'>240</font>, <font color='blue'>207</font>, ...] We can visualize what the "distribution", or range and frequency of possible values are. This is a very useful thing to know. 
It tells us whether our data is predictable or not. ``` plt.hist(flattened.ravel(), 255) ``` The last line is saying give me a histogram of every value in the vector, and use 255 bins. Each bin is grouping a range of values. The bars of each bin describe the frequency, or how many times anything within that range of values appears.In other words, it is telling us if there is something that seems to happen more than anything else. If there is, it is likely that a neural network will take advantage of that. <a name="histogram-equalization"></a> ## Histogram Equalization The mean of our dataset looks like this: ``` plt.hist(mean_img.ravel(), 255) ``` When we subtract an image by our mean image, we remove all of this information from it. And that means that the rest of the information is really what is important for describing what is unique about it. Let's try and compare the histogram before and after "normalizing our data": ``` bins = 20 fig, axs = plt.subplots(1, 3, figsize=(12, 6), sharey=True, sharex=True) axs[0].hist((data[0]).ravel(), bins) axs[0].set_title('img distribution') axs[1].hist((mean_img).ravel(), bins) axs[1].set_title('mean distribution') axs[2].hist((data[0] - mean_img).ravel(), bins) axs[2].set_title('(img - mean) distribution') ``` What we can see from the histograms is the original image's distribution of values from 0 - 255. The mean image's data distribution is mostly centered around the value 100. When we look at the difference of the original image and the mean image as a histogram, we can see that the distribution is now centered around 0. What we are seeing is the distribution of values that were above the mean image's intensity, and which were below it. Let's take it one step further and complete the normalization by dividing by the standard deviation of our dataset: ``` fig, axs = plt.subplots(1, 3, figsize=(12, 6), sharey=True, sharex=True) axs[0].hist((data[0] - mean_img).ravel(), bins) axs[0].set_title('(img - mean) distribution') axs[1].hist((std_img).ravel(), bins) axs[1].set_title('std deviation distribution') axs[2].hist(((data[0] - mean_img) / std_img).ravel(), bins) axs[2].set_title('((img - mean) / std_dev) distribution') ``` Now our data has been squished into a peak! We'll have to look at it on a different scale to see what's going on: ``` axs[2].set_xlim([-150, 150]) axs[2].set_xlim([-100, 100]) axs[2].set_xlim([-50, 50]) axs[2].set_xlim([-10, 10]) axs[2].set_xlim([-5, 5]) ``` What we can see is that the data is in the range of -3 to 3, with the bulk of the data centered around -1 to 1. This is the effect of normalizing our data: most of the data will be around 0, where some deviations of it will follow between -3 to 3. If our data does not end up looking like this, then we should either (1): get much more data to calculate our mean/std deviation, or (2): either try another method of normalization, such as scaling the values between 0 to 1, or -1 to 1, or possibly not bother with normalization at all. There are other options that one could explore, including different types of normalization such as local contrast normalization for images or PCA based normalization but we won't have time to get into those in this course. <a name="tensorflow-basics"></a> # Tensorflow Basics Let's now switch gears and start working with Google's Library for Numerical Computation, TensorFlow. This library can do most of the things we've done so far. However, it has a very different approach for doing so. 
And it can do a whole lot more cool stuff which we'll eventually get into. The major difference to take away from the remainder of this session is that instead of computing things immediately, we first define things that we want to compute later using what's called a `Graph`. Everything in Tensorflow takes place in a computational graph and running and evaluating anything in the graph requires a `Session`. Let's take a look at how these both work and then we'll get into the benefits of why this is useful: <a name="variables"></a> ## Variables We're first going to import the tensorflow library: ``` import tensorflow as tf ``` Let's take a look at how we might create a range of numbers. Using numpy, we could for instance use the linear space function: ``` x = np.linspace(-3.0, 3.0, 100) # Immediately, the result is given to us. An array of 100 numbers equally spaced from -3.0 to 3.0. print(x) # We know from numpy arrays that they have a `shape`, in this case a 1-dimensional array of 100 values print(x.shape) # and a `dtype`, in this case float64, or 64 bit floating point values. print(x.dtype) ``` <a name="tensors"></a> ## Tensors In tensorflow, we could try to do the same thing using their linear space function: ``` x = tf.linspace(-3.0, 3.0, 100) print(x) ``` Instead of a `numpy.array`, we are returned a `tf.Tensor`. The name of it is "LinSpace:0". Wherever we see this colon 0, that just means the output of. So the name of this Tensor is saying, the output of LinSpace. Think of `tf.Tensor`s the same way as you would the `numpy.array`. It is described by its `shape`, in this case, only 1 dimension of 100 values. And it has a `dtype`, in this case, `float32`. But *unlike* the `numpy.array`, there are no values printed here! That's because it actually hasn't computed its values yet. Instead, it just refers to the output of a `tf.Operation` which has been already been added to Tensorflow's default computational graph. The result of that operation is the tensor that we are returned. <a name="graphs"></a> ## Graphs Let's try and inspect the underlying graph. We can request the "default" graph where all of our operations have been added: ``` g = tf.get_default_graph() ``` <a name="operations"></a> ## Operations And from this graph, we can get a list of all the operations that have been added, and print out their names: ``` [op.name for op in g.get_operations()] ``` So Tensorflow has named each of our operations to generally reflect what they are doing. There are a few parameters that are all prefixed by LinSpace, and then the last one which is the operation which takes all of the parameters and creates an output for the linspace. <a name="tensor"></a> ## Tensor We can request the output of any operation, which is a tensor, by asking the graph for the tensor's name: ``` g.get_tensor_by_name('LinSpace' + ':0') ``` What I've done is asked for the `tf.Tensor` that comes from the operation "LinSpace". So remember, the result of a `tf.Operation` is a `tf.Tensor`. Remember that was the same name as the tensor `x` we created before. <a name="sessions"></a> ## Sessions In order to actually compute anything in tensorflow, we need to create a `tf.Session`. The session is responsible for evaluating the `tf.Graph`. Let's see how this works: ``` # We're first going to create a session: sess = tf.Session() # Now we tell our session to compute anything we've created in the tensorflow graph. 
computed_x = sess.run(x) print(computed_x) # Alternatively, we could tell the previous Tensor to evaluate itself using this session: computed_x = x.eval(session=sess) print(computed_x) # We can close the session after we're done like so: sess.close() ``` We could also explicitly tell the session which graph we want to manage: ``` sess = tf.Session(graph=g) sess.close() ``` By default, it grabs the default graph. But we could have created a new graph like so: ``` g2 = tf.Graph() ``` And then used this graph only in our session. To simplify things, since we'll be working in iPython's interactive console, we can create an `tf.InteractiveSession`: ``` sess = tf.InteractiveSession() x.eval() ``` Now we didn't have to explicitly tell the `eval` function about our session. We'll leave this session open for the rest of the lecture. <a name="tensor-shapes"></a> ## Tensor Shapes ``` # We can find out the shape of a tensor like so: print(x.get_shape()) # %% Or in a more friendly format print(x.get_shape().as_list()) ``` <a name="many-operations"></a> ## Many Operations Lets try a set of operations now. We'll try to create a Gaussian curve. This should resemble a normalized histogram where most of the data is centered around the mean of 0. It's also sometimes refered to by the bell curve or normal curve. ``` # The 1 dimensional gaussian takes two parameters, the mean value, and the standard deviation, which is commonly denoted by the name sigma. mean = 0.0 sigma = 1.0 # Don't worry about trying to learn or remember this formula. I always have to refer to textbooks or check online for the exact formula. z = (tf.exp(tf.neg(tf.pow(x - mean, 2.0) / (2.0 * tf.pow(sigma, 2.0)))) * (1.0 / (sigma * tf.sqrt(2.0 * 3.1415)))) ``` Just like before, amazingly, we haven't actually computed anything. We *have just added a bunch of operations to Tensorflow's graph. Whenever we want the value or output of this operation, we'll have to explicitly ask for the part of the graph we're interested in before we can see its result. Since we've created an interactive session, we should just be able to say the name of the Tensor that we're interested in, and call the `eval` function: ``` res = z.eval() plt.plot(res) # if nothing is drawn, and you are using ipython notebook, uncomment the next two lines: #%matplotlib inline #plt.plot(res) ``` <a name="convolution"></a> # Convolution <a name="creating-a-2-d-gaussian-kernel"></a> ## Creating a 2-D Gaussian Kernel Let's try creating a 2-dimensional Gaussian. This can be done by multiplying a vector by its transpose. If you aren't familiar with matrix math, I'll review a few important concepts. This is about 98% of what neural networks do so if you're unfamiliar with this, then please stick with me through this and it'll be smooth sailing. First, to multiply two matrices, their inner dimensions must agree, and the resulting matrix will have the shape of the outer dimensions. So let's say we have two matrices, X and Y. In order for us to multiply them, X's columns must match Y's rows. I try to remember it like so: <pre> (X_rows, X_cols) x (Y_rows, Y_cols) | | | | | |___________| | | ^ | | inner dimensions | | must match | | | |__________________________| ^ resulting dimensions of matrix multiplication </pre> But our matrix is actually a vector, or a 1 dimensional matrix. That means its dimensions are N x 1. 
So to multiply them, we'd have: <pre> (N, 1) x (1, N) | | | | | |___________| | | ^ | | inner dimensions | | must match | | | |__________________________| ^ resulting dimensions of matrix multiplication </pre> ``` # Let's store the number of values in our Gaussian curve. ksize = z.get_shape().as_list()[0] # Let's multiply the two to get a 2d gaussian z_2d = tf.matmul(tf.reshape(z, [ksize, 1]), tf.reshape(z, [1, ksize])) # Execute the graph plt.imshow(z_2d.eval()) ``` <a name="convolving-an-image-with-a-gaussian"></a> ## Convolving an Image with a Gaussian A very common operation that we'll come across with Deep Learning is convolution. We're going to explore what this means using our new gaussian kernel that we've just created. For now, just think of it a way of filtering information. We're going to effectively filter our image using this Gaussian function, as if the gaussian function is the lens through which we'll see our image data. What it will do is at every location we tell it to filter, it will average the image values around it based on what the kernel's values are. The Gaussian's kernel is basically saying, take a lot the center, a then decesasingly less as you go farther away from the center. The effect of convolving the image with this type of kernel is that the entire image will be blurred. If you would like an interactive exploratin of convolution, this website is great: http://setosa.io/ev/image-kernels/ ``` # Let's first load an image. We're going to need a grayscale image to begin with. skimage has some images we can play with. If you do not have the skimage module, you can load your own image, or get skimage by pip installing "scikit-image". from skimage import data img = data.camera().astype(np.float32) plt.imshow(img, cmap='gray') print(img.shape) ``` Notice our img shape is 2-dimensional. For image convolution in Tensorflow, we need our images to be 4 dimensional. Remember that when we load many iamges and combine them in a single numpy array, the resulting shape has the number of images first. N x H x W x C In order to perform 2d convolution with tensorflow, we'll need the same dimensions for our image. With just 1 grayscale image, this means the shape will be: 1 x H x W x 1 ``` # We could use the numpy reshape function to reshape our numpy array img_4d = img.reshape([1, img.shape[0], img.shape[1], 1]) print(img_4d.shape) # but since we'll be using tensorflow, we can use the tensorflow reshape function: img_4d = tf.reshape(img, [1, img.shape[0], img.shape[1], 1]) print(img_4d) ``` Instead of getting a numpy array back, we get a tensorflow tensor. This means we can't access the `shape` parameter like we did with the numpy array. But instead, we can use `get_shape()`, and `get_shape().as_list()`: ``` print(img_4d.get_shape()) print(img_4d.get_shape().as_list()) ``` The H x W image is now part of a 4 dimensional array, where the other dimensions of N and C are 1. So there is only 1 image and only 1 channel. We'll also have to reshape our Gaussian Kernel to be 4-dimensional as well. The dimensions for kernels are slightly different! Remember that the image is: Number of Images x Image Height x Image Width x Number of Channels we have: Kernel Height x Kernel Width x Number of Input Channels x Number of Output Channels Our Kernel already has a height and width of `ksize` so we'll stick with that for now. The number of input channels should match the number of channels on the image we want to convolve. 
And for now, we just keep the same number of output channels as the input channels, but we'll later see how this comes into play. ``` # Reshape the 2d kernel to tensorflow's required 4d format: H x W x I x O z_4d = tf.reshape(z_2d, [ksize, ksize, 1, 1]) print(z_4d.get_shape().as_list()) ``` <a name="convolvefilter-an-image-using-a-gaussian-kernel"></a> ## Convolve/Filter an image using a Gaussian Kernel We can now use our previous Gaussian Kernel to convolve our image: ``` convolved = tf.nn.conv2d(img_4d, z_4d, strides=[1, 1, 1, 1], padding='SAME') res = convolved.eval() print(res.shape) ``` There are two new parameters here: `strides`, and `padding`. Strides says how to move our kernel across the image. Basically, we'll only ever use it for one of two sets of parameters: [1, 1, 1, 1], which means, we are going to convolve every single image, every pixel, and every color channel by whatever the kernel is. and the second option: [1, 2, 2, 1], which means, we are going to convolve every single image, but every other pixel, in every single color channel. Padding says what to do at the borders. If we say "SAME", that means we want the same dimensions going in as we do going out. In order to do this, zeros must be padded around the image. If we say "VALID", that means no padding is used, and the image dimensions will actually change. ``` # Matplotlib cannot handle plotting 4D images! We'll have to convert this back to the original shape. There are a few ways we could do this. We could plot by "squeezing" the singleton dimensions. plt.imshow(np.squeeze(res), cmap='gray') # Or we could specify the exact dimensions we want to visualize: plt.imshow(res[0, :, :, 0], cmap='gray') ``` <a name="modulating-the-gaussian-with-a-sine-wave-to-create-gabor-kernel"></a> ## Modulating the Gaussian with a Sine Wave to create Gabor Kernel We've now seen how to use tensorflow to create a set of operations which create a 2-dimensional Gaussian kernel, and how to use that kernel to filter or convolve another image. Let's create another interesting convolution kernel called a Gabor. This is a lot like the Gaussian kernel, except we use a sine wave to modulate that. <graphic: draw 1d gaussian wave, 1d sine, show modulation as multiplication and resulting gabor.> We first use linspace to get a set of values the same range as our gaussian, which should be from -3 standard deviations to +3 standard deviations. ``` xs = tf.linspace(-3.0, 3.0, ksize) ``` We then calculate the sine of these values, which should give us a nice wave ``` ys = tf.sin(xs) plt.figure() plt.plot(ys.eval()) ``` And for multiplication, we'll need to convert this 1-dimensional vector to a matrix: N x 1 ``` ys = tf.reshape(ys, [ksize, 1]) ``` We then repeat this wave across the matrix by using a multiplication of ones: ``` ones = tf.ones((1, ksize)) wave = tf.matmul(ys, ones) plt.imshow(wave.eval(), cmap='gray') ``` We can directly multiply our old Gaussian kernel by this wave and get a gabor kernel: ``` gabor = tf.mul(wave, z_2d) plt.imshow(gabor.eval(), cmap='gray') ``` <a name="manipulating-an-image-with-this-gabor"></a> ## Manipulating an image with this Gabor We've already gone through the work of convolving an image. The only thing that has changed is the kernel that we want to convolve with. We could have made life easier by specifying in our graph which elements we wanted to be specified later. 
Tensorflow calls these "placeholders", meaning, we're not sure what these are yet, but we know they'll fit in the graph like so, generally the input and output of the network. Let's rewrite our convolution operation using a placeholder for the image and the kernel and then see how the same operation could have been done. We're going to set the image dimensions to `None` x `None`. This is something special for placeholders which tells tensorflow "let this dimension be any possible value". 1, 5, 100, 1000, it doesn't matter. ``` # This is a placeholder which will become part of the tensorflow graph, but # which we have to later explicitly define whenever we run/evaluate the graph. # Pretty much everything you do in tensorflow can have a name. If we don't # specify the name, tensorflow will give a default one, like "Placeholder_0". # Let's use a more useful name to help us understand what's happening. img = tf.placeholder(tf.float32, shape=[None, None], name='img') # We'll reshape the 2d image to a 3-d tensor just like before: # Except now we'll make use of another tensorflow function, expand dims, which adds a singleton dimension at the axis we specify. # We use it to reshape our H x W image to include a channel dimension of 1 # our new dimensions will end up being: H x W x 1 img_3d = tf.expand_dims(img, 2) dims = img_3d.get_shape() print(dims) # And again to get: 1 x H x W x 1 img_4d = tf.expand_dims(img_3d, 0) print(img_4d.get_shape().as_list()) # Let's create another set of placeholders for our Gabor's parameters: mean = tf.placeholder(tf.float32, name='mean') sigma = tf.placeholder(tf.float32, name='sigma') ksize = tf.placeholder(tf.int32, name='ksize') # Then finally redo the entire set of operations we've done to convolve our # image, except with our placeholders x = tf.linspace(-3.0, 3.0, ksize) z = (tf.exp(tf.neg(tf.pow(x - mean, 2.0) / (2.0 * tf.pow(sigma, 2.0)))) * (1.0 / (sigma * tf.sqrt(2.0 * 3.1415)))) z_2d = tf.matmul( tf.reshape(z, tf.pack([ksize, 1])), tf.reshape(z, tf.pack([1, ksize]))) ys = tf.sin(x) ys = tf.reshape(ys, tf.pack([ksize, 1])) ones = tf.ones(tf.pack([1, ksize])) wave = tf.matmul(ys, ones) gabor = tf.mul(wave, z_2d) gabor_4d = tf.reshape(gabor, tf.pack([ksize, ksize, 1, 1])) # And finally, convolve the two: convolved = tf.nn.conv2d(img_4d, gabor_4d, strides=[1, 1, 1, 1], padding='SAME', name='convolved') convolved_img = convolved[0, :, :, 0] ``` What we've done is create an entire graph from our placeholders which is capable of convolving an image with a gabor kernel. In order to compute it, we have to specify all of the placeholders required for its computation. If we try to evaluate it without specifying placeholders beforehand, we will get an error `InvalidArgumentError: You must feed a value for placeholder tensor 'img' with dtype float and shape [512,512]`: ``` convolved_img.eval() ``` It's saying that we didn't specify our placeholder for `img`. In order to "feed a value", we use the `feed_dict` parameter like so: ``` convolved_img.eval(feed_dict={img: data.camera()}) ``` But that's not the only placeholder in our graph! We also have placeholders for `mean`, `sigma`, and `ksize`. Once we specify all of them, we'll have our result: ``` res = convolved_img.eval(feed_dict={ img: data.camera(), mean:0.0, sigma:1.0, ksize:100}) plt.imshow(res, cmap='gray') ``` Now, instead of having to rewrite the entire graph, we can just specify the different placeholders. 
``` res = convolved_img.eval(feed_dict={ img: data.camera(), mean: 0.0, sigma: 0.5, ksize: 32 }) plt.imshow(res, cmap='gray') ``` <a name="homework"></a> # Homework For your first assignment, we'll work on creating our own dataset. You'll need to find at least 100 images and work through the [notebook](session-1.ipynb). <a name="next-session"></a> # Next Session In the next session, we'll create our first Neural Network and see how it can be used to paint an image. <a name="reading-material"></a> # Reading Material Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., … Zheng, X. (2015). TensorFlow : Large-Scale Machine Learning on Heterogeneous Distributed Systems. https://arxiv.org/abs/1603.04467 Yoshua Bengio, Aaron Courville, Pascal Vincent. Representation Learning: A Review and New Perspectives. 24 Jun 2012. https://arxiv.org/abs/1206.5538 J. Schmidhuber. Deep Learning in Neural Networks: An Overview. Neural Networks, 61, p 85-117, 2015. https://arxiv.org/abs/1404.7828 LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. “Deep learning.” Nature 521, no. 7553 (2015): 436-444. Ian Goodfellow Yoshua Bengio and Aaron Courville. Deep Learning. 2016. http://www.deeplearningbook.org/
*Row metadata:* true, code, 0.695131, null, null, null, null
# EXTRA STUFF: Day 8 First, import our usual things: ``` import ipywidgets import pandas as pd import numpy as np import matplotlib.pyplot as plt import bqplot.pyplot as bplt # also: import bqplot ``` Load data: ``` planets = pd.read_csv('https://jnaiman.github.io/csci-p-14110_su2020/lesson08/planets_2020.06.17_14.04.11.csv', sep=",", comment="#") ``` Let's take a quick look: ``` planets ``` ## Heatmap dashboard Let's make a 2D histogram showing how the NASA planets are distributed across these 2 parameters. First, looking at the plots of the individual distributions, we can make some guesses for bins along each parameter: ``` ecc = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0] ``` We can also do this with NumPy: ``` ecc = np.arange(0.0, 1.1, step=0.1) # start, stop, step => note step+stop there! ecc ``` And semi-major axis: ``` sa = np.arange(0.0, 50+5, step=5) sa ``` Let's also use NumPy to make a 2D histogram with these bins: ``` myHist, xedges, yedges = np.histogram2d(planets['pl_orbeccen'], planets['pl_orbsmax'], bins=[ecc,sa]) yedges myHist ``` We see that we mostly have entries between 0-25 AU, so we can update our binning: ``` sa = np.arange(0.0, 24+4, step=4) myHist, xedges, yedges = np.histogram2d(planets['pl_orbeccen'], planets['pl_orbsmax'], bins=[ecc,sa]) yedges myHist ``` xedges & yedges give the bin *edges* but we want the centers: ``` xcenter = (xedges[:-1] + xedges[1:]) / 2 ycenter = (yedges[:-1] + yedges[1:]) / 2 myLabel = ipywidgets.Label()# make label ``` Function to print out data values: ``` def get_data_value(change): # redefine this function to now use myHist, not data i,j = change['owner'].selected[0] v = myHist[i,j] # grab data value myLabel.value = 'Value of data = ' + str(v) ``` Put all together: ``` fig = bplt.figure(padding_y=0.0) # set up a figure object # use bqplot's plt interface to plot: heat_map = bplt.gridheatmap(myHist, row=xcenter, column=ycenter, interactions={'click':'select'}) # hook heat_maps selected value to the label heat_map.observe(get_data_value, 'selected') # show both the fig and label in a vertical box ipywidgets.VBox([myLabel,fig]) ``` We can change the color scale as well: ``` fig = bplt.figure(padding_y=0.0) # set up a figure object # add in color: col_sc = bqplot.ColorScale(scheme='Reds') # use bqplot's plt interface to plot: heat_map = bplt.gridheatmap(myHist, row=xcenter, column=ycenter, interactions={'click':'select'}, scales={'color':col_sc}) # hook heat_maps selected value to the label heat_map.observe(get_data_value, 'selected') # show both the fig and label in a vertical box ipywidgets.VBox([myLabel,fig]) ``` However, doing things like adding in x/y labels is somewhat familiar, but we call fig.axes instead of ax[#] to set axis labels. You can check out what fig.axes[0], fig.axes[1], fig.axes[2] is by: ``` fig.axes ``` So, it looks like the 0th axes is color, the 1th one is the horizontal axis and the 2th is the vertical axes. 
We can change x/y labels as follows: ``` fig = bplt.figure(padding_y=0.0) # set up a figure object bplt.scales(scales={'color':bqplot.ColorScale(scheme='Reds')}) # use bqplot's plt interface to plot: heat_map = bplt.gridheatmap(myHist, row=xcenter, column=ycenter, interactions={'click':'select'}) # hook heat_maps selected value to the label heat_map.observe(get_data_value, 'selected') # change labels fig.axes[0].side = 'top' # so it doesn't overlap with scale fig.axes[1].label = 'semi-major axes in AU' # xaxes label fig.axes[2].label = 'eccentricity' # yaxes label # show both the fig and label in a vertical box ipywidgets.VBox([myLabel,fig]) ``` Now let's generate our line plot -- this will use the $r(\theta)$ equation to plot orbits: ``` fig_lines = bplt.figure(padding_y=0.0) # set up a figure object # use bqplot's plt interface to plot: lines = bplt.plot([],[]) # empty to start # change labels fig_lines.axes[0].label = 'x in AU' # xaxes label fig_lines.axes[1].label = 'y in AU' # yaxes label fig_lines # empty plot of x/y ``` Now, lets first put all of our plots in the alighment we want, keeping in mind that the x/y plot of the analytical trajectory won't be updated when we click anything yet: ``` ipywidgets.VBox([myLabel,ipywidgets.HBox([fig,fig_lines])]) ``` Oh but it looks squished, lets try messing with the layout: ``` fig.layout.min_width='500px' fig_lines.layout.min_width='500px' ipywidgets.VBox([myLabel,ipywidgets.HBox([fig,fig_lines])]) #figOut = ipywidgets.VBox([myLabel,ipywidgets.HBox([fig,fig_lines])]) #figOut.layout.min_width='1000px' #figOut ``` To make this interactive first we need to update our `get_data_value` function to *also* update our lines plot when the heatmap plot is selected: ``` theta = np.arange(0, 2*np.pi, 0.001) # theta array def get_data_value(change): # redefine this function to now use myHist, not data # 1. Update the label i,j = change['owner'].selected[0] v = myHist[i,j] # grab data value myLabel.value = 'Value of data = ' + str(v) # 2. Update the x/y values in our lines plot a = ycenter[j] # semi major axis based on bins in heatmap ecc = xcenter[i] # eccentricity for bins on heatmap r = a*(1-ecc**2)/(1+ecc*np.cos(theta)) # calculate r(theta) x = r*np.cos(theta) # translate into x/y y = r*np.sin(theta) lines.x = x lines.y = y ``` Finally before we plot, we have to re-hook this back into our heatmap figure: ``` heat_map.observe(get_data_value, 'selected') ``` Now use the orientation of our plots we had above to re-plot with new interactivity: ``` ipywidgets.VBox([myLabel,ipywidgets.HBox([fig,fig_lines])]) ``` If we want to keep the x/y range static when we plot, we can re-do our trajectory plot with: ``` fig_lines = bplt.figure(padding_y=0.0) # set up a figure object # use bqplot's plt interface to plot: lines = bplt.plot([],[]) # empty to start # set x/y lim in the bqplot way bplt.set_lim(-30,30,'x') bplt.set_lim(-30,30, 'y') # change labels fig_lines.axes[0].label = 'x in AU' # xaxes label fig_lines.axes[1].label = 'y in AU' # yaxes label # to be sure: fig_lines.layout.min_width='500px' fig_lines # empty plot of x/y ``` And finally put it all together: ``` ipywidgets.VBox([myLabel,ipywidgets.HBox([fig,fig_lines])]) ```
# 5. Neural Networks ``` %matplotlib inline import numpy as np import matplotlib.pyplot as plt from sklearn.datasets import fetch_openml, make_moons from sklearn.model_selection import train_test_split from sklearn.preprocessing import LabelBinarizer from sklearn.metrics import accuracy_score from prml import nn np.random.seed(1234) ``` ## 5.1 Feed-forward Network Functions ``` class RegressionNetwork(nn.Network): def __init__(self, n_input, n_hidden, n_output): super().__init__() with self.set_parameter(): self.w1 = nn.random.truncnormal(-2, 2, 1, (n_input, n_hidden)) self.b1 = nn.zeros(n_hidden) self.w2 = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_output)) self.b2 = nn.zeros(n_output) def __call__(self, x): h = nn.tanh(x @ self.w1 + self.b1) return h @ self.w2 + self.b2 def create_toy_data(func, n=50): x = np.linspace(-1, 1, n)[:, None] return x, func(x) def sinusoidal(x): return np.sin(np.pi * x) def heaviside(x): return 0.5 * (np.sign(x) + 1) func_list = [np.square, sinusoidal, np.abs, heaviside] plt.figure(figsize=(20, 10)) x = np.linspace(-1, 1, 1000)[:, None] for i, func, n_iter in zip(range(1, 5), func_list, [1000, 10000, 10000, 10000]): plt.subplot(2, 2, i) x_train, y_train = create_toy_data(func) model = RegressionNetwork(1, 3, 1) optimizer = nn.optimizer.Adam(model.parameter, 0.1) for _ in range(n_iter): model.clear() loss = nn.square(y_train - model(x_train)).sum() optimizer.minimize(loss) y = model(x).value plt.scatter(x_train, y_train, s=10) plt.plot(x, y, color="r") plt.show() ``` ## 5.3 Error Backpropagation ``` class ClassificationNetwork(nn.Network): def __init__(self, n_input, n_hidden, n_output): super().__init__() with self.set_parameter(): self.w1 = nn.random.truncnormal(-2, 2, 1, (n_input, n_hidden)) self.b1 = nn.zeros(n_hidden) self.w2 = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_output)) self.b2 = nn.zeros(n_output) def __call__(self, x): h = nn.tanh(x @ self.w1 + self.b1) return h @ self.w2 + self.b2 def create_toy_data(): x = np.random.uniform(-1., 1., size=(100, 2)) labels = np.prod(x, axis=1) > 0 return x, labels.reshape(-1, 1) x_train, y_train = create_toy_data() model = ClassificationNetwork(2, 4, 1) optimizer = nn.optimizer.Adam(model.parameter, 1e-3) history = [] for i in range(10000): model.clear() logit = model(x_train) log_likelihood = -nn.loss.sigmoid_cross_entropy(logit, y_train).sum() optimizer.maximize(log_likelihood) history.append(log_likelihood.value) plt.plot(history) plt.xlabel("iteration") plt.ylabel("Log Likelihood") plt.show() x0, x1 = np.meshgrid(np.linspace(-1, 1, 100), np.linspace(-1, 1, 100)) x = np.array([x0, x1]).reshape(2, -1).T y = nn.sigmoid(model(x)).value.reshape(100, 100) levels = np.linspace(0, 1, 11) plt.scatter(x_train[:, 0], x_train[:, 1], c=y_train.ravel()) plt.contourf(x0, x1, y, levels, alpha=0.2) plt.colorbar() plt.xlim(-1, 1) plt.ylim(-1, 1) plt.gca().set_aspect('equal') plt.show() ``` ## 5.5 Regularization in Neural Networks ``` def create_toy_data(n=10): x = np.linspace(0, 1, n)[:, None] return x, np.sin(2 * np.pi * x) + np.random.normal(scale=0.25, size=(10, 1)) x_train, y_train = create_toy_data() x = np.linspace(0, 1, 100)[:, None] plt.figure(figsize=(20, 5)) for i, m in enumerate([1, 3, 30]): plt.subplot(1, 3, i + 1) model = RegressionNetwork(1, m, 1) optimizer = nn.optimizer.Adam(model.parameter, 0.1) for j in range(10000): model.clear() y = model(x_train) optimizer.minimize(nn.square(y - y_train).sum()) if j % 1000 == 0: optimizer.learning_rate *= 0.9 y = model(x) plt.scatter(x_train.ravel(), y_train.ravel(), 
marker="x", color="k") plt.plot(x.ravel(), y.value.ravel(), color="k") plt.annotate("M={}".format(m), (0.7, 0.5)) plt.show() class RegularizedRegressionNetwork(nn.Network): def __init__(self, n_input, n_hidden, n_output): super().__init__() with self.set_parameter(): self.w1 = nn.random.truncnormal(-2, 2, 1, (n_input, n_hidden)) self.b1 = nn.zeros(n_hidden) self.w2 = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_output)) self.b2 = nn.zeros(n_output) self.prior = nn.Gaussian(0, 1) def __call__(self, x): h = nn.tanh(x @ self.w1 + self.b1) return h @ self.w2 + self.b2 def log_prior(self): logp = 0 for param in self.parameter.values(): logp += self.prior.log_pdf(param) return logp model = RegularizedRegressionNetwork(1, 30, 1) optimizer = nn.optimizer.Adam(model.parameter, 0.1) for i in range(10000): model.clear() pred = model(x_train) log_posterior = -nn.square(pred - y_train).sum() + model.log_prior() optimizer.maximize(log_posterior) if i % 1000 == 0: optimizer.learning_rate *= 0.9 y = model(x).value plt.scatter(x_train, y_train, marker="x", color="k") plt.plot(x, y, color="k") plt.annotate("M=30", (0.7, 0.5)) plt.show() def load_mnist(): x, label = fetch_openml("mnist_784", return_X_y=True) x = x / np.max(x, axis=1, keepdims=True) x = x.reshape(-1, 28, 28, 1) label = label.astype(np.int) x_train, x_test, label_train, label_test = train_test_split(x, label, test_size=0.1) y_train = LabelBinarizer().fit_transform(label_train) return x_train, x_test, y_train, label_test x_train, x_test, y_train, label_test = load_mnist() class ConvolutionalNeuralNetwork(nn.Network): def __init__(self): super().__init__() with self.set_parameter(): self.conv1 = nn.image.Convolve2d( nn.random.truncnormal(-2, 2, 1, (5, 5, 1, 20)), stride=(1, 1), pad=(0, 0)) self.b1 = nn.array([0.1] * 20) self.conv2 = nn.image.Convolve2d( nn.random.truncnormal(-2, 2, 1, (5, 5, 20, 20)), stride=(1, 1), pad=(0, 0)) self.b2 = nn.array([0.1] * 20) self.w3 = nn.random.truncnormal(-2, 2, 1, (4 * 4 * 20, 100)) self.b3 = nn.array([0.1] * 100) self.w4 = nn.random.truncnormal(-2, 2, 1, (100, 10)) self.b4 = nn.array([0.1] * 10) def __call__(self, x): h = nn.relu(self.conv1(x) + self.b1) h = nn.max_pooling2d(h, (2, 2), (2, 2)) h = nn.relu(self.conv2(h) + self.b2) h = nn.max_pooling2d(h, (2, 2), (2, 2)) h = h.reshape(-1, 4 * 4 * 20) h = nn.relu(h @ self.w3 + self.b3) return h @ self.w4 + self.b4 model = ConvolutionalNeuralNetwork() optimizer = nn.optimizer.Adam(model.parameter, 1e-3) while True: indices = np.random.permutation(len(x_train)) for index in range(0, len(x_train), 50): model.clear() x_batch = x_train[indices[index: index + 50]] y_batch = y_train[indices[index: index + 50]] logit = model(x_batch) log_likelihood = -nn.loss.softmax_cross_entropy(logit, y_batch).mean(0).sum() if optimizer.iter_count % 100 == 0: accuracy = accuracy_score( np.argmax(y_batch, axis=-1), np.argmax(logit.value, axis=-1) ) print("step {:04d}".format(optimizer.iter_count), end=", ") print("accuracy {:.2f}".format(accuracy), end=", ") print("Log Likelihood {:g}".format(log_likelihood.value[0])) optimizer.maximize(log_likelihood) if optimizer.iter_count == 1000: break else: continue break print("accuracy (test):", accuracy_score(np.argmax(model(x_test).value, axis=-1), label_test)) ``` ## 5.6 Mixture Density Networks ``` def create_toy_data(func, n=300): t = np.random.uniform(size=(n, 1)) x = func(t) + np.random.uniform(-0.05, 0.05, size=(n, 1)) return x, t def func(x): return x + 0.3 * np.sin(2 * np.pi * x) def sample(x, t, n=None): assert len(x) == len(t) N = 
len(x) if n is None: n = N indices = np.random.choice(N, n, replace=False) return x[indices], t[indices] x_train, y_train = create_toy_data(func) class MixtureDensityNetwork(nn.Network): def __init__(self, n_input, n_hidden, n_components): self.n_components = n_components super().__init__() with self.set_parameter(): self.w1 = nn.random.truncnormal(-2, 2, 1, (n_input, n_hidden)) self.b1 = nn.zeros(n_hidden) self.w2c = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_components)) self.b2c = nn.zeros(n_components) self.w2m = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_components)) self.b2m = nn.zeros(n_components) self.w2s = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_components)) self.b2s = nn.zeros(n_components) def __call__(self, x): h = nn.tanh(x @ self.w1 + self.b1) coef = nn.softmax(h @ self.w2c + self.b2c) mean = h @ self.w2m + self.b2m std = nn.exp(h @ self.w2s + self.b2s) return coef, mean, std def gaussian_mixture_pdf(x, coef, mu, std): gauss = ( nn.exp(-0.5 * nn.square((x - mu) / std)) / std / np.sqrt(2 * np.pi) ) return (coef * gauss).sum(axis=-1) model = MixtureDensityNetwork(1, 5, 3) optimizer = nn.optimizer.Adam(model.parameter, 1e-4) for i in range(30000): model.clear() coef, mean, std = model(x_train) log_likelihood = nn.log(gaussian_mixture_pdf(y_train, coef, mean, std)).sum() optimizer.maximize(log_likelihood) x = np.linspace(x_train.min(), x_train.max(), 100)[:, None] y = np.linspace(y_train.min(), y_train.max(), 100)[:, None, None] coef, mean, std = model(x) plt.figure(figsize=(20, 15)) plt.subplot(2, 2, 1) plt.plot(x[:, 0], coef.value[:, 0], color="blue") plt.plot(x[:, 0], coef.value[:, 1], color="red") plt.plot(x[:, 0], coef.value[:, 2], color="green") plt.title("weights") plt.subplot(2, 2, 2) plt.plot(x[:, 0], mean.value[:, 0], color="blue") plt.plot(x[:, 0], mean.value[:, 1], color="red") plt.plot(x[:, 0], mean.value[:, 2], color="green") plt.title("means") plt.subplot(2, 2, 3) proba = gaussian_mixture_pdf(y, coef, mean, std).value levels_log = np.linspace(0, np.log(proba.max()), 21) levels = np.exp(levels_log) levels[0] = 0 xx, yy = np.meshgrid(x.ravel(), y.ravel()) plt.contour(xx, yy, proba.reshape(100, 100), levels) plt.xlim(x_train.min(), x_train.max()) plt.ylim(y_train.min(), y_train.max()) plt.subplot(2, 2, 4) argmax = np.argmax(coef.value, axis=1) for i in range(3): indices = np.where(argmax == i)[0] plt.plot(x[indices, 0], mean.value[(indices, np.zeros_like(indices) + i)], color="r", linewidth=2) plt.scatter(x_train, y_train, facecolor="none", edgecolor="b") plt.show() ``` ## 5.7 Bayesian Neural Networks ``` x_train, y_train = make_moons(n_samples=500, noise=0.2) y_train = y_train[:, None] class Gaussian(nn.Network): def __init__(self, shape): super().__init__() with self.set_parameter(): self.m = nn.zeros(shape) self.s = nn.zeros(shape) def __call__(self): self.q = nn.Gaussian(self.m, nn.softplus(self.s) + 1e-8) return self.q.draw() class BayesianNetwork(nn.Network): def __init__(self, n_input, n_hidden, n_output=1): super().__init__() with self.set_parameter(): self.qw1 = Gaussian((n_input, n_hidden)) self.qb1 = Gaussian(n_hidden) self.qw2 = Gaussian((n_hidden, n_hidden)) self.qb2 = Gaussian(n_hidden) self.qw3 = Gaussian((n_hidden, n_output)) self.qb3 = Gaussian(n_output) self.posterior = [self.qw1, self.qb1, self.qw2, self.qb2, self.qw3, self.qb3] self.prior = nn.Gaussian(0, 1) def __call__(self, x): h = nn.tanh(x @ self.qw1() + self.qb1()) h = nn.tanh(h @ self.qw2() + self.qb2()) return nn.Bernoulli(logit=h @ self.qw3() + self.qb3()) def kl(self): kl = 0 
for pos in self.posterior: kl += nn.loss.kl_divergence(pos.q, self.prior).mean() return kl model = BayesianNetwork(2, 5, 1) optimizer = nn.optimizer.Adam(model.parameter, 0.1) for i in range(1, 2001, 1): model.clear() py = model(x_train) elbo = py.log_pdf(y_train).mean(0).sum() - model.kl() / len(x_train) optimizer.maximize(elbo) if i % 100 == 0: optimizer.learning_rate *= 0.9 x_grid = np.mgrid[-2:3:100j, -2:3:100j] x1, x2 = x_grid[0], x_grid[1] x_grid = x_grid.reshape(2, -1).T y = np.mean([model(x_grid).mean.value.reshape(100, 100) for _ in range(10)], axis=0) plt.scatter(x_train[:, 0], x_train[:, 1], c=y_train.ravel(), s=5) plt.contourf(x1, x2, y, np.linspace(0, 1, 11), alpha=0.2) plt.colorbar() plt.xlim(-2, 3) plt.ylim(-2, 3) plt.gca().set_aspect('equal', adjustable='box') plt.show() ```
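The plot above averages 10 stochastic forward passes. As a hedged follow-up sketch (reusing `model`, `x_grid`, `x1`, `x2` from the cell above and assuming the `prml.nn` calls behave as shown there), the spread of those passes gives a rough picture of predictive uncertainty: ``` # Monte Carlo estimate of the predictive spread of the Bayesian network samples = np.stack([model(x_grid).mean.value.reshape(100, 100) for _ in range(10)]) y_std = samples.std(axis=0) plt.contourf(x1, x2, y_std, 11, alpha=0.5) plt.colorbar() plt.gca().set_aspect('equal', adjustable='box') plt.show() ```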
# TensorFlow-Slim [TensorFlow-Slim](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/slim) is a high-level API for building TensorFlow models. TF-Slim makes defining models in TensorFlow easier, cutting down on the number of lines required to define models and reducing overall clutter. In particular, TF-Slim shines in image domain problems, and weights pre-trained on the [ImageNet dataset](http://www.image-net.org/) for many famous CNN architectures are provided for [download](https://github.com/tensorflow/models/tree/master/research/slim#pre-trained-models). *Note: Unlike previous notebooks, not every cell here is necessarily meant to run. Some are just for illustration.* ## VGG-16 To show these benefits, this tutorial will focus on [VGG-16](https://arxiv.org/abs/1409.1556). This style of architecture came in 2nd during the 2014 ImageNet Large Scale Visual Recognition Challenge and is famous for its simplicity and depth. The model looks like this: ![vgg16](Figures/vgg16.png) The architecture is pretty straight-forward: simply stack multiple 3x3 convolutional filters one after another, interleave with 2x2 maxpools, double the number of convolutional filters after each maxpool, flatten, and finish with fully connected layers. A couple ideas behind this model: - Instead of using larger filters, VGG notes that the receptive field of two stacked layers of 3x3 filters is 5x5, and with 3 layers, 7x7. Using 3x3's allows VGG to insert additional non-linearities and requires fewer weight parameters to learn. - Doubling the width of the network every time the features are spatially downsampled (maxpooled) gives the model more representational capacity while achieving spatial compression. ### TensorFlow Core In code, setting up the computation graph for prediction with just TensorFlow Core API is kind of a lot: ``` import tensorflow as tf # Set up the data loading: images, labels = ... 
# Define the model with tf.name_scope('conv1_1') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 3, 64], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv1 = tf.nn.relu(bias, name=scope) with tf.name_scope('conv1_2') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 64], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(conv1, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv1 = tf.nn.relu(bias, name=scope) pool1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool1') with tf.name_scope('conv2_1') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 128], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(pool1, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv2 = tf.nn.relu(bias, name=scope) with tf.name_scope('conv2_2') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 128, 128], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(conv2, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv2 = tf.nn.relu(bias, name=scope) pool2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool2') with tf.name_scope('conv3_1') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 128, 256], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(pool2, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv3 = tf.nn.relu(bias, name=scope) with tf.name_scope('conv3_2') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 256], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(conv3, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv3 = tf.nn.relu(bias, name=scope) with tf.name_scope('conv3_3') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 256], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(conv3, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv3 = tf.nn.relu(bias, name=scope) pool3 = tf.nn.max_pool(conv3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool3') with tf.name_scope('conv4_1') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 512], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(pool3, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv4 = tf.nn.relu(bias, name=scope) with tf.name_scope('conv4_2') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32, stddev=1e-1), name='weights') 
conv = tf.nn.conv2d(conv4, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv4 = tf.nn.relu(bias, name=scope) with tf.name_scope('conv4_3') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(conv4, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv4 = tf.nn.relu(bias, name=scope) pool4 = tf.nn.max_pool(conv4, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool4') with tf.name_scope('conv5_1') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(pool4, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv5 = tf.nn.relu(bias, name=scope) with tf.name_scope('conv5_2') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(conv5, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv5 = tf.nn.relu(bias, name=scope) with tf.name_scope('conv5_3') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(conv5, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv5 = tf.nn.relu(bias, name=scope) pool5 = tf.nn.max_pool(conv5, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool5') with tf.name_scope('fc_6') as scope: flat = tf.reshape(pool5, [-1, 7*7*512]) weights = tf.Variable(tf.truncated_normal([7*7*512, 4096], dtype=tf.float32, stddev=1e-1), name='weights') mat = tf.matmul(flat, weights) biases = tf.Variable(tf.constant(0.0, shape=[4096], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(mat, biases) fc6 = tf.nn.relu(bias, name=scope) fc6_drop = tf.nn.dropout(fc6, keep_prob=0.5, name='dropout') with tf.name_scope('fc_7') as scope: weights = tf.Variable(tf.truncated_normal([4096, 4096], dtype=tf.float32, stddev=1e-1), name='weights') mat = tf.matmul(fc6, weights) biases = tf.Variable(tf.constant(0.0, shape=[4096], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(mat, biases) fc7 = tf.nn.relu(bias, name=scope) fc7_drop = tf.nn.dropout(fc7, keep_prob=0.5, name='dropout') with tf.name_scope('fc_8') as scope: weights = tf.Variable(tf.truncated_normal([4096, 1000], dtype=tf.float32, stddev=1e-1), name='weights') mat = tf.matmul(fc7, weights) biases = tf.Variable(tf.constant(0.0, shape=[1000], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(mat, biases) predictions = bias ``` Understanding every line of this model isn't important. The main point to notice is how much space this takes up. Several of the above lines (conv2d, bias_add, relu, maxpool) can obviously be combined to cut down on the size a bit, and you could also try to compress the code with some clever `for` looping, but all at the cost of sacrificing readability. 
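For illustration only, here is the kind of combined helper that the previous sentence alludes to; the name and structure are made up for this sketch and are not part of the original model definition: ``` def conv3x3_relu(inputs, out_channels, scope): # Fold conv2d + bias_add + relu into one call; the number of input # channels is inferred from the input tensor's static shape. in_channels = inputs.get_shape().as_list()[-1] with tf.name_scope(scope): kernel = tf.Variable(tf.truncated_normal([3, 3, in_channels, out_channels], dtype=tf.float32, stddev=1e-1), name='weights') biases = tf.Variable(tf.constant(0.0, shape=[out_channels], dtype=tf.float32), trainable=True, name='biases') conv = tf.nn.conv2d(inputs, kernel, [1, 1, 1, 1], padding='SAME') return tf.nn.relu(tf.nn.bias_add(conv, biases), name=scope) ```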
With this much code, there is high potential for bugs or typos (to be honest, there are probably a few up there^), and modifying or refactoring the code becomes a huge pain. By the way, although VGG-16's paper was titled "Very Deep Convolutional Networks for Large-Scale Image Recognition", it isn't even considered a particularly deep network by today's standards. [Residual Networks](https://arxiv.org/abs/1512.03385) (2015) started beating state-of-the-art results with 50, 101, and 152 layers in their first incarnation, before really going off the deep end and getting up to 1001 layers and beyond. I'll spare you from me typing out the uncompressed TensorFlow Core code for that. ### TF-Slim Enter TF-Slim. The same VGG-16 model can be expressed as follows: ``` import tensorflow as tf slim = tf.contrib.slim # Set up the data loading: images, labels = ... # Define the model: with slim.arg_scope([slim.conv2d, slim.fully_connected], activation_fn=tf.nn.relu, weights_initializer=tf.truncated_normal_initializer(0.0, 0.01), weights_regularizer=slim.l2_regularizer(0.0005)): net = slim.repeat(images, 2, slim.conv2d, 64, [3, 3], scope='conv1') net = slim.max_pool2d(net, [2, 2], scope='pool1') net = slim.repeat(net, 2, slim.conv2d, 128, [3, 3], scope='conv2') net = slim.max_pool2d(net, [2, 2], scope='pool2') net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3') net = slim.max_pool2d(net, [2, 2], scope='pool3') net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv4') net = slim.max_pool2d(net, [2, 2], scope='pool4') net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv5') net = slim.max_pool2d(net, [2, 2], scope='pool5') net = slim.fully_connected(net, 4096, scope='fc6') net = slim.dropout(net, 0.5, scope='dropout6') net = slim.fully_connected(net, 4096, scope='fc7') net = slim.dropout(net, 0.5, scope='dropout7') net = slim.fully_connected(net, 1000, activation_fn=None, scope='fc8') predictions = net ``` Much cleaner. For the TF-Slim version, it's much more obvious what the network is doing, writing it is faster, and typos and bugs are much less likely. Things to notice: - Weight and bias variables for every layer are automatically generated and tracked. Also, the "in_channel" parameter for determining weight dimension is automatically inferred from the input. This allows you to focus on what layers you want to add to the model, without worrying as much about boilerplate code. - The repeat() function allows you to add the same layer multiple times. In terms of variable scoping, repeat() will add "_#" to the scope to distinguish the layers, so we'll still have layers of scope "`conv1_1`, `conv1_2`, `conv2_1`, etc...". - The non-linear activation function (here: ReLU) is wrapped directly into the layer. In more advanced architectures with batch normalization, that's included as well. - With slim.argscope(), we're able to specify defaults for common parameter arguments, such as the type of activation function or weights_initializer. Of course, these defaults can still be overridden in any individual layer, as demonstrated in the finally fully connected layer (fc8). If you're reusing one of the famous architectures (like VGG-16), TF-Slim already has them defined, so it becomes even easier: ``` import tensorflow as tf slim = tf.contrib.slim vgg = tf.contrib.slim.nets.vgg # Set up the data loading: images, labels = ... 
# Define the model: predictions = vgg.vgg16(images) ``` ## Pre-Trained Weights TF-Slim provides weights pre-trained on the ImageNet dataset available for [download](https://github.com/tensorflow/models/tree/master/research/slim#pre-trained-models). First a quick tutorial on saving and restoring models: ### Saving and Restoring One of the nice features of modern machine learning frameworks is the ability to save model parameters in a clean way. While this may not have been a big deal for the MNIST logistic regression model because training only took a few seconds, it's easy to see why you wouldn't want to have to re-train a model from scratch every time you wanted to do inference or make a small change if training takes days or weeks. TensorFlow provides this functionality with its [Saver()](https://www.tensorflow.org/programmers_guide/variables#saving_and_restoring) class. While I just said that saving the weights for the MNIST logistic regression model isn't necessary because of how it is easy to train, let's do it anyway for illustrative purposes: ``` import tensorflow as tf from tqdm import trange from tensorflow.examples.tutorials.mnist import input_data # Import data mnist = input_data.read_data_sets("MNIST_data/", one_hot=True) # Create the model x = tf.placeholder(tf.float32, [None, 784], name='x') W = tf.Variable(tf.zeros([784, 10]), name='W') b = tf.Variable(tf.zeros([10]), name='b') y = tf.nn.bias_add(tf.matmul(x, W), b, name='y') # Define loss and optimizer y_ = tf.placeholder(tf.float32, [None, 10], name='y_') cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y)) train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy) # Variable Initializer init_op = tf.global_variables_initializer() # Create a Saver object for saving weights saver = tf.train.Saver() # Create a Session object, initialize all variables sess = tf.Session() sess.run(init_op) # Train for _ in trange(1000): batch_xs, batch_ys = mnist.train.next_batch(100) sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys}) # Save model save_path = saver.save(sess, "./log_reg_model.ckpt") print("Model saved in file: %s" % save_path) # Test trained model correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) print('Test accuracy: {0}'.format(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))) sess.close() ``` Note, the differences from what we worked with yesterday: - In lines 9-12, 15, there are now 'names' properties attached to certain ops and variables of the graph. There are many reasons to do this, but here, it will help us identify which variables are which when restoring. - In line 23, we create a Saver() object, and in line 35, we save the variables of the model to a checkpoint file. This will create a series of files containing our saved model. Otherwise, the code is more or less the same. 
To restore the model: ``` import tensorflow as tf from tqdm import trange from tensorflow.examples.tutorials.mnist import input_data # Import data mnist = input_data.read_data_sets("MNIST_data/", one_hot=True) # Create a Session object, initialize all variables sess = tf.Session() # Restore weights saver = tf.train.import_meta_graph('./log_reg_model.ckpt.meta') saver.restore(sess, tf.train.latest_checkpoint('./')) print("Model restored.") graph = tf.get_default_graph() x = graph.get_tensor_by_name("x:0") y = graph.get_tensor_by_name("y:0") y_ = graph.get_tensor_by_name("y_:0") # Test trained model correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) print('Test accuracy: {0}'.format(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))) sess.close() ``` Importantly, notice that we didn't have to retrain the model. Instead, the graph and all variable values were loaded directly from our checkpoint files. In this example, this probably takes just as long, but for more complex models, the utility of saving/restoring is immense. ### TF-Slim Model Zoo One of the biggest and most surprising unintended benefits of the ImageNet competition was deep networks' transfer learning properties: CNNs trained on ImageNet classification could be re-used as general purpose feature extractors for other tasks, such as object detection. Training on ImageNet is very intensive and expensive in both time and computation, and requires a good deal of set-up. As such, the availability of weights already pre-trained on ImageNet has significantly accelerated and democratized deep learning research. Pre-trained models of several famous architectures are listed in the TF Slim portion of the [TensorFlow repository](https://github.com/tensorflow/models/tree/master/research/slim#pre-trained-models). Also included are the papers that proposed them and their respective performances on ImageNet. Side note: remember though that accuracy is not the only consideration when picking a network; memory and speed are important to keep in mind as well. Each entry has a link that allows you to download the checkpoint file of the pre-trained network. Alternatively, you can download the weights as part of your program. A tutorial can be found [here](https://github.com/tensorflow/models/blob/master/research/slim/slim_walkthrough.ipynb), but the general idea: ``` from datasets import dataset_utils import tensorflow as tf url = "http://download.tensorflow.org/models/vgg_16_2016_08_28.tar.gz" checkpoints_dir = './checkpoints' if not tf.gfile.Exists(checkpoints_dir): tf.gfile.MakeDirs(checkpoints_dir) dataset_utils.download_and_uncompress_tarball(url, checkpoints_dir) import os import tensorflow as tf from nets import vgg slim = tf.contrib.slim # Load images images = ... # Pre-process processed_images = ... # Create the model, use the default arg scope to configure the batch norm parameters. with slim.arg_scope(vgg.vgg_arg_scope()): logits, _ = vgg.vgg_16(processed_images, num_classes=1000, is_training=False) probabilities = tf.nn.softmax(logits) # Load checkpoint values init_fn = slim.assign_from_checkpoint_fn( os.path.join(checkpoints_dir, 'vgg_16.ckpt'), slim.get_model_variables('vgg_16')) ```
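The snippet above stops at building `init_fn`. As a brief, hedged continuation (assuming `processed_images` has been filled in with real preprocessed images), the returned function is meant to be called with a live session so the pre-trained VGG-16 variables are actually restored before inference: ``` with tf.Session() as sess: init_fn(sess) # load the downloaded vgg_16.ckpt weights into the graph probs = sess.run(probabilities) ```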
# Gaussian mixture model with expectation maximization algorithm GMM with EM. This notebook implements the following: 1) Function that avoids computing inverse of matrix when computing $y = A^{-1}x$ by solving system of linear equations. 2) Log sum trick to avoid underflow when multiplying small numbers. 3) pdf of the Multivariate normal distribution 4) E-step function of the EM algorithm 5) M-step function of the EM algorithm 6) Variational lower bound function 7) GMM function 8) Training function for GMM 9) Scatter plot of clusters (Plot at the bottom of this notebook shows 8 clusters from a dataset of 100 points) # Imports ``` import sys import numpy as np from numpy.linalg import det, solve import matplotlib import matplotlib.pyplot as plt %matplotlib inline ``` # Variables info N: number of data (rows) d: dimension of data X: (N x d), data points C: int, number of clusters Computed from E-step: gamma: (N x C), distribution q(T), probabilities of clusters for objects Initial values are also subsequently computed & updated from M-step: pi: (C), mixture component weights, initial weights of T (latent variable), sum to 1, t=1, 2 or 3. mu: (C x d), mixture component means sigma: (C x d x d), # mixture component covariance matrices # Generate random data. ``` N = 100 d = 2 X = np.random.rand(N,d) print(X[:5]) print(X.shape) fig, ax = plt.subplots(1,1, figsize=(15,10)) ax.set_title('Data') ax.scatter(X[:, 0], X[:, 1], c='black', s=100) #plt.axis('equal') plt.show() ``` # Generate initial values ``` epsilon = 1e-10 # Use in stopping criterion as well as in preventing numerical errors. C = 7 def rand_input(C, d): # https://stackoverflow.com/questions/18659858/generating-a-list-of-random-numbers-summing-to-1 pi0 = np.random.dirichlet(np.ones(C),size=1)[0] # Generating a list of random numbers, summing to 1 mu0 = np.random.rand(C,d) sigma0 = np.random.rand(C,d,d) return pi0, mu0, sigma0 pi0, mu0, sigma0 = rand_input(C, d) print(pi0) print(pi0.shape) print(mu0) print(mu0.shape) print(sigma0) print(sigma0.shape) ``` # Avoid computing inverse of matrix when computing y=(A_inverse)x. ``` # Function which avoids computing inverse of matrix when computing y=(A_inverse)x by solving linear equations. # Use in E-step. def _A_inverse_times_X(A, X): # A is nxn # X is rxn Y = [] for row_data in X: Y_new = np.linalg.solve(A, row_data) Y.append(Y_new) Y = np.asarray(Y) assert Y.shape == X.shape, "Output shape must be equal to shape of X." return Y ``` # Multivariate normal (Gaussian) distribution $MVN = \frac{1}{\sqrt{(2\pi)^n|\boldsymbol\Sigma_c|}} \exp\left(-\frac{1}{2}({x}-{\mu_c})^T{\boldsymbol\Sigma_c}^{-1}({x}-{\mu_c})\right)$ Computes pdf of Multivariate normal (Gaussian) distribution. ``` # Alternatively, one could also use multivariate_normal.pdf from scipy.stats # instead of this function def __mvg(cov, X, mean): diff = X - mean Y = _A_inverse_times_X(cov, diff) pow_term = -0.5 * np.matmul(Y, diff.T) e_term = np.exp(pow_term) const_term = (2*np.pi)**(X.shape[1]) det_term = np.linalg.det(cov) deno_term = np.sqrt(np.multiply(const_term, det_term)) P = np.divide(e_term, deno_term) return P.diagonal() # returns the pdf, shape=(num_X,) # Returns pdf for multiple components. def _mvg(cov, X, mean): P = [] for i, r in enumerate(mean): P.append(__mvg(cov[i], X, mean[i])) return P # shape=(C, num_X) ``` # Log sum trick ``` # log sum trick to prevent underflow in E-step. 
# https://timvieira.github.io/blog/post/2014/02/11/exp-normalize-trick/ # https://web.archive.org/web/20150502150148/http://machineintelligence.tumblr.com/post/4998477107/the-log-sum-exp-trick # https://www.quora.com/Why-is-e-log_-e-x-equal-to-x def exp_normalize(x): b = x.max() y = np.exp(x - b) return y / y.sum() ``` # E-step Multiply the initial weight of the class with the multivariate guassian pdf of each data from the class. ``` def E_step(X, pi, mu, sigma): N = X.shape[0] # number of objects C = pi.shape[0] # number of clusters d = mu.shape[1] # dimension of each object gamma = np.zeros((N, C)) # distribution q(T) gamma = np.mat(np.zeros((N, C))) prob = _mvg(sigma, X, mu) # pdf of data X in each class prob = np.mat(prob) for c in range(C): # Instead of multiplying probabilities directly which could result in underflow, # we'll work in log scale. # pi[c] = P(T=c), prob[c, :] = P(X|T=c) #gamma[:, c] = np.multiply(pi[c], prob[c, :].T) gamma[:, c] = np.log(pi[c] + epsilon) + np.log(prob[c, :].T + epsilon) for i in range(N): # Instead of summing the denominator, we'll use the log sum trick coded in exp_normalize function. gamma[i, :] = exp_normalize(gamma[i, :]) return gamma # Q(T) = P(T|X,theta), weights of each model (class) for each data in X. ``` # M-Step Compute the following: [Equations from wiki](https://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm#E_step) ![alt text](https://wikimedia.org/api/rest_v1/media/math/render/svg/0e0327c8676ae66ec651b422a19f5ea532913c7a) ![alt text](https://wikimedia.org/api/rest_v1/media/math/render/svg/45f3e73f50d396aadc98182709eee0c0d513aa6b) ![alt text](https://wikimedia.org/api/rest_v1/media/math/render/svg/a92651be432155520db19dc0b4da807039d96eb0) ``` def M_step(X, gamma): N = X.shape[0] # number of objects C = gamma.shape[1] # number of clusters d = X.shape[1] # dimension of each object mu = np.zeros((C, d)) sigma = [] pi = np.zeros(C) # for each model in C for c in range(C): # sum of all Q(t) of model c sum_Q_t = np.sum(gamma[:, c]) # mean of model c mu[c, :] = np.sum(np.multiply(X, gamma[:, c]), axis=0) / sum_Q_t # cov of model c diff = X - mu[c] sigma.append(diff.T @ np.multiply(diff, gamma[:, c]) / sum_Q_t) # weight of model c pi[c] = sum_Q_t / N return pi, mu, np.asarray(sigma) ``` # Variational lower bound Computes the scalar output of the following: $$\sum_{i=1}^{N} \sum_{c=1}^{C} q(t_i =c) (\log \pi_c + \log(MVN)) - \sum_{i=1}^{N} \sum_{c=1}^{K} q(t_i =c) \log q(t_i =c)$$ ``` def compute_vlb(X, pi, mu, sigma, gamma): """ Each input is numpy array: X: (N x d), data points gamma: (N x C), distribution q(T) pi: (C) mu: (C x d) sigma: (C x d x d) Returns value of variational lower bound """ N = X.shape[0] # number of objects C = gamma.shape[1] # number of clusters d = X.shape[1] # dimension of each object VLB = 0.0 for c in range(C): mu_c = np.expand_dims(mu[c,:], axis=0) sigma_c = np.expand_dims(sigma[c,:], axis=0) gamma_c = gamma[:,c] mvg = np.asarray(_mvg(sigma_c, X, mu_c)) # 1xc sum = np.log(pi[c] + epsilon) + np.log(mvg + epsilon) # 1xc, + 1e-30 to prevent log(0) prod = np.multiply(gamma_c, sum.T) # transpose sum for element wise multiplication prod2 = np.multiply(gamma_c, np.log(gamma_c + epsilon)) # element wise multiplication, + 1e-30 to prevent log(0) VLB += (prod - prod2) VLB = np.sum(VLB, axis=0) # sum all values for all rows return VLB ``` # GMM Find the best parameters by optimizing with the following criterion: Stopping threshold: ($|\frac{\mathcal{L}_i-\mathcal{L}_{i-1}}{\mathcal{L}_{i-1}}| \le 
\text{threshold}$) ``` def GMM(X, C, d, threshold=epsilon, max_iter=1000, trial=500): N = X.shape[0] # number of objects d = X.shape[1] # dimension of each object best_VLB = None best_pi = None best_mu = None best_sigma = None for rs in range(trial): try: pi, mu, sigma = rand_input(C, d) # Try random initial values curr_LVB, prev_LVB = 0.0, 0.0 iter = 0 while iter < max_iter: #print('iter, rs', iter, rs) prev_LVB = curr_LVB gamma = E_step(X, pi, mu, sigma) pi, mu, sigma = M_step(X, gamma) curr_LVB = compute_vlb(X, pi, mu, sigma, gamma) #print('prev_LVB', prev_LVB) #print('curr_LVB', curr_LVB) # LVB is the variation lower bound function. It must NOT be decreasing. # We are trying to maximize LVB so that the gap between LVB & GMM is minimized. if prev_LVB != 0.0 and curr_LVB < prev_LVB: print('VLB ERROR EXIT!: curr_LVB < prev_LVB') sys.exit(1) # If numerical error in LVB, goto next trial. if np.isnan(curr_LVB) == True: break if prev_LVB != 0.0 and abs((curr_LVB - prev_LVB) / (prev_LVB)) <= threshold: if best_VLB == None or curr_LVB > np.float32(best_VLB): best_VLB = curr_LVB best_pi = pi best_mu = mu best_sigma = sigma break # end while loop, goto for loop iter += 1 except np.linalg.LinAlgError: print("Singular matrix not allowed.") pass return best_VLB, best_pi, best_mu, best_sigma ``` # Train ``` # Train # If numerical errors occured, run a couple of more times. best_VLB, best_pi, best_mu, best_sigma = GMM(X, C, d) print('best_VLB', best_VLB) print('best_pi', best_pi) print('best_mu', best_mu) print('best_sigma', best_sigma) # Use the best values to do 1 more E-step to get gamma. gamma = E_step(X, best_pi, best_mu, best_sigma) labels = np.ravel(gamma.argmax(axis=1)) ``` # Scatter plot ``` import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt ''' # Generate colors for each class. # only works for max of 4 classes. def gen_col(C): colors =[] for c in range(C): colors.append(np.random.randint(0, 255, C) / 255) print(colors) return colors colors = gen_col(C) plt.scatter(X[:, 0], X[:, 1], c=labels, cmap=matplotlib.colors.ListedColormap(colors), s=30) plt.axis('equal') plt.show() ''' # https://stackoverflow.com/questions/12487060/matplotlib-color-according-to-class-labels N = C # Number of labels # setup the plot fig, ax = plt.subplots(1,1, figsize=(15,10)) # define the data x = X[:, 0] y = X[:, 1] tag = labels # define the colormap cmap = plt.cm.jet # extract all colors from the .jet map cmaplist = [cmap(i) for i in range(cmap.N)] # create the new map cmap = cmap.from_list('Custom cmap', cmaplist, cmap.N) # define the bins and normalize bounds = np.linspace(0,N,N+1) norm = mpl.colors.BoundaryNorm(bounds, cmap.N) # make the scatter scat = ax.scatter(x, y, c=tag, s=np.random.randint(100,500,N), cmap=cmap, norm=norm) # create the colorbar cb = plt.colorbar(scat, spacing='proportional',ticks=bounds) cb.set_label('Custom cbar') ax.set_title('Discrete color mappings') plt.show() ```
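The comment in the `__mvg` cell notes that `scipy.stats.multivariate_normal` could be used instead. As an optional sanity check (assuming the training above produced valid `best_mu`/`best_sigma`), the hand-rolled pdf can be compared with scipy's reference implementation for one component: ``` from scipy.stats import multivariate_normal c = 0 # pick one mixture component ref = multivariate_normal.pdf(X, mean=best_mu[c], cov=best_sigma[c]) ours = __mvg(best_sigma[c], X, best_mu[c]) print(np.allclose(ref, ours)) # expect True ```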
# Aim 1. **Introduce the python ecosystem** * How do I run a `.py` script? * Where do I enter python commands? * What is `Python 2` and `Python 3`? * wait!, there is something called `Anaconda`? * `JupyterLab`, `Jupyter Notebooks` and reproducible research 2. **Why should I use python?** * Is python as easy as `Ferret`? * Is python as fast as `Fortran`? * Does python have many toolboxes like in `MATLAB`? * Can python read and write `netCDF` files? * Can python plot geographic maps and coastlines? * Can it handle larger than memory files (say >2 GB)? 3. **Possibilities with python** * Exploratory Data Analysis * Interactive plotting * Parallel processing * Cloud computing --- ## Python ecosystem ### Running a `.py` script * Activate your environment, and run the script by ```bash python your_script.py ``` ### Three ways to spawn a python interpreter * old fashioned `python` console * rich and colorful `ipython` console * `JupyterLab` #### Starting the console * In terminal, type `python` * In terminal, type `ipython` * In terminal, type `jupyter lab` #### IPython * Old python interface is boring and less interactive * `IPython` supports tab completion, syntax highlighting, documentation lookup * [cell magics](https://ipython.org/ipython-doc/3/interactive/magics.html) like `%run`, `%debug`, `%edit` and `%bookmark` makes interactive coding easier ```{note} More info can be found [here](https://ipython.org/ipython-doc/3/interactive/tutorial.html) ``` #### JupyterLab and Jupyter Notebook * [JupyterLab](https://jupyterlab.readthedocs.io/en/stable/) is an interface where you can * create notebooks * manage files and folders * display images * start terminal * display csv files and much more * [Notebooks](https://jupyter-notebook.readthedocs.io/en/stable/notebook.html) holds your code, plots and discussion in a single space * Notebook sharing promotes reproducible research * Notebooks are future of scientific communication, ([Nature article](https://www.nature.com/news/interactive-notebooks-sharing-the-code-1.16261)) * Jupyter is not limited to Python. You can run codes in * `Julia` * `Bash` * `R` * `Pyferret` and much more ````{tip} **Additional benefits of `JupyterLab/Notebook`** * Start jupyter in a remote computer say HPC and connect in your local browser ```bash # in remote machine type: jupyter lab --no-browser --ip="$HOSTNAME" --port=8888 # in local machine type: ssh -N -L 8888:localhost:8888 username@remoteIP ``` * Open browser and type address as `localhost:8888` and press `Enter` * No more waiting for the slow X-window forwarding to pop-up * Easily access and view remote files ```` ## Anaconda, miniconda and conda env * `Anaconda` and `miniconda` differs only in the number of pre-packed packages * `Anaconda` comes with many common-use packages (> 500 MB) * While `miniconda` is a lite version (<60 MB) * Both installs `conda`, which is the package manager * `Conda` helps you isolate environments, allowing you to update/install certain packages without affecting other working environment. ```{attention} * **Stay away from Python 2** * Avoid Python 2. It is now in *legacy* mode * Packages are dropping support for Python 2 * Most scientific packages have moved to Python 3 * Found an old code in Python 2? 
Use conda to create a Python 2 environment ``` ## Further references * Rather than general python tutorials, look for scientific computing tutorials * Some such python lessons covering basics are: * <https://geo-python.github.io/site/index.html> * <https://scipy-lectures.org/> * <https://fabienmaussion.info/scientific_programming/html/00-Introduction.html> * <https://github.com/geoschem/GEOSChem-python-tutorial> * <https://unidata.github.io/online-python-training/> * <https://rabernat.github.io/research_computing/> * <http://swcarpentry.github.io/python-novice-inflammation/>
# Importing Libraries ``` import networkx as nx import matplotlib.pyplot as plt import pandas as pd import seaborn as sns import numpy as np from matplotlib.ticker import MaxNLocator ``` # Creating Erdos Renyi Graph and Plotting Degree Centrality ``` def visualiseER(nodes,p): G = nx.erdos_renyi_graph(nodes,p) d = nx.degree_centrality(G) fig = plt.figure(figsize = (6,5)) colors = list(d.values()) pos = nx.kamada_kawai_layout(G) nx.draw(G, with_labels=True, pos = pos, node_size = 350, node_color = colors, edge_color = 'k') fig.set_facecolor('white') plt.title("Erdos Renyi Graph with nodes = {} and p = {}".format(nodes, p)) plt.show() fig = plt.figure(figsize = (8,5)) w = 0.01 bins = np.arange(min(list(d.values())), max(list(d.values())) + w, w) plt.hist(list(d.values()),bins = bins, density = True, alpha = 0.65, edgecolor = "black") plt.title("Degree Centrality Histogram") plt.gca().yaxis.set_major_locator(MaxNLocator(integer=True)) plt.xlabel("Degree Centrality Values") plt.ylabel("Frequency") plt.show() return(d) d1 = visualiseER(100, 0.3) d2 = visualiseER(100,0.6) ``` ## Inferences 1. The more connected nodes can be seen in Yellow Color. As the connectivity decreases, the color moves from **Yellow -> Green -> Blue -> Violet**. 2. As the probability of connections (p) increases, there will be a higher frequency of those nodes with a higher degree centrality. It means that overall **nodes get more connected**. 3. Below curve gives a comparison between the two ER Models. It follows a **Binomial Distribution**. # Comparative ER Models ``` fig = plt.figure(figsize = (8,5)) sns.kdeplot(list(d1.values()), shade = True, label = "ER1") sns.kdeplot(list(d2.values()), shade = True, label = "ER2") plt.title("Degree Centrality Density Plot") plt.gca().yaxis.set_major_locator(MaxNLocator(integer=True)) plt.legend() plt.xlabel("Degree Centrality Values") plt.ylabel("Frequency") ``` # Creating Barabasi Albert Random Graph and Plotting Degree Centrality ``` def visualiseBAR(nodes,m): G = nx.barabasi_albert_graph(nodes,m) d = nx.degree_centrality(G) fig = plt.figure(figsize = (6,5)) colors = list(d.values()) pos = nx.kamada_kawai_layout(G) nx.draw(G, with_labels=True, pos = pos, node_size = 350, node_color = colors, edge_color = 'k') fig.set_facecolor('white') plt.title("Barabasi Albert Random Graph with nodes = {} and m = {}".format(nodes, m)) plt.show() fig = plt.figure(figsize = (8,5)) w = 0.01 bins = np.arange(min(list(d.values())), max(list(d.values())) + w, w) plt.hist(list(d.values()), bins = bins, density = True, alpha = 0.65, edgecolor = "black") plt.title("Degree Centrality Histogram") plt.gca().yaxis.set_major_locator(MaxNLocator(integer=True)) plt.xlabel("Degree Centrality Values") plt.ylabel("Frequency") plt.show() visualiseBAR(100,3) ``` ## Inferences 1. The Node that is most connected will be in Yellow. As the connectivity decreases it moves from **Yellow -> Green -> Blue -> Violet**. 2. The Histogram shows that most nodes are having lower degree centrality. They are less connected to their neighbours. 3. Barabasi Albert Model follows a **Power Law distribution.** ## Conclusion 1. **Degree Centrality**:<br> It is a measure of node connectivity in a Graph. It is simply a measure of the number of edges it has w.r.t to the rest of nodes in a network. Here, in directed networks, nodes are having both In-degree and Out-degree, and both are used to calculate it. 2. **Erdos Renyi Model**:<br> a. 
In this graph, each edge is present with a fixed probability p, independently of all the other edges.<br> b. The degree distribution is binomial; as p increases, the curve shifts to the right because the overall connectivity increases.<br> c. Most real-world networks are not ER random graphs; they are scale-free (BA-like) graphs. 3. **Barabasi Albert Model:** <br> It has two main characteristics:<br> a. **Growth**: the model starts from a small initial network and keeps adding new nodes to it.<br> b. **Preferential Attachment:** follows the **rich gets richer** phenomenon; a new edge is most likely to attach to nodes that already have a high degree.<br> This is why the Barabasi Albert histogram shows a high frequency of nodes with low degree centrality: most nodes join the network late, attach to only a few existing nodes, and have little chance of attracting later arrivals, while a few well-connected nodes keep gaining links and become the **hubs** of the network. The resulting degree distribution is **scale free (power law)**.<br> *Examples for further analysis: social networks, citation networks, the World Wide Web (WWW)*
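As a small check of the definition in point 1 (a sketch using the imports above), `nx.degree_centrality` is simply each node's degree divided by `n - 1`: ``` G = nx.erdos_renyi_graph(50, 0.2) dc = nx.degree_centrality(G) manual = {n: G.degree(n) / (G.number_of_nodes() - 1) for n in G.nodes()} print(all(np.isclose(dc[n], manual[n]) for n in G.nodes())) # expect True ```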
# L1-SVM vs L2-SVM: using the Barrier and SMO algorithms ``` %cd .. import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from collections import Counter import time from sklearn.metrics import confusion_matrix, classification_report from opt.svm import SVC from opt.utils.data_splitter import split4ovr, split4ovo from opt.utils.metrics import measure_performance # Fix random seed np.random.seed(0) ``` ## Dataset ``` # This code block is used for reducing the MNIST dataset # from opt.utils.preprocess import process_raw_data # process_raw_data(input_filepath='data/mnist/', # output_filepath='data/filtered_mnist', # classes2keep=[0,1,6,9], # nTrain=500, # nTest=200) # Load data data = np.load('data/filtered_mnist.npz') x_train = data['a'] y_train = data['b'] x_test = data['c'] y_test = data['d'] print("Number of training samples: ", len(y_train)) print("Distribution of training samples: ", Counter(y_train)) print("Number of test samples: ", len(y_test)) print("Distribution of training samples: ", Counter(y_test)) # Visualise data plt.rcParams.update({'font.size': 16}) fig, (ax1, ax2, ax3, ax4) = plt.subplots(1,4, figsize=(35,35)) imx, imy = (28,28) visual = np.reshape(x_train[2], (imx,imy)) ax1.set_title("Example Data Image, y="+str(int(y_train[2]))) ax1.imshow(visual, vmin=0, vmax=1) visual = np.reshape(x_train[36], (imx,imy)) ax2.set_title("Example Data Image, y="+str(int(y_train[36]))) ax2.imshow(visual, vmin=0, vmax=1) visual = np.reshape(x_train[10], (imx,imy)) ax3.set_title("Example Data Image, y="+str(int(y_train[10]))) ax3.imshow(visual, vmin=0, vmax=1) visual = np.reshape(x_train[8], (imx,imy)) ax4.set_title("Example Data Image, y="+str(int(y_train[8]))) ax4.imshow(visual, vmin=0, vmax=1) # plt.savefig("report/report_pics/data.pdf", format="pdf") # # Visualise data # plt.rcParams.update({'font.size': 16}) # fig, (ax1, ax2, ax3, ax4) = plt.subplots(1,4, figsize=(35,35)) # imx, imy = (28,28) # visual = np.reshape(x_test[10], (imx,imy)) # ax1.set_title("Example Data Image, y="+str(int(y_test[10]))) # ax1.imshow(visual, vmin=0, vmax=1) # visual = np.reshape(x_test[412], (imx,imy)) # ax2.set_title("Example Data Image, y="+str(int(y_test[412]))) # ax2.imshow(visual, vmin=0, vmax=1) # visual = np.reshape(x_test[524], (imx,imy)) # ax3.set_title("Example Data Image, y="+str(int(y_test[524]))) # ax3.imshow(visual, vmin=0, vmax=1) # visual = np.reshape(x_test[636], (imx,imy)) # ax4.set_title("Example Data Image, y="+str(int(y_test[636]))) # ax4.imshow(visual, vmin=0, vmax=1) # # plt.savefig("report/report_pics/bad_data.pdf", format="pdf") ``` ## Barrier method ``` # OVO data x_train_ovo, y_train_ovo = split4ovo(x_train, y_train) # OVR data: y_train_ovr = split4ovr(y_train) # Train OVR: svm.fit(x_train, y_train_ovr) # Initialise L1-SVM L1_barrier_svm = SVC(C=1.0, kernel="gauss", param='scale', decision_function_shape="ovo", loss_fn='L1', opt_algo="barrier") # Barrier fit L1_barrier_svm.fit(x_train_ovo, y_train_ovo, t=1, mu=20, tol=1e-6, max_iter=100, tolNewton=1e-12, maxIterNewton=100) # Test L1_barrier_yhat = L1_barrier_svm.predict(x_test) print("Time taken: ", L1_barrier_svm.time_taken) measure_performance(y_test, L1_barrier_yhat, average="macro") # Initialise L2-SVM L2_barrier_svm = SVC(C=1.0, kernel="gauss", param='scale', decision_function_shape="ovo", loss_fn='L2', opt_algo="barrier") # Barrier fit L2_barrier_svm.fit(x_train_ovo, y_train_ovo, t=1, mu=20, tol=1e-6, max_iter=100, tolNewton=1e-12, maxIterNewton=100) # Test L2_barrier_yhat = 
L2_barrier_svm.predict(x_test) print("Time taken: ", L2_barrier_svm.time_taken) measure_performance(y_test, L2_barrier_yhat, average="macro") ``` #### Convergence Plots ``` fig, (ax1, ax2) = plt.subplots(1,2, figsize=(25,10)) plt.rcParams.update({'font.size': 20}) # L1-SVM for ClassVsClass, info in L1_barrier_svm.opt_info.items(): ax1.plot(np.linalg.norm(np.array(info['iterates'])-info['x'], axis=1), label=ClassVsClass) ax1.set_title("L1-SVM \n Barrier method: Iterate Convergence Plot") ax1.set_ylabel("$|| \mathbf{x}_k-\mathbf{x}^{\star} ||_2$") ax1.set_xlabel("Iterations $k$") ax1.set_yscale("log") ax1.legend() ax1.grid(which='both', axis='both') # L2-SVM for ClassVsClass, info in L2_barrier_svm.opt_info.items(): ax2.plot(np.linalg.norm(np.array(info['iterates'])-info['x'], axis=1), label=ClassVsClass) ax2.set_title("L2-SVM \n Barrier method: Iterate Convergence Plot") ax2.set_ylabel("$|| \mathbf{x}_k-\mathbf{x}^{\star} ||_2$") ax2.set_xlabel("Iterations $k$") ax2.set_yscale("log") ax2.legend() ax2.grid(which='both', axis='both') plt.tight_layout() # plt.savefig("report/report_pics/barrier_iterate_conv.pdf", format="pdf") plt.show() fig, (ax1, ax2) = plt.subplots(1,2, figsize=(25,10)) plt.rcParams.update({'font.size': 20}) # L1-SVM for ClassVsClass, info in L1_barrier_svm.opt_info.items(): ax1.step(np.cumsum(info['newton_iterations']), info['duality_gaps'], label=str(ClassVsClass)+": $\mu=$"+str(info['mu'])) ax1.set_title("L1-SVM \n Progress of Barrier method") ax1.set_ylabel("$|| \mathbf{x}_k-\mathbf{x}^{\star} ||_2$") ax1.set_xlabel("Newton Iterations $k$") ax1.set_yscale("log") ax1.legend() ax1.grid(which='both', axis='both') # L2-SVM for ClassVsClass, info in L2_barrier_svm.opt_info.items(): ax2.step(np.cumsum(info['newton_iterations']), info['duality_gaps'], label=str(ClassVsClass)+": $\mu=$"+str(info['mu'])) ax2.set_title("L2-SVM \n Progress of Barrier method") ax2.set_ylabel("$|| \mathbf{x}_k-\mathbf{x}^{\star} ||_2$") ax2.set_xlabel("Newton Iterations $k$") ax2.set_yscale("log") ax2.legend() ax2.grid(which='both', axis='both') plt.tight_layout() # plt.savefig("report/report_pics/barrier_duality_gap.pdf", format="pdf") plt.show() ``` #### Confusion matrix ``` L1_barrier_conf_matrix = confusion_matrix(y_test, L1_barrier_yhat, normalize=None) L2_barrier_conf_matrix = confusion_matrix(y_test, L2_barrier_yhat, normalize=None) fig, (ax1, ax2) = plt.subplots(1,2, figsize=(25,10)) plt.rcParams.update({'font.size': 20}) # Plot confusion matrix L1-SVM df_cm = pd.DataFrame(L1_barrier_conf_matrix, index=np.unique(y_test), columns=np.unique(y_test)) sns.heatmap(df_cm, annot=True, fmt='', cmap='Blues', cbar=True, square=True, center=110, linewidths=.1, linecolor='black', ax=ax1) ax1.set_ylim((4,0)) ax1.set_title("L1-SVM \n Barrier method: Confusion Matrix") ax1.set_ylabel('True label') ax1.set_xlabel('Predicted label') # Plot confusion matrix L2-SVM df_cm = pd.DataFrame(L2_barrier_conf_matrix, index=np.unique(y_test), columns=np.unique(y_test)) sns.heatmap(df_cm, annot=True, fmt='', cmap='Blues', cbar=True, square=True, center=110, linewidths=.1, linecolor='black', ax=ax2) ax2.set_ylim((4,0)) ax2.set_title("L2-SVM \n Barrier method: Confusion Matrix") ax2.set_ylabel('True label') ax2.set_xlabel('Predicted label') plt.tight_layout() # plt.savefig("report/report_pics/barrier_confusion_matrix.pdf", format="pdf") plt.show() ``` ## SMO method ``` # Initialise L1-SVM L1_smo_svm = SVC(C=1.0, kernel="gauss", param='scale', decision_function_shape="ovo", loss_fn='L1', opt_algo="smo") # SMO fit 
L1_smo_svm.fit(x_train_ovo, y_train_ovo, tol=1e-3, max_iter=5) # Test L1_smo_yhat = L1_smo_svm.predict(x_test) print("Time taken: ", L1_smo_svm.time_taken) measure_performance(y_test, L1_smo_yhat, average="macro") # Initialise L2-SVM L2_smo_svm = SVC(C=1.0, kernel="gauss", param='scale', decision_function_shape="ovo", loss_fn='L2', opt_algo="smo") # SMO fit L2_smo_svm.fit(x_train_ovo, y_train_ovo, tol=1e-3, max_iter=100) # Test L2_smo_yhat = L2_smo_svm.predict(x_test) print("Time taken: ", L2_smo_svm.time_taken) measure_performance(y_test, L2_smo_yhat, average="macro") ``` #### Convergence Plots ``` fig, (ax1, ax2) = plt.subplots(1,2, figsize=(25,10)) plt.rcParams.update({'font.size': 20}) # L1-SVM for ClassVsClass, info in L1_smo_svm.opt_info.items(): ax1.plot(np.linalg.norm(np.array(info['iterates'])-info['x'], axis=1), label=ClassVsClass) ax1.set_title("L1-SVM \n SMO method: Iterate Convergence Plot") ax1.set_ylabel("$|| \mathbf{x}_k-\mathbf{x}^{\star} ||_2$") ax1.set_xlabel("Iterations $k$") ax1.set_yscale("log") ax1.legend() ax1.grid(which='both', axis='both') # L2-SVM for ClassVsClass, info in L2_smo_svm.opt_info.items(): ax2.plot(np.linalg.norm(np.array(info['iterates'])-info['x'], axis=1), label=ClassVsClass) ax2.set_title("L2-SVM \n SMO method: Iterate Convergence Plot") ax2.set_ylabel("$|| \mathbf{x}_k-\mathbf{x}^{\star} ||_2$") ax2.set_xlabel("Iterations $k$") ax2.set_yscale("log") ax2.legend() ax2.grid(which='both', axis='both') plt.tight_layout() # plt.savefig("report/report_pics/smo_iterate_conv.pdf", format="pdf") plt.show() ``` #### Confusion matrix ``` L1_smo_conf_matrix = confusion_matrix(y_test, L1_barrier_yhat, normalize=None) L2_smo_conf_matrix = confusion_matrix(y_test, L2_barrier_yhat, normalize=None) fig, (ax1, ax2) = plt.subplots(1,2, figsize=(25,10)) plt.rcParams.update({'font.size': 20}) # Plot confusion matrix L1-SVM df_cm = pd.DataFrame(L1_smo_conf_matrix, index=np.unique(y_test), columns=np.unique(y_test)) sns.heatmap(df_cm, annot=True, fmt='', cmap='Blues', cbar=True, square=True, center=110, linewidths=.1, linecolor='black', ax=ax1) ax1.set_ylim((4,0)) ax1.set_title("L1-SVM \n SMO method: Confusion Matrix") ax1.set_ylabel('True label') ax1.set_xlabel('Predicted label') # Plot confusion matrix L2-SVM df_cm = pd.DataFrame(L2_smo_conf_matrix, index=np.unique(y_test), columns=np.unique(y_test)) sns.heatmap(df_cm, annot=True, fmt='', cmap='Blues', cbar=True, square=True, center=110, linewidths=.1, linecolor='black', ax=ax2) ax2.set_ylim((4,0)) ax2.set_title("L2-SVM \n SMO method: Confusion Matrix") ax2.set_ylabel('True label') ax2.set_xlabel('Predicted label') plt.tight_layout() # plt.savefig("report/report_pics/smo_confusion_matrix.pdf", format="pdf") plt.show() ``` ## CVXOPT method (for comparison) ``` # Initialise L1-SVM L1_cvxopt_svm = SVC(C=1.0, kernel="gauss", param='scale', decision_function_shape="ovo", loss_fn='L1', opt_algo="cvxopt") # CVXOPT fit L1_cvxopt_svm.fit(x_train_ovo, y_train_ovo) # Test L1_cvxopt_yhat = L1_cvxopt_svm.predict(x_test) print("Time taken: ", L1_cvxopt_svm.time_taken) measure_performance(y_test, L1_cvxopt_yhat, average="macro") # Initialise L2-SVM L2_cvxopt_svm = SVC(C=1.0, kernel="gauss", param='scale', decision_function_shape="ovo", loss_fn='L2', opt_algo="cvxopt") # CVXOPT fit L2_cvxopt_svm.fit(x_train_ovo, y_train_ovo) # Test L2_cvxopt_yhat = L2_cvxopt_svm.predict(x_test) print("Time taken: ", L2_cvxopt_svm.time_taken) measure_performance(y_test, L2_cvxopt_yhat, average="macro") ``` ## Scikit-Learn SVM (for comparison) ``` # 
# Scikit-learn for L1-SVM
from sklearn.svm import SVC as sklearnSVM

sklearn_svm = sklearnSVM(C=1.0, kernel='rbf', decision_function_shape='ovo')
sklearn_svm.fit(x_train, y_train)
sklearn_pred = sklearn_svm.predict(x_test)

measure_performance(y_test, sklearn_pred, average="macro")
```
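Note that `measure_performance` is called throughout this notebook but its definition is not shown in this excerpt. Below is a minimal sketch of what such a helper might look like, assuming it simply reports accuracy together with macro-averaged precision, recall and F1 via scikit-learn; the name `measure_performance_sketch`, its body and its output format are assumptions, not the notebook's actual implementation.

```
# Hypothetical stand-in for the measure_performance helper used above.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def measure_performance_sketch(y_true, y_pred, average="macro"):
    """Print accuracy plus (macro-averaged) precision, recall and F1."""
    acc = accuracy_score(y_true, y_pred)
    prec, rec, f1, _ = precision_recall_fscore_support(y_true, y_pred, average=average)
    print(f"Accuracy : {acc:.4f}")
    print(f"Precision: {prec:.4f}  Recall: {rec:.4f}  F1: {f1:.4f}")
    return acc, prec, rec, f1

# e.g. measure_performance_sketch(y_test, sklearn_pred, average="macro")
```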
# Part 8 (continued) - Introduction to Protocols

### Context

Now that we have covered Plans, we will introduce a new object called the Protocol. A Protocol coordinates a sequence of Plans and deploys them on remote workers, running them in a single pass. It is a high-level object that contains the logic of a complex computation distributed across several workers. The main feature of a Protocol is the ability to be sent / searched for / fetched back between workers, and finally deployed on chosen workers. A user can therefore create a protocol and upload it to a worker in the cloud; any other worker can then search for it there, download it, and apply it on itself and on all the workers it is connected to. The following shows how this is done.

Authors:
- Théo Ryffel - Twitter [@theoryffel](https://twitter.com/theoryffel) - GitHub: [@LaRiffle](https://github.com/LaRiffle)

Translator:
- Jan Moritz Behnken - Github: [@JMBehnken](https://github.com/JMBehnken)

### 1. Create and deploy

Protocols are created from a list of `(worker, plan)` pairs. `worker` can be either a real worker, a worker id, or a string standing in for a fictive worker. The last case can be used at creation time to specify whether two plans should (or should not) belong to the same worker at deployment time. `plan` can be either a Plan or a PointerPlan.

```
import torch as th
import syft as sy
hook = sy.TorchHook(th)

# IMPORTANT: Local worker should not be a client worker
hook.local_worker.is_client_worker = False
```

We create three different plans and combine them into one protocol. Each plan increments its input by one.

```
@sy.func2plan(args_shape=[(1,)])
def inc1(x):
    return x + 1

@sy.func2plan(args_shape=[(1,)])
def inc2(x):
    return x + 1

@sy.func2plan(args_shape=[(1,)])
def inc3(x):
    return x + 1

protocol = sy.Protocol([("worker1", inc1), ("worker2", inc2), ("worker3", inc3)])
```

Next, the protocol has to be bound to workers, which is done by calling `.deploy(*workers)`. Let's create three workers for this.

```
bob = sy.VirtualWorker(hook, id="bob")
alice = sy.VirtualWorker(hook, id="alice")
charlie = sy.VirtualWorker(hook, id="charlie")

workers = alice, bob, charlie

protocol.deploy(*workers)
```

As you can see, the plans were immediately sent to the right workers: the protocol has been deployed! This happened in two phases:
- first, the worker strings were mapped to the real workers
- then, the plans were transmitted to their respective workers

### 2. Run a protocol

Running a protocol means executing all of its plans in sequence. The input data is sent to the location of the first plan, that plan is run on the data, and its output is forwarded to the second plan, and so on. The final result is returned once all plans have completed; it consists of pointers to the location of the last plan.

```
x = th.tensor([1.0])
ptr = protocol.run(x)
ptr
ptr.get()
```

The input `1.0` went through all three plans and was incremented by one in each, which is why the output is `4.0`!

Of course, **protocols also work with pointers**.
```
james = sy.VirtualWorker(hook, id="james")
protocol.send(james)
x = th.tensor([1.0]).send(james)
ptr = protocol.run(x)
ptr
```

As you can see, the result is a pointer to `james`.

```
ptr = ptr.get()
ptr
ptr = ptr.get()
ptr
```

### 3. Search for a protocol

In a real project you may want to download a protocol and run it automatically on the data of your own workers. To do so, we create a **non-deployed protocol** and make it available on a worker.

```
protocol = sy.Protocol([("worker1", inc1), ("worker2", inc2), ("worker3", inc3)])
protocol.tag('my_protocol')
protocol.send(james)

me = sy.hook.local_worker # get access to me as a local worker
```

Now we launch a search for the protocol.

```
responses = me.request_search(['my_protocol'], location=james)
responses
```

What we get back is a pointer to the protocol.

```
ptr_protocol = responses[0]
```

As usual, the pointer can be used to fetch the object:

```
protocol_back = ptr_protocol.get()
protocol_back
```

From here on, we proceed exactly as in sections 1 & 2.

```
protocol_back.deploy(alice, bob, charlie)

x = th.tensor([1.0])
ptr = protocol_back.run(x)
ptr.get()
```

More examples with protocols and real-world scenarios are in preparation, but you can already see the potential of this object!

### Star PySyft on GitHub!

The easiest way to help our community is by starring the GitHub repos! This helps raise awareness of the cool tools we're building.

- [Star PySyft](https://github.com/OpenMined/PySyft)

### Use our tutorials on GitHub!

We have created helpful tutorials to develop an understanding of Federated and Privacy-Preserving Learning and to show how we are building the individual components.

- [Check out the PySyft tutorials](https://github.com/OpenMined/PySyft/tree/master/examples/tutorials)

### Join our Slack!

The best way to keep up to date on the latest developments is to join our community! You can do so by filling out the form at [http://slack.openmined.org](http://slack.openmined.org).

### Join a code project!

The best way to contribute to our community is to become a code contributor! You can always go to the PySyft GitHub Issues page and filter for "Projects". This will show you all the top-level tickets and give an overview of which projects you can join! If you don't want to join a project but would still like to do a bit of coding, you can also look for more "one-off" mini-projects by searching for GitHub issues marked "good first issue".

- [PySyft Projects](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3AProject)
- [Good First Issue Tickets](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)

### Donate

If you don't have time to contribute to our codebase but would still like to lend support, you can also become a backer on our Open Collective. All donations go toward our web hosting and other community expenses such as hackathons and meetups!

- [OpenMined's Open Collective Page](https://opencollective.com/openmined)
# Doppler timing tests Benchmark tests for various methods in the ``DopplerMap`` class. ``` # Enable progress bars? TQDM = False %matplotlib inline %run notebook_setup.py import starry starry.config.lazy = False starry.config.quiet = True import starry import numpy as np import matplotlib.pyplot as plt import timeit from tqdm.notebook import tqdm as _tqdm tqdm = lambda *args, **kwargs: _tqdm(*args, disable=not TQDM, **kwargs) def get_time(statement="map.flux()", number=100, **kwargs): setup = f"map = starry.DopplerMap(**kwargs); {statement}" t0 = timeit.timeit( statement, setup=setup, number=1, globals={**locals(), **globals()} ) if t0 > 0.1: return t0 else: return ( timeit.timeit( statement, setup=setup, number=number, globals={**locals(), **globals()} ) / number ) ``` ## `DopplerMap.flux()` Benchmarks for different evaluation ``method``s. ### As a function of `ydeg` With `nt = 1`, `nc = 1`, `nw = 200`. ``` methods = ["dotconv", "convdot", "conv", "design"] ydegs = [1, 2, 3, 5, 8, 10, 13, 15] nt = 1 nc = 1 wav = np.linspace(500, 501, 200) time = np.zeros((len(methods), len(ydegs))) for i, method in tqdm(enumerate(methods), total=len(methods)): for j, ydeg in tqdm(enumerate(ydegs), total=len(ydegs), leave=False): time[i, j] = get_time( f"map.flux(method='{method}')", ydeg=ydeg, nt=nt, nc=nc, wav=wav ) plt.figure(figsize=(8, 5)) plt.plot(ydegs, time.T, "o-", label=methods) plt.legend(fontsize=10) plt.yscale("log") plt.xscale("log") plt.xlabel("spherical harmonic degree") plt.ylabel("time [s]"); ``` ### As a function of `nt` With `ydeg = 3`, `nc = 1`, `nw = 200`. ``` methods = ["dotconv", "convdot", "conv", "design"] ydeg = 3 nts = [1, 2, 3, 5, 10, 20] nc = 1 wav = np.linspace(500, 501, 200) time = np.zeros((len(methods), len(nts))) for i, method in tqdm(enumerate(methods), total=len(methods)): for j, nt in tqdm(enumerate(nts), total=len(nts), leave=False): time[i, j] = get_time( f"map.flux(method='{method}')", ydeg=ydeg, nt=nt, nc=nc, wav=wav ) plt.figure(figsize=(8, 5)) plt.plot(nts, time.T, "o-", label=methods) plt.legend(fontsize=10) plt.yscale("log") plt.xscale("log") plt.xlabel("number of epochs") plt.ylabel("time [s]"); ``` ### As a function of `nw` With `ydeg = 3`, `nt = 1`, `nc = 1`. ``` methods = ["dotconv", "convdot", "conv", "design"] ydeg = 3 nt = 1 nc = 1 nws = [100, 200, 300, 400, 500, 800, 1000] wavs = [np.linspace(500, 501, nw) for nw in nws] time = np.zeros((len(methods), len(wavs))) for i, method in tqdm(enumerate(methods), total=len(methods)): for j, wav in tqdm(enumerate(wavs), total=len(wavs), leave=False): time[i, j] = get_time( f"map.flux(method='{method}')", ydeg=ydeg, nt=nt, nc=nc, wav=wav ) plt.figure(figsize=(8, 5)) plt.plot(nws, time.T, "o-", label=methods) plt.legend(fontsize=10) plt.yscale("log") plt.xscale("log") plt.xlabel("number of wavelength bins") plt.ylabel("time [s]"); ```
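If you only want a quick spot check rather than the full sweep above, the `get_time` helper can be reused for a single configuration. The cell below is an addition rather than part of the original benchmark, and the choice of `ydeg=5`, `nt=1`, `nc=1`, `nw=200` is arbitrary.

```
# Added spot check: time each evaluation method once at a fixed configuration,
# reusing the get_time helper defined earlier in this notebook.
methods = ["dotconv", "convdot", "conv", "design"]
wav = np.linspace(500, 501, 200)

for method in methods:
    t = get_time(f"map.flux(method='{method}')", ydeg=5, nt=1, nc=1, wav=wav)
    print(f"{method:>8s}: {1e3 * t:8.2f} ms per call")
```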
# Anchor explanations for movie sentiment In this example, we will explain why a certain sentence is classified by a logistic regression as having negative or positive sentiment. The logistic regression is trained on negative and positive movie reviews. ``` import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.linear_model import LogisticRegression from sklearn.metrics import accuracy_score from sklearn.model_selection import train_test_split import spacy from alibi.explainers import AnchorText from alibi.datasets import fetch_movie_sentiment from alibi.utils.download import spacy_model ``` ### Load movie review dataset The `fetch_movie_sentiment` function returns a `Bunch` object containing the features, the targets and the target names for the dataset. ``` movies = fetch_movie_sentiment() movies.keys() data = movies.data labels = movies.target target_names = movies.target_names ``` Define shuffled training, validation and test set ``` train, test, train_labels, test_labels = train_test_split(data, labels, test_size=.2, random_state=42) train, val, train_labels, val_labels = train_test_split(train, train_labels, test_size=.1, random_state=42) train_labels = np.array(train_labels) test_labels = np.array(test_labels) val_labels = np.array(val_labels) ``` ### Apply CountVectorizer to training set ``` vectorizer = CountVectorizer(min_df=1) vectorizer.fit(train) ``` ### Fit model ``` np.random.seed(0) clf = LogisticRegression(solver='liblinear') clf.fit(vectorizer.transform(train), train_labels) ``` ### Define prediction function ``` predict_fn = lambda x: clf.predict(vectorizer.transform(x)) ``` ### Make predictions on train and test sets ``` preds_train = predict_fn(train) preds_val = predict_fn(val) preds_test = predict_fn(test) print('Train accuracy', accuracy_score(train_labels, preds_train)) print('Validation accuracy', accuracy_score(val_labels, preds_val)) print('Test accuracy', accuracy_score(test_labels, preds_test)) ``` ### Load spaCy model English multi-task CNN trained on OntoNotes, with GloVe vectors trained on Common Crawl. Assigns word vectors, context-specific token vectors, POS tags, dependency parse and named entities. ``` model = 'en_core_web_md' spacy_model(model=model) nlp = spacy.load(model) ``` ### Initialize anchor text explainer ``` explainer = AnchorText(nlp, predict_fn) ``` ### Explain a prediction ``` class_names = movies.target_names text = data[4] print(text) ``` Prediction: ``` pred = class_names[predict_fn([text])[0]] alternative = class_names[1 - predict_fn([text])[0]] print('Prediction: %s' % pred) ``` Explanation: ``` np.random.seed(0) explanation = explainer.explain(text, threshold=0.95, use_unk=True) ``` use_unk=True means we will perturb examples by replacing words with UNKs. Let us now take a look at the anchor. The word 'exercise' basically guarantees a negative prediction. ``` print('Anchor: %s' % (' AND '.join(explanation.anchor))) print('Precision: %.2f' % explanation.precision) print('\nExamples where anchor applies and model predicts %s:' % pred) print('\n'.join([x for x in explanation.raw['examples'][-1]['covered_true']])) print('\nExamples where anchor applies and model predicts %s:' % alternative) print('\n'.join([x for x in explanation.raw['examples'][-1]['covered_false']])) ``` ### Changing the perturbation distribution Let's try this with another perturbation distribution, namely one that replaces words by similar words instead of UNKs. 
Explanation:

```
np.random.seed(0)
explanation = explainer.explain(text, threshold=0.95, use_unk=False, sample_proba=0.5)
```

The anchor now shows that we need more than a single word to guarantee the negative prediction:

```
print('Anchor: %s' % (' AND '.join(explanation.anchor)))
print('Precision: %.2f' % explanation.precision)
print('\nExamples where anchor applies and model predicts %s:' % pred)
print('\n'.join([x for x in explanation.raw['examples'][-1]['covered_true']]))
print('\nExamples where anchor applies and model predicts %s:' % alternative)
print('\n'.join([x for x in explanation.raw['examples'][-1]['covered_false']]))
```

We can make the token perturbation distribution sample words that are more similar to the ground truth word via the `top_n` argument. Smaller values (default=100) should result in sentences that are more coherent and thus closer to the distribution of natural language, which could influence the returned anchor. By setting `use_similarity_proba` to True, the sampling distribution for perturbed tokens is proportional to the similarity score between the possible perturbations and the original word. We can also put more weight on similar words via the `temperature` argument. Lower values of `temperature` increase the sampling weight of more similar words. The following example will perturb tokens in the original sentence with probability equal to `sample_proba`. The sampling distribution for the perturbed tokens is proportional to the similarity score between the ground truth word and each of the `top_n` words.

```
np.random.seed(0)
explanation = explainer.explain(text, threshold=0.95, use_similarity_proba=True,
                                sample_proba=0.5, use_unk=False, top_n=20, temperature=.2)

print('Anchor: %s' % (' AND '.join(explanation.anchor)))
print('Precision: %.2f' % explanation.precision)
print('\nExamples where anchor applies and model predicts %s:' % pred)
print('\n'.join([x for x in explanation.raw['examples'][-1]['covered_true']]))
print('\nExamples where anchor applies and model predicts %s:' % alternative)
print('\n'.join([x for x in explanation.raw['examples'][-1]['covered_false']]))
```
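To compare the anchors returned under the different perturbation settings side by side, the calls above can be wrapped in a small helper. This is only a convenience sketch: `compare_anchors` and the chosen settings are illustrative and not part of the `alibi` API.

```
# Hypothetical convenience wrapper around the explain() calls used above.
def compare_anchors(explainer, text, settings, threshold=0.95, seed=0):
    """Run explain() once per named settings dict and print anchor + precision."""
    for name, kwargs in settings.items():
        np.random.seed(seed)
        explanation = explainer.explain(text, threshold=threshold, **kwargs)
        anchor = ' AND '.join(explanation.anchor)
        print(f"{name:>10s} | precision {explanation.precision:.2f} | anchor: {anchor}")

compare_anchors(
    explainer,
    text,
    settings={
        'unknowns': dict(use_unk=True),
        'similar': dict(use_unk=False, use_similarity_proba=True,
                        sample_proba=0.5, top_n=20, temperature=.2),
    },
)
```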
# Tutorial with 1d advection equation Jiawei Zhuang 7/24/2019 (updated 02/13/2020) ``` !pip install git+https://github.com/JiaweiZhuang/data-driven-pdes@fix-beam %tensorflow_version 1.x import os import matplotlib.pyplot as plt import numpy as np import pandas as pd import tensorflow as tf tf.enable_eager_execution() %matplotlib inline import tensorflow as tf import numpy as np import pandas as pd import matplotlib.pyplot as plt plt.rcParams['font.size'] = 14 from google.colab import files # colab-specific utilities; comment out when running locally tf.enable_eager_execution() tf.__version__, tf.keras.__version__ import xarray from datadrivenpdes.core import grids from datadrivenpdes.core import integrate from datadrivenpdes.core import models from datadrivenpdes.core import tensor_ops from datadrivenpdes.advection import equations as advection_equations from datadrivenpdes.pipelines import model_utils ``` # Define simulation grids ``` # we mostly run simulation on coarse grid # the fine grid is only for obtaining training data and generate the reference "truth" grid_length = 32 fine_grid_resolution = 256 coarse_grid_resolution = 32 assert fine_grid_resolution % coarse_grid_resolution == 0 # 1d domain, so only 1 point along y dimension fine_grid = grids.Grid( size_x=fine_grid_resolution, size_y=1, step=grid_length/fine_grid_resolution ) coarse_grid = grids.Grid( size_x=coarse_grid_resolution, size_y=1, step=grid_length/coarse_grid_resolution ) x_fine, _ = fine_grid.get_mesh() x_coarse, _ = coarse_grid.get_mesh() x_fine.shape, x_coarse.shape ``` # Generate initial condition ``` def make_square(x, height=1.0, center=0.25, width=0.1): """ Args: x: Numpy array. Shape should be (nx, 1) or (nx,) height: float, peak concentration center: float, relative center position in 0~1 width: float, relative width in 0~0.5 Returns: Numpy array, same shape as `x` """ nx = x.shape[0] c = np.zeros_like(x) c[int((center-width)*nx):int((center+width)*nx)] = height return c fig, axes = plt.subplots(1, 2, figsize=[8, 3]) axes[0].plot(x_fine, make_square(x_fine), marker='.') axes[1].plot(x_coarse, make_square(x_coarse), marker='.') for ax in axes: ax.set_ylim(-0.1, 1.1) def make_multi_square(x, height_list, width_list): c_list = [] for height in height_list: for width in width_list: c_temp = make_square(x, height=height, width=width) c_list.append(c_temp) return np.array(c_list) height_list = np.arange(0.1, 1.1, 0.1) width_list = np.arange(1/16, 1/4, 1/16) # width is chosen so that coarse-graining of square wave is symmetric c_init = make_multi_square( x_coarse, height_list = height_list, width_list = width_list ) c_init.shape # (sample, x, y) fig, axes = plt.subplots(1, 5, figsize=[16, 3]) for i, ax in enumerate(axes): ax.plot(x_coarse, c_init[4*i+2, :, 0], marker='.') ax.set_ylim(-0.1, 1.1) ``` # Wrap with velocity fields ``` # for simplicity, use uniform constant velocity field for all samples initial_state = { 'concentration': c_init.astype(np.float32), # tensorflow code expects float32 'x_velocity': np.ones(c_init.shape, np.float32) * 1.0, 'y_velocity': np.zeros(c_init.shape, np.float32) } for k, v in initial_state.items(): print(k, v.shape) # (sample, x, y) ``` # Run baseline advection solver ``` # first-order finite difference model, very diffusive model_1st = models.FiniteDifferenceModel( advection_equations.UpwindAdvection(cfl_safety_factor=0.5), coarse_grid ) # second-order scheme with monotonic flux limiter model_2nd = models.FiniteDifferenceModel( 
advection_equations.VanLeerAdvection(cfl_safety_factor=0.5), coarse_grid ) time_steps = np.arange(0, 256+1) %time integrated_1st = integrate.integrate_steps(model_1st, initial_state, time_steps) %time integrated_2nd = integrate.integrate_steps(model_2nd, initial_state, time_steps) for k, v in integrated_1st.items(): print(k, v.shape) # (time, sample, x, y) def wrap_as_xarray(integrated): dr = xarray.DataArray( integrated['concentration'].numpy().squeeze(), dims = ('time', 'sample', 'x'), coords = {'time': time_steps, 'x': x_coarse.squeeze()} ) return dr dr_1st = wrap_as_xarray(integrated_1st) dr_1st.isel(time=[0, 10, 128], sample=[4, 10, 16]).plot(col='sample', hue='time') dr_2nd = wrap_as_xarray(integrated_2nd) dr_2nd.isel(time=[0, 10, 128], sample=[4, 10, 16]).plot(col='sample', hue='time') ``` # Run untrained neural net model ``` model_nn = models.PseudoLinearModel( advection_equations.FiniteVolumeAdvection(0.5), coarse_grid, num_time_steps=4, # multi-step loss function stencil_size=3, kernel_size=(3, 1), num_layers=4, filters=32, constrained_accuracy_order=1, learned_keys = {'concentration_edge_x', 'concentration_edge_y'}, # finite volume view, use edge concentration activation='relu', ) model_nn.learned_keys, model_nn.fixed_keys tf.random.set_random_seed(0) %time integrated_untrained = integrate.integrate_steps(model_nn, initial_state, time_steps) (wrap_as_xarray(integrated_untrained) .isel(time=[0, 2, 10], sample=[4, 10, 16]) .plot(col='sample', hue='time', ylim=[-0.2, 0.5]) ) # untrained model is diverging! # weights are initialized at the first model call len(model_nn.get_weights()) model_nn.get_weights()[0].shape # first convolutional filter, (x, y, input_channel, filter_channel) model_nn.get_weights()[2].shape # second convolutional filter, (x, y, filter_channel, filter_channel) ``` # Generate training data from high-resolution baseline simulations ``` # This data-generation code is a bit involved, mostly because we use multi-step loss function. # To produce large training data in parallel, refer to the create_training_data.py script in source code. 
def reference_solution(initial_state_fine, fine_grid, coarse_grid, coarse_time_steps=256): # use high-order traditional scheme as reference model equation = advection_equations.VanLeerAdvection(cfl_safety_factor=0.5) key_defs = equation.key_definitions # reference model runs at high resolution model = models.FiniteDifferenceModel(equation, fine_grid) # need 8x more time steps for 8x higher resolution to satisfy CFL coarse_ratio = fine_grid.size_x // coarse_grid.size_x steps = np.arange(0, coarse_time_steps*coarse_ratio+1, coarse_ratio) # solve advection at high resolution integrated_fine = integrate.integrate_steps(model, initial_state_fine, steps) # regrid to coarse resolution integrated_coarse = tensor_ops.regrid( integrated_fine, key_defs, fine_grid, coarse_grid) return integrated_coarse def make_train_data(integrated_coarse, coarse_time_steps=256, example_time_steps=4): # we need to re-format data so that single-step input maps to multi-step output # remove the last several time steps, as training input train_input = {k: v[:-example_time_steps] for k, v in integrated_coarse.items()} # merge time and sample dimension as required by model n_time, n_sample, n_x, n_y = train_input['concentration'].shape for k in train_input: train_input[k] = tf.reshape(train_input[k], [n_sample * n_time, n_x, n_y]) print('\n train_input shape:') for k, v in train_input.items(): print(k, v.shape) # (merged_sample, x, y) # pick the shifted time series, as training output output_list = [] for shift in range(1, example_time_steps+1): # output time series, starting from each single time step output_slice = integrated_coarse['concentration'][shift:coarse_time_steps - example_time_steps + shift + 1] # merge time and sample dimension as required by training n_time, n_sample, n_x, n_y = output_slice.shape output_slice = tf.reshape(output_slice, [n_sample * n_time, n_x, n_y]) output_list.append(output_slice) train_output = tf.stack(output_list, axis=1) # concat along shift_time dimension, after sample dimension print('\n train_output shape:', train_output.shape) # (merged_sample, shift_time, x, y) # sanity check on shapes assert train_output.shape[0] == train_input['concentration'].shape[0] # merged_sample assert train_output.shape[2] == train_input['concentration'].shape[1] # x assert train_output.shape[3] == train_input['concentration'].shape[2] # y assert train_output.shape[1] == example_time_steps return train_input, train_output # need to re-evaluate initial condition on high-resolution grid c_init_fine = make_multi_square( x_fine, height_list = height_list, width_list = width_list ) initial_state_fine = { 'concentration': c_init_fine.astype(np.float32), # tensorflow code expects float32 'x_velocity': np.ones(c_init_fine.shape, np.float32) * 1.0, 'y_velocity': np.zeros(c_init_fine.shape, np.float32) } %time integrated_ref = reference_solution(initial_state_fine, fine_grid, coarse_grid) train_input, train_output = make_train_data(integrated_ref) [v.shape for v in initial_state_fine.values()] # make sure that single-step input corresponds to multi-step (advected) output i_sample = 48 # any number between 0 and train_output.shape[0] plt.plot(train_input['concentration'][i_sample].numpy(), label='init') for shift in range(train_output.shape[1])[:3]: plt.plot(train_output[i_sample, shift].numpy(), label=f'shift={shift+1}') plt.title(f'no. {i_sample} sample') plt.legend() ``` # Train neural net model Can skip to the next section "load existing weights" if weights have been saved before. 
``` %%time # same as training standard Keras model model_nn.compile( optimizer='adam', loss='mae' ) tf.random.set_random_seed(42) np.random.seed(42) history = model_nn.fit( train_input, train_output, epochs=120, batch_size=32, verbose=1, shuffle=True ) df_history = pd.DataFrame(history.history) df_history.plot(marker='.') df_history['loss'][3:].plot(marker='.') # might not converged yet ``` ## Save trained model ``` model_utils.save_weights(model_nn, 'weights_1d_120epochs.h5') # files.download('weights_1d_120epochs.h5') ``` # Or directly load trained model Need to manually upload weights as Colab local file ``` model_utils.load_weights(model_nn, 'weights_1d_120epochs.h5') ``` # Integrate trained model ``` %time integrated_nn = integrate.integrate_steps(model_nn, initial_state, time_steps) dr_nn = wrap_as_xarray(integrated_nn) dr_nn.sizes dr_nn.isel(time=[0, 10, 128], sample=[4, 10, 16]).plot(col='sample', hue='time') # much better than traditional finite difference scheme ``` ## Evaluate accuracy on training set Here just test on training data. Next section makes new test data. ``` dr_ref = wrap_as_xarray(integrated_ref) # reference "truth" dr_all_train = xarray.concat([dr_nn, dr_2nd, dr_1st, dr_ref], dim='model') dr_all_train.coords['model'] = ['nn', '2nd', '1st', 'ref'] (dr_all_train.isel(time=[0, 16, 64, 128, 256], sample=[4, 10, 16]) .plot(hue='model', col='time', row='sample', alpha=0.6, linewidth=2) ) # neural net model (blue line) almost overlaps with reference truth (red line); so lines are hard to see clearly ( (dr_all_train.sel(model=['nn', '1st', '2nd']) - dr_all_train.sel(model='ref')) .pipe(abs).mean(dim=['x', 'sample']) # mean absolute error .isel(time=slice(0, 129, 2)) # the original error series oscillates between odd & even steps, because CFL=0.5 .plot(hue='model') ) plt.title('Error on training set') plt.grid() ``` # Prediction on new test data ``` np.random.seed(41) height_list_test = np.random.uniform(0.1, 0.9, size=10) # width_list_test = np.random.uniform(1/16, 1/4, size=3) # doesn't make sense to randomly sample widths of square waves, as a square has to align with grid c_init_test = make_multi_square( x_coarse, height_list = height_list_test, width_list = width_list # just use width in training set ) c_init_test.shape # (sample, x, y) height_list_test # , width_list_test plt.plot(x_coarse, c_init_test[5]) initial_state_test = { 'concentration': c_init_test.astype(np.float32), # tensorflow code expects float32 'x_velocity': np.ones(c_init_test.shape, np.float32) * 1.0, 'y_velocity': np.zeros(c_init_test.shape, np.float32) } for k, v in initial_state_test.items(): print(k, v.shape) %time dr_nn_test = wrap_as_xarray(integrate.integrate_steps(model_nn, initial_state_test, time_steps)) %time dr_1st_test = wrap_as_xarray(integrate.integrate_steps(model_1st, initial_state_test, time_steps)) %time dr_2nd_test = wrap_as_xarray(integrate.integrate_steps(model_2nd, initial_state_test, time_steps)) dr_sol_test = xarray.concat([dr_nn_test, dr_2nd_test, dr_1st_test], dim='model') dr_sol_test.coords['model'] = ['Neural net', 'Baseline', 'First order'] (dr_sol_test.isel(time=[0, 16, 64, 128, 256], sample=[4, 10, 16]) .plot(hue='model', col='time', row='sample', alpha=0.6, linewidth=2) ) plt.ylim(0, 1) (dr_sol_test.isel(time=[0, 16, 64, 256], sample=16).rename({'time': 'Time step'}) .plot(hue='model', col='Time step', alpha=0.6, col_wrap=2, linewidth=2, figsize=[6, 4.5], ylim=[None, 0.8]) ) plt.suptitle('Advection under 1-D constant velocity', y=1.05) 
plt.savefig('1d-test-sample.png', dpi=288, bbox_inches='tight') # files.download('1d-test-sample.png') ``` ### Reference solution for test set ``` # need to re-evaluate initial condition on high-resolution grid c_init_fine_test = make_multi_square( x_fine, height_list = height_list_test, width_list = width_list ) initial_state_fine_test = { 'concentration': c_init_fine_test.astype(np.float32), # tensorflow code expects float32 'x_velocity': np.ones(c_init_fine_test.shape, np.float32) * 1.0, 'y_velocity': np.zeros(c_init_fine_test.shape, np.float32) } %time integrated_ref_test = reference_solution(initial_state_fine_test, fine_grid, coarse_grid) dr_ref_test = wrap_as_xarray(integrated_ref_test) # reference "truth" dr_all_test = xarray.concat([dr_nn_test, dr_2nd_test, dr_1st_test, dr_ref_test], dim='model') dr_all_test.coords['model'] = ['Neural net', 'Baseline', 'First order', 'Reference'] (dr_all_test.isel(time=[0, 16, 64, 128, 256], sample=[4, 10, 16]) .plot(hue='model', col='time', row='sample', alpha=0.6, linewidth=2) ) plt.ylim(0, 1) (dr_all_test.isel(time=[0, 16, 64, 256], sample=16).rename({'time': 'Time step'}) .plot(hue='model', col='Time step', alpha=0.6, col_wrap=2, linewidth=2, figsize=[6, 4.5], ylim=[None, 0.8]) ) plt.suptitle('Advection under 1-D constant velocity', y=1.05) # plt.savefig('1d-test-sample.png', dpi=288, bbox_inches='tight') ``` ## Plot test accuracy ``` ( (dr_all_test.sel(model=['Neural net', 'Baseline', 'First order']) - dr_all_test.sel(model='Reference')) .pipe(abs).mean(dim=['x', 'sample']) # mean absolute error .isel(time=slice(0, 257, 2)) # the original error series oscillates between odd & even steps, because CFL=0.5 .plot(hue='model', figsize=[4.5, 3.5], linewidth=2.0) ) plt.title('Error for 1-D advection') plt.xlabel('Time step') plt.ylabel('Mean Absolute Error (MAE)') plt.grid() plt.xticks(range(0, 257, 50)) plt.savefig('1d-test-mae.png', dpi=288, bbox_inches='tight') # files.download('1d-test-mae.png') ``` # Out-of-sample prediction ``` def make_gaussian(x, height=1.0, center=0.25, width=0.1): """ Args: x: Numpy array. 
Shape should be (nx, 1) or (nx,) height: float, peak concentration center: float, relative center position in 0~1 width: float, relative width in 0~0.5 Returns: Numpy array, same shape as `x` """ nx = x.shape[0] x_max = x.max() center *= x_max width *= x_max c = height * np.exp(-(x-center)**2 / width**2) return c def make_multi_gaussian(x, height_list, width_list): c_list = [] for height in height_list: for width in width_list: c_temp = make_gaussian(x, height=height, width=width) c_list.append(c_temp) return np.array(c_list) np.random.seed(41) height_list_guass = np.random.uniform(0.1, 0.5, size=10) width_list_guass = np.random.uniform(1/16, 1/4, size=3) c_init_guass = make_multi_gaussian( x_coarse, height_list = height_list_guass, width_list = width_list_guass ) c_init_guass.shape # (sample, x, y) height_list_guass, width_list_guass plt.plot(x_coarse, make_gaussian(x_coarse, height=0.5)) initial_state_gauss = { 'concentration': c_init_guass.astype(np.float32), # tensorflow code expects float32 'x_velocity': np.ones(c_init_guass.shape, np.float32) * 1.0, 'y_velocity': np.zeros(c_init_guass.shape, np.float32) } for k, v in initial_state_gauss.items(): print(k, v.shape) %time dr_nn_gauss = wrap_as_xarray(integrate.integrate_steps(model_nn, initial_state_gauss, time_steps)) (dr_nn_gauss.isel(time=[0, 16, 64, 128, 256], sample=[4, 10, 16]) .plot(hue='time', col='sample', alpha=0.6, linewidth=2) ) (dr_nn_gauss.isel(time=[0, 4, 16, 64], sample=[0, 4, 16]) .plot(col='time', hue='sample', col_wrap=4, alpha=0.6, linewidth=2) ) plt.suptitle('Out-of-sample prediction', y=1.05) (dr_nn_gauss.isel(time=[0, 16, 64, 256], sample=[0, 4, 29]).rename({'time': 'Time step'}) .plot(col='Time step', hue='sample', alpha=0.6, linewidth=2, col_wrap=2, figsize=[6, 4.5]) ) plt.suptitle('Neural net out-of-sample prediction', y=1.05) plt.savefig('out-of-sample.png', dpi=288, bbox_inches='tight') # files.download('out-of-sample.png') ```
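To put a number on how well the model extrapolates to these Gaussian shapes, one could repeat the MAE evaluation from the test section against a high-resolution reference run for the same initial conditions. The sketch below only reuses helpers already defined in this notebook (`make_multi_gaussian`, `reference_solution`, `wrap_as_xarray`) and is an addition, not part of the original tutorial.

```
# Added sketch: out-of-sample MAE for the Gaussian initial conditions,
# computed against a high-resolution reference run regridded to the coarse grid.
c_init_fine_gauss = make_multi_gaussian(
    x_fine, height_list=height_list_guass, width_list=width_list_guass
)
initial_state_fine_gauss = {
    'concentration': c_init_fine_gauss.astype(np.float32),
    'x_velocity': np.ones(c_init_fine_gauss.shape, np.float32) * 1.0,
    'y_velocity': np.zeros(c_init_fine_gauss.shape, np.float32),
}

dr_ref_gauss = wrap_as_xarray(
    reference_solution(initial_state_fine_gauss, fine_grid, coarse_grid)
)

mae_gauss = abs(dr_nn_gauss - dr_ref_gauss).mean(dim=['x', 'sample'])
mae_gauss.isel(time=slice(0, 257, 2)).plot()
plt.title('Neural net MAE on Gaussian (out-of-sample) data')
plt.ylabel('Mean Absolute Error (MAE)')
plt.grid()
```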
# Multiclass logistic regression from scratch If you've made it through our tutorials on linear regression from scratch, then you're past the hardest part. You already know how to load and manipulate data, build computation graphs on the fly, and take derivatives. You also know how to define a loss function, construct a model, and write your own optimizer. Nearly all neural networks that we'll build in the real world consist of these same fundamental parts. The main differences will be the type and scale of the data and the complexity of the models. And every year or two, a new hipster optimizer comes around, but at their core they're all subtle variations of stochastic gradient descent. In [the previous chapter](logistic-regressio-gluon.ipynb), we introduced logistic regression, a classic algorithm for performing binary classification. We implemented a model $$\hat{y} = \sigma( \boldsymbol{x} \boldsymbol{w}^T + b)$$ where $\sigma$ is the sigmoid squashing function. This activation function on the final layer was crucial because it forced our outputs to take values in the range [0,1]. That allowed us to interpret these outputs as probabilties. We then updated our parameters to give the true labels (which take values either 1 or 0) the highest probability. In that tutorial, we looked at predicting whether or not an individual's income exceeded $50k based on features available in 1994 census data. Binary classification is quite useful. We can use it to predict spam vs. not spam or cancer vs not cancer. But not every problem fits the mold of binary classification. Sometimes we encounter a problem where each example could belong to one of $k$ classes. For example, a photograph might depict a cat or a dog or a zebra or ... (you get the point). Given $k$ classes, the most naive way to solve a *multiclass classification* problem is to train $k$ different binary classifiers $f_i(\boldsymbol{x})$. We could then predict that an example $\boldsymbol{x}$ belongs to the class $i$ for which the probability that the label applies is highest: $$\max_i {f_i(\boldsymbol{x})}$$ There's a smarter way to go about this. We could force the output layer to be a discrete probability distribution over the $k$ classes. To be a valid probability distribution, we'll want the output $\hat{y}$ to (i) contain only non-negative values, and (ii) sum to 1. We accomplish this by using the *softmax* function. Given an input vector $z$, softmax does two things. First, it exponentiates (elementwise) $e^{z}$, forcing all values to be strictly positive. Then it normalizes so that all values sum to $1$. Following the softmax operation computes the following $$\text{softmax}(\boldsymbol{z}) = \frac{e^{\boldsymbol{z}} }{\sum_{i=1}^k e^{z_i}}$$ Because now we have $k$ outputs and not $1$ we'll need weights connecting each of our inputs to each of our outputs. Graphically, the network looks something like this: ![](https://github.com/zackchase/mxnet-the-straight-dope/blob/master/img/simple-softmax-net.png?raw=true) We can represent these weights one for each input node, output node pair in a matrix $W$. We generate the linear mapping from inputs to outputs via a matrix-vector product $\boldsymbol{x} W + \boldsymbol{b}$. Note that the bias term is now a vector, with one component for each output node. The whole model, including the activation function can be written: $$\hat{y} = \text{softmax}(\boldsymbol{x} W + \boldsymbol{b})$$ This model is sometimes called *multiclass logistic regression*. 
Other common names for it include *softmax regression* and *multinomial regression*. For these concepts to sink in, let's actually implement softmax regression, and pick a slightly more interesting dataset this time. We're going to classify images of handwritten digits like these: ![png](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/example/mnist.png) ## About batch training In the above, we used plain lowercase letters for scalar variables, bolded lowercase letters for **row** vectors, and uppercase letters for matrices. Assume we have $d$ inputs and $k$ outputs. Let's note the shapes of the various variables explicitly as follows: $$\underset{1 \times k}{\boldsymbol z} = \underset{1 \times d}{\boldsymbol{x}}\ \underset{d \times k}{W} + \underset{1 \times k}{\boldsymbol{b}}$$ Often we would one-hot encode the output label, for example $\hat y = 5$ would be $\boldsymbol {\hat y}_{one-hot} = [0, 0, 0, 0, 1, 0, 0, 0, 0, 0]$ when one-hot encoded for a 10-class classfication problem. So $\hat{y} = \text{softmax}(\boldsymbol z)$ becomes $$\underset{1 \times k}{\boldsymbol{\hat{y}}_{one-hot}} = \text{softmax}_{one-hot}(\underset{1 \times k}{\boldsymbol z})$$ When we input a batch of $m$ training examples, we would have matrix $\underset{m \times d}{X}$ that is the vertical stacking of individual training examples $\boldsymbol x_i$, due to the choice of using row vectors. $$ X= \begin{bmatrix} \boldsymbol x_1 \\ \boldsymbol x_2 \\ \vdots \\ \boldsymbol x_m \end{bmatrix} = \begin{bmatrix} x_{11} & x_{12} & x_{13} & \dots & x_{1d} \\ x_{21} & x_{22} & x_{23} & \dots & x_{2d} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ x_{m1} & x_{m2} & x_{m3} & \dots & x_{md} \end{bmatrix}$$ Under this batch training situation, ${\boldsymbol{\hat{y}}_{one-hot}} = \text{softmax}({\boldsymbol z})$ turns into $$Y = \text{softmax}(Z) = \text{softmax}(XW + B)$$ where matrix $\underset{m \times k}{B}$ is formed by having $m$ copies of $\boldsymbol b$ as follows $$ B = \begin{bmatrix} \boldsymbol b \\ \boldsymbol b \\ \vdots \\ \boldsymbol b \end{bmatrix} = \begin{bmatrix} b_{1} & b_{2} & b_{3} & \dots & b_{k} \\ b_{1} & b_{2} & b_{3} & \dots & b_{k} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ b_{1} & b_{2} & b_{3} & \dots & b_{k} \end{bmatrix}$$ In actual implementation we can often get away with using $\boldsymbol b$ directly instead of $B$ in the equation for $Z$ above, due to [broadcasting](https://docs.scipy.org/doc/numpy-1.13.0/user/basics.broadcasting.html). Each row of matrix $\underset{m \times k}{Z}$ corresponds to one training example. The softmax function operates on each row of matrix $Z$ and returns a matrix $\underset{m \times k}Y$, each row of which corresponds to the one-hot encoded prediction of one training example. ## Imports To start, let's import the usual libraries. ``` from __future__ import print_function import numpy as np import mxnet as mx from mxnet import nd, autograd, gluon mx.random.seed(1) ``` ## Set Context We'll also want to set the compute context where our data will typically live and where we'll be doing our modeling. Feel free to go ahead and change `model_ctx` to `mx.gpu(0)` if you're running on an appropriately endowed machine. ``` data_ctx = mx.cpu() model_ctx = mx.cpu() # model_ctx = mx.gpu() ``` ## The MNIST dataset This time we're going to work with real data, each a 28 by 28 centrally cropped black & white photograph of a handwritten digit. Our task will be come up with a model that can associate each image with the digit (0-9) that it depicts. 
To start, we'll use MXNet's utility for grabbing a copy of this dataset. The datasets accept a transform callback that can preprocess each item. Here we cast data and label to floats and normalize data to range [0, 1]: ``` def transform(data, label): return data.astype(np.float32)/255, label.astype(np.float32) mnist_train = gluon.data.vision.MNIST(train=True, transform=transform) mnist_test = gluon.data.vision.MNIST(train=False, transform=transform) ``` There are two parts of the dataset for training and testing. Each part has N items and each item is a tuple of an image and a label: ``` image, label = mnist_train[0] print(image.shape, label) ``` Note that each image has been formatted as a 3-tuple (height, width, channel). For color images, the channel would have 3 dimensions (red, green and blue). ## Record the data and label shapes Generally, we don't want our model code to care too much about the exact shape of our input data. This way we could switch in a different dataset without changing the code that follows. Let's define variables to hold the number of inputs and outputs. ``` num_inputs = 784 num_outputs = 10 num_examples = 60000 ``` Machine learning libraries generally expect to find images in (batch, channel, height, width) format. However, most libraries for visualization prefer (height, width, channel). Let's transpose our image into the expected shape. In this case, matplotlib expects either (height, width) or (height, width, channel) with RGB channels, so let's broadcast our single channel to 3. ``` im = mx.nd.tile(image, (1,1,3)) print(im.shape) ``` Now we can visualize our image and make sure that our data and labels line up. ``` import matplotlib.pyplot as plt plt.imshow(im.asnumpy()) plt.show() ``` Ok, that's a beautiful five. ## Load the data iterator Now let's load these images into a data iterator so we don't have to do the heavy lifting. ``` batch_size = 64 train_data = mx.gluon.data.DataLoader(mnist_train, batch_size, shuffle=True) ``` We're also going to want to load up an iterator with *test* data. After we train on the training dataset we're going to want to test our model on the test data. Otherwise, for all we know, our model could be doing something stupid (or treacherous?) like memorizing the training examples and regurgitating the labels on command. ``` test_data = mx.gluon.data.DataLoader(mnist_test, batch_size, shuffle=False) ``` ## Allocate model parameters Now we're going to define our model. For this example, we're going to ignore the multimodal structure of our data and just flatten each image into a single 1D vector with 28x28 = 784 components. Because our task is multiclass classification, we want to assign a probability to each of the classes $P(Y = c \mid X)$ given the input $X$. In order to do this we're going to need one vector of 784 weights for each class, connecting each feature to the corresponding output. Because there are 10 classes, we can collect these weights together in a 784 by 10 matrix. We'll also want to allocate one offset for each of the outputs. We call these offsets the *bias term* and collect them in the 10-dimensional array ``b``. ``` W = nd.random_normal(shape=(num_inputs, num_outputs),ctx=model_ctx) b = nd.random_normal(shape=num_outputs,ctx=model_ctx) params = [W, b] ``` As before, we need to let MXNet know that we'll be expecting gradients corresponding to each of these parameters during training. 
``` for param in params: param.attach_grad() ``` ## Multiclass logistic regression In the linear regression tutorial, we performed regression, so we had just one output $\hat{y}$ and tried to push this value as close as possible to the true target $y$. Here, instead of regression, we are performing *classification*, where we want to assign each input $X$ to one of $L$ classes. The basic modeling idea is that we're going to linearly map our input $X$ onto 10 different real valued outputs ``y_linear``. Then, before outputting these values, we'll want to normalize them so that they are non-negative and sum to 1. This normalization allows us to interpret the output $\hat{y}$ as a valid probability distribution. ``` def softmax(y_linear): exp = nd.exp(y_linear-nd.max(y_linear, axis=1).reshape((-1,1))) norms = nd.sum(exp, axis=1).reshape((-1,1)) return exp / norms sample_y_linear = nd.random_normal(shape=(2,10)) sample_yhat = softmax(sample_y_linear) print(sample_yhat) ``` Let's confirm that indeed all of our rows sum to 1. ``` print(nd.sum(sample_yhat, axis=1)) ``` But for small rounding errors, the function works as expected. ## Define the model Now we're ready to define our model ``` def net(X): y_linear = nd.dot(X, W) + b yhat = softmax(y_linear) return yhat ``` ## The cross-entropy loss function Before we can start training, we're going to need to define a loss function that makes sense when our prediction is a probability distribution. The relevant loss function here is called cross-entropy and it may be the most common loss function you'll find in all of deep learning. That's because at the moment, classification problems tend to be far more abundant than regression problems. The basic idea is that we're going to take a target Y that has been formatted as a one-hot vector, meaning one value corresponding to the correct label is set to 1 and the others are set to 0, e.g. ``[0, 1, 0, 0, 0, 0, 0, 0, 0, 0]``. The basic idea of cross-entropy loss is that we only care about how much probability the prediction assigned to the correct label. In other words, for true label 2, we only care about the component of yhat corresponding to 2. Cross-entropy attempts to maximize the log-likelihood given to the correct labels. ``` def cross_entropy(yhat, y): return - nd.sum(y * nd.log(yhat+1e-6)) ``` ## Optimizer For this example we'll be using the same stochastic gradient descent (SGD) optimizer as last time. ``` def SGD(params, lr): for param in params: param[:] = param - lr * param.grad ``` ## Write evaluation loop to calculate accuracy While cross-entropy is nice, differentiable loss function, it's not the way humans usually evaluate performance on multiple choice tasks. More commonly we look at accuracy, the number of correct answers divided by the total number of questions. Let's write an evaluation loop that will take a data iterator and a network, returning the model's accuracy averaged over the entire dataset. ``` def evaluate_accuracy(data_iterator, net): numerator = 0. denominator = 0. 
for i, (data, label) in enumerate(data_iterator): data = data.as_in_context(model_ctx).reshape((-1,784)) label = label.as_in_context(model_ctx) label_one_hot = nd.one_hot(label, 10) output = net(data) predictions = nd.argmax(output, axis=1) numerator += nd.sum(predictions == label) denominator += data.shape[0] return (numerator / denominator).asscalar() ``` Because we initialized our model randomly, and because roughly one tenth of all examples belong to each of the ten classes, we should have an accuracy in the ball park of .10. ``` evaluate_accuracy(test_data, net) ``` ## Execute training loop ``` epochs = 5 learning_rate = .005 for e in range(epochs): cumulative_loss = 0 for i, (data, label) in enumerate(train_data): data = data.as_in_context(model_ctx).reshape((-1,784)) label = label.as_in_context(model_ctx) label_one_hot = nd.one_hot(label, 10) with autograd.record(): output = net(data) loss = cross_entropy(output, label_one_hot) loss.backward() SGD(params, learning_rate) cumulative_loss += nd.sum(loss).asscalar() test_accuracy = evaluate_accuracy(test_data, net) train_accuracy = evaluate_accuracy(train_data, net) print("Epoch %s. Loss: %s, Train_acc %s, Test_acc %s" % (e, cumulative_loss/num_examples, train_accuracy, test_accuracy)) ``` ## Using the model for prediction Let's make it more intuitive by picking 10 random data points from the test set and use the trained model for predictions. ``` # Define the function to do prediction def model_predict(net,data): output = net(data) return nd.argmax(output, axis=1) # let's sample 10 random data points from the test set sample_data = mx.gluon.data.DataLoader(mnist_test, 10, shuffle=True) for i, (data, label) in enumerate(sample_data): data = data.as_in_context(model_ctx) print(data.shape) im = nd.transpose(data,(1,0,2,3)) im = nd.reshape(im,(28,10*28,1)) imtiles = nd.tile(im, (1,1,3)) plt.imshow(imtiles.asnumpy()) plt.show() pred=model_predict(net,data.reshape((-1,784))) print('model predictions are:', pred) break ``` ## Conclusion Jeepers. We can get nearly 90% accuracy at this task just by training a linear model for a few seconds! You might reasonably conclude that this problem is too easy to be taken seriously by experts. But until recently, many papers (Google Scholar says 13,800) were published using results obtained on this data. Even this year, I reviewed a paper whose primary achievement was an (imagined) improvement in performance. While MNIST can be a nice toy dataset for testing new ideas, we don't recommend writing papers with it. ## Next [Softmax regression with gluon](../chapter02_supervised-learning/softmax-regression-gluon.ipynb) For whinges or inquiries, [open an issue on GitHub.](https://github.com/zackchase/mxnet-the-straight-dope)
# Pokémon Image Embeddings Can you create image embeddings of Pokémon in order to compare them? Let's find out! ``` import requests import os from tqdm.auto import tqdm from imgbeddings import imgbeddings from PIL import Image import logging import numpy as np import pandas as pd logger = logging.getLogger() logger.setLevel(logging.INFO) ``` Here's a compact script [modified from my AI Generated Pokémon experiments](https://github.com/minimaxir/ai-generated-pokemon-rudalle/blob/master/build_image_dataset.py) to obtain the metadata of all the normal forms of Pokémon, and download the images of the official portraits to `pokemon_images` if not already present. (roughly 30MB total) ``` folder_name = "pokemon_images" size = 224 graphql_query = """ { pokemon_v2_pokemon(where: {id: {_lt: 10000}}, order_by: {id: asc}) { pokemon_v2_pokemontypes { pokemon_v2_type { name } } id name } } """ image_url = ( "https://raw.githubusercontent.com/PokeAPI/sprites/master/" "sprites/pokemon/other/official-artwork/{0}.png" ) r = requests.post( "https://beta.pokeapi.co/graphql/v1beta", json={ "query": graphql_query, }, ) pokemon = r.json()["data"]["pokemon_v2_pokemon"] def encode_pokemon(p): return { "id": p["id"], "name": p["name"].title(), "type_1": p["pokemon_v2_pokemontypes"][0]["pokemon_v2_type"]["name"].title(), } poke_dict = [encode_pokemon(p) for p in pokemon] if os.path.exists(folder_name): print(f"/{folder_name} already exists; skipping downloading images.") else: print(f"Saving Pokemon images to /{folder_name}.") os.makedirs(folder_name) for p in tqdm(pokemon): p_id = p["id"] img = Image.open(requests.get(image_url.format(p_id), stream=True).raw) img = img.resize((size, size), Image.ANTIALIAS) # https://stackoverflow.com/a/9459208 bg = Image.new("RGB", (size, size), (255, 255, 255)) bg.paste(img, mask=img.split()[3]) name = f"{p_id:04d}.png" bg.save(os.path.join(folder_name, name)) ibed = imgbeddings() ``` Now we can generate the 768D imgbeddings for all the Pokémon. ``` # get a list of all the Pokemon image filenames inputs = [os.path.join(folder_name, x) for x in os.listdir(folder_name)] inputs.sort() print(inputs[0:10]) embeddings = ibed.to_embeddings(inputs) embeddings.shape ``` Fit a PCA to the imgbeddings (for simplicity, we won't use any image augmentation as the Pokémon designs are already standardized), and generate the new embeddings. ``` ibed.pca_fit(embeddings, 128) embeddings_pca = ibed.pca_transform(embeddings) embeddings_pca.shape ``` The PCA is automatically saved as `pca.npz`; we can also save the embeddings as a separate `.npy` file and reload them for conveinence. ``` np.save("pokemon_embeddings_pca.npy", embeddings_pca) ``` # Pokémon Similarity Search Let's build a `faiss` index to see which Pokémon are closest to another by visual design! (you can install faiss via `pip3 install faiss-cpu`) This approach find the Pokémon most similar using [cosine similarity](https://github.com/facebookresearch/faiss/wiki/MetricType-and-distances#how-can-i-index-vectors-for-cosine-similarity). First, load the embeddings and build the index. ``` import faiss from sklearn.preprocessing import normalize from IPython.display import HTML from io import BytesIO import base64 index = faiss.index_factory(embeddings_pca.shape[1], "Flat", faiss.METRIC_INNER_PRODUCT) index.add(normalize(embeddings_pca)) index.ntotal ``` Load a Pokémon embedding you already generated, then find the closest Pokémon! `faiss` will return the indices of the corresponding Pokemon. 
Let's start with Pikachu (id `25`), and find the 10 closest Pokemon and get their similarity. ``` search_id = 25 # faiss results are zero-indexed, so must -1 when searching, +1 after retrieving q_embedding = np.expand_dims(embeddings_pca[search_id - 1, :], 0) # the search will return the query itself, so search for +1 result distances, indices = index.search(normalize(q_embedding), 10 + 1) print(indices + 1) print(distances) # https://www.kaggle.com/code/stassl/displaying-inline-images-in-pandas-dataframe/notebook def get_thumbnail(path): i = Image.open(path) i.thumbnail((64, 64), Image.LANCZOS) return i def image_base64(im): if isinstance(im, str): im = get_thumbnail(im) with BytesIO() as buffer: im.save(buffer, 'jpeg') return base64.b64encode(buffer.getvalue()).decode() def image_formatter(im): return f'<img src="data:image/jpeg;base64,{image_base64(im)}">' def percent_formatter(perc): return f"{perc:.1%}" data = [] for i, idx in enumerate(indices[0]): data.append([idx + 1, poke_dict[idx]["name"], distances[0][i], get_thumbnail(inputs[idx])]) pd.set_option('display.max_colwidth', None) df = pd.DataFrame(data, columns=["ID", "Name", "Similarity", "Image"]) HTML(df.to_html(formatters={"Image": image_formatter, "Similarity": percent_formatter}, escape=False, index=False)) ``` To the eyes of an AI, similarity is not necessairly color; it can be shapes and curves as well. If you check other Pokémon by tweaking the `search_id` above, you'll notice that the similar images tend to have the [same stance and body features](https://twitter.com/minimaxir/status/1507166313281585164). You can use the cell below to run your own images though the index and see what Pokémon they are most similar to! ``` img_path = "/Users/maxwoolf/Downloads/shrek-facts.jpg" img_embedding = ibed.to_embeddings(img_path) distances, indices = index.search(normalize(img_embedding), 10) data = [["—", "Input Image", 1, get_thumbnail(img_path)]] for i, idx in enumerate(indices[0]): data.append([idx + 1, poke_dict[idx]["name"], distances[0][i], get_thumbnail(inputs[idx])]) pd.set_option('display.max_colwidth', None) df = pd.DataFrame(data, columns=["ID", "Name", "Similarity", "Image"]) HTML(df.to_html(formatters={"Image": image_formatter, "Similarity": percent_formatter}, escape=False, index=False)) ```
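If you want to reuse the similarity search later without regenerating the embeddings, the `faiss` index can be written to disk and read back. This is a small optional sketch; the file name `pokemon_pca.index` is arbitrary.

```
# Optional (added): persist the index so the search can be reloaded later
# without recomputing the embeddings.
faiss.write_index(index, "pokemon_pca.index")

# ...later, in a fresh session:
index = faiss.read_index("pokemon_pca.index")
embeddings_pca = np.load("pokemon_embeddings_pca.npy")

# repeat the Pikachu query as a sanity check
q = normalize(np.expand_dims(embeddings_pca[25 - 1, :], 0))
distances, indices = index.search(q, 10 + 1)
print(indices + 1)
```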
# Introduction to Machine Learning (ML) This tutorial aims to get you familiar with the basis of ML. You will go through several tasks to build some basic regression and classification models. ``` #essential imports import sys sys.path.insert(1,'utils') import numpy as np import matplotlib.pyplot as plt # display plots in this notebook %matplotlib nbagg import pandas as pd import ml_utils print np.__version__ print np.__file__ ``` ## 1. Linear regression ### 1. 1. Univariate linear regression Let start with the most simple regression example. Firstly, read the data in a file named "house_price_statcrunch.xls". ``` house_data = pd.ExcelFile('data/house_price_statcrunch.xls').parse(0) ``` Let see what is inside by printing out the first few lines. ``` print " ".join([field.ljust(10) for field in house_data.keys()]) for i in xrange(10): print " ".join([str(house_data[field][i]).ljust(10) for field in house_data.keys()]) TOTALS = len(house_data['House']) print "...\n\nTotal number of samples: {}".format(TOTALS) ``` Let preserve some data for test. Here we extract 10% for testing. ``` np.random.seed(0) idx = np.random.permutation(TOTALS) idx_train = idx[:90] idx_test = idx[90:] house_data_train = {} house_data_test = {} for field in house_data.keys(): house_data_test[field] = house_data[field][idx_test] house_data_train[field] = house_data[field][idx_train] ``` For univariate regression, we are interested in the "size" parameter only. Let's extract necessary data and visualise it. ``` X, Z = ml_utils.extract_data(house_data, ['size'], ['price']) Z = Z/1000.0 #price has unit x1000 USD plt.plot(X[0],Z[0], '.') plt.xlabel('size (feet^2)') plt.ylabel('price (USD x1000)') plt.title('house data scatter plot') plt.show() ``` Our goal is to build a house price prediction model that will approximate the price of a house given its size. To do it, we need to fit a linear line (y = ax + b) to the data above using linear regression. Remember the procedure: 1. Define training set 2. Define hypothesis function. Here $F(x,W) = Wx$ 3. Loss function. Here $L(W) = \frac{1}{2N}{\sum_{i=1}^N{(F(x^{(i)},W)-z)^2}}$ 4. Update procedure (gradient descent). $W = W - k\frac{\partial L}{\partial W}$ To speed up computation, you should avoid using loop when working with scripting languges e.g. Python, Matlab. Try using array/matrix instead. Here you are provided code for step 1 and 2. Your will be asked to implement step 3 and 4. Some skeleton code will be provided for your convenience. ``` """step 1: define training and test set X, Z.""" X_train, Z_train = ml_utils.extract_data(house_data_train, ['size'], ['price']) X_test, Z_test = ml_utils.extract_data(house_data_test, ['size'], ['price']) Z_train = Z_train/1000.0 #price has unit x1000 USD Z_test = Z_test/1000.0 ##normalise data, uncomment for now #X_train, u, scale = ml_utils.normalise_data(X_train) #X_test = ml_utils.normalise_data(X_test, u, scale) N = Z_train.size #number of training samples ones_array = np.ones((1,N),dtype=np.float32) X_train = np.concatenate((X_train, ones_array), axis=0) #why? X_test = np.concatenate((X_test, np.ones((1, Z_test.size), dtype=np.float32)), axis = 0) #same for test data print "size of X_train ", X_train.shape print "size of Z_train ", Z_train.shape """step 2: define hypothesis function""" def F_Regression(X, W): """ Compute the hypothesis function y=F(x,W) in batch. 
input: X input array, must has size DxN (each column is one sample) W parameter array, must has size 1xD output: linear multiplication of W*X, size 1xN """ return np.dot(W,X) ``` **Task 1.1**: define the loss function for linear regression according to the following formula: $$L = \frac{1}{2N}{\sum_{i=1}^N{(y^{(i)}-z^{(i)})^2}}$$ Please fill in the skeleton code below. Hints: (i) in Python numpy the square operator $x^2$ is implemented as x**2; (ii) try to use matrix form and avoid for loop ``` """step 3: loss function""" def Loss_Regression(Y, Z): """ Compute the loss between the predicted (Y=F(X,W)) and the groundtruth (Z) values. input: Y predicted results Y = F(X,W) with given parameter W, has size 1xN Z groundtruth vector Z, has size 1xN output: loss value, is a scalar """ #enter the code here N = float(Z.size) diff = Y-Z return 1/(2*N)*np.dot(diff, diff.T).squeeze() ``` **Task 1.2**: compute gradient of the loss function w.r.t parameter W according to the following formula:<br> $$\frac{\partial L}{\partial W} = \frac{1}{N}\sum_{i=1}^N{(y^{(i)}-z^{(i)})x^{(i)}}$$ Please fill in the skeleton code below. ``` """step 4: gradient descent - compute gradient""" def dLdW_Regression(X, Y, Z): """ Compute gradient of the loss w.r.t parameter W. input: X input array, each column is one sample, has size DxN Y predicted values, has size 1xN Z groundtruth values, has size 1xN output: gradient, has same size as W """ #enter the code here N = float(Z.size) return 1/N * (Y-Z).dot(X.T) ``` Now we will perform gradient descent update procedure according to the following formula: $$W = W - k\frac{\partial L}{\partial W}$$ Here we use fixed number of iterations and learning rate. ``` """step 4: gradient descent - update loop""" np.random.seed(0) W = np.random.rand(1,X_train.shape[0]).astype(np.float32) #W has size 1xD, randomly initialised k = 1e-8 #learning rate niters = 160 #number of training iterations #visualisation settings vis_interval = niters/50 loss_collections = [] plt.close() plt.ion() fig = plt.figure(1,figsize=(16, 4)) axis_loss = fig.add_subplot(131) axis_data = fig.add_subplot(132) for i in xrange(niters): Y_train = F_Regression(X_train,W) #compute hypothesis function aka. predicted values loss = Loss_Regression(Y_train, Z_train) #compute loss dLdW = dLdW_Regression(X_train, Y_train, Z_train) #compute gradient W = W - k*dLdW #update loss_collections.append(loss) if (i+1)% vis_interval == 0: ml_utils.plot_loss(axis_loss, range(i+1),loss_collections, "loss = " + str(loss)) ml_utils.plot_scatter_and_line(axis_data, X_train, Z_train, W, "iter #" + str(i)) fig.canvas.draw() print "Learned parameters ", W.squeeze() ``` Now evaluate your learned model using the test set. Measure the total error of your prediction ``` Y_test = F_Regression(X_test, W) error = Loss_Regression(Y_test, Z_test) print "Evaluation error: ", error ``` **Quiz**: you may notice the learning rate k is set to $10^{-8}$. Why is it too small? Try to play with several bigger values of k, you will soon find out that the training is extremely sensitive to the learning rate (the training easily diverges or even causes "overflow" error with large k).<br><br> Answer: It is because both the input (size of house) and output (price) have very large range of values, which result in very large gradient. **Task 1.3**: Test your learned model. Suppose you want to sell a house of size 3000 $feat^2$, how much do you expect your house will cost?<br> Answer: you should get around 260k USD for that house. 
``` x = 3000 x = np.array([x,1])[...,None] #make sure feature vector has size 2xN, here N=1 print "Expected price: ", F_Regression(x,W).squeeze() ``` **Task 1.4**: The gradient descent in the code above terminates after 100 iterations. You may want it to terminate when improvement in the loss is below a threshold. $$\Delta L_t = |L_t - L_{t-1}| < \epsilon$$ Edit the code to terminate the loop when the loss improvement is below $\epsilon=10^{-2}$. Re-evaluate your model to see if its performance has improved. ``` """step 4: gradient descent - update loop""" W = np.random.rand(1,X_train.shape[0]).astype(np.float32) #W has size 1xD, randomly initialised k = 1e-8 #learning rate epsilon = 1e-2 #terminate condition #visualisation settings vis_interval = 10 loss_collections = [] prev_loss = 0 plt.close() plt.ion() fig = plt.figure(1,figsize=(16, 4)) axis_loss = fig.add_subplot(131) axis_data = fig.add_subplot(132) while(1): Y_train = F_Regression(X_train,W) #compute hypothesis function aka. predicted values loss = Loss_Regression(Y_train, Z_train) #compute loss dLdW = dLdW_Regression(X_train, Y_train, Z_train) #compute gradient W = W - k*dLdW #update loss_collections.append(loss) if abs(loss - prev_loss) < epsilon: break prev_loss = loss if (len(loss_collections)+1) % vis_interval==0: #print "Iter #", len(loss_collections) ml_utils.plot_loss(axis_loss, range(len(loss_collections)),loss_collections, "loss = " + str(loss)) ml_utils.plot_scatter_and_line(axis_data, X_train, Z_train, W, "iter #" + str(len(loss_collections))) fig.canvas.draw() print "Learned parameters ", W.squeeze() print "Learning terminates after {} iterations".format(len(loss_collections)) #run the test Y_test = F_Regression(X_test, W) error = Loss_Regression(Y_test, Z_test) print "Evaluation error: ", error ``` Confirm that the error measurement on the test set has improved. ### 1.2 Multivariate regression So far we assume the house price is affected by the size only. Now let consider also other fields "Bedrooms", "Baths", "lot" (location) and "NW" (whether or not the houses face Nothern West direction).<br><br> **Important**: now your feature vector is multi-dimensional, it is crucial to normalise your training set for gradient descent to converge properly. The code below is almost identical to the previous step 1, except it loads more fields and implements data normalisation. ``` """step 1: define training set X, Z.""" selected_fields = ['size', 'Bedrooms', 'Baths', 'lot', 'NW'] X_train, Z_train = ml_utils.extract_data(house_data_train, selected_fields, ['price']) X_test, Z_test = ml_utils.extract_data(house_data_test, selected_fields, ['price']) Z_train = Z_train/1000.0 #price has unit x1000 USD Z_test = Z_test/1000.0 ##normalise X_train, u, scale = ml_utils.normalise_data(X_train) X_test = ml_utils.normalise_data(X_test, u, scale) N = Z_train.size #number of training samples ones_array = np.ones((1,N),dtype=np.float32) X_train = np.concatenate((X_train, ones_array), axis=0) #why? X_test = np.concatenate((X_test, np.ones((1, Z_test.size), dtype=np.float32)), axis = 0) #same for test data print "size of X_train ", X_train.shape print "size of Z_train ", Z_train.shape ``` Now run step 2-4 again. Note the followings: 1. You need not to modify the *Loss_Regression* and *dLdW_Regression* functions. They should generalise enough to work with multi-dimensional data 2. Since your training samples are normalised you can now use much higher learning rate e.g. k = 1e-2 3. 
Note that the plot function *plot_scatter_and_line* will not work in multivariate regression since it is designed for 1-D input only. Consider commenting it out.<br> **Question**: how many iterations are required to pass the threshold $\Delta L < 10^{-2}$ ?<br> Answer: ~4000 iterations (and it will take a while to complete). **Task 1.5**: (a) evaluate your learned model on the test set. (b) Suppose the house you want to sell has a size of 3000 $feet^2$, has 3 bedrooms, 2 baths, lot number 10000 and in NW direction. How much do you think its price would be? Hints: don't forget to normalise the test sample.<br> Answer: You will get ~150k USD only, much lower than the previous prediction based on size only. Your house has an advantage of size, but other parameters matter too. ``` """step 4: gradient descent - update loop""" """ same code but change k = 1e-2""" W = np.random.rand(1,X_train.shape[0]).astype(np.float32) #W has size 1xD, randomly initialised k = 1e-2 #learning rate epsilon = 1e-2 #terminate condition #visualisation settings vis_interval = 10 loss_collections = [] prev_loss = 0 plt.close() plt.ion() fig = plt.figure(1,figsize=(16, 4)) axis_loss = fig.add_subplot(131) #axis_data = fig.add_subplot(132) while(1): Y_train = F_Regression(X_train,W) #compute hypothesis function aka. predicted values loss = Loss_Regression(Y_train, Z_train) #compute loss dLdW = dLdW_Regression(X_train, Y_train, Z_train) #compute gradient W = W - k*dLdW #update loss_collections.append(loss) if abs(loss - prev_loss) < epsilon: break prev_loss = loss if (len(loss_collections)+1) % vis_interval==0: #print "Iter #", len(loss_collections) ml_utils.plot_loss(axis_loss, range(len(loss_collections)),loss_collections, "loss = " + str(loss)) #ml_utils.plot_scatter_and_line(axis_data, X_train, Z_train, W, "iter #" + str(len(loss_collections))) fig.canvas.draw() print "Learned parameters ", W.squeeze() print "Learning terminates after {} iterations".format(len(loss_collections)) """apply on the test set""" Y_test = F_Regression(X_test, W) error = Loss_Regression(Y_test, Z_test) print "Evaluation error: ", error """test a single sample""" x = np.array([3000, 3,2, 10000, 1],dtype=np.float32)[...,None] x = ml_utils.normalise_data(x, u, scale) x = np.concatenate((x,np.ones((1,1))),axis=0) print "Price: ", F_Regression(x,W).squeeze() ``` ### 1.3 Gradient descent with momentum In the latest experiment, our training takes ~4000 iterations to converge. Now let try gradient descent with momentum to speed up the training. We will employ the following formula: $$v_t = m*v_{t-1} + k\frac{\partial L}{\partial W}$$ $$W = W - v_t$$ ``` """step 4: gradient descent with momentum - update loop""" W = np.random.rand(1,X_train.shape[0]).astype(np.float32) #W has size 1xD, randomly initialised k = 1e-2 #learning rate epsilon = 1e-2 #terminate condition m = 0.9 #momentum v = 0 #initial velocity #visualisation settings vis_interval = 10 loss_collections = [] prev_loss = 0 plt.close() plt.ion() fig = plt.figure(1,figsize=(16, 4)) axis_loss = fig.add_subplot(131) #axis_data = fig.add_subplot(132) while(1): Y_train = F_Regression(X_train,W) #compute hypothesis function aka. 
predicted values loss = Loss_Regression(Y_train, Z_train) #compute loss dLdW = dLdW_Regression(X_train, Y_train, Z_train) #compute gradient v = v*m + k*dLdW W = W - v #update loss_collections.append(loss) if abs(loss - prev_loss) < epsilon: break prev_loss = loss if (len(loss_collections)+1) % vis_interval==0: #print "Iter #", len(loss_collections) ml_utils.plot_loss(axis_loss, range(len(loss_collections)),loss_collections, "loss = " + str(loss)) #ml_utils.plot_scatter_and_line(axis_data, X_train, Z_train, W, "iter #" + str(len(loss_collections))) fig.canvas.draw() print "Learned parameters ", W.squeeze() print "Learning terminates after {} iterations".format(len(loss_collections)) ``` ## 2. Classification In this part you will walk through different steps to implement several basic classification tasks. ### 2.1. Binary classification Imagine you were an USP professor who teaches Computer Science. This year there is 100 year-one students who want to register your module. You examine their performance based on their scores on two exams. You have gone through the records of 80 students and already made admission decisions for them. Now you want to build a model to automatically make admission decisions for the rest 20 students. Your training data will be the exam results and admission decisions for the 80 students that you have assessed.<br><br> Firstly, let load the data. ``` student_data = pd.read_csv('data/student_data_binary_clas.txt', header = None, names=['exam1', 'exam2', 'decision']) student_data #split train/test set X = np.array([student_data['exam1'], student_data['exam2']], dtype=np.float32) Z = np.array([student_data['decision']], dtype = np.float32) #assume the first 80 students have been assessed, use them as the training data X_train = X[:,:80] X_test = X[:,80:] #you later have to manually assess the rest 20 students according to the university policies. # Great, now you have a chance to evaluate your learned model Z_train = Z[:,:80] Z_test = Z[:,80:] #normalise data X_train, u, scale = ml_utils.normalise_data(X_train) X_test = ml_utils.normalise_data(X_test, u, scale) #concatenate array of "1s" to X array X_train = np.concatenate((X_train, np.ones_like(Z_train)), axis = 0) X_test = np.concatenate((X_test, np.ones_like(Z_test)), axis = 0) #let visualise the training set plt.close() plt.ion() fig = plt.figure(1) axis_data = fig.add_subplot(111) ml_utils.plot_scatter_with_label_2d(axis_data, X_train, Z_train,msg="student score scatter plot") ``` **Task 2.1**: your first task is to define the hypothesis function. Do you remember the hypothesis function in a binary classification task? It has form of a sigmoid function: $$F(x,W) = \frac{1}{1+e^{-Wx}}$$ ``` def F_Classification(X, W): """ Compute the hypothesis function given input array X and parameter W input: X input array, must has size DxN (each column is one sample) W parameter array, must has size 1xD output: sigmoid of W*X, size 1xN """ return 1/(1+np.exp(-np.dot(W,X))) ``` **Task 2.2**: define the loss function for binary classification. It is called "negative log loss": $$L(W) = -\frac{1}{N} \sum_{i=1}^N{[z^{(i)} log(F(x^{(i)},W)) + (1-z^{(i)})(log(1-F(x^{(i)},W))]}$$ Next, define the gradient function: $$\frac{\partial L}{\partial W} = \frac{1}{N}(F(X,W) - Z)X^T$$ ``` """step 3: loss function for classification""" def Loss_Classification(Y, Z): """ Compute the loss between the predicted (Y=F(X,W)) and the groundtruth (Z) values. 
input: Y predicted results Y = F(X,W) with given parameter W, has size 1xN Z groundtruth vector Z, has size 1xN output: loss value, is a scalar """ #enter the code here N = float(Z.size) return -1/N*(np.dot(np.log(Y), Z.T) + np.dot(np.log(1-Y), (1-Z).T)).squeeze() """step 4: gradient descent for classification - compute gradient""" def dLdW_Classification(X, Y, Z): """ Compute gradient of the loss w.r.t parameter W. input: X input array, each column is one sample, has size DxN Y probability of label = 1, has size 1xN Z groundtruth values, has size 1xN output: gradient, has same size as W """ #enter the code here N = float(Z.size) return 1/N * (Y-Z).dot(X.T) W = np.random.rand(1,X_train.shape[0]).astype(np.float32) #W has size 1xD, randomly initialised k = 0.2 #learning rate epsilon = 1e-6 #terminate condition m = 0.9 #momentum v = 0 #initial velocity #visualisation settings vis_interval = 10 loss_collections = [] prev_loss = 0 plt.close() plt.ion() fig = plt.figure(1,figsize=(16, 4)) axis_loss = fig.add_subplot(131) axis_data = fig.add_subplot(132) while(1): Y_train = F_Classification(X_train,W) #compute hypothesis function aka. predicted values loss = Loss_Classification(Y_train, Z_train) #compute loss dLdW = dLdW_Classification(X_train, Y_train, Z_train) #compute gradient v = v*m + k*dLdW W = W - v #update loss_collections.append(loss) if abs(loss - prev_loss) < epsilon: break prev_loss = loss if (len(loss_collections)+1) % vis_interval==0: ml_utils.plot_loss(axis_loss, range(len(loss_collections)),loss_collections, "loss = " + str(loss)) ml_utils.plot_scatter_with_label_2d(axis_data, X_train, Z_train, W, "student score scatter plot") fig.canvas.draw() print "Learned parameters ", W.squeeze() print "Learning terminates after {} iterations".format(len(loss_collections)) #evaluate Y_test = F_Classification(X_test, W) predictions = Y_test > 0.5 accuracy = np.sum(predictions == Z_test)/float(Z_test.size) print "Test accuracy: ", accuracy ``` We achieve 90% accuracy (only two students have been misclassified). Not too bad, isn't it? **Task 2.3**: regularisation Now we want to add a regularisation term into the loss to prevent overfitting. Regularisation loss is simply magnitude of the parameter vector W after removing the last element (i.e. bias doesn't count to regularisation). $$L_R = \frac{1}{2}|W'|^2$$ where W' is W with the last element truncated.<br> Now the total loss would be: $$L(W) = -\frac{1}{N} \sum_{i=1}^N{[z^{(i)} log(F(x^{(i)},W)) + (1-z^{(i)})(log(1-F(x^{(i)},W))]} + \frac{1}{2}|W'|^2$$ The gradient become: $$\frac{\partial L}{\partial W} = \frac{1}{N}(F(X,W) - Z)X^T + W''$$ where W'' is W with the last element change to 0. Your task is to implement the loss and gradient function with added regularisation. ``` """step 3: loss function with regularisation""" def Loss_Classification_Reg(Y, Z, W): """ Compute the loss between the predicted (Y=F(X,W)) and the groundtruth (Z) values. input: Y predicted results Y = F(X,W) with given parameter W, has size 1xN Z groundtruth vector Z, has size 1xN W parameter vector, size 1xD output: loss value, is a scalar """ #enter the code here N = float(Z.size) W_ = W[:,:-1] return -1/N*(np.dot(np.log(Y), Z.T) + np.dot(np.log(1-Y), (1-Z).T)).squeeze() + 0.5*np.dot(W_,W_.T).squeeze() """step 4: gradient descent with regularisation - compute gradient""" def dLdW_Classification_Reg(X, Y, Z, W): """ Compute gradient of the loss w.r.t parameter W. 
input: X input array, each column is one sample, has size DxN Y probability of label = 1, has size 1xN Z groundtruth values, has size 1xN W parameter vector, size 1xD output: gradient, has same size as W """ #enter the code here N = float(Z.size) W_ = W.copy() #copy W so that zeroing the bias term does not overwrite W itself W_[:,-1] = 0 return 1/N * (Y-Z).dot(X.T) + W_ ``` Rerun the update loop with the new loss and gradient functions. Note you may need to change the learning rate accordingly to have proper convergence. Now that you have implemented both regularisation and momentum, you can use a standard learning rate of 0.01, which is widely used in practice. ``` """ gradient descent with regularisation - parameter update loop""" W = np.random.rand(1,X_train.shape[0]).astype(np.float32) #W has size 1xD, randomly initialised k = 0.01 #learning rate epsilon = 1e-6 #terminate condition m = 0.9 #momentum v = 0 #initial velocity #visualisation settings vis_interval = 10 loss_collections = [] prev_loss = 0 plt.close() plt.ion() fig = plt.figure(1,figsize=(16, 4)) axis_loss = fig.add_subplot(131) axis_data = fig.add_subplot(132) for i in range(500): Y_train = F_Classification(X_train,W) #compute hypothesis function aka. predicted values loss = Loss_Classification_Reg(Y_train, Z_train, W) #compute loss dLdW = dLdW_Classification_Reg(X_train, Y_train, Z_train, W) #compute gradient v = v*m + k*dLdW W = W - v #update loss_collections.append(loss) if abs(loss - prev_loss) < epsilon: break prev_loss = loss if (len(loss_collections)+1) % vis_interval==0: ml_utils.plot_loss(axis_loss, range(len(loss_collections)),loss_collections, "loss = " + str(loss)) ml_utils.plot_scatter_with_label_2d(axis_data, X_train, Z_train, W, "student score scatter plot") fig.canvas.draw() print "Learned parameters ", W.squeeze() print "Learning terminates after {} iterations".format(len(loss_collections)) ``` **Question**: Do you see any improvement in accuracy or convergence speed? Why? Answer: Regularisation does help speed up the training (it adds stricter rules to the update procedure). Accuracy stays the same (90%), probably because (i) the number of parameters to be trained is small (2-D) and so is the number of training samples; and (ii) the data are well separated. In a learning task that involves a large number of parameters (such as a neural network), regularisation proves to be a very effective technique. ### 2.2 Multi-class classification Here we are working with a very famous dataset. The Iris flower dataset has 150 samples of 3 Iris flower species (Setosa, Versicolour, and Virginica); each sample stores the length and width of its sepal and petal in cm (4-D in total). Your task is to build a classifier to distinguish these flowers. ``` #read the Iris dataset iris = np.load('data/iris.npz') X = iris['X'] Z = iris['Z'] print "size X ", X.shape print "size Z ", Z.shape #split train/test with ratio 120:30 TOTALS = Z.size idx = np.random.permutation(TOTALS) idx_train = idx[:120] idx_test = idx[120:] X_train = X[:, idx_train] X_test = X[:, idx_test] Z_train = Z[:, idx_train] Z_test = Z[:, idx_test] #normalise data X_train, u, scale = ml_utils.normalise_data(X_train) X_test = ml_utils.normalise_data(X_test, u, scale) #concatenate array of "1s" to X array X_train = np.concatenate((X_train, np.ones_like(Z_train)), axis = 0) X_test = np.concatenate((X_test, np.ones_like(Z_test)), axis = 0) ``` **Task 2.4**: one-vs-all. Train 3 binary one-vs-all classifiers $F_i$ (i=1-3), one for each class.
An unknown feature vector x is assigned to the class i with the highest classifier score: $$i^* = \arg\max_i F(x,W_i)$$ **Task 2.5**: implement one-vs-one and compare the results with one-vs-all.
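For reference, here is a minimal sketch of one possible solution to Task 2.4 (one-vs-all), reusing the functions already defined in this tutorial (`F_Classification` and `dLdW_Classification`). The learning rate, the fixed iteration count and the helper variable names are illustrative assumptions, not the only reasonable choices.

```
"""Task 2.4 sketch: one-vs-all with 3 binary classifiers"""
np.random.seed(0)
n_classes = 3
k = 0.2        #learning rate (illustrative)
niters = 2000  #fixed number of iterations, for simplicity

Ws = []  #one parameter vector per class
for c in range(n_classes):
    Z_c = (Z_train == c).astype(np.float32)  #1 for class c, 0 otherwise
    W_c = np.random.rand(1, X_train.shape[0]).astype(np.float32)
    for _ in range(niters):
        Y_c = F_Classification(X_train, W_c)
        W_c = W_c - k * dLdW_Classification(X_train, Y_c, Z_c)
    Ws.append(W_c)

#an unknown x is assigned to the class whose classifier gives the highest score
scores_test = np.vstack([F_Classification(X_test, W_c) for W_c in Ws])  #size 3xN
pred_test = np.argmax(scores_test, axis=0).reshape(1, -1)
accuracy = np.sum(pred_test == Z_test) / float(Z_test.size)
print("One-vs-all test accuracy: {}".format(accuracy))
```

One assumption here is that the class labels in `Z` are encoded as 0, 1 and 2; if they are encoded differently, build `Z_c` from the actual label values instead.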
# Conditional generation via Bayesian optimization in latent space ## Introduction I recently read [Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules](https://arxiv.org/abs/1610.02415) by Gómez-Bombarelli et. al.<sup>[1]</sup> and it motivated me to experiment with the approaches described in the paper. Here's a brief summary of the paper. It describes how they use a variational autoencoder<sup>[2]</sup> for generating new chemical compounds with properties that are of interest for drug discovery. For training, they used a large database of chemical compounds whose properties of interest are known. The variational autoencoder can encode compounds into 196-dimensional latent space representations. By sampling from the continuous latent space new compounds can be generated e.g. by sampling near a known compound to generate slight variations of it or by interpolating between more distant compounds. By simply autoencoding chemical compounds, however, they were not able organize latent space w.r.t. properties of interest. To additionally organize latent space w.r.t. these properties they jointly trained the variational autoencoder with a predictor that predicts these properties from latent space representations. Joint training with a predictor resulted in a latent space that reveals a gradient of these properties. This gradient can then be used to drive search for new chemical compounds into regions of desired properties. The following figure, copied from the paper<sup>[1]</sup>, summarizes the approach. ![vae-chem](images/vae-opt/vae-chem.png) For representing compounds in structure space, they use [SMILES](https://en.wikipedia.org/wiki/Simplified_molecular-input_line-entry_system) strings which can be converted to and from structural representations using standard computational chemistry software. For the encoder network, they experimented with both, 1D-CNNs and RNNs, for the decoder network they used a RNN. Architecure details are described in the paper. The predictor is a small dense neural network. For optimization in latent space i.e. for navigating into regions of desired properties they use a Bayesian optimization approach with Gaussian processes as surrogate model. The authors open-sourced their [chemical variational autoencoder](https://github.com/aspuru-guzik-group/chemical_vae) but didn't publish code related to Bayesian optimization, at least not at the time of writing this article. So I decided to start some experiments but on a toy dataset that is not related to chemistry at all: the [MNIST handwritten digits dataset](https://en.wikipedia.org/wiki/MNIST_database). All methods described in the paper can be applied in this context too and results are easier to visualize and probably easier to grasp for people not familiar with chemistry. The only property associated with the MNIST dataset is the label or value of the digits. In the following, it will be shown how to conditionally generate new digits by following a gradient in latent space. In other words, it will be shown how to navigate into regions of latent space that decode into digit images of desired target label. I'm also going to adress the following questions: - How does joint training with a predictor change the latent space of a variational autoencoder? - How can useful optimization objectives be designed? - How can application of Bayesian optimization methods be justified? - What are possible alternatives to this approach? 
I'll leave experiments with the chemical compounds dataset and the public chemical VAE for another article. The following assumes some basic familiarity with [variational autoencoders](https://nbviewer.jupyter.org/github/krasserm/bayesian-machine-learning/blob/dev/autoencoder-applications/variational_autoencoder.ipynb), [Bayesian optimization](http://nbviewer.jupyter.org/github/krasserm/bayesian-machine-learning/blob/dev/bayesian-optimization/bayesian_optimization.ipynb) and [Gaussian processes](http://nbviewer.jupyter.org/github/krasserm/bayesian-machine-learning/blob/dev/gaussian-processes/gaussian_processes.ipynb). For more information on these topics you may want to read the linked articles. ## Architecture The high-level architecture of the joint VAE-predictor model is shown in the following figure. ![model](images/vae-opt/model.png) ### Encoder The encoder is a CNN, identical to the one presented in the [variational autoencoder](https://nbviewer.jupyter.org/github/krasserm/bayesian-machine-learning/blob/dev/autoencoder-applications/variational_autoencoder.ipynb) notebook. ![encoder](images/vae-opt/encoder.png) ### Decoder The decoder is a CNN, identical to the one presented in the [variational autoencoder](https://nbviewer.jupyter.org/github/krasserm/bayesian-machine-learning/blob/dev/autoencoder-applications/variational_autoencoder.ipynb) notebook. ![decoder](images/vae-opt/decoder.png) ### Predictor The predictor is a dense network with two hidden layers that predicts the probabilities of MNIST image labels 0-9 from the mean i.e. `t_mean` of the variational Gaussian distribution (details below). `t_mean` is one of the encoder outputs. The output layer of the predictor is a softmax layer with 10 units. ![predictor](images/vae-opt/predictor.png) ## Implementation We will use a 2-dimensional latent space for easier visualization. By default, this notebook loads pre-trained models. If you want to train the models yourself set `use_pretrained` to `False`. Expect about 15 minutes on a GPU for training and much longer on a CPU. ``` # Use pre-trained models by default use_pretrained = True # Dimensionality of latent space latent_dim = 2 # Mini-batch size used for training batch_size = 64 ``` Code for the encoder and decoder has already been presented [elsewhere](https://nbviewer.jupyter.org/github/krasserm/bayesian-machine-learning/blob/dev/autoencoder-applications/variational_autoencoder.ipynb), so only code for the predictor is shown here (see [variational_autoencoder_opt_util.py](variational_autoencoder_opt_util.py) for other function definitions): ``` import keras from keras import layers from keras.models import Model def create_predictor(): ''' Creates a classifier that predicts digit image labels from latent variables. ''' predictor_input = layers.Input(shape=(latent_dim,), name='t_mean') x = layers.Dense(128, activation='relu')(predictor_input) x = layers.Dense(128, activation='relu')(x) x = layers.Dense(10, activation='softmax', name='label_probs')(x) return Model(predictor_input, x, name='predictor') ``` The following composes the joint VAE-predictor model. Note that the input to the predictor is the mean i.e. `t_mean` of the variational distribution, not a sample from it.
``` from variational_autoencoder_opt_util import * encoder = create_encoder(latent_dim) decoder = create_decoder(latent_dim) sampler = create_sampler() predictor = create_predictor() x = layers.Input(shape=image_shape, name='image') t_mean, t_log_var = encoder(x) t = sampler([t_mean, t_log_var]) t_decoded = decoder(t) t_predicted = predictor(t_mean) model = Model(x, [t_decoded, t_predicted], name='composite') ``` ## Dataset ``` from keras.datasets import mnist from keras.utils import to_categorical (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train = x_train.astype('float32') / 255. x_train = x_train.reshape(x_train.shape + (1,)) y_train_cat = to_categorical(y_train) x_test = x_test.astype('float32') / 255. x_test = x_test.reshape(x_test.shape + (1,)) y_test_cat = to_categorical(y_test) ``` ## Training ``` from keras import backend as K from keras.models import load_model def vae_loss(x, t_decoded): ''' Negative variational lower bound used as loss function for training the variational autoencoder on the MNIST dataset. ''' # Reconstruction loss rc_loss = K.sum(K.binary_crossentropy( K.batch_flatten(x), K.batch_flatten(t_decoded)), axis=-1) # Regularization term (KL divergence) kl_loss = -0.5 * K.sum(1 + t_log_var \ - K.square(t_mean) \ - K.exp(t_log_var), axis=-1) return K.mean(rc_loss + kl_loss) if use_pretrained: # Load VAE that was jointly trained with a # predictor returned from create_predictor() model = load_model('models/vae-opt/vae-predictor-softmax.h5', custom_objects={'vae_loss': vae_loss}) else: model.compile(optimizer='rmsprop', loss=[vae_loss, 'categorical_crossentropy'], loss_weights=[1.0, 20.0]) model.fit(x=x_train, y=[x_train, y_train_cat], epochs=15, shuffle=True, batch_size=batch_size, validation_data=(x_test, [x_test, y_test_cat]), verbose=2) ``` ## Results ### Latent space This sections addresses the question > How does joint training with a predictor change the latent space of a variational autoencoder? To answer, three models have been trained: - A VAE as described above but without a predictor (`model_predictor_off`). - A VAE jointly trained with a classifier as predictor (`model_predictor_softmax`). This is the model described above where the predictor predicts the probabilities of labels 0-9 from encoded MNIST images. - A VAE as described above but jointly trained with a regressor as predictor (`model_predictor_linear`). The predictor of this model is trained to predict digit values on a continuous scale i.e. predictions are floating point numbers that can also be less than 0 and greater than 9. See also `create_predictor_linear` in [variational_autoencoder_opt_util.py](variational_autoencoder_opt_util.py). ``` model_predictor_softmax = model model_predictor_linear = load_model('models/vae-opt/vae-predictor-linear.h5', custom_objects={'vae_loss': vae_loss}) model_predictor_off = load_model('models/vae-opt/vae-predictor-off.h5', custom_objects={'vae_loss': vae_loss}) ``` The following plots show the latent spaces of these three models and the distribution of the validation dataset `x_test` in these spaces. Validation data points are colored by their label. 
``` import matplotlib.pyplot as plt %matplotlib inline def encode(x, model): return model.get_layer('encoder').predict(x)[0] ts = [encode(x_test, model_predictor_off), encode(x_test, model_predictor_softmax), encode(x_test, model_predictor_linear)] titles = ['VAE latent space without predictor', 'VAE latent space with classifier', 'VAE latent space with regressor'] fig = plt.figure(figsize=(15, 4)) cmap = plt.get_cmap('viridis', 10) for i, t in enumerate(ts): plt.subplot(1, 3, i+1) im = plt.scatter(t[:, 0], t[:, 1], c=y_test, cmap=cmap, vmin=-0.5, vmax=9.5, marker='o', s=0.4) plt.xlim(-4, 4) plt.ylim(-4, 4) plt.title(titles[i]) fig.subplots_adjust(right=0.8) fig.colorbar(im, fig.add_axes([0.82, 0.13, 0.02, 0.74]), ticks=range(10)); ``` One can clearly see that the latent space of models with a predictor (middle and right plot) has more structure i.e. less overlap of regions with different labels than the latent space of the model without a predictor (left plot). Furthermore, when the predictor is a regressor it establishes a gradient in latent space. The right plot clearly shows a gradient from upper-right (lower vaues) to lower-left (higher values). This is exactly what the authors of the paper wanted to achieve: additionally organizing the latent space w.r.t. certain continuous properties so that gradient-based navigation into regions of desired properties becomes possible. If you want to train a model yourself with a regressor as predictor you should make the following modifications to the setup above: ``` # ... predictor = create_predictor_linear(latent_dim) # ... model.compile(optimizer='rmsprop', loss=[vae_loss, 'mean_squared_error'], loss_weights=[1.0, 20.0]) model.fit(x=x_train, y=[x_train, y_train_cat], epochs=15, shuffle=True, batch_size=batch_size, validation_data=(x_test, [x_test, y_test_cat])) ``` Note that in the case of the MNIST dataset the latent space in the left plot is already sufficiently organized to navigate into regions of desired labels. However, this is not the case for the chemical compound dataset (see paper for details) so that further structuring is required. For the MNIST dataset, the goal is merely to demonstrate that further structuring is possible too. In the following we will use the model that uses a classifier as predictor i.e. the model corresponding to the middle plot. ### Optimization objectives > How can useful optimization objectives be designed? First of all, the optimization objective must be a function of the desired target label in addition to latent variable $\mathbf{t}$. For example, if the desired target label is 5 the optimization objective must have an optimum in that region of the latent space where images with a 5 are located. Also remember that the variational distributions i.e. the distributions of codes in latent space have been regularized to be close to the standard normal distribution during model training (see *regularization term* in `vae_loss`). This regularization term should also be considered in the optimization objective to avoid directing search too far from the origin. Hence the optimization objective should not only reflect the probability that a sample corresponds to an image of desired target label but also the standard normal probability distribution. 
In the following, we will use an optimization objective $f$ that is the negative logarithm of the product of these two terms: $$ f(\mathbf{t}, target) = - \log p(y=target \lvert \mathbf{t}) - \log \mathcal{N}(\mathbf{t} \lvert \mathbf{0}, \mathbf{I}) \tag{1} $$ where $y$ follows a categorical distribution and $p(y=target \lvert \mathbf{t})$ is the probability that $y$ has the desired target value given latent vector $\mathbf{t}$. I'll show two alternatives for computing $p(y=target \lvert \mathbf{t})$. The first alternative simply uses the output of the predictor. The corresponding optimization objective is visualized in the following figure. ``` from matplotlib.colors import LogNorm from scipy.stats import multivariate_normal predictor = model.get_layer('predictor') rx, ry = np.arange(-4, 4, 0.10), np.arange(-4, 4, 0.10) gx, gy = np.meshgrid(rx, ry) t_flat = np.c_[gx.ravel(), gy.ravel()] y_flat = predictor.predict(t_flat) mvn = multivariate_normal(mean=[0, 0], cov=[[1, 0], [0, 1]]) nll_prior = -mvn.logpdf(t_flat).reshape(-1, 1) def nll_predict(i): '''Optimization objective based on predictor output.''' return nll_prior - np.log(y_flat[:,i] + 1e-8).reshape(-1, 1) plot_nll(gx, gy, nll_predict) ``` One can clearly see how the minima in these plots overlap with the different target value regions in the previous figure (see section [Latent space](#Latent-space)). We could now use a gradient-based optimizer for navigating into regions of high log-likelihood i.e. low negative log-likelihood and sample within that region to conditionally generate new digits of the desired target value. Another alternative is to design an optimization objective that can also include "external" results. To explain, let's assume for a moment that we are again in the latent space of chemical compounds. External results in this context can mean experimental results or externally computed results obtained for new compounds sampled from latent space. For example, experimental results can come from expensive drug discovery experiments and externally computed results from computational chemistry software. For working with external results we can use a Gaussian process model (a regression model) initialized with samples from the training set and let a Bayesian optimization algorithm propose new samples from latent space. Then we gather external results for these proposals and update the Gaussian process model with it. Based on the updated model, Bayesian optimization can now propose new samples. A Bayesian optimization approach is especially useful if experimental results are expensive to obtain as it is designed to optimize the objective in a minimum number of steps. This should answer the question > How can application of Bayesian optimization methods be justified? But how can we transfer these ideas to the MNIST dataset? One option is to decode samples from latent space to real images and then let a separate MNIST image classifier compute the probability that a decoded image shows a digit equal to the desired target. For that purpose we use a small CNN that was trained to achieve 99.2% validation accuracy on `x_test`, enough for our purposes. 
``` if use_pretrained: classifier = load_model('models/vae-opt/classifier.h5') else: classifier = create_classifier() classifier.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy']) classifier.fit(x_train, y_train_cat, epochs=5, shuffle=True, batch_size=batch_size, validation_data=(x_test, y_test_cat), verbose=2) ``` If we combine the negative log-likelihood computed with the separate image classifier with the regularization term mentioned before, we obtain the following optimization objective: ``` decoder = model.get_layer('decoder') # Decode latent vector into image d_flat = decoder.predict(t_flat) # Predict probabilities with separate classifier y_flat = classifier.predict(d_flat) def nll_decode_classify(i): '''Optimization objective based on separate image classifier output.''' return nll_prior - np.log(y_flat[:,i] + 1e-8).reshape(-1, 1) plot_nll(gx, gy, nll_decode_classify) ``` The locations of the minima closely match those of the previous optimization objective but the new optimization objective is more fuzzy. It also shows moderately low negative log-likelihood in regions outside of the desired target as well as in regions that are sparsely populated by validation examples. Anyway, let's use it and see if we can achieve reasonable results. ### Bayesian optimization For Bayesian optimization, we use [GPyOpt](http://sheffieldml.github.io/GPyOpt/) with more or less default settings and constrain the the search space as given by `bounds` below. Note that the underlying Gaussian process model is initialized with only two random samples from latent space. ``` import GPyOpt from GPyOpt.methods import BayesianOptimization def nll(t, target): ''' Bayesian optimization objective. ''' # Decode latent vector into image decoded = decoder.predict(t) # Predict probabilities with separate classifier c_probs = classifier.predict(decoded) nll_prior = -mvn.logpdf(t).reshape(-1, 1) nll_pred = -np.log(c_probs[:,target] + 1e-8).reshape(-1, 1) return nll_prior + nll_pred bounds = [{'name': 't1', 'type': 'continuous', 'domain': (-4.0, 4.0)}, {'name': 't2', 'type': 'continuous', 'domain': (-4.0, 4.0)}] def optimizer_for(target): def nll_target(t): return nll(t, target) return BayesianOptimization(f=nll_target, domain=bounds, model_type='GP', acquisition_type ='EI', acquisition_jitter = 0.01, initial_design_numdata = 2, exact_feval=False) ``` We start by running Bayesian optimization for a desired target value of 4 and then visualize the Gaussian process posterior mean, variance and the acquisition function using the built-in `plot_acquisition()` method. ``` optimizer = optimizer_for(target=4) optimizer.run_optimization(max_iter=50) optimizer.plot_acquisition() ``` By comparing with previous figures, we can see that most samples are located around the minimum corresponding to target value 4 (left plot). The acquisition function has high values in this region (right plot). Because Bayesian optimization makes a compromise between *exploration* of regions with high uncertainty and *exploitation* of regions with (locally) optimal values high acquisition function values also exist in regions outside the desired target value. This is also the reason why some samples are broadly scattered across the search space. We finally have to verify that the samples with the lowest optimization objective values actually correspond to images with number 4. 
``` def plot_top(optimizer, num=10): top_idx = np.argsort(optimizer.Y, axis=0).ravel() top_y = optimizer.Y[top_idx] top_x = optimizer.X[top_idx] top_dec = np.squeeze(decoder.predict(top_x), axis=-1) plt.figure(figsize=(20, 2)) for i in range(num): plt.subplot(1, num, i + 1) plt.imshow(top_dec[i], cmap='Greys_r') plt.title(f'{top_y[i,0]:.2f}') plt.axis('off') plot_top(optimizer) ``` Indeed, they do! The numbers on top of the images are the optimization objective values of the corresponding points in latent space. To generate more images of desired target value we also could select the top scoring samples and continue sampling in a more or less narrow region around them (not shown here). How does conditional sampling for other desired target values work? ``` optimizer = optimizer_for(target=3) optimizer.run_optimization(max_iter=50) plot_top(optimizer) optimizer = optimizer_for(target=2) optimizer.run_optimization(max_iter=50) plot_top(optimizer) optimizer = optimizer_for(target=5) optimizer.run_optimization(max_iter=50) plot_top(optimizer) optimizer = optimizer_for(target=9) optimizer.run_optimization(max_iter=50) plot_top(optimizer) ``` This looks pretty good! Also note how for targets 3 and 5 the negative log-likelihood significantly increases for images that are hard to be recognized as their desired targets. ## Alternatives > What are possible alternatives to this approach? The approach presented here is one possible approach for conditionally generating images i.e. images with desired target values. In the case of MNIST images, this is actually a very expensive approach. A conditional variational autoencoder<sup>[3]</sup> (CVAE) would be a much better choice here. Anyway, the goal was to demonstrate the approach taken in the paper<sup>[1]</sup> and it worked reasonably well on the MNIST dataset too. Another interesting approach is described in \[4\]. The proposed approach identifies regions in latent space with desired properties without training the corresponding models with these properties in advance. This should help to prevent expensive model retraining and allows post-hoc learning of so-called *latent constraints*, value functions that identify regions in latent space that generate outputs with desired properties. Definitely one of my next papers to read. ## References - \[1\] Rafael Gómez-Bombarelli et. al. [Automatic chemical design using a data-driven continuous representation of molecules](https://arxiv.org/abs/1610.02415). - \[2\] Diederik P Kingma, Max Welling [Auto-Encoding Variational Bayes](https://arxiv.org/abs/1312.6114). - \[3\] Carl Doersch [Tutorial on Variational Autoencoders](https://arxiv.org/abs/1606.05908). - \[4\] Jesse Engel et. al. [Latent Constraints: Learning to Generate Conditionally from Unconditional Generative Models](https://arxiv.org/abs/1711.05772).
# SGDRegressor with StandardScaler & Power Transformer This Code template is for regression analysis using the SGDRegressor where rescaling method used is StandardScaler and feature transformation is done using PowerTransformer. ### Required Packages ``` import warnings import numpy as np import pandas as pd import seaborn as se import matplotlib.pyplot as plt from sklearn.pipeline import make_pipeline from sklearn.preprocessing import StandardScaler, PowerTransformer from sklearn.model_selection import train_test_split from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error from sklearn.linear_model import SGDRegressor warnings.filterwarnings('ignore') ``` ### Initialization Filepath of CSV file ``` #filepath file_path= "" ``` List of features which are required for model training. ``` #x_values features=[] ``` Target feature for prediction. ``` #y_value target='' ``` ### Data Fetching Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools. We will use panda's library to read the CSV file using its storage path.And we use the head function to display the initial row or entry. ``` df=pd.read_csv(file_path) df.head() ``` ### Feature Selections It is the process of reducing the number of input variables when developing a predictive model. Used to reduce the number of input variables to both reduce the computational cost of modelling and, in some cases, to improve the performance of the model. We will assign all the required input features to X and target/outcome to Y. ``` X=df[features] Y=df[target] ``` ### Data Preprocessing Since the majority of the machine learning models in the Sklearn library doesn't handle string category data and Null value, we have to explicitly remove or replace null values. The below snippet have functions, which removes the null value if any exists. And convert the string classes data in the datasets by encoding them to integer classes. ``` def NullClearner(df): if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])): df.fillna(df.mean(),inplace=True) return df elif(isinstance(df, pd.Series)): df.fillna(df.mode()[0],inplace=True) return df else:return df def EncodeX(df): return pd.get_dummies(df) ``` Calling preprocessing functions on the feature and target set. ``` x=X.columns.to_list() for i in x: X[i]=NullClearner(X[i]) X=EncodeX(X) Y=NullClearner(Y) X.head() ``` #### Correlation Map In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns. ``` f,ax = plt.subplots(figsize=(18, 18)) matrix = np.triu(X.corr()) se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix) plt.show() ``` ### Data Splitting The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data. ``` x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123) ``` ### Model Stochastic Gradient Descent (SGD) is a simple yet very efficient approach to fitting linear classifiers and regressors under convex loss functions such as (linear) Support Vector Machines and Logistic Regression. 
SGD is merely an optimization technique and does not correspond to a specific family of machine learning models. It is only a way to train a model. Often, an instance of SGDClassifier or SGDRegressor will have an equivalent estimator in the scikit-learn API, potentially using a different optimization technique. For example, SGDRegressor(loss='squared_loss', penalty='l2') and Ridge solve the same optimization problem, via different means. #### Model Tuning Parameters > - **loss** -> The loss function to be used. The possible values are ‘squared_loss’, ‘huber’, ‘epsilon_insensitive’, or ‘squared_epsilon_insensitive’ > - **penalty** -> The penalty (aka regularization term) to be used. Defaults to ‘l2’, which is the standard regularizer for linear SVM models. ‘l1’ and ‘elasticnet’ might bring sparsity to the model (feature selection) not achievable with ‘l2’. > - **alpha** -> Constant that multiplies the regularization term. The higher the value, the stronger the regularization. Also used to compute the learning rate when learning_rate is set to ‘optimal’. > - **l1_ratio** -> The Elastic Net mixing parameter, with 0 <= l1_ratio <= 1. l1_ratio=0 corresponds to L2 penalty, l1_ratio=1 to L1. Only used if penalty is ‘elasticnet’. > - **tol** -> The stopping criterion > - **learning_rate** -> The learning rate schedule; possible values are {'optimal','constant','invscaling','adaptive'} > - **eta0** -> The initial learning rate for the ‘constant’, ‘invscaling’ or ‘adaptive’ schedules. > - **power_t** -> The exponent for the inverse scaling learning rate. > - **epsilon** -> Epsilon in the epsilon-insensitive loss functions; only used if loss is ‘huber’, ‘epsilon_insensitive’, or ‘squared_epsilon_insensitive’. ### Standard Scaler Standardize features by removing the mean and scaling to unit variance. The standard score of a sample x is calculated as: z = (x - u) / s where u is the mean of the training samples or zero if with_mean=False, and s is the standard deviation of the training samples or one if with_std=False. ### Power Transformer Power transforms are a family of parametric, monotonic transformations that are applied to make data more Gaussian-like. This is useful for modeling issues related to heteroscedasticity (non-constant variance), or other situations where normality is desired. Currently, PowerTransformer supports the Box-Cox transform and the Yeo-Johnson transform. The optimal parameter for stabilizing variance and minimizing skewness is estimated through maximum likelihood. It applies a power transform feature-wise to make the data more Gaussian-like. ``` model=make_pipeline(StandardScaler(), PowerTransformer(), SGDRegressor(random_state=123)) model.fit(x_train,y_train) ``` #### Model Accuracy We will use the trained model to make predictions on the test set, then use the predicted values to measure the accuracy of our model. score: The score function returns the coefficient of determination R2 of the prediction. ``` print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100)) ``` > **r2_score**: The **r2_score** function computes the proportion of the variance in the target that is explained by our model. > **mae**: The **mean absolute error** function calculates the total error as the average absolute distance between the real data and the predicted data. > **mse**: The **mean squared error** function squares the errors, which penalizes the model more heavily for large errors.
``` y_pred=model.predict(x_test) print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100)) print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred))) print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred))) ``` #### Prediction Plot We plot the first test observations, with the record number on the x-axis and the true target value (y_test) on the y-axis, and overlay the model's predictions for the same records so the two curves can be compared directly. ``` n=len(x_test) if len(x_test)<20 else 20 plt.figure(figsize=(14,10)) plt.plot(range(n),y_test[0:n], color = "green") plt.plot(range(n),model.predict(x_test[0:n]), color = "red") plt.legend(["Actual","prediction"]) plt.title("Predicted vs True Value") plt.xlabel("Record number") plt.ylabel(target) plt.show() ``` #### Creator: Ayush Gupta , Github: [Profile](https://github.com/guptayush179)
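As an optional extension (a sketch, not part of the original template), the tuning parameters listed in the Model section can be searched over the whole pipeline with cross-validation. The grid values below are illustrative assumptions; note that newer scikit-learn versions name the squared loss 'squared_error' instead of 'squared_loss'.

```
from sklearn.model_selection import GridSearchCV

# make_pipeline names steps after their lowercased class names,
# so SGDRegressor parameters are addressed with the "sgdregressor__" prefix.
param_grid = {
    "sgdregressor__penalty": ["l2", "l1", "elasticnet"],
    "sgdregressor__alpha": [1e-4, 1e-3, 1e-2],
}
search = GridSearchCV(model, param_grid, cv=5, scoring="r2")
search.fit(x_train, y_train)
print("Best parameters:", search.best_params_)
print("Best cross-validated R2: {:.2f}".format(search.best_score_))
```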
![imagen](../../imagenes/ejercicios.png) # SQL Exercise For this exercise we will use a FIFA 20 database. **Make sure you have the CSV "FIFA20.csv" in the same folder as this Notebook**. Complete the following items: 1. Get a table with all the fields 2. Get a table with the fields "short_name", "club", "team_position" 3. Get the same table as in the previous item, but this time renaming the fields to Spanish. 4. What are all the distinct "team_position" values? 5. What are all the distinct combinations of "team_position" and "preferred_foot"? 6. Which players are right-footed? ("preferred_foot" = "Right") 7. Get a table with the influencer players 8. Get a table with the left wingers ('team_position' = 'LW') who are influencers 9. Get a table with the players whose name starts with "W" and who have a rating ('overall') above 80 points. 1. What if we raise the rating threshold to above 90 points? 10. Get a table with the Real Madrid players who are NOT right-footed and have a potential above 85 11. Which player has the highest rating ('overall')? 12. What is the average value (value_eur) of all players, in millions of euros, given that the values in the table are in euros? 13. What is the average wage per club? 14. Compute the maximum rating ('overall') per 'preferred_foot' **NOTE**: it is recommended to add a `LIMIT 5` to most items to avoid very large query outputs. ``` # Import packages import pandas as pd import sqlite3 cnx = sqlite3.connect(':memory:') # Load data from a CSV df = pd.read_csv('FIFA20.csv') df.head() # Write the Pandas DataFrame to SQL df.to_sql('fifa20', con=cnx, if_exists='replace', index=False) # Define the helper function to run queries. def sql_query(query): return pd.read_sql(query, cnx) # 1. Get a table with all the fields query = ''' SELECT * FROM fifa20 LIMIT 5 ''' sql_query(query) # 2. Get a table with the fields "short_name", "club", "team_position" query = ''' SELECT "short_name", "club", "team_position" FROM fifa20 LIMIT 5 ''' sql_query(query) # 3. Get the same table as in the previous item, but renaming the fields to Spanish. query = ''' SELECT "short_name" as "Nombre corto", "club" as "Equipo", "team_position" as "Posición en el equipo" FROM fifa20 LIMIT 5 ''' sql_query(query) # 4. What are all the distinct "team_position" values? query = ''' SELECT DISTINCT "team_position" FROM fifa20 ''' sql_query(query) # 5. What are all the distinct combinations of "team_position" and "preferred_foot"? query = ''' SELECT DISTINCT "team_position", "preferred_foot" FROM fifa20 ''' sql_query(query) # 6. Which players are right-footed? ("preferred_foot" = "Right") query = ''' SELECT * FROM fifa20 WHERE "preferred_foot" = "Right" LIMIT 5 ''' sql_query(query) # 7. Get a table with the influencer players query = ''' SELECT DISTINCT "influencer" FROM fifa20 ''' sql_query(query) query = ''' SELECT * FROM fifa20 WHERE influencer = True ''' sql_query(query) # 8. Get a table with the left wingers ('team_position' = 'LW') who are influencers query = ''' SELECT * FROM fifa20 WHERE "team_position" = "LW" and influencer = 1 ''' sql_query(query) # 9. Get a table with the players whose name starts with "W" and who have a rating ('overall') above 80 points query = ''' SELECT * FROM fifa20 WHERE long_name like "W%" and overall > 80 ''' sql_query(query) # 9.
What if, instead of 80, we require the rating to be greater than 90? query = ''' SELECT * FROM fifa20 WHERE long_name like "W%" and overall > 90 ''' sql_query(query) ``` Correct. No player meets the condition ``` # 10. Get a table with the Real Madrid players who are NOT right-footed and have a potential above 85 query = ''' SELECT * FROM fifa20 WHERE club = "Real Madrid" and "preferred_foot" <> "Right" and potential > 85 ''' sql_query(query) # 11. Which player has the highest rating ('overall')? query = ''' SELECT short_name, MAX(overall) as "Puntuación más alta" FROM fifa20 ''' sql_query(query) # 12. What is the average value (value_eur) of all players, in millions of euros, given that the values in the table are in euros? query = ''' SELECT AVG(value_eur)/1000000 as "Media valor (M€)" FROM fifa20 ''' sql_query(query) # 13. What is the average wage per club? Sort the result from highest to lowest wage query = ''' SELECT club, AVG(wage_eur) as "Media Salario (€)" FROM fifa20 GROUP BY club order by "Media Salario (€)" desc ''' sql_query(query) # 14. Compute the maximum rating ('overall') per club. Sort the result from lowest to highest rating query = ''' SELECT "club", MAX(overall) as "Max Puntuación" FROM fifa20 GROUP BY "club" ORDER BY "Max Puntuación" ''' sql_query(query) ```
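As an additional worked example in the same style (not part of the original exercise list), filtering, grouping and `HAVING` can be combined in a single query, for instance to get the average rating per club for right-footed players, keeping only clubs with at least 20 such players:

```
query = '''
SELECT club,
       AVG(overall) AS "Average rating",
       COUNT(*) AS "Players"
FROM fifa20
WHERE "preferred_foot" = "Right"
GROUP BY club
HAVING COUNT(*) >= 20
ORDER BY "Average rating" DESC
LIMIT 5
'''
sql_query(query)
```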
# Introduction to Planning for Self Driving Vehicles In this notebook you are going to train your own ML policy to fully control an SDV. You will train your model using the Lyft Prediction Dataset and [L5Kit](https://github.com/woven-planet/l5kit). **Before starting, please download the [Lyft L5 Prediction Dataset 2020](https://self-driving.lyft.com/level5/prediction/) and follow [the instructions](https://github.com/woven-planet/l5kit#download-the-datasets) to correctly organise it.** The policy will be a deep neural network (DNN) which will be invoked by the SDV to obtain the next command to execute. More in details, you will be working with a CNN architecture based on ResNet50. ![model](../../docs/images/planning/model.svg) #### Inputs The network will receive a Bird's-Eye-View (BEV) representation of the scene surrounding the SDV as the only input. This has been rasterised in a fixed grid image to comply with the CNN input. L5Kit is shipped with various rasterisers. Each one of them captures different aspects of the scene (e.g. lanes or satellite view). This input representation is very similar to the one used in the [prediction competition](https://www.kaggle.com/c/lyft-motion-prediction-autonomous-vehicles/overview). Please refer to our [competition baseline notebook](../agent_motion_prediction/agent_motion_prediction.ipynb) and our [data format notebook](../visualisation/visualise_data.ipynb) if you want to learn more about it. #### Outputs The network outputs the driving signals required to fully control the SDV. In particular, this is a trajectory of XY and yaw displacements which can be used to move and steer the vehicle. After enough training, your model will be able to drive an agent along a specific route. Among others, it will do lane-following while respecting traffic lights. Let's now focus on how to train this model on the available data. ### Training using imitation learning The model is trained using a technique called *imitation learning*. We feed examples of expert driving experiences to the model and expect it to take the same actions as the driver did in those episodes. Imitation Learning is a subfield of supervised learning, in which a model tries to learn a function f: X -> Y describing given input / output pairs - one prominent example of this is image classification. This is also the same concept we use in our [motion prediction notebook](../agent_motion_prediction/agent_motion_prediction.ipynb), so feel free to check that out too. ##### Imitation learning limitations Imitation Learning is powerful, but it has a strong limitation. It's not trivial for a trained model to generalise well on out-of-distribution data. After training the model, we would like it to take full control and drive the AV in an autoregressive fashion (i.e. by following its own predictions). During evaluation it's very easy for errors to compound and make the AV drift away from the original distribution. In fact, during training our model has seen only good examples of driving. In particular, this means **almost perfect midlane following**. However, even a small constant displacement during evaluation can accumulate enough error to lead the AV completely out of its distribution in a matter of seconds. ![drifting](../../docs/images/planning/drifting.svg) This is a well known issue in SDV control and simulation discussed, among others, in [this article](https://ri.cmu.edu/pub_files/2010/5/Ross-AIStats10-paper.pdf). 
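To make the output format concrete, the sketch below shows how a flat prediction vector of length `3 * future_num_frames` can be viewed as one (x, y, yaw) displacement per future timestep. The variable names and the exact memory layout are illustrative assumptions; refer to the L5Kit model code for the layout it actually uses.

```
import numpy as np

future_num_frames = 12                             # illustrative value
flat_prediction = np.zeros(3 * future_num_frames)  # e.g. one row of the network output

# one (x, y, yaw) displacement per future timestep
trajectory = flat_prediction.reshape(future_num_frames, 3)
dx, dy, dyaw = trajectory[:, 0], trajectory[:, 1], trajectory[:, 2]
print(trajectory.shape)  # (12, 3)
```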
# Adding perturbations to the mix One of the simplest techniques to ensure good generalisation is **data augmentation**, which exposes the network to different versions of the input and helps it to generalise better to out-of-distribution situations. In our setting, we want to ensure that **our model can recover if it ends up slightly off the midlane it is following**. Following [the noteworthy approach from Waymo](https://arxiv.org/pdf/1812.03079.pdf), we can enrich the training set with **online trajectory perturbations**. These perturbations are kinematically feasible and affect both starting angle and position. A new ground truth trajectory is then generated to link this new starting point with the original trajectory end point. The perturbed starting point will be slightly rotated and offset from the original midlane, and the new trajectory will teach the model how to recover from this situation. ![perturbation](../../docs/images/planning/perturb.svg) In the following cell, we load the training data and leverage L5Kit to add these perturbations to our training set. We also plot the same example with and without perturbation. During training, our model will also see these examples and learn how to recover from positional and angular offsets. ``` from tempfile import gettempdir import matplotlib.pyplot as plt import numpy as np import torch from torch import nn, optim from torch.utils.data import DataLoader from tqdm import tqdm from l5kit.configs import load_config_data from l5kit.data import LocalDataManager, ChunkedDataset from l5kit.dataset import EgoDataset from l5kit.rasterization import build_rasterizer from l5kit.geometry import transform_points from l5kit.visualization import TARGET_POINTS_COLOR, draw_trajectory from l5kit.planning.rasterized.model import RasterizedPlanningModel from l5kit.kinematic import AckermanPerturbation from l5kit.random import GaussianRandomGenerator import os ``` ## Prepare data path and load cfg By setting the `L5KIT_DATA_FOLDER` variable, we can point the script to the folder where the data lies. Then, we load our config file with relative paths and other configurations (rasteriser, training params...).
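For orientation, the configuration keys this notebook reads later suggest a layout roughly like the sketch below. This is only an illustrative subset written as a Python dict, not the actual `config.yaml` shipped with the example, and all values are placeholders.

```
# Illustrative subset of the config structure accessed in the cells below; values are placeholders.
cfg_sketch = {
    "model_params": {"future_num_frames": 12},   # length of the predicted trajectory
    "train_data_loader": {
        "key": "scenes/train.zarr",              # dataset key resolved by LocalDataManager
        "perturb_probability": 0.5,              # fraction of training samples to perturb
        "batch_size": 12,
        "shuffle": True,
        "num_workers": 4,
    },
    "train_params": {"max_num_steps": 5},        # iterations of the training loop
}
```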
``` # set env variable for data os.environ["L5KIT_DATA_FOLDER"] = open("../dataset_dir.txt", "r").read().strip() dm = LocalDataManager(None) # get config cfg = load_config_data("./config.yaml") perturb_prob = cfg["train_data_loader"]["perturb_probability"] # rasterisation and perturbation rasterizer = build_rasterizer(cfg, dm) mean = np.array([0.0, 0.0, 0.0]) # lateral, longitudinal and angular std = np.array([0.5, 1.5, np.pi / 6]) perturbation = AckermanPerturbation( random_offset_generator=GaussianRandomGenerator(mean=mean, std=std), perturb_prob=perturb_prob) # ===== INIT DATASET train_zarr = ChunkedDataset(dm.require(cfg["train_data_loader"]["key"])).open() train_dataset = EgoDataset(cfg, train_zarr, rasterizer, perturbation) # plot same example with and without perturbation for perturbation_value in [1, 0]: perturbation.perturb_prob = perturbation_value data_ego = train_dataset[0] im_ego = rasterizer.to_rgb(data_ego["image"].transpose(1, 2, 0)) target_positions = transform_points(data_ego["target_positions"], data_ego["raster_from_agent"]) draw_trajectory(im_ego, target_positions, TARGET_POINTS_COLOR) plt.imshow(im_ego) plt.axis('off') plt.show() # before leaving, ensure perturb_prob is correct perturbation.perturb_prob = perturb_prob model = RasterizedPlanningModel( model_arch="resnet50", num_input_channels=rasterizer.num_channels(), num_targets=3 * cfg["model_params"]["future_num_frames"], # X, Y, Yaw * number of future states, weights_scaling= [1., 1., 1.], criterion=nn.MSELoss(reduction="none") ) print(model) ``` # Prepare for training Our `EgoDataset` inherits from PyTorch `Dataset`; so we can use it inside a `Dataloader` to enable multi-processing. ``` train_cfg = cfg["train_data_loader"] train_dataloader = DataLoader(train_dataset, shuffle=train_cfg["shuffle"], batch_size=train_cfg["batch_size"], num_workers=train_cfg["num_workers"]) device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") model = model.to(device) optimizer = optim.Adam(model.parameters(), lr=1e-3) print(train_dataset) ``` # Training loop Here, we purposely include a barebone training loop. Clearly, many more components can be added to enrich logging and improve performance. Still, the sheer size of our dataset ensures that a reasonable performance can be obtained even with this simple loop. ``` tr_it = iter(train_dataloader) progress_bar = tqdm(range(cfg["train_params"]["max_num_steps"])) losses_train = [] model.train() torch.set_grad_enabled(True) for _ in progress_bar: try: data = next(tr_it) except StopIteration: tr_it = iter(train_dataloader) data = next(tr_it) # Forward pass data = {k: v.to(device) for k, v in data.items()} result = model(data) loss = result["loss"] # Backward pass optimizer.zero_grad() loss.backward() optimizer.step() losses_train.append(loss.item()) progress_bar.set_description(f"loss: {loss.item()} loss(avg): {np.mean(losses_train)}") ``` ### Plot the train loss curve We can plot the train loss against the iterations (batch-wise) to check if our model has converged. ``` plt.plot(np.arange(len(losses_train)), losses_train, label="train loss") plt.legend() plt.show() ``` # Store the model Let's store the model as a torchscript. This format allows us to re-load the model and weights without requiring the class definition later. 
**Take note of the path, you will use it later to evaluate your planning model!** ``` to_save = torch.jit.script(model.cpu()) path_to_save = f"{gettempdir()}/planning_model.pt" to_save.save(path_to_save) print(f"MODEL STORED at {path_to_save}") ``` # Congratulations on training your first ML policy for planning! ### What's Next Now that your model is trained and safely stored, you can evaluate how it performs in two very different situations using our dedicated notebooks: ### [Open-loop evaluation](./open_loop_test.ipynb) In this setting the model **is not controlling the AV**, and predictions are used to compute metrics only. ### [Closed-loop evaluation](./closed_loop_test.ipynb) In this setting the model **is in full control of the AV's** future movements. ## Pre-trained models We provide a collection of pre-trained models for the planning task: - [model](https://lyft-l5-datasets-public.s3-us-west-2.amazonaws.com/models/planning_models/planning_model_20201208.pt) trained on train.zarr for 15 epochs; - [model](https://lyft-l5-datasets-public.s3-us-west-2.amazonaws.com/models/planning_models/planning_model_20201208_early.pt) trained on train.zarr for 2 epochs; - [model](https://lyft-l5-datasets-public.s3-us-west-2.amazonaws.com/models/planning_models/planning_model_20201208_nopt.pt) trained on train.zarr with perturbations disabled for 15 epochs; - [model](https://lyft-l5-datasets-public.s3-us-west-2.amazonaws.com/models/planning_models/planning_model_20201208_nopt_early.pt) trained on train.zarr with perturbations disabled for 2 epochs; We include two partially trained models to emphasise the important role of perturbations during training, especially during the first stage of training. To use one of the models, simply download the corresponding `.pt` file and load it in the evaluation notebooks.
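As a quick sketch of how one of these pre-trained files could be used: `torch.jit.load` restores a TorchScript module without needing the original class definition. The path below is a placeholder for wherever you saved the downloaded file.

```
import torch

# Placeholder path to a downloaded pre-trained planner.
pretrained_path = "planning_model_20201208.pt"

planner = torch.jit.load(pretrained_path, map_location="cpu")
planner.eval()
```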
# Download the Dataset Download the dataset from this link: https://www.kaggle.com/shanwizard/modest-museum-dataset ## Dataset Description Description of the contents of the dataset can be found here: https://shan18.github.io/MODEST-Museum-Dataset ### Mount Google Drive (Works only on Google Colab) For running the notebook on Google Colab, upload the dataset into your Google Drive and execute the two cells below ``` from google.colab import drive drive.mount('/content/gdrive') ``` Unzip the data from Google Drive into Colab ``` !unzip -qq '/content/gdrive/My Drive/modest_museum_dataset.zip' -d . ``` ### Check GPU ``` !nvidia-smi ``` # Install Packages ``` !pip install -r requirements.txt ``` # Import Packages ``` %matplotlib inline import random import matplotlib.pyplot as plt import torch from tensornet.data import MODESTMuseum from tensornet.utils import initialize_cuda, plot_metric from tensornet.model import DSResNet from tensornet.model.optimizer import sgd from tensornet.engine import LRFinder from tensornet.engine.ops import ModelCheckpoint, TensorBoard from tensornet.engine.ops.lr_scheduler import reduce_lr_on_plateau from loss import RmseBceDiceLoss, SsimDiceLoss from learner import ModelLearner ``` # Set Seed and Get GPU Availability ``` # Initialize CUDA and set random seed cuda, device = initialize_cuda(1) ``` # Data Fetch ``` DATASET_PATH = 'modest_museum_dataset' # Common parameter values for the dataset dataset_params = dict( cuda=cuda, num_workers=16, path=DATASET_PATH, hue_saturation_prob=0.25, contrast_prob=0.25, ) %%time # Create dataset dataset = MODESTMuseum( train_batch_size=256, val_batch_size=256, resize=(96, 96), **dataset_params ) # Create train data loader train_loader = dataset.loader(train=True) # Create val data loader val_loader = dataset.loader(train=False) ``` # Model Architecture and Summary ``` %%time model = DSResNet().to(device) model.summary({ k: v for k, v in dataset.image_size.items() if k in ['bg', 'bg_fg'] }) ``` # Find Initial Learning Rate Multiple LR range tests are done on the model to find the best initial learning rate.
## Range Test 1 ``` model = DSResNet().to(device) # Create model optimizer = sgd(model, 1e-7, 0.9) # Create optimizer criterion = RmseBceDiceLoss() # Create loss function # Find learning rate lr_finder = LRFinder(model, optimizer, criterion, device=device) lr_finder.range_test(train_loader, 400, learner=ModelLearner, start_lr=1e-7, end_lr=5, step_mode='exp') # Get best initial learning rate initial_lr = lr_finder.best_lr # Print learning rate and loss print('Learning Rate:', initial_lr) print('Loss:', lr_finder.best_metric) # Plot learning rate vs loss lr_finder.plot() # Reset graph lr_finder.reset() ``` ## Range Test 2 ``` model = DSResNet().to(device) # Create model optimizer = sgd(model, 1e-5, 0.9) # Create optimizer criterion = RmseBceDiceLoss() # Create loss function # Find learning rate lr_finder = LRFinder(model, optimizer, criterion, device=device) lr_finder.range_test(train_loader, 400, learner=ModelLearner, start_lr=1e-5, end_lr=1, step_mode='exp') # Get best initial learning rate initial_lr = lr_finder.best_lr # Print learning rate and loss print('Learning Rate:', initial_lr) print('Loss:', lr_finder.best_metric) # Plot learning rate vs loss lr_finder.plot() # Reset graph lr_finder.reset() ``` ## Range Test 3 ``` model = DSResNet().to(device) # Create model optimizer = sgd(model, 1e-4, 0.9) # Create optimizer criterion = RmseBceDiceLoss() # Create loss function # Find learning rate lr_finder = LRFinder(model, optimizer, criterion, device=device) lr_finder.range_test(train_loader, 200, learner=ModelLearner, start_lr=1e-4, end_lr=10, step_mode='exp') # Get best initial learning rate initial_lr = lr_finder.best_lr # Print learning rate and loss print('Learning Rate:', initial_lr) print('Loss:', lr_finder.best_metric) # Plot learning rate vs loss lr_finder.plot() # Reset graph lr_finder.reset() ``` ## Range Test 4 ``` model = DSResNet().to(device) # Create model optimizer = sgd(model, 1e-5, 0.9) # Create optimizer criterion = RmseBceDiceLoss() # Create loss function # Find learning rate lr_finder = LRFinder(model, optimizer, criterion, device=device) lr_finder.range_test(train_loader, 100, learner=ModelLearner, start_lr=1e-5, end_lr=2, step_mode='exp') # Get best initial learning rate initial_lr = lr_finder.best_lr # Print learning rate and loss print('Learning Rate:', initial_lr) print('Loss:', lr_finder.best_metric) # Plot learning rate vs loss lr_finder.plot() # Reset graph lr_finder.reset() ``` ## Range Test 5 ``` model = DSResNet().to(device) # Create model optimizer = sgd(model, 1e-7, 0.9) # Create optimizer criterion = RmseBceDiceLoss() # Create loss function # Find learning rate lr_finder = LRFinder(model, optimizer, criterion, device=device) lr_finder.range_test(train_loader, 400, learner=ModelLearner, start_lr=1e-7, end_lr=10, step_mode='exp') # Get best initial learning rate initial_lr = lr_finder.best_lr # Print learning rate and loss print('Learning Rate:', initial_lr) print('Loss:', lr_finder.best_metric) # Plot learning rate vs loss lr_finder.plot() # Reset graph lr_finder.reset() ```
# Chapter 10 - Bet Sizing ## Introduction Your ML algorithm can achieve high accuracy, but if you do not size your bets properly, your investment strategy will inevitably lose money. This notebook contains the worked exercises from the end of chapter 10 of "Advances in Financial Machine Learning" by Marcos López de Prado. The questions are restated here in this notebook, with the accompanying code solutions following directly below each question. All code in this notebook can be run as is and requires no external data, with the exception of the EF3M algorithm used in exercise 10.4 which can be found in `mlfinlab.bet_sizing.ef3m.py`. ## Conclusion This notebook demonstrates different bet sizing algorithms in the sample case of a *long-only* trading strategy. Simply counting and averaging the number of open bets at any given time led to a more aggressive bet sizing than the algorithm based on fit Gaussians to the distribution of open bets, as discussed in exercise 10.4. The EF3M algorithm implemented in `mlfinlab.bet_sizing.ef3m.py` and applied in exercise 10.4 provides a scalable and accurate way to determine the parameters of a distribution under the assumption that it is a mixture of two Gaussian distributions. ## Next Steps While the examples in these exercises were relatively simple, it could be useful to be able to determine the parameters of a distribution under the assumption that it is a mixture of $n$ Gaussian distributions. Generalizing the currently implemented EF3M algorithm to fit to any number of distributions is seen as possible future work. ---- ---- ## Exercises Below are the worked solutions to the exercises. All code can be run as is in this notebook, with the exception of exercise 10.4 which requires functions from ´ef3m.py´ (included in this repository). We begin with importing relavant packages and functions to this notebook. ``` # imports import numpy as np from scipy.stats import norm, moment import pandas as pd from sklearn.neighbors import KernelDensity import matplotlib.pyplot as plt from matplotlib import cm import seaborn as sns import datetime as dt from mlfinlab.bet_sizing.ef3m import M2N, raw_moment ``` ---- #### EXERCISE 10.1 Using the formulation in Section 10.3, plot the bet size ($m$) as a function of the maximum predicted probability ($\tilde{p}$) when $||X|| = 2, 3, ..., 10$. ``` num_classes_list = [i for i in range(2, 11, 1)] # array of number of classes, 2 to 10 n = 10_000 # number of points to plot colors = iter(cm.coolwarm(np.linspace(0,1,len(num_classes_list)))) fig_10_1, ax_10_1 = plt.subplots(figsize=(16, 10)) for num_classes in num_classes_list: min_prob, max_prob = 1 / num_classes, 1 # possible range for maximum predicted probability, [1/||X||, 1] P = np.linspace(min_prob, max_prob, n, endpoint=False) # range of maximum predicted probabilities to plot z = (P - min_prob) / (P*(1-P))**0.5 m = 2 * norm.cdf(z) - 1 ax_10_1.plot(P, m, label=f"||X||={num_classes}", linewidth=2, alpha=1, color=colors.__next__()) ax_10_1.set_ylabel("Bet Size $m=2Z[z]-1$", fontsize=16) ax_10_1.set_xlabel(r"Maximum Predicted Probability $\tilde{p}=max_i${$p_i$}", fontsize=16) ax_10_1.set_title("Figure 10.1: Bet Size vs. 
Maximum Predicted Probability", fontsize=18) ax_10_1.set_xticks([0.1*i for i in range(11)]) ax_10_1.set_yticks([0.1*i for i in range(11)]) ax_10_1.legend(loc="upper left", fontsize=14, title="Number of bet size labels", title_fontsize=12) ax_10_1.set_ylim((0,1.05)) ax_10_1.set_xlim((0, 1.05)) ax_10_1.grid(linewidth=1, linestyle=':') plt.show() ``` **Figure 10.1** shows the bet size vs. the maximum predicted probability given the number of discrete bet size labels. The left-side of each line represents the case in which the maximum probability across the size labels is $\frac{1}{||X||}$ and all probabilities are equal, leading to a bet size of zero. Note the bet size reaches the limiting value $1.0$ faster at greater values of $||X||$, since the greater number of alternative bets spreads the remaining probability much thinner, leading to a greater confidence in that with the maximum predicted probability. ---- #### EXERCISE 10.2 Draw 10,000 random numbers from a uniform distribution with bounds U[.5, 1.]. (Author's note: These exercises are intended to simulate dynamic bet sizing of a long-only strategy.) __(a)__ Compute bet sizes _m_ for $||X||=2$. __(b)__ Assign 10,000 consecutive calendar days to the bet sizes. __(c)__ Draw 10,000 random numbers from a uniform distribution with bounds U[1, 25]. __(d)__ Form a `pandas.Series` indexed by the dates in 2.b, and with values equal to the index shifted forward the number of days in 2.c. This is a `t1` object similar to the ones we used in Chapter 3. __(e)__ Compute the resulting average active bets, following Section 10.4. ``` # draw random numbers from a uniform distribution (all bets are long) np.random.seed(0) sample_size = 10_000 P_t = np.random.uniform(.5, 1., sample_size) # array of random from uniform dist. # 10.2(a) Compute bet sizes for ||X||=2 z = (P_t - 0.5) / (P_t*(1-P_t))**0.5 m = 2 * norm.cdf(z) - 1 # bet sizes, x=1 # 10.2(b) Assign 10,000 consecutive calendar days start_date = dt.datetime(2000, 1, 1) # starting at 01-JAN-2000 date_step = dt.timedelta(days=1) dates = np.array([start_date + i*date_step for i in range(sample_size)]) bet_sizes = pd.Series(data=m, index=dates) # 10.2(c) Draw 10,000 random numbers from a uniform distribution shift_list = np.random.uniform(1., 25., sample_size) shift_dt = np.array([dt.timedelta(days=d) for d in shift_list]) # 10.2(d) Create a pandas.Series object dates_shifted = dates + shift_dt t1 = pd.Series(data=dates_shifted, index=dates) # Collect the series into a single DataFrame. df_events = pd.concat(objs=[t1, bet_sizes], axis=1) df_events = df_events.rename(columns={0: 't1', 1: 'bet_size_prob'}) df_events['p'] = P_t df_events = df_events[['t1', 'p', 'bet_size_prob']] # 10.2(e) Compute the average active bets (sizes). avg_bet = pd.Series() active_bets = pd.Series() for idx, val in t1.iteritems(): active_idx = t1[(t1.index<=idx)&(t1>idx)].index num_active = len(active_idx) active_bets[idx] = num_active avg_bet[idx] = bet_sizes[active_idx].mean() df_events['num_active_bets'] = active_bets df_events['avg_active_bet_size'] = avg_bet print("The first 10 rows of the resulting DataFrame from Exercise 10.2:") display(df_events.head(10)) print("Summary statistics on the bet size columns:") display(df_events[['bet_size_prob', 'num_active_bets', 'avg_active_bet_size']].describe()) ``` ---- #### EXERCISE 10.3 Using the `t1` object from exercise 2.d: __(a)__ Determine the maximum number of concurrent long bets, $\bar{c_l}$. __(b)__ Determine the maximum number of concurrent short bets, $\bar{c_s}$. 
__(c)__ Derive the bet size as $m_t = c_{t,l}\frac{1}{\bar{c_l}} - c_{t,s}\frac{1}{\bar{c_s}}$, where $c_{t,l}$ is the number of concurrent long bets at time $t$, and $c_{t,s}$ is the number of concurrent short bets at time $t$. ``` # 10.3(a) max number of concurrent long bets df_events2 = df_events.copy() active_long = pd.Series() active_short = pd.Series() for idx in df_events2.index: # long bets are defined as having a prediction probability greater than or equal to 0.5 df_long_active_idx = set(df_events2[(df_events2.index<=idx) & (df_events2.t1>idx) & (df_events2.p>=0.5)].index) active_long[idx] = len(df_long_active_idx) # short bets are defined as having a prediction probability less than 0.5 df_short_active_idx = set(df_events2[(df_events2.index<=idx) & (df_events2.t1>idx) & (df_events2.p<0.5)].index) active_short[idx] = len(df_short_active_idx) print(f" 10.3(a) Maximum number of concurrent long bets: {active_long.max()}") # 10.3(b) max number of concurrent short bets # p[x=1]: U[0.5, 1], thus all bets are long, and the # number of concurrent short bets is always zero in this exercise. print(f" 10.3(b) Maximum number of concurrent short bets: {active_short.max()}") # 10.3(c) bet size as difference between fractions of concurrent long and short bets # Handle possible division by zero. avg_active_long = active_long/active_long.max() if active_long.max() > 0 else active_long avg_active_short = active_short/active_short.max() if active_short.max() > 0 else active_short bet_sizes_2 = avg_active_long - avg_active_short df_events2 = df_events2.assign(active_long=active_long, active_short=active_short, bet_size_budget=bet_sizes_2) display(df_events2.head(10)) # plot the frequency of different bet sizes fig_10_3, ax_10_3 = plt.subplots(figsize=(16, 8)) colors = iter(cm.coolwarm(np.linspace(0,1,3))) n_bins = 50 for i, col in enumerate(['bet_size_prob', 'avg_active_bet_size', 'bet_size_budget']): ax_10_3.hist(df_events2[col], bins=n_bins, alpha=0.75, color=colors.__next__(), label=col) ax_10_3.set_xticks([i/10 for i in range(11)]) ax_10_3.set_xlabel("Column value", fontsize=12) ax_10_3.set_ylabel("Value count", fontsize=12) ax_10_3.set_title("Figure 10.3: Visualization of distribution of values from exercise 10.3", fontsize=16) ax_10_3.legend(loc="upper left", fontsize=14, title="Column distribution", title_fontsize=12) fig_10_3.tight_layout() plt.show() ``` **Figure 10.3** visualizes the distributions of bet sizes found in the DataFrame as of exercise 10.3. `bet_size_prob` is the bet size calculated from predicted probilities, $m=2Z[z]-1$, as in section 10.3, and runs between $[0, 1]$ as seen in **Figure 10.1**. `avg_active_bet_size` is the average of the values of active bets in `bet_size_prob` at any given time. Values for `bet_size_budget` are calculated as described in section 10.2, and take on 20 discrete values since the maximum number of concurrent bets is 20 and the minimum is 1 (there is always at least one active bet). ---- #### EXERCISE 10.4 Using the `t1` object from exercise 2.d: __(a)__ Compute the series $c_t = c_{t,l} - c_{t,s}$, where $c_{t,l}$ is the number of concurrent long bets at time $t$, and $c_{t,s}$ is the number of concurrent short bets at time $t$. __(b)__ Fit a mixture of two Gaussians on {$c_t$}. You may want to use the method described in López de Prado and Foreman (2014). 
__(c)__ Derive the bet size as $$m_t = \begin{cases} \frac{F[c_t]-F[0]}{1-F[0]}, & \text{if } c_t \geq 0\\ \frac{F[c_t]-F[0]}{F[0]}, & \text{if } c_t < 0 \end{cases}$$ where $F[x]$ is the CDF of the fitted mixture of two Gaussians for a value of $x$. __(d)__ Explain how this series $\{m_t\}$ differs from the bet size computed in exercise 3. ``` # 10.4(a) compute the series c_t = c_{t,l} (all bets are long) # ====================================================== df_events2['c_t'] = df_events2.active_long - 0 # number of short bets is always zero fig_10_4, ax_10_4 = plt.subplots(figsize=(10,6)) ax_10_4a = sns.distplot(df_events2['c_t'], bins=20, kde=True, kde_kws={"bw":0.6}, norm_hist=False, ax=ax_10_4) ax_10_4a.set_xlabel('$c_t$', fontsize=12) ax_10_4.set_ylabel("Value counts", fontsize=12) ax_10_4a.set_title("Figure 10.4(a): Distribution of series $c_t$", fontsize=14) plt.show() ``` **Figure 10.4(a)** shows the distribution of the number of concurrent long bets at any given time $t$. Note the slightly longer tail to the left due to the smaller number of active bets at the start of the sequence. ``` # 10.4(b) fit a mixture of 2 Gaussians # compute the first 5 centered moments # ====================================================== print(f"Mean (first raw moment): {df_events2.c_t.mean()}") print("First 5 centered moments") mmnts = [moment(df_events2.c_t.to_numpy(), moment=i) for i in range(1, 6)] for i, mnt in enumerate(mmnts): print(f"E[r^{i+1}] = {mnt}") ``` The EF3M algorithm that we plan to implement uses the first 5 raw moments, so we must convert the centered moments (just previously calculated) to raw moments using the `raw_moment` function from `ef3m.py`. ``` # Calculate raw moments from centered moments # ====================================================== raw_mmnts = raw_moment(central_moments=mmnts, dist_mean=df_events2.c_t.mean()) for i, mnt in enumerate(raw_mmnts): print(f"E_Raw[r^{i+1}]={mnt}") ``` Now that we have the first 5 raw moments, we can apply the EF3M algorithm. We use variant 2 since we have the 5th moment, and it converges faster in practice. While the first 3 moments are fit exactly, there is not a unique solution to this, so we have to make multiple runs to find the most likely value. We visualize the results of the fitted parameters in histograms, and use a kernel density estimate to identify the most likely value for each parameter.
``` # On an Intel i7-7700K with 32GB RAM, execution times are as follows: # 10 runs will take approximately 6 minutes # 100 runs will take approximately 1 hour # 1000 runs will take approximately 9 hours # ====================================================== n_runs = 50 m2n = M2N(raw_mmnts, epsilon=10**-5, factor=5, n_runs=n_runs, variant=2, max_iter=10_000_000) df_10_4 = m2n.mp_fit() # Visualize results and determine the most likely values from a KDE plot # ====================================================== fig_10_4b, ax_10_4b = plt.subplots(nrows=5, ncols=1, figsize=(8,12)) cols = ['mu_1', 'mu_2', 'sigma_1', 'sigma_2', 'p_1'] bins = int(n_runs / 4) # to minimize number of bins without results, choose at own discretion fit_parameters = [] print(f"=== Values chosen based on the mode of the distribution of results from EF3M ({n_runs} runs) ===") for col_i, ax in enumerate(ax_10_4b.flatten()): col = cols[col_i] df = df_10_4.copy() df[col+'_bin'] = pd.cut(df[col], bins=bins) df = df.groupby([col+'_bin']).count() ax = sns.distplot(df_10_4[col], bins=bins, kde=True, ax=ax) dd = ax.get_lines()[0].get_data() most_probable_val = dd[0][np.argmax(dd[1])] fit_parameters.append(most_probable_val) ax.set_ylabel("Value count") ax.set_xlabel(f"Estimated value of parameter: {col}") ax.axvline(most_probable_val, color='red', alpha=0.6) ax.set_title(f"{col}: {round(most_probable_val,3)}", color='red') print(f"Most probable estimate for parameter '{col}':", round(dd[0][np.argmax(dd[1])], 3)) fig_10_4b.suptitle(f"Figure 10.4(b): Results of running {n_runs} EF3M fitting rounds", fontsize=14) fig_10_4b.tight_layout(pad=3.5) plt.show() ``` **Figure 10.4(b)** shows the distribution of the parameter estimates from the fitting rounds. The red vertical line indicates the most likely value for each parameter according to the KDE, with the estimated value stated above the subplot in red. The CDF of a [mixture of $n$ distributions](https://en.wikipedia.org/wiki/Mixture_distribution), $F_{mixture}(x)$, can be represented as the weighted sum of the individual distributions: $$ F_{mixture}(x) = \sum_{i=1}^{n}{w_i F_{i, norm}(x)}$$ Where $w_i$ are the weights corresponding to each of the individual cumulative distribution functions of a normal distribution, $F_{i,norm}(x)$. Thus, for the mixture of $n=2$ distributions in this question, the CDF of the mixture, $F(x)$, is: $$ F(x) = p_1 F_{norm}(x, \mu_1, \sigma_1) + (1-p_1) F_{norm}(x, \mu_2, \sigma_2) $$ Where $F_{norm}(x, \mu_i, \sigma_i)$ is the cumulative distribution evaluated at $x$ of a Normal distribution with parameters $\mu_i$ and $\sigma_i$, and $p_1$ is the probability of a given random sample being drawn from the first distribution. ``` # 10.4(c) Calculating the bet size using the mixture of 2 Gaussians # ====================================================== def cdf_mixture(x, parameters): # the CDF of a mixture of 2 normal distributions, evaluated at x # :param x: (float) x-value # :param parameters: (list) mixture parameters, [mu1, mu2, sigma1, sigma2, p1] # :return: (float) CDF of the mixture # =================================== mu1, mu2, sigma1, sigma2, p1 = parameters # for clarity return p1*norm.cdf(x, mu1, sigma1) + (1-p1)*norm.cdf(x, mu2, sigma2) def bet_size_mixed(c_t, parameters): # return the bet size based on the description provided in # question 10.4(c). 
# :param c_t: (int) difference between the number of concurrent long bets and short bets # :param parameters: (list) mixture parameters, [mu1, mu2, sigma1, sigma2, p1] # :return: (float) bet size # ========================= if c_t >= 0: return ( cdf_mixture(c_t, parameters) - cdf_mixture(0, parameters) ) / ( 1 - cdf_mixture(0, parameters) ) else: return ( cdf_mixture(c_t, parameters) - cdf_mixture(0, parameters) ) / cdf_mixture(0, parameters) df_events2['bet_size_reserve'] = df_events2.c_t.apply(lambda c: bet_size_mixed(c, fit_parameters)) fig_10_4c, ax_10_4c = plt.subplots(figsize=(10,6)) for c in ['bet_size_prob', 'avg_active_bet_size', 'bet_size_budget', 'bet_size_reserve']: ax_10_4c.hist(df_events2[c].to_numpy(), bins=100, label=c, alpha=0.7) ax_10_4c.legend(loc='upper left', fontsize=12, title="Bet size type", title_fontsize=10) ax_10_4c.set_xlabel("Bet Size, $m_t$", fontsize=12) ax_10_4c.set_ylabel("Value count", fontsize=12) ax_10_4c.set_title("Figure 10.4(c): Bet Size distributions from exercises 10.3 and 10.4", fontsize=14) display(df_events2[['c_t', 'bet_size_prob', 'avg_active_bet_size', 'bet_size_budget', 'bet_size_reserve']].describe()) ``` **Figure 10.4(c)** shows the distribution of bet sizes as calculated in exercises 10.2, 10.3 and 10.4. `bet_size_budget` is the number of active bets divided by the maximum number of active bets, while `bet_size_reserve` contains the bet sizes calculated using the fitted mixture of 2 Gaussian distributions. Note that both take on 20 discrete values due to the underlying data as previously discussed. **10.4(d) Discussion** The bet size distributions calculated in exercise 3, `bet_size_budget`, and exercise 4, `bet_size_reserve`, are both made up of discrete values. Since the series $\{c_t\}$ is made up of integers between 1 and 20, the bet size from exercise 3, $m_t=c_{t,l}\frac{1}{\bar{c_l}}$, is also a set of discrete values bounded by $[\frac{1}{\bar{c_l}}, 1]$ (since there is always at least 1 active bet for any given $t$). However, $98\%$ of all bet sizes fall between $[0.45, 0.9]$, with a mean at $0.67$. In exercise 4 the bet size is calculated using $c_t$ as an input, which results in the bet sizes being a series composed of 20 unique values. Here the bet size values are bounded by $(0, 1)$ but are spread out more evenly across the range than in exercise 3; here $98\%$ of the bet sizes fall between $[0.014, 0.998]$, with a lower mean of $0.50$. Even though we are examining a *long-only* betting strategy, the bet sizes calculated in exercise 4 typically get much closer to zero (i.e. not placing the bet at all), whereas in exercise 3 $99\%$ of all bet sizes are at least $0.45$. ``` print("Quantiles of the bet size values as calculated in the previous exercises:") display(pd.concat([df_events2.quantile([0.001, 0.01, 0.05, 0.25, 0.5, 0.75, 0.95, 0.99, 0.999]), df_events2.mean().to_frame(name='Mean').transpose()])) ``` ---- #### EXERCISE 10.5 Repeat exercise 1, where you discretize $m$ with a `stepSize=.01`, `stepSize=.05`, and `stepSize=.1`.
``` num_classes_list = [i for i in range(2, 11, 1)] # array of number of classes, 2 to 10 n = 10000 # number of points to plot fig_10_5, ax_10_5 = plt.subplots(2, 2, figsize=(20, 16)) ax_10_5 = fig_10_5.get_axes() d_list = [None, 0.01, 0.05, 0.1] d = d_list[2] sub_fig_num = ['i', 'ii', 'iii', 'iv'] for i, axi in enumerate(ax_10_5): colors = iter(cm.coolwarm(np.linspace(0,1,len(num_classes_list)))) for num_classes in num_classes_list: d = d_list[i] min_prob, max_prob = 1 / num_classes, 1 # possible range for maximum predicted probability, [1/||X||, 1] P = np.linspace(min_prob, max_prob, n, endpoint=False) # range of maximum predicted probabilities to plot z = (P - min_prob) / (P*(1-P))**0.5 m = 2 * norm.cdf(z) - 1 if not isinstance(d, type(None)): m = (m/d).round()*d axi.plot(P, m, label=f"||X||={num_classes}", linewidth=2, alpha=1, color=colors.__next__()) axi.set_ylabel("Bet Size $m=2Z[z]-1$", fontsize=14) axi.set_xlabel(r"Maximum Predicted Probability $\tilde{p}=max_i${$p_i$}", fontsize=14) axi.set_xticks([0.1*i for i in range(11)]) axi.set_yticks([0.1*i for i in range(11)]) axi.legend(loc="upper left", fontsize=10, title="Number of bet size labels", title_fontsize=11) axi.set_ylim((0,1.05)) axi.set_xlim((0, 1.05)) if not isinstance(d, type(None)): axi.set_title(f"({sub_fig_num[i]}) Discretized Bet Size, d={d}", fontsize=16) else: axi.set_title(f"({sub_fig_num[i]}) Continuous Bet Size", fontsize=16) axi.grid(linewidth=1, linestyle=':') fig_10_5.suptitle("Figure 10.5: Plots where bet size $m$ is discretized", fontsize=18) fig_10_5.tight_layout(pad=4) plt.show() ``` **Figure 10.5** shows the bet size calculated from predicted probabilities using (i) a continuous bet size, as well as bet sizes discretized by step sizes of (ii) $d=0.01$, (iii) $d=0.05$, and (iv) $d=0.1$. ---- #### EXERCISE 10.6 Rewrite the equations in Section 10.6, so that the bet size is determined by a power function rather than a sigmoid function. We can substitute a power function to calculate bet size, $\tilde{m}$: $$\tilde{m}[\omega, x] = sgn[x]|x|^\omega$$ $L[f_i, \omega, \tilde{m}]$, the inverse function of $\tilde{m}[\omega, x]$ with respect to the market price $p_t$, can be rewritten as: $$L[f_i, \omega, \tilde{m}] = f_i - sgn[\tilde{m}]|\tilde{m}|^{1/\omega}$$ The inverse of $\tilde{m}[\omega, x]$ with respect to $\omega$ can be rewritten as: $$\omega = \frac{log[\frac{\tilde{m}}{sgn(x)}]}{log[|x|]}$$ Where $x = f_i - p_t$ is still the divergence between the current market price, $p_t$, and the price forecast, $f_i$. ---- #### EXERCISE 10.7 Modify Snippet 10.4 so that in implements the equations you derived in exercise 6. 
``` # Snippet 10.4, modified to use a power function for the Bet Size # =============================================================== # pos : current position # tPos : target position # w : coefficient for regulating width of the bet size function (sigmoid, power) # f : forecast price # mP : market price # x : divergence, f - mP # maxPos : maximum absolute position size # =============================================================== def betSize_power(w, x): # returns the bet size given the price divergence sgn = np.sign(x) return sgn * abs(x)**w def getTPos_power(w, f, mP, maxPos): # returns the target position size associated with the given forecast price return int( betSize_power(w, f-mP)*maxPos ) def invPrice_power(f, w, m): # inverse function of bet size with respect to the market price sgn = np.sign(m) return f - sgn*abs(m)**(1/w) def limitPrice_power(tPos, pos, f, w, maxPos): # returns the limit price given forecast price sgn = np.sign(tPos-pos) lP = 0 for j in range(abs(pos+sgn), abs(tPos+1)): lP += invPrice_power(f, w, j/float(maxPos)) lP = lP / (tPos-pos) return lP def getW_power(x, m): # inverse function of the bet size with respect to the 'w' coefficient return np.log(m/np.sign(x)) / np.log(abs(x)) # a short script to check calculations forwards and backwards # =============================================================== mP, f, wParams = 100, 115, {'divergence': 10, 'm': 0.95} w = getW_power(wParams['divergence'], wParams['m']) # calibrate w # checking forward and backward calculations m_test = betSize_power(w, f-mP) mP_test = invPrice_power(f, w, m_test) w_test = getW_power(f-mP, m_test) print(f"Market price: {mP}; Result of inverse price: {mP_test}; Diff: {abs(mP-mP_test)}") print(f"w: {w}; Result of inverse w: {w_test}; Diff: {abs(w-w_test)}") # setup data for replicating Figure 10.3 mP, f, wParams = 100, 115, {'divergence': 10, 'm': 0.95} n_points = 1000 X = np.linspace(-1.0, 1.0, n_points) w = 2 bet_sizes_power = np.array([betSize_power(w, xi) for xi in X]) # plotting fig_10_7, ax_10_7 = plt.subplots(figsize=(10,8)) ax_10_7.plot(X, bet_sizes_power, label='$f[x]=sgn[x]|x|^2$', color='blue', linestyle='-') ax_10_7.set_xlabel("$x$", fontsize=16) ax_10_7.set_ylabel("$f[x]$", fontsize=16) ax_10_7.set_xlim((-1, 1)) ax_10_7.set_ylim((-1, 1)) fig_10_7.suptitle("Figure 10.7: Bet Sizes vs. Price Divergence", fontsize=18) plt.legend(loc='upper left', fontsize=16) fig_10_7.tight_layout(pad=3) plt.show() ``` **Figure 10.7** shows a plot of bet size vs. price divergence using the same parameters as Figure 10.3 on page 148 of "Advances in Financial Machine Learning".
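To close the loop on the modified snippet, here is a small hypothetical usage example; the forecast, market price and maximum position are arbitrary illustrative numbers, and `w` is recalibrated from `wParams` because the plotting code above overwrote it with `w = 2`.

```
# Hypothetical usage of the power-function bet sizing defined above.
mP, f, maxPos = 100, 115, 100  # market price, forecast, maximum position (illustrative)
w = getW_power(wParams['divergence'], wParams['m'])  # recalibrate the power coefficient

bet = betSize_power(w, f - mP)          # bet size implied by the current divergence
tPos = getTPos_power(w, f, mP, maxPos)  # corresponding target position
print(f"Bet size: {bet:.3f}; target position: {tPos} of {maxPos}")
```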
# Discrimination Threshold Analysis This is a discrimination threshold analysis on selected better-performing decision trees. This was determined in the notebook "Classification Report Selected Decision Trees.ipynb". The data is from the team's "MLTable1". It uses Yellowbrick's discrimination threshold. Link: https://www.scikit-yb.org/en/latest/api/classifier/threshold.html ``` #imports import pandas as pd import boto3 from sklearn.tree import DecisionTreeClassifier from sklearn.preprocessing import StandardScaler from sklearn.model_selection import train_test_split from yellowbrick.classifier import ClassificationReport pd.set_option('display.max_columns', 200) %matplotlib inline from yellowbrick.classifier import DiscriminationThreshold from yellowbrick.classifier.threshold import discrimination_threshold ``` ## Load the Data ``` #load in the csvs #TODO For Team: enter the credentials below to run S3_Key_id='' S3_Secret_key='' def pull_data(Key_id, Secret_key, file): """ Function which CJ wrote to pull data from S3 """ BUCKET_NAME = "gtown-wildfire-ds" OBJECT_KEY = file client = boto3.client( 's3', aws_access_key_id= Key_id, aws_secret_access_key= Secret_key) obj = client.get_object(Bucket= BUCKET_NAME, Key= OBJECT_KEY) file_df = pd.read_csv(obj['Body']) return (file_df) #Pull in the firms and scan df file = 'MLTable1.csv' df = pull_data(S3_Key_id, S3_Secret_key, file) df.head() #unnamed seems to be a column brought in that we don't want. Drop it. df = df.drop(['Unnamed: 0'], axis=1) df.head() df.shape df['FIRE_DETECTED'].value_counts() ``` ## ML Prep ``` #separate data into labels and features X = df.drop('FIRE_DETECTED', axis=1) y = df['FIRE_DETECTED'] #train test splitting of data #common syntax here is to use X_train, X_test, y_train, y_test X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) #create our scaler to get optimized results sc = StandardScaler() #runs the standard scaler with default settings. You can refine this; see the docs #fit the scaler on the training data, then apply the same transform to the test data X_train = sc.fit_transform(X_train) X_test = sc.transform(X_test) ``` ## Discrimination Threshold Start with discrimination threshold from highest "Fire" recall score with a (relatively) higher F1 score from the "Classification Report Selected Decision Trees.ipynb" ``` #start with the model below model = DecisionTreeClassifier(criterion='entropy', splitter='random') visualizer = DiscriminationThreshold(model) visualizer.fit(X, y) # Fit the data to the visualizer visualizer.show() # Finalize and render the figure #try another model? model = DecisionTreeClassifier(splitter='random') visualizer = DiscriminationThreshold(model) visualizer.fit(X, y) # Fit the data to the visualizer visualizer.show() # Finalize and render the figure ```
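The `discrimination_threshold` quick method imported at the top of this notebook can produce the same plot in a single call; the sketch below shows the idea (keyword arguments may vary slightly between Yellowbrick versions).

```
# One-call alternative using Yellowbrick's quick method imported above.
model = DecisionTreeClassifier(criterion='entropy', splitter='random')
visualizer = discrimination_threshold(model, X, y)  # fits the model across thresholds and renders the plot
```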
<a href="https://colab.research.google.com/github/gptix/DS-Unit-2-Applied-Modeling/blob/master/module2/Follow_LS_DS10_232.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Lambda School Data Science *Unit 2, Sprint 3, Module 2* --- # Wrangle ML datasets 🍌 In today's lesson, we’ll work with a dataset of [3 Million Instacart Orders, Open Sourced](https://tech.instacart.com/3-million-instacart-orders-open-sourced-d40d29ead6f2)! ### Setup ``` # Download data import requests def download(url): filename = url.split('/')[-1] print(f'Downloading {url}') r = requests.get(url) with open(filename, 'wb') as f: f.write(r.content) print(f'Downloaded {filename}') download('https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz') # Uncompress data import tarfile tarfile.open('instacart_online_grocery_shopping_2017_05_01.tar.gz').extractall() # Change directory to where the data was uncompressed %cd instacart_2017_05_01 # Print the csv filenames from glob import glob for filename in glob('*.csv'): print(filename) ``` ### For each csv file, look at its shape & head ``` import pandas as pd from IPython.display import display def preview(): for filename in glob('*.csv'): df = pd.read_csv(filename) print(filename, df.shape) display(df.head()) print('\n') preview() ``` ## The original task was complex ... [The Kaggle competition said,](https://www.kaggle.com/c/instacart-market-basket-analysis/data): > The dataset for this competition is a relational set of files describing customers' orders over time. The goal of the competition is to predict which products will be in a user's next order. > orders.csv: This file tells to which set (prior, train, test) an order belongs. You are predicting reordered items only for the test set orders. Each row in the submission is an order_id from the test set, followed by product_id(s) predicted to be reordered. > sample_submission.csv: ``` order_id,products 17,39276 29259 34,39276 29259 137,39276 29259 182,39276 29259 257,39276 29259 ``` ## ... but we can simplify! Simplify the question, from "Which products will be reordered?" (Multi-class, [multi-label](https://en.wikipedia.org/wiki/Multi-label_classification) classification) to **"Will customers reorder this one product?"** (Binary classification) Which product? How about **the most frequently ordered product?** # Questions: - What is the most frequently ordered product? - How often is this product included in a customer's next order? - Which customers have ordered this product before? - How can we get a subset of data, just for these customers? - What features can we engineer? We want to predict, will these customers reorder this product on their next order? ## What was the most frequently ordered product? ``` prior = pd.read_csv('order_products__prior.csv') prior['product_id'].mode() prior['product_id'].value_counts() train = pd.read_csv('order_products__train.csv') train['product_id'].mode() train['product_id'].value_counts() products = pd.read_csv('products.csv') products[products['product_id']==24852] prior = pd.merge(prior, products, on='product_id') ``` ## How often are bananas included in a customer's next order? 
There are [three sets of data](https://gist.github.com/jeremystan/c3b39d947d9b88b3ccff3147dbcf6c6b): > "prior": orders prior to that users most recent order (3.2m orders) "train": training data supplied to participants (131k orders) "test": test data reserved for machine learning competitions (75k orders) Customers' next orders are in the "train" and "test" sets. (The "prior" set has the orders prior to the most recent orders.) We can't use the "test" set here, because we don't have its labels (only Kaggle & Instacart have them), so we don't know what products were bought in the "test" set orders. So, we'll use the "train" set. It currently has one row per product_id and multiple rows per order_id. But we don't want that. Instead we want one row per order_id, with a binary column: "Did the order include bananas?" Let's wrangle! ## Technique #1 ``` df = train.head(16).copy() df['bananas'] = df['product_id'] == 24852 df.groupby('order_id')['bananas'].any() train['bananas'] = train['product_id'] == 24852 train.groupby('order_id')['bananas'].any() train_wrangled = train.groupby('order_id')['bananas'].any().reset_index() target = 'bananas' train_wrangled[target].value_counts(normalize=True) ``` ## Technique #2 ``` df # Group by order_id, get a list of product_ids for that order df.groupby('order_id')['product_id'].apply(list) # Group by order_id, get a list of product_ids for that order, check if that list includes bananas def includes_bananas(product_ids): return 24852 in list(product_ids) df.groupby('order_id')['product_id'].apply(includes_bananas) train = (train .groupby('order_id') .agg({'product_id': includes_bananas}) .reset_index() .rename(columns={'product_id': 'bananas'})) target = 'bananas' train[target].value_counts(normalize=True) ``` ## Which customers have ordered this product before? - Customers are identified by `user_id` - Products are identified by `product_id` Do we have a table with both these id's? (If not, how can we combine this information?) ``` preview() ``` Answer: No, we don't have a table with both these id's. But: - `orders.csv` has `user_id` and `order_id` - `order_products__prior.csv` has `order_id` and `product_id` - `order_products__train.csv` has `order_id` and `product_id` too ``` # In the order_products__prior table, which orders included bananas? BANANAS = 24852 prior[prior.product_id==BANANAS] banana_prior_order_ids = prior[prior.product_id==BANANAS].order_id # Look at the orders table, which orders included bananas? orders = pd.read_csv('orders.csv') orders.sample(n=5) # In the orders table, which orders included bananas? orders[orders.order_id.isin(banana_prior_order_ids)] # Check this order id, confirm that yes it includes bananas prior[prior.order_id==738281] banana_orders = orders[orders.order_id.isin(banana_prior_order_ids)] # In the orders table, which users have bought bananas? banana_user_ids = banana_orders.user_id.unique() ``` ## How can we get a subset of data, just for these customers? We want *all* the orders from customers who have *ever* bought bananas. (And *none* of the orders from customers who have *never* bought bananas.) 
``` # orders table, shape before getting subset orders.shape # orders table, shape after getting subset orders = orders[orders.user_id.isin(banana_user_ids)] orders.shape # IDs of *all* the orders from customers who have *ever* bought bananas subset_order_ids = orders.order_id.unique() # order_products__prior table, shape before getting subset prior.shape # order_products__prior table, shape after getting subset prior = prior[prior.order_id.isin(subset_order_ids)] prior.shape # order_products__train table, shape before getting subset train.shape # order_products__train table, shape after getting subset train = train[train.order_id.isin(subset_order_ids)] train.shape # In this subset, how often were bananas reordered in the customer's most recent order? train[target].value_counts(normalize=True) ``` ## What features can we engineer? We want to predict, will these customers reorder bananas on their next order? - Other fruit they buy - Time between banana orders - Frequency of banana orders by a customer - Organic or not - Time of day ``` preview() train.shape train.head() # Merge user_id, order_number, order_dow, order_hour_of_day, and days_since_prior_order # with the training data train = pd.merge(train, orders) train.head() ``` - Frequency of banana orders - % of orders - Every n days on average - Total orders - Recency of banana orders - n of orders - n days ``` USER = 61911 prior = pd.merge(prior, orders[['order_id', 'user_id']]) prior['bananas'] = prior.product_id == BANANAS # This user has ordered 196 products, df = prior[prior.user_id==USER] df # This person has ordered bananas six times df['bananas'].sum() df[df['bananas']] # How many unique orders for this user? df['order_id'].nunique() df['bananas'].sum() / df['order_id'].nunique() ```
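One way to turn the single-user exploration above into features for every customer is a groupby over the merged `prior` table; the sketch below is our own illustration, and the feature names are invented for it.

```
# Sketch: per-user banana features from the merged `prior` table
# (the user_id and bananas columns were added in the cells above).
user_features = (
    prior.groupby('user_id')
         .agg(total_orders=('order_id', 'nunique'),
              banana_orders=('bananas', 'sum'))
)

# Fraction of each user's orders that included bananas.
user_features['banana_order_rate'] = (
    user_features['banana_orders'] / user_features['total_orders']
)
user_features.head()
```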
# Keyboard shortcuts In this notebook, you'll get some practice using keyboard shortcuts. These are key to becoming proficient at using notebooks and will greatly increase your work speed. First up, switching between edit mode and command mode. Edit mode allows you to type into cells, while command mode uses key presses to execute commands such as creating new cells and opening the command palette. When you select a cell, you can tell which mode you're currently working in by the color of the box around the cell. In edit mode, the box and thick left border are colored green. In command mode, they are colored blue. Also in edit mode, you should see a cursor in the cell itself. By default, when you create a new cell or move to the next one, you'll be in command mode. To enter edit mode, press Enter/Return. To go back from edit mode to command mode, press Escape. > **Exercise:** Click on this cell, then press Shift + Enter to get to the next cell. Switch between edit and command mode a few times. ``` # mode practice ``` ## Help with commands If you ever need to look up a command, you can bring up the list of shortcuts by pressing `H` in command mode. The keyboard shortcuts are also available above in the Help menu. Go ahead and try it now. ## Creating new cells One of the most common commands is creating new cells. You can create a cell above the current cell by pressing `A` in command mode. Pressing `B` will create a cell below the currently selected cell. > **Exercise:** Create a cell above this cell using the keyboard command. > **Exercise:** Create a cell below this cell using the keyboard command. ## Switching between Markdown and code With keyboard shortcuts, it is quick and simple to switch between Markdown and code cells. To change from Markdown to a code cell, press `Y`. To switch from code to Markdown, press `M`. > **Exercise:** Switch the cell below between Markdown and code cells. ``` ## Practice here def fibo(n): # Recursive Fibonacci sequence! if n == 0: return 0 elif n == 1: return 1 return fibo(n-1) + fibo(n-2) ``` ## Line numbers A lot of times it is helpful to number the lines in your code for debugging purposes. You can turn on numbers by pressing `L` (in command mode of course) on a code cell. > **Exercise:** Turn line numbers on and off in the above code cell. ## Deleting cells Deleting cells is done by pressing `D` twice in a row, so `D`, `D`. This is to prevent accidental deletions: you have to press the key twice! > **Exercise:** Delete the cell below. ``` # DELETE ME ``` ## Saving the notebook Notebooks are autosaved every once in a while, but you'll often want to save your work between those times. To save the notebook, press `S`. So easy! ## The Command Palette You can easily access the command palette by pressing Shift + Control/Command + `P`. > **Note:** This won't work in Firefox and Internet Explorer unfortunately. There is already a keyboard shortcut assigned to those keys in those browsers. However, it does work in Chrome and Safari. This will bring up the command palette where you can search for commands that aren't available through the keyboard shortcuts. For instance, there are buttons on the toolbar that move cells up and down (the up and down arrows), but there aren't corresponding keyboard shortcuts. To move a cell down, you can open up the command palette and type in "move" which will bring up the move commands. > **Exercise:** Use the command palette to move the cell below down one position.
``` # Move this cell down # below this cell ``` ## Finishing up There is plenty more you can do, such as copying, cutting, and pasting cells. I suggest getting used to the keyboard shortcuts; you'll be much quicker at working in notebooks. When you become proficient with them, you'll rarely need to move your hands away from the keyboard, greatly speeding up your work. Remember, if you ever need to see the shortcuts, just press `H` in command mode.
# Distribution Strategy Design Pattern This notebook demonstrates how to use distributed training with Keras. ``` import datetime import os import matplotlib.pyplot as plt import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers from tensorflow import feature_column as fc # Determine CSV, label, and key columns # Create list of string column headers, make sure order matches. CSV_COLUMNS = ["weight_pounds", "is_male", "mother_age", "plurality", "gestation_weeks", "mother_race"] # Add string name for label column LABEL_COLUMN = "weight_pounds" # Set default values for each CSV column as a list of lists. # Treat is_male and plurality as strings. DEFAULTS = [[0.0], ["null"], [0.0], ["null"], [0.0], ["null"]] def features_and_labels(row_data): """Splits features and labels from feature dictionary. Args: row_data: Dictionary of CSV column names and tensor values. Returns: Dictionary of feature tensors and label tensor. """ label = row_data.pop(LABEL_COLUMN) return row_data, label def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL): """Loads dataset using the tf.data API from CSV files. Args: pattern: str, file pattern to glob into list of files. batch_size: int, the number of examples per batch. mode: tf.estimator.ModeKeys to determine if training or evaluating. Returns: `Dataset` object. """ # Make a CSV dataset dataset = tf.data.experimental.make_csv_dataset( file_pattern=pattern, batch_size=batch_size, column_names=CSV_COLUMNS, column_defaults=DEFAULTS) # Map dataset to features and label dataset = dataset.map(map_func=features_and_labels) # features, label # Shuffle and repeat for training if mode == tf.estimator.ModeKeys.TRAIN: dataset = dataset.shuffle(buffer_size=1000).repeat() # Take advantage of multi-threading; 1=AUTOTUNE dataset = dataset.prefetch(buffer_size=1) return dataset ``` Build model as before. ``` def create_input_layers(): """Creates dictionary of input layers for each feature. Returns: Dictionary of `tf.Keras.layers.Input` layers for each feature. """ inputs = { colname: tf.keras.layers.Input( name=colname, shape=(), dtype="float32") for colname in ["mother_age", "gestation_weeks"]} inputs.update({ colname: tf.keras.layers.Input( name=colname, shape=(), dtype="string") for colname in ["is_male", "plurality", "mother_race"]}) return inputs ``` And set up feature columns. 
``` def categorical_fc(name, values): cat_column = fc.categorical_column_with_vocabulary_list( key=name, vocabulary_list=values) return fc.indicator_column(categorical_column=cat_column) def create_feature_columns(): feature_columns = { colname : fc.numeric_column(key=colname) for colname in ["mother_age", "gestation_weeks"] } feature_columns["is_male"] = categorical_fc( "is_male", ["True", "False", "Unknown"]) feature_columns["plurality"] = categorical_fc( "plurality", ["Single(1)", "Twins(2)", "Triplets(3)", "Quadruplets(4)", "Quintuplets(5)", "Multiple(2+)"]) feature_columns["mother_race"] = fc.indicator_column( fc.categorical_column_with_hash_bucket( "mother_race", hash_bucket_size=17, dtype=tf.dtypes.string)) feature_columns["gender_x_plurality"] = fc.embedding_column( fc.crossed_column(["is_male", "plurality"], hash_bucket_size=18), dimension=2) return feature_columns def get_model_outputs(inputs): # Create two hidden layers of [64, 32] just like the BQML DNN h1 = layers.Dense(64, activation="relu", name="h1")(inputs) h2 = layers.Dense(32, activation="relu", name="h2")(h1) # Final output is a linear activation because this is regression output = layers.Dense(units=1, activation="linear", name="weight")(h2) return output def rmse(y_true, y_pred): return tf.sqrt(tf.reduce_mean((y_pred - y_true) ** 2)) ``` ## Build the model and set up distribution strategy Next, we'll combine the components of the model above to build the DNN model. Here is also where we'll define the distribution strategy. To do that, we'll place the building of the model inside the scope of the distribution strategy. Notice the output after executing the cell below. We'll see ``` INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3') ``` This indicates that we're using the MirroredStrategy on 4 GPUs. That is because my machine has 4 GPUs. Your output may look different depending on how many GPUs you have on your device. ``` def build_dnn_model(): """Builds simple DNN using Keras Functional API. Returns: `tf.keras.models.Model` object. """ # Create input layer inputs = create_input_layers() # Create feature columns feature_columns = create_feature_columns() # The constructor for DenseFeatures takes a list of feature columns # The Functional API in Keras requires: LayerConstructor()(inputs) dnn_inputs = layers.DenseFeatures( feature_columns=feature_columns.values())(inputs) # Get output of model given inputs output = get_model_outputs(dnn_inputs) # Build model and compile it all together model = tf.keras.models.Model(inputs=inputs, outputs=output) model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"]) return model # Create the distribution strategy mirrored_strategy = tf.distribute.MirroredStrategy() with mirrored_strategy.scope(): model = build_dnn_model() print("Here is our DNN architecture so far:\n") print(model.summary()) ``` To see how many GPU devices you have attached to your machine, run the cell below. As mentioned above, I have 4. ``` print('Number of devices: {}'.format(mirrored_strategy.num_replicas_in_sync)) ``` Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License.
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
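The excerpt above stops once the model has been built and compiled inside the strategy scope. As a minimal, hedged sketch of the next step (the CSV file pattern, per-replica batch size, and step counts below are assumptions, not values taken from this notebook), training typically scales the global batch size by the number of replicas and reuses the `load_dataset` input pipeline defined earlier:

```
# Sketch only: train the mirrored model. The CSV file pattern, batch size,
# and step counts are assumptions and should be adapted to your data.
NUM_REPLICAS = mirrored_strategy.num_replicas_in_sync
GLOBAL_BATCH_SIZE = 32 * NUM_REPLICAS  # per-replica batch of 32 (assumed)

trainds = load_dataset(pattern="train*.csv",          # hypothetical file pattern
                       batch_size=GLOBAL_BATCH_SIZE,
                       mode=tf.estimator.ModeKeys.TRAIN)
evalds = load_dataset(pattern="eval*.csv",            # hypothetical file pattern
                      batch_size=GLOBAL_BATCH_SIZE).take(50)

history = model.fit(trainds,
                    validation_data=evalds,
                    epochs=5,
                    steps_per_epoch=100,
                    validation_steps=50)
```

Because the model was compiled inside `mirrored_strategy.scope()`, Keras handles splitting each global batch across the replicas during `fit`.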
[View in Colaboratory](https://colab.research.google.com/github/AnujArora23/FlightDelayML/blob/master/DTFlightDelayDataset.ipynb) # Flight Delay Prediction (Regression) **NOTE: THIS IS A CONTINUATION OF THE *SGDFlightDelayDataset.ipynb* NOTEBOOK WHICH USES STOCHASTIC GRADIENT DESCENT REGRESSION. THIS PART ONLY CONTAINS THE BOOSTED DECISION TREE PREDICTION AND COMPARISON BETWEEN THE TWO. FOR DATA CLEANING AND EXPLORATION, PLEASE SEE THE PREVIOUS NOTEBOOK.** These datasets are taken from Microsoft Azure Machine Learning Studio's sample datasets. It contains flight delay data for various airlines for the year 2013. There are two files uploaded as a compressed archive on my GitHub page: 1) **Flight_Delays_Data.csv** : This contains arrival and departure details for various flights operated by 16 different airlines. The schema is pretty self explanatory but I will mention the important and slightly obscure columns: *OriginAirportID/DestAirportID* : The unique 5 digit integer identifier for a particular airport. *CRSDepTime/CRSArrTime* : Time in 24 hour format (e.g. 837 is 08:37AM) *ArrDel15/DepDel15* : Binary columns where *1* means that the flight was delayed beyond 15 minutes and *0* means it was not. *ArrDelay/DepDelay* : Time (in minutes) by which flight was delayed. 2) **Airport_Codes_Dataset.csv** : This file gives the city, state and name of the airport along with the unique 5 digit integer identifier. ### Goals: **1. Clean the data, and see which features may be important and which might be redundant.** **2. Do an exploratory analysis of the data to identify where most of the flight delays lie (e.g. which carrier, airport etc.).** **3. Choose and build an appropriate regression model for this dataset to predict *ArrDelay* time in minutes.** **4. Choose and build alternative models and compare all models with various accuracy metrics.** ## Install and import necessary libraries ``` !pip install -U -q PyDrive #Only if you are loading your data from Google Drive from pydrive.auth import GoogleAuth from pydrive.drive import GoogleDrive from google.colab import auth from oauth2client.client import GoogleCredentials import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns from sklearn import preprocessing from sklearn.linear_model import SGDRegressor from sklearn.tree import DecisionTreeRegressor from sklearn.ensemble import AdaBoostRegressor from sklearn.model_selection import train_test_split from sklearn.metrics import mean_squared_error, r2_score from sklearn.model_selection import cross_val_score, cross_val_predict from sklearn.model_selection import KFold from sklearn.model_selection import GridSearchCV from sklearn import grid_search from sklearn import metrics ``` ## Authorize Google Drive (if your data is stored in Drive) ``` %%capture auth.authenticate_user() gauth = GoogleAuth() gauth.credentials = GoogleCredentials.get_application_default() drive = GoogleDrive(gauth) ``` ## Data Ingestion I have saved the two files in my personal drive storage and read them from there into a pandas data frame. Please modify the following cells to read the CSV files into a Pandas dataframe as per your storage location. 
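If the two CSV files are already available on the local filesystem (or were downloaded some other way), the Drive authentication and download cells below can be skipped entirely. A minimal sketch, in which the local file paths are assumptions to adjust to your own layout:

```
# Sketch only: load the two CSVs from local paths instead of Google Drive.
# The file names/paths here are assumptions -- adjust them to your storage.
import pandas as pd

airpcode = pd.read_csv('Airport_Codes_Dataset.csv')
flightdel = pd.read_csv('Flight_Delays_Data.csv')
```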
``` %%capture downloaded = drive.CreateFile({'id':'1VxxZFZO7copAM_AHHF42zjO7rlGR1aPm'}) # replace the id with id of file you want to access downloaded.GetContentFile('Airport_Codes_Dataset.csv') downloaded = drive.CreateFile({'id':'1owzv86uWVRace_8xvRShFrDTRXSljp3I'}) # replace the id with id of file you want to access downloaded.GetContentFile('Flight_Delays_Data.csv') airpcode = pd.read_csv('Airport_Codes_Dataset.csv') flightdel = pd.read_csv('Flight_Delays_Data.csv') ``` ## Data Cleanup ### Remove NULL /NaN rows and drop redundant columns ``` flightdel.dropna(inplace=True) #Drop NaNs. We will still have enough data flightdel.drop(['Year','Cancelled'],axis=1,inplace=True) #There is only 1 unique value for both (2013 and 0 respectively) flightdel.reset_index(drop=True,inplace=True) ``` ###Join the 2 CSV files to get airport code details for origin and destination ``` result=pd.merge(flightdel,airpcode,left_on='OriginAirportID',right_on='airport_id',how='left') result.drop(['airport_id'],axis=1,inplace=True) #result.reset_index(drop=True,inplace=True) result.rename(columns={'city':'cityor','state':'stateor','name':'nameor'},inplace=True) result=pd.merge(result,airpcode,left_on='DestAirportID',right_on='airport_id',how='left') result.drop(['airport_id'],axis=1,inplace=True) result.reset_index(drop=True,inplace=True) result.rename(columns={'city':'citydest','state':'statedest','name':'namedest'},inplace=True) flightdelfin=result ``` ### Perform Feature Conversion (to categorical dtype) ``` cols=['Carrier','DepDel15','ArrDel15','OriginAirportID','DestAirportID','cityor','stateor','nameor','citydest','statedest','namedest'] flightdelfin[cols]=flightdelfin[cols].apply(lambda x: x.astype('category')) ``` ###Drop duplicate observations ``` flightdelfin.drop_duplicates(keep='first',inplace=True) flightdelfin.reset_index(drop=True,inplace=True) ``` ###Drop columns that are unnecessary for analysis **In particular, we drop ArrDel15 and DepDel15 columns as they add no extra information from the ArrDel and DepDel columns respectively.** ``` flightdelan=flightdelfin.iloc[:,0:11] flightdelan.drop('DepDel15',axis=1,inplace=True) flightdelan.head() ``` ###Final check before analysis ** We check if our data types are correct and do a general scan of the dataframe information. It looks good! Everything is as it should be.** ``` flightdelan.info() #flightdelan[['Month','DayofMonth','DayOfWeek']]=flightdelan[['Month','DayofMonth','DayOfWeek']].apply(lambda x: x.astype(np.int64)) ``` ## Prediction ### Convert Categorical Variables to Indicator Variables **To do any sort of prediction, we need to convert the categorical variables to dummy (indicator) variables and drop one group in for each categorical column in the original table, so as to get a baseline to compare to. If we do not drop one group from each categorical variable, our regression will fail due to multicollinearity.** **The choice of which group(s) to drop is complete arbitrary but in our case, we will drop the carrier with the least number of flights i.e. Hawaiian Airlines (HA), and we will choose an arbitrary city pair with just 1 flight frequency to drop. 
As of now I have chosen the OriginAirportID as 14771 and the DestAirportID as 13871 to be dropped.**

```
flightdeldum=pd.get_dummies(flightdelan)
flightdeldum.drop(['Carrier_HA','OriginAirportID_14771','DestAirportID_13871'],axis=1,inplace=True)
flightdeldum.head()
```

**As one can see above, each categorical column has been converted to 'n' binary columns, where n is the number of groups in that particular categorical column. For example, the carrier column has been split into 16 indicator columns (the number of unique carriers) and one has been dropped ('Carrier_HA').**

**Similar logic can be applied to the DestAirportID and OriginAirportID categorical columns.**

**NOTE: The Month, DayofMonth and DayofWeek columns have not been converted to indicator variables because they are ORDINAL categorical variables and not nominal. There is a need to retain their ordering because the 2nd month comes after the 1st and so on. Hence, since their current form retains their natural ordering, we do not need to touch these columns.**

### Decision Tree Regression

**To predict the arrival delays for various combinations of the input variables and future unseen data, we need to perform a regression since the output variable (ArrDelay) is a continuous one.**

**Decision trees are an alternative algorithm for prediction (both classification and regression). SGD regression is, after all, a form of multiple linear regression, which assumes linearity between features, whereas decision trees do not. Hence, it is a good idea to run a decision tree regressor on our data set to see if we get any improvement.**

```
scaler = preprocessing.StandardScaler()
flightdeldum[['CRSDepTime','CRSArrTime','DepDelay']]=scaler.fit_transform(flightdeldum[['CRSDepTime','CRSArrTime','DepDelay']])

y=flightdeldum.ArrDelay
X=flightdeldum.drop('ArrDelay',axis=1)
```

**In the cell above, we have scaled (Z-score) the relevant columns whose values were on a very different scale from the rest of the features, as the regularization strategy we are going to use requires features to be in a similar range.**

**We now split the data into training and testing sets; in this case, we are going to use 80% of the data for training and the remainder for testing.**

```
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=123)
```

**We use the min_samples_split parameter in the sklearn decision tree package as a stopping criterion.**

```
# Fit regression model
regr_1 = DecisionTreeRegressor(min_samples_split=100)
regr_1.fit(X_train,y_train)
```

**In the above cell, we trained the model using the DecisionTreeRegressor, and now it is time to predict using the test data.**

```
y_pred=regr_1.predict(X_test)

# The mean squared error
print("Mean squared error: %.2f" % mean_squared_error(y_test.values, y_pred))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % r2_score(y_test.values, y_pred))

plt.scatter(y_test.values, y_pred)
plt.xlabel("True Values")
plt.ylabel("Predictions")
plt.show()
```

**As we can see from the above cells, 88% of the variance (R^2 score) in the data has been captured and the model is predicting well. The trend line in the graph is very close to the ideal 45 degree line (the line expected if the model predicted with 100% accuracy).**

**However, we do not want to overfit our model, because we want it to perform well on new, untested data.
To check whether our model overfits, we can run a 6-fold cross validation and analyze the variance (R^2 scores) on each fold, as shown below.**

```
# Perform 6-fold cross validation
kfold = KFold(n_splits=6)
scores = cross_val_score(regr_1, X, y, cv=kfold)
print('Cross-validated scores:', scores)
print('Average Cross-validated score:', np.mean(scores))
```

### Grid Search (Hyperparameter Tuning) (Optional and Computationally Expensive)

**The following function performs a search over the entire parameter grid (as specified below) for min_samples_split (the minimum number of observations required for a node to be split), and returns the optimal parameters after an n-fold cross validation.**

```
# https://medium.com/@aneesha/svm-parameter-tuning-in-scikit-learn-using-gridsearchcv-2413c02125a0
def dectree_param_selection(X, y, nfolds):
    min_samples_splits = [60,80,100]
    param_grid = {'min_samples_split': min_samples_splits}
    grid_search = GridSearchCV(DecisionTreeRegressor(), param_grid, cv=nfolds)
    grid_search.fit(X, y)
    return grid_search.best_params_

dectree_param_selection(X_train,y_train,5)
```

**As we can see above, the grid search has yielded the optimal value for the min_samples_split parameter. This process took about 40 minutes to execute, as it is very computationally expensive.**

**Since we received the same parameter as our initial estimate, we can conclude our decision tree regressor here.**

## Comparison

**We have seen that, after running a 6-fold cross validation (to check for overfitting), we have the following results for each of the predictors:**

**1) SGD Regression (with gridsearch parameters): MSE = ~164, R^2 = ~89%**

**2) Decision Tree Regression: MSE = ~184, R^2 = ~86%**

**Since both results involve at least a 5-fold cross validation, we can be reasonably confident that the models are not overfit. Given the figures above, we can clearly see that Stochastic Gradient Descent Regression is the better algorithm for this problem.**

**FINAL CHOICE: Stochastic Gradient Descent Regression.**
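For completeness, here is a hedged sketch of how the head-to-head comparison above could be scripted with identical folds for both model families. The SGDRegressor settings below are assumptions (the tuned SGD model lives in the previous notebook), so the exact numbers will differ from those quoted above:

```
# Sketch only: score both model families on the same 6 folds.
# The SGDRegressor hyperparameters are assumptions, not the tuned values.
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import SGDRegressor
from sklearn.tree import DecisionTreeRegressor

kfold = KFold(n_splits=6, shuffle=True, random_state=123)
candidates = {
    'SGD regression': SGDRegressor(max_iter=1000, tol=1e-3),
    'Decision tree': DecisionTreeRegressor(min_samples_split=100),
}

for name, estimator in candidates.items():
    r2 = cross_val_score(estimator, X, y, cv=kfold, scoring='r2')
    mse = -cross_val_score(estimator, X, y, cv=kfold,
                           scoring='neg_mean_squared_error')
    print('{}: mean R^2 = {:.3f}, mean MSE = {:.1f}'.format(
        name, r2.mean(), mse.mean()))
```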
``` import sys sys.path.append('../..') import torchdyn; from torchdyn.models import *; from torchdyn.datasets import * from pytorch_lightning.loggers import WandbLogger data = ToyDataset() n_samples = 1 << 16 n_gaussians = 7 X, yn = data.generate(n_samples // n_gaussians, 'gaussians', n_gaussians=7, std_gaussians=0.1, dim=2, radius=2) X = (X - X.mean())/X.std() import matplotlib.pyplot as plt plt.figure(figsize=(3, 3)) plt.scatter(X[:,0], X[:,1], c='orange', alpha=0.3, s=4) import torch import torch.utils.data as data device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") X_train = torch.Tensor(X).to(device) y_train = torch.LongTensor(yn).long().to(device) train = data.TensorDataset(X_train, y_train) trainloader = data.DataLoader(train, batch_size=512, shuffle=True) ``` ## Model ``` from torch.distributions import MultivariateNormal, Uniform, TransformedDistribution, SigmoidTransform, Categorical prior = MultivariateNormal(torch.zeros(2).to(device), torch.eye(2).to(device)) f = nn.Sequential( nn.Linear(2, 32), nn.Softplus(), DataControl(), nn.Linear(32+2, 2) ) # cnf wraps the net as with other energy models noise_dist = MultivariateNormal(torch.zeros(2).to(device), torch.eye(2).to(device)) cnf = nn.Sequential(CNF(f)) nde = NeuralDE(cnf, solver='dopri5', s_span=torch.linspace(0, 1, 2), atol=1e-6, rtol=1e-6, sensitivity='adjoint') model = nn.Sequential(Augmenter(augment_idx=1, augment_dims=1), nde).to(device) ``` ## Learner ``` def cnf_density(model): with torch.no_grad(): npts = 200 side = np.linspace(-2., 2., npts) xx, yy = np.meshgrid(side, side) memory= 100 x = np.hstack([xx.reshape(-1, 1), yy.reshape(-1, 1)]) x = torch.from_numpy(x).type(torch.float32).to(device) z, delta_logp = [], [] inds = torch.arange(0, x.shape[0]).to(torch.int64) for ii in torch.split(inds, int(memory**2)): z_full = model(x[ii]).cpu().detach() z_, delta_logp_ = z_full[:, 1:], z_full[:, 0] z.append(z_) delta_logp.append(delta_logp_) z = torch.cat(z, 0) delta_logp = torch.cat(delta_logp, 0) logpz = prior.log_prob(z.cuda()).cpu() # logp(z) logpx = logpz - delta_logp px = np.exp(logpx.cpu().numpy()).reshape(npts, npts) plt.imshow(px); plt.xlabel([]) plt.ylabel([]) class Learner(pl.LightningModule): def __init__(self, model:nn.Module): super().__init__() self.model = model self.lr = 1e-3 def forward(self, x): return self.model(x) def training_step(self, batch, batch_idx): # plot logging if batch_idx == 0: cnf_density(self.model) self.logger.experiment.log({"chart": plt}) plt.close() nde.nfe = 0 x, _ = batch x += 1e-2*torch.randn_like(x).to(x) xtrJ = self.model(x) logprob = prior.log_prob(xtrJ[:,1:]).to(x) - xtrJ[:,0] loss = -torch.mean(logprob) nfe = nde.nfe nde.nfe = 0 metrics = {'loss': loss, 'nfe':nfe} self.logger.experiment.log(metrics) return {'loss': loss} def configure_optimizers(self): return torch.optim.AdamW(self.model.parameters(), lr=self.lr, weight_decay=1e-5) def train_dataloader(self): self.loader_l = len(trainloader) return trainloader logger = WandbLogger(project='torchdyn-toy_cnf-bench') learn = Learner(model) trainer = pl.Trainer(min_steps=45000, max_steps=45000, gpus=1, logger=logger) trainer.fit(learn); sample = prior.sample(torch.Size([1<<15])) # integrating from 1 to 0, 8 steps of rk4 model[1].s_span = torch.linspace(0, 1, 2) new_x = model(sample).cpu().detach() cnf_density(model) plt.figure(figsize=(12, 4)) plt.subplot(121) plt.scatter(new_x[:,1], new_x[:,2], s=0.3, c='blue') #plt.scatter(boh[:,0], boh[:,1], s=0.3, c='black') plt.subplot(122) plt.scatter(X[:,0], X[:,1], s=0.3, 
c='red') def cnf_density(model): with torch.no_grad(): npts = 200 side = np.linspace(-2., 2., npts) xx, yy = np.meshgrid(side, side) memory= 100 x = np.hstack([xx.reshape(-1, 1), yy.reshape(-1, 1)]) x = torch.from_numpy(x).type(torch.float32).to(device) z, delta_logp = [], [] inds = torch.arange(0, x.shape[0]).to(torch.int64) for ii in torch.split(inds, int(memory**2)): z_full = model(x[ii]).cpu().detach() z_, delta_logp_ = z_full[:, 1:], z_full[:, 0] z.append(z_) delta_logp.append(delta_logp_) z = torch.cat(z, 0) delta_logp = torch.cat(delta_logp, 0) logpz = prior.log_prob(z.cuda()).cpu() # logp(z) logpx = logpz - delta_logp px = np.exp(logpx.cpu().numpy()).reshape(npts, npts) plt.imshow(px, cmap='inferno', vmax=px.mean()); a = cnf_density(model) ```
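Both `training_step` and `cnf_density` rely on the same bookkeeping: the first (augmented) dimension of the model output accumulates the divergence of the vector field along the flow, so the data log-density is recovered as `logpx = logpz - delta_logp`. As a small hedged sketch (the batch slice is an arbitrary choice, not part of the original notebook), the same pattern can be reused to score points after training:

```
# Sketch only: average log-likelihood of points under the trained CNF,
# reusing the augmented-output convention from training_step above
# (column 0 = accumulated divergence, columns 1: = transformed state).
with torch.no_grad():
    x_eval = X_train[:4096]                     # arbitrary slice; replace with held-out data
    out = model(x_eval)
    delta_logp, z = out[:, 0], out[:, 1:]
    logpx = prior.log_prob(z) - delta_logp      # change of variables
    print('mean log-likelihood: {:.3f} nats'.format(logpx.mean().item()))
```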
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Public/5.1_Text_classification_examples_in_SparkML_SparkNLP.ipynb) # Text Classification with Spark NLP ``` %%capture # This is only to setup PySpark and Spark NLP on Colab !wget https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/colab_setup.sh -O - | bash # for Spark 2.4.x and Spark NLP 2.x.x, do the following # !wget https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/colab_setup.sh # !bash colab_setup.sh -p 2.4.x -s 2.x.x ``` <b> if you want to work with Spark 2.3 </b> ``` import os # Install java ! apt-get update -qq ! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null !wget -q https://archive.apache.org/dist/spark/spark-2.3.0/spark-2.3.0-bin-hadoop2.7.tgz !tar xf spark-2.3.0-bin-hadoop2.7.tgz !pip install -q findspark os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64" os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"] os.environ["SPARK_HOME"] = "/content/spark-2.3.0-bin-hadoop2.7" ! java -version import findspark findspark.init() from pyspark.sql import SparkSession ! pip install --ignore-installed -q spark-nlp==2.7.5 import sparknlp spark = sparknlp.start(spark23=True) ``` ``` import os import sys from pyspark.sql import SparkSession from pyspark.ml import Pipeline from sparknlp.annotator import * from sparknlp.common import * from sparknlp.base import * import pandas as pd import sparknlp spark = sparknlp.start() print("Spark NLP version: ", sparknlp.version()) print("Apache Spark version: ", spark.version) ! wget https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/tutorials/Certification_Trainings/Public/data/news_category_train.csv ! 
wget https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/tutorials/Certification_Trainings/Public/data/news_category_test.csv # newsDF = spark.read.parquet("data/news_category.parquet") >> if it is a parquet newsDF = spark.read \ .option("header", True) \ .csv("news_category_train.csv") newsDF.show(truncate=50) newsDF.take(2) from pyspark.sql.functions import col newsDF.groupBy("category") \ .count() \ .orderBy(col("count").desc()) \ .show() ``` ## Building Classification Pipeline ### LogReg with CountVectorizer Tokenizer: Tokenization stopwordsRemover: Remove Stop Words countVectors: Count vectors (“document-term vectors”) ``` from pyspark.ml.feature import CountVectorizer, HashingTF, IDF, OneHotEncoder, StringIndexer, VectorAssembler, SQLTransformer %%time document_assembler = DocumentAssembler() \ .setInputCol("description") \ .setOutputCol("document") tokenizer = Tokenizer() \ .setInputCols(["document"]) \ .setOutputCol("token") normalizer = Normalizer() \ .setInputCols(["token"]) \ .setOutputCol("normalized") stopwords_cleaner = StopWordsCleaner()\ .setInputCols("normalized")\ .setOutputCol("cleanTokens")\ .setCaseSensitive(False) stemmer = Stemmer() \ .setInputCols(["cleanTokens"]) \ .setOutputCol("stem") finisher = Finisher() \ .setInputCols(["stem"]) \ .setOutputCols(["token_features"]) \ .setOutputAsArray(True) \ .setCleanAnnotations(False) countVectors = CountVectorizer(inputCol="token_features", outputCol="features", vocabSize=10000, minDF=5) label_stringIdx = StringIndexer(inputCol = "category", outputCol = "label") nlp_pipeline = Pipeline( stages=[document_assembler, tokenizer, normalizer, stopwords_cleaner, stemmer, finisher, countVectors, label_stringIdx]) nlp_model = nlp_pipeline.fit(newsDF) processed = nlp_model.transform(newsDF) processed.count() processed.select('description','token_features').show(truncate=50) processed.select('token_features').take(2) processed.select('features').take(2) processed.select('description','features','label').show() # set seed for reproducibility (trainingData, testData) = processed.randomSplit([0.7, 0.3], seed = 100) print("Training Dataset Count: " + str(trainingData.count())) print("Test Dataset Count: " + str(testData.count())) trainingData.printSchema() from pyspark.ml.classification import LogisticRegression lr = LogisticRegression(maxIter=10, regParam=0.3, elasticNetParam=0) lrModel = lr.fit(trainingData) predictions = lrModel.transform(testData) predictions.filter(predictions['prediction'] == 0) \ .select("description","category","probability","label","prediction") \ .orderBy("probability", ascending=False) \ .show(n = 10, truncate = 30) from pyspark.ml.evaluation import MulticlassClassificationEvaluator evaluator = MulticlassClassificationEvaluator(predictionCol="prediction") evaluator.evaluate(predictions) from sklearn.metrics import confusion_matrix, classification_report, accuracy_score y_true = predictions.select("label") y_true = y_true.toPandas() y_pred = predictions.select("prediction") y_pred = y_pred.toPandas() y_pred.prediction.value_counts() cnf_matrix = confusion_matrix(list(y_true.label.astype(int)), list(y_pred.prediction.astype(int))) cnf_matrix print(classification_report(y_true.label, y_pred.prediction)) print(accuracy_score(y_true.label, y_pred.prediction)) ``` ### LogReg with TFIDF ``` from pyspark.ml.feature import HashingTF, IDF hashingTF = HashingTF(inputCol="token_features", outputCol="rawFeatures", numFeatures=10000) idf = IDF(inputCol="rawFeatures", outputCol="features", minDocFreq=5) 
#minDocFreq: remove sparse terms nlp_pipeline_tf = Pipeline( stages=[document_assembler, tokenizer, normalizer, stopwords_cleaner, stemmer, finisher, hashingTF, idf, label_stringIdx]) nlp_model_tf = nlp_pipeline_tf.fit(newsDF) processed_tf = nlp_model_tf.transform(newsDF) processed_tf.count() # set seed for reproducibility processed_tf.select('description','features','label').show() (trainingData, testData) = processed_tf.randomSplit([0.7, 0.3], seed = 100) print("Training Dataset Count: " + str(trainingData.count())) print("Test Dataset Count: " + str(testData.count())) lrModel_tf = lr.fit(trainingData) predictions_tf = lrModel_tf.transform(testData) predictions_tf.select("description","category","probability","label","prediction") \ .orderBy("probability", ascending=False) \ .show(n = 10, truncate = 30) y_true = predictions_tf.select("label") y_true = y_true.toPandas() y_pred = predictions_tf.select("prediction") y_pred = y_pred.toPandas() print(classification_report(y_true.label, y_pred.prediction)) print(accuracy_score(y_true.label, y_pred.prediction)) ``` ### Random Forest with TFIDF ``` from pyspark.ml.classification import RandomForestClassifier rf = RandomForestClassifier(labelCol="label", \ featuresCol="features", \ numTrees = 100, \ maxDepth = 4, \ maxBins = 32) # Train model with Training Data rfModel = rf.fit(trainingData) predictions_rf = rfModel.transform(testData) predictions_rf.select("description","category","probability","label","prediction") \ .orderBy("probability", ascending=False) \ .show(n = 10, truncate = 30) y_true = predictions_rf.select("label") y_true = y_true.toPandas() y_pred = predictions_rf.select("prediction") y_pred = y_pred.toPandas() print(classification_report(y_true.label, y_pred.prediction)) print(accuracy_score(y_true.label, y_pred.prediction)) ``` ## LogReg with Spark NLP Glove Word Embeddings ``` document_assembler = DocumentAssembler() \ .setInputCol("description") \ .setOutputCol("document") tokenizer = Tokenizer() \ .setInputCols(["document"]) \ .setOutputCol("token") normalizer = Normalizer() \ .setInputCols(["token"]) \ .setOutputCol("normalized") stopwords_cleaner = StopWordsCleaner()\ .setInputCols("normalized")\ .setOutputCol("cleanTokens")\ .setCaseSensitive(False) glove_embeddings = WordEmbeddingsModel().pretrained() \ .setInputCols(["document",'cleanTokens'])\ .setOutputCol("embeddings")\ .setCaseSensitive(False) embeddingsSentence = SentenceEmbeddings() \ .setInputCols(["document", "embeddings"]) \ .setOutputCol("sentence_embeddings") \ .setPoolingStrategy("AVERAGE") embeddings_finisher = EmbeddingsFinisher() \ .setInputCols(["sentence_embeddings"]) \ .setOutputCols(["finished_sentence_embeddings"]) \ .setOutputAsVector(True)\ .setCleanAnnotations(False) explodeVectors = SQLTransformer(statement= "SELECT EXPLODE(finished_sentence_embeddings) AS features, * FROM __THIS__") label_stringIdx = StringIndexer(inputCol = "category", outputCol = "label") nlp_pipeline_w2v = Pipeline( stages=[document_assembler, tokenizer, normalizer, stopwords_cleaner, glove_embeddings, embeddingsSentence, embeddings_finisher, explodeVectors, label_stringIdx]) nlp_model_w2v = nlp_pipeline_w2v.fit(newsDF) processed_w2v = nlp_model_w2v.transform(newsDF) processed_w2v.count() processed_w2v.columns processed_w2v.show(5) processed_w2v.select('finished_sentence_embeddings').take(1) # IF SQLTransformer IS NOT USED INSIDE THE PIPELINE, WE CAN EXPLODE OUTSIDE from pyspark.sql.functions import explode # processed_w2v= processed_w2v.withColumn("features", 
explode(processed_w2v.finished_sentence_embeddings)) processed_w2v.select("features").take(1) processed_w2v.select("features").take(1) processed_w2v.select('description','features','label').show() # set seed for reproducibility (trainingData, testData) = processed_w2v.randomSplit([0.7, 0.3], seed = 100) print("Training Dataset Count: " + str(trainingData.count())) print("Test Dataset Count: " + str(testData.count())) from pyspark.sql.functions import udf @udf("long") def num_nonzeros(v): return v.numNonzeros() testData = testData.where(num_nonzeros("features") != 0) lrModel_w2v = lr.fit(trainingData) predictions_w2v = lrModel_w2v.transform(testData) predictions_w2v.select("description","category","probability","label","prediction") \ .orderBy("probability", ascending=False) \ .show(n = 10, truncate = 30) y_true = predictions_w2v.select("label") y_true = y_true.toPandas() y_pred = predictions_w2v.select("prediction") y_pred = y_pred.toPandas() print(classification_report(y_true.label, y_pred.prediction)) print(accuracy_score(y_true.label, y_pred.prediction)) processed_w2v.select('description','cleanTokens.result').show(truncate=50) ``` ## LogReg with Spark NLP Bert Embeddings ``` document_assembler = DocumentAssembler() \ .setInputCol("description") \ .setOutputCol("document") tokenizer = Tokenizer() \ .setInputCols(["document"]) \ .setOutputCol("token") normalizer = Normalizer() \ .setInputCols(["token"]) \ .setOutputCol("normalized") stopwords_cleaner = StopWordsCleaner()\ .setInputCols("normalized")\ .setOutputCol("cleanTokens")\ .setCaseSensitive(False) bert_embeddings = BertEmbeddings\ .pretrained('bert_base_cased', 'en') \ .setInputCols(["document",'cleanTokens'])\ .setOutputCol("bert")\ .setCaseSensitive(False)\ embeddingsSentence = SentenceEmbeddings() \ .setInputCols(["document", "bert"]) \ .setOutputCol("sentence_embeddings") \ .setPoolingStrategy("AVERAGE") embeddings_finisher = EmbeddingsFinisher() \ .setInputCols(["sentence_embeddings"]) \ .setOutputCols(["finished_sentence_embeddings"]) \ .setOutputAsVector(True)\ .setCleanAnnotations(False) label_stringIdx = StringIndexer(inputCol = "category", outputCol = "label") nlp_pipeline_bert = Pipeline( stages=[document_assembler, tokenizer, normalizer, stopwords_cleaner, bert_embeddings, embeddingsSentence, embeddings_finisher, label_stringIdx]) nlp_model_bert = nlp_pipeline_bert.fit(newsDF) processed_bert = nlp_model_bert.transform(newsDF) processed_bert.count() from pyspark.sql.functions import explode processed_bert= processed_bert.withColumn("features", explode(processed_bert.finished_sentence_embeddings)) processed_bert.select('description','features','label').show() # set seed for reproducibility (trainingData, testData) = processed_bert.randomSplit([0.7, 0.3], seed = 100) print("Training Dataset Count: " + str(trainingData.count())) print("Test Dataset Count: " + str(testData.count())) from pyspark.ml.classification import LogisticRegression lr = LogisticRegression(maxIter=20, regParam=0.3, elasticNetParam=0) lrModel = lr.fit(trainingData) from pyspark.sql.functions import udf @udf("long") def num_nonzeros(v): return v.numNonzeros() testData = testData.where(num_nonzeros("features") != 0) predictions = lrModel.transform(testData) predictions.select("description","category","probability","label","prediction") \ .orderBy("probability", ascending=False) \ .show(n = 10, truncate = 30) from sklearn.metrics import confusion_matrix, classification_report, accuracy_score import pandas as pd df = 
predictions.select('description','category','label','prediction').toPandas() print(classification_report(df.label, df.prediction)) print(accuracy_score(df.label, df.prediction)) ``` ## LogReg with ELMO Embeddings ``` document_assembler = DocumentAssembler() \ .setInputCol("description") \ .setOutputCol("document") tokenizer = Tokenizer() \ .setInputCols(["document"]) \ .setOutputCol("token") normalizer = Normalizer() \ .setInputCols(["token"]) \ .setOutputCol("normalized") stopwords_cleaner = StopWordsCleaner()\ .setInputCols("normalized")\ .setOutputCol("cleanTokens")\ .setCaseSensitive(False) elmo_embeddings = ElmoEmbeddings.pretrained()\ .setPoolingLayer("word_emb")\ .setInputCols(["document",'cleanTokens'])\ .setOutputCol("elmo") embeddingsSentence = SentenceEmbeddings() \ .setInputCols(["document", "elmo"]) \ .setOutputCol("sentence_embeddings") \ .setPoolingStrategy("AVERAGE") embeddings_finisher = EmbeddingsFinisher() \ .setInputCols(["sentence_embeddings"]) \ .setOutputCols(["finished_sentence_embeddings"]) \ .setOutputAsVector(True)\ .setCleanAnnotations(False) label_stringIdx = StringIndexer(inputCol = "category", outputCol = "label") nlp_pipeline_elmo = Pipeline( stages=[document_assembler, tokenizer, normalizer, stopwords_cleaner, elmo_embeddings, embeddingsSentence, embeddings_finisher, label_stringIdx]) nlp_model_elmo = nlp_pipeline_elmo.fit(newsDF) processed_elmo = nlp_model_elmo.transform(newsDF) processed_elmo.count() (trainingData, testData) = newsDF.randomSplit([0.7, 0.3], seed = 100) processed_trainingData = nlp_model_elmo.transform(trainingData) processed_trainingData.count() processed_testData = nlp_model_elmo.transform(testData) processed_testData.count() processed_trainingData.columns processed_testData= processed_testData.withColumn("features", explode(processed_testData.finished_sentence_embeddings)) processed_trainingData= processed_trainingData.withColumn("features", explode(processed_trainingData.finished_sentence_embeddings)) from pyspark.sql.functions import udf @udf("long") def num_nonzeros(v): return v.numNonzeros() processed_testData = processed_testData.where(num_nonzeros("features") != 0) %%time from pyspark.ml.classification import LogisticRegression lr = LogisticRegression(maxIter=20, regParam=0.3, elasticNetParam=0) lrModel = lr.fit(processed_trainingData) processed_trainingData.columns predictions = lrModel.transform(processed_testData) predictions.select("description","category","probability","label","prediction") \ .orderBy("probability", ascending=False) \ .show(n = 10, truncate = 30) df = predictions.select('description','category','label','prediction').toPandas() df.shape df.head() from sklearn.metrics import classification_report, accuracy_score print(classification_report(df.label, df.prediction)) print(accuracy_score(df.label, df.prediction)) ``` ## LogReg with Universal Sentence Encoder ``` useEmbeddings = UniversalSentenceEncoder.pretrained()\ .setInputCols("document")\ .setOutputCol("use_embeddings") document_assembler = DocumentAssembler() \ .setInputCol("description") \ .setOutputCol("document") loaded_useEmbeddings = UniversalSentenceEncoder.load('/root/cache_pretrained/tfhub_use_en_2.4.0_2.4_1587136330099')\ .setInputCols("document")\ .setOutputCol("use_embeddings") embeddings_finisher = EmbeddingsFinisher() \ .setInputCols(["use_embeddings"]) \ .setOutputCols(["finished_use_embeddings"]) \ .setOutputAsVector(True)\ .setCleanAnnotations(False) label_stringIdx = StringIndexer(inputCol = "category", outputCol = "label") use_pipeline = 
Pipeline( stages=[ document_assembler, loaded_useEmbeddings, embeddings_finisher, label_stringIdx] ) use_df = use_pipeline.fit(newsDF).transform(newsDF) use_df.select('finished_use_embeddings').show(3) from pyspark.sql.functions import explode use_df= use_df.withColumn("features", explode(use_df.finished_use_embeddings)) use_df.show(2) # set seed for reproducibility (trainingData, testData) = use_df.randomSplit([0.7, 0.3], seed = 100) print("Training Dataset Count: " + str(trainingData.count())) print("Test Dataset Count: " + str(testData.count())) from sklearn.metrics import confusion_matrix, classification_report, accuracy_score import pandas as pd from pyspark.ml.classification import LogisticRegression lr = LogisticRegression(maxIter=20, regParam=0.3, elasticNetParam=0) lrModel = lr.fit(trainingData) predictions = lrModel.transform(testData) predictions.filter(predictions['prediction'] == 0) \ .select("description","category","probability","label","prediction") \ .orderBy("probability", ascending=False) \ .show(n = 10, truncate = 30) df = predictions.select('description','category','label','prediction').toPandas() #df['result'] = df['result'].apply(lambda x: x[0]) df.head() print(classification_report(df.label, df.prediction)) print(accuracy_score(df.label, df.prediction)) ``` ### train on entire dataset ``` lr = LogisticRegression(maxIter=20, regParam=0.3, elasticNetParam=0) lrModel = lr.fit(use_df) test_df = spark.read.parquet("data/news_category_test.parquet") test_df = use_pipeline.fit(test_df).transform(test_df) test_df= test_df.withColumn("features", explode(test_df.finished_use_embeddings)) test_df.show(2) predictions = lrModel.transform(test_df) df = predictions.select('description','category','label','prediction').toPandas() df['label'] = df.category.replace({'World':2.0, 'Sports':3.0, 'Business':0.0, 'Sci/Tech':1.0}) df.head() print(classification_report(df.label, df.prediction)) print(accuracy_score(df.label, df.prediction)) ``` ## Spark NLP Licensed DocClassifier ``` from sparknlp_jsl.annotator import * # set seed for reproducibility (trainingData, testData) = newsDF.randomSplit([0.7, 0.3], seed = 100) print("Training Dataset Count: " + str(trainingData.count())) print("Test Dataset Count: " + str(testData.count())) document_assembler = DocumentAssembler() \ .setInputCol("description") \ .setOutputCol("document") tokenizer = Tokenizer() \ .setInputCols(["document"]) \ .setOutputCol("token") normalizer = Normalizer() \ .setInputCols(["token"]) \ .setOutputCol("normalized") stopwords_cleaner = StopWordsCleaner()\ .setInputCols("normalized")\ .setOutputCol("cleanTokens")\ .setCaseSensitive(False) stemmer = Stemmer() \ .setInputCols(["cleanTokens"]) \ .setOutputCol("stem") logreg = DocumentLogRegClassifierApproach()\ .setInputCols(["stem"])\ .setLabelCol("category")\ .setOutputCol("prediction") nlp_pipeline = Pipeline( stages=[document_assembler, tokenizer, normalizer, stopwords_cleaner, stemmer, logreg]) nlp_model = nlp_pipeline.fit(trainingData) processed = nlp_model.transform(testData) processed.count() processed.select('description','category','prediction.result').show(truncate=50) processed.select('description','prediction.result').show(truncate=50) from sklearn.metrics import confusion_matrix, classification_report, accuracy_score import pandas as pd df = processed.select('description','category','prediction.result').toPandas() df.head() df.result[0][0] df = processed.select('description','category','prediction.result').toPandas() df['result'] = df['result'].apply(lambda 
x: x[0]) df.head() df = processed.select('description','category','prediction.result').toPandas() df['result'] = df['result'].apply(lambda x: x[0]) print(classification_report(df.category, df.result)) print(accuracy_score(df.category, df.result)) ``` # ClassifierDL ``` # actual content is inside description column document = DocumentAssembler()\ .setInputCol("description")\ .setOutputCol("document") use = UniversalSentenceEncoder.load('/root/cache_pretrained/tfhub_use_en_2.4.4_2.4_1583158595769')\ .setInputCols(["document"])\ .setOutputCol("sentence_embeddings") # the classes/labels/categories are in category column classsifierdl = ClassifierDLApproach()\ .setInputCols(["sentence_embeddings"])\ .setOutputCol("class")\ .setLabelColumn("category")\ .setMaxEpochs(5)\ .setEnableOutputLogs(True) pipeline = Pipeline( stages = [ document, use, classsifierdl ]) # set seed for reproducibility (trainingData, testData) = newsDF.randomSplit([0.7, 0.3], seed = 100) print("Training Dataset Count: " + str(trainingData.count())) print("Test Dataset Count: " + str(testData.count())) pipelineModel = pipeline.fit(trainingData) from sklearn.metrics import classification_report, accuracy_score df = pipelineModel.transform(testDataset).select('category','description',"class.result").toPandas() df['result'] = df['result'].apply(lambda x: x[0]) print(classification_report(df.category, df.result)) print(accuracy_score(df.category, df.result)) ``` ## Loading the trained classifier from disk ``` classsifierdlmodel = ClassifierDLModel.load('classifierDL_model_20200317_5e') import sparknlp sparknlp.__path__ .setInputCols(["sentence_embeddings"])\ .setOutputCol("class")\ .setLabelColumn("category")\ .setMaxEpochs(5)\ .setEnableOutputLogs(True) trainDataset = spark.read \ .option("header", True) \ .csv("data/news_category_train.csv") trainDataset.count() trainingData.count() document = DocumentAssembler()\ .setInputCol("description")\ .setOutputCol("document") sentence = SentenceDetector()\ .setInputCols(['document'])\ .setOutputCol('sentence') use = UniversalSentenceEncoder.load('/root/cache_pretrained/tfhub_use_en_2.4.4_2.4_1583158595769')\ .setInputCols(["sentence"])\ .setOutputCol("sentence_embeddings") classsifierdlmodel = ClassifierDLModel.load('classifierDL_model_20200317_5e') pipeline = Pipeline( stages = [ document, sentence, use, classsifierdlmodel ]) pipeline.fit(testData.limit(1)).transform(testData.limit(10)).select('category','description',"class.result").show(10, truncate=50) lm = LightPipeline(pipeline.fit(testDataset.limit(1))) lm.annotate('In its first two years, the UK dedicated card companies have surge') text=''' Fearing the fate of Italy, the centre-right government has threatened to be merciless with those who flout tough restrictions. As of Wednesday it will also include all shops being closed across Greece, with the exception of supermarkets. Banks, pharmacies, pet-stores, mobile phone stores, opticians, bakers, mini-markets, couriers and food delivery outlets are among the few that will also be allowed to remain open. 
''' lm = LightPipeline(pipeline.fit(testDataset.limit(1))) lm.annotate(text) ``` # Classifier DL + Glove + Basic text processing ``` tokenizer = Tokenizer() \ .setInputCols(["document"]) \ .setOutputCol("token") lemma = LemmatizerModel.pretrained('lemma_antbnc') \ .setInputCols(["token"]) \ .setOutputCol("lemma") lemma_pipeline = Pipeline( stages=[document_assembler, tokenizer, lemma, glove_embeddings]) lemma_pipeline.fit(trainingData.limit(1000)).transform(trainingData.limit(1000)).show(truncate=30) document_assembler = DocumentAssembler() \ .setInputCol("description") \ .setOutputCol("document") tokenizer = Tokenizer() \ .setInputCols(["document"]) \ .setOutputCol("token") normalizer = Normalizer() \ .setInputCols(["token"]) \ .setOutputCol("normalized") stopwords_cleaner = StopWordsCleaner()\ .setInputCols("normalized")\ .setOutputCol("cleanTokens")\ .setCaseSensitive(False) lemma = LemmatizerModel.pretrained('lemma_antbnc') \ .setInputCols(["cleanTokens"]) \ .setOutputCol("lemma") glove_embeddings = WordEmbeddingsModel().pretrained() \ .setInputCols(["document",'lemma'])\ .setOutputCol("embeddings")\ .setCaseSensitive(False) embeddingsSentence = SentenceEmbeddings() \ .setInputCols(["document", "embeddings"]) \ .setOutputCol("sentence_embeddings") \ .setPoolingStrategy("AVERAGE") classsifierdl = ClassifierDLApproach()\ .setInputCols(["sentence_embeddings"])\ .setOutputCol("class")\ .setLabelColumn("category")\ .setMaxEpochs(10)\ .setEnableOutputLogs(True) clf_pipeline = Pipeline( stages=[document_assembler, tokenizer, normalizer, stopwords_cleaner, lemma, glove_embeddings, embeddingsSentence, classsifierdl]) !rm -rf classifier_dl_pipeline_glove clf_pipelineModel.save('classifier_dl_pipeline_glove') clf_pipelineModel = clf_pipeline.fit(trainingData) df = clf_pipelineModel.transform(testDataset).select('category','description',"class.result").toPandas() df['result'] = df['result'].apply(lambda x: x[0]) print(classification_report(df.category, df.result)) print(accuracy_score(df.category, df.result)) !cd data && ls -l import pandas as pd import news_df = newsDF.toPandas() news_df.head() news_df.to_csv('data/news_dataset.csv', index=False) document_assembler = DocumentAssembler() \ .setInputCol("description") \ .setOutputCol("document") tokenizer = Tokenizer() \ .setInputCols(["document"]) \ .setOutputCol("token") normalizer = Normalizer() \ .setInputCols(["token"]) \ .setOutputCol("normalized") stopwords_cleaner = StopWordsCleaner()\ .setInputCols("normalized")\ .setOutputCol("cleanTokens")\ .setCaseSensitive(False) lemma = LemmatizerModel.pretrained('lemma_antbnc') \ .setInputCols(["cleanTokens"]) \ .setOutputCol("lemma") glove_embeddings = WordEmbeddingsModel().pretrained() \ .setInputCols(["document",'lemma'])\ .setOutputCol("embeddings")\ .setCaseSensitive(False) txt_pipeline = Pipeline( stages=[document_assembler, tokenizer, normalizer, stopwords_cleaner, lemma, glove_embeddings, embeddingsSentence]) txt_pipelineModel = txt_pipeline.fit(testData.limit(1)) txt_pipelineModel.save('text_prep_pipeline_glove') df.head() ```
# MNIST handwritten digits classification with MLPs In this notebook, we'll train a multi-layer perceptron model to classify MNIST digits using [TensorFlow](https://www.tensorflow.org/) (version $\ge$ 2.0 required) with the [Keras API](https://www.tensorflow.org/guide/keras/overview). First, the needed imports. ``` %matplotlib inline from pml_utils import show_failures import tensorflow as tf from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Activation, Dropout, Flatten from tensorflow.keras.utils import plot_model, to_categorical from IPython.display import SVG, display import numpy as np import matplotlib.pyplot as plt import seaborn as sns sns.set() print('Using Tensorflow version: {}, and Keras version: {}.'.format(tf.__version__, tf.keras.__version__)) ``` Let's check if we have GPU available. ``` if tf.test.is_gpu_available(): from tensorflow.python.client import device_lib for d in device_lib.list_local_devices(): if d.device_type == 'GPU': print('GPU', d.physical_device_desc) else: print('No GPU, using CPU instead.') ``` ## MNIST data set Next we'll load the MNIST handwritten digits data set using TensorFlow's own tools. First time we may have to download the data, which can take a while. #### Altenative: Fashion-MNIST Alternatively, MNIST can be replaced with Fashion-MNIST, which can be used as drop-in replacement for MNIST. Fashion-MNIST contains images of 10 fashion categories: Label|Description|Label|Description --- | --- |--- | --- 0|T-shirt/top|5|Sandal 1|Trouser|6|Shirt 2|Pullover|7|Sneaker 3|Dress|8|Bag 4|Coat|9|Ankle boot ``` from tensorflow.keras.datasets import mnist, fashion_mnist ## MNIST: (X_train, y_train), (X_test, y_test) = mnist.load_data() ## Fashion-MNIST: #(X_train, y_train), (X_test, y_test) = fashion_mnist.load_data() nb_classes = 10 X_train = X_train.astype('float32') X_test = X_test.astype('float32') X_train /= 255.0 X_test /= 255.0 # one-hot encoding: Y_train = to_categorical(y_train, nb_classes) Y_test = to_categorical(y_test, nb_classes) print() print('MNIST data loaded: train:',len(X_train),'test:',len(X_test)) print('X_train:', X_train.shape) print('y_train:', y_train.shape) print('Y_train:', Y_train.shape) ``` The training data (`X_train`) is a 3rd-order tensor of size (60000, 28, 28), i.e. it consists of 60000 images of size 28x28 pixels. `y_train` is a 60000-dimensional vector containing the correct classes ("0", "1", ..., "9") for each training sample, and `Y_train` is a [one-hot](https://en.wikipedia.org/wiki/One-hot) encoding of `y_train`. Let's take a closer look. Here are the first 10 training digits (or fashion items for Fashion-MNIST): ``` pltsize=1 plt.figure(figsize=(10*pltsize, pltsize)) for i in range(10): plt.subplot(1,10,i+1) plt.axis('off') plt.imshow(X_train[i,:,:], cmap="gray") plt.title('Class: '+str(y_train[i])) print('Training sample',i,': class:',y_train[i], ', one-hot encoded:', Y_train[i]) ``` ## Multi-layer perceptron (MLP) network ### Activation functions Let's start by plotting some common activation functions for neural networks. `'relu'` stands for rectified linear unit, $y=\max(0,x)$, a very simple non-linearity we will be using in our MLP network below. 
``` x = np.arange(-4,4,.01) plt.figure() plt.plot(x, np.maximum(x,0), label='relu') plt.plot(x, 1/(1+np.exp(-x)), label='sigmoid') plt.plot(x, np.tanh(x), label='tanh') plt.axis([-4, 4, -1.1, 1.5]) plt.title('Activation functions') plt.legend(loc='best'); ``` ### Initialization Let's now create an MLP model that has multiple layers, non-linear activation functions, and optionally dropout layers for regularization. We first initialize the model with `Sequential()`. Then we add a `Dense` layer that has 28*28=784 input nodes (one for each pixel in the input image) and 20 output nodes. The `Dense` layer connects each input to each output with some weight parameter. Next, the output of the dense layer is passed through a ReLU non-linear activation function. Commented out is an alternative, more complex, model that you can also try out. It uses more layers and dropout. `Dropout()` randomly sets a fraction of inputs to zero during training, which is one approach to regularization and can sometimes help to prevent overfitting. The output of the last layer needs to be a softmaxed 10-dimensional vector to match the groundtruth (`Y_train`). This means that it will output 10 values between 0 and 1 which sum to 1, hence, together they can be interpreted as a probability distribution over our 10 classes. Finally, we select *categorical crossentropy* as the loss function, select [*Adam*](https://keras.io/optimizers/#adam) as the optimizer, add *accuracy* to the list of metrics to be evaluated, and `compile()` the model. Adam is simply a an advanced version of stochastic gradient descent, note there are [several different options](https://keras.io/optimizers/) for the optimizer in Keras that we could use instead of *adam*. ``` # Model initialization: model = Sequential() # A simple model: model.add(Dense(units=20, input_dim=28*28)) model.add(Activation('relu')) # A bit more complex model: #model.add(Dense(units=50, input_dim=28*28)) #model.add(Activation('relu')) #model.add(Dropout(0.2)) #model.add(Dense(units=50)) #model.add(Activation('relu')) #model.add(Dropout(0.2)) # The last layer needs to be like this: model.add(Dense(units=10, activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) print(model.summary()) ``` The summary shows that there are 15,910 parameters in total in our model. For example for the first dense layer we have 785x20 = 15,700 parameters as the weight matrix is of size 785x20 (not 784, as there's an additional bias term). We can also draw a fancier graph of our model. ``` plot_model(model, show_shapes=True) ``` ### Learning Next, we'll train our model. Notice how the interface is similar to scikit-learn: we still call the `fit()` method on our model object. An *epoch* means one pass through the whole training data, we'll begin by running training for 10 epochs. The `reshape()` function flattens our 28x28 images into vectors of length 784. (This means we are not using any information about the spatial neighborhood relations of pixels. This setup is known as the *permutation invariant MNIST*.) You can run code below multiple times and it will continue the training process from where it left off. If you want to start from scratch, re-initialize the model using the code a few cells ago. We use a batch size of 32, so the actual input will be 32x784 for each batch of 32 images. 
``` %%time epochs = 10 history = model.fit(X_train.reshape((-1,28*28)), Y_train, epochs=epochs, batch_size=32, verbose=2) ``` Let's now see how the training progressed. * *Loss* is a function of the difference of the network output and the target values. We are minimizing the loss function during training so it should decrease over time. * *Accuracy* is the classification accuracy for the training data. It gives some indication of the real accuracy of the model but cannot be fully trusted, as it may have overfitted and just memorizes the training data. ``` plt.figure(figsize=(5,3)) plt.plot(history.epoch,history.history['loss']) plt.title('loss') plt.figure(figsize=(5,3)) plt.plot(history.epoch,history.history['accuracy']) plt.title('accuracy'); ``` ### Inference For a better measure of the quality of the model, let's see the model accuracy for the test data. ``` %%time scores = model.evaluate(X_test.reshape((-1,28*28)), Y_test, verbose=2) print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100)) ``` We can now take a closer look at the results using the `show_failures()` helper function. Here are the first 10 test digits the MLP classified to a wrong class: ``` predictions = model.predict(X_test.reshape((-1,28*28))) show_failures(predictions, y_test, X_test) ``` We can use `show_failures()` to inspect failures in more detail. For example, here are failures in which the true class was "6": ``` show_failures(predictions, y_test, X_test, trueclass=6) ``` We can also compute the confusion matrix to see which digits get mixed the most, and look at classification accuracies separately for each class: ``` from sklearn.metrics import confusion_matrix print('Confusion matrix (rows: true classes; columns: predicted classes):'); print() cm=confusion_matrix(y_test, np.argmax(predictions, axis=1), labels=list(range(10))) print(cm); print() print('Classification accuracy for each class:'); print() for i,j in enumerate(cm.diagonal()/cm.sum(axis=1)): print("%d: %.4f" % (i,j)) ``` ## Model tuning Modify the MLP model. Try to improve the classification accuracy, or experiment with the effects of different parameters. If you are interested in the state-of-the-art performance on permutation invariant MNIST, see e.g. this [recent paper](https://arxiv.org/abs/1507.02672) by Aalto University / The Curious AI Company researchers. You can also consult the Keras documentation at https://keras.io/. For example, the Dense, Activation, and Dropout layers are described at https://keras.io/layers/core/.
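As one possible starting point for the tuning exercise, the "more complex" variant that is commented out in the initialization cell can be written out in full and trained with the same calls as before. The layer sizes and dropout rate are arbitrary choices, not a recommended recipe:

```
# Sketch: a wider two-hidden-layer MLP with dropout regularization.
model_tuned = Sequential()
model_tuned.add(Dense(units=50, input_dim=28*28))
model_tuned.add(Activation('relu'))
model_tuned.add(Dropout(0.2))
model_tuned.add(Dense(units=50))
model_tuned.add(Activation('relu'))
model_tuned.add(Dropout(0.2))
model_tuned.add(Dense(units=10, activation='softmax'))

model_tuned.compile(loss='categorical_crossentropy',
                    optimizer='adam',
                    metrics=['accuracy'])

model_tuned.fit(X_train.reshape((-1, 28*28)), Y_train,
                epochs=10, batch_size=32, verbose=2)

scores_tuned = model_tuned.evaluate(X_test.reshape((-1, 28*28)), Y_test, verbose=2)
print("Tuned model test accuracy: %.2f%%" % (scores_tuned[1]*100))
```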
<a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_08_4_bayesian_hyperparameter_opt.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # T81-558: Applications of Deep Neural Networks **Module 8: Kaggle Data Sets** * Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx) * For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). # Module 8 Material * Part 8.1: Introduction to Kaggle [[Video]](https://www.youtube.com/watch?v=v4lJBhdCuCU&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_08_1_kaggle_intro.ipynb) * Part 8.2: Building Ensembles with Scikit-Learn and Keras [[Video]](https://www.youtube.com/watch?v=LQ-9ZRBLasw&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_08_2_keras_ensembles.ipynb) * Part 8.3: How Should you Architect Your Keras Neural Network: Hyperparameters [[Video]](https://www.youtube.com/watch?v=1q9klwSoUQw&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_08_3_keras_hyperparameters.ipynb) * **Part 8.4: Bayesian Hyperparameter Optimization for Keras** [[Video]](https://www.youtube.com/watch?v=sXdxyUCCm8s&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_08_4_bayesian_hyperparameter_opt.ipynb) * Part 8.5: Current Semester's Kaggle [[Video]](https://www.youtube.com/watch?v=48OrNYYey5E) [[Notebook]](t81_558_class_08_5_kaggle_project.ipynb) # Google CoLab Instructions The following code ensures that Google CoLab is running the correct version of TensorFlow. ``` # Startup Google CoLab try: %tensorflow_version 2.x COLAB = True print("Note: using Google CoLab") except: print("Note: not using Google CoLab") COLAB = False # Nicely formatted time string def hms_string(sec_elapsed): h = int(sec_elapsed / (60 * 60)) m = int((sec_elapsed % (60 * 60)) / 60) s = sec_elapsed % 60 return "{}:{:>02}:{:>05.2f}".format(h, m, s) ``` # Part 8.4: Bayesian Hyperparameter Optimization for Keras Bayesian Hyperparameter Optimization is a method of finding hyperparameters in a more efficient way than a grid search. Because each candidate set of hyperparameters requires a retraining of the neural network, it is best to keep the number of candidate sets to a minimum. Bayesian Hyperparameter Optimization achieves this by training a model to predict good candidate sets of hyperparameters. Snoek, J., Larochelle, H., & Adams, R. P. (2012). [Practical bayesian optimization of machine learning algorithms](https://arxiv.org/pdf/1206.2944.pdf). In *Advances in neural information processing systems* (pp. 2951-2959). * [bayesian-optimization](https://github.com/fmfn/BayesianOptimization) * [hyperopt](https://github.com/hyperopt/hyperopt) * [spearmint](https://github.com/JasperSnoek/spearmint) ``` # Ignore useless W0819 warnings generated by TensorFlow 2.0. Hopefully can remove this ignore in the future. 
# See https://github.com/tensorflow/tensorflow/issues/31308 import logging, os logging.disable(logging.WARNING) os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3" import pandas as pd from scipy.stats import zscore # Read the data set df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv", na_values=['NA','?']) # Generate dummies for job df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1) df.drop('job', axis=1, inplace=True) # Generate dummies for area df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1) df.drop('area', axis=1, inplace=True) # Missing values for income med = df['income'].median() df['income'] = df['income'].fillna(med) # Standardize ranges df['income'] = zscore(df['income']) df['aspect'] = zscore(df['aspect']) df['save_rate'] = zscore(df['save_rate']) df['age'] = zscore(df['age']) df['subscriptions'] = zscore(df['subscriptions']) # Convert to numpy - Classification x_columns = df.columns.drop('product').drop('id') x = df[x_columns].values dummies = pd.get_dummies(df['product']) # Classification products = dummies.columns y = dummies.values import pandas as pd import os import numpy as np import time import tensorflow.keras.initializers import statistics import tensorflow.keras from sklearn import metrics from sklearn.model_selection import StratifiedKFold from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Activation, Dropout, InputLayer from tensorflow.keras import regularizers from tensorflow.keras.callbacks import EarlyStopping from sklearn.model_selection import StratifiedShuffleSplit from tensorflow.keras.layers import LeakyReLU,PReLU from tensorflow.keras.optimizers import Adam def generate_model(dropout, neuronPct, neuronShrink): # We start with some percent of 5000 starting neurons on the first hidden layer. neuronCount = int(neuronPct * 5000) # Construct neural network # kernel_initializer = tensorflow.keras.initializers.he_uniform(seed=None) model = Sequential() # So long as there would have been at least 25 neurons and fewer than 10 # layers, create a new layer. layer = 0 while neuronCount>25 and layer<10: # The first (0th) layer needs an input input_dim(neuronCount) if layer==0: model.add(Dense(neuronCount, input_dim=x.shape[1], activation=PReLU())) else: model.add(Dense(neuronCount, activation=PReLU())) layer += 1 # Add dropout after each hidden layer model.add(Dropout(dropout)) # Shrink neuron count for each layer neuronCount = neuronCount * neuronShrink model.add(Dense(y.shape[1],activation='softmax')) # Output return model # Generate a model and see what the resulting structure looks like. 
model = generate_model(dropout=0.2, neuronPct=0.1, neuronShrink=0.25) model.summary() def evaluate_network(dropout,lr,neuronPct,neuronShrink): SPLITS = 2 # Bootstrap boot = StratifiedShuffleSplit(n_splits=SPLITS, test_size=0.1) # Track progress mean_benchmark = [] epochs_needed = [] num = 0 # Loop through samples for train, test in boot.split(x,df['product']): start_time = time.time() num+=1 # Split train and test x_train = x[train] y_train = y[train] x_test = x[test] y_test = y[test] model = generate_model(dropout, neuronPct, neuronShrink) model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=lr)) monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=100, verbose=0, mode='auto', restore_best_weights=True) # Train on the bootstrap sample model.fit(x_train,y_train,validation_data=(x_test,y_test),callbacks=[monitor],verbose=0,epochs=1000) epochs = monitor.stopped_epoch epochs_needed.append(epochs) # Predict on the out of boot (validation) pred = model.predict(x_test) # Measure this bootstrap's log loss y_compare = np.argmax(y_test,axis=1) # For log loss calculation score = metrics.log_loss(y_compare, pred) mean_benchmark.append(score) m1 = statistics.mean(mean_benchmark) m2 = statistics.mean(epochs_needed) mdev = statistics.pstdev(mean_benchmark) # Record this iteration time_took = time.time() - start_time #print(f"#{num}: score={score:.6f}, mean score={m1:.6f}, stdev={mdev:.6f}, epochs={epochs}, mean epochs={int(m2)}, time={hms_string(time_took)}") tensorflow.keras.backend.clear_session() return (-m1) print(evaluate_network( dropout=0.2, lr=1e-3, neuronPct=0.2, neuronShrink=0.2)) from bayes_opt import BayesianOptimization import time # Supress NaN warnings, see: https://stackoverflow.com/questions/34955158/what-might-be-the-cause-of-invalid-value-encountered-in-less-equal-in-numpy import warnings warnings.filterwarnings("ignore",category =RuntimeWarning) # Bounded region of parameter space pbounds = {'dropout': (0.0, 0.499), 'lr': (0.0, 0.1), 'neuronPct': (0.01, 1), 'neuronShrink': (0.01, 1) } optimizer = BayesianOptimization( f=evaluate_network, pbounds=pbounds, verbose=2, # verbose = 1 prints only when a maximum is observed, verbose = 0 is silent random_state=1, ) start_time = time.time() optimizer.maximize(init_points=10, n_iter=100,) time_took = time.time() - start_time print(f"Total runtime: {hms_string(time_took)}") print(optimizer.max) ``` {'target': -0.6500334282952827, 'params': {'dropout': 0.12771198428037775, 'lr': 0.0074010841641111965, 'neuronPct': 0.10774655638231533, 'neuronShrink': 0.2784788676498257}}
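With the search finished, one natural next step is to retrain a single network using the best parameter set reported by `optimizer.max`. The cell below is a minimal sketch of that hand-off and is not part of the original notebook: it reuses the `generate_model` helper and the `x`/`y` arrays defined above, and the simple `validation_split` hold-out (instead of the bootstrap used in `evaluate_network`) is an illustrative simplification.

```
# Minimal sketch (assumption): retrain one final model with the best hyperparameters found above.
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import EarlyStopping

best = optimizer.max['params']   # {'dropout': ..., 'lr': ..., 'neuronPct': ..., 'neuronShrink': ...}

final_model = generate_model(dropout=best['dropout'],
                             neuronPct=best['neuronPct'],
                             neuronShrink=best['neuronShrink'])
final_model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=best['lr']))

monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=100,
                        verbose=0, mode='auto', restore_best_weights=True)
final_model.fit(x, y, validation_split=0.1, callbacks=[monitor], verbose=0, epochs=1000)
```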
# LSTM Stock Predictor Using Closing Prices In this notebook, you will build and train a custom LSTM RNN that uses a 10 day window of Bitcoin closing prices to predict the 11th day closing price. You will need to: 1. Prepare the data for training and testing 2. Build and train a custom LSTM RNN 3. Evaluate the performance of the model ## Data Preparation In this section, you will need to prepare the training and testing data for the model. The model will use a rolling 10 day window to predict the 11th day closing price. You will need to: 1. Use the `window_data` function to generate the X and y values for the model. 2. Split the data into 70% training and 30% testing 3. Apply the MinMaxScaler to the X and y values 4. Reshape the X_train and X_test data for the model. Note: The required input format for the LSTM is: ```python reshape((X_train.shape[0], X_train.shape[1], 1)) ``` ``` import numpy as np import pandas as pd import hvplot.pandas # Set the random seed for reproducibility # Note: This is for the homework solution, but it is good practice to comment this out and run multiple experiments to evaluate your model from numpy.random import seed seed(1) from tensorflow import random random.set_seed(2) # Load the fear and greed sentiment data for Bitcoin df = pd.read_csv('btc_sentiment.csv', index_col="date", infer_datetime_format=True, parse_dates=True) df = df.drop(columns="fng_classification") df.head() # Load the historical closing prices for Bitcoin df2 = pd.read_csv('btc_historic.csv', index_col="Date", infer_datetime_format=True, parse_dates=True)['Close'] df2 = df2.sort_index() df2.tail() # Join the data into a single DataFrame df = df.join(df2, how="inner") df.tail() df.head() # This function accepts the column number for the features (X) and the target (y) # It chunks the data up with a rolling window of Xt-n to predict Xt # It returns a numpy array of X any y def window_data(df, window, feature_col_number, target_col_number): X = [] y = [] for i in range(len(df) - window - 1): features = df.iloc[i:(i + window), feature_col_number] target = df.iloc[(i + window), target_col_number] X.append(features) y.append(target) return np.array(X), np.array(y).reshape(-1, 1) # Predict Closing Prices using a 10 day window of previous closing prices # Then, experiment with window sizes anywhere from 1 to 10 and see how the model performance changes window_size = 10 # Column index 0 is the 'fng_value' column # Column index 1 is the `Close` column feature_column = 1 target_column = 1 X, y = window_data(df, window_size, feature_column, target_column) # Use 70% of the data for training and the remaineder for testing split = int(0.7 * len(X)) X_train = X[: split -1] X_test = X[split:] y_train = y[: split -1] y_test = y[split:] from sklearn.preprocessing import MinMaxScaler # Use the MinMaxScaler to scale data between 0 and 1. scaler = MinMaxScaler() scaler.fit(X) X_train = scaler.transform(X_train) X_test = scaler.transform(X_test) scaler.fit(y) y_train = scaler.transform(y_train) y_test = scaler.transform(y_test) # Reshape the features for the model X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1)) X_test = X_test.reshape((X_test.shape[0], X_test.shape[1], 1)) print(f"X_train sample values:\n{X_train[:5]} \n") print(f"X_test sample values:\n{X_test[:5]}") ``` --- ## Build and Train the LSTM RNN In this section, you will design a custom LSTM RNN and fit (train) it using the training data. You will need to: 1. Define the model architecture 2. Compile the model 3. 
Fit the model to the training data ### Hints: You will want to use the same model architecture and random seed for both notebooks. This is necessary to accurately compare the performance of the FNG model vs the closing price model. ``` from tensorflow.keras.models import Sequential from tensorflow.keras.layers import LSTM, Dense, Dropout # Build the LSTM model. # The return sequences need to be set to True if you are adding additional LSTM layers, but # You don't have to do this for the final layer. # Note: The dropouts help prevent overfitting # Note: The input shape is the number of time steps and the number of indicators # Note: Batching inputs has a different input shape of Samples/TimeSteps/Features model = Sequential() number_units = 30 dropout_fraction = 0.2 #1 model.add(LSTM( units=number_units, return_sequences=True, input_shape=(X_train.shape[1], 1)) ) model.add(Dropout(dropout_fraction)) #2 model.add(LSTM(units=number_units, return_sequences=True)) model.add(Dropout(dropout_fraction)) #3 model.add(LSTM(units=number_units)) model.add(Dropout(dropout_fraction)) # Output model.add(Dense(1)) # Compile the model model.compile(optimizer="adam", loss="mean_squared_error") # Summarize the model model.summary() # Train the model # Use at least 10 epochs # Do not shuffle the data # Experiement with the batch size, but a smaller batch size is recommended model.fit(X_train, y_train, epochs=10, shuffle=False, batch_size=1000, verbose=1) ``` --- ## Model Performance In this section, you will evaluate the model using the test data. You will need to: 1. Evaluate the model using the `X_test` and `y_test` data. 2. Use the X_test data to make predictions 3. Create a DataFrame of Real (y_test) vs predicted values. 4. Plot the Real vs predicted values as a line chart ### Hints Remember to apply the `inverse_transform` function to the predicted and y_test values to recover the actual closing prices. ``` # Evaluate the model model.evaluate(X_test, y_test) # Make some predictions predicted = model.predict(X_test) # Recover the original prices instead of the scaled version predicted_prices = scaler.inverse_transform(predicted) real_prices = scaler.inverse_transform(y_test.reshape(-1, 1)) # Create a DataFrame of Real and Predicted values stocks = pd.DataFrame({ "Real": real_prices.ravel(), "Predicted": predicted_prices.ravel() }, index = df.index[-len(real_prices): ]) stocks.head() # Plot the real vs predicted values as a line chart stocks.plot() ``` Which model has a lower loss? - Closing Prices Which model tracks the actual values better over time? - Closing Prices Which window size works best for the model? - As the window size increased, the predicted values more closely matched the real values. Since we only tested 1-10, the window size of 10 was the best fit for the model.
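The window-size observation above can also be checked programmatically rather than by editing `window_size` by hand. The loop below is a minimal sketch of such a comparison and is an addition to the original assignment: it reuses `df` and `window_data` from the cells above, fits the scalers on the training slice only (a deliberate choice to avoid leaking test information), and uses a single LSTM layer with a small epoch count purely to keep the comparison quick.

```
# Minimal sketch (assumption): compare test loss across window sizes 1-10.
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout

results = {}
for window in range(1, 11):
    X, y = window_data(df, window, 1, 1)      # closing price as both feature and target
    split = int(0.7 * len(X))
    X_train, X_test = X[:split], X[split:]
    y_train, y_test = y[:split], y[split:]

    x_scaler = MinMaxScaler().fit(X_train)
    y_scaler = MinMaxScaler().fit(y_train)
    X_train = x_scaler.transform(X_train).reshape(-1, window, 1)
    X_test = x_scaler.transform(X_test).reshape(-1, window, 1)
    y_train = y_scaler.transform(y_train)
    y_test = y_scaler.transform(y_test)

    model = Sequential([LSTM(30, input_shape=(window, 1)), Dropout(0.2), Dense(1)])
    model.compile(optimizer="adam", loss="mean_squared_error")
    model.fit(X_train, y_train, epochs=10, shuffle=False, batch_size=32, verbose=0)

    results[window] = model.evaluate(X_test, y_test, verbose=0)

for window, loss in sorted(results.items()):
    print(f"window={window}: test loss={loss:.6f}")
```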
``` import numpy as np from scipy import fft from scipy.io import loadmat import matplotlib.pyplot as plt whale = loadmat('whale.mat',squeeze_me=True,struct_as_record=True) ``` ## 绘制信号时域和频域波形 ``` whale_data = whale['w'] whale_fs = whale['fs'] t = np.arange(0,len(whale_data)/whale_fs,1/whale_fs) whale_num = len(whale_data) whale_fftnum = np.power(2,(np.ceil(np.log2(whale_num)))) whale_spec = np.abs(fft.fft(whale_data,int(whale_fftnum))) whale_power = 20*np.log10(whale_spec/np.max(whale_spec)) whale_freq = (np.arange(whale_fftnum)/whale_fftnum*whale_fs) import seaborn as sns sns.set(style = 'darkgrid') fig,axs = plt.subplots(2,1,constrained_layout=True) plt.rcParams['font.sans-serif']=['SimHei'] #用来正常显示中文标签 plt.rcParams['axes.unicode_minus'] = False #用来显示负号 axs[0].plot(t,whale_data);axs[0].autoscale(tight=True) axs[0].set_xlabel('时间t/s') axs[1].plot(whale_freq[:int(np.floor(whale_fftnum/2)+1)],whale_power[:int(np.floor(whale_fftnum/2)+1)]) axs[1].set_xlabel('频率f/Hz');axs[1].set_ylabel('功率谱/dB') axs[1].autoscale(tight=True) ``` ## 对信号图形坐标缩放操作 ``` import ipywidgets as widgets from ipywidgets import interact,interact_manual from IPython.display import display def scalefunc(Time_Domain_xaxis,Time_Domain_yaxis,Spec_Domain_xaxis,Spec_Domain_yaxis,Time_Domain_Color,Spec_Domain_Color): fig,axs = plt.subplots(2,1,constrained_layout=True) plt.rcParams['font.sans-serif']=['SimHei'] #用来正常显示中文标签 plt.rcParams['axes.unicode_minus'] = False #用来显示负号 axs[0].plot(t,whale_data,c=Time_Domain_Color);axs[0].autoscale(tight=True) axs[0].set_xlabel('时间t/s');axs[0].set_ylabel('幅度') t_xaxis = Time_Domain_xaxis;t_yaxis = Time_Domain_yaxis axs[0].set_xlim(t_xaxis[0],t_xaxis[1]);axs[0].set_ylim(t_yaxis[0],t_yaxis[1]) axs[1].plot(whale_freq[:int(np.floor(whale_fftnum/2)+1)],whale_power[:int(np.floor(whale_fftnum/2)+1)],c=Spec_Domain_Color) axs[1].autoscale(tight=True) axs[1].set_xlabel('频率f/Hz');axs[1].set_ylabel('功率谱/dB') s_xaxis = Spec_Domain_xaxis;s_yaxis = Spec_Domain_yaxis axs[1].set_xlim(s_xaxis[0],s_xaxis[1]);axs[1].set_ylim(s_yaxis[0],s_yaxis[1]) myplot = interact_manual(scalefunc, Time_Domain_xaxis=widgets.FloatRangeSlider(value=(np.min(t),np.max(t)),min=np.min(t),max=np.max(t),step=0.0001,description='Time Domain xaxis:'), Time_Domain_yaxis=widgets.FloatRangeSlider(value=(np.min(whale_data),np.max(whale_data)),min=np.min(whale_data),max=np.max(whale_data),step=0.0001,description='Time Domain yaxis:'), Spec_Domain_xaxis=widgets.FloatRangeSlider(value=(np.min(whale_freq[:int(np.floor(whale_fftnum/2)+1)]),np.max(whale_freq[:int(np.floor(whale_fftnum/2)+1)])),min=np.min(whale_freq[:int(np.floor(whale_fftnum/2)+1)]),max=np.max(whale_freq[:int(np.floor(whale_fftnum/2)+1)]),step=1,description='Spec Domain xaxis:'), Spec_Domain_yaxis=widgets.FloatRangeSlider(value=(np.min(whale_power[:int(np.floor(whale_fftnum/2)+1)]),np.max(whale_power[:int(np.floor(whale_fftnum/2)+1)])),min=np.min(whale_power[:int(np.floor(whale_fftnum/2)+1)]),max=np.max(whale_power[:int(np.floor(whale_fftnum/2)+1)]),step=1,description='Spec Domain yaxis:'), Time_Domain_Color=widgets.Dropdown(options=['black','blue','coral','cyan','gold','pink','red','yellow','magenta'],value='black',description='Time Domain Color:'), Spec_Domain_Color=widgets.Dropdown(options=['black','blue','coral','cyan','gold','pink','red','yellow','magenta'],value='black',description='Spec Domain Color:')) ``` ## 对信号图形上某点的数值进行读取 ``` def datafunc(Time_Domain_xdata,Spec_Domain_xdata): fig,axs = plt.subplots(2,1,constrained_layout=True) plt.rcParams['font.sans-serif']=['SimHei'] 
#用来正常显示中文标签 plt.rcParams['axes.unicode_minus'] = False #用来显示负号 axs[0].plot(t,whale_data,alpha=0.7);axs[0].autoscale(tight=True) axs[0].set_xlabel('时间t/s');axs[0].set_ylabel('幅度') require_x_time = np.around(Time_Domain_xdata*whale_fs)/whale_fs x_tloc = np.where(t==require_x_time);y_tval = whale_data[x_tloc] axs[0].text(x=require_x_time+0.05,y=y_tval[0]+0.05,s='%f,%f'%(require_x_time,y_tval[0]),fontdict=dict(fontsize=10, color='b',family='Times New Roman',weight='normal') ,bbox={'facecolor': 'w', #填充色 'edgecolor':'gray',#外框色 'alpha': 0.5, #框透明度 'pad': 1,#本文与框周围距离 }) axs[0].scatter(require_x_time,y_tval[0],marker='o',c='r',s=20) axs[1].plot(whale_freq[:int(np.floor(whale_fftnum/2)+1)],whale_power[:int(np.floor(whale_fftnum/2)+1)],alpha=0.7) axs[1].autoscale(tight=True) axs[1].set_xlabel('频率f/Hz');axs[1].set_ylabel('功率谱/dB') require_x_spec = np.around(Spec_Domain_xdata*whale_fftnum/whale_fs)*(1/whale_fftnum*whale_fs) x_sloc = np.where(whale_freq==require_x_spec);y_sval = whale_power[x_sloc] axs[1].text(x=require_x_spec-12,y=y_sval[0]-15,s='%f,%f'%(require_x_spec,y_sval[0]),fontdict=dict(fontsize=10, color='b',family='Times New Roman',weight='normal') ,bbox={'facecolor': 'w', #填充色 'edgecolor':'gray',#外框色 'alpha': 0.5, #框透明度 'pad': 1,#本文与框周围距离 }) axs[1].scatter(require_x_spec,y_sval[0],marker='o',c='r',s=20) myplot = interact_manual(datafunc, Time_Domain_xdata=widgets.FloatSlider(value=np.min(t),min=np.min(t),max=np.max(t),step=1/whale_fs,description='Time Domain xdata:'), Spec_Domain_xdata=widgets.FloatSlider(value=np.min(whale_freq[:int(np.floor(whale_fftnum/2)+1)]),min=np.min(whale_freq[:int(np.floor(whale_fftnum/2)+1)]),max=np.max(whale_freq[:int(np.floor(whale_fftnum/2)+1)]),step=1/whale_fftnum*whale_fs,description='Spec Domain xdata:')) ```
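For scripted use, the same nearest-sample lookup that the interactive widget performs can be wrapped in a small helper. This is a minimal sketch added to the original notebook; it assumes the `t`, `whale_data`, `whale_freq`, `whale_power`, and `whale_fftnum` arrays defined above. Using `np.argmin` on the absolute difference is a defensive choice, since the exact-equality match used in `datafunc` can fail because of floating-point rounding.

```
import numpy as np

def read_value(x_query, x_axis, y_axis):
    """Return the (x, y) sample on the curve closest to x_query."""
    idx = np.argmin(np.abs(np.asarray(x_axis) - x_query))
    return x_axis[idx], y_axis[idx]

# Time-domain sample nearest t = 0.5 s (assumes the arrays defined above)
print(read_value(0.5, t, whale_data))

# Power-spectrum sample nearest f = 1000 Hz, positive frequencies only
half = int(np.floor(whale_fftnum/2) + 1)
print(read_value(1000.0, whale_freq[:half], whale_power[:half]))
```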
## Telluric correction with `muler` and `gollum` Telluric correction can be complicated, with the right approach depending on one's own science application. In this notebook we demonstrate one uncontroversial, albeit imperfect approach: dividing by an observed A0V standard, and multiplying back by an A0V template. #### August 24, 2021: This method is under active development ``` %config InlineBackend.figure_format='retina' from muler.hpf import HPFSpectrum example_file = '../../data/HPF/Goldilocks_20210517T054403_v1.0_0060.spectra.fits' spectrum = HPFSpectrum(file=example_file, order=19) spectrum = spectrum.normalize().sky_subtract(method='vector').deblaze().normalize() ``` ### Retrieve and doctor the model with `gollum` `gollum` is a sister project to muler. While not a dependency, it has lots of parallel use cases and we hope you will install it too. It makes it easy to retrieve and manipulate theoretical models. Follow the instructions in gollum for downloading the raw high-resolution PHOENIX models. ``` from gollum.phoenix import PHOENIXSpectrum wl_lo, wl_hi = spectrum.wavelength.min().value*0.998, spectrum.wavelength.max().value*1.002 native_resolution_template = PHOENIXSpectrum(teff=9600, logg=4.5, path='~/libraries/raw/PHOENIX/', wl_lo=wl_lo, wl_hi=wl_hi).normalize() ax = spectrum.plot(ylo=0.0, yhi=1.5, label='Observed A0V') native_resolution_template.plot(ax=ax, label='Native resolution A0V template') ax.set_xlim(10_820, 10_960) ax.legend(); ``` The deep Hydrogen lines appear to have a different shape, since the A0V template is shown with near-infinite resolution. Let's "doctor the template" with rotational broadening. ``` A0V_model = native_resolution_template.rotationally_broaden(130.0) A0V_model = A0V_model.instrumental_broaden() # "just works" for HPF A0V_model = A0V_model.rv_shift(25.0) A0V_model = A0V_model.resample(spectrum) ax = spectrum.plot(ylo=0.0, yhi=1.5, label='Observed A0V') A0V_model.plot(ax=ax, label='Fine-tuned A0V model') ax.legend(); ``` Awesome! That's a better fit! Let's compute the ratio. ``` telluric_response = spectrum / A0V_model ax = telluric_response.plot(ylo=0); ax.axhline(1.0, linestyle='dotted', color='k'); ``` We see that the telluric response exhibits sharp discrete lines from the atmosphere. The spectrum also exhibits a smooth continuum variation likely attributable to the imperfections in the A0V templates. The imperfect A0V templates are a known limitation of this telluric correction strategy.
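Once the telluric response is in hand, the usual next step is to divide a science spectrum (ideally observed in the same order, close in time and airmass) by it. The cell below is only an illustrative sketch added to this notebook: the science target file name is hypothetical, and it assumes the same `HPFSpectrum` preprocessing chain used above; spectrum-by-spectrum division is the same operation that produced the ratio above.

```
# Illustrative sketch: the science target file name below is hypothetical.
from muler.hpf import HPFSpectrum

science_file = '../../data/HPF/Goldilocks_science_target_example.spectra.fits'  # hypothetical path
science = HPFSpectrum(file=science_file, order=19)
science = science.normalize().sky_subtract(method='vector').deblaze().normalize()

# Divide out the telluric + instrument response estimated from the A0V standard
corrected = science / telluric_response

ax = science.plot(ylo=0.0, yhi=1.5, label='Observed science target')
corrected.plot(ax=ax, label='Telluric-corrected')
ax.legend();
```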
# Content __1. Exploratory Visualization__ __2. Data Cleaning__ __3. Feature Engineering__ __4. Modeling & Evaluation__ __5. Ensemble Methods__ ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import warnings warnings.filterwarnings('ignore') %matplotlib inline plt.style.use('ggplot') from sklearn.base import BaseEstimator, TransformerMixin, RegressorMixin, clone from sklearn.preprocessing import LabelEncoder from sklearn.preprocessing import RobustScaler, StandardScaler from sklearn.metrics import mean_squared_error from sklearn.pipeline import Pipeline, make_pipeline from scipy.stats import skew from sklearn.decomposition import PCA, KernelPCA from sklearn.preprocessing import Imputer from sklearn.model_selection import cross_val_score, GridSearchCV, KFold from sklearn.linear_model import LinearRegression from sklearn.linear_model import Ridge from sklearn.linear_model import Lasso from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor, ExtraTreesRegressor from sklearn.svm import SVR, LinearSVR from sklearn.linear_model import ElasticNet, SGDRegressor, BayesianRidge from sklearn.kernel_ridge import KernelRidge from xgboost import XGBRegressor train = pd.read_csv('../input/train.csv') test = pd.read_csv('../input/test.csv') ``` # Exploratory Visualization + __It seems that the price of recent-built houses are higher. So later I 'll use labelencoder for three "Year" feature.__ ``` plt.figure(figsize=(15,8)) sns.boxplot(train.YearBuilt, train.SalePrice) ``` + __As is discussed in other kernels, the bottom right two two points with extremely large GrLivArea are likely to be outliers. So we delete them.__ ``` plt.figure(figsize=(12,6)) plt.scatter(x=train.GrLivArea, y=train.SalePrice) plt.xlabel("GrLivArea", fontsize=13) plt.ylabel("SalePrice", fontsize=13) plt.ylim(0,800000) train.drop(train[(train["GrLivArea"]>4000)&(train["SalePrice"]<300000)].index,inplace=True) full=pd.concat([train,test], ignore_index=True) full.drop(['Id'],axis=1, inplace=True) full.shape ``` # Data Cleaning ### Missing Data ``` aa = full.isnull().sum() aa[aa>0].sort_values(ascending=False) ``` + __Let's first imput the missing values of LotFrontage based on the median of LotArea and Neighborhood. Since LotArea is a continuous feature, We use qcut to divide it into 10 parts.__ ``` full.groupby(['Neighborhood'])[['LotFrontage']].agg(['mean','median','count']) full["LotAreaCut"] = pd.qcut(full.LotArea,10) full.groupby(['LotAreaCut'])[['LotFrontage']].agg(['mean','median','count']) full['LotFrontage']=full.groupby(['LotAreaCut','Neighborhood'])['LotFrontage'].transform(lambda x: x.fillna(x.median())) # Since some combinations of LotArea and Neighborhood are not available, so we just LotAreaCut alone. 
full['LotFrontage']=full.groupby(['LotAreaCut'])['LotFrontage'].transform(lambda x: x.fillna(x.median())) ``` + __Then we filling in other missing values according to data_description.__ ``` cols=["MasVnrArea", "BsmtUnfSF", "TotalBsmtSF", "GarageCars", "BsmtFinSF2", "BsmtFinSF1", "GarageArea"] for col in cols: full[col].fillna(0, inplace=True) cols1 = ["PoolQC" , "MiscFeature", "Alley", "Fence", "FireplaceQu", "GarageQual", "GarageCond", "GarageFinish", "GarageYrBlt", "GarageType", "BsmtExposure", "BsmtCond", "BsmtQual", "BsmtFinType2", "BsmtFinType1", "MasVnrType"] for col in cols1: full[col].fillna("None", inplace=True) # fill in with mode cols2 = ["MSZoning", "BsmtFullBath", "BsmtHalfBath", "Utilities", "Functional", "Electrical", "KitchenQual", "SaleType","Exterior1st", "Exterior2nd"] for col in cols2: full[col].fillna(full[col].mode()[0], inplace=True) ``` + __And there is no missing data except for the value we want to predict !__ ``` full.isnull().sum()[full.isnull().sum()>0] ``` # Feature Engineering + __Convert some numerical features into categorical features. It's better to use LabelEncoder and get_dummies for these features.__ ``` NumStr = ["MSSubClass","BsmtFullBath","BsmtHalfBath","HalfBath","BedroomAbvGr","KitchenAbvGr","MoSold","YrSold","YearBuilt","YearRemodAdd","LowQualFinSF","GarageYrBlt"] for col in NumStr: full[col]=full[col].astype(str) ``` + __Now I want to do a long list of value-mapping. __ + __I was influenced by the insight that we should build as many features as possible and trust the model to choose the right features. So I decided to groupby SalePrice according to one feature and sort it based on mean and median. Here is an example:__ ``` full.groupby(['MSSubClass'])[['SalePrice']].agg(['mean','median','count']) ``` + __So basically I'll do__ '180' : 1 '30' : 2 '45' : 2 '190' : 3, '50' : 3, '90' : 3, '85' : 4, '40' : 4, '160' : 4 '70' : 5, '20' : 5, '75' : 5, '80' : 5, '150' : 5 '120': 6, '60' : 6 + __Different people may have different views on how to map these values, so just follow your instinct =^_^=__ __Below I also add a small "o" in front of the features so as to keep the original features to use get_dummies in a moment.__ ``` def map_values(): full["oMSSubClass"] = full.MSSubClass.map({'180':1, '30':2, '45':2, '190':3, '50':3, '90':3, '85':4, '40':4, '160':4, '70':5, '20':5, '75':5, '80':5, '150':5, '120': 6, '60':6}) full["oMSZoning"] = full.MSZoning.map({'C (all)':1, 'RH':2, 'RM':2, 'RL':3, 'FV':4}) full["oNeighborhood"] = full.Neighborhood.map({'MeadowV':1, 'IDOTRR':2, 'BrDale':2, 'OldTown':3, 'Edwards':3, 'BrkSide':3, 'Sawyer':4, 'Blueste':4, 'SWISU':4, 'NAmes':4, 'NPkVill':5, 'Mitchel':5, 'SawyerW':6, 'Gilbert':6, 'NWAmes':6, 'Blmngtn':7, 'CollgCr':7, 'ClearCr':7, 'Crawfor':7, 'Veenker':8, 'Somerst':8, 'Timber':8, 'StoneBr':9, 'NoRidge':10, 'NridgHt':10}) full["oCondition1"] = full.Condition1.map({'Artery':1, 'Feedr':2, 'RRAe':2, 'Norm':3, 'RRAn':3, 'PosN':4, 'RRNe':4, 'PosA':5 ,'RRNn':5}) full["oBldgType"] = full.BldgType.map({'2fmCon':1, 'Duplex':1, 'Twnhs':1, '1Fam':2, 'TwnhsE':2}) full["oHouseStyle"] = full.HouseStyle.map({'1.5Unf':1, '1.5Fin':2, '2.5Unf':2, 'SFoyer':2, '1Story':3, 'SLvl':3, '2Story':4, '2.5Fin':4}) full["oExterior1st"] = full.Exterior1st.map({'BrkComm':1, 'AsphShn':2, 'CBlock':2, 'AsbShng':2, 'WdShing':3, 'Wd Sdng':3, 'MetalSd':3, 'Stucco':3, 'HdBoard':3, 'BrkFace':4, 'Plywood':4, 'VinylSd':5, 'CemntBd':6, 'Stone':7, 'ImStucc':7}) full["oMasVnrType"] = full.MasVnrType.map({'BrkCmn':1, 'None':1, 'BrkFace':2, 'Stone':3}) 
full["oExterQual"] = full.ExterQual.map({'Fa':1, 'TA':2, 'Gd':3, 'Ex':4}) full["oFoundation"] = full.Foundation.map({'Slab':1, 'BrkTil':2, 'CBlock':2, 'Stone':2, 'Wood':3, 'PConc':4}) full["oBsmtQual"] = full.BsmtQual.map({'Fa':2, 'None':1, 'TA':3, 'Gd':4, 'Ex':5}) full["oBsmtExposure"] = full.BsmtExposure.map({'None':1, 'No':2, 'Av':3, 'Mn':3, 'Gd':4}) full["oHeating"] = full.Heating.map({'Floor':1, 'Grav':1, 'Wall':2, 'OthW':3, 'GasW':4, 'GasA':5}) full["oHeatingQC"] = full.HeatingQC.map({'Po':1, 'Fa':2, 'TA':3, 'Gd':4, 'Ex':5}) full["oKitchenQual"] = full.KitchenQual.map({'Fa':1, 'TA':2, 'Gd':3, 'Ex':4}) full["oFunctional"] = full.Functional.map({'Maj2':1, 'Maj1':2, 'Min1':2, 'Min2':2, 'Mod':2, 'Sev':2, 'Typ':3}) full["oFireplaceQu"] = full.FireplaceQu.map({'None':1, 'Po':1, 'Fa':2, 'TA':3, 'Gd':4, 'Ex':5}) full["oGarageType"] = full.GarageType.map({'CarPort':1, 'None':1, 'Detchd':2, '2Types':3, 'Basment':3, 'Attchd':4, 'BuiltIn':5}) full["oGarageFinish"] = full.GarageFinish.map({'None':1, 'Unf':2, 'RFn':3, 'Fin':4}) full["oPavedDrive"] = full.PavedDrive.map({'N':1, 'P':2, 'Y':3}) full["oSaleType"] = full.SaleType.map({'COD':1, 'ConLD':1, 'ConLI':1, 'ConLw':1, 'Oth':1, 'WD':1, 'CWD':2, 'Con':3, 'New':3}) full["oSaleCondition"] = full.SaleCondition.map({'AdjLand':1, 'Abnorml':2, 'Alloca':2, 'Family':2, 'Normal':3, 'Partial':4}) return "Done!" map_values() # drop two unwanted columns full.drop("LotAreaCut",axis=1,inplace=True) full.drop(['SalePrice'],axis=1,inplace=True) ``` ## Pipeline + __Next we can build a pipeline. It's convenient to experiment different feature combinations once you've got a pipeline.__ + __Label Encoding three "Year" features.__ ``` class labelenc(BaseEstimator, TransformerMixin): def __init__(self): pass def fit(self,X,y=None): return self def transform(self,X): lab=LabelEncoder() X["YearBuilt"] = lab.fit_transform(X["YearBuilt"]) X["YearRemodAdd"] = lab.fit_transform(X["YearRemodAdd"]) X["GarageYrBlt"] = lab.fit_transform(X["GarageYrBlt"]) return X ``` + __Apply log1p to the skewed features, then get_dummies.__ ``` class skew_dummies(BaseEstimator, TransformerMixin): def __init__(self,skew=0.5): self.skew = skew def fit(self,X,y=None): return self def transform(self,X): X_numeric=X.select_dtypes(exclude=["object"]) skewness = X_numeric.apply(lambda x: skew(x)) skewness_features = skewness[abs(skewness) >= self.skew].index X[skewness_features] = np.log1p(X[skewness_features]) X = pd.get_dummies(X) return X # build pipeline pipe = Pipeline([ ('labenc', labelenc()), ('skew_dummies', skew_dummies(skew=1)), ]) # save the original data for later use full2 = full.copy() data_pipe = pipe.fit_transform(full2) data_pipe.shape data_pipe.head() ``` + __use robustscaler since maybe there are other outliers.__ ``` scaler = RobustScaler() n_train=train.shape[0] X = data_pipe[:n_train] test_X = data_pipe[n_train:] y= train.SalePrice X_scaled = scaler.fit(X).transform(X) y_log = np.log(train.SalePrice) test_X_scaled = scaler.transform(test_X) ``` ## Feature Selection + __I have to confess, the feature engineering above is not enough, so we need more.__ + __Combining different features is usually a good way, but we have no idea what features should we choose. 
Luckily there are some models that can provide feature selection, here I use Lasso, but you are free to choose Ridge, RandomForest or GradientBoostingTree.__ ``` lasso=Lasso(alpha=0.001) lasso.fit(X_scaled,y_log) FI_lasso = pd.DataFrame({"Feature Importance":lasso.coef_}, index=data_pipe.columns) FI_lasso.sort_values("Feature Importance",ascending=False) FI_lasso[FI_lasso["Feature Importance"]!=0].sort_values("Feature Importance").plot(kind="barh",figsize=(15,25)) plt.xticks(rotation=90) plt.show() ``` + __Based on the "Feature Importance" plot and other try-and-error, I decided to add some features to the pipeline.__ ``` class add_feature(BaseEstimator, TransformerMixin): def __init__(self,additional=1): self.additional = additional def fit(self,X,y=None): return self def transform(self,X): if self.additional==1: X["TotalHouse"] = X["TotalBsmtSF"] + X["1stFlrSF"] + X["2ndFlrSF"] X["TotalArea"] = X["TotalBsmtSF"] + X["1stFlrSF"] + X["2ndFlrSF"] + X["GarageArea"] else: X["TotalHouse"] = X["TotalBsmtSF"] + X["1stFlrSF"] + X["2ndFlrSF"] X["TotalArea"] = X["TotalBsmtSF"] + X["1stFlrSF"] + X["2ndFlrSF"] + X["GarageArea"] X["+_TotalHouse_OverallQual"] = X["TotalHouse"] * X["OverallQual"] X["+_GrLivArea_OverallQual"] = X["GrLivArea"] * X["OverallQual"] X["+_oMSZoning_TotalHouse"] = X["oMSZoning"] * X["TotalHouse"] X["+_oMSZoning_OverallQual"] = X["oMSZoning"] + X["OverallQual"] X["+_oMSZoning_YearBuilt"] = X["oMSZoning"] + X["YearBuilt"] X["+_oNeighborhood_TotalHouse"] = X["oNeighborhood"] * X["TotalHouse"] X["+_oNeighborhood_OverallQual"] = X["oNeighborhood"] + X["OverallQual"] X["+_oNeighborhood_YearBuilt"] = X["oNeighborhood"] + X["YearBuilt"] X["+_BsmtFinSF1_OverallQual"] = X["BsmtFinSF1"] * X["OverallQual"] X["-_oFunctional_TotalHouse"] = X["oFunctional"] * X["TotalHouse"] X["-_oFunctional_OverallQual"] = X["oFunctional"] + X["OverallQual"] X["-_LotArea_OverallQual"] = X["LotArea"] * X["OverallQual"] X["-_TotalHouse_LotArea"] = X["TotalHouse"] + X["LotArea"] X["-_oCondition1_TotalHouse"] = X["oCondition1"] * X["TotalHouse"] X["-_oCondition1_OverallQual"] = X["oCondition1"] + X["OverallQual"] X["Bsmt"] = X["BsmtFinSF1"] + X["BsmtFinSF2"] + X["BsmtUnfSF"] X["Rooms"] = X["FullBath"]+X["TotRmsAbvGrd"] X["PorchArea"] = X["OpenPorchSF"]+X["EnclosedPorch"]+X["3SsnPorch"]+X["ScreenPorch"] X["TotalPlace"] = X["TotalBsmtSF"] + X["1stFlrSF"] + X["2ndFlrSF"] + X["GarageArea"] + X["OpenPorchSF"]+X["EnclosedPorch"]+X["3SsnPorch"]+X["ScreenPorch"] return X ``` + __By using a pipeline, you can quickily experiment different feature combinations.__ ``` pipe = Pipeline([ ('labenc', labelenc()), ('add_feature', add_feature(additional=2)), ('skew_dummies', skew_dummies(skew=1)), ]) ``` ## PCA + __Im my case, doing PCA is very important. It lets me gain a relatively big boost on leaderboard. At first I don't believe PCA can help me, but in retrospect, maybe the reason is that the features I built are highly correlated, and it leads to multicollinearity. PCA can decorrelate these features.__ + __So I'll use approximately the same dimension in PCA as in the original data. 
Since the aim here is not deminsion reduction.__ ``` full_pipe = pipe.fit_transform(full) full_pipe.shape n_train=train.shape[0] X = full_pipe[:n_train] test_X = full_pipe[n_train:] y= train.SalePrice X_scaled = scaler.fit(X).transform(X) y_log = np.log(train.SalePrice) test_X_scaled = scaler.transform(test_X) pca = PCA(n_components=410) X_scaled=pca.fit_transform(X_scaled) test_X_scaled = pca.transform(test_X_scaled) X_scaled.shape, test_X_scaled.shape ``` # Modeling & Evaluation ``` # define cross validation strategy def rmse_cv(model,X,y): rmse = np.sqrt(-cross_val_score(model, X, y, scoring="neg_mean_squared_error", cv=5)) return rmse ``` + __We choose 13 models and use 5-folds cross-calidation to evaluate these models.__ Models include: + LinearRegression + Ridge + Lasso + Random Forrest + Gradient Boosting Tree + Support Vector Regression + Linear Support Vector Regression + ElasticNet + Stochastic Gradient Descent + BayesianRidge + KernelRidge + ExtraTreesRegressor + XgBoost ``` models = [LinearRegression(),Ridge(),Lasso(alpha=0.01,max_iter=10000),RandomForestRegressor(),GradientBoostingRegressor(),SVR(),LinearSVR(), ElasticNet(alpha=0.001,max_iter=10000),SGDRegressor(max_iter=1000,tol=1e-3),BayesianRidge(),KernelRidge(alpha=0.6, kernel='polynomial', degree=2, coef0=2.5), ExtraTreesRegressor(),XGBRegressor()] names = ["LR", "Ridge", "Lasso", "RF", "GBR", "SVR", "LinSVR", "Ela","SGD","Bay","Ker","Extra","Xgb"] for name, model in zip(names, models): score = rmse_cv(model, X_scaled, y_log) print("{}: {:.6f}, {:.4f}".format(name,score.mean(),score.std())) ``` + __Next we do some hyperparameters tuning. First define a gridsearch method.__ ``` class grid(): def __init__(self,model): self.model = model def grid_get(self,X,y,param_grid): grid_search = GridSearchCV(self.model,param_grid,cv=5, scoring="neg_mean_squared_error") grid_search.fit(X,y) print(grid_search.best_params_, np.sqrt(-grid_search.best_score_)) grid_search.cv_results_['mean_test_score'] = np.sqrt(-grid_search.cv_results_['mean_test_score']) print(pd.DataFrame(grid_search.cv_results_)[['params','mean_test_score','std_test_score']]) ``` ### Lasso ``` grid(Lasso()).grid_get(X_scaled,y_log,{'alpha': [0.0004,0.0005,0.0007,0.0009],'max_iter':[10000]}) ``` ### Ridge ``` grid(Ridge()).grid_get(X_scaled,y_log,{'alpha':[35,40,45,50,55,60,65,70,80,90]}) ``` ### SVR ``` grid(SVR()).grid_get(X_scaled,y_log,{'C':[11,13,15],'kernel':["rbf"],"gamma":[0.0003,0.0004],"epsilon":[0.008,0.009]}) ``` ### Kernel Ridge ``` param_grid={'alpha':[0.2,0.3,0.4], 'kernel':["polynomial"], 'degree':[3],'coef0':[0.8,1]} grid(KernelRidge()).grid_get(X_scaled,y_log,param_grid) ``` ### ElasticNet ``` grid(ElasticNet()).grid_get(X_scaled,y_log,{'alpha':[0.0008,0.004,0.005],'l1_ratio':[0.08,0.1,0.3],'max_iter':[10000]}) ``` # Ensemble Methods ### Weight Average + __Average base models according to their weights.__ ``` class AverageWeight(BaseEstimator, RegressorMixin): def __init__(self,mod,weight): self.mod = mod self.weight = weight def fit(self,X,y): self.models_ = [clone(x) for x in self.mod] for model in self.models_: model.fit(X,y) return self def predict(self,X): w = list() pred = np.array([model.predict(X) for model in self.models_]) # for every data point, single model prediction times weight, then add them together for data in range(pred.shape[1]): single = [pred[model,data]*weight for model,weight in zip(range(pred.shape[0]),self.weight)] w.append(np.sum(single)) return w lasso = Lasso(alpha=0.0005,max_iter=10000) ridge = Ridge(alpha=60) svr = 
SVR(gamma= 0.0004,kernel='rbf',C=13,epsilon=0.009) ker = KernelRidge(alpha=0.2 ,kernel='polynomial',degree=3 , coef0=0.8) ela = ElasticNet(alpha=0.005,l1_ratio=0.08,max_iter=10000) bay = BayesianRidge() # assign weights based on their gridsearch score w1 = 0.02 w2 = 0.2 w3 = 0.25 w4 = 0.3 w5 = 0.03 w6 = 0.2 weight_avg = AverageWeight(mod = [lasso,ridge,svr,ker,ela,bay],weight=[w1,w2,w3,w4,w5,w6]) score = rmse_cv(weight_avg,X_scaled,y_log) print(score.mean()) ``` + __But if we average only two best models, we gain better cross-validation score.__ ``` weight_avg = AverageWeight(mod = [svr,ker],weight=[0.5,0.5]) score = rmse_cv(weight_avg,X_scaled,y_log) print(score.mean()) ``` ## Stacking + __Aside from normal stacking, I also add the "get_oof" method, because later I'll combine features generated from stacking and original features.__ ``` class stacking(BaseEstimator, RegressorMixin, TransformerMixin): def __init__(self,mod,meta_model): self.mod = mod self.meta_model = meta_model self.kf = KFold(n_splits=5, random_state=42, shuffle=True) def fit(self,X,y): self.saved_model = [list() for i in self.mod] oof_train = np.zeros((X.shape[0], len(self.mod))) for i,model in enumerate(self.mod): for train_index, val_index in self.kf.split(X,y): renew_model = clone(model) renew_model.fit(X[train_index], y[train_index]) self.saved_model[i].append(renew_model) oof_train[val_index,i] = renew_model.predict(X[val_index]) self.meta_model.fit(oof_train,y) return self def predict(self,X): whole_test = np.column_stack([np.column_stack(model.predict(X) for model in single_model).mean(axis=1) for single_model in self.saved_model]) return self.meta_model.predict(whole_test) def get_oof(self,X,y,test_X): oof = np.zeros((X.shape[0],len(self.mod))) test_single = np.zeros((test_X.shape[0],5)) test_mean = np.zeros((test_X.shape[0],len(self.mod))) for i,model in enumerate(self.mod): for j, (train_index,val_index) in enumerate(self.kf.split(X,y)): clone_model = clone(model) clone_model.fit(X[train_index],y[train_index]) oof[val_index,i] = clone_model.predict(X[val_index]) test_single[:,j] = clone_model.predict(test_X) test_mean[:,i] = test_single.mean(axis=1) return oof, test_mean ``` + __Let's first try it out ! It's a bit slow to run this method, since the process is quite compliated. __ ``` # must do imputer first, otherwise stacking won't work, and i don't know why. a = Imputer().fit_transform(X_scaled) b = Imputer().fit_transform(y_log.values.reshape(-1,1)).ravel() stack_model = stacking(mod=[lasso,ridge,svr,ker,ela,bay],meta_model=ker) score = rmse_cv(stack_model,a,b) print(score.mean()) ``` + __Next we extract the features generated from stacking, then combine them with original features.__ ``` X_train_stack, X_test_stack = stack_model.get_oof(a,b,test_X_scaled) X_train_stack.shape, a.shape X_train_add = np.hstack((a,X_train_stack)) X_test_add = np.hstack((test_X_scaled,X_test_stack)) X_train_add.shape, X_test_add.shape score = rmse_cv(stack_model,X_train_add,b) print(score.mean()) ``` + __You can even do parameter tuning for your meta model after you get "X_train_stack", or do it after combining with the original features. but that's a lot of work too !__ ### Submission ``` # This is the final model I use stack_model = stacking(mod=[lasso,ridge,svr,ker,ela,bay],meta_model=ker) stack_model.fit(a,b) pred = np.exp(stack_model.predict(test_X_scaled)) result=pd.DataFrame({'Id':test.Id, 'SalePrice':pred}) result.to_csv("submission.csv",index=False) ```
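As a follow-up to the meta-model tuning idea mentioned above, the `grid` helper defined earlier can be pointed at the stacked out-of-fold features directly. This is a minimal sketch added to the original kernel; it assumes `X_train_stack` and `b` from the cells above are still in scope, and the parameter ranges are only a starting point for experimentation.

```
# Minimal sketch (assumption): tune the KernelRidge meta-model on the stacked out-of-fold features.
param_grid = {'alpha':[0.1,0.2,0.3,0.5], 'kernel':["polynomial"], 'degree':[2,3], 'coef0':[0.5,0.8,1.0]}
grid(KernelRidge()).grid_get(X_train_stack, b, param_grid)
```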
## GeostatsPy: Basic Univariate Statistics and Distribution Plotting for Subsurface Data Analytics in Python ### Michael Pyrcz, Associate Professor, University of Texas at Austin #### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1) ### PGE 383 Exercise: Basic Univariate Summary Statistics and Data Distribution Plotting in Python with GeostatsPy Here's a simple workflow with some basic univariate statistics and distribution plotting of tabular (easily extended to gridded) data summary statistics and distributions. This should help you get started data visualization and interpretation. #### Objective In the PGE 383: Stochastic Subsurface Modeling class I want to provide hands-on experience with building subsurface modeling workflows. Python provides an excellent vehicle to accomplish this. I have coded a package called GeostatsPy with GSLIB: Geostatistical Library (Deutsch and Journel, 1998) functionality that provides basic building blocks for building subsurface modeling workflows. The objective is to remove the hurdles of subsurface modeling workflow construction by providing building blocks and sufficient examples. This is not a coding class per se, but we need the ability to 'script' workflows working with numerical methods. #### Getting Started Here's the steps to get setup in Python with the GeostatsPy package: 1. Install Anaconda 3 on your machine (https://www.anaconda.com/download/). 2. From Anaconda Navigator (within Anaconda3 group), go to the environment tab, click on base (root) green arrow and open a terminal. 3. In the terminal type: pip install geostatspy. 4. Open Jupyter and in the top block get started by copy and pasting the code block below from this Jupyter Notebook to start using the geostatspy functionality. You will need to copy the data files to your working directory. They are avaiable here: 1. Tabular data - sample_data.csv at https://git.io/fh4gm 2. Gridded data - AI_grid.csv at https://git.io/fh4gU There are exampled below with these functions. You can go here to see a list of the available functions, https://git.io/fh4eX, other example workflows and source code. ``` import geostatspy.GSLIB as GSLIB # GSLIB utilies, visualization and wrapper import geostatspy.geostats as geostats # GSLIB methods convert to Python ``` We will also need some standard packages. These should have been installed with Anaconda 3. ``` import numpy as np # ndarrys for gridded data import pandas as pd # DataFrames for tabular data import os # set working directory, run executables import matplotlib.pyplot as plt # for plotting from scipy import stats # summary statistics ``` #### Set the working directory I always like to do this so I don't lose files and to simplify subsequent read and writes (avoid including the full address each time). ``` os.chdir("c:/PGE383/Examples") # set the working directory ``` #### Loading Tabular Data Here's the command to load our comma delimited data file in to a Pandas' DataFrame object. For fun try misspelling the name. You will get an ugly, long error. ``` df = pd.read_csv('sample_data_cow.csv') # load our data table (wrong name!) 
```

That's Python, but there's method to the madness. In general the error shows a trace from the initial command into all the nested programs involved until the actual error occurred. If you are debugging code (I know, I'm getting ahead of myself now), this is valuable for the detective work of figuring out what went wrong. I've spent days in C++ debugging one issue, this helps. So since you're working in Jupyter Notebook, the program just assumes you can code. Fine. If you scroll to the bottom of the error you often get a summary statement *FileNotFoundError: File b'sample_data_cow.csv' does not exist*. Ok, now you know that you don't have a file with that name in the working directory.

Painful to leave that error in our workflow, eh? Every time I passed it while writing this documentation I wanted to fix it. It's a coder thing... go ahead and erase it if you like. Just select the block and click on the scissors above in the top bar of this window. While we are at it, notice if you click the '+' you can add a new block anywhere. Ok, let's spell the file name correctly and get back to work, already.

```
df = pd.read_csv('sample_data.csv')          # load our data table (correct name this time)
```

No error now! It worked, we loaded our file into our DataFrame called 'df'. But how do you really know that it worked? Visualizing the DataFrame would be useful and we already learned about these methods in this demo (https://git.io/fNgRW). We can preview the DataFrame by printing a slice or by utilizing the 'head' DataFrame member function (with a nice and clean format, see below). With the slice we could look at any subset of the data table and with the head command, add parameter 'n=13' to see the first 13 rows of the dataset.

```
print(df.iloc[0:5,:])                        # display the first 5 samples in the table as a preview
df.head(n=13)                                # we could also use this command for a table preview
```

#### Summary Statistics for Tabular Data

The table includes X and Y coordinates (meters), Facies 1 and 2 (1 is sandstone and 0 interbedded sand and mudstone), Porosity (fraction), permeability as Perm (mDarcy) and acoustic impedance as AI (kg/m2s*10^6).

There are a lot of efficient methods to calculate summary statistics from tabular data in DataFrames. The describe command provides count, mean, minimum, maximum, and quartiles all in a nice data table. We use transpose just to flip the table so that features are on the rows and the statistics are on the columns.

```
df.describe().transpose()
```

We can also use a wide variety of statistical summaries built into NumPy's ndarrays. When we use the command:

```
df['Porosity']                               # returns a Pandas series
df['Porosity'].values                        # returns an ndarray
```

Pandas' DataFrame returns all the porosity data as a series, and if we add 'values' it returns a NumPy ndarray, giving us access to a lot of NumPy methods. I also like to use the round function to round the answer to a limited number of digits for accurate reporting of precision and ease of reading. For example, now we could use commands like these:

```
print('The minimum is ' + str(round((df['Porosity'].values).min(),2)) + '.')
print('The maximum is ' + str(round((df['Porosity'].values).max(),2)) + '.')
print('The standard deviation is ' + str(round((df['Porosity'].values).std(),2)) + '.')
```

Here are some of the NumPy statistical functions that take ndarrays as inputs. With these methods, if you had a multidimensional array you could calculate the average by row (axis = 1), by column (axis = 0), or over the entire array (no axis specified). We just have a 1D ndarray, so this is not applicable here.

```
print('The minimum is ' + str(round(np.amin(df['Porosity'].values),2)))
print('The maximum is ' + str(round(np.amax(df['Porosity'].values),2)))
print('The range (maximum - minimum) is ' + str(round(np.ptp(df['Porosity'].values),2)))
print('The P10 is ' + str(round(np.percentile(df['Porosity'].values,10),3)))
print('The P50 is ' + str(round(np.percentile(df['Porosity'].values,50),3)))
print('The P90 is ' + str(round(np.percentile(df['Porosity'].values,90),3)))
print('The P13 is ' + str(round(np.percentile(df['Porosity'].values,13),3)))
print('The median (P50) is ' + str(round(np.median(df['Porosity'].values),3)))
print('The mean is ' + str(round(np.mean(df['Porosity'].values),3)))
```

Later in the course we will talk about weighted statistics. The NumPy command average allows for weighted averages, as in the case of statistical expectation and declustered statistics. For demonstration, let's make a weighting array and apply it.

```
nd = len(df)                                 # get the number of data values
wts = np.ones(nd)                            # make an array of nd length of 1's
print('The equal weighted average is ' + str(round(np.average(df['Porosity'].values,weights = wts),3)) + ', the same as the mean above.')
```

Let's get fancy: we will modify the weights to be 0.1 if the porosity is greater than 13% and retain 1.0 if the porosity is less than or equal to 13%. The result should be a lower weighted average.

```
porosity = df['Porosity'].values
wts[porosity > 0.13] *= 0.1
print('The weighted average is ' + str(round(np.average(df['Porosity'].values,weights = wts),3)) + ', lower than the equal weighted average above.')
```

I should note that SciPy's stats functions provide a handy summary statistics function. The output is a 'list' of values (actually it is a SciPy DescribeResult object). One can extract any one of them to use in a workflow as follows.

```
print(stats.describe(df['Porosity'].values))               # summary statistics
por_stats = stats.describe(df['Porosity'].values)          # store as an array
print('Porosity kurtosis is ' + str(round(por_stats[5],2)))  # extract a statistic
```

#### Plotting Distributions

Let's display some histograms. I reimplemented the hist function from GSLIB. See the parameters.

```
GSLIB.hist
```

Let's make a histogram for porosity.

```
pormin = 0.05; pormax = 0.25
GSLIB.hist(df['Porosity'].values,pormin,pormax,log=False,cumul = False,bins=10,weights = None, xlabel='Porosity (fraction)',title='Porosity Well Data',fig_name='hist_Porosity')
```

What's going on here? Looks quite bimodal. Let's explore with a couple of bin sizes to check.

```
plt.subplot(131)
GSLIB.hist_st(df['Porosity'].values,pormin,pormax,log=False,cumul = False,bins=5,weights = None,xlabel='Porosity (fraction)',title='Porosity Well Data')

plt.subplot(132)
GSLIB.hist_st(df['Porosity'].values,pormin,pormax,log=False,cumul = False,bins=10,weights = None,xlabel='Porosity (fraction)',title='Porosity Well Data')

plt.subplot(133)
GSLIB.hist_st(df['Porosity'].values,pormin,pormax,log=False,cumul = False,bins=20,weights = None,xlabel='Porosity (fraction)',title='Porosity Well Data')

plt.subplots_adjust(left=0.0, bottom=0.0, right=3.0, top=1.5, wspace=0.1, hspace=0.2)
plt.savefig('hist_Porosity_Multiple_bins.tif',dpi=600,bbox_inches="tight")
plt.show()
```

What about cumulative plots?
This method makes a cumulative histogram, but the axis remains in frequency. To be a true cumulative distribution function we would need to standardize the Y-axis to run from 0.0 to 1.0.

```
GSLIB.hist(df['Porosity'].values,pormin,pormax,log=False,cumul = True,bins=100,weights = None,xlabel='Porosity (fraction)',title='Porosity Well Data',fig_name='hist_Porosity_CDF')
```

I don't want to suggest that matplotlib is hard to use. The GSLIB visualizations provide convenience and once again use the same parameters as the GSLIB methods. In particular, the 'hist' function is pretty easy to use. Here's how we can make a pretty nice looking CDF from our data. Note that after the initial hist command we can add a variety of features, such as labels, to our plot as shown below.

```
plt.hist(df['Porosity'].values,density=True, cumulative=True, label='CDF', histtype='stepfilled', alpha=0.2, bins = 100, color='red', edgecolor = 'black', range=[0.0,0.25])
plt.xlabel('Porosity (fraction)')
plt.title('Porosity CDF')
plt.ylabel('Cumulative Probability')
plt.subplots_adjust(left=0.0, bottom=0.0, right=1.0, top=1.0, wspace=0.1, hspace=0.2)
plt.savefig('cdf_Porosity.tif',dpi=600,bbox_inches="tight")
plt.show()
```

Let's finish with the histograms of all our properties of interest as a finale!

```
permmin = 0.01; permmax = 3000;                          # user specified min and max
AImin = 1000.0; AImax = 8000
Fmin = 0; Fmax = 1

plt.subplot(221)
GSLIB.hist_st(df['Facies'].values,Fmin,Fmax,log=False,cumul = False,bins=20,weights = None,xlabel='Facies (1-sand, 0-shale)',title='Facies Well Data')

plt.subplot(222)
GSLIB.hist_st(df['Porosity'].values,pormin,pormax,log=False,cumul = False,bins=20,weights = None,xlabel='Porosity (fraction)',title='Porosity Well Data')

plt.subplot(223)
GSLIB.hist_st(df['Perm'].values,permmin,permmax,log=False,cumul = False,bins=20,weights = None,xlabel='Permeability (mD)',title='Permeability Well Data')

plt.subplot(224)
GSLIB.hist_st(df['AI'].values,AImin,AImax,log=False,cumul = False,bins=20,weights = None,xlabel='Acoustic Impedance (kg/m2s*10^6)',title='Acoustic Impedance Well Data')

plt.subplots_adjust(left=0.0, bottom=0.0, right=3.0, top=3.5, wspace=0.1, hspace=0.2)
plt.savefig('hist_Porosity_Multiple_bins.tif',dpi=600,bbox_inches="tight")
plt.show()
```

#### Comments

This was a basic demonstration of calculating univariate statistics and visualizing data distributions. Much more could be done; I have other demonstrations on the basics of working with DataFrames, ndarrays and many other workflows available at https://github.com/GeostatsGuy/PythonNumericalDemos and https://github.com/GeostatsGuy/GeostatsPy.

I hope this was helpful,

*Michael*

Michael Pyrcz, Ph.D., P.Eng. Associate Professor The Hildebrand Department of Petroleum and Geosystems Engineering, Bureau of Economic Geology, The Jackson School of Geosciences, The University of Texas at Austin

#### More Resources Available at: [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
# Image Classification with a Convolutional Neural Network

**Author:** [PaddlePaddle](https://github.com/PaddlePaddle) <br>
**Date:** 2021.12 <br>
**Abstract:** This tutorial demonstrates how to perform image classification with a convolutional neural network built in PaddlePaddle. It is a fairly simple example: a network with three convolutional layers is used to classify images from the [cifar10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset.

## 1. Environment setup

This tutorial is written for Paddle 2.2. If your environment runs a different version, please first follow the official [installation guide](https://www.paddlepaddle.org.cn/install/quick) to install Paddle 2.2.

```
import paddle
import paddle.nn.functional as F
from paddle.vision.transforms import ToTensor
import numpy as np
import matplotlib.pyplot as plt

print(paddle.__version__)
```

## 2. Load the dataset

This example uses Paddle's built-in API to download the dataset and prepare the data iterators for the subsequent training step. The cifar10 dataset consists of 60,000 color images of size 32 x 32: 50,000 images form the training set and the remaining 10,000 form the test set. The images fall into 10 categories, and the goal is to train a model that classifies them correctly.

```
transform = ToTensor()
cifar10_train = paddle.vision.datasets.Cifar10(mode='train', transform=transform)
cifar10_test = paddle.vision.datasets.Cifar10(mode='test', transform=transform)
```

## 3. Build the network

Next, use Paddle to define a classification network composed of three 2D convolutions ( ``Conv2D`` ), each followed by a ``relu`` activation, two 2D max-pooling layers ( ``MaxPool2D`` ), and two linear layers. It maps an image of shape (32, 32, 3) through the convolutional network to 10 outputs, corresponding to the 10 classes.

```
class MyNet(paddle.nn.Layer):
    def __init__(self, num_classes=1):
        super(MyNet, self).__init__()

        self.conv1 = paddle.nn.Conv2D(in_channels=3, out_channels=32, kernel_size=(3, 3))
        self.pool1 = paddle.nn.MaxPool2D(kernel_size=2, stride=2)

        self.conv2 = paddle.nn.Conv2D(in_channels=32, out_channels=64, kernel_size=(3,3))
        self.pool2 = paddle.nn.MaxPool2D(kernel_size=2, stride=2)

        self.conv3 = paddle.nn.Conv2D(in_channels=64, out_channels=64, kernel_size=(3,3))

        self.flatten = paddle.nn.Flatten()

        self.linear1 = paddle.nn.Linear(in_features=1024, out_features=64)
        self.linear2 = paddle.nn.Linear(in_features=64, out_features=num_classes)

    def forward(self, x):
        x = self.conv1(x)
        x = F.relu(x)
        x = self.pool1(x)

        x = self.conv2(x)
        x = F.relu(x)
        x = self.pool2(x)

        x = self.conv3(x)
        x = F.relu(x)

        x = self.flatten(x)
        x = self.linear1(x)
        x = F.relu(x)
        x = self.linear2(x)
        return x
```

## 4. Train & evaluate the model

Next, train the model in a loop, which will: <br>
- use the ``paddle.optimizer.Adam`` optimizer for optimization,
- use ``F.cross_entropy`` to compute the loss,
- use ``paddle.io.DataLoader`` to load the data and assemble batches.

```
epoch_num = 10
batch_size = 32
learning_rate = 0.001

val_acc_history = []
val_loss_history = []

def train(model):
    print('start training ... ')
    # turn into training mode
    model.train()

    opt = paddle.optimizer.Adam(learning_rate=learning_rate, parameters=model.parameters())

    train_loader = paddle.io.DataLoader(cifar10_train, shuffle=True, batch_size=batch_size)
    valid_loader = paddle.io.DataLoader(cifar10_test, batch_size=batch_size)

    for epoch in range(epoch_num):
        for batch_id, data in enumerate(train_loader()):
            x_data = data[0]
            y_data = paddle.to_tensor(data[1])
            y_data = paddle.unsqueeze(y_data, 1)

            logits = model(x_data)
            loss = F.cross_entropy(logits, y_data)

            if batch_id % 1000 == 0:
                print("epoch: {}, batch_id: {}, loss is: {}".format(epoch, batch_id, loss.numpy()))
            loss.backward()
            opt.step()
            opt.clear_grad()

        # evaluate model after one epoch
        model.eval()
        accuracies = []
        losses = []
        for batch_id, data in enumerate(valid_loader()):
            x_data = data[0]
            y_data = paddle.to_tensor(data[1])
            y_data = paddle.unsqueeze(y_data, 1)

            logits = model(x_data)
            loss = F.cross_entropy(logits, y_data)
            acc = paddle.metric.accuracy(logits, y_data)
            accuracies.append(acc.numpy())
            losses.append(loss.numpy())

        avg_acc, avg_loss = np.mean(accuracies), np.mean(losses)
        print("[validation] accuracy/loss: {}/{}".format(avg_acc, avg_loss))
        val_acc_history.append(avg_acc)
        val_loss_history.append(avg_loss)
        model.train()

model = MyNet(num_classes=10)
train(model)

plt.plot(val_acc_history, label = 'validation accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0.5, 0.8])
plt.legend(loc='lower right')
```

## The End

As the example above shows, a simple convolutional neural network built with PaddlePaddle reaches over 70% accuracy on the cifar10 dataset. You can also tune the network architecture and parameters to get better results.
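As a quick sanity check after training, the model can also be used directly for prediction on a few test images. The cell below is an addition to the original tutorial, sketched under the assumption that `model` and `cifar10_test` from the cells above are in scope; the label names follow the standard CIFAR-10 class ordering.

```
# Minimal sketch (assumption): predict the classes of a few test images with the trained model.
import paddle

cifar10_labels = ['airplane', 'automobile', 'bird', 'cat', 'deer',
                  'dog', 'frog', 'horse', 'ship', 'truck']

model.eval()
test_loader = paddle.io.DataLoader(cifar10_test, batch_size=8)
x_data, y_data = next(iter(test_loader()))

logits = model(x_data)
pred = paddle.argmax(logits, axis=1).numpy()
actual = y_data.numpy().flatten()

for p, a in zip(pred, actual):
    print('predicted: {:<10s} actual: {}'.format(cifar10_labels[int(p)], cifar10_labels[int(a)]))
```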
# Neural Networks as Dynamical Systems ![feedforward](Single_layer_ann.png) A neuronal model is made up of an input vector $\overrightarrow{X}=(x_1,x_2,\ldots, x_n)^T$, A vector of synaptic weights, $W_k=w_{kj}$, $j=1,2,\ldots,n$ a bias $b_k$ and an output $y_k$. The neuron itself is a nonlinear transfer function $\phi$. \begin{align} v_k &= W_k X + b_k\\ y_k &= \phi(v_k). \end{align} The function $\phi$, also known as the activation function, typically range from $-1$ to $+1$ and can take many forms such as the Heaviside function, piece-wise linear function, a sigmoid function, etc. # Training training a neural network corresponds to the act of adjusting the parameters described above to minimize the error of the network with respect to a target vector. ## Generalized Delta rule for a linear activation function, the weights are adjusted according to: $$w_{kj}(n+1)=w_{kj}(n)-\eta g_{kj},$$ where n is the number of iterations, $g_{kj}=-(t_k-y_k)$, and $\eta$ is a small positive constant called the learning rate. For a nonlinear activation function, the generalized delta rule is: $$w_{kj}(n+1)=w_{kj}(n)-\eta g_{kj},$$ where $$g_{kj}=(y_k-t_k)\frac{\partial \phi}{\partial v_k}x_j$$ ``` np.c_[np.ones(rows), X].shape import matplotlib.pyplot as plt import numpy as np data = np.loadtxt('housing.txt') rows, columns = data.shape columns = 4 # Using 4 columns from data in this case X = data[:, [5, 8, 12]] t = data[:, 13] ws1, ws2, ws3, ws4 = [], [], [], [] k = 0 # Scale the data to zero mean and unit variance xmean = X.mean(axis=0) xstd = X.std(axis=0) ones = np.ones((1,rows)) X = (X - xmean * ones.T) / (xstd * ones.T) X = np.c_[np.ones(rows), X] tmean = (max(t) + min(t)) / 2 tstd = (max(t) - min(t)) / 2 t = (t - tmean) / tstd w = 0.1 * np.random.random(columns) y1 = np.tanh(X.dot(w)) e1 = t - y1 mse = np.var(e1) num_epochs = 20 # number of iterations eta = 0.001 # Learning rate k = 1 erros = [e1.mean()] for m in range(num_epochs): for n in range(rows): yk = np.tanh(X[n, :].dot(w)) err = yk - t[n] g = X[n, :].T * ((1 - yk**2) * err) w = w - eta*g k += 1 ws1.append([k, np.array(w[0]).tolist()]) ws2.append([k, np.array(w[1]).tolist()]) ws3.append([k, np.array(w[2]).tolist()]) ws4.append([k, np.array(w[3]).tolist()]) # print(err.mean()) erros.append(err.mean()) # break # print(erros) ws1 = np.array(ws1) ws2 = np.array(ws2) ws3 = np.array(ws3) ws4 = np.array(ws4) plt.plot(ws1[:, 0], ws1[:, 1], 'k.', markersize=0.1, label='ws1') plt.plot(ws2[:, 0], ws2[:, 1], 'g.', markersize=0.1, label='ws2') plt.plot(ws3[:, 0], ws3[:, 1], 'b.', markersize=0.1, label='ws3') plt.plot(ws4[:, 0], ws4[:, 1], 'r.', markersize=0.1, label='ws4') # plt.plot(erros, label='erro') plt.xlabel('Number of iterations', fontsize=15) plt.ylabel('Weights', fontsize=15) plt.tick_params(labelsize=15) plt.grid() plt.legend() plt.show() ``` ## Backpropagation Backpropagation is the most common training algorithm in use. It's required to train neurons in a *hidden* layer. 
``` %%html <div style='position:relative; padding-bottom:calc(56.25% + 44px)'><iframe src='https://gfycat.com/ifr/AdolescentIdioticGoldeneye' frameborder='0' scrolling='no' width='100%' height='100%' style='position:absolute;top:0;left:0;' allowfullscreen></iframe></div><p> <a href="https://gfycat.com/adolescentidioticgoldeneye">via Gfycat</a></p> ``` ### Algorithm **Definitions:** Partial derivatives: $$\frac{\partial E_d}{\partial w_{ij}^k} = \delta_j^k o_i^{k-1}.\label{eq:1}$$ Final layer's error: $$\delta_1^m = g_o^{\prime}(a_1^m)\left(\hat{y_d}-y_d\right).\label{eq:2}$$ Hidden layer's error: $$\delta_j^k = g^{\prime}\big(a_j^k\big)\sum_{l=1}^{r^{k+1}}w_{jl}^{k+1}\delta_l^{k+1}.\label{eq:3}$$ Combining the partial derivatives for each input-output pair, $$\frac{\partial E(X, \theta)}{\partial w_{ij}^k} = \frac{1}{N}\sum_{d=1}^N\frac{\partial}{\partial w_{ij}^k}\left(\frac{1}{2}\left(\hat{y_d} - y_d\right)^{2}\right) = \frac{1}{N}\sum_{d=1}^N\frac{\partial E_d}{\partial w_{ij}^k}.\label{eq:4}$$ Weight update: $$\Delta w_{ij}^k = - \alpha \frac{\partial E(X, \theta)}{\partial w_{ij}^k}.\label{eq:5}$$ 1. Calculate the forward phase for each input-output pair $(\vec{x_d}, y_d)$ and store the results $\hat{y_d}$, $a_j^k$, and $o_j^k$ for each node $j$ in layer $k$ by proceeding from layer $0$, the input layer, to layer $m$, the output layer. 1. Calculate the backward phase for each input-output pair $(\vec{x_d}, y_d)$ and store the results $\frac{\partial E_d}{\partial w_{ij}^k}$ for each weight $w_{ij}^k$ connecting node $i$ in layer $k-1$ to node $j$ in layer $k$ by proceeding from layer $m$, the output layer, to layer $1$, the input layer. - Evaluate the error term for the final layer $\delta_1^m$ by using equation (\ref{eq:2}). - Backpropagate the error terms for the hidden layers $\delta_j^k$, working backwards from the final hidden layer $k = m-1$, by repeatedly using equation (\ref{eq:3}). - Evaluate the partial derivatives of the individual error $E_d$ with respect to $w_{ij}^k$ by using equation (\ref{eq:1}). 1. Combine the individual gradients for each input-output pair $\frac{\partial E_d}{\partial w_{ij}^k}$ to get the total gradient $\frac{\partial E(X, \theta)}{\partial w_{ij}^k}$ for the entire set of input-output pairs $X = \big\{(\vec{x_1}, y_1), \dots, (\vec{x_N}, y_N) \big\}$ by using equation (\ref{eq:4}) (a simple average of the individual gradients). 1. Update the weights according to the learning rate α\alpha and total gradient $\frac{\partial E(X, \theta)}{\partial w_{ij}^k}$ by using equation (\ref{eq:5}) (moving in the direction of the negative gradient). In the example below, the matrix $X$ is the set of inputs $\vec{x}$ and the matrix y is the set of outputs $y$. The number of nodes in the hidden layer can be customized by setting the value of the variable `num_hidden`. The learning rate $\alpha$ is controlled by the variable `alpha`. The number of iterations of gradient descent is controlled by the variable `num_iterations`. By changing these variables and comparing the output of the program to the target values $y$, one can see how these variables control how well backpropagation can learn the dataset $X$ and y. For example, more nodes in the hidden layer and more iterations of gradient descent will generally improve the fit to the training dataset. However, using too large or too small a learning rate can cause the model to diverge or converge too slowly, respectively. Adapted from: `Backpropagation. Brilliant.org`. 
Retrieved 08:32, August 31, 2021, from https://brilliant.org/wiki/backpropagation/ ``` import numpy as np # define the sigmoid function def sigmoid(x, derivative=False): if (derivative == True): return sigmoid(x,derivative=False) * (1 - sigmoid(x,derivative=False)) else: return 1 / (1 + np.exp(-x)) # choose a random seed for reproducible results np.random.seed(1) # learning rate alpha = .1 # number of nodes in the hidden layer num_hidden = 3 # inputs X = np.array([ [0, 0, 1], [0, 1, 1], [1, 0, 0], [1, 1, 0], [1, 0, 1], [1, 1, 1], ]) # outputs # x.T is the transpose of x, making this a column vector y = np.array([[0, 1, 0, 1, 1, 0]]).T # initialize weights randomly with mean 0 and range [-1, 1] # the +1 in the 1st dimension of the weight matrices is for the bias weight hidden_weights = 2*np.random.random((X.shape[1] + 1, num_hidden)) - 1 output_weights = 2*np.random.random((num_hidden + 1, y.shape[1])) - 1 # number of iterations of gradient descent num_iterations = 10000 outputs=[] # for each iteration of gradient descent for i in range(num_iterations): # forward phase # np.hstack((np.ones(...), X) adds a fixed input of 1 for the bias weight input_layer_outputs = np.hstack((np.ones((X.shape[0], 1)), X)) hidden_layer_outputs = np.hstack((np.ones((X.shape[0], 1)), sigmoid(np.dot(input_layer_outputs, hidden_weights)))) output_layer_outputs = np.dot(hidden_layer_outputs, output_weights) # backward phase # output layer error term output_error = output_layer_outputs - y # hidden layer error term # [:, 1:] removes the bias term from the backpropagation hidden_error = hidden_layer_outputs[:, 1:] * (1 - hidden_layer_outputs[:, 1:]) * np.dot(output_error, output_weights.T[:, 1:]) # partial derivatives hidden_pd = input_layer_outputs[:, :, np.newaxis] * hidden_error[: , np.newaxis, :] output_pd = hidden_layer_outputs[:, :, np.newaxis] * output_error[:, np.newaxis, :] # average for total gradients total_hidden_gradient = np.average(hidden_pd, axis=0) total_output_gradient = np.average(output_pd, axis=0) # update weights hidden_weights += - alpha * total_hidden_gradient output_weights += - alpha * total_output_gradient outputs.append(output_layer_outputs) # print the final outputs of the neural network on the inputs X print("Output After Training: \n{}".format(output_layer_outputs)) plt.plot(np.hstack(outputs).T); plt.legend([str(i) for i in y]); from scipy.integrate import odeint ``` ## Continuous Hopfield Model Hopfield equation where derived from Kirchoff's laws for electrical circuits. $$\frac{d\overrightarrow{x}(t)}{dt}=-x(t) +Wa(t) +b$$ where x(t) is a vector of neuron activation levels, $W$ is the weigth matrix, b are the biases, and $a(t)=\phi(x(t))$. 
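Before specializing to the two-neuron example below, here is a small sketch that integrates the general system $\frac{d\overrightarrow{x}}{dt} = -x + W\phi(x) + b$ numerically with `odeint`; the three-neuron weight matrix, the zero bias and the choice $\phi=\tanh$ are arbitrary assumptions made just for the illustration.

```
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def hopfield_rhs(x, t, W, b):
    # dx/dt = -x + W*phi(x) + b, with phi = tanh chosen arbitrarily here
    return -x + W @ np.tanh(x) + b

W = np.array([[0.0, 1.0, -0.5],
              [1.0, 0.0, 0.3],
              [-0.5, 0.3, 0.0]])  # symmetric example weights
b = np.zeros(3)
x0 = np.array([0.2, -0.1, 0.4])

ts = np.linspace(0, 20, 200)
xs = odeint(hopfield_rhs, x0, ts, args=(W, b))

plt.plot(ts, xs)
plt.xlabel('t')
plt.ylabel('x(t)')
plt.show()
```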
### Two neuron example \begin{align} \dot{x}&=-x +\frac{2}{\pi}tan^{-1}\left(\frac{\gamma\pi y}{2}\right)\\ \dot{y}&=-y +\frac{2}{\pi}tan^{-1}\left(\frac{\gamma\pi x}{2}\right) \end{align} let \begin{equation} \nonumber W=\begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}, b=\begin{bmatrix} 0\\ 0 \end{bmatrix}, a1=2/\pi tan^{-1} \end{equation} ``` def hop2(Y,t): x,y = Y gamma = 20.5 return -x+(2/np.pi)*np.arctan(gamma*np.pi*y/2), -y+(2/np.pi)*np.arctan(gamma*np.pi*x/2) res = odeint(hop2,[0.5,0.5], range(100)) plt.plot(res); ``` ## Discrete Hopfield Model ``` from sympy import Matrix, eye import random # The fundamental memories: x1 = [1, 1, 1, 1, 1] x2 = [1, -1, -1, 1, -1] x3 = [-1, 1, -1, 1, 1] X = Matrix([x1, x2, x3]) W = X.T * X / 5 - 3*eye(5) / 5 def hsgn(v, x): if v > 0: return 1 elif v == 0: return x else: return -1 L = [0, 1, 2, 3, 4] n = random.sample(L, len(L)) xinput = [1, -1, -1, 1, 1] xtest = xinput for j in range(4): M = W.row(n[j]) * Matrix(xtest) xtest[n[j]] = hsgn(M[0], xtest[n[j]]) if xtest == x1: print('Net has converged to X1') elif xtest == x2: print('Net has converged to X2') elif xtest == x3: print('Net has converged to X3') else: print('Iterate again: May have converged to spurious state') # Program 20c: Iteration of the minimal chaotic neuromodule. # See Figure 20.13. import matplotlib.pyplot as plt import numpy as np # Parameters b1, b2, w11, w21, w12, a = -2, 3, -20, -6, 6, 1 num_iterations = 10000 def neuromodule(X): x,y=X xn=b1+w11/(1+np.exp(-a*x))+w12/(1+np.exp(-a*y)) yn=b2+w21/(1+np.exp(-a*x)) return xn,yn X0 = [0, 2] X, Y = [], [] for i in range(num_iterations): xn, yn = neuromodule(X0) X, Y = X + [xn], Y + [yn] X0 = [xn, yn] fig, ax = plt.subplots(figsize=(8, 8)) ax.scatter(X, Y, color='blue', s=0.1) plt.xlabel('x', fontsize=15) plt.ylabel('y', fontsize=15) plt.tick_params(labelsize=15) plt.show() # Program 20d: Bifurcation diagram of the neuromodule. # See Figure 20.16. from matplotlib import pyplot as plt import numpy as np # Parameters b2, w11, w21, w12, a = 3, 7, 5, -4, 1 start, max = -5, 10 half_N = 1999 N = 2 * half_N + 1 N1 = 1 + half_N xs_up, xs_down = [], [] x, y = -10, -3 ns_up = np.arange(half_N) ns_down = np.arange(N1, N) # Ramp b1 up for n in ns_up: b1 = start + n*max / half_N x = b1 + w11 / (1 + np.exp(-a*x)) + w12 / (1 + np.exp(-a*y)) y = b2+w21 / (1 + np.exp(-a*x)) xn = x xs_up.append([n, xn]) xs_up = np.array(xs_up) # Ramp b1 down for n in ns_down: b1 = start + 2*max - n*max / half_N x = b1 + w11 / (1 + np.exp(-a*x)) + w12 / (1 + np.exp(-a*y)) y = b2 + w21 / (1 + np.exp(-a*x)) xn = x xs_down.append([N-n, xn]) xs_down = np.array(xs_down) fig, ax = plt.subplots() xtick_labels = np.linspace(start, max, 7) ax.set_xticks([(-start + x) / max * N1 for x in xtick_labels]) ax.set_xticklabels(['{:.1f}'.format(xtick) for xtick in xtick_labels]) plt.plot(xs_up[:, 0], xs_up[:, 1], 'r.', markersize=0.1) plt.plot(xs_down[:, 0], xs_down[:,1], 'b.', markersize=0.1) plt.xlabel(r'$b_1$', fontsize=15) plt.ylabel(r'$x_n$', fontsize=15) plt.tick_params(labelsize=15) plt.show() ```
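Returning to the discrete Hopfield example above, one way to see why the asynchronous updates settle onto a stored pattern is to track the energy $E(x) = -\tfrac{1}{2}x^T W x$, which is non-increasing under the update rule because $W$ is symmetric with zero diagonal. A small sketch reusing `W`, `hsgn` and the `random` module from the cells above (the probe pattern is the same `xinput` used earlier):

```
import numpy as np

def energy(W, x):
    # Hopfield energy E(x) = -0.5 * x^T W x (no bias terms here)
    xv = np.array(x, dtype=float)
    Wn = np.array(W.tolist(), dtype=float)  # sympy Matrix -> numpy array
    return -0.5 * xv @ Wn @ xv

state = [1, -1, -1, 1, 1]
print('initial energy:', energy(W, state))
for i in random.sample(range(5), 5):  # one full asynchronous sweep
    v = sum(float(W[i, j]) * state[j] for j in range(5))
    state[i] = hsgn(v, state[i])
    print('after updating neuron', i, ': energy =', energy(W, state))
```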
true
code
0.566618
null
null
null
null
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
from numpy import *
from IPython.html.widgets import *
import matplotlib.pyplot as plt
from IPython.core.display import clear_output
```

# PCA and EigenFaces Demo

In this demo, we will go through the basic concepts behind principal component analysis (PCA). We will then apply PCA to a face dataset to find the characteristic faces ("eigenfaces").

## What is PCA?

PCA is a **linear** transformation. Suppose we have an $N \times P$ data matrix ${\bf X}$, where $N$ is the number of samples and $P$ is the dimension of each sample. Then PCA will find you a $K \times P$ matrix ${\bf V}$ such that

$$ \underbrace{{\bf X}}_{N \times P} = \underbrace{{\bf S}}_{N \times K} \underbrace{{\bf V}}_{K \times P}. $$

Here, $K$ is the number of **principal components** with $K \le P$.

## But what does the V matrix do?

We can think of ${\bf V}$ in many different ways. The first way is to think of it as a de-correlating transformation: originally, each variable (or dimension) in ${\bf X}$ - there are $P$ of them - may be *correlated*. That is, if we take any two column vectors of ${\bf X}$, say ${\bf x}_0$ and ${\bf x}_1$, their covariance is not going to be zero. Let's try this on some randomly generated data:

```
from numpy.random import standard_normal  # Gaussian variables

N = 1000; P = 5
X = standard_normal((N, P))
W = X - X.mean(axis=0,keepdims=True)
print(dot(W[:,0], W[:,1]))
```

I'll skip ahead and use a pre-canned PCA routine from `scikit-learn` (but we'll dig into it a bit later!) Let's see what happens to the transformed variables, ${\bf S}$:

```
from sklearn.decomposition import PCA
S=PCA(whiten=True).fit_transform(X)
print(dot(S[:,0], S[:,1]))
```

Another way to look at ${\bf V}$ is to think of its rows as **projections**. Since the row vectors of ${\bf V}$ are *orthogonal* to each other, the projected data ${\bf S}$ lies in a new "coordinate system" specified by ${\bf V}$. Furthermore, the new coordinate system is sorted in decreasing order of *variance* in the original data. So, PCA can be thought of as calculating a new coordinate system whose basis vectors point toward the directions of largest variance first.

<img src="files/images/PCA/pca.png" style="margin:auto; width: 483px;"/>

Exercise 1. Let's get a feel for this in the following interactive example. Try moving the sliders around to generate the data, and see how the principal component vectors change. In this demo, `mu_x` and `mu_y` specify the center of the data, `sigma_x` and `sigma_y` the standard deviations, and everything is rotated by the angle `theta`. The two blue arrows are the rows of ${\bf V}$ that get calculated. When you click on `center`, the data is first centered (the mean is subtracted from the data). (Question: why is it necessary to "center" data when `mu_x` and `mu_y` are not zero?)
``` from numpy.random import standard_normal from matplotlib.patches import Ellipse from numpy.linalg import svd @interact def plot_2d_pca(mu_x=FloatSlider(min=-3.0, max=3.0, value=0), mu_y=FloatSlider(min=-3.0, max=3.0, value=0), sigma_x=FloatSlider(min=0.2, max=1.8, value=1.8), sigma_y=FloatSlider(min=0.2, max=1.8, value=0.3), theta=FloatSlider(min=0.0, max=pi, value=pi/6), center=False): mu=array([mu_x, mu_y]) sigma=array([sigma_x, sigma_y]) R=array([[cos(theta),-sin(theta)],[sin(theta),cos(theta)]]) X=dot(standard_normal((1000, 2)) * sigma[newaxis,:],R.T) + mu[newaxis,:] # Plot the points and the ellipse fig, ax = plt.subplots(figsize=(8,8)) ax.scatter(X[:200,0], X[:200,1], marker='.') ax.grid() M=8.0 ax.set_xlim([-M,M]) ax.set_ylim([-M,M]) e=Ellipse(xy=array([mu_x, mu_y]), width=sigma_x*3, height=sigma_y*3, angle=theta/pi*180, facecolor=[1.0,0,0], alpha=0.3) ax.add_artist(e) # Perform PCA and plot the vectors if center: X_mean=X.mean(axis=0,keepdims=True) else: X_mean=zeros((1,2)) # Doing PCA here... I'm using svd instead of scikit-learn PCA, we'll come back to this. U,s,V =svd(X-X_mean, full_matrices=False) for v in dot(diag(s/sqrt(X.shape[0])),V): # Each eigenvector ax.arrow(X_mean[0,0],X_mean[0,1],-v[0],-v[1], head_width=0.5, head_length=0.5, fc='b', ec='b') Ustd=U.std(axis=0) ax.set_title('std(U*s) [%f,%f]' % (Ustd[0]*s[0],Ustd[1]*s[1])) ``` Yet another use for ${\bf V}$ is to perform a **dimensionality reduction**. In many scenarios you encounter in image manipulation (as we'll see soon), we might want to have a more concise representation of the data ${\bf X}$. PCA with $K < P$ is one way to *reduce the dimesionality*: because PCA picks the directions with highest data variances, if a small number of top $K$ rows are sufficient to approximate (reconstruct) ${\bf X}$. ## How do we actually *perform* PCA? Well, we can use `from sklearn.decomposition import PCA`. But for learning, let's dig just one step into what it acutally does. One of the easiest way to perform PCA is to use the singular value decomposition (SVD). SVD decomposes a matrix ${\bf X}$ into a unitary matrix ${\bf U}$, rectangular diagonal matrix ${\bf \Sigma}$ (called "singular values"), and another unitary matrix ${\bf W}$ such that $$ {\bf X} = {\bf U} {\bf \Sigma} {\bf W}$$ So how can we use that to do PCA? Well, it turns out ${\bf \Sigma} {\bf W}$ of SVD, are exactly what we need to calculate the ${\bf V}$ matrix for the PCA, so we just have to run SVD and set ${\bf V} = {\bf \Sigma} {\bf W}$. (Note: `svd` of `numpy` returns only the diagonal elements of ${\bf \Sigma}$.) Exercise 2. Generate 1000 10-dimensional data and perform PCA this way. Plot the squares of the singular values. To reduce the the $P$-dimesional data ${\bf X}$ to a $K$-dimensional data, we just need to pick the top $K$ row vectors of ${\bf V}$ - let's call that ${\bf W}$ - then calcuate ${\bf T} = {\bf X} {\bf W}^\intercal$. ${\bf T}$ then has the dimension $N \times K$. If we want to reconstruct the data ${\bf T}$, we simply do ${\hat {\bf X}} = {\bf T} {\bf W}$ (and re-add the means for ${\bf X}$, if necessary). Exercise 3. Reduce the same data to 5 dimensions, then based on the projected data ${\bf T}$, reconstruct ${\bf X}$. What's the mean squared error of the reconstruction? # Performing PCA on a face dataset Now that we have a handle on the PCA method, let's try applying it to a dataset consisting of face data. We have two datasets in this demo, CAFE and POFA. 
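Before loading the faces, here is one possible solution sketch for Exercises 2 and 3 on synthetic data; the data-generating choices (10 dimensions with unequal scales, $K=5$) are arbitrary assumptions, and the same SVD recipe is what we apply to the face matrix next.

```
import numpy as np
import matplotlib.pyplot as plt
from numpy.linalg import svd

# Exercise 2: PCA via SVD on 1000 samples of 10-dimensional data
N, P, K = 1000, 10, 5
Xs = np.random.standard_normal((N, P)) * np.arange(1, P + 1)  # unequal variances
Xc = Xs - Xs.mean(axis=0, keepdims=True)                      # center first
U, s, V = svd(Xc, full_matrices=False)
plt.plot(s**2, 'o-')
plt.xlabel('component')
plt.ylabel('squared singular value')
plt.show()

# Exercise 3: keep the top K components, then reconstruct
W = V[:K]              # top-K principal directions (K x P)
T = Xc.dot(W.T)        # projected data (N x K)
X_hat = T.dot(W)       # reconstruction in the original space
print('reconstruction MSE with %d components: %.4f' % (K, ((Xc - X_hat)**2).mean()))
```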
The following code loads the dataset into the `dataset` variable: ``` import pickle dataset=pickle.load(open('data/cafe.pkl','r')) # or 'pofa.pkl' for POFA disp('dataset.images shape is %s' % str(dataset.images.shape)) disp('dataset.data shape is %s' % str(dataset.data.shape)) @interact def plot_face(image_id=(0, dataset.images.shape[0]-1)): plt.imshow(dataset.images[image_id],cmap='gray') plt.title('Image Id = %d, Gender = %d' % (dataset.target[image_id], dataset.gender[image_id])) plt.axis('off') ``` ## Preprocessing We'll center the data by subtracting the mean. The first axis (`axis=0`) is the `n_samples` dimension. ``` X=dataset.data.copy() # So that we won't mess up the data in the dataset\ X_mean=X.mean(axis=0,keepdims=True) # Mean for each dimension across sample (centering) X_std=X.std(axis=0,keepdims=True) X-=X_mean disp(all(abs(X.mean(axis=0))<1e-12)) # Are means for all dimensions very close to zero? ``` Then we perform SVD to calculate the projection matrix $V$. By default, `U,s,V=svd(...)` returns full matrices, which will return $n \times n$ matrix `U`, $n$-dimensional vector of singular values `s`, and $d \times d$ matrix `V`. But here, we don't really need $d \times d$ matrix `V`; with `full_matrices=False`, `svd` only returns $n \times d$ matrix for `V`. ``` from numpy.linalg import svd U,s,V=svd(X,compute_uv=True, full_matrices=False) disp(str(U.shape)) disp(str(s.shape)) disp(str(V.shape)) ``` We can also plot how much each eigenvector in `V` contributes to the overall variance by plotting `variance_ratio` = $\frac{s^2}{\sum s^2}$. (Notice that `s` is already in the decreasing order.) The `cumsum` (cumulative sum) of `variance_ratio` then shows how much of the variance is explained by components up to `n_components`. ``` variance_ratio=s**2/(s**2).sum() # Normalized so that they add to one. @interact def plot_variance_ratio(n_components=(1, len(variance_ratio))): n=n_components-1 fig, axs = plt.subplots(1, 2, figsize=(12, 5)) axs[0].plot(variance_ratio) axs[0].set_title('Explained Variance Ratio') axs[0].set_xlabel('n_components') axs[0].axvline(n, color='r', linestyle='--') axs[0].axhline(variance_ratio[n], color='r', linestyle='--') axs[1].plot(cumsum(variance_ratio)) axs[1].set_xlabel('n_components') axs[1].set_title('Cumulative Sum') captured=cumsum(variance_ratio)[n] axs[1].axvline(n, color='r', linestyle='--') axs[1].axhline(captured, color='r', linestyle='--') axs[1].annotate(s='%f%% with %d components' % (captured * 100, n_components), xy=(n, captured), xytext=(10, 0.5), arrowprops=dict(arrowstyle="->")) ``` Since we're dealing with face data, each row vector of ${\bf V}$ is called an "eigenface". The first "eigenface" is the one that explains a lot of variances in the data, whereas the last one explains the least. ``` image_shape=dataset.images.shape[1:] # (H x W) @interact def plot_eigenface(eigenface=(0, V.shape[0]-1)): v=V[eigenface]*X_std plt.imshow(v.reshape(image_shape), cmap='gray') plt.title('Eigenface %d (%f to %f)' % (eigenface, v.min(), v.max())) plt.axis('off') ``` Now let's try reconstructing faces with different number of principal components (PCs)! Now, the transformed `X` is reconstructed by multiplying by the sample standard deviations for each dimension and adding the sample mean. For this reason, even for zero components, you get a face-like image! The rightmost plot is the "relative" reconstruction error (image minus the reconstruction squared, divided by the data standard deviations). 
White is where the error is close to zero, and black is where the relative error is large (1 or more). As you increase the number of PCs, you should see the error mostly going to zero (white). ``` @interact def plot_reconstruction(image_id=(0,dataset.images.shape[0]-1), n_components=(0, V.shape[0]-1), pc1_multiplier=FloatSlider(min=-2,max=2, value=1)): # This is where we perform the projection and un-projection Vn=V[:n_components] M=ones(n_components) if n_components > 0: M[0]=pc1_multiplier X_hat=dot(multiply(dot(X[image_id], Vn.T), M), Vn) # Un-center I=X[image_id] + X_mean I_hat = X_hat + X_mean D=multiply(I-I_hat,I-I_hat) / multiply(X_std, X_std) # And plot fig, axs = plt.subplots(1, 3, figsize=(10, 10)) axs[0].imshow(I.reshape(image_shape), cmap='gray', vmin=0, vmax=1) axs[0].axis('off') axs[0].set_title('Original') axs[1].imshow(I_hat.reshape(image_shape), cmap='gray', vmin=0, vmax=1) axs[1].axis('off') axs[1].set_title('Reconstruction') axs[2].imshow(1-D.reshape(image_shape), cmap='gray', vmin=0, vmax=1) axs[2].axis('off') axs[2].set_title('Difference^2 (mean = %f)' % sqrt(D.mean())) plt.tight_layout() ``` ## Image morphing As a fun exercise, we'll morph two images by taking averages of the two images within the transformed data space. How is it different than simply morphing them in the pixel space? ``` def plot_morph(left=0, right=1, mix=0.5): # Projected images x_lft=dot(X[left], V.T) x_rgt=dot(X[right], V.T) # Mix x_avg = x_lft * (1.0-mix) + x_rgt * (mix) # Un-project X_hat = dot(x_avg[newaxis,:], V) I_hat = X_hat + X_mean # And plot fig, axs = plt.subplots(1, 3, figsize=(10, 10)) axs[0].imshow(dataset.images[left], cmap='gray', vmin=0, vmax=1) axs[0].axis('off') axs[0].set_title('Left') axs[1].imshow(I_hat.reshape(image_shape), cmap='gray', vmin=0, vmax=1) axs[1].axis('off') axs[1].set_title('Morphed (%.2f %% right)' % (mix * 100)) axs[2].imshow(dataset.images[right], cmap='gray', vmin=0, vmax=1) axs[2].axis('off') axs[2].set_title('Right') plt.tight_layout() interact(plot_morph, left=IntSlider(max=dataset.images.shape[0]-1), right=IntSlider(max=dataset.images.shape[0]-1,value=1), mix=FloatSlider(value=0.5, min=0, max=1.0)) ```
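On the question above: because *all* principal components are kept here, mixing in the projected space and un-projecting gives (up to numerical round-off) the same image as averaging the pixels directly; differences would only appear if the components were truncated. A quick check, assuming `X`, `V` and `X_mean` from the cells above (image indices 0 and 1 are arbitrary):

```
import numpy as np

left, right, mix = 0, 1, 0.5

# Morph in the projected (PCA) space, then un-project
x_mix = (1 - mix) * np.dot(X[left], V.T) + mix * np.dot(X[right], V.T)
I_pca = np.dot(x_mix, V) + X_mean

# Morph directly in pixel space
I_pix = (1 - mix) * (X[left] + X_mean) + mix * (X[right] + X_mean)

print('max abs difference:', np.abs(I_pca - I_pix).max())
```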
true
code
0.672412
null
null
null
null
# Modeling and Simulation in Python Project 1 example Copyright 2018 Allen Downey License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0) ``` # Configure Jupyter so figures appear in the notebook %matplotlib inline # Configure Jupyter to display the assigned value after an assignment %config InteractiveShell.ast_node_interactivity='last_expr_or_assign' # import functions from the modsim library from modsim import * from pandas import read_html filename = '../data/World_population_estimates.html' tables = read_html(filename, header=0, index_col=0, decimal='M') table2 = tables[2] table2.columns = ['census', 'prb', 'un', 'maddison', 'hyde', 'tanton', 'biraben', 'mj', 'thomlinson', 'durand', 'clark'] def plot_results(census, un, timeseries, title): """Plot the estimates and the model. census: TimeSeries of population estimates un: TimeSeries of population estimates timeseries: TimeSeries of simulation results title: string """ plot(census, ':', label='US Census') plot(un, '--', label='UN DESA') if len(timeseries): plot(timeseries, color='gray', label='model') decorate(xlabel='Year', ylabel='World population (billion)', title=title) un = table2.un / 1e9 census = table2.census / 1e9 empty = TimeSeries() plot_results(census, un, empty, 'World population estimates') ``` ### Why is world population growing linearly? Since 1970, world population has been growing approximately linearly, as shown in the previous figure. During this time, death and birth rates have decreased in most regions, but it is hard to imagine a mechanism that would cause them to decrease in a way that yields constant net growth year after year. So why is world population growing linearly? To explore this question, we will look for a model that reproduces linear growth, and identify the essential features that yield this behavior. Specifically, we'll add two new features to the model: 1. Age: The current model does not account for age; we will extend the model by including two age groups, young and old, roughly corresponding to people younger or older than 40. 2. The demographic transition: Birth rates have decreased substantially since 1970. We model this transition with an abrupt change in 1970 from an initial high level to a lower level. We'll use the 1950 world population from the US Census as an initial condition, assuming that half the population is young and half old. ``` half = get_first_value(census) / 2 init = State(young=half, old=half) ``` We'll use a `System` object to store the parameters of the model. ``` system = System(birth_rate1 = 1/18, birth_rate2 = 1/25, transition_year = 1970, mature_rate = 1/40, death_rate = 1/40, t_0 = 1950, t_end = 2016, init=init) ``` Here's an update function that computes the state of the system during the next year, given the current state and time. ``` def update_func1(state, t, system): if t <= system.transition_year: births = system.birth_rate1 * state.young else: births = system.birth_rate2 * state.young maturings = system.mature_rate * state.young deaths = system.death_rate * state.old young = state.young + births - maturings old = state.old + maturings - deaths return State(young=young, old=old) ``` We'll test the update function with the initial condition. 
``` state = update_func1(init, system.t_0, system) ``` And we can do one more update using the state we just computed: ``` state = update_func1(state, system.t_0, system) ``` The `run_simulation` function is similar to the one in the book; it returns a time series of total population. ``` def run_simulation(system, update_func): """Simulate the system using any update function. init: initial State object system: System object update_func: function that computes the population next year returns: TimeSeries """ results = TimeSeries() state = system.init results[system.t_0] = state.young + state.old for t in linrange(system.t_0, system.t_end): state = update_func(state, t, system) results[t+1] = state.young + state.old return results ``` Now we can run the simulation and plot the results: ``` results = run_simulation(system, update_func1); plot_results(census, un, results, 'World population estimates') ``` This figure shows the results from our model along with world population estimates from the United Nations Department of Economic and Social Affairs (UN DESA) and the US Census Bureau. We adjusted the parameters by hand to fit the data as well as possible. Overall, the model fits the data well. Nevertheless, between 1970 and 2016 there is clear curvature in the model that does not appear in the data, and in the most recent years it looks like the model is diverging from the data. In particular, the model would predict accelerating growth in the near future, which does not seem consistent with the trend in the data, and it contradicts predictions by experts. It seems that this model does not explain why world population is growing linear. We conclude that adding two age groups to the model is not sufficient to produce linear growth. Modeling the demographic transition with an abrupt change in birth rate is not sufficient either. In future work, we might explore whether a gradual change in birth rate would work better, possibly using a logistic function. We also might explore the behavior of the model with more than two age groups.
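As a starting point for that future work, here is a sketch of an update function with a gradual (logistic) change in the birth rate instead of an abrupt one. The 10-year transition width is an arbitrary assumption; everything else is reused from the `System` object and helper functions defined above.

```
from numpy import exp

def birth_rate_logistic(t, system, width=10):
    """Birth rate that falls smoothly from birth_rate1 to birth_rate2
    around the transition year, over roughly `width` years."""
    frac = 1 / (1 + exp(-(t - system.transition_year) / width))
    return system.birth_rate1 + frac * (system.birth_rate2 - system.birth_rate1)

def update_func2(state, t, system):
    """Like update_func1, but with a gradual demographic transition."""
    births = birth_rate_logistic(t, system) * state.young
    maturings = system.mature_rate * state.young
    deaths = system.death_rate * state.old
    young = state.young + births - maturings
    old = state.old + maturings - deaths
    return State(young=young, old=old)

results2 = run_simulation(system, update_func2)
plot_results(census, un, results2, 'World population, gradual transition')
```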
true
code
0.680348
null
null
null
null
# Task 2: Function approximation

```
from math import sin, exp

def func(x):
    return sin(x / 5.) * exp(x / 10.) + 5. * exp(-x / 2.)

import numpy as np
from scipy import linalg

arrCoordinates = np.arange(1., 15.1, 0.1)
arrFunction = np.array([func(coordinate) for coordinate in arrCoordinates])
```

## 1. Set up a system of linear equations for a first-degree polynomial that must coincide with the function at the points 1 and 15.

```
# first-degree polynomial
arrCoord1 = np.array([1, 15])
N = 2
arrA1 = np.empty((0, N))
for i in range(N):
    arrA1Line = list()
    for j in range(N):
        arrA1Line.append(arrCoord1[i] ** j)
    arrA1 = np.append(arrA1, np.array([arrA1Line]), axis = 0)
arrB1 = np.array([func(coordinate) for coordinate in arrCoord1])
print(arrCoord1)
print(arrA1)
print(arrB1)

arrX1 = linalg.solve(arrA1, arrB1)
print(arrX1)

def func1(x):
    return arrX1[0] + arrX1[1] * x

arrFunc1 = np.array([func1(coordinate) for coordinate in arrCoordinates])

%matplotlib inline
import matplotlib.pylab as plt

plt.plot(arrCoordinates, arrFunction, arrCoordinates, arrFunc1)
plt.show()
```

## 2. A second-degree polynomial at the points 1, 8 and 15.

```
# second-degree polynomial
arrCoord2 = np.array([1, 8, 15])
N = 3
arrA2 = np.empty((0, N))
for i in range(N):
    arrA2Line = list()
    for j in range(N):
        arrA2Line.append(arrCoord2[i] ** j)
    arrA2 = np.append(arrA2, np.array([arrA2Line]), axis = 0)
arrB2 = np.array([func(coordinate) for coordinate in arrCoord2])
print(arrCoord2)
print(arrA2)
print(arrB2)

arrX2 = linalg.solve(arrA2, arrB2)
print(arrX2)

def func2(x):
    return arrX2[0] + arrX2[1] * x + arrX2[2] * (x ** 2)

arrFunc2 = np.array([func2(coordinate) for coordinate in arrCoordinates])

plt.plot(arrCoordinates, arrFunction, arrCoordinates, arrFunc1, arrCoordinates, arrFunc2)
plt.show()
```

## 3. A third-degree polynomial at the points 1, 4, 10 and 15.

```
# third-degree polynomial
arrCoord3 = np.array([1, 4, 10, 15])
N = 4
arrA3 = np.empty((0, N))
for i in range(N):
    arrA3Line = list()
    for j in range(N):
        arrA3Line.append(arrCoord3[i] ** j)
    arrA3 = np.append(arrA3, np.array([arrA3Line]), axis = 0)
arrB3 = np.array([func(coordinate) for coordinate in arrCoord3])
print(arrCoord3)
print(arrA3)
print(arrB3)

arrX3 = linalg.solve(arrA3, arrB3)
print(arrX3)

def func3(x):
    return arrX3[0] + arrX3[1] * x + arrX3[2] * (x ** 2) + arrX3[3] * (x ** 3)

arrFunc3 = np.array([func3(coordinate) for coordinate in arrCoordinates])

plt.plot(arrCoordinates, arrFunction, arrCoordinates, arrFunc1, arrCoordinates, arrFunc2, arrCoordinates, arrFunc3)
plt.show()

with open('answer2.txt', 'w') as fileAnswer:
    for item in arrX3:
        fileAnswer.write(str(item) + ' ')
```
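As a cross-check (not part of the original assignment), the same degree-3 coefficients can be obtained with `numpy.polyfit`, which fits the polynomial directly through the four points; note that it returns coefficients from the highest power down, so the order is reversed relative to `arrX3`.

```
# Cross-check: a degree-3 fit through exactly 4 points reproduces the interpolant
coeffsPolyfit = np.polyfit(arrCoord3, arrB3, 3)
print(coeffsPolyfit[::-1])  # reversed to match the w0 + w1*x + w2*x^2 + w3*x^3 ordering
print(arrX3)
```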
true
code
0.287268
null
null
null
null
<h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc"><ul class="toc-item"><li><span><a href="#What-is-Probability-Theory?" data-toc-modified-id="What-is-Probability-Theory?-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>What is Probability Theory?</a></span><ul class="toc-item"><li><span><a href="#A-simple-(?)-question" data-toc-modified-id="A-simple-(?)-question-1.1"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>A simple (?) question</a></span></li><li><span><a href="#Simulating-coin-flips" data-toc-modified-id="Simulating-coin-flips-1.2"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>Simulating coin flips</a></span></li><li><span><a href="#Summary" data-toc-modified-id="Summary-1.3"><span class="toc-item-num">1.3&nbsp;&nbsp;</span>Summary</a></span></li></ul></li><li><span><a href="#What-is-probability-theory?" data-toc-modified-id="What-is-probability-theory?-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>What is probability theory?</a></span></li></ul></div> ``` %pylab inline ``` # What is Probability Theory? * Probability Theory is a **mathematical** framework for computing the probability of complex events. * Under the assumption that **we know the probabilities of the basic events.** * What is the precise meaning of <font color='red'>"probability"</font> and <font color='red'>"event"</font>? * We will give precise definitions later in the class. * For now, we'll rely on common sense. ## A simple (?) question We all know that if one flips a fair coin then the outcome is "heads" or "tails" with equal probabilities. What does that mean? It means that if we flip the coin $k$ times, for some large value of $k$, say $k=10,000$, Then the number of "heads" is **about** $\frac{k}{2}=\frac{10,000}{2} = 5,000$ What do we mean by **about** ?? ## Simulating coin flips We will use the pseudo random number generators in `numpy` to simulate the coin flips. instead of "Heads" and "Tails" we will use $x_i=1$ or $x_i=-1$ and consider the sum $S_{10000} = x_1+x_2+\cdots+x_{10000}$. If the number of heads is about 5,000 then $S_{10000}\approx 0$ We will vary the number of coin flips, which we denote by $k$ ``` # Generate the sum of k coin flips, repeat that n times def generate_counts(k=1000,n=100): X=2*(random.rand(k,n)>0.5)-1 # generate a kXn matrix of +-1 random numbers S=sum(X,axis=0) return S k=1000 n=1000 counts=generate_counts(k=k,n=n) figure(figsize=[10,4]) hist(counts); xlim([-k,k]) xlabel("sum") ylabel("count") title("Histogram of coin flip sum when flipping a fair coin %d times"%k) grid() ``` Note that the sum $S_{1000}$ is not **exactly** $0$, it is only **close to** $0$. 
Using **probability theory** we can calculate **how small** is $\big|S_k\big|$ In a later lesson we will show that the probability that $$\big| S_k \big| \geq 4\sqrt{k}$$ is smaller than $2 \times 10^{-8}$ which is $0.000002\%$ Let's use our simulation to demonstrate that this is the case: ``` from math import sqrt figure(figsize=[13,3.5]) for j in range(2,5): k=10**j counts=generate_counts(k=k,n=100) subplot(130+j-1) hist(counts,bins=10); d=4*sqrt(k) plot([-d,-d],[0,30],'r') plot([+d,+d],[0,30],'r') grid() title('%d flips, bound=+-%6.1f'%(k,d)) figure(figsize=[13,3.5]) for j in range(2,5): k=10**j counts=generate_counts(k=k,n=100) subplot(130+j-1) hist(counts,bins=10); xlim([-k,k]) d=4*sqrt(k) plot([-d,-d],[0,30],'r') plot([+d,+d],[0,30],'r') grid() title('%d flips, bound=+-%6.1f'%(k,d)) ``` ## Summary We did some experiments summing $k$ random numbers: $S_k=x_1+x_2+\cdots+x_k$ $x_i=-1$ with probability $1/2$, $x_i=+1$ with probability $1/2$ Our experiments show that the sum $S_k$ is (almost) always in the range $\big[-4\sqrt{k},+4\sqrt{k}\big]$ $$\mbox{ If } k \to \infty,\;\;\; \frac{4 \sqrt{k}}{k} = \frac{4}{\sqrt{k}} \to 0$$ $$ \mbox{Therefor if }\;\;k \to \infty, \frac{S_k}{k} \to 0$$ # What is probability theory? It is the math involved in **proving** (a precise version of) the statements above. In most cases, we can **approximate** probabilities using simulations (Monte-Carlo simulations) Calculating the probabilities is better because: * It provides a precise answer * It is much faster than Monte Carlo simulations. ** <font size=4 > Up Next: What is Statistics ?</font> **
true
code
0.426501
null
null
null
null
``` # Data manipulation import pandas as pd import numpy as np # Data Viz import seaborn as sns import matplotlib.pyplot as plt %matplotlib inline # More Data Preprocessing & Machine Learning from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler, normalize import warnings warnings.filterwarnings('ignore') df = pd.read_csv('properties.csv') ``` ## Initial Data Preprocessing ### Sanity Check ``` # TO DO: check uniqueness print(df.id.nunique() == len(df)) print(df.AIN.nunique() == len(df)) # TO DO: check one level categorical features df.isTaxableParcel.value_counts() df = df.drop(['isTaxableParcel'], axis = 1) # TO DO: remove features that are improper for modelling df = df.drop(['id','AIN','AssessorID','SpecificUseType','CENTER_LAT','CENTER_LON','City','RollYear'], axis = 1) df = df[(df.EffectiveYearBuilt != 0) | (df.LandBaseYear != 0)] df = df[df.EffectiveYearBuilt >= df.YearBuilt.min()] df = df[df.ImpBaseYear != 0] ``` ### Feature Creation ``` # TO DO: create proportion-based features # Total value = LandValue + ImprovementValue + FixtureValue + PersonalPropertyValue df['LandValue_percent'] = df['LandValue']/df['TotalValue'] df['PersonalPropertyValue_percent'] = df['PersonalPropertyValue']/df['TotalValue'] df['TotalExemption_percent'] = df['TotalExemption']/df['TotalValue'] # Other proportion-based features df['ZHVI_sf'] = df['ZHVI']/df['SQFTmain'] df['Bathroom_per_bedroom'] = df['Bathrooms']/df['Bedrooms'] df['Price_per_unit'] = df['TotalValue']/df['Units'] # TO DO: aviod multicolinearity df = df.drop(['LandValue','ImprovementValue','PersonalPropertyValue','TotalExemption','SQFTmain','Bathrooms'], axis = 1) # TO DO: create difference-based features df['years_until_effective'] = df['EffectiveYearBuilt'] - df['YearBuilt'] df = df.drop(['EffectiveYearBuilt'], axis = 1) df['BaseYear_difference'] = df['ImpBaseYear'] - df['LandBaseYear'] df = df.drop(['ImpBaseYear'], axis = 1) # TO DO: aviod multicolinearity # Total exemption value = HomeownersExemption + RealEstateExemption + FixtureExemption + PersonalPropertyExemption df = df.drop(['FixtureExemption','PersonalPropertyExemption'], axis = 1) # TotalLandImpValue = LandValue + ImprovementValue df = df.drop(['Cluster','TotalLandImpValue','RecordingDate','netTaxableValue'], axis = 1) # TO DO: create a identifier for EDA df['school_district'] = 'ucla' df['school_district'][df.distance_to_usc<=3] = 'usc' ``` ### Missing Values Management ``` def plot_NA(dataframe, benchmark, bins): ## Assessing Missing Values per Column na_info = dataframe.isnull().sum()/len(dataframe)*100 na_info = na_info.sort_values(0, ascending=True).reset_index() na_info.columns = ['feature','% of NA'] na_info['higher than benchmark'] = (na_info['% of NA']>= benchmark) fig, (ax1, ax2) = plt.subplots(1,2, figsize=(16,5)) colours = {True: "red", False: "skyblue"} na_info.plot('feature','% of NA',kind='barh', color=na_info['higher than benchmark'].replace(colours),ax=ax1) ax1.vlines(x=benchmark, ymin=0, ymax=100, color='red', linestyles='dashed') ax1.set_title('Distribution of % of Missing Values per Feature') ax1.set_ylabel('feature') ax1.set_xlabel('% of NA') ax1.get_legend().remove() print('NAs per Feature:') print(na_info,end='\n\n') ## Assessing Missing Values per Row dataframe['NA_this_row'] = dataframe.isna().sum(axis=1) # the number of NA values per row ax2.hist(dataframe['NA_this_row'], bins=bins) ax2.set_title('Distribution of Amount of Missing Values per Row') ax2.set_xlabel('Missing Values per Row') 
ax2.set_ylabel('Number of Records') temp = dataframe['NA_this_row'].value_counts(normalize=True) print('NAs per Row:') print('count percent') print(temp) ## TO DO: Assess missing values by columns and rows plot_NA(df, benchmark=5, bins=5) ## TO DO: fill missing values (NAs or inf) with -1 df['Bathroom_per_bedroom'].fillna(value=-1, inplace=True) df['Bathroom_per_bedroom'][df.Bathroom_per_bedroom == np.inf] = -1 df['Price_per_unit'][df.Price_per_unit == np.inf] = -1 ``` Obviously, the NAs in *Bathroom_per_bedroom* and *Price_per_unit* due to zero denominator. Rather than simply remove those properties, here we mark those missing values specially as -1 to avoid introducing any bias or losing information. ``` df = df.drop('NA_this_row', axis=1) ``` ### Handle Outliers ``` ## TO DO: detect outliers through extreme values df.describe().T # TO DO: identify outliers as data point that falls outside of 3 standard deviations def replace_outliers_z_score(dataframe, column, Z=3): from scipy.stats import zscore df = dataframe.copy(deep=True) df.dropna(inplace=True, subset=[column]) # Calculate mean without outliers df["zscore"] = zscore(df[column]) mean_ = df[(df["zscore"] > -Z) & (df["zscore"] < Z)][column].mean() # Replace with mean values no_outliers = dataframe[column].isnull().sum() dataframe[column] = dataframe[column].fillna(mean_) dataframe["zscore"] = zscore(dataframe[column]) dataframe.loc[(dataframe["zscore"] < -Z) | (dataframe["zscore"] > Z),column] = mean_ # Print message print("Replaced:", no_outliers, " outliers in ", column) return dataframe.drop(columns="zscore") ## TO DO: replace potential outliers with mean cat_cols = df.select_dtypes(['int','float']).columns.values i = 1 for col in cat_cols: df = replace_outliers_z_score(df,col) ``` ## EDA ``` ## TO DO: correct feature types df.zip2 = df.zip2.astype('int') df.zip2 = df.zip2.astype('object') df.YearBuilt = df.YearBuilt.astype('object') df.AdministrativeRegion = df.AdministrativeRegion.astype('object') df.PropertyType = df.PropertyType.astype('object') # TO DO: Calculate correlation of features for UCLA district correlation = df[df.school_district=='ucla'].corr() mask = np.zeros_like(correlation) mask[np.triu_indices_from(mask)] = True # Plot correlation plt.figure(figsize=(18,18)) sns.heatmap(correlation, mask=mask, cmap="RdBu_r",xticklabels=correlation.columns.values, yticklabels=correlation.columns.values, annot = True, annot_kws={'size':10}) # Axis ticks size plt.xticks(fontsize=10) plt.yticks(fontsize=10) plt.show() ``` All of the correlation coefficients are less than 0.95, no significant multicolinearity. ``` # TO DO: Calculate correlation of features for USC district correlation = df[df.school_district=='usc'].corr() mask = np.zeros_like(correlation) mask[np.triu_indices_from(mask)] = True # Plot correlation plt.figure(figsize=(18,18)) sns.heatmap(correlation, mask=mask, cmap="RdBu_r",xticklabels=correlation.columns.values, yticklabels=correlation.columns.values, annot = True, annot_kws={'size':10}) # Axis ticks size plt.xticks(fontsize=10) plt.yticks(fontsize=10) plt.show() ``` All of the correlation coefficients are less than 0.95, no significant multicolinearity. 
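Rather than eyeballing the heatmaps, the 0.95 claim can be checked programmatically by listing any feature pairs above the threshold; a small sketch reusing the `correlation` matrix computed in the previous cell (the USC one; rerun it with the UCLA subset to check that district as well):

```
# TO DO: list feature pairs with |correlation| above the multicollinearity threshold
threshold = 0.95
high_pairs = []
cols = correlation.columns
for i in range(len(cols)):
    for j in range(i + 1, len(cols)):
        if abs(correlation.iloc[i, j]) > threshold:
            high_pairs.append((cols[i], cols[j], round(correlation.iloc[i, j], 3)))
print(high_pairs if high_pairs else 'No pairs above the threshold')
```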
``` # TO DO: remove geographic identifier for EDA df_clean = df.drop('PropertyLocation',axis=1) # TO DO: plot categorical features plt.figure(figsize=(12, 12)) cat_cols = df_clean.select_dtypes(['object']).columns.values i = 1 for col in cat_cols: plt.subplot(2, 2, i) sns.countplot(y=df_clean[col],hue='school_district',data=df_clean) plt.xticks() plt.tick_params(labelbottom=True) i += 1 plt.tight_layout() # TO DO: plot numerical features plt.figure(figsize=(15, 13)) cat_cols = df_clean.select_dtypes(['int64']).columns.values i = 1 for col in cat_cols: plt.subplot(4, 3, i) sns.histplot(x=df_clean[col],kde=True, hue='school_district',data=df_clean) #res = stats.probplot(df_clean[col], plot=plt) #sns.boxplot(y=df_clean['price_sf'],x=col, data=df_clean ,color='green') plt.xticks(rotation=0) plt.tick_params() i += 1 plt.tight_layout() ``` YearBuilt shows distinct pattern, properties in UCLA district are younger than those in USC district overall. UCLA district shows higher ZHVI. ``` # TO DO: plot numerical float features plt.figure(figsize=(10, 10)) cat_cols = df_clean.select_dtypes(['float']).columns.values i = 1 for col in cat_cols: #plt.subplot(4, 4, i) sns.displot(x=df_clean[col], hue='school_district',data=df_clean) #res = stats.probplot(df_clean[col], plot=plt) #sns.boxplot(y=df_clean['price_sf'],x=col, data=df_clean ,color='green') plt.xticks(rotation=0) plt.tick_params() i += 1 plt.tight_layout() df_clean.info() ``` ## Feature Engineering ### One-hot Encoding ``` df_clean.head().T # TO DO: drop some multivariate features for computational simplicity df_clean = df_clean.drop(['LandBaseYear', 'RecordingDateYear'],axis=1) ## TO DO: separate the one-hot dataset for two schools df_ucla = df_clean[df_clean.school_district == 'ucla'] df_ucla = df_ucla.drop('school_district',axis=1) df_usc = df_clean[df_clean.school_district == 'usc'] df_usc = df_usc.drop('school_district',axis=1) # TO DO: Create dummy variables for ucla cat1 = pd.get_dummies(df_ucla.zip2, prefix = "zip") cat2 = pd.get_dummies(df_ucla.AdministrativeRegion, prefix = "Region") cat3 = pd.get_dummies(df_ucla.PropertyType, prefix = "PropertyType") cat4 = pd.get_dummies(df_ucla.YearBuilt, prefix = "YearBuilt") #cat5 = pd.get_dummies(df_clean.LandBaseYear, prefix = "LandBaseYear") #cat6 = pd.get_dummies(df_clean.RecordingDateYear, prefix = "RecordingDateYear") # TO DO: Merge dummy variables to main dataframe ucla_hot = pd.concat([df_ucla,cat1], axis=1) ucla_hot = pd.concat([ucla_hot,cat2], axis=1) ucla_hot = pd.concat([ucla_hot,cat3], axis=1) ucla_hot = pd.concat([ucla_hot,cat4], axis=1) #df_hot = pd.concat([df_hot,cat5], axis=1) #df_hot = pd.concat([df_hot,cat6], axis=1) # TO DO: Correct the data type for cat in [cat1,cat2, cat3, cat4]: cat_cols = cat.columns for col in cat_cols: ucla_hot[col] = ucla_hot[col].astype("category") # TO DO: drop original features ucla_hot = ucla_hot.drop(["zip2","PropertyType","YearBuilt","zip_90057"], axis=1) ucla_hot.head().T # TO DO: Create dummy variables for ucla cat1 = pd.get_dummies(df_usc.zip2, prefix = "zip") cat2 = pd.get_dummies(df_usc.AdministrativeRegion, prefix = "Region") cat3 = pd.get_dummies(df_usc.PropertyType, prefix = "PropertyType") cat4 = pd.get_dummies(df_usc.YearBuilt, prefix = "YearBuilt") #cat5 = pd.get_dummies(df_clean.LandBaseYear, prefix = "LandBaseYear") #cat6 = pd.get_dummies(df_clean.RecordingDateYear, prefix = "RecordingDateYear") # TO DO: Merge dummy variables to main dataframe usc_hot = pd.concat([df_usc,cat1], axis=1) usc_hot = pd.concat([usc_hot,cat2], axis=1) usc_hot 
= pd.concat([usc_hot,cat3], axis=1) usc_hot = pd.concat([usc_hot,cat4], axis=1) #df_hot = pd.concat([df_hot,cat5], axis=1) #df_hot = pd.concat([df_hot,cat6], axis=1) # TO DO: Correct the data type for cat in [cat1,cat2, cat3, cat4]: cat_cols = cat.columns for col in cat_cols: usc_hot[col] = usc_hot[col].astype("category") # TO DO: drop original features usc_hot = usc_hot.drop(["zip2","PropertyType","YearBuilt","zip_90023", "zip_90063"], axis=1) usc_hot.head().T ``` ### Data Scaling for ucla ``` # TO DO: split the dataset into the Training set and Test set from sklearn.model_selection import train_test_split X_ucla = ucla_hot.drop(['price_sf'], axis = 1) y_ucla = ucla_hot['price_sf'] X_train, X_test, y_train, y_test = train_test_split(X_ucla, y_ucla, test_size = 0.2, random_state = 99) # TO DO: separate numerical and categorical features X_train_num = X_train.select_dtypes(['int','float']) X_train_cat = X_train.select_dtypes(['category']) X_test_num = X_test.select_dtypes(['int','float']) X_test_cat = X_test.select_dtypes(['category']) # TO DO: standardize the data for UCLA from sklearn.preprocessing import StandardScaler scaler_ucla = StandardScaler() X_train_num_scaled = pd.DataFrame(scaler_ucla.fit_transform(X_train_num)) X_train_num_scaled.columns = X_train_num.columns X_train_num_scaled.index = X_train_num.index X_test_num_scaled = pd.DataFrame(scaler_ucla.transform(X_test_num)) X_test_num_scaled.columns = X_test_num.columns X_test_num_scaled.index = X_test_num.index # TO DO: combine the scaled the part with categorical features X_train_ucla_scaled = pd.concat([X_train_num_scaled,X_train_cat.sort_index()], axis=1) X_test_ucla_scaled = pd.concat([X_test_num_scaled,X_test_cat.sort_index()], axis=1) # TO DO: scale the target scaler_y_ucla = StandardScaler() y_train_ucla_scaled = scaler_y_ucla.fit_transform(y_train.values.reshape(-1, 1)) y_test_ucla_scaled = scaler_y_ucla.transform(y_test.values.reshape(-1, 1)) ``` for usc ``` # TO DO: split the dataset into the Training set and Test set from sklearn.model_selection import train_test_split X_usc = usc_hot.drop(['price_sf'], axis = 1) y_usc = usc_hot['price_sf'] X_train, X_test, y_train, y_test = train_test_split(X_usc, y_usc, test_size = 0.2, random_state = 99) # TO DO: separate numerical and categorical features X_train_num = X_train.select_dtypes(['int','float']) X_train_cat = X_train.select_dtypes(['category']) X_test_num = X_test.select_dtypes(['int','float']) X_test_cat = X_test.select_dtypes(['category']) # TO DO: standardize the data for USC from sklearn.preprocessing import StandardScaler scaler_usc = StandardScaler() X_train_num_scaled = pd.DataFrame(scaler_usc.fit_transform(X_train_num)) X_train_num_scaled.columns = X_train_num.columns X_train_num_scaled.index = X_train_num.index X_test_num_scaled = pd.DataFrame(scaler_usc.transform(X_test_num)) X_test_num_scaled.columns = X_test_num.columns X_test_num_scaled.index = X_test_num.index # TO DO: combine the scaled the part with categorical features X_train_usc_scaled = pd.concat([X_train_num_scaled,X_train_cat.sort_index()], axis=1) X_test_usc_scaled = pd.concat([X_test_num_scaled,X_test_cat.sort_index()], axis=1) # TO DO: scale the target scaler_y_usc = StandardScaler() y_train_usc_scaled = scaler_y_usc.fit_transform(y_train.values.reshape(-1, 1)) y_test_usc_scaled = scaler_y_usc.transform(y_test.values.reshape(-1, 1)) ``` ### Output for Modelling ``` X_train_ucla_scaled.to_csv('X_train_ucla.csv') X_test_ucla_scaled.to_csv('X_test_ucla.csv') 
pd.DataFrame(y_train_ucla_scaled).to_csv('y_train_ucla.csv') pd.DataFrame(y_test_ucla_scaled).to_csv('y_test_ucla.csv') X_train_usc_scaled.to_csv('X_train_usc.csv') X_test_usc_scaled.to_csv('X_test_usc.csv') pd.DataFrame(y_train_usc_scaled).to_csv('y_train_usc.csv') pd.DataFrame(y_test_usc_scaled).to_csv('y_test_usc.csv') ``` ## Results Analysis (after modelling) for ucla ``` # TO DO: import predicted results of our best model y_pred_ucla = pd.read_csv('y_pred_svr_ucla.csv') y_pred_ucla = y_pred_ucla.drop('Unnamed: 0',axis=1) y_pred_usc = pd.read_csv('y_pred_svr_usc.csv') y_pred_usc = y_pred_usc.drop('Unnamed: 0',axis=1) # TO DO: transfrom back to price_sf y_pred_ucla = pd.DataFrame(scaler_y_ucla.inverse_transform(y_pred_ucla)) y_pred_ucla.index = X_test_ucla_scaled.index y_pred_usc = pd.DataFrame(scaler_y_usc.inverse_transform(y_pred_usc)) y_pred_usc.index = X_test_usc_scaled.index # TO DO: obtain the full properties info fullset = df[['PropertyLocation','price_sf']] # TO DO: obtain the expected value of the SE model for ucla scaler_y_ucla.inverse_transform(np.zeros((9541, 1)))[0] # TO DO: combine ucla price outputs with property Location ucla_result = pd.merge(fullset,y_pred_ucla,left_index=True,right_index=True) ucla_result = ucla_result.rename(columns={0:'price_hat'}) # TO DO: naive adjustment ucla_result['price_hat_adjusted'] = (ucla_result['price_hat'] - ucla_result['price_sf'] )*3 # naive adjustment ucla_opp = ucla_result[ucla_result.price_hat_adjusted >= 553].sort_values('price_hat_adjusted') ucla_opp # TO DO: calculate opportunities density len(ucla_opp)/len(ucla_result) ``` for USC ``` # TO DO: obtain the expected value of the SE model for usc scaler_y_usc.inverse_transform(np.zeros((9541, 1)))[0] # TO DO: combine usc price outputs with property Location usc_result = pd.merge(fullset,y_pred_usc,left_index=True,right_index=True) usc_result = usc_result.rename(columns={0:'price_hat'}) # TO DO: naive adjustment usc_result['price_hat_adjusted'] = (usc_result['price_hat'] - usc_result['price_sf'] )*3 usc_opp = usc_result[usc_result.price_hat_adjusted>242].sort_values('price_hat_adjusted') usc_opp # TO DO: calculate opportunities density len(usc_opp)/len(usc_result) # TO DO: opportunities exploration # ucla_result[ucla_result.price_hat_adjusted>700][ucla_result['PropertyLocation'].str.find('SANTA ') != -1] ```
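Before screening for opportunities, it can be worth quantifying how far the predictions are from the observed prices on the held-out data. A sketch of test-set RMSE and MAE for the UCLA model, assuming `y_pred_ucla` (already inverse-transformed above) and `y_test_ucla_scaled` with its scaler are still in memory:

```
# TO DO: quantify test error for the UCLA model (in price-per-sqft units)
y_true_ucla = scaler_y_ucla.inverse_transform(y_test_ucla_scaled).flatten()
y_hat_ucla = y_pred_ucla.values.flatten()
rmse = np.sqrt(np.mean((y_true_ucla - y_hat_ucla) ** 2))
mae = np.mean(np.abs(y_true_ucla - y_hat_ucla))
print('UCLA test RMSE: {:.2f}, MAE: {:.2f}'.format(rmse, mae))
```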
true
code
0.527803
null
null
null
null
# Deep Learning on IBM Stocks

## The Data

We chose to analyse IBM's historical stock data, which includes about 13K records from the last 54 years (from 1962 to the present).

Each record contains:

- Open price: the price at which the stock opened on that day.
- Close price: the price at which the stock closed on that day.
- High price: the maximum price the stock reached within the day.
- Low price: the minimum price the stock reached within the day.
- Volume: the number of shares traded within the day.
- [Adjusted close price](https://marubozu.blogspot.co.il/2006/09/how-yahoo-calculates-adjusted-closing.html).
- Date: day, month and year.

The main challenges of this project are:

- The limited amount of data for a market that is affected by a wide variety of factors, in particular factors that we don't see in the raw data, such as announcements of new technology.
- The historical behavior of a stock in a particular situation doesn't necessarily lead to the same outcome in the exact same situation a few years later.
- We wondered whether it is possible to actually find features that give us better accuracy than random guessing.

This project is interesting because deep learning has solved tasks that were considered difficult even with fairly basic features. And of course, when it comes to stocks, a good prediction = profit.

```
from pandas_datareader.data import DataReader
from datetime import datetime
import os
import pandas as pd
import random
import numpy as np
from keras.models import Sequential
from keras.layers.recurrent import LSTM,GRU,SimpleRNN
from keras.layers.core import Dense, Activation, Dropout
from sklearn.cross_validation import train_test_split
from sklearn.ensemble import RandomForestClassifier
import warnings
warnings.filterwarnings('ignore')
from keras.utils.np_utils import to_categorical
```

#### Load or Download the data

```
def get_data_if_not_exists(force=False):
    if os.path.exists("./data/ibm.csv") and not force:
        return pd.read_csv("./data/ibm.csv")
    else:
        if not os.path.exists("./data"):
            os.mkdir("data")
        ibm_data = DataReader('IBM', 'yahoo', datetime(1950, 1, 1), datetime.today())
        pd.DataFrame(ibm_data).to_csv("./data/ibm.csv")
        return pd.DataFrame(ibm_data).reset_index()
```

## Data Exploration

```
print("loading the data")
data = get_data_if_not_exists(force=True)
print("done loading the data")
print("data columns names: %s" % data.columns.values)
print(data.shape)
data.head()
```

#### Data exploration highlights:

- The data contains 13,733 records.
- Each record represents one specific day.
- Each record contains: Date, Open, High, Low, Close, Volume and Adj Close.

# Creating a sequence of close prices from the stock data

Our motivation was to try to imitate a stock similar to the IBM stock.

### Feature extraction:

We'll use only the closing price of the stock as a feature, and the generated sequence will consist of closing prices as well.
``` def extract_features(items): return [[item[4]] for item in items] def extract_expected_result(item): return [item[4]] MAX_WINDOW = 5 def train_test_split(data, test_size=0.1): """ This just splits data to training and testing parts """ ntrn = int(round(len(data) * (1 - test_size))) X, y = generate_input_and_outputs(data,extract_features,extract_expected_result) X_train,y_train,X_test, y_test = X[:ntrn],y[:ntrn],X[ntrn:],y[ntrn:] return X_train, y_train, X_test, y_test def generate_input_and_outputs(data,extractFeaturesFunc=extract_features,expectedResultFunc=extract_expected_result): step = 1 inputs = [] outputs = [] for i in range(0, len(data) - MAX_WINDOW, step): inputs.append(extractFeaturesFunc(data.iloc[i:i + MAX_WINDOW].as_matrix())) outputs.append(expectedResultFunc(data.iloc[i + MAX_WINDOW].as_matrix())) return inputs, outputs X_train,y_train, X_test, y_test = train_test_split(data,test_size=0.15) ``` ### Distance metrics: For our evaluation of the quality we used several distance metrics: * Euclidean distance. * Squared Euclidean distance. * Chebyshev distance. * Cosine distance. ``` import scipy.spatial.distance as dist def distance_functions(generated_seq): generated_sequence = np.asarray(generated_seq) original_sequence = np.asarray(y_test) print 'Euclidean distance: ', dist.euclidean(original_sequence, generated_sequence) print 'Squared Euclidean distance: ', dist.sqeuclidean(original_sequence, generated_sequence) print 'Chebyshev distance: ', dist.chebyshev(original_sequence, generated_sequence) print 'Cosine distance: ', dist.cosine(original_sequence, generated_sequence) return generated_sequence def train_and_evaluate(model, model_name): print 'Done building' print 'Training...' model.fit(X_train, y_train, batch_size=500, nb_epoch=500, validation_split=0.15,verbose=0) print 'Generating sequence...' generated_sequence = model.predict(X_test) return distance_functions(generated_sequence) ``` ### Training and Evaluation We tried 3 different deep-learning algorithms: * LSTM. * GRU. * SimpleRNN. For each algorithm we generated a sequence, Measured its distance and plotted the given result with the original sequence. ``` layer_output_size1 = 128 print 'Building LSTM Model' model = Sequential() model.add(LSTM(layer_output_size1, return_sequences=False, input_shape=(MAX_WINDOW, len(X_train[0][0])))) model.add(Dense(len(y_train[0]), input_dim=layer_output_size1)) model.add(Activation("linear")) model.compile(loss="mean_squared_error", optimizer="rmsprop") LSTM_seq = train_and_evaluate(model, 'LSTM') print '----------------------' print 'Building SimpleRNN Model' model = Sequential() model.add(SimpleRNN(layer_output_size1, return_sequences=False, input_shape=(MAX_WINDOW, len(X_train[0][0])))) model.add(Dense(len(y_train[0]), input_dim=layer_output_size1)) model.add(Activation("linear")) model.compile(loss="mean_squared_error", optimizer="rmsprop") SimpleRNN_seq = train_and_evaluate(model, 'SimpleRNN') print '----------------------' print 'Building GRU Model' model = Sequential() model.add(GRU(layer_output_size1, return_sequences=False, input_shape=(MAX_WINDOW, len(X_train[0][0])))) model.add(Dense(len(y_train[0]), input_dim=layer_output_size1)) model.add(Activation("linear")) model.compile(loss="mean_squared_error", optimizer="rmsprop") GRU_seq = train_and_evaluate(model, 'GRU') ``` ### Graphs showing the difference between the generated sequence and the original #### LSTM Sequence vs Original Sequence. 
``` %matplotlib inline import matplotlib.pyplot as plt import pylab pylab.rcParams['figure.figsize'] = (32, 6) pylab.xlim([0,len(y_test)]) plt.plot(y_test, linewidth=1) plt.plot(LSTM_seq, marker='o', markersize=4, linewidth=0) plt.legend(['Original = Blue', 'LSTM = Green '], loc='best', prop={'size':20}) plt.show() ``` #### GRU Sequence vs Original Sequence ``` plt.plot(y_test, linewidth=1) plt.plot(GRU_seq, marker='o', markersize=4, linewidth=0, c='r') plt.legend(['Original = Blue','GRU = Red'], loc='best', prop={'size':20}) plt.show() ``` #### SimpleRNN Sequence vs Original Sequence. ``` plt.plot(y_test, linewidth=1) plt.plot(SimpleRNN_seq, marker='o', markersize=4, linewidth=0, c='black') plt.legend(['Original = Blue', 'SimpleRNN = Black'], loc='best', prop={'size':20}) plt.show() ``` # Up / Down sequences. After the generation of a new sequence we wanted to try another thing: Trying to predict up / down sequences. ## Feature Extraction and Data Pre-processing. #### The features are: 1. Open price within the day. 1. Highest price within the day. 1. Lowest price within the day. 1. Close price within the day. 1. Adj Close. 1. Raise percentage. 1. Spread. 1. Up Spread. 1. Down Spread. 1. Absolute Difference between Close and Previous day close. 1. Absolute Difference between Open and Previous day open. 1. Absolute Difference between High and Previous day high. 1. Absolute Difference between low and Previous day low. 1. For each day we've also added a 7 previous day sliding window containing all of the above. 1. 1 When the stock price raised for that day, 0 When the stock price didn't raise. ``` data = get_data_if_not_exists(force=True) for i in range(1,len(data)): prev = data.iloc[i-1] data.set_value(i,"prev_close",prev["Close"]) data["up/down"] = (data["Close"] - data["prev_close"]) > 0 data["raise_percentage"] = (data["Close"] - data["prev_close"])/data["prev_close"] data["spread"] = abs(data["High"]-data["Low"]) data["up_spread"] = abs(data["High"]-data["Open"]) data["down_spread"] = abs(data["Open"]-data["Low"]) # import re for i in range(1,len(data)): prev = data.iloc[i-1] data.set_value(i,"prev_open",prev["Open"]) data.set_value(i,"prev_high",prev["High"]) data.set_value(i,"prev_low",prev["Low"]) # data.set_value(i,"month",re.findall("[1-9]+", str(data.Date[i]))[2]) # data.set_value(i,"year",re.findall("[1-9]+", str(data.Date[i]))[0]) # prev = data.iloc[i-2] # data.set_value(i,"prev_prev_open",prev["Open"]) # data.set_value(i,"prev_prev_high",prev["High"]) # data.set_value(i,"prev_prev_low",prev["Low"]) # data.set_value(i,"prev_prev_close",prev["Close"]) data["close_diff"] = abs(data["Close"] - data["prev_close"]) # data["close_diff"] = data["Close"] - data["prev_close"] # data["close_diff"] = abs(data["Close"] / data["prev_close"]) data["open_diff"] = abs(data["Open"] - data["prev_open"]) # data["open_diff"] = data["Open"] - data["prev_open"] # data["open_diff"] = abs(data["Open"] / data["prev_open"]) data["high_diff"] = abs(data["High"] - data["prev_high"]) # data["high_diff"] = data["High"] - data["prev_high"] # data["high_diff"] = abs(data["High"] / data["prev_high"]) data["low_diff"] = abs(data["Low"] - data["prev_low"]) # data["low_diff"] = data["Low"] - data["prev_low"] # data["low_diff"] = abs(data["Low"] / data["prev_low"]) # data["prev_prev_close_diff"] = (data["Close"] - data["prev_prev_close"]) # data["prev_prev_raise_percentage"] = (data["Close"] - data["prev_prev_close"])/data["prev_prev_close"] # data["prev_prev_open_diff"] = (data["Open"] - data["prev_prev_open"]) # 
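Before splitting, it can help to check the class balance, since the majority-class frequency is the baseline accuracy any model has to beat. A quick check assuming the one-hot `y` produced above:

```
labels = np.argmax(y, axis=1)  # back from one-hot to 0/1 labels
up_fraction = labels.mean()
print("fraction of 'up' days: %.3f" % up_fraction)
print("majority-class baseline accuracy: %.3f" % max(up_fraction, 1 - up_fraction))
```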
data["prev_prev_high_diff"] = (data["High"] - data["prev_prev_high"]) # data["prev_prev_low_diff"] = (data["Low"] - data["prev_prev_low"]) # data["open_close_mean"] = (data["Open"] + data["Close"])/2 # removing the first record because have no previuse record therefore can't know if up or down data = data[1:] data.describe() MAX_WINDOW = 5 def extract_features(items): return [[item[1], item[2], item[3], item[4], item[5], item[6], item[9], item[10], item[11], item[12], item[16], item[17], item[18], item[19], 1] if item[8] else [item[1], item[2], item[3], item[4], item[5], item[6], item[9], item[10], item[11], item[12], item[16], item[17], item[18], item[19], 0] for item in items] def extract_expected_result(item): return 1 if item[8] else 0 def generate_input_and_outputs(data): step = 1 inputs = [] outputs = [] for i in range(0, len(data) - MAX_WINDOW, step): inputs.append(extract_features(data.iloc[i:i + MAX_WINDOW].as_matrix())) outputs.append(extract_expected_result(data.iloc[i + MAX_WINDOW].as_matrix())) return inputs, outputs print "generating model input and outputs" X, y = generate_input_and_outputs(data) print "done generating input and outputs" y = to_categorical(y) ``` ### Splitting the data to train and test ``` X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.15) X_train,X_validation,y_train,y_validation = train_test_split(X_train,y_train,test_size=0.15) ``` ## Configuration of the deep learning models ``` models = [] layer_output_size1 = 128 layer_output_size2 = 128 output_classes = len(y[0]) percentage_of_neurons_to_ignore = 0.2 model = Sequential() model.add(LSTM(layer_output_size1, return_sequences=True, input_shape=(MAX_WINDOW, len(X[0][0])))) model.add(Dropout(percentage_of_neurons_to_ignore)) model.add(LSTM(layer_output_size2, return_sequences=False)) model.add(Dropout(percentage_of_neurons_to_ignore)) model.add(Dense(output_classes)) model.add(Activation('softmax')) model.alg_name = "LSTM" model.compile(loss='categorical_crossentropy',metrics=['accuracy'], optimizer='rmsprop') models.append(model) model = Sequential() model.add(SimpleRNN(layer_output_size1, return_sequences=True, input_shape=(MAX_WINDOW, len(X[0][0])))) model.add(Dropout(percentage_of_neurons_to_ignore)) model.add(SimpleRNN(layer_output_size2, return_sequences=False)) model.add(Dropout(percentage_of_neurons_to_ignore)) model.add(Dense(output_classes)) model.add(Activation('softmax')) model.alg_name = "SimpleRNN" model.compile(loss='categorical_crossentropy',metrics=['accuracy'], optimizer='rmsprop') models.append(model) model = Sequential() model.add(GRU(layer_output_size1, return_sequences=True, input_shape=(MAX_WINDOW, len(X[0][0])))) model.add(Dropout(percentage_of_neurons_to_ignore)) model.add(GRU(layer_output_size2, return_sequences=False)) model.add(Dropout(percentage_of_neurons_to_ignore)) model.add(Dense(output_classes)) model.add(Activation('softmax')) model.alg_name = "GRU" model.compile(loss='categorical_crossentropy',metrics=['accuracy'], optimizer='rmsprop') models.append(model) ``` ### Training ``` def trainModel(model): epochs = 5 print "Training model %s"%(model.alg_name) model.fit(X_train, y_train, batch_size=128, nb_epoch=epochs,validation_data=(X_validation,y_validation), verbose=0) ``` ### Evaluation ``` from sklearn.ensemble import RandomForestClassifier from sklearn.ensemble import ExtraTreesClassifier from sklearn.tree import DecisionTreeClassifier def createSplit(model): print 'Adding layer of DecisionTreeClassifier' # split_model = RandomForestClassifier() # 
split_model.fit(model.predict(X_validation), y_validation) # split_model = ExtraTreesClassifier(n_estimators=15, max_depth=None, min_samples_split=2, random_state=0) # split_model.fit(model.predict(X_validation), y_validation) # split_model = DecisionTreeClassifier(max_depth=None, min_samples_split=1, random_state=0) # split_model.fit(model.predict(X_validation), y_validation) split_model = DecisionTreeClassifier() split_model.fit(model.predict(X_validation), y_validation) return split_model def probabilities_to_prediction(record): return [1,0] if record[0]>record[1] else [0,1] def evaluateModel(model): success, success2 = 0,0 predicts = model.predict(X_test) split_model = createSplit(model) for index, record in enumerate(predicts): predicted = list(split_model.predict([np.array(record)])[0]) predicted2 = probabilities_to_prediction(record) expected = y_test[index] if predicted[0] == expected[0]: success += 1 if predicted2[0] == expected[0]: success2 += 1 accuracy = float(success) / len(predicts) accuracy2 = float(success2) / len(predicts) print "The Accuracy for %s is: %s" % (model.alg_name, max(accuracy2, accuracy, 1-accuracy, 1-accuracy2)) return accuracy def train_and_evaluate(): accuracies = {} for model in models: trainModel(model) acc = evaluateModel(model) if model.alg_name not in accuracies: accuracies[model.alg_name] = [] accuracies[model.alg_name].append(acc) return accuracies acc = train_and_evaluate() ``` ### Naive algorithm: We'll choose the most frequent up / down of the stock. ``` all_data = data["up/down"].count() most_frequent = data["up/down"].describe().top frequency = data["up/down"].describe().freq acc = float(frequency) / all_data print 'The most frequent is: %s' % (most_frequent) print 'The accuracy of naive algorithm is: ', acc ``` ## Summary & Evaluation analysis: #### Evaluation process: Our evaluation used two different configurations: 1. Raw Deep-Learning algorithm. 1. Deep-Learning algorithm With added layer of DecisionTreeClassifier. In both cases we used the predictions of the algorithm to create a sequence to tell us whether the stock is going to get up or down. Then we checked it with the actual data and calculated accuracy. ### Results: The accuracy as stated above is better then a naive algorithm, Not by far, But still better which means that if we follow the algorithm we are actually expected to make profit. ### What next? As expected it seems like the raw stock data isn't get a high estimation of the stock behavior. We could try mixing it with information from financial articles and news, try to take into account related stocks like the sector, S&P500 and new features, even checking for a country specific economics laws.
## Softmax regression in plain Python Softmax regression, also called multinomial logistic regression extends [logistic regression](logistic_regression.ipynb) to multiple classes. **Given:** - dataset $\{(\boldsymbol{x}^{(1)}, y^{(1)}), ..., (\boldsymbol{x}^{(m)}, y^{(m)})\}$ - with $\boldsymbol{x}^{(i)}$ being a $d-$dimensional vector $\boldsymbol{x}^{(i)} = (x^{(i)}_1, ..., x^{(i)}_d)$ - $y^{(i)}$ being the target variable for $\boldsymbol{x}^{(i)}$, for example with $K = 3$ classes we might have $y^{(i)} \in \{0, 1, 2\}$ A softmax regression model has the following features: - a separate real-valued weight vector $\boldsymbol{w}= (w^{(1)}, ..., w^{(d)})$ for each class. The weight vectors are typically stored as rows in a weight matrix. - a separate real-valued bias $b$ for each class - the softmax function as an activation function - the cross-entropy loss function The training procedure of a softmax regression model has different steps. In the beginning (step 0) the model parameters are initialized. The other steps (see below) are repeated for a specified number of training iterations or until the parameters have converged. An illustration of the whole procedure is given below. ![title](figures/softmax_regression.jpg) * * * **Step 0: ** Initialize the weight matrix and bias values with zeros (or small random values). * * * **Step 1: ** For each class $k$ compute a linear combination of the input features and the weight vector of class $k$, that is, for each training example compute a score for each class. For class $k$ and input vector $\boldsymbol{x}^{(i)}$ we have: $score_{k}(\boldsymbol{x}^{(i)}) = \boldsymbol{w}_{k}^T \cdot \boldsymbol{x}^{(i)} + b_{k}$ where $\cdot$ is the dot product and $\boldsymbol{w}_{(k)}$ the weight vector of class $k$. We can compute the scores for all classes and training examples in parallel, using vectorization and broadcasting: $\boldsymbol{scores} = \boldsymbol{X} \cdot \boldsymbol{W}^T + \boldsymbol{b} $ where $\boldsymbol{X}$ is a matrix of shape $(n_{samples}, n_{features})$ that holds all training examples, and $\boldsymbol{W}$ is a matrix of shape $(n_{classes}, n_{features})$ that holds the weight vector for each class. * * * **Step 2: ** Apply the softmax activation function to transform the scores into probabilities. The probability that an input vector $\boldsymbol{x}^{(i)}$ belongs to class $k$ is given by $\hat{p}_k(\boldsymbol{x}^{(i)}) = \frac{\exp(score_{k}(\boldsymbol{x}^{(i)}))}{\sum_{j=1}^{K} \exp(score_{j}(\boldsymbol{x}^{(i)}))}$ Again we can perform this step for all classes and training examples at once using vectorization. The class predicted by the model for $\boldsymbol{x}^{(i)}$ is then simply the class with the highest probability. * * * ** Step 3: ** Compute the cost over the whole training set. We want our model to predict a high probability for the target class and a low probability for the other classes. This can be achieved using the cross entropy loss function: $J(\boldsymbol{W},b) = - \frac{1}{m} \sum_{i=1}^m \sum_{k=1}^{K} \Big[ y_k^{(i)} \log(\hat{p}_k^{(i)})\Big]$ In this formula, the target labels are *one-hot encoded*. So $y_k^{(i)}$ is $1$ is the target class for $\boldsymbol{x}^{(i)}$ is k, otherwise $y_k^{(i)}$ is $0$. Note: when there are only two classes, this cost function is equivalent to the cost function of [logistic regression](logistic_regression.ipynb). * * * ** Step 4: ** Compute the gradient of the cost function with respect to each weight vector and bias. 
A detailed explanation of this derivation can be found [here](http://ufldl.stanford.edu/tutorial/supervised/SoftmaxRegression/). The general formula for class $k$ is given by: $ \nabla_{\boldsymbol{w}_k} J(\boldsymbol{W}, b) = \frac{1}{m}\sum_{i=1}^m\boldsymbol{x}^{(i)} \left[\hat{p}_k^{(i)}-y_k^{(i)}\right]$ For the biases, the inputs $\boldsymbol{x}^{(i)}$ will be given 1. * * * ** Step 5: ** Update the weights and biases for each class $k$: $\boldsymbol{w}_k = \boldsymbol{w}_k - \eta \, \nabla_{\boldsymbol{w}_k} J$ $b_k = b_k - \eta \, \nabla_{b_k} J$ where $\eta$ is the learning rate. ``` from sklearn.datasets import load_iris import numpy as np from sklearn.model_selection import train_test_split from sklearn.datasets import make_blobs import matplotlib.pyplot as plt np.random.seed(13) ``` ## Dataset ``` X, y_true = make_blobs(centers=4, n_samples = 5000) fig = plt.figure(figsize=(8,6)) plt.scatter(X[:,0], X[:,1], c=y_true) plt.title("Dataset") plt.xlabel("First feature") plt.ylabel("Second feature") plt.show() # reshape targets to get column vector with shape (n_samples, 1) y_true = y_true[:, np.newaxis] # Split the data into a training and test set X_train, X_test, y_train, y_test = train_test_split(X, y_true) print(f'Shape X_train: {X_train.shape}') print(f'Shape y_train: {y_train.shape}') print(f'Shape X_test: {X_test.shape}') print(f'Shape y_test: {y_test.shape}') ``` ## Softmax regression class ``` class SoftmaxRegressor: def __init__(self): pass def train(self, X, y_true, n_classes, n_iters=10, learning_rate=0.1): """ Trains a multinomial logistic regression model on given set of training data """ self.n_samples, n_features = X.shape self.n_classes = n_classes self.weights = np.random.rand(self.n_classes, n_features) self.bias = np.zeros((1, self.n_classes)) all_losses = [] for i in range(n_iters): scores = self.compute_scores(X) probs = self.softmax(scores) y_predict = np.argmax(probs, axis=1)[:, np.newaxis] y_one_hot = self.one_hot(y_true) loss = self.cross_entropy(y_one_hot, probs) all_losses.append(loss) dw = (1 / self.n_samples) * np.dot(X.T, (probs - y_one_hot)) db = (1 / self.n_samples) * np.sum(probs - y_one_hot, axis=0) self.weights = self.weights - learning_rate * dw.T self.bias = self.bias - learning_rate * db if i % 100 == 0: print(f'Iteration number: {i}, loss: {np.round(loss, 4)}') return self.weights, self.bias, all_losses def predict(self, X): """ Predict class labels for samples in X. 
Args: X: numpy array of shape (n_samples, n_features) Returns: numpy array of shape (n_samples, 1) with predicted classes """ scores = self.compute_scores(X) probs = self.softmax(scores) return np.argmax(probs, axis=1)[:, np.newaxis] def softmax(self, scores): """ Tranforms matrix of predicted scores to matrix of probabilities Args: scores: numpy array of shape (n_samples, n_classes) with unnormalized scores Returns: softmax: numpy array of shape (n_samples, n_classes) with probabilities """ exp = np.exp(scores) sum_exp = np.sum(np.exp(scores), axis=1, keepdims=True) softmax = exp / sum_exp return softmax def compute_scores(self, X): """ Computes class-scores for samples in X Args: X: numpy array of shape (n_samples, n_features) Returns: scores: numpy array of shape (n_samples, n_classes) """ return np.dot(X, self.weights.T) + self.bias def cross_entropy(self, y_true, probs): loss = - (1 / self.n_samples) * np.sum(y_true * np.log(probs)) return loss def one_hot(self, y): """ Tranforms vector y of labels to one-hot encoded matrix """ one_hot = np.zeros((self.n_samples, self.n_classes)) one_hot[np.arange(self.n_samples), y.T] = 1 return one_hot ``` ## Initializing and training the model ``` regressor = SoftmaxRegressor() w_trained, b_trained, loss = regressor.train(X_train, y_train, learning_rate=0.1, n_iters=800, n_classes=4) fig = plt.figure(figsize=(8,6)) plt.plot(np.arange(800), loss) plt.title("Development of loss during training") plt.xlabel("Number of iterations") plt.ylabel("Loss") plt.show() ``` ## Testing the model ``` n_test_samples, _ = X_test.shape y_predict = regressor.predict(X_test) print(f"Classification accuracy on test set: {(np.sum(y_predict == y_test)/n_test_samples) * 100}%") ```
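One practical caveat with the `softmax` method above: `np.exp(scores)` can overflow for large score values. A common remedy, not used in the class above, is to subtract the row-wise maximum before exponentiating, which leaves the output unchanged because the extra factor cancels in the numerator and denominator. A minimal sketch (the function name `stable_softmax` is illustrative only):

```
import numpy as np

def stable_softmax(scores):
    """Softmax with the usual max-subtraction trick to avoid overflow.

    Subtracting the row-wise maximum does not change the result, since
    the factor exp(-max) cancels in numerator and denominator.
    """
    shifted = scores - np.max(scores, axis=1, keepdims=True)
    exp = np.exp(shifted)
    return exp / np.sum(exp, axis=1, keepdims=True)

# Large scores overflow a naive softmax but not the stable one.
scores = np.array([[1000.0, 1001.0, 1002.0]])
print(stable_softmax(scores))  # about [[0.090, 0.245, 0.665]]
```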
<a href="https://colab.research.google.com/github/xSakix/AI_colab_notebooks/blob/master/imdb_2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # IMDB DNN Lets do the IMDB dataset with a simple DNN. The first one is in numpy and second will be done in pytorch, but only using tensor for the GPU. Not using backwards or any NN functionality, as the goal is to implement it and learn how it works behind the scenes. ``` import keras import numpy as np import torch import matplotlib.pyplot as plt torch.manual_seed(2019) np.random.seed(42) EPS = torch.finfo(torch.float32).eps def convert_to_array(x): x_temp = [] for x in x_train: if len(x) < maxlen: for i in range(maxlen - len(x)): x.append(0.0) elif len(x) > maxlen: x = x[0:maxlen] x_temp.append(x) return np.array(x_temp) def relu(z): other = torch.zeros(z.size()).cuda() return torch.maximum(other,z).cuda() def back_relu(Z, dA): dZ = dA.detach().clone().cuda() # just converting dz to a correct object. # When z <= 0, you should set dz to 0 as well. # normaly it would be: # Z[Z <= 0] = 0. # Z[Z > 0] = 1 # dZ = dA*Z # so for short we have this dZ[Z <= 0.] = 0. # which says, that make dZ a copy od dA,then where Z <= 0 we have 0 # and where Z > 0 we have 1*dA = dA return dZ def sigmoid(z): return 1. / (1. + torch.exp(-z).cuda()+EPS) def back_sigmoid(Z, dA): s = 1 / (1 + torch.exp(-Z).cuda()+EPS) dZ = dA * s * (1 - s) return dZ x=torch.randn(1,5).cuda() dx=torch.randn(1,5).cuda() print(x) print(relu(x)) print("*"*80) print(x) print(dx) #where x <= 0 dx will 0 print(back_relu(x,dx)) print("*"*80) print(sigmoid(x)) print(back_sigmoid(x,dx)) max_features = 20000 # Only consider the top 20k words maxlen = 200 # Only consider the first 200 words of each movie review (x_train, y_train), (x_val, y_val) = keras.datasets.imdb.load_data(num_words=max_features) x_train = convert_to_array(x_train) x_val = convert_to_array(x_val) y_train = y_train.reshape(y_train.shape[0], -1).T y_val = y_val.reshape(y_val.shape[0], -1).T x_train = x_train.reshape(x_train.shape[0], -1).T x_val = x_val.reshape(x_val.shape[0], -1).T print("*" * 80) print("x_train:{}".format(x_train.shape)) print("x_val:{}".format(x_val.shape)) print("y_train:{}".format(y_train.shape)) print("y_val:{}".format(y_val.shape)) print("*" * 80) assert (x_train.shape == (maxlen, 25000)) assert (y_train.shape == (1, 25000)) assert (x_val.shape == (maxlen, 25000)) assert (y_val.shape == (1, 25000)) print("*" * 80) print("max x_train before:{}".format(np.max(x_train))) print("max x_val before:{}".format(np.max(x_val))) print("min before:{}, {}".format(np.min(x_train), np.min(x_val))) # norm didn't work well # norm = np.linalg.norm(x_train, ord=2) # print("norm={}".format(norm)) # normalizing around max_features works well # x_train = x_train / max_features # x_val = x_val / max_features # centering around mean x_mean = np.mean(x_train) x_std = np.std(x_train) print("(mean,std)=({},{})".format(x_mean, x_std)) x_train = (x_train - x_mean) / x_std x_val = (x_val - x_mean) / x_std print("max x_train after norm:{}".format(np.max(x_train))) print("max x_val after norm:{}".format(np.max(x_val))) print("min after norm:{}, {}".format(np.min(x_train), np.min(x_val))) # assert ((x_train >= 0.).all() and (x_train < 1.).all()) print("*" * 80) print("y_train unique vals:{}".format(np.unique(y_train))) print("y_val unique vals:{}".format(np.unique(y_train))) print("*" * 80) # 2 layer network m = x_train.shape[1] n_x = x_train.shape[0] n_h = 128 n_y = 1 # 
init params W1 = torch.randn(n_h, n_x).cuda() * 0.01 b1 = torch.zeros((n_h, 1)).cuda() W2 = torch.randn(n_y, n_h).cuda() * 0.01 b2 = torch.zeros((n_y, 1)).cuda() assert (W1.size() == (n_h, n_x)) assert (b1.size() == (n_h, 1)) assert (W2.size() == (n_y, n_h)) assert (b2.size() == (n_y, 1)) costs = [] n_iter = 100000 learning_rate = 0.01 x_train = torch.tensor(x_train,dtype=torch.float32).cuda() y_train = torch.tensor(y_train,dtype=torch.float32).cuda() for i in range(0, n_iter): # forward # A1, cache1 = linear_activation_forward(X, W1, b1, "relu") # do a forward pass over relu # print("W1.shape:{}".format(W1.shape)) # print("X.shape:{}".format(x_train.shape)) # m x n * n x p = m x p # (5, 200) * (200, 25000) Z1 = torch.mm(W1, x_train).cuda() + b1 assert (Z1.size() == (n_h, m)) A1 = relu(Z1) assert (A1.size() == (n_h, m)) # A2, cache2 = linear_activation_forward(A1, W2, b2, "sigmoid") Z2 = torch.mm(W2, A1).cuda() + b2 assert (Z2.size() == (n_y, m)) A2 = sigmoid(Z2) assert (A2.size() == (n_y, m)) # compute cost cost = -(1 / m) * torch.sum(y_train * torch.log(A2).cuda() + (1 - y_train) * torch.log(1 - A2).cuda()).cuda() cost = torch.squeeze(cost) # backward compute loss dA2 = -(torch.divide(y_train, A2).cuda() - torch.divide(1 - y_train, 1 - A2).cuda()) # print("dA2.shape={}".format(dA2.shape)) assert (dA2.size() == A2.size()) # backward dZ2 = back_sigmoid(Z2, dA2) assert (dZ2.size() == dA2.size()) dW2 = (1 / m) * torch.mm(dZ2, A1.T).cuda() db2 = (1 / m) * torch.sum(dZ2, dim=1, keepdims=True).cuda() dA1 = torch.mm(W2.T, dZ2).cuda() assert (dA1.size() == A1.size()) assert (dW2.size() == W2.size()) assert (db2.size() == b2.size()) dZ1 = back_relu(Z1, dA1) assert (dZ1.size() == dA1.size()) dW1 = (1 / m) * torch.mm(dZ1, x_train.T).cuda() db1 = (1 / m) * torch.sum(dZ1, dim=1, keepdims=True).cuda() assert (dW1.size() == W1.size()) assert (db1.size() == b1.size()) # update params W2 = W2 - learning_rate * dW2 b2 = b2 - learning_rate * db2 W1 = W1 - learning_rate * dW1 b1 = b1 - learning_rate * db1 # print stats if i % 1000 == 0: print("Cost after iteration {}: {}".format(i, cost)) if i % 1000 == 0: costs.append(cost) #predict p = torch.zeros((1, x_train.shape[1])).cuda() Z1 = torch.mm(W1, x_train).cuda() + b1 A1 = relu(Z1) Z2 = torch.mm(W2, A1).cuda() + b2 A2 = sigmoid(Z2) # convert probas to 0/1 predictions for i in range(0, A2.shape[1]): if A2[0, i] > 0.5: p[0, i] = 1 else: p[0, i] = 0 print("Accuracy on training set: " + str(torch.sum((p == y_train)/x_train.shape[1]).cuda())) x_val = torch.tensor(x_val,dtype=torch.float32).cuda() y_val = torch.tensor(y_val,dtype=torch.float32).cuda() #predict p = torch.zeros((1, x_val.shape[1])).cuda() Z1 = torch.mm(W1, x_val).cuda() + b1 A1 = relu(Z1) Z2 = torch.mm(W2, A1).cuda() + b2 A2 = sigmoid(Z2) # convert probas to 0/1 predictions for i in range(0, A2.shape[1]): if A2[0, i] > 0.5: p[0, i] = 1 else: p[0, i] = 0 print("Accuracy on validation/test set: " + str(torch.sum((p == y_val)/x_val.shape[1]).cuda())) plt.plot(costs) plt.ylabel('cost') plt.xlabel('iterations (per hundreds)') plt.title("Learning rate =" + str(learning_rate)) plt.show() ``` # summary So even with enough power(GPU) and kinda low loss/cost we actually don't get better accuracy on validation set. That looks like **overfitting**.
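If the overfitting observed above were worth attacking inside this hand-rolled setup, one common option would be L2 weight decay on `W1` and `W2`. The toy sketch below only illustrates the update rule on stand-in tensors; `lambd` is a hypothetical hyperparameter and nothing here has been tuned against IMDB:

```
import torch

torch.manual_seed(0)

lambd = 0.01          # hypothetical L2 regularization strength (not tuned)
learning_rate = 0.01
m = 4                 # toy batch size

W = torch.randn(3, 2)   # stand-in for a weight matrix such as W1 or W2
dW = torch.randn(3, 2)  # stand-in for the gradient computed by backprop

# For monitoring, the cost would gain a penalty term (lambd / (2 * m)) * sum(W ** 2).
# L2 weight decay adds (lambd / m) * W to the gradient before the update...
dW_reg = dW + (lambd / m) * W
# ...and the parameter update itself is unchanged
W_new = W - learning_rate * dW_reg
print(W_new)
```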
``` import keras keras.__version__ ``` # Classifying newswires: a multi-class classification example This notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments. ---- In the previous section we saw how to classify vector inputs into two mutually exclusive classes using a densely-connected neural network. But what happens when you have more than two classes? In this section, we will build a network to classify Reuters newswires into 46 different mutually-exclusive topics. Since we have many classes, this problem is an instance of "multi-class classification", and since each data point should be classified into only one category, the problem is more specifically an instance of "single-label, multi-class classification". If each data point could have belonged to multiple categories (in our case, topics) then we would be facing a "multi-label, multi-class classification" problem. ## The Reuters dataset We will be working with the _Reuters dataset_, a set of short newswires and their topics, published by Reuters in 1986. It's a very simple, widely used toy dataset for text classification. There are 46 different topics; some topics are more represented than others, but each topic has at least 10 examples in the training set. Like IMDB and MNIST, the Reuters dataset comes packaged as part of Keras. Let's take a look right away: ``` from keras.datasets import reuters (train_data, train_labels), (test_data, test_labels) = reuters.load_data(num_words=10000) ``` Like with the IMDB dataset, the argument `num_words=10000` restricts the data to the 10,000 most frequently occurring words found in the data. We have 8,982 training examples and 2,246 test examples: ``` len(train_data) len(test_data) ``` As with the IMDB reviews, each example is a list of integers (word indices): ``` train_data[10] ``` Here's how you can decode it back to words, in case you are curious: ``` word_index = reuters.get_word_index() reverse_word_index = dict([(value, key) for (key, value) in word_index.items()]) # Note that our indices were offset by 3 # because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown". decoded_newswire = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]]) decoded_newswire ``` The label associated with an example is an integer between 0 and 45: a topic index. ``` train_labels[10] ``` ## Preparing the data We can vectorize the data with the exact same code as in our previous example: ``` import numpy as np def vectorize_sequences(sequences, dimension=10000): results = np.zeros((len(sequences), dimension)) for i, sequence in enumerate(sequences): results[i, sequence] = 1. return results # Our vectorized training data x_train = vectorize_sequences(train_data) # Our vectorized test data x_test = vectorize_sequences(test_data) ``` To vectorize the labels, there are two possibilities: we could just cast the label list as an integer tensor, or we could use a "one-hot" encoding. One-hot encoding is a widely used format for categorical data, also called "categorical encoding". For a more detailed explanation of one-hot encoding, you can refer to Chapter 6, Section 1. 
In our case, one-hot encoding of our labels consists in embedding each label as an all-zero vector with a 1 in the place of the label index, e.g.:

```
def to_one_hot(labels, dimension=46):
    results = np.zeros((len(labels), dimension))
    for i, label in enumerate(labels):
        results[i, label] = 1.
    return results

# Our vectorized training labels
one_hot_train_labels = to_one_hot(train_labels)
# Our vectorized test labels
one_hot_test_labels = to_one_hot(test_labels)
```

Note that there is a built-in way to do this in Keras, which you have already seen in action in our MNIST example:

```
from keras.utils.np_utils import to_categorical

one_hot_train_labels = to_categorical(train_labels)
one_hot_test_labels = to_categorical(test_labels)
```

## Building our network

This topic classification problem looks very similar to our previous movie review classification problem: in both cases, we are trying to classify short snippets of text. There is, however, a new constraint: the number of output classes has gone from 2 to 46, i.e. the dimensionality of the output space is much larger.

In a stack of `Dense` layers like the one we have been using, each layer can only access information present in the output of the previous layer. If one layer drops information relevant to the classification problem, that information can never be recovered by later layers: each layer can potentially become an "information bottleneck". In our previous example we used 16-dimensional intermediate layers, but a 16-dimensional space may be too limited to learn to separate 46 different classes: such small layers could act as information bottlenecks, permanently dropping relevant information.

For this reason we will use larger layers, here with 64 units:

```
from keras import models
from keras import layers

model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))
```

There are two other things you should note about this architecture:

* We are ending the network with a `Dense` layer of size 46. This means that for each input sample, our network will output a 46-dimensional vector. Each entry in this vector (each dimension) will encode a different output class.
* The last layer uses a `softmax` activation. You have already seen this pattern in the MNIST example. It means that the network will output a _probability distribution_ over the 46 different output classes, i.e. for every input sample, the network will produce a 46-dimensional output vector where `output[i]` is the probability that the sample belongs to class `i`. The 46 scores will sum to 1.

The best loss function to use in this case is `categorical_crossentropy`. It measures the distance between two probability distributions: in our case, between the probability distribution output by our network, and the true distribution of the labels. By minimizing the distance between these two distributions, we train our network to output something as close as possible to the true labels.
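To make the loss concrete, here is a tiny NumPy calculation of categorical crossentropy for a single hypothetical prediction (the numbers are made up for illustration):

```
import numpy as np

# Toy example: one sample, 3 classes, true class is index 1 (one-hot encoded).
y_true = np.array([0., 1., 0.])
y_pred = np.array([0.1, 0.7, 0.2])   # hypothetical network output (sums to 1)

# categorical crossentropy = -sum_k y_k * log(p_k); only the true class contributes
loss = -np.sum(y_true * np.log(y_pred))
print(loss)   # -log(0.7), roughly 0.357
```

With this loss in hand, we can compile the model: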
```
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```

## Validating our approach

Let's set apart 1,000 samples in our training data to use as a validation set:

```
x_val = x_train[:1000]
partial_x_train = x_train[1000:]

y_val = one_hot_train_labels[:1000]
partial_y_train = one_hot_train_labels[1000:]
```

Now let's train our network for 20 epochs:

```
history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=20,
                    batch_size=512,
                    validation_data=(x_val, y_val))
```

Let's display its loss and accuracy curves:

```
import matplotlib.pyplot as plt

loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(1, len(loss) + 1)

plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()

plt.show()

plt.clf()   # clear figure

acc = history.history['acc']
val_acc = history.history['val_acc']

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()

plt.show()
```

It seems that the network starts overfitting after 8 epochs. Let's train a new network from scratch for 8 epochs, then let's evaluate it on the test set:

```
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))

model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(partial_x_train,
          partial_y_train,
          epochs=8,
          batch_size=512,
          validation_data=(x_val, y_val))
results = model.evaluate(x_test, one_hot_test_labels)

results
```

Our approach reaches an accuracy of ~78%. With a balanced binary classification problem, the accuracy reached by a purely random classifier would be 50%, but in our case it is closer to 19%, so our results seem pretty good, at least when compared to a random baseline. The cell below shows the accuracy we would get by randomly guessing the class of every sample:

```
import copy

test_labels_copy = copy.copy(test_labels)
np.random.shuffle(test_labels_copy)
float(np.sum(np.array(test_labels) == np.array(test_labels_copy))) / len(test_labels)
```

## Generating predictions on new data

We can verify that the `predict` method of our model instance returns a probability distribution over all 46 topics. Let's generate topic predictions for all of the test data:

```
predictions = model.predict(x_test)
```

Each entry in `predictions` is a vector of length 46:

```
predictions[0].shape
```

The coefficients in this vector sum to 1, as they represent probabilities:

```
np.sum(predictions[0])
```

The largest entry is the predicted class, i.e. the class with the highest probability:

```
np.argmax(predictions[0])
```

## A different way to handle the labels and the loss

We mentioned earlier that another way to encode the labels would be to cast them as an integer tensor, like such:

```
y_train = np.array(train_labels)
y_test = np.array(test_labels)
```

The only thing it would change is the choice of the loss function. Our previous loss, `categorical_crossentropy`, expects the labels to follow a categorical encoding. With integer labels, we should use `sparse_categorical_crossentropy`:

```
model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy', metrics=['acc'])
```

This new loss function is still mathematically the same as `categorical_crossentropy`; it just has a different interface.
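A quick NumPy check of that claim on made-up predictions: the two encodings pick out the same log-probabilities, so the two losses agree.

```
import numpy as np

probs = np.array([[0.1, 0.7, 0.2],
                  [0.3, 0.3, 0.4]])   # hypothetical predicted distributions
labels = np.array([1, 2])             # integer ("sparse") labels
one_hot = np.eye(3)[labels]           # the same labels, one-hot encoded

categorical_ce = -np.mean(np.sum(one_hot * np.log(probs), axis=1))
sparse_ce = -np.mean(np.log(probs[np.arange(len(labels)), labels]))
print(categorical_ce, sparse_ce)      # identical values (about 0.637)
```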
## On the importance of having sufficiently large intermediate layers (very important!)

We mentioned earlier that since our final outputs are 46-dimensional, we should avoid intermediate layers with much fewer than 46 hidden units. Now let's see what happens when we introduce an information bottleneck by having intermediate layers significantly less than 46-dimensional, e.g. 4-dimensional.

```
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(4, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))

model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(partial_x_train,
          partial_y_train,
          epochs=20,
          batch_size=128,
          validation_data=(x_val, y_val))
```

Our network now seems to peak at ~71% test accuracy, an 8% absolute drop. This drop is mostly due to the fact that we are now trying to compress a lot of information (enough information to recover the separation hyperplanes of 46 classes) into an intermediate space that is too low-dimensional. The network is able to cram _most_ of the necessary information into these 4-dimensional representations, but not all of it.

## Further experiments

* Try using larger or smaller layers: 32 units, 128 units...
* We were using two hidden layers. Now try to use a single hidden layer, or three hidden layers.

## Wrapping up

Here's what you should take away from this example:

* If you are trying to classify data points between N classes, your network should end with a `Dense` layer of size N.
* In a single-label, multi-class classification problem, your network should end with a `softmax` activation so that it outputs a probability distribution over the N output classes.
* _Categorical crossentropy_ is almost always the loss function you should use for such problems. It minimizes the distance between the probability distribution output by the network and the true distribution of the targets.
* There are two ways to handle labels in multi-class classification:
  * Encoding the labels via "categorical encoding" (also known as "one-hot encoding") and using `categorical_crossentropy` as your loss function.
  * Encoding the labels as integers and using the `sparse_categorical_crossentropy` loss function.
* If you need to classify data into a large number of categories, then you should avoid creating information bottlenecks in your network by having intermediate layers that are too small.
## Neural networks for segmentation

```
! wget https://www.dropbox.com/s/jy34yowcf85ydba/data.zip?dl=0 -O data.zip
! unzip -q data.zip
```

Your next task is to train a neural network to segment cell edges.

Here is an example of input data with corresponding ground truth:

```
import scipy as sp
import scipy.misc
import matplotlib.pyplot as plt
import numpy as np
import skimage.io
import skimage
%matplotlib inline

# Human HT29 colon-cancer cells
plt.figure(figsize=(10,8))
plt.subplot(1,2,1)
im = skimage.img_as_ubyte(skimage.io.imread('BBBC018_v1_images-fixed/train/00735-actin.DIB.bmp'))
plt.imshow(im)
plt.subplot(1,2,2)
mask = skimage.img_as_ubyte(skimage.io.imread('BBBC018_v1_outlines/train/00735-cells.png'))
plt.imshow(mask, 'gray')
```

This time you aren't provided with any code snippets, just the input data and a target metric - intersection-over-union (IoU) (see the implementation below). You should train a neural network to predict a mask of edge pixels (pixels in the ground-truth images with a value greater than 0).

Use everything you've learnt by now:
* any architecture for semantic segmentation (encoder-decoder like or based on dilated convolutions)
* data augmentation (you will need it since the train set consists of just 41 images)
* fine-tuning

You're not allowed to do only one thing: to train your network on the test set.

Your final solution will consist of an ipython notebook with code (for the final network training + any experiments with data) and an archive of png images with network predictions for the test images (one-channel images, 0 for non-edge pixels, any non-zero value for edge pixels).

Forestalling questions about a baseline... well, let's say that a good network should be able to segment images with IoU >= 0.29. This is not a strict criterion for a full-points solution, but try to obtain better numbers.

Practical notes:
* There is a hard class imbalance in the dataset, so the network output will be biased toward the "zero" class. You can either tune the minimal probability threshold for the "edge" class, or add class weights to increase the cost of edge pixels in the optimized loss.
* The dataset is small, so actively use data augmentation: rotations, flips, random contrast and brightness.
* Better to spend time on experiments with the neural network than on postprocessing tricks (i.e. test-set augmentation).
* Keep in mind that the network architecture defines the receptive field of each output pixel. If the size of the network input is smaller than the receptive field of an output pixel, then you can probably drop some layers without loss of quality. It is ok to modify "off-the-shelf" architectures.

Good luck!

```
def calc_iou(prediction, ground_truth):
    n_images = len(prediction)
    intersection, union = 0, 0
    for i in range(n_images):
        intersection += np.logical_and(prediction[i] > 0, ground_truth[i] > 0).astype(np.float32).sum()
        union += np.logical_or(prediction[i] > 0, ground_truth[i] > 0).astype(np.float32).sum()
    return float(intersection) / union
```
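Since the notes above explicitly recommend augmentation, here is a minimal, framework-agnostic sketch of paired flips and 90-degree rotations; the key point is that the image and its outline mask must receive exactly the same transform. The function name `augment` and the toy arrays are illustrative only, not part of the assignment code:

```
import numpy as np

def augment(image, mask, rng=np.random):
    """Apply the same random flips / 90-degree rotation to an image and its mask."""
    if rng.rand() < 0.5:                      # horizontal flip
        image, mask = image[:, ::-1], mask[:, ::-1]
    if rng.rand() < 0.5:                      # vertical flip
        image, mask = image[::-1, :], mask[::-1, :]
    k = rng.randint(4)                        # rotate by 0, 90, 180 or 270 degrees
    return np.rot90(image, k).copy(), np.rot90(mask, k).copy()

# toy usage; in practice this would be applied to each image/outline pair every epoch
toy_image, toy_mask = np.zeros((64, 64, 3)), np.zeros((64, 64))
aug_image, aug_mask = augment(toy_image, toy_mask)
```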
# VIB: Theory **Notation** * $x$ be our input source, * $y$ be our target * $z$ be our latent representation ### Mutual Information Mutual information (MI) measures the amount of information obtained about one random variable after observing another random variable. Formally given two random variables $x$ and $y$ with joint distribution $p(x,y)$ and marginal densities $p(x)$ and $p(y)$ their MI is defined as the KL-divergence between the joint density and the product of their marginal densities $$\begin{align} I(x;y)&=I(y;x)\\ &=KL\Big(p(x,y)||p(x)p(y)\Big)\\ &=\mathbb{E}_{(x,y)\sim p(x,y)}\bigg[\log\frac{p(x,y)}{p(x)p(y)}\bigg]\\ &=\int dxdyp(x,y)\log\frac{p(x,y)}{p(x)p(y)} \end{align}$$ ### Information Bottlenecks IB regards supervised learning as a representation learning problem, seeking a stochastic map from input data $x$ to some latent representation $z$ that can still be used to predict the labels $y$ , under a constraint on its total complexity. We assume our joint distribution $p(x,y,z)$ can be factorised as follows: $$p(x,y,z)=p(z\mid x,y)p(y\mid x)p(x)=p(z\mid x)p(y\mid x)p(x)$$ which corresponds to the following Markov Chain $$y\rightarrow x\rightarrow z$$ Our goal is to learn an encoding that is maximally informative about our target $y$ measured by $I(y;z)$. We could always ensure a maximally informative representation by taking the identity encoding $x=z$ which is not useful. Instead we apply a constraint such that the objective is $$\begin{alignat}{3} &\underset{}{\text{max }} & \quad & I(y;z)\\ &\text{subject to } & \quad & I(x;z)\leq I_c \end{alignat}$$ where $I_c$ is the information constraint. The Lagrangian of the above constrained optimisation problem which we would like to **maximise** is $$\begin{align} L_{IB}&=I(y;z)-\beta \big(I(x;z)-I_c\big)\\ &=I(y;z)-\beta I(x;z) \end{align}$$ where $\beta\geq0$ is a Lagrange multiplier. * Intuitively the first term encourages $z$ to be predictive of $y$, whilst the second term encourages $z$ to "forget" $x$. * In essence, IB principle explicitly enforces the learned representation $z$ to only preserve the information in $x$ that is useful to the prediction of $y$, i.e., the minimal sufficient statistics of $x$ w.r.t. $y$. ### Variational Information Bottlenecks **The first term**<br> We can write out the terms in the objective as $$I(y;z)=\int dydz p(y,z)\log \frac{p(y,z)}{p(y)p(z)}=\int dydz p(y,z)\log \frac{p(y\mid z)}{p(y)}$$ where $p(y\mid z)$ is defined as $$p(y\mid z)=\int dx \frac{p(x,y,z)}{p(z)}=\int dx \frac{p(z\mid x)p(y\mid x)p(x)}{p(z)}$$ which is intractable. Let $q(y\mid z)$ be a variational approximation to $p(y\mid z)$. By using the KL divergence we can obtain a lower bound on $I(y;z)$ $$KL\Big(p(y\mid z)|| q(y\mid z)\Big)\geq0\Longrightarrow \int dy p(y\mid z)\log p(y\mid z)\geq \int dy p(y\mid z)\log q(y\mid z)$$ Hence we have that $$\begin{align} I(y;z)&= \int dydz p(y,z)\log p(y\mid z) - \int dy p(y)\log p(y)\\ &\geq \int dydz p(y, z)\log q(y\mid z) - \int dy p(y)\log p(y)\\ &=\int dxdydz p(z\mid x)p(y\mid x)p(x)\log q(y\mid z) \end{align}$$ where the entropy of the labels $H(y)=- \int dy p(y)\log p(y)$ is independent of our optimisation and so can be ignored. **The second term**<br> We can write out the second term in the objective as $$I(x;z)=\int dxdz p(x,z)\log \frac{p(x,z)}{p(x)p(z)}=\int dxdz p(x,z)\log \frac{p(z\mid x)}{p(z)}$$ Let $q(z)$ be a variational approximation to the marginal $p(z)$. 
By using the KL divergence we can obtain an upper bound on $I(x;z)$ as $$KL\Big(p(z)|| q(z)\Big)\geq0\Longrightarrow \int dz p(z)\log p(z)\geq \int dz p(z)\log q(z)$$ Hence we have $$\begin{align} I(x;z)&=\int dxdz p(x,z)\log p(z\mid x) - \int dz p(z)\log p(z)\\ &\leq\int dxdz p(x,z)\log p(z\mid x) - \int dz p(z)\log q(z)\\ &=\int dxdz p(x)p(z\mid x)\log \frac{p(z\mid x)}{q(z)} \end{align}$$ ### Loss Function Combining the above two bounds we can rewrite the Lagrangian which we would like to **maximise** as $$\begin{align} L_{IB}&=I(y;z)-\beta I(x;z)\\ &\geq \int dxdydz p(z\mid x)p(y\mid x)p(x)\log q(y\mid z) -\beta\int dxdz p(x)p(z\mid x)\log \frac{p(z\mid x)}{q(z)}\\ &=\int dxdydz p(z\mid x)p(y,x)\log q(y\mid z) -\beta\int dxdydz p(z\mid x)p(x,y)KL\Big(p(z\mid x)||q(z)\Big)\\ &=\mathbb{E}_{(x,y)\sim p(x,y), z\sim p(z\mid x)}\bigg[\log q(y\mid z)-\beta KL\Big(p(z\mid x)||q(z)\Big)\bigg]\\ &=J_{IB} \end{align}$$ To compute the lower bound in practice make the following assumptions: * We approximate $p(x,y)=p(x)p(y\mid x)$ using the empirical data distribution $p(x,y)=\frac{1}{n}\sum^{n}_{i=1}\delta_{x_i}(x)\delta_{y_i}(y)$ such that $$\begin{align} J_{IB}&= \int dxdydz p(z\mid x)p(y\mid x)p(x)\log q(y\mid z) -\beta\int dxdz p(x)p(z\mid x)\log \frac{p(z\mid x)}{q(z)}\\ &\approx \frac{1}{n}\sum^{n}_{i=1}\bigg[\int dz p(z\mid x_i)\log q(y_i\mid z)-\beta\int dz p(z\mid x_i)\log \frac{p(z\mid x_i)}{q(z)}\bigg]\\ &=\frac{1}{n}\sum^{n}_{i=1}\bigg[\int dz p(z\mid x_i)\log q(y_i\mid z)- \beta KL\Big(p(z\mid x_i)|| q(z)\Big) \bigg] \end{align}$$ * By using an encoder parameterised as multivariate Gaussian $$p_\phi(z\mid x)=\mathcal{N}\bigg(z;\boldsymbol{\mu}_\phi(x), \boldsymbol{\Sigma}_\phi(x)\bigg)$$ then we can use the reparameterisation trick such that $z=g_\phi(\epsilon,x)$ which is a deterministic function of $x$ and the Gaussian random variable $\epsilon\sim p(\epsilon)=\mathcal{N}(0,I)$. * We assume that our choice of parameterisation of $p(z\mid x)$ and $q(z)$ allow for computation of an analytic KL-divergence, Thus the final objective we would **minimise** is $$J_{IB}=\frac{1}{n}\sum^{n}_{i=1}\Bigg[\beta KL\Big(p(z\mid x_i)|| q(z)\Big) - \mathbb{E}_{\epsilon\sim p(\epsilon)}\Big[\log q\big(y_i\mid g_\phi(\epsilon,x)\big)\Big]\Bigg]$$ where we have that * $p_\phi(z\mid x)$ is the encoder parameterised as a multivariate Gaussian $$p_\phi(z\mid x)=\mathcal{N}\bigg(z;\boldsymbol{\mu}_\phi(x), \boldsymbol{\Sigma}_\phi(x)\bigg)$$ * $q_\theta(y\mid z)$ is the decoder parameterised as an independent Bernoulli for each element $y_j$ of $y$ (for binary data) $$q_\theta(y_j\mid z)=\text{Ber}\Big(\mu_\theta(z)\Big)$$ * $q(z)$ is the approximated latent marginal often fixed to a standard normal. $$q_\theta(z)=\mathcal{N}\Big(z;\mathbf{0},\mathbf{I}_k\Big)$$ By using our parameterisation of the decoder $q_\theta(y\mid z)$ as an indepenedent Bernoulli we have that $$-\log q_\theta(y\mid z)=-\Big[y\log \hat{y} + (1-y)\log(1-\hat{y})\Big]$$ i.e. this is the Binary Cross Entropy loss. ### Connection to Variational Autoencoder The VAE is a special case of an unsupervised version of VIB with $\beta=1.0$ as they consider the objective $$L=I(x;z)-\beta I(i;z)$$ where the aim is to take our data $x$ and maximise the mutual information contained in some encoding $z$, while restricting how much information we allow our representation to contain about the identity of each data element in our sample $i$. 
While this objective takes the same mathematical form as that of a Variational Autoencoder, the interpretation of the objective is very different: * In the VAE, the model starts life as a generative model with a defined prior $p(z)$ and stochastic decoder $p(x|z)$ as part of the model, and the encoder $q(z|x)$ is created to serve as a variational approximation to the true posterior $p(z|x) = \frac{p(x|z)}{p(z)p(x)}$. * In the VIB approach, the model is originally just the stochastic encoder $p(z|x)$, and the decoder $q(x|z)$ is the variational approximation to the true $p(x|z) = \frac{p(z|x)p(x)}{p(z)}$ and $q(z)$ is the variational approximation to the marginal $p(z) =\int dx p(x)p(z|x)$. ### References * Original Deep VIB paper: https://arxiv.org/abs/1612.00410 # VIB: Code The code is almost identical to my VAE implementation found here: [torch_vae](https://github.com/udeepam/vae/blob/master/notebooks/vae.ipynb) **References:** * https://github.com/makezur/VIB_pytorch * https://github.com/sungyubkim/DVIB * https://github.com/1Konny/VIB-pytorch ``` from __future__ import absolute_import from __future__ import division from __future__ import print_function import numpy as np import matplotlib.pyplot as plt import time from collections import defaultdict import torch import torch.nn as nn from torch.nn import functional as F import torch.utils.data as data_utils # Device Config device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # Fix random seeds for reproducibility seed = 73 torch.manual_seed(seed) np.random.seed(seed) ``` ## Load MNIST Dataset ``` # import torchvision # from torchvision import transforms # from torchvision.datasets import MNIST # # 60000 tuples with 1x28x28 image and corresponding label # data = MNIST('data', # train=True, # download=True, # transform = transforms.Compose([transforms.ToTensor()])) # # Split data into images and labels # x_train = data.train_data # y_train = data.train_labels # # Scale images from [0,255] to [0,+1] # x_train = x_train.float() / 255 # # Save as .npz # np.savez_compressed('data/mnist_train', # a=x_train, # b=y_train) # # 10000 tuples with 1x28x28 image and corresponding label # data = MNIST('data', # train=False, # download=True, # transform = transforms.Compose([transforms.ToTensor()])) # # Split data into images and labels # x_test = data.test_data # y_test = data.test_labels # # Scale images from [0,255] to [0,+1] # x_test = x_test.float() / 255 # # Save as .npz # np.savez_compressed('data/mnist_test', # a=x_test, # b=y_test) # Load MNIST data locally train_data = np.load('data/mnist_train.npz') x_train = torch.Tensor(train_data['a']) y_train = torch.Tensor(train_data['b']) n_classes = len(np.unique(y_train)) test_data = np.load('data/mnist_test.npz') x_test = torch.Tensor(test_data['a']) y_test = torch.Tensor(test_data['b']) # Visualise data plt.rcParams.update({'font.size': 16}) fig, axes = plt.subplots(1,4, figsize=(35,35)) imx, imy = (28,28) labels = [0,1,2,3] for i, ax in enumerate(axes): visual = np.reshape(x_train[labels[i]], (imx,imy)) ax.set_title("Example Data Image, y="+str(int(y_train[labels[i]]))) ax.imshow(visual, vmin=0, vmax=1) plt.show() ``` ## Models ``` class DeepVIB(nn.Module): def __init__(self, input_shape, output_shape, z_dim): """ Deep VIB Model. Arguments: ---------- input_shape : `int` Flattened size of image. (Default=784) output_shape : `int` Number of classes. (Default=10) z_dim : `int` The dimension of the latent variable z. 
(Default=256) """ super(DeepVIB, self).__init__() self.input_shape = input_shape self.output_shape = output_shape self.z_dim = z_dim # build encoder self.encoder = nn.Sequential(nn.Linear(input_shape, 1024), nn.ReLU(inplace=True), nn.Linear(1024, 1024), nn.ReLU(inplace=True) ) self.fc_mu = nn.Linear(1024, self.z_dim) self.fc_std = nn.Linear(1024, self.z_dim) # build decoder self.decoder = nn.Linear(self.z_dim, output_shape) def encode(self, x): """ x : [batch_size,784] """ x = self.encoder(x) return self.fc_mu(x), F.softplus(self.fc_std(x)-5, beta=1) def decode(self, z): """ z : [batch_size,z_dim] """ return self.decoder(z) def reparameterise(self, mu, std): """ mu : [batch_size,z_dim] std : [batch_size,z_dim] """ # get epsilon from standard normal eps = torch.randn_like(std) return mu + std*eps def forward(self, x): """ Forward pass Parameters: ----------- x : [batch_size,28,28] """ # flattent image x_flat = x.view(x.size(0), -1) mu, std = self.encode(x_flat) z = self.reparameterise(mu, std) return self.decode(z), mu, std ``` ## Training ``` # Hyperparameters beta = 1e-3 z_dim = 256 epochs = 200 batch_size = 128 learning_rate = 1e-4 decay_rate = 0.97 # Create DatatLoader train_dataset = data_utils.TensorDataset(x_train, y_train) train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True) # Loss function: Cross Entropy Loss (CE) + beta*KL divergence def loss_function(y_pred, y, mu, std): """ y_pred : [batch_size,10] y : [batch_size,10] mu : [batch_size,z_dim] std: [batch_size,z_dim] """ CE = F.cross_entropy(y_pred, y, reduction='sum') KL = 0.5 * torch.sum(mu.pow(2) + std.pow(2) - 2*std.log() - 1) return (beta*KL + CE) / y.size(0) # Initialize Deep VIB vib = DeepVIB(np.prod(x_train[0].shape), n_classes, z_dim) # Optimiser optimiser = torch.optim.Adam(vib.parameters(), lr=learning_rate) scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer=optimiser, gamma=decay_rate) # Send to GPU if available vib.to(device) print("Device: ", device) print(vib) # Training measures = defaultdict(list) start_time = time.time() # put Deep VIB into train mode vib.train() for epoch in range(epochs): epoch_start_time = time.time() # exponential decay of learning rate every 2 epochs if epoch % 2 == 0 and epoch > 0: scheduler.step() batch_loss = 0 batch_accuracy = 0 for _, (X,y) in enumerate(train_dataloader): X = X.to(device) y = y.long().to(device) # Zero accumulated gradients vib.zero_grad() # forward pass through Deep VIB y_pred, mu, std = vib(X) # Calculate loss loss = loss_function(y_pred, y, mu, std) # Backpropogation: calculating gradients loss.backward() # Update parameters of generator optimiser.step() # Save loss per batch batch_loss += loss.item()*X.size(0) # Save accuracy per batch y_pred = torch.argmax(y_pred,dim=1) batch_accuracy += int(torch.sum(y == y_pred)) # Save losses per epoch measures['total_loss'].append(batch_loss / len(train_dataloader.dataset)) # Save accuracy per epoch measures['accuracy'].append(batch_accuracy / len(train_dataloader.dataset)) print("Epoch: {}/{}...".format(epoch+1, epochs), "Loss: {:.4f}...".format(measures['total_loss'][-1]), "Accuracy: {:.4f}...".format(measures['accuracy'][-1]), "Time Taken: {:,.4f} seconds".format(time.time()-epoch_start_time)) print("Total Time Taken: {:,.4f} seconds".format(time.time()-start_time)) ``` ## Testing ``` # Create DatatLoader test_dataset = data_utils.TensorDataset(x_test, y_test) test_dataloader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=True) measures = 
defaultdict(int)
start_time = time.time()

# put Deep VIB into eval mode
vib.eval()
with torch.no_grad():
    for _, (X,y) in enumerate(test_dataloader):
        X = X.to(device)
        y = y.long().to(device)

        # forward pass through Deep VIB
        y_pred, mu, std = vib(X)

        y_pred = torch.argmax(y_pred, dim=1)
        measures['accuracy'] += int(torch.sum(y == y_pred))

print("Accuracy: {:.4f}...".format(measures['accuracy']/len(test_dataloader.dataset)),
      "Time Taken: {:,.4f} seconds".format(time.time()-start_time))
```
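As a sanity check on the closed-form KL term used in `loss_function` above, the same quantity can be recomputed with `torch.distributions`; the toy tensors below are arbitrary stand-ins for encoder outputs, not values produced by the trained model:

```
import torch
from torch.distributions import Normal, kl_divergence

torch.manual_seed(0)
mu = torch.randn(4, 3)         # stand-in encoder means
std = torch.rand(4, 3) + 0.1   # stand-in encoder standard deviations (kept positive)

# closed form used in loss_function: 0.5 * sum(mu^2 + std^2 - 2*log(std) - 1)
manual_kl = 0.5 * torch.sum(mu.pow(2) + std.pow(2) - 2 * std.log() - 1)

# the same quantity via torch.distributions, summed over batch and latent dimensions
library_kl = kl_divergence(Normal(mu, std),
                           Normal(torch.zeros_like(mu), torch.ones_like(std))).sum()

print(manual_kl.item(), library_kl.item())  # should agree up to floating-point error
```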
Fast Proportional Selection === [RETWEET] Proportional selection -- or, roulette wheel selection -- comes up frequently when developing agent-based models. Based on the code I have read over the years, researchers tend to write proportional selection as either a linear walk or a bisecting search. I compare the two approaches, then introduce Lipowski and Lipowska's [stochastic acceptance algorithm](http://arxiv.org/abs/1109.3627). For most of our uses, I argue that their algorithm is a better choice. See also: This IPython notebook's [repository on GitHub](https://github.com/jbn/fast_proportional_selection). Preliminaries --- I will only use Python's internal random module for random number generation. I include numpy and pandas convenience only when running demos. I import seaborne because it overrides some matplotlib defaults in a pretty way. ``` import random from bisect import bisect_left import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns %matplotlib inline ``` A Proportional Selection Base Class --- I am interested in testing how the different algorithms perform when *implemented* and *used*. Algorithmic analysis gives us asymptotic estimations. From these, we know which algorithm should be the fastest, in the limit. But, for trivial values of $n$, asymptotics can be misleading. $O(1)$ is not always faster than $O(n)$. $O(1)$ is really $O(c)$, and $c$ can be very costly! I use a base class, `ProportionalSelection`, to generate an equal playing field. This class takes the size of the vector of frequencies representing a distribution. I use frequencies because they are more natural to think about, and are easier to update. The client calls the `normalize` method any time the underlying frequencies change. Call the object like a dictionary to update a frequency. ``` class PropSelection(object): def __init__(self, n): self._n = n self._frequencies = [0] * n def copy_from(self, values): assert len(values) == self._n for i, x in enumerate(values): self[i] = x def __getitem__(self, i): return self._frequencies[i] def normalize(self): pass ``` Linear Walk --- Sampling via linear walk is $O(n)$. The algorithm generates a random number between 0 and the sum of the frequencies. Then, it walks through the array of frequencies, producing a running total. At some point the running total exceeds the generated threshold. The index at that point is the selection. The algorithm has no cost associated with updates to the underlying frequency distribution. ``` class LinearWalk(PropSelection): def __init__(self, n): super(LinearWalk, self).__init__(n) self._total = 0 def __setitem__(self, i, x): self._total += (x - self._frequencies[i]) self._frequencies[i] = x def sample(self): terminal_cdf_point = random.randint(0, self._total - 1) accumulator = 0 for i, k in enumerate(self._frequencies): accumulator += k if accumulator > terminal_cdf_point: return i ``` Bisecting Search --- Sampling via bisecting search is $O(log~n)$. From an asymptotic perspective, this is better than a linear walk. However, the algorithm achieves this by spending some compute time up front. That is, before sampling occurs. It cannot sample directly over the frequency distribution. Instead, it transforms the frequencies into a cumulative density function (CDF). This is an $O(n)$ operation. It must occur every time an element in the frequency distribution changes. Given the CDF, the algorithm draws a random number from [0, 1). 
It then uses bisection to identify the insertion point in the CDF for this number. This point is the selected index. ``` class BisectingSearch(PropSelection): def __init__(self, n): super(BisectingSearch, self).__init__(n) self._cdf = None self._total = 0 def __setitem__(self, i, x): self._total += (x - self._frequencies[i]) self._frequencies[i] = x def normalize(self): total = float(sum(self._frequencies)) cdf = [] accumulator = 0.0 for x in self._frequencies: accumulator += (x / float(total)) cdf.append(accumulator) self._cdf = cdf def sample(self): return bisect_left(self._cdf, random.random()) ``` Stochastic Acceptance --- For sampling, stochastic acceptance is $O(1)$. With respect to time, this dominates both the linear walk and bisecting search methods. Yet, this is asymptotic. The algorithm generates many random numbers per selection. In fact, the number of random variates grows in proportion to $n$. So, the random number generator matters. This algorithm has another advantage. It can operate on the raw frequency distribution, like linear walk. It only needs to track the maximum value in the frequency distribution. ``` class StochasticAcceptance(PropSelection): def __init__(self, n): super(StochasticAcceptance, self).__init__(n) self._max_value = 0 def __setitem__(self, i, x): last_x = self._frequencies[i] if x > self._max_value: self._max_value = float(x) elif last_x == self._max_value and x < last_x: self._max_value = float(max(self._frequencies)) self._frequencies[i] = x def sample(self): n = self._n max_value = self._max_value freqs = self._frequencies while True: i = int(n * random.random()) if random.random() < freqs[i] / max_value: return i ``` First Demonstration: Sampling --- The following code generates a target frequency distribution. Then, it instantiates each algorithm; copies the frequency distribution; and, draws 10,000 samples. For each algorithm, this code compiles the resulting probability distribution. For comparison, I plot these side by side. In the figure below, the target distribution is to the left (green). The linear walk, bisecting search, and stochastic acceptance algorithms are to the right of the targert distribution (blue). Visually, there is a compelling case for the distributions being equal. ``` fig, ax = plt.subplots(1, 4, sharey=True, figsize=(10,2)) def plot_proportions(xs, ax, **kwargs): xs = pd.Series(xs) xs /= xs.sum() return xs.plot(kind='bar', ax=ax, **kwargs) def sample_and_plot(roulette_algo, ax, n_samples=10000, **kwargs): samples = [roulette_algo.sample() for _ in range(n_samples)] value_counts = pd.Series(samples).value_counts().sort_index() props = (value_counts / value_counts.sum()) props.plot(kind='bar', ax=ax, **kwargs) return samples freqs = np.random.randint(1, 100, 10) plot_proportions(freqs, ax[0], color=sns.color_palette()[1], title="Target Distribution") klasses = [LinearWalk, BisectingSearch, StochasticAcceptance] for i, klass in enumerate(klasses): algo = klass(len(freqs)) algo.copy_from(freqs) algo.normalize() name = algo.__class__.__name__ xs = sample_and_plot(algo, ax=plt.subplot(ax[i+1]), title=name) ``` Second Demonstration: Performance Testing --- The following code times the sample method for each algorithm. I am using the `timeit` module's `default_timer` for timing. For such fast functions, this may lead to measurement error. But, over 10,000 samples, I expect these errors to wash out. 
```
import timeit

def sample_n_times(algo, n):
    samples = []
    for _ in range(n):
        start = timeit.default_timer()
        algo.sample()
        samples.append(timeit.default_timer() - start)
    return np.array(samples)

timings = []
for i, klass in enumerate(klasses):
    algo = klass(len(freqs))
    algo.copy_from(freqs)
    algo.normalize()
    name = algo.__class__.__name__
    timings.append((name, sample_n_times(algo, 10000)))
```

The graph immediately below plots the distribution of timings for each algorithm. I truncate the results, limiting the range to everything less than the 90th percentile. (I'll explain why momentarily.)

Bisecting search appears to be the fastest and the most stable. This makes sense. It has nice worst-case properties. Stochastic acceptance and linear walk both display variability in timings. Again, the timer is not very precise. But, since bisecting search used the same timer, a comparison is possible. Linear walk has a worst-case performance of $O(n)$. That is, if it starts at index 0 and generates the maximum value, it has to traverse the entire array. Stochastic acceptance generates a stream of random numbers until finding an acceptable one. Technically, this algorithm has no limit. It could loop infinitely, waiting for a passing condition. But, probabilistically, this is fantastically unlikely. (Sometimes, you come across coders saying code like this is incorrect. That's pretty absurd. Most of the time, the probability of pathological conditions is so small, it's irrelevant. Most of the time, the machine running your code is more likely to crumble to dust before an error manifests.)

For real-time code, timing variability matters. Introduce some jitter into something like an HFT algorithm, and you lose. But, for agent-based models and offline machine learning, variability doesn't matter. For us, averages matter.

```
values = np.vstack([times for _, times in timings]).T
values = values[np.all(values < np.percentile(values, 90, axis=0), axis=1)]
sns.boxplot(values, names=[name for name, _ in timings]);
```

The relationship between algorithms remains the same. But, the difference between linear walk and stochastic acceptance grows. Over the entire distribution, stochastic acceptance lags both linear walk and bisecting search.

```
values = np.vstack([times for _, times in timings]).T
values = values[np.all(values > np.percentile(values, 90, axis=0), axis=1)]
sns.boxplot(values, names=[name for name, _ in timings]);
```

Third Demonstration: Average Time as a Function of N
---
The previous demonstrations fixed n to 10. What happens as n increases?

```
import timeit

def sample_n_times(algo, n):
    samples = []
    for _ in range(n):
        start = timeit.default_timer()
        algo.sample()
        samples.append(timeit.default_timer() - start)
    return np.array(samples)

averages = []
for n in [10, 100, 1000, 10000, 100000, 1000000]:
    row = {'n': n}
    freqs = np.random.randint(1, 100, n)
    for i, klass in enumerate(klasses):
        algo = klass(len(freqs))
        algo.copy_from(freqs)
        algo.normalize()
        name = algo.__class__.__name__
        row[name] = np.mean(sample_n_times(algo, 10000))
    averages.append(row)
```

The following graph plots the average time as a function of n, the number of elements in the distribution. There is nothing unexpected. Linear walk gets increasingly terrible. It's $O(n)$. Bisecting search outperforms stochastic acceptance. They appear to be converging. But, this convergence occurs at the extreme end of n. Few simulations sample over a distribution of 1,000,000 values. At this point, it seems like bisecting search is the best choice.
``` averages_df = pd.DataFrame(averages).set_index('n') averages_df.plot(logy=True, logx=True, style={'BisectingSearch': 'o-', 'LinearWalk': 's--', 'StochasticAcceptance': 'd:'})#marker='o') plt.ylabel('$Average runtime$'); ``` Fourth Demonstration: Time Given a Dynamic Distribution --- Many of my simulations use proportional selection with dynamic proportions. For example, consider preferential attachment in social network generation. Edges form probabilistically, proportional to a node's degree. But, when an edge forms, the degree changes as well. In this case, the distribution changes *for each sample*! Below, I repeat the previous experiment, but I change the distribution and call `normalize` before each sample. Bisecting search is now the loser in this race. After each frequency alteration, it runs an expensive $O(n)$ operation. Then, it still must run its $O(log~n)$ operation at sample time. Linear walk and stochastic acceptance incur almost no performance penalty for alterations. Linear walk merely updates the total count. And, stochastic acceptance only runs a calculation if the alteration reduces the maximum value. (This hints at an important exception. As the range of frequencies narrows and the number of elements increases, performance suffers. The number of $O(n)$ searches for the new maximum becomes expensive.) ``` import timeit def normalize_sample_n_times(algo, n_samples, n): samples = [] for _ in range(n_samples): algo[random.randint(0, n-1)] = random.randint(1, 100) start = timeit.default_timer() algo.normalize() algo.sample() samples.append(timeit.default_timer() - start) return np.array(samples) averages = [] for n in [10, 100, 1000, 10000, 100000]: row = {'n': n} freqs = np.random.randint(1, 100, n) for i, klass in enumerate(klasses): algo = klass(len(freqs)) algo.copy_from(freqs) algo.normalize() name = algo.__class__.__name__ row[name] = np.mean(normalize_sample_n_times(algo, 1000, n)) averages.append(row) averages_df = pd.DataFrame(averages).set_index('n') averages_df.plot(logy=True, logx=True, style={'BisectingSearch': 'o-', 'LinearWalk': 's--', 'StochasticAcceptance': 'd:'})#marker='o') plt.ylabel('$Average runtime$'); ``` Conclusions --- "Premature optimization is the root of all evil." This is programmer's cannon -- The Gospel According to Knuth. Certainly, I'm not advocating for heresy. I wrote this post after a project of mine demanded better performance. Execution took too long. I did some profiling. It told me that proportional selection dominated execution time. So, partially, this notebook is a guide for modelers in similar situations. Given a dynamic distribution and proportional selection, stochastic acceptance has great performance. Outside of dynamic distributions, it does not dominate performance-wise in all cases. And, it is subject to jitter, making it questionable for real-time systems. But, it is robust across a variety of usage patterns. Furthermore, the algorithm is straight-forward to implement. The following code expresses it in it's simplest form. There are no dependencies, other than to a random number generator. And random number generation is a universal facility in programming languages. ```python def random_proportional_selection(freqs, max_freq): n = len(freqs) while True: i = int(n * random.random()) if random.random() < (freqs[i] / float(max_freq)): return i ``` Given these properties, I think it makes a good default approach to proportional selection. And, rewards accrue to those who collect good defaults.
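For comparison with the snippet above, here is a minimal sketch of the bisecting-search idea built on a cumulative-sum table. This is my own illustration, not necessarily how the `BisectingSearch` class timed above is implemented.

```python
import random
from bisect import bisect_right
from itertools import accumulate

def bisecting_proportional_selection(freqs):
    """Sample index i with probability freqs[i] / sum(freqs) via binary search."""
    cumulative = list(accumulate(freqs))   # e.g. [2, 5, 6] for freqs [2, 3, 1]
    r = random.random() * cumulative[-1]   # uniform draw in [0, total)
    return bisect_right(cumulative, r)     # first index whose running total exceeds r
```

For a static distribution, the cumulative table would be built once (the $O(n)$ cost that the dynamic-distribution benchmark keeps paying) and only the $O(log~n)$ search repeated per sample.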
### Prepare Data Install pytorch and torchvision: ```bash conda install pytorch torchvision -c pytorch ``` Download cifar10 data and save to a simple binary file: ``` import torchvision import os, pickle import numpy as np def create_dataset(): trainset = torchvision.datasets.CIFAR10(root='./data', download=True) fname = "./data/cifar-10-batches-py/data_batch_1" fo = open(fname, 'rb') entry = pickle.load(fo, encoding='latin1') train_data = entry['data'] fo.close() train_data.tofile("train_data.dat") create_dataset() ``` Now we load and transform the input data using HPAT: ``` import time import hpat from hpat import prange import cv2 hpat.multithread_mode = True cv2.setNumThreads(0) # we use threading across images @hpat.jit(locals={'images:return': 'distributed'}) def read_data(): file_name = "train_data.dat" blob = np.fromfile(file_name, np.uint8) # reshape to images n_channels = 3 height = 32 width = 32 n_images = len(blob)//(n_channels*height*width) data = blob.reshape(n_images, height, width, n_channels) # resize resize_len = 224 images = np.empty((n_images, resize_len, resize_len, n_channels), np.uint8) for i in prange(n_images): images[i] = cv2.resize(data[i], (resize_len, resize_len)) # convert from [0,255] to [0.0,1.0] # normalize u2f_ratio = np.float32(255.0) c0_m = np.float32(0.485) c1_m = np.float32(0.456) c2_m = np.float32(0.406) c0_std = np.float32(0.229) c1_std = np.float32(0.224) c2_std = np.float32(0.225) for i in prange(n_images): images[i,:,:,0] = (images[i,:,:,0]/ u2f_ratio - c0_m) / c0_std images[i,:,:,1] = (images[i,:,:,1]/ u2f_ratio - c1_m) / c1_std images[i,:,:,2] = (images[i,:,:,2]/ u2f_ratio - c2_m) / c2_std # convert to CHW images = images.transpose(0, 3, 1, 2) return images t1 = time.time() imgs = read_data() print("data read time", time.time()-t1) ``` The `'V:return':'distributed'` annotation indicates that chunks of array `V` are returned in distributed fashion, instead of replicating it which is the default behavior for return. The I/O function `np.fromfile`, as well as all operations on images are parallelized by HPAT. Let's run a simple resnet18 DNN using pretrained weights as an example. We run only on 100 images for faster demonstration. ``` from torch import Tensor from torch.autograd import Variable model = torchvision.models.resnet18(True) t1 = time.time() res = model(Variable(Tensor(imgs[:100]))) print("dnn time", time.time()-t1) ``` Now we use HPAT to get some statistics on the results. ``` # get top class stats vals, inds = res.max(1) import pandas as pd @hpat.jit(locals={'vals:input': 'distributed', 'inds:input': 'distributed'}) def get_stats(vals, inds): df = pd.DataFrame({'vals': vals, 'classes': inds}) stat = df.describe() print(stat) TRUCK = 717 print((inds == TRUCK).sum()) get_stats(vals.data.numpy(), inds.data.numpy()) ``` Similar to distributed return annotation, distributed inputs are annotated as well.
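As a point of reference, the per-channel normalization inside `read_data` can also be written with plain NumPy broadcasting. The sketch below is my own addition (not part of the original example, and without HPAT's `prange` parallelization); the small random batch is purely hypothetical.

```
import numpy as np

# hypothetical small batch of HWC uint8 images, just to illustrate the arithmetic
images = np.random.randint(0, 256, size=(4, 224, 224, 3), dtype=np.uint8)

mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)

# scale to [0, 1], normalize per channel (broadcast over the last axis), then NHWC -> NCHW
normalized = (images.astype(np.float32) / 255.0 - mean) / std
normalized = normalized.transpose(0, 3, 1, 2)
print(normalized.shape)  # (4, 3, 224, 224)
```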
<h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc" style="margin-top: 1em;"><ul class="toc-item"><li><span><a href="#Name" data-toc-modified-id="Name-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Name</a></span></li><li><span><a href="#Search" data-toc-modified-id="Search-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Search</a></span><ul class="toc-item"><li><span><a href="#Load-Cached-Results" data-toc-modified-id="Load-Cached-Results-2.1"><span class="toc-item-num">2.1&nbsp;&nbsp;</span>Load Cached Results</a></span></li><li><span><a href="#Build-Model-From-Google-Images" data-toc-modified-id="Build-Model-From-Google-Images-2.2"><span class="toc-item-num">2.2&nbsp;&nbsp;</span>Build Model From Google Images</a></span></li></ul></li><li><span><a href="#Analysis" data-toc-modified-id="Analysis-3"><span class="toc-item-num">3&nbsp;&nbsp;</span>Analysis</a></span><ul class="toc-item"><li><span><a href="#Gender-cross-validation" data-toc-modified-id="Gender-cross-validation-3.1"><span class="toc-item-num">3.1&nbsp;&nbsp;</span>Gender cross validation</a></span></li><li><span><a href="#Face-Sizes" data-toc-modified-id="Face-Sizes-3.2"><span class="toc-item-num">3.2&nbsp;&nbsp;</span>Face Sizes</a></span></li><li><span><a href="#Screen-Time-Across-All-Shows" data-toc-modified-id="Screen-Time-Across-All-Shows-3.3"><span class="toc-item-num">3.3&nbsp;&nbsp;</span>Screen Time Across All Shows</a></span></li><li><span><a href="#Appearances-on-a-Single-Show" data-toc-modified-id="Appearances-on-a-Single-Show-3.4"><span class="toc-item-num">3.4&nbsp;&nbsp;</span>Appearances on a Single Show</a></span></li><li><span><a href="#Other-People-Who-Are-On-Screen" data-toc-modified-id="Other-People-Who-Are-On-Screen-3.5"><span class="toc-item-num">3.5&nbsp;&nbsp;</span>Other People Who Are On Screen</a></span></li></ul></li><li><span><a href="#Persist-to-Cloud" data-toc-modified-id="Persist-to-Cloud-4"><span class="toc-item-num">4&nbsp;&nbsp;</span>Persist to Cloud</a></span><ul class="toc-item"><li><span><a href="#Save-Model-to-Google-Cloud-Storage" data-toc-modified-id="Save-Model-to-Google-Cloud-Storage-4.1"><span class="toc-item-num">4.1&nbsp;&nbsp;</span>Save Model to Google Cloud Storage</a></span></li><li><span><a href="#Save-Labels-to-DB" data-toc-modified-id="Save-Labels-to-DB-4.2"><span class="toc-item-num">4.2&nbsp;&nbsp;</span>Save Labels to DB</a></span><ul class="toc-item"><li><span><a href="#Commit-the-person-and-labeler" data-toc-modified-id="Commit-the-person-and-labeler-4.2.1"><span class="toc-item-num">4.2.1&nbsp;&nbsp;</span>Commit the person and labeler</a></span></li><li><span><a href="#Commit-the-FaceIdentity-labels" data-toc-modified-id="Commit-the-FaceIdentity-labels-4.2.2"><span class="toc-item-num">4.2.2&nbsp;&nbsp;</span>Commit the FaceIdentity labels</a></span></li></ul></li></ul></li></ul></div> ``` from esper.prelude import * from esper.identity import * from esper import embed_google_images ``` # Name Please add the person's name and their expected gender below (Male/Female). ``` name = 'Yasmin Vossoughian' gender = 'Female' ``` # Search ## Load Cached Results Reads cached identity model from local disk. Run this if the person has been labelled before and you only wish to regenerate the graphs. Otherwise, if you have never created a model for this person, please see the next section. 
``` assert name != '' results = FaceIdentityModel.load(name=name) imshow(tile_imgs([cv2.resize(x[1][0], (200, 200)) for x in results.model_params['images']], cols=10)) plt.show() plot_precision_and_cdf(results) ``` ## Build Model From Google Images Run this section if you do not have a cached model and precision curve estimates. This section will grab images using Google Image Search and score each of the faces in the dataset. We will interactively build the precision vs score curve. It is important that the images that you select are accurate. If you make a mistake, rerun the cell below. ``` assert name != '' # Grab face images from Google img_dir = embed_google_images.fetch_images(name) # If the images returned are not satisfactory, rerun the above with extra params: # query_extras='' # additional keywords to add to search # force=True # ignore cached images face_imgs = load_and_select_faces_from_images(img_dir) face_embs = embed_google_images.embed_images(face_imgs) assert(len(face_embs) == len(face_imgs)) reference_imgs = tile_imgs([cv2.resize(x[0], (200, 200)) for x in face_imgs if x], cols=10) def show_reference_imgs(): print('User selected reference images for {}.'.format(name)) imshow(reference_imgs) plt.show() show_reference_imgs() # Score all of the faces in the dataset (this can take a minute) face_ids_by_bucket, face_ids_to_score = face_search_by_embeddings(face_embs) precision_model = PrecisionModel(face_ids_by_bucket) ``` Now we will validate which of the images in the dataset are of the target identity. __Hover over with mouse and press S to select a face. Press F to expand the frame.__ ``` show_reference_imgs() print(('Mark all images that ARE NOT {}. Thumbnails are ordered by DESCENDING distance ' 'to your selected images. (The first page is more likely to have non "{}" images.) ' 'There are a total of {} frames. (CLICK THE DISABLE JUPYTER KEYBOARD BUTTON ' 'BEFORE PROCEEDING.)').format( name, name, precision_model.get_lower_count())) lower_widget = precision_model.get_lower_widget() lower_widget show_reference_imgs() print(('Mark all images that ARE {}. Thumbnails are ordered by ASCENDING distance ' 'to your selected images. (The first page is more likely to have "{}" images.) ' 'There are a total of {} frames. (CLICK THE DISABLE JUPYTER KEYBOARD BUTTON ' 'BEFORE PROCEEDING.)').format( name, name, precision_model.get_lower_count())) upper_widget = precision_model.get_upper_widget() upper_widget ``` Run the following cell after labelling to compute the precision curve. Do not forget to re-enable jupyter shortcuts. ``` # Compute the precision from the selections lower_precision = precision_model.compute_precision_for_lower_buckets(lower_widget.selected) upper_precision = precision_model.compute_precision_for_upper_buckets(upper_widget.selected) precision_by_bucket = {**lower_precision, **upper_precision} results = FaceIdentityModel( name=name, face_ids_by_bucket=face_ids_by_bucket, face_ids_to_score=face_ids_to_score, precision_by_bucket=precision_by_bucket, model_params={ 'images': list(zip(face_embs, face_imgs)) } ) plot_precision_and_cdf(results) ``` The next cell persists the model locally. ``` results.save() ``` # Analysis ## Gender cross validation Situations where the identity model disagrees with the gender classifier may be cause for alarm. We would like to check that instances of the person have the expected gender as a sanity check. This section shows the breakdown of the identity instances and their labels from the gender classifier. 
``` gender_breakdown = compute_gender_breakdown(results) print('Expected counts by gender:') for k, v in gender_breakdown.items(): print(' {} : {}'.format(k, int(v))) print() print('Percentage by gender:') denominator = sum(v for v in gender_breakdown.values()) for k, v in gender_breakdown.items(): print(' {} : {:0.1f}%'.format(k, 100 * v / denominator)) print() ``` Situations where the identity detector returns high confidence, but where the gender is not the expected gender indicate either an error on the part of the identity detector or the gender detector. The following visualization shows randomly sampled images, where the identity detector returns high confidence, grouped by the gender label. ``` high_probability_threshold = 0.8 show_gender_examples(results, high_probability_threshold) ``` ## Face Sizes Faces shown on-screen vary in size. For a person such as a host, they may be shown in a full body shot or as a face in a box. Faces in the background or those part of side graphics might be smaller than the rest. When calculuating screentime for a person, we would like to know whether the results represent the time the person was featured as opposed to merely in the background or as a tiny thumbnail in some graphic. The next cell, plots the distribution of face sizes. Some possible anomalies include there only being very small faces or large faces. ``` plot_histogram_of_face_sizes(results) ``` The histogram above shows the distribution of face sizes, but not how those sizes occur in the dataset. For instance, one might ask why some faces are so large or whhether the small faces are actually errors. The following cell groups example faces, which are of the target identity with probability, by their sizes in terms of screen area. ``` high_probability_threshold = 0.8 show_faces_by_size(results, high_probability_threshold, n=10) ``` ## Screen Time Across All Shows One question that we might ask about a person is whether they received a significantly different amount of screentime on different shows. The following section visualizes the amount of screentime by show in total minutes and also in proportion of the show's total time. For a celebrity or political figure such as Donald Trump, we would expect significant screentime on many shows. For a show host such as Wolf Blitzer, we expect that the screentime be high for shows hosted by Wolf Blitzer. ``` screen_time_by_show = get_screen_time_by_show(results) plot_screen_time_by_show(name, screen_time_by_show) ``` ## Appearances on a Single Show For people such as hosts, we would like to examine in greater detail the screen time allotted for a single show. First, fill in a show below. ``` show_name = 'First Look' # Compute the screen time for each video of the show screen_time_by_video_id = compute_screen_time_by_video(results, show_name) ``` One question we might ask about a host is "how long they are show on screen" for an episode. Likewise, we might also ask for how many episodes is the host not present due to being on vacation or on assignment elsewhere. The following cell plots a histogram of the distribution of the length of the person's appearances in videos of the chosen show. ``` plot_histogram_of_screen_times_by_video(name, show_name, screen_time_by_video_id) ``` For a host, we expect screentime over time to be consistent as long as the person remains a host. For figures such as Hilary Clinton, we expect the screentime to track events in the real world such as the lead-up to 2016 election and then to drop afterwards. 
The following cell plots a time series of the person's screentime over time. Each dot is a video of the chosen show. Red Xs are videos for which the face detector did not run. ``` plot_screentime_over_time(name, show_name, screen_time_by_video_id) ``` We hypothesized that a host is more likely to appear at the beginning of a video and then also appear throughout the video. The following plot visualizes the distibution of shot beginning times for videos of the show. ``` plot_distribution_of_appearance_times_by_video(results, show_name) ``` In the section 3.3, we see that some shows may have much larger variance in the screen time estimates than others. This may be because a host or frequent guest appears similar to the target identity. Alternatively, the images of the identity may be consistently low quality, leading to lower scores. The next cell plots a histogram of the probabilites for for faces in a show. ``` plot_distribution_of_identity_probabilities(results, show_name) ``` ## Other People Who Are On Screen For some people, we are interested in who they are often portrayed on screen with. For instance, the White House press secretary might routinely be shown with the same group of political pundits. A host of a show, might be expected to be on screen with their co-host most of the time. The next cell takes an identity model with high probability faces and displays clusters of faces that are on screen with the target person. ``` get_other_people_who_are_on_screen(results, k=25, precision_thresh=0.8) ``` # Persist to Cloud The remaining code in this notebook uploads the built identity model to Google Cloud Storage and adds the FaceIdentity labels to the database. ## Save Model to Google Cloud Storage ``` gcs_model_path = results.save_to_gcs() ``` To ensure that the model stored to Google Cloud is valid, we load it and print the precision and cdf curve below. ``` gcs_results = FaceIdentityModel.load_from_gcs(name=name) imshow(tile_imgs([cv2.resize(x[1][0], (200, 200)) for x in gcs_results.model_params['images']], cols=10)) plt.show() plot_precision_and_cdf(gcs_results) ``` ## Save Labels to DB If you are satisfied with the model, we can commit the labels to the database. ``` from django.core.exceptions import ObjectDoesNotExist def standardize_name(name): return name.lower() person_type = ThingType.objects.get(name='person') try: person = Thing.objects.get(name=standardize_name(name), type=person_type) print('Found person:', person.name) except ObjectDoesNotExist: person = Thing(name=standardize_name(name), type=person_type) print('Creating person:', person.name) labeler = Labeler(name='face-identity:{}'.format(person.name), data_path=gcs_model_path) ``` ### Commit the person and labeler The labeler and person have been created but not set saved to the database. If a person was created, please make sure that the name is correct before saving. ``` person.save() labeler.save() ``` ### Commit the FaceIdentity labels Now, we are ready to add the labels to the database. We will create a FaceIdentity for each face whose probability exceeds the minimum threshold. ``` commit_face_identities_to_db(results, person, labeler, min_threshold=0.001) print('Committed {} labels to the db'.format(FaceIdentity.objects.filter(labeler=labeler).count())) ```
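As an aside, the "expected counts" and thresholded commits used above can be thought of as probability-weighted tallies. The toy sketch below is purely illustrative (the face ids and probabilities are made up, and this is not the esper implementation): it sums per-face identity probabilities for an expected count and keeps only labels above a minimum threshold.

```
# hypothetical face_id -> P(face is the target identity)
face_probs = {101: 0.95, 102: 0.40, 103: 0.002, 104: 0.0005}

expected_count = sum(face_probs.values())
committed = [face_id for face_id, p in face_probs.items() if p >= 0.001]

print('expected count of appearances: {:.2f}'.format(expected_count))
print('labels committed (threshold 0.001): {}'.format(len(committed)))
```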
# Skip-gram Word2Vec In this notebook, I'll lead you through using PyTorch to implement the [Word2Vec algorithm](https://en.wikipedia.org/wiki/Word2vec) using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation. ## Readings Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material. * A really good [conceptual overview](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/) of Word2Vec from Chris McCormick * [First Word2Vec paper](https://arxiv.org/pdf/1301.3781.pdf) from Mikolov et al. * [Neural Information Processing Systems, paper](http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) with improvements for Word2Vec also from Mikolov et al. --- ## Word embeddings When you're dealing with words in text, you end up with tens of thousands of word classes to analyze; one for each word in a vocabulary. Trying to one-hot encode these words is massively inefficient because most values in a one-hot vector will be set to zero. So, the matrix multiplication that happens in between a one-hot input vector and a first, hidden layer will result in mostly zero-valued hidden outputs. <img src='assets/one_hot_encoding.png' width=50%> To solve this problem and greatly increase the efficiency of our networks, we use what are called **embeddings**. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding the index of the "on" input unit. <img src='assets/lookup_matrix.png' width=50%> Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an **embedding lookup** and the number of hidden units is the **embedding dimension**. <img src='assets/tokenize_lookup.png' width=50%> There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix. Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called **Word2Vec** uses the embedding layer to find vector representations of words that contain semantic meaning. --- ## Word2Vec The Word2Vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. <img src="assets/context_drink.png" width=40%> Words that show up in similar **contexts**, such as "coffee", "tea", and "water" will have vectors near each other. Different words will be further away from one another, and relationships can be represented by distance in vector space. 
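To make the "lookup table" point above concrete before moving on, here is a tiny sketch (my addition, not part of the original notebook) showing that an embedding layer's output is literally a row of its weight matrix.

```
import torch
from torch import nn

embedding = nn.Embedding(num_embeddings=10, embedding_dim=4)
tokens = torch.LongTensor([3, 7])

# the layer's forward pass is just row indexing into the weight matrix
print(torch.equal(embedding(tokens), embedding.weight[tokens]))  # True
```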
<img src="assets/vector_distance.png" width=40%> There are two architectures for implementing Word2Vec: >* CBOW (Continuous Bag-Of-Words) and * Skip-gram <img src="assets/word2vec_architectures.png" width=60%> In this implementation, we'll be using the **skip-gram architecture** because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts. --- ## Loading Data Next, we'll ask you to load in data and place it in the `data` directory 1. Load the [text8 dataset](https://s3.amazonaws.com/video.udacity-data.com/topher/2018/October/5bbe6499_text8/text8.zip); a file of cleaned up *Wikipedia article text* from Matt Mahoney. 2. Place that data in the `data` folder in the home directory. 3. Then you can extract it and delete the archive, zip file to save storage space. After following these steps, you should have one file in your data directory: `data/text8`. ``` with open('data/text8') as f: text = f.read() text[:100] ``` ## Pre-processing Here I'm fixing up the text to make training easier. This comes from the `utils.py` file. The `preprocess` function does a few things: >* It converts any punctuation into tokens, so a period is changed to ` <PERIOD> `. In this data set, there aren't any periods, but it will help in other NLP problems. * It removes all words that show up five or *fewer* times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. * It returns a list of words in the text. This may take a few seconds to run, since our text file is quite large. If you want to write your own functions for this stuff, go for it! ``` import utils words = utils.preprocess(text) words[:30] print("total number of words {}".format(len(words))) print("total number of unique words {}".format(len(list(set(words))))) ``` ### Dictionaries Next, I'm creating two dictionaries to convert words to integers and back again (integers to words). This is again done with a function in the `utils.py` file. `create_lookup_tables` takes in a list of words in a text and returns two dictionaries. >* The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1, and so on. Once we have our dictionaries, the words are converted to integers and stored in the list `int_words`. ``` vocab_to_int, int_to_vocab = utils.create_lookup_tables(words) int_words = [vocab_to_int[word] for word in words] ``` ## Subsampling Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by $$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$ where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset. $$ P(0) = 1 - \sqrt{\frac{1*10^{-5}}{1*10^6/16*10^6}} = 0.98735 $$ I'm going to leave this up to you as an exercise. Check out my solution to see how I did it. > **Exercise:** Implement subsampling for the words in `int_words`. That is, go through `int_words` and discard each word given the probablility $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. 
Assign the subsampled data to `train_words`. ``` from collections import Counter import random import numpy as np threshold = 1e-5 word_counts = Counter(int_words) #print(list(word_counts.items())[0]) # dictionary of int_words, how many times they appear total_count = len(int_words) freqs = {word: count/total_count for word, count in word_counts.items()} p_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts} # discard some frequent words, according to the subsampling equation # create a new list of words for training train_words = [word for word in int_words if random.random() < (1 - p_drop[word])] print(train_words[:30]) ``` Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to define a surrounding _context_ and grab all the words in a window around that word, with size $C$. From [Mikolov et al.](https://arxiv.org/pdf/1301.3781.pdf): "Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $[ 1: C ]$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels." > **Exercise:** Implement a function `get_target` that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you chose a random number of words to from the window. Say, we have an input and we're interested in the idx=2 token, `741`: ``` [5233, 58, 741, 10571, 27349, 0, 15067, 58112, 3580, 58, 10712] ``` For `R=2`, `get_target` should return a list of four values: ``` [5233, 58, 10571, 27349] ``` def get_target(words, idx, window_size=5): ''' Get a list of words in a window around an index. ''' R = np.random.randint(1, window_size+1) start = idx - R if (idx - R) > 0 else 0 stop = idx + R target_words = words[start:idx] + words[idx+1:stop+1] return list(target_words) # test your code! # run this cell multiple times to check for random window selection int_text = [i for i in range(10)] print('Input: ', int_text) idx=5 # word index of interest target = get_target(int_text, idx=idx, window_size=5) print('Target: ', target) ``` ### Generating Batches Here's a generator function that returns batches of input and target data for our model, using the `get_target` function from above. The idea is that it grabs `batch_size` words from a words list. Then for each of those batches, it gets the target words in a window. ``` def get_batches(words, batch_size, window_size=5): ''' Create a generator of word batches as a tuple (inputs, targets) ''' n_batches = len(words)//batch_size # only full batches words = words[:n_batches*batch_size] for idx in range(0, len(words), batch_size): x, y = [], [] batch = words[idx:idx+batch_size] for ii in range(len(batch)): batch_x = batch[ii] batch_y = get_target(batch, ii, window_size) y.extend(batch_y) x.extend([batch_x]*len(batch_y)) yield x, y int_text = [i for i in range(20)] x,y = next(get_batches(int_text, batch_size=4, window_size=5)) print('x\n', x) print('y\n', y) ``` ## Building the graph Below is an approximate diagram of the general structure of our network. <img src="assets/skip_gram_arch.png" width=60%> >* The input words are passed in as batches of input word tokens. 
* This will go into a hidden layer of linear units (our embedding layer). * Then, finally into a softmax output layer. We'll use the softmax layer to make a prediction about the context words by sampling, as usual. The idea here is to train the embedding layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in _other_ networks we build using this dataset. --- ## Validation Here, I'm creating a function that will help us observe our model as it learns. We're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them using the cosine similarity: <img src="assets/two_vectors.png" width=30%> $$ \mathrm{similarity} = \cos(\theta) = \frac{\vec{a} \cdot \vec{b}}{|\vec{a}||\vec{b}|} $$ We can encode the validation words as vectors $\vec{a}$ using the embedding table, then calculate the similarity with each word vector $\vec{b}$ in the embedding table. With the similarities, we can print out the validation words and words in our embedding table semantically similar to those words. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings. ``` def cosine_similarity(embedding, valid_size=16, valid_window=100, device='cpu'): """ Returns the cosine similarity of validation words with words in the embedding matrix. Here, embedding should be a PyTorch embedding module. """ # Here we're calculating the cosine similarity between some random words and # our embedding vectors. With the similarities, we can look at what words are # close to our random words. # sim = (a . b) / |a||b| embed_vectors = embedding.weight # magnitude of embedding vectors, |b| magnitudes = embed_vectors.pow(2).sum(dim=1).sqrt().unsqueeze(0) # pick N words from our ranges (0,window) and (1000,1000+window). lower id implies more frequent valid_examples = np.array(random.sample(range(valid_window), valid_size//2)) valid_examples = np.append(valid_examples, random.sample(range(1000,1000+valid_window), valid_size//2)) valid_examples = torch.LongTensor(valid_examples).to(device) valid_vectors = embedding(valid_examples) similarities = torch.mm(valid_vectors, embed_vectors.t())/magnitudes return valid_examples, similarities ``` ## SkipGram model Define and train the SkipGram model. > You'll need to define an [embedding layer](https://pytorch.org/docs/stable/nn.html#embedding) and a final, softmax output layer. An Embedding layer takes in a number of inputs, importantly: * **num_embeddings** – the size of the dictionary of embeddings, or how many rows you'll want in the embedding weight matrix * **embedding_dim** – the size of each embedding vector; the embedding dimension ``` import torch from torch import nn import torch.optim as optim class SkipGram(nn.Module): def __init__(self, n_vocab, n_embed): super().__init__() self.embed = nn.Embedding(n_vocab, n_embed) self.output = nn.Linear(n_embed, n_vocab) self.log_softmax = nn.LogSoftmax(dim=1) def forward(self, x): x = self.embed(x) scores = self.output(x) log_ps = self.log_softmax(scores) return log_ps ``` ### Training Below is our training loop, and I recommend that you train on GPU, if available. **Note that, because we applied a softmax function to our model output, we are using NLLLoss** as opposed to cross entropy. This is because Softmax in combination with NLLLoss = CrossEntropy loss . 
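A quick numerical check of that last claim (my addition, not part of the original notebook): `LogSoftmax` followed by `NLLLoss` gives the same value as `CrossEntropyLoss` applied directly to the raw scores.

```
import torch
from torch import nn

logits = torch.randn(8, 5)            # batch of 8, 5 classes
targets = torch.randint(0, 5, (8,))   # class indices

nll = nn.NLLLoss()(nn.LogSoftmax(dim=1)(logits), targets)
ce = nn.CrossEntropyLoss()(logits, targets)
print(torch.allclose(nll, ce))  # True
```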
``` # check if GPU is available device = 'cuda' if torch.cuda.is_available() else 'cpu' embedding_dim=300 # you can change, if you want model = SkipGram(len(vocab_to_int), embedding_dim).to(device) criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.003) print_every = 500 steps = 0 epochs = 5 # train for some number of epochs for e in range(epochs): # get input and target batches for inputs, targets in get_batches(train_words, 512): steps += 1 inputs, targets = torch.LongTensor(inputs), torch.LongTensor(targets) inputs, targets = inputs.to(device), targets.to(device) log_ps = model(inputs) loss = criterion(log_ps, targets) optimizer.zero_grad() loss.backward() optimizer.step() if steps % print_every == 0: # getting examples and similarities valid_examples, valid_similarities = cosine_similarity(model.embed, device=device) _, closest_idxs = valid_similarities.topk(6) # topk highest similarities valid_examples, closest_idxs = valid_examples.to('cpu'), closest_idxs.to('cpu') for ii, valid_idx in enumerate(valid_examples): closest_words = [int_to_vocab[idx.item()] for idx in closest_idxs[ii]][1:] print(int_to_vocab[valid_idx.item()] + " | " + ', '.join(closest_words)) print("...") ```
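Once training finishes, the only artifact we really need from the model is the embedding weight matrix itself. A short sketch of pulling it out for downstream use (my addition; the output filename is arbitrary):

```
# detach the learned embedding table as a plain numpy array: (vocab_size, embedding_dim)
embeddings = model.embed.weight.detach().cpu().numpy()
print(embeddings.shape)

# e.g. save it for reuse in other models
np.save('word_embeddings.npy', embeddings)
```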
<a href="https://colab.research.google.com/github/tiwarylab/State-Predictive-Information-Bottleneck/blob/main/SPIB_Demo.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # SPIB Demo 2021 This notebook aims to serve as a simple introduction to the state predictive information bottleneck method of [Wang and Tiwary 2021](https://aip.scitation.org/doi/abs/10.1063/5.0038198). The code is implemented using Pytorch. <img src="https://github.com/tiwarylab/State-Predictive-Information-Bottleneck/blob/main/fig/Fig_alg.png?raw=1"> <img src="https://github.com/tiwarylab/State-Predictive-Information-Bottleneck/blob/main/fig/Fig_FW_example.png?raw=1"> ## Clone the Github to Colab ``` !git clone https://github.com/tiwarylab/State-Predictive-Information-Bottleneck %matplotlib notebook %matplotlib inline import numpy as np import matplotlib.pyplot as plt import time plt.rcParams['figure.figsize'] = [25, 20] np.random.seed(42) large = 54; med = 36; small = 24 l_width = 3; m_width = 1.5; s_width = 0.7 params = {'axes.titlesize': large, 'legend.fontsize': large, 'legend.title_fontsize':large, 'figure.figsize': (16, 10), 'axes.labelsize': large, 'xtick.labelsize': med, 'ytick.labelsize': med, 'figure.titlesize': large, 'lines.linewidth': l_width, 'lines.markersize': 10, 'axes.linewidth': l_width, 'xtick.major.size': 8, 'ytick.major.size': 8, 'xtick.minor.size': 4, 'ytick.minor.size': 4, 'xtick.major.width': m_width, 'ytick.major.width': m_width, 'xtick.minor.width': s_width, 'ytick.minor.width': s_width, 'grid.linewidth': m_width} plt.rcParams.update(params) ``` ## Data Preparation The trajectory data can be generated from the molecular dynamics simulation or experiements. Here, we use a sample trajectory generated from Langevin dynamics simulation of a four-well analytical potential. 
``` # Load trajectory data traj_data = np.load("State-Predictive-Information-Bottleneck/examples/Four_Well_beta3_gamma4_traj_data.npy") ``` ### Visualization of the trajectory ``` fig, ax = plt.subplots(figsize=(12,10)) t = np.arange(traj_data.shape[0]) ax.plot(t[::100],traj_data[::100,0],'x',label='x') ax.plot(t[::100],traj_data[::100,1],'o',fillstyle='none',label='y') ax.set_xlabel('time step') ax.legend(fontsize=36,bbox_to_anchor=(0.99, 0.7)) # The four-well analytical potential along x def potential_fn_FW(x): A=0.6 a=80 B=0.2 b=80 C=0.5 c=40 return 2*(x**8+A*np.exp(-a*x**2)+B*np.exp(-b*(x-0.5)**2)+C*np.exp(-c*(x+0.5)**2))+(x**2-1)**2 from mpl_toolkits.axes_grid1 import make_axes_locatable fig, ax = plt.subplots(1,2,figsize=(18,8)) beta=3 lw=8 x=np.arange(-1,1,0.01) v=potential_fn_FW(x) ax[0].plot(x,v,color='k',lw=lw) ax[0].axvline(x=0,color='b',linestyle='--',lw=lw) ax[0].axvline(x=-0.5,color='b',linestyle='--',lw=lw) ax[0].axvline(x=0.5,color='b',linestyle='--',lw=lw) ax[0].text(-0.75, 1.8, 'A', horizontalalignment='center',fontsize=54) ax[0].text(-0.25, 1.8, 'B', horizontalalignment='center', fontsize=54) ax[0].text(0.25, 1.8, 'C', horizontalalignment='center',fontsize=54) ax[0].text(0.75, 1.8, 'D', horizontalalignment='center', fontsize=54) ax[0].set_xlabel("x") ax[0].set_ylabel("Potential") ax[0].text(-0.2, 1.2, '(a)', horizontalalignment='center', transform=ax[0].transAxes,fontsize=54, va='top') FW_counts,FW_xbins,FW_ybins,images = plt.hist2d(traj_data[:,0],traj_data[:,1],bins=100) FW_counts[FW_counts==0]=FW_counts[FW_counts!=0].min() FW_G=-np.log(FW_counts)/beta FW_G=FW_G-np.nanmin(FW_G) h0=ax[1].contourf(FW_G.transpose(),levels=5,extent=[FW_xbins[0],FW_xbins[-1],FW_ybins[0],FW_ybins[-1]],cmap='jet') divider = make_axes_locatable(ax[1]) cax = divider.append_axes("top", "5%", pad="3%") tickz = np.arange(0,FW_G.max(),1) cb1 = fig.colorbar(h0, cax=cax, orientation="horizontal",ticks=tickz) cb1.set_label('Free Energy',fontsize=48) cax.xaxis.set_ticks_position("top") cax.xaxis.set_label_position("top") ax[1].set_xlabel("x") ax[1].set_ylabel('y') ax[1].text(-0.2, 1.3, '(b)', horizontalalignment='center', transform=ax[1].transAxes,fontsize=54, va='top') plt.tight_layout(pad=0.4, w_pad=5, h_pad=3.0) ``` ### Generation of initial state labels ``` # discretize the system along x to 10 states as initial state labels index=0 x_max=traj_data[:,index].max()+0.01 x_min=traj_data[:,index].min()-0.01 state_num=10 eps=1e-3 x_det=(x_max-x_min+2*eps)/state_num init_label=np.zeros((traj_data.shape[0],state_num)) x_list=np.array([(x_min-eps+n*x_det) for n in range(state_num+1)]) for j in range(state_num): indices=(traj_data[:,index]>x_list[j])&(traj_data[:,index]<=x_list[j+1]) init_label[indices,j]=1 np.save('State-Predictive-Information-Bottleneck/examples/Four_Well_beta3_gamma4_init_label10.npy',init_label) # plot the initial state labels for four well potential system import matplotlib from matplotlib import colors as c data=traj_data labels=init_label fig0, ax0 = plt.subplots(figsize=(9,6)) hist=ax0.hist2d(data[:,0],data[:,1],bins=100) state_num=labels.shape[1] state_labels=np.arange(state_num) x_max=np.max(data[:,0]) x_min=np.min(data[:,0]) eps=1e-3 x_det=(x_max-x_min+2*eps)/state_num x_list=np.array([(x_min-eps+n*x_det) for n in range(state_num+1)]) hist_state=np.zeros([state_num]+list(hist[0].shape)) for i in range(state_num): hist_state[i]=ax0.hist2d(data[:,0],data[:,1],bins=[hist[1],hist[2]],weights=labels[:,i])[0] init_label_map=np.argmax(hist_state,axis=0).astype(float) 
init_label_map[hist[0]==0]=np.nan plt.close(fig0) fig, ax = plt.subplots(figsize=(9,6)) fmt = matplotlib.ticker.FuncFormatter(lambda x, pos: state_labels[x]) tickz = np.arange(0,len(state_labels)) cMap = c.ListedColormap(plt.cm.tab20.colors[0:10]) im=ax.pcolormesh(hist[1], hist[2], init_label_map.T, cmap=cMap, vmin=-0.5, vmax=len(state_labels)-0.5) cb1 = fig.colorbar(im,ax=ax,format=fmt, ticks=tickz) for i in range(state_num): ax.text((x_list[i]+x_list[i+1])/2,0,state_labels[i],horizontalalignment='center',verticalalignment='center',fontsize=32) plt.xlabel("x") plt.ylabel("y") ``` ## Model We provide two ways to run SPIB: test_model.py and test_model_advanced.py. Here, we will only discuss the use of test_model.py. But for advanced analyses, we will strongly recommend to use test_model_advanced.py as it provides more features to help you to control the training process and tune the hyper-parameters. ## Training ``` %run State-Predictive-Information-Bottleneck/test_model.py -dt 50 -d 1 -encoder_type Nonlinear -bs 512 -threshold 0.01 -patience 2 -refinements 8 -lr 0.001 -b 0.01 -seed 0 -label State-Predictive-Information-Bottleneck/examples/Four_Well_beta3_gamma4_init_label10.npy -traj State-Predictive-Information-Bottleneck/examples/Four_Well_beta3_gamma4_traj_data.npy ``` ## Result Analysis ``` prefix='SPIB/Unweighted_d=1_t=50_b=0.0100_learn=0.001000' repeat='0' # load the results # the deterministic part of RC leanred by SPIB (the mean of output gaussian distrition of the encoder) traj_mean_rep=np.load(prefix+"_traj0_mean_representation"+repeat+".npy") # the final state labels leanred by SPIB traj_labels=np.load(prefix+"_traj0_labels"+repeat+".npy") # plot the learned state labels for four well potential system import matplotlib from matplotlib import colors as c data=traj_data labels=traj_labels hist=plt.hist2d(data[:,0],data[:,1],bins=100) state_num=labels.shape[1] state_labels=np.arange(state_num) hist_state=np.zeros([state_num]+list(hist[0].shape)) for i in range(state_num): hist_state[i]=plt.hist2d(data[:,0],data[:,1],bins=[hist[1],hist[2]],weights=labels[:,i])[0] label_map50=np.argmax(hist_state,axis=0).astype(float) label_map50[hist[0]==0]=np.nan plt.close() fig, ax = plt.subplots(figsize=(9,6)) fmt = matplotlib.ticker.FuncFormatter(lambda x, pos: state_labels[x]) tickz = np.arange(0,len(state_labels)) cMap = c.ListedColormap(plt.cm.tab20.colors[0:10]) im=ax.pcolormesh(hist[1], hist[2], label_map50.T, cmap=cMap, vmin=-0.5, vmax=len(state_labels)-0.5) cb1 = fig.colorbar(im,ax=ax,format=fmt, ticks=tickz) ax.text(-0.75,0,'1',horizontalalignment='center',verticalalignment='center',fontsize=64) ax.text(0.75,0,'8',horizontalalignment='center',verticalalignment='center',fontsize=64) ax.text(-0.25,0,'3',horizontalalignment='center',verticalalignment='center',fontsize=64) ax.text(0.25,0,'6',horizontalalignment='center',verticalalignment='center',fontsize=64) plt.xlabel("x") plt.ylabel("y") # plot the learned RC for four well potential system data=traj_data hist=plt.hist2d(data[:,0],data[:,1],bins=100) hist_RC=plt.hist2d(data[:,0],data[:,1],bins=[hist[1],hist[2]],weights=traj_mean_rep[:,0]) plt.close() fig, ax = plt.subplots(figsize=(15,10)) RC=np.divide(hist_RC[0],hist[0]) im=ax.contourf(RC.T, extent=[hist_RC[1][0],hist_RC[1][-1],hist_RC[2][0],hist_RC[2][-1]],levels=10, cmap=plt.cm.jet) cb1 = fig.colorbar(im,ax=ax) cb1.set_label('RC') plt.xlabel("x") plt.ylabel("y") plt.tight_layout() ```
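As a quick sanity check on the loaded results (my addition, not part of the original demo), the per-frame label vectors can be collapsed to integer state indices to see how many metastable states survive SPIB's refinement and how the frames are distributed among them:

```
# collapse the per-frame label vectors to state indices and count populations
state_index = np.argmax(traj_labels, axis=1)
print("states retained after refinement:", np.unique(state_index))
print("frames per state:", np.bincount(state_index, minlength=traj_labels.shape[1]))
```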
# Detecting and mitigating age bias on credit decisions The goal of this tutorial is to introduce the basic functionality of AI Fairness 360 to an interested developer who may not have a background in bias detection and mitigation. ### Biases and Machine Learning A machine learning model makes predictions of an outcome for a particular instance. (Given an instance of a loan application, predict if the applicant will repay the loan.) The model makes these predictions based on a training dataset, where many other instances (other loan applications) and actual outcomes (whether they repaid) are provided. Thus, a machine learning algorithm will attempt to find patterns, or generalizations, in the training dataset to use when a prediction for a new instance is needed. (For example, one pattern it might discover is "if a person has salary > USD 40K and has outstanding debt < USD 5, they will repay the loan".) In many domains this technique, called supervised machine learning, has worked very well. However, sometimes the patterns that are found may not be desirable or may even be illegal. For example, a loan repay model may determine that age plays a significant role in the prediction of repayment because the training dataset happened to have better repayment for one age group than for another. This raises two problems: 1) the training dataset may not be representative of the true population of people of all age groups, and 2) even if it is representative, it is illegal to base any decision on a applicant's age, regardless of whether this is a good prediction based on historical data. AI Fairness 360 is designed to help address this problem with _fairness metrics_ and _bias mitigators_. Fairness metrics can be used to check for bias in machine learning workflows. Bias mitigators can be used to overcome bias in the workflow to produce a more fair outcome. The loan scenario describes an intuitive example of illegal bias. However, not all undesirable bias in machine learning is illegal it may also exist in more subtle ways. For example, a loan company may want a diverse portfolio of customers across all income levels, and thus, will deem it undesirable if they are making more loans to high income levels over low income levels. Although this is not illegal or unethical, it is undesirable for the company's strategy. As these two examples illustrate, a bias detection and/or mitigation toolkit needs to be tailored to the particular bias of interest. More specifically, it needs to know the attribute or attributes, called _protected atrributes_, that are of interest: race is one example of a _protected attribute_ and income level is a second. ### The Machine Learning Workflow To understand how bias can enter a machine learning model, we first review the basics of how a model is created in a supervised machine learning process. ![image](https://github.com/IBM/ensure-loan-fairness-aif360/tree/master/doc/source/images/Complex_NoProc_V3.jpg) First, the process starts with a _training dataset_, which contains a sequence of instances, where each instance has two components: the features and the correct prediction for those features. Next, a machine learning algorithm is trained on this training dataset to produce a machine learning model. This generated model can be used to make a prediction when given a new instance. A second dataset with features and correct predictions, called a _test dataset_, is used to assess the accuracy of the model. 
Since this test dataset is the same format as the training dataset, a set of instances of features and prediction pairs, often these two datasets derive from the same initial dataset. A random partitioning algorithm is used to split the initial dataset into training and test datasets. Bias can enter the system in any of the three steps above. The training data set may be biased in that its outcomes may be biased towards particular kinds of instances. The algorithm that creates the model may be biased in that it may generate models that are weighted towards particular features in the input. The test data set may be biased in that it has expectations on correct answers that may be biased. These three points in the machine learning process represent points for testing and mitigating bias. In AI Fairness 360 codebase, we call these points _pre-processing_, _in-processing_, and _post-processing_. ### AI Fairness 360 We are now ready to utilize AI Fairness 360 (aif360) to detect and mitigate bias. We will use the German credit dataset, splitting it into a training and test dataset. We will look for bias in the creation of a machine learning model to predict if an applicant should be given credit based on various features from a typical credit application. The protected attribute will be "Age", with "1" and "0" being the values for the privileged and unprivileged groups, respectively. For this first tutorial, we will check for bias in the initial training data, mitigate the bias, and recheck. More sophisticated machine learning workflows are given in the author tutorials and demo notebooks in the codebase. Here are the steps involved 1. Import the aif360 toolkit and install it 1. Write import statements 1. Set bias detection options, load dataset, and split between train and test 1. Compute fairness metric on original training dataset 1. Mitigate bias by transforming the original dataset 1. Compute fairness metric on transformed training dataset ### Step 1 We'll install the aif360 toolkit ``` !pip install aif360 ``` ### Step 2 As with any python program, the first step will be to import the necessary packages. Below we import several components from the aif360 package. We import metrics to check for bias, and classes related to the algorithm we will use to mitigate bias. We also import some other non-aif360 useful packages. ``` !pip install cvxpy==0.4.11 # %matplotlib inline # Load all necessary packages import numpy from aif360.datasets import GermanDataset from aif360.metrics import BinaryLabelDatasetMetric from aif360.algorithms.preprocessing.optim_preproc import OptimPreproc from aif360.algorithms.preprocessing.optim_preproc_helpers.data_preproc_functions\ import load_preproc_data_german from aif360.algorithms.preprocessing.optim_preproc_helpers.distortion_functions\ import get_distortion_german from aif360.algorithms.preprocessing.optim_preproc_helpers.opt_tools import OptTools # from common_utils import compute_metrics # from aif360.datasets import BinaryLabelDataset # from aif360.metrics.utils import compute_boolean_conditioning_vector from IPython.display import Markdown, display ``` ### Step 3 Load dataset, specifying protected attribute, and split dataset into train and test In Step 3 we begin by dowloading the dataset. Then we load the initial dataset, setting the protected attribute to be age. We then split the original dataset into training and testing datasets. Note that we use a random seed number for this demonstration, which gives us the same result for each split(). 
Although we will use only the training dataset in this tutorial, a normal workflow would also use a test dataset for assessing the efficacy (accuracy, fairness, etc.) during the development of a machine learning model. Finally, we set two variables (to be used in Step 3) for the privileged (1) and unprivileged (0) values for the age attribute. These are key inputs for detecting and mitigating bias, which will be Step 3 and Step 4. ``` aif360_location = !python -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())" import os install_loc = os.path.join(aif360_location[0], "aif360/data/raw/german/") %cd $install_loc !wget ftp://ftp.ics.uci.edu/pub/machine-learning-databases/statlog/german/german.data !wget ftp://ftp.ics.uci.edu/pub/machine-learning-databases/statlog/german/german.doc %cd - dataset_orig = load_preproc_data_german(['age']) numpy.random.seed(27) dataset_orig_train, dataset_orig_test = dataset_orig.split([0.7], shuffle=True) privileged_groups = [{'age': 1}] unprivileged_groups = [{'age': 0}] ``` ### Step 4 Compute fairness metric on original training dataset Now that we've identified the protected attribute 'age' and defined privileged and unprivileged values, we can use aif360 to detect bias in the dataset. One simple test is to compare the percentage of favorable results for the privileged and unprivileged groups, subtracting the former percentage from the latter. A negative value indicates less favorable outcomes for the unprivileged groups. This is implemented in the method called mean_difference on the BinaryLabelDatasetMetric class. The code below performs this check and displays the output, showing that the difference is -0.102466 ``` metric_orig_train = BinaryLabelDatasetMetric(dataset_orig_train, unprivileged_groups=unprivileged_groups, privileged_groups=privileged_groups) display(Markdown("#### Original training dataset")) print("Difference in mean outcomes between unprivileged and privileged groups = %f" % metric_orig_train.mean_difference()) ``` ### Step 5 Mitigate bias by transforming the original dataset The previous step showed that the privileged group was getting 10.2% more positive outcomes in the training dataset. Since this is not desirable, we are going to try to mitigate this bias in the training dataset. As stated above, this is called _pre-processing_ mitigation because it happens before the creation of the model. AI Fairness 360 implements several pre-processing mitigation algorithms. We will choose the Optimized Preprocess algorithm [1], which is implemented in "OptimPreproc" class in the "aif360.algorithms.preprocessing" directory. This algorithm will transform the dataset to have more equity in positive outcomes on the protected attribute for the privileged and unprivileged groups. The algorithm requires some tuning parameters, which are set in the optim_options variable and passed as an argument along with some other parameters, including the 2 variables containg the unprivileged and privileged groups defined in Step 3. We then call the fit and transform methods to perform the transformation, producing a newly transformed training dataset (dataset_transf_train). Finally, we ensure alignment of features between the transformed and the original dataset to enable comparisons. [1] Optimized Pre-Processing for Discrimination Prevention, NIPS 2017, Flavio Calmon, Dennis Wei, Bhanukiran Vinzamuri, Karthikeyan Natesan Ramamurthy, and Kush R. 
Varshney ``` optim_options = { "distortion_fun": get_distortion_german, "epsilon": 0.1, "clist": [0.99, 1.99, 2.99], "dlist": [.1, 0.05, 0] } OP = OptimPreproc(OptTools, optim_options, unprivileged_groups = unprivileged_groups, privileged_groups = privileged_groups) OP = OP.fit(dataset_orig_train) dataset_transf_train = OP.transform(dataset_orig_train, transform_Y = True) dataset_transf_train = dataset_orig_train.align_datasets(dataset_transf_train) ``` ### Step 6 Compute fairness metric on transformed dataset Now that we have a transformed dataset, we can check how effective it was in removing bias by using the same metric we used for the original training dataset in Step 4. Once again, we use the function mean_difference in the BinaryLabelDatasetMetric class. We see the mitigation step was very effective, the difference in mean outcomes is now 0.001276 . So we went from a 10.2% advantage for the privileged group to a 0.1% advantage for the unprivileged group. ``` metric_transf_train = BinaryLabelDatasetMetric(dataset_transf_train, unprivileged_groups=unprivileged_groups, privileged_groups=privileged_groups) display(Markdown("#### Transformed training dataset")) print("Difference in mean outcomes between unprivileged and privileged groups = %f" % metric_transf_train.mean_difference()) ``` ### Summary The purpose of this tutorial is to give a new user to bias detection and mitigation a gentle introduction to some of the functionality of AI Fairness 360. A more complete use case would take the next step and see how the transformed dataset impacts the accuracy and fairness of a trained model. This is implemented in the demo notebook in the examples directory of toolkit, called demo_optim_data_preproc.ipynb. I highly encourage readers to view that notebook as it is generalization and extension of this simple tutorial. There are many metrics one can use to detect the pressence of bias. AI Fairness 360 provides many of them for your use. Since it is not clear which of these metrics to use, we also provide some guidance. Likewise, there are many different bias mitigation algorithms one can employ, many of which are in AI Fairness 360. Other tutorials will demonstrate the use of some of these metrics and mitigations algorithms. As mentioned earlier, both fairness metrics and mitigation algorithms can be performed at various stages of the machine learning pipeline. We recommend checking for bias as often as possible, using as many metrics are relevant for the application domain. We also recommend incorporating bias detection in an automated continouus integration pipeline to ensure bias awareness as a software project evolves.
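For reference, the `mean_difference` metric used in Steps 4 and 6 (also known as the statistical parity difference) can be written out explicitly. With $Y = 1$ denoting the favorable label and $D$ the protected attribute group, it is

$$ \text{mean difference} = \Pr(Y = 1 \mid D = \text{unprivileged}) - \Pr(Y = 1 \mid D = \text{privileged}). $$

So the value of $-0.102466$ in Step 4 means the unprivileged group received the favorable label about 10.2 percentage points less often in the original training data, while the near-zero value in Step 6 means the transformed data is close to parity.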
# Geospatial operations with Shapely: Round-Trip Reprojection, Affine Transformations, Rasterisation, and Vectorisation Sometimes we want to take a geospatial object and transform it to a new coordinate system, and perhaps translate and rotate it by some amount. We may want to rasterise the object for raster operations. We'd like to do this all with Shapely geometry object so we have access to all their useful methods. ``` import json, geojson, pyproj from shapely import geometry from shapely.ops import transform from shapely.affinity import affine_transform from functools import partial from skimage import measure from scipy.ndimage.morphology import binary_dilation from PIL import Image, ImageDraw import numpy as np import matplotlib.pyplot as plt ``` ## Reprojection-Affine-Rasterisation roundtrip We're going to: - take a shapely polygon with lon/lat coordinates, say the building footprint of the Oxford School of Geography and the Environment (SOGE) - convert it to UTM coordinates - draw it on a 1km raster with Oxford's Carfax tower landmarking the bottom left corner. ``` # grab a quick geojson from geojson.io feature = json.loads("""{ "type": "Feature", "properties": {}, "geometry": { "type": "Polygon", "coordinates": [ [ [ -1.2537717819213867, 51.75935044524217 ], [ -1.2541741132736204, 51.75922759060092 ], [ -1.2538844347000122, 51.75860999193512 ], [ -1.2542277574539185, 51.75844728980553 ], [ -1.2540507316589353, 51.75822813907169 ], [ -1.2531226873397827, 51.75858342836217 ], [ -1.2537717819213867, 51.75935044524217 ] ] ] } }""") # load the polygon: SOGE = geometry.shape(feature['geometry']) ``` ### Forward and Reverse Projection We want to convert the geometry from lon/lat to a cartesian coordinate system. Let's use Universal Transfer Mercator with units in m. The UTM projection is arranged in 'zones' to keep angles and shapes conformal in images. ``` # A function to grab the UTM zone number for any lat/lon location def get_utm_zone(lat,lon): zone_str = str(int((lon + 180)/6) + 1) if ((lat>=56.) & (lat<64.) & (lon >=3.) & (lon <12.)): zone_str = '32' elif ((lat >= 72.) & (lat <84.)): if ((lon >=0.) & (lon<9.)): zone_str = '31' elif ((lon >=9.) & (lon<21.)): zone_str = '33' elif ((lon >=21.) & (lon<33.)): zone_str = '35' elif ((lon >=33.) & (lon<42.)): zone_str = '37' return zone_str # get the UTM zone using the centroid of the polygon utm_zone = get_utm_zone(SOGE.centroid.y, SOGE.centroid.x) # define the native WGS84 lon/lat projection proj_wgs = pyproj.Proj("+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs") # define the UTM projection using the utm zone proj_utm = pyproj.Proj(proj='utm',zone=utm_zone,ellps='WGS84') # create reprojection functions using functools.partial reproj_wgs_utm = partial(pyproj.transform, proj_wgs, proj_utm) reproj_utm_wgs = partial(pyproj.transform, proj_utm, proj_wgs) # use shapely.transform with the reprojection functions to reproject shapely objects SOGE_utm = transform(reproj_wgs_utm, SOGE) # Check the reverse transform with a tolerance of 1e-9 of a decimal degree print (SOGE.almost_equals(transform(reproj_utm_wgs, SOGE_utm), 1e-9)) ``` ### Affine Transformation Now lets say we want to get a sense of the footprint of the School of Geograpy on a square kilometer of Oxford, with the bottom left corner centered on Carfax tower. We want to create a mask of the building footprint on a numpy array. 
<div> <img src="https://user-images.githubusercontent.com/22874837/74949464-b52dec00-53f5-11ea-9107-53d91c93d70c.png" width="500"/> </div> ``` # Point for Carfax tower carfax = geometry.Point(-1.25812, 51.7519) # Convert the point to utm carfax_utm = transform(reproj_wgs_utm,carfax) # use the utm point as the lower-left coordinate for a shapely box oxford_box = geometry.box(carfax_utm.x, carfax_utm.y, carfax_utm.x+1000, carfax_utm.y+1000) # visualise fig, ax = plt.subplots(1,1,figsize=(4,4)) ax.plot(*SOGE_utm.exterior.xy,c='b') ax.scatter(carfax_utm.x, carfax_utm.y, c='k') ax.plot(*oxford_box.exterior.xy,c='g') plt.show() ``` We'll choose the pixel resolution of our numpy array to be 25m. We'll use a [shapely affine tranformation](https://shapely.readthedocs.io/en/latest/manual.html#affine-transformations) with a geotransform to transform the shape to pixel coordinates. *Note!* The Shapely Geotransform matrix is different than many other spatial packages (e.g. GDAL, PostGIS). ``` # Define the geotransform matrix a = e = 1/25 # stretch along-axis 25m/px b = d = 0 # rotate across-axis 0m/px x_off = - carfax_utm.x / 25 # offset from cartesian origin in pixel coordinates y_off = - carfax_utm.y / 25 # offset from cartesian origin in pixel coordinates GT = [a,b,d,e,x_off,y_off] # GeoTransform Matrix # Apply GeoTransform SOGE_pix = affine_transform(SOGE_utm,GT) ``` ### Rasterising Lastly, let's say we want to rasterise our converted polygons to create a numpy array mask. Let's use PIL to draw our polygon on a numpy array. ``` # initialise a numpy array SOGE_mask = np.zeros((int(1000/25), int(1000/25))) # 1000m / 25m/px # create an Image object im = Image.fromarray(mask, mode='L') # create an ImageDraw object draw = ImageDraw.Draw(im) # draw the polygon draw.polygon(list(SOGE_pix.exterior.coords), fill=255) # un-draw any holes in the polygon... for hole in SOGE_pix.interiors: draw.polygon(list(hole.coords), fill=0) # return the image object to the mask array SOGE_mask = np.array(im) # visualise fix, ax = plt.subplots(1,1,figsize=(4,4)) ax.imshow(SOGE_mask, origin='lower') ax.plot(*SOGE_pix.exterior.xy, c='g') plt.show() ``` ### Vectorising To complete the round-trip, lets perform a raster operation (say, a [simple binary dialation](https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.ndimage.morphology.binary_dilation.html)), then re-vectorise our polygon and get it all the way back to native lat-lon coordinates. We'll use a vectoriser built with [skimage.measure.find_contours](https://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.find_contours). ``` def vectoriser(arr, disp=2): """ input: arr -> a square 2D binary mask array output: polys -> a list of vectorised polygons """ polys = [] contours = measure.find_contours(np.pad(arr,disp,'constant', constant_values=(0)), 0.0001) for c in contours: c = (c-disp).clip(0.,float(arr.shape[0])) # clip back extraneous geomtries c = np.round(c) # round to int c.T[[0, 1]] = c.T[[1, 0]] # swap lons<->lats poly = geometry.Polygon(c) # pass into geometry polys.append(poly) return polys # grab the 0th element in the list of polygons SOGE_dialated = vectoriser(binary_dilation(SOGE_mask>0))[0] # Visualise fig, ax = plt.subplots(1,1,figsize=(6,6)) ax.imshow(binary_dilation(SOGE_mask),origin='lower') ax.plot(*SOGE_dialated.exterior.xy) plt.show() ``` Now we need to convert this polygon in the pixel coordinate system back to the UTM coordinate system, and finally back to lon/lat. 
We do this by first reversing the affine transformation, and then reversing the projection. ``` GT_rev = [1/a,b,d,1/e,carfax_utm.x, carfax_utm.y] # visualise fig, axs = plt.subplots(1,2,figsize=(12,6)) #utm polygons axs[0].plot(*affine_transform(SOGE_dialated, GT_rev).exterior.xy,c='r') axs[0].plot(*SOGE_utm.exterior.xy, c='b') axs[0].plot(*oxford_box.exterior.xy,c='g') #lon/lat polygons axs[1].plot(*transform(reproj_utm_wgs,affine_transform(SOGE_dialated, GT_rev)).exterior.xy,c='r') axs[1].plot(*transform(reproj_utm_wgs,SOGE_utm).exterior.xy, c='b') axs[1].plot(*transform(reproj_utm_wgs,oxford_box).exterior.xy,c='g') plt.show() ```
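To make the reverse trip reusable, the two steps can be wrapped in a single helper. This is only a sketch, assuming `GT_rev`, `reproj_utm_wgs`, `SOGE_dialated` and `SOGE` from the cells above are still in scope; the printed bounds are just a rough sanity check.

```
# Sketch: wrap the reverse affine transform + reverse projection into one
# helper. Assumes GT_rev and reproj_utm_wgs from the cells above are in scope.
def pixel_to_lonlat(poly_pix):
    """Convert a polygon in pixel coordinates back to lon/lat (WGS84)."""
    poly_utm = affine_transform(poly_pix, GT_rev)
    return transform(reproj_utm_wgs, poly_utm)

# Rough sanity check: the dilated footprint's bounds should sit just outside
# the original footprint's bounds. (SOGE_dialated is the variable name used above.)
SOGE_dilated_lonlat = pixel_to_lonlat(SOGE_dialated)
print(SOGE_dilated_lonlat.bounds)
print(SOGE.bounds)
```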
<img width="10%" alt="Naas" src="https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160"/> <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/3f/HubSpot_Logo.svg/220px-HubSpot_Logo.svg.png" alt="drawing" width="200" align='left'/> # Hubspot - Send sales brief <a href="https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/Hubspot/Hubspot_send_sales_brief.ipynb" target="_parent"><img src="https://img.shields.io/badge/-Open%20in%20Naas-success?labelColor=000000&logo=data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz4KPHN2ZyB3aWR0aD0iMTAyNHB4IiBoZWlnaHQ9IjEwMjRweCIgdmlld0JveD0iMCAwIDEwMjQgMTAyNCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB4bWxuczp4bGluaz0iaHR0cDovL3d3dy53My5vcmcvMTk5OS94bGluayIgdmVyc2lvbj0iMS4xIj4KIDwhLS0gR2VuZXJhdGVkIGJ5IFBpeGVsbWF0b3IgUHJvIDIuMC41IC0tPgogPGRlZnM+CiAgPHRleHQgaWQ9InN0cmluZyIgdHJhbnNmb3JtPSJtYXRyaXgoMS4wIDAuMCAwLjAgMS4wIDIyOC4wIDU0LjUpIiBmb250LWZhbWlseT0iQ29tZm9ydGFhLVJlZ3VsYXIsIENvbWZvcnRhYSIgZm9udC1zaXplPSI4MDAiIHRleHQtZGVjb3JhdGlvbj0ibm9uZSIgZmlsbD0iI2ZmZmZmZiIgeD0iMS4xOTk5OTk5OTk5OTk5ODg2IiB5PSI3MDUuMCI+bjwvdGV4dD4KIDwvZGVmcz4KIDx1c2UgaWQ9Im4iIHhsaW5rOmhyZWY9IiNzdHJpbmciLz4KPC9zdmc+Cg=="/></a> ## Input ### Import library ``` from naas_drivers import emailbuilder, hubspot import naas import pandas as pd from datetime import datetime ``` ### Enter your Hubspot api key ``` auth_token = 'YOUR_HUBSPOT_API_KEY' ``` ### Connect to Hubspot ``` hs = hubspot.connect(auth_token) ``` ### Email parameters ``` # Receivers email_to = ["your_email_adresse"] # Email subject email_subject = f"🚀 Hubspot - Sales Brief as of {datetime.now().strftime('%d/%m/%Y')} (Draft)" ``` ### Sales target ``` objective = 300000 ``` ### Pick your pipeline #### Get all pipelines ``` df_pipelines = hs.pipelines.get_all() df_pipelines ``` #### Enter your pipeline id ``` pipeline_id = "8432671" ``` ### Constants ``` HUBSPOT_CARD = "https://lib.umso.co/lib_sluGpRGQOLtkyEpz/na1lz0v9ejyurau2.png?w=1200&h=900&fit=max&dpr=2" NAAS_WEBSITE = "https://www.naas.ai" EMAIL_DESCRIPTION = "Your sales brief" DATE_FORMAT = "%Y-%m-%d" ``` ### Schedule automation ``` naas.scheduler.add(cron="0 8 * * 1") ``` ### Get dealstages from pipeline ``` df_dealstages = df_pipelines.copy() # Filter on pipeline df_dealstages = df_dealstages[df_dealstages.pipeline_id == pipeline_id] df_dealstages ``` ### Get deals from pipeline ``` properties = [ "hs_object_id", "dealname", "dealstage", "pipeline", "createdate", "hs_lastmodifieddate", "closedate", "amount" ] df_deals = hs.deals.get_all(properties) # Filter on pipeline df_deals = df_deals[df_deals.pipeline == pipeline_id].reset_index(drop=True) df_deals ``` ## Model ### Formatting functions ``` def format_number(num): NUMBER_FORMAT = "{:,.0f} €" num = str(NUMBER_FORMAT.format(num)).replace(",", " ") return num def format_pourcentage(num): NUMBER_FORMAT = "{:,.0%}" num = str(NUMBER_FORMAT.format(num)) return num def format_varv(num): NUMBER_FORMAT = "+{:,.0f} €" num = str(NUMBER_FORMAT.format(num)).replace(",", " ") return num ``` ### Create sales pipeline database ``` df_sales = pd.merge(df_deals.drop("pipeline", axis=1), df_dealstages.drop(["pipeline", "pipeline_id", "createdAt", "updatedAt", "archived"], axis=1), left_on="dealstage", right_on="dealstage_id", how="left") df_sales df_sales_c = df_sales.copy() # Cleaning df_sales_c["amount"] = df_sales_c["amount"].fillna("0") df_sales_c.loc[df_sales_c["amount"] == "", "amount"] = "0" # Formatting 
df_sales_c["amount"] = df_sales_c["amount"].astype(float) df_sales_c["probability"] = df_sales_c["probability"].astype(float) df_sales_c.createdate = pd.to_datetime(df_sales_c.createdate) df_sales_c.hs_lastmodifieddate = pd.to_datetime(df_sales_c.hs_lastmodifieddate) df_sales_c.closedate = pd.to_datetime(df_sales_c.closedate) # Calc df_sales_c["forecasted"] = df_sales_c["amount"] * df_sales_c["probability"] df_sales_c ``` ### Create sales pipeline agregated by dealstages ``` df_details = df_sales_c.copy() # Groupby to_group = [ "dealstage_label", "probability", "displayOrder" ] to_agg = { "amount": "sum", "dealname": "count", "forecasted": "sum" } df_details = df_details.groupby(to_group, as_index=False).agg(to_agg) # Sort df_details = df_details.sort_values("displayOrder") df_details ``` ### Calculate email parameters ``` forecasted = df_details.forecasted.sum() forecasted won = df_details[df_details["probability"] == 1].forecasted.sum() won weighted = df_details[df_details["probability"] < 1].forecasted.sum() weighted completion_p = forecasted / objective completion_p completion_v = objective - forecasted completion_v today = datetime.now().strftime(DATE_FORMAT) today ``` ### Get pipeline details ``` df = df_details.copy() details = [] for _, row in df.iterrows(): # status part dealstage = row.dealstage_label probability = row.probability detail = f"{dealstage} ({format_pourcentage(probability)})" # amount part amount = row.amount number = row.dealname forecasted_ = row.forecasted if (probability < 1 and probability > 0): detail = f"{detail}: <ul><li>Amount : {format_number(amount)}</li><li>Number : {number}</li><li>Weighted amount : <b>{format_number(forecasted_)}</b></li></ul>" else: detail = f"{detail}: {format_number(amount)}" details += [detail] details ``` ### Get inactives deals ``` df_inactive = df_sales_c.copy() df_inactive.hs_lastmodifieddate = pd.to_datetime(df_inactive.hs_lastmodifieddate).dt.strftime(DATE_FORMAT) df_inactive["inactive_time"] = (datetime.now() - pd.to_datetime(df_inactive.hs_lastmodifieddate, format=DATE_FORMAT)).dt.days df_inactive.loc[(df_inactive["inactive_time"] > 30, "inactive")] = "inactive" df_inactive = df_inactive[(df_inactive.inactive == 'inactive') & (df_inactive.amount != 0) & (df_inactive.probability > 0.) 
& (df_inactive.probability < 1)].sort_values("amount", ascending=False).reset_index(drop=True) df_inactive inactives = [] for _, row in df_inactive[:10].iterrows(): # status part dealname = row.dealname dealstage_label = row.dealstage_label amount = row.amount probability = row.probability inactive = f"{dealname} ({dealstage_label}): <b>{format_number(amount)}</b>" inactives += [inactive] inactives ``` ### Create pipeline waterfall ``` import plotly.graph_objects as go fig = go.Figure(go.Waterfall(name="20", orientation = "v", measure = ["relative", "relative", "total", "relative", "total"], x = ["Won", "Pipeline", "Forecast", "Missing", "Objective"], textposition = "outside", text = [format_number(won), format_varv(weighted), format_number(forecasted), format_varv(completion_v), format_number(objective)], y = [won, weighted, forecasted, completion_v, objective], decreasing = {"marker":{"color":"#33475b"}}, increasing = {"marker":{"color":"#33475b"}}, totals = {"marker":{"color":"#ff7a59"}} )) fig.update_layout(title = "Sales Metrics", plot_bgcolor="#ffffff", hovermode='x') fig.update_yaxes(tickprefix="€", gridcolor='#eaeaea') fig.show() fig.write_html("GRAPH_FILE.html") fig.write_image("GRAPH_IMG.png") params = {"inline": True} graph_url = naas.asset.add("GRAPH_FILE.html", params=params) graph_image = naas.asset.add("GRAPH_IMG.png") ``` ### Create email ``` def email_brief(today, forecasted, won, weighted, objective, completion_p, completion_v, details, inactives ): content = { 'title': (f"<a href='{NAAS_WEBSITE}'>" f"<img align='center' width='100%' target='_blank' style='border-radius:5px;'" f"src='{HUBSPOT_CARD}' alt={EMAIL_DESCRIPTION}/>" "</a>"), 'txt_intro': (f"Hi there,<br><br>" f"Here is your weekly sales email as of {today}."), 'title_1': emailbuilder.text("Overview", font_size="27px", text_align="center", bold=True), "text_1": emailbuilder.text(f"As of today, your yearly forecasted revenue is {format_number(forecasted)}."), "list_1": emailbuilder.list([f"Won : {format_number(won)}", f"Weighted pipeline : <b>{format_number(weighted)}</b>"]), "text_1_2": emailbuilder.text(f"You need to find 👉 <u>{format_number(completion_v)}</u> to reach your goal !"), "text_1_1": emailbuilder.text(f"Your yearly objective is {format_number(objective)} ({format_pourcentage(completion_p)} completion)."), 'image_1': emailbuilder.image(graph_image, link=graph_url), 'title_2': emailbuilder.text("🚀 Pipeline", font_size="27px", text_align="center", bold=True), "list_2": emailbuilder.list(details), 'title_3': emailbuilder.text("🧐 Actions needed", font_size="27px", text_align="center", bold=True), 'text_3': emailbuilder.text("Here are deals where you need to take actions :"), 'list_3': emailbuilder.list(inactives), 'text_3_1': emailbuilder.text("If you need more details, connect to Hubspot with the link below."), 'button_1': emailbuilder.button(link="https://app.hubspot.com/", text="Go to Hubspot", background_color="#ff7a59"), 'title_4': emailbuilder.text("Glossary", text_align="center", bold=True, underline=True), 'list_4': emailbuilder.list(["Yearly forecasted revenue : Weighted amount + WON exclude LOST", "Yearly objective : Input in script", "Inactive deal : No activity for more than 30 days"]), 'footer_cs': emailbuilder.footer_company(naas=True), } email_content = emailbuilder.generate(display='iframe', **content) return email_content email_content = email_brief(today, forecasted, won, weighted, objective, completion_p, completion_v, details, inactives) ``` ## Output ### Send email ``` 
naas.notification.send(email_to, email_subject, email_content) ```
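As an optional extra step (not part of the original template), you may want to keep a dated snapshot of the aggregated pipeline next to the email so that briefs can be compared week over week. This sketch assumes `df_details` from the Model section is still in scope; the file name is only an example.

```
# Optional sketch: persist a dated CSV snapshot of the aggregated pipeline.
# Assumes df_details from the Model section is in scope; the file name is an example.
snapshot_path = f"sales_brief_{datetime.now().strftime('%Y%m%d')}.csv"
df_details.to_csv(snapshot_path, index=False)

# naas.asset.add is already used above to expose the waterfall chart;
# the same call can expose the CSV snapshot.
snapshot_url = naas.asset.add(snapshot_path)
print(snapshot_url)
```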
# Scorey - Scraping, aggregating and assessing technical ability of a candidate based on publicly available sources ## Problem Statement The current interview scenario is biased towards "candidate's performance during the 2 hour interview" and doesn't take other factors into account such as candidate's competitive coding abilities, contribution towards developer community and so on. ## Approach Scorey tries to solve this problem by aggregating publicly available data from various websites such as * Github * StackOverflow * CodeChef * Codebuddy * Codeforces * Hackerearth * SPOJ * GitAwards Once the data is collected, the algorithm then defines a comprehensive scoring system that grades the candidates technical capablities based on following factors - Ranking - Number of Problems Solved - Activity - Reputation - Contribution - Followers The candidate is then assigned a scored out of 100 <br> This helps the interviewer get a full view of candidates abilties and therefore helps them make a unbiased, informed decision. ------- ## ( Initial Setup ) ``` def color_negative_red(val): """ Takes a scalar and returns a string with the css property `'color: red'` for negative strings, black otherwise. """ color = 'red' if val > 0 else 'black' return 'color: %s' % color # Set CSS properties for th elements in dataframe th_props = [ ('font-size', '15px'), ('text-align', 'center'), ('font-weight', 'bold'), ('color', '#00c936'), ('background-color', '##f7f7f7') ] # Set CSS properties for td elements in dataframe td_props = [ ('font-size', '15px'), ('font-weight', 'bold') ] # Set table styles styles = [ dict(selector="th", props=th_props), dict(selector="td", props=td_props) ] ``` ------- ## Candidate's Username - This information can be extracted in two ways <br> - Parsing the resume - Adding additional fields in the job application form - Scrape it from personal website ``` import urllib.request import requests #return requests.get(url).json() import json import csv serviceurl = 'https://api.github.com/users/' #user = 'poke19962008' user = input('Enter candidates username - ') ``` ## Extracting all possible links/ usernames from personal website ``` from bs4 import BeautifulSoup import urllib import urllib.parse import urllib.request from urllib.request import urlopen url = input('Enter your personal website - ') html = urlopen(url).read() soup = BeautifulSoup(html, "lxml") tags = soup('a') for tag in tags: print (tag.get('href',None)) ``` ## 1. Github ``` user = 'poke19962008' #user = raw_input('Enter user_name : ') #if len(user) < 1 : break serviceurl += user +'?' 
access_token = "0b330104cd6dc94a8c29afb28a77ee8398e1c77b" url = serviceurl + urllib.parse.urlencode({'access_token': access_token}) print('Retrieving', url) uh = urllib.request.urlopen(url) data = uh.read() #print data #js = json.loads(str(data)) js = json.loads(data.decode("utf-8")) js import pandas as pd df = pd.DataFrame.from_dict(js, orient='columns') df = df.iloc[3:] df df_new = df.filter(['email','public_repos','followers','hireable','company','updated_at'], axis=1) (df_new.style .set_table_styles(styles)) #json.dumps(js, indent=4) # df['public_repos'] = js['public_repos'] # #df['total_private_repos'] = js['total_private_repos'] # df['followers'] = js['followers'] # df['hireable'] = js['hireable'] # df['company'] = js['company'] # df['updated_at'] = js['updated_at'] print("Number of public repositores - ", js['public_repos']) #print("Number of private repositores - ", js['total_private_repos']) print("Number of followers - ", js['followers']) print("Is candidate looking for job? - ", js['hireable']) print("Company - ", js['company']) print("Last seen - ", js['updated_at']) ``` -------- ## 2. StackOverflow ``` import stackexchange so = stackexchange.Site(stackexchange.StackOverflow) u = so.user('1600172') print('reputation is' , u.reputation.format()) print('no of questions answered - ', u.answers.count) df_new['stack_reputation'] = u.reputation.format() df_new['stack_answer_count'] = u.answers.count df_new.set_index('email') (df_new.style .set_table_styles(styles)) ``` -------- ## 3. CodeChef ``` import requests from bs4 import BeautifulSoup head = "https://wwww.codechef.com/users/" var = user URL = head + user page = requests.get(URL) soup = BeautifulSoup(page.content,'html.parser') #These three lines give the Rating of the user. listRating = list(soup.findAll('div',class_="rating-number")) rating = list(listRating[0].children) rating = rating[0] print ("Rating: "+rating) df_new['CodeChef_rating'] = rating listGCR = [] #Global and country ranking. listRanking = list(soup.findAll('div',class_="rating-ranks")) rankingSoup = listRanking[0] for item in rankingSoup.findAll('a'): listGCR.append(item.get_text()) #Extracting the text from all anchor tags print ("Global Ranking: "+listGCR[0]) df_new['CodeChef_global_rank'] = listGCR[0] print ("Country Ranking: "+listGCR[1]) df_new['CodeChef_Country_rank'] = listGCR[1] (df_new.style .set_table_styles(styles)) ``` -------- ## 4. Spoj ``` import requests import bs4 as bs #url = input('enter spoj profile url - ') url = 'https://www.spoj.com/users/poke19962008/' sauce = urllib.request.urlopen(url).read() soup = bs.BeautifulSoup(sauce,'lxml') # lxml is a parser #print(soup) info = (soup.find_all('p')) for i in info: print(i.text) no_of_questions = int(soup.find('dd').text) print(" no. of questions = ",no_of_questions) df_new['SPOJ_no_of_ques'] = no_of_questions (df_new.style .set_table_styles(styles)) ``` ## 5. 
Codebuddy ``` import requests from bs4 import BeautifulSoup def codebuddy(username): URL = "https://codebuddy.co.in/ranks/practice" page = requests.get(URL) soup = BeautifulSoup(page.content,'html.parser') table = list(soup.find_all('tr')) for i in table: parameters = list(i.find_all('label')) #print (i.find('label',class_="highlight").text) if str(parameters[1].text)==username: output = str(int(parameters[0].text))+" "+str(int(parameters[2].text))+" "+str(float(parameters[3].text)) return (output) return (-1) #ranking out of 2000 a = codebuddy("spectrum").split(" ") a = pd.DataFrame(a) a = a.transpose() df_new['Codebuddy_rank'] = a[0].values df_new['Codebuddy_problem_solved'] = a[1].values df_new['Codebuddy_points'] = a[2].values (df_new.style .set_table_styles(styles)) ``` ------- ## 6. Hackerearth ``` import bs4 as bs import urllib.request #url = input('enter hackerearth profile url - ') url = 'https://www.hackerearth.com/@poke19962008' sauce = urllib.request.urlopen(url).read() soup = bs.BeautifulSoup(sauce,'lxml') # lxml is a parser #print(soup) name = soup.find('h1', class_="name ellipsis larger") print(name) followersNo = soup.find('span', class_="track-followers-num") followingNo = soup.find('span', class_="track-following-num") print("No. of followers = ",followersNo) print("No. of following = ",followingNo) ``` ------- ## 7. CodeForces ``` def codeforces(username): head = 'http://codeforces.com/profile/' var = username URL = head + var page = requests.get(URL) soup = BeautifulSoup(page.content,'html.parser') listRating = list(soup.findAll('div',class_="user-rank")) CheckRating = listRating[0].get_text() #Check for rated or unrated if str(CheckRating) == '\nUnrated \n': # print('Not rated') out = 1000000 return(out) else: # print('rated') listinfo = list((soup.find('div',class_="info")).findAll('li')) string = (listinfo[0].get_text()) string = string.replace(" ","") str1,str2 = string.split('(') str3,str4 = str1.split(':') out = int((str4.strip())) return(out) df_new['CodeForce_ranking'] = codeforces('user') (df_new.style .set_table_styles(styles)) ``` ------- ## 8. Git-Awards Ranking ``` import requests from bs4 import BeautifulSoup import re head = "http://git-awards.com/users/" var = user URL = head + var page = requests.get(URL) soup = BeautifulSoup(page.content,'html.parser') a = list(soup.findAll('div',class_='col-md-3 info')) b = list(soup.findAll('td')) lang=[] f = 0 for i in a: c = i.text.lstrip().rstrip() if 'your ranking' in c and f==0: f=1 continue if 'ranking' in c: s="" d = c.split(" ") for j in d: if j!="ranking": s+=j+" " lang.append(s.rstrip()) print(lang) df_new['lang_1'] = lang[0] df_new['lang_2'] = lang[1] df_new['lang_3'] = lang[2] df_new['lang_4'] = lang[3] (df_new.style .set_table_styles(styles)) ``` (If you want to know about their work) ``` #username = input('enter github username - ') username = user print("Loading...") print() url = "https://github.com/"+username sauce = urllib.request.urlopen(url).read() soup = bs.BeautifulSoup(sauce,'lxml') # lxml is a parser #print(soup) repoNo = int(soup.find('span',class_='Counter').text) n1 = repoNo print("No. 
of repositories = ",n1) print() url2 = url + "?tab=repositories" sauce = urllib.request.urlopen(url2).read() soup = bs.BeautifulSoup(sauce,'lxml') #print(soup) arr = [0] tags = soup.find_all('a', itemprop="name codeRepository") for tag in tags: if tag.text!="": arr.append((tag.text).lstrip()) k=2 while(len(arr)<=n1): url3 = url + "?page="+str(k)+"&tab=repositories" k+=1 sauce = urllib.request.urlopen(url3).read() soup = bs.BeautifulSoup(sauce,'lxml') tags = soup.find_all('a', itemprop="name codeRepository") for tag in tags: if tag.text!="": arr.append((tag.text).lstrip()) for i in range(1,len(arr)): h1 = str(i) + ". "+str(arr[i]) print(h1) (df_new.style .set_table_styles(styles)) df_new.to_csv("final_data.csv") ``` -------- ## Scoring Next part is to score the candidates on following parameters - - Rank (25) - Number of problems solved (25) - Reputation (25) - Followers (15) - Activity (5) - Contributions (5) ``` df_new['stack_reputation']= df_new['stack_reputation'].apply(pd.to_numeric) df_new['CodeChef_rating']= df_new['CodeChef_rating'].apply(pd.to_numeric) df_new['Codebuddy_points']= df_new['Codebuddy_points'].apply(pd.to_numeric) df_new['stack_answer_count']= df_new['stack_answer_count'].apply(pd.to_numeric) df_new['SPOJ_no_of_ques']= df_new['SPOJ_no_of_ques'].apply(pd.to_numeric) df_new['Codebuddy_problem_solved']= df_new['Codebuddy_problem_solved'].apply(pd.to_numeric) df_new['CodeChef_global_rank']= df_new['CodeChef_global_rank'].apply(pd.to_numeric) df_new['CodeChef_Country_rank']= df_new['CodeChef_Country_rank'].apply(pd.to_numeric) df_new['Codebuddy_rank']= df_new['Codebuddy_rank'].apply(pd.to_numeric) df_new['CodeForce_ranking']= df_new['CodeForce_ranking'].apply(pd.to_numeric) df_res = df_new.filter(['email'], axis=1) ``` ## 1. Reputation ``` df_new.loc[ df_new['stack_reputation'] >= 360 , 'score_rep_1'] = 10 df_new.loc[ (df_new['stack_reputation'] < 359) & (df_new['stack_reputation'] >= 150) , 'score_rep_1'] = 5 df_new.loc[ (df_new['stack_reputation'] < 149) & (df_new['stack_reputation'] >= 100) , 'score_rep_1'] = 2 df_new.loc[ df_new['CodeChef_rating'] > 1500 , 'score_rep_2'] = 10 df_new.loc[ (df_new['CodeChef_rating'] < 1499) & (df_new['CodeChef_rating'] >= 1000) , 'score_rep_2'] = 5 df_new.loc[ (df_new['CodeChef_rating'] < 999) & (df_new['CodeChef_rating'] >= 500) , 'score_rep_2'] = 2 df_new.loc[ df_new['Codebuddy_points'] > 100 , 'score_rep_3'] = 5 df_new.loc[ (df_new['Codebuddy_points'] < 99) & (df_new['Codebuddy_points'] >= 50) , 'score_rep_3'] = 2 df_new.loc[ (df_new['Codebuddy_points'] < 49) & (df_new['Codebuddy_points'] >= 20) , 'score_rep_3'] = 1 df_res['score_rep'] = df_new['score_rep_1'] + df_new['score_rep_2'] + df_new['score_rep_3'] df_res ``` ----- ## 2. 
Number of Problems Solved ``` df_new.loc[ df_new['stack_answer_count'] >= 20 , 'score_ps_1'] = 9 df_new.loc[ (df_new['stack_answer_count'] < 20) & (df_new['stack_answer_count'] >= 10) , 'score_ps_1'] = 5 df_new.loc[ (df_new['stack_answer_count'] < 10) & (df_new['stack_answer_count'] >= 5) , 'score_ps_1'] = 3 df_new.loc[ df_new['SPOJ_no_of_ques'] >= 10 , 'score_ps_2'] = 8 df_new.loc[ (df_new['SPOJ_no_of_ques'] < 9) & (df_new['SPOJ_no_of_ques'] >= 5) , 'score_ps_2'] = 4 df_new.loc[ (df_new['SPOJ_no_of_ques'] < 5) & (df_new['SPOJ_no_of_ques'] >= 2) , 'score_ps_2'] = 2 df_new.loc[ df_new['Codebuddy_problem_solved'] >= 50 , 'score_ps_3'] = 8 df_new.loc[ (df_new['Codebuddy_problem_solved'] < 49) & (df_new['Codebuddy_problem_solved'] >= 25) , 'score_ps_3'] = 4 df_new.loc[ (df_new['Codebuddy_problem_solved'] < 24) & (df_new['Codebuddy_problem_solved'] >= 10) , 'score_ps_3'] = 2 df_res['score_ps'] = df_new['score_ps_1'] + df_new['score_ps_2'] + df_new['score_ps_3'] df_res ``` ------ ## 3. Ranking ``` df_new.loc[ df_new['CodeChef_global_rank'] <= 5000 , 'score_rank_1'] = 7 df_new.loc[ (df_new['CodeChef_global_rank'] > 5000) & (df_new['CodeChef_global_rank'] <= 15000) , 'score_rank_1'] = 4 df_new.loc[ (df_new['CodeChef_global_rank'] > 15000) & (df_new['CodeChef_global_rank'] <= 25000) , 'score_rank_1'] = 2 df_new.loc[ df_new['CodeChef_Country_rank'] <= 2000 , 'score_rank_2'] = 6 df_new.loc[ (df_new['CodeChef_Country_rank'] > 2000) & (df_new['CodeChef_Country_rank'] <= 7000) , 'score_rank_2'] = 3 df_new.loc[ (df_new['CodeChef_Country_rank'] > 7000) & (df_new['CodeChef_Country_rank'] <= 15000) , 'score_rank_2'] = 1 df_new.loc[ df_new['Codebuddy_rank'] <= 50 , 'score_rank_3'] = 6 df_new.loc[ (df_new['Codebuddy_rank'] > 50) & (df_new['Codebuddy_rank'] <= 250) , 'score_rank_3'] = 3 df_new.loc[ (df_new['Codebuddy_rank'] > 250) & (df_new['Codebuddy_rank'] <= 500) , 'score_rank_3'] = 1 df_new.loc[ df_new['CodeForce_ranking'] <= 500 , 'score_rank_4'] = 6 df_new.loc[ (df_new['CodeForce_ranking'] > 500) & (df_new['CodeForce_ranking'] <= 2000) , 'score_rank_4'] = 3 df_new.loc[ (df_new['CodeForce_ranking'] > 2000) & (df_new['CodeForce_ranking'] <= 5000) , 'score_rank_4'] = 1 df_res['score_rank'] = df_new['score_rank_1'] + df_new['score_rank_2'] + df_new['score_rank_3'] + df_new['score_rank_4'] df_res ``` ------- ## 4. Activity ``` df_new['updated_at'] = df_new['updated_at'].apply(pd.to_datetime) df_new['updated_at'] df_new['updated_at'] = df_new['updated_at'].dt.date df_new['updated_at'] = df_new['updated_at'].apply(pd.to_datetime) df_new['updated_at'] df_new['current_date'] = '2018-06-24' df_new['current_date']= df_new['current_date'].apply(pd.to_datetime) df_new['current_date'] from datetime import date df_new['last_active'] = df_new['current_date'] - df_new['updated_at'] df_new['last_active'].astype(str) df_new['last_active'] = df_new['last_active'].astype(str).str[0] df_new['last_active']= df_new['last_active'].apply(pd.to_numeric) df_new['last_active'] df_new.loc[df_new['last_active'] <= 7 , 'score_activity_1'] = 5 df_new.loc[ (df_new['last_active'] > 8) & (df_new['last_active'] <= 15) , 'score_activity_1'] = 3 df_new.loc[ (df_new['last_active'] > 15) & (df_new['last_active'] <= 30) , 'score_activity_1'] = 1 df_res['score_activity'] = df_new['score_activity_1'] ``` -------- ## 5. 
Followers ``` df_new.loc[ df_new['followers'] >= 200 , 'score_followers_1'] = 15 df_new.loc[ (df_new['followers'] < 200) & (df_new['followers'] >= 50) , 'score_followers_1'] = 10 df_new.loc[ (df_new['followers'] < 50) & (df_new['followers'] >= 30) , 'score_followers_1'] = 5 df_res['score_followers'] = df_new['score_followers_1'] df_res ``` ------ ## 6. Contributions ``` df_new.loc[ df_new['public_repos'] >= 30 , 'score_con_1'] = 5 df_new.loc[ (df_new['followers'] < 30) & (df_new['followers'] >= 10) , 'score_con_1'] = 3 df_new.loc[ (df_new['followers'] < 10) & (df_new['followers'] >= 3) , 'score_con_1'] = 1 df_res['score_contributions'] = df_new['score_con_1'] df_res ``` ------- ## 7. Final Score ``` df_res['total_score'] = df_res['score_rep'] + df_res['score_ps'] + df_res['score_rank'] + df_res['score_activity'] + df_res['score_followers'] + df_res['score_contributions'] (df_res.style .applymap(color_negative_red, subset=['total_score']) .set_table_styles(styles)) ``` ------- ------- ## Demo - Visualizing candidates perfomance throughs charts and graphs ------- ------- ## Tech Stack - Python: BeautifulSoup, Urllib, Pandas, Scipy - D3.js ------- ------- ## What's Next? - Scorey for non technical recruitments - Sales, Marketing and HR - Integrating Machine learning components for rule generation - Handling missing data exceptions dynamically ------- ------- Thank you!
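One of the "What's Next?" items, handling missing data dynamically, can be prototyped with a small change to the scoring step. The sketch below is not part of the original pipeline; it assumes `df_res` from the scoring cells above is still in scope.

```
# Sketch: if a candidate has no profile on one of the sites, the matching
# sub-score stays NaN and total_score becomes NaN as well. Treating a
# missing source as 0 before summing keeps the final score defined.
score_cols = ["score_rep", "score_ps", "score_rank",
              "score_activity", "score_followers", "score_contributions"]
df_res[score_cols] = df_res[score_cols].fillna(0)
df_res["total_score"] = df_res[score_cols].sum(axis=1)
df_res
```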
# 0. Magic

```
%load_ext autoreload
%autoreload 2

%matplotlib inline
```

# 1. Import

```
import torch
from torch import tensor
from torch import nn
import torch.nn.functional as F
from torch.utils import data

import matplotlib.pyplot as plt
from pathlib import Path
from IPython.core.debugger import set_trace

from fastai import datasets
from fastai.metrics import accuracy

import pickle, gzip, math, torch
import operator
```

# 2. Data

```
MNIST_URL='http://deeplearning.net/data/mnist/mnist.pkl'

def get_data():
    path = datasets.download_data(MNIST_URL, ext='.gz')
    with gzip.open(path, 'rb') as f:
        ((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding='latin-1')
    return map(tensor, (x_train, y_train, x_valid, y_valid))

x_train, y_train, x_valid, y_valid = get_data()
```

# 3. Prepare the Data

```
class Dataset(data.Dataset):
    def __init__(self, x, y): self.x, self.y = x, y
    def __len__(self): return len(self.x)
    def __getitem__(self, i): return self.x[i], self.y[i]

class DataLoader():
    def __init__(self, ds, bs): self.ds,self.bs = ds,bs
    def __iter__(self):
        for i in range(0, len(self.ds), self.bs): yield self.ds[i:i+self.bs]

# x = data, m = mean, s = standard deviation
def normalize(x, m, s): return (x-m)/s

n, m = x_train.shape
c = (y_train.max()+1).numpy()
n, m, c

train_mean, train_std = x_train.mean(), x_train.std()
x_train = normalize(x_train, train_mean, train_std)
x_valid = normalize(x_valid, train_mean, train_std)

# batch size
bs = 64

train_ds, valid_ds = Dataset(x_train, y_train), Dataset(x_valid, y_valid)
train_dl, valid_dl = DataLoader(train_ds, bs), DataLoader(valid_ds, bs)
```

# 4. Build the Model

Hyperparameters of the model

```
# learning rate
lr = 0.03
epoch = 1
nh = 50
```

Declare a function for creating the model

```
def get_model():
    # loss function
    loss_func = F.cross_entropy
    model = nn.Sequential(nn.Linear(m, nh), nn.ReLU(), nn.Linear(nh,c))
    return model, loss_func
```

# 5. Training Loop

We will train the model with the [Stochastic Gradient Descent (SGD)](https://www.bualabs.com/wp-admin/post.php?post=631&action=edit) algorithm and keep the Loss and Accuracy so we can plot them.

Declare a fit function that we can call whenever we want to train.

```
def fit():
    losses, metrics = [], []
    # e = epoch number
    for e in range(epoch):
        for xb, yb in train_dl:
            # Feedforward
            yhatb = model(xb)
            loss = loss_func(yhatb, yb)

            # Metrics
            acc = accuracy(yhatb, yb)
            losses.append(loss); metrics.append(acc)

            # Backpropagation
            loss.backward()
            optim.step()
            optim.zero_grad()

    plot_metrics(losses, metrics)
```

Declare a function to plot the Loss and Accuracy graphs.

```
def plot_metrics(losses, metrics):
    x = torch.arange(len(losses)).numpy()
    fig,ax = plt.subplots(figsize=(9, 9))
    ax.grid(True)
    ax.plot(x, losses, label="Loss")
    ax.plot(x, metrics, label="Accuracy")
    ax.legend(loc='upper right')
```

# 6. Refactor DataLoader

## 6.1 Random Sampler

When training a model, we should shuffle the data, like shuffling a deck of cards, so that the order of the training examples is different every time before it is fed to the model.

```
class Sampler():
    # ds = Dataset, bs = Batch Size, n = Length of Dataset
    def __init__(self, ds, bs, shuffle=False):
        self.n, self.bs, self.shuffle = len(ds), bs, shuffle
    def __iter__(self):
        self.idxs = torch.randperm(self.n) if self.shuffle else torch.arange(self.n)
        for i in range(0, self.n, self.bs): yield self.idxs[i:i+self.bs]

small_ds = Dataset(*train_ds[:10])
```

Test without shuffling

```
s = Sampler(small_ds, 3, shuffle=False)
[o for o in s]
```

Test with shuffling

```
s = Sampler(small_ds, 3, shuffle=True)
[o for o in s]
```

## 6.2 Collate

Once we have drawn shuffled samples out of the Dataset, e.g. (x7, y7), (x3, y3), (x1, y1), (...), we need a small function in the DataLoader to group them back together into a mini-batch, (x7, x3, x1, ...), (y7, y3, y1, ...), before passing it to the model.

```
def collate(b):
    xs, ys = zip(*b)
    return torch.stack(xs), torch.stack(ys)
```

Add features to the DataLoader so that it accepts a Sampler and a collate function.

```
class DataLoader2():
    def __init__(self, ds, sampler, collate_fn=collate):
        self.ds, self.sampler, self.collate_fn = ds, sampler, collate_fn
    def __iter__(self):
        for s in self.sampler: yield self.collate_fn([self.ds[i] for i in s])
```

We usually shuffle the training set, but the validation set does not need to be shuffled.

```
train_samp = Sampler(train_ds, bs, shuffle=True)
valid_samp = Sampler(valid_ds, bs, shuffle=False)

# train_dl = Training Set DataLoader, valid_dl = Validation Set DataLoader
train_dl = DataLoader2(train_ds, train_samp, collate)
valid_dl = DataLoader2(valid_ds, valid_samp, collate)

xb, yb = next(iter(train_dl))
yb[0], plt.imshow(xb[0].view(28, 28))

xb, yb = next(iter(train_dl))
yb[0], plt.imshow(xb[0].view(28, 28))

model, loss_func = get_model()
optim = torch.optim.SGD(model.parameters(), lr=lr)

fit()
```

## 6.3 PyTorch DataLoader

```
from torch.utils import data
```

The PyTorch DataLoader can take shuffle=True/False, or it can take a RandomSampler/SequentialSampler class instead.

```
# train_dl = data.DataLoader(train_ds, bs, shuffle=True, collate_fn=collate)
# valid_dl = data.DataLoader(valid_ds, bs, shuffle=False, collate_fn=collate)

train_dl = data.DataLoader(train_ds, bs, sampler=data.RandomSampler(train_ds), collate_fn=collate, num_workers=8)
valid_dl = data.DataLoader(valid_ds, bs, sampler=data.SequentialSampler(valid_ds), collate_fn=collate, num_workers=8)
```

We can use num_workers to make the PyTorch DataLoader spawn subprocesses that load the data in parallel, which makes loading large datasets faster.

```
model, loss_func = get_model()
optim = torch.optim.SGD(model.parameters(), lr=lr)

fit()
```

# 7. Summary

1. When training a model, we should not feed the model the data in exactly the same order in every epoch, so we built a new version of the DataLoader that shuffles the training examples before feeding them to the model.
1. When the data is shuffled, we need a step that puts the samples back together into mini-batches, called collate.
1. PyTorch's DataLoader handles all of this for us, and also provides the num_workers feature to speed up data loading by running it in parallel.

# Credit

* https://course.fast.ai/videos/?lesson=9
* https://pytorch.org/docs/stable/data.html

```
```
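As a small addendum to section 6.3 (not part of the original notebook): PyTorch also ships a default collate function, so for simple (x, y) tensor pairs the explicit `collate` above is optional. A minimal sketch, assuming `train_ds` and `bs` from above are still in scope:

```
# Sketch: PyTorch's default collate already stacks (x, y) tensor pairs,
# so for this Dataset the explicit collate_fn is optional.
default_dl = data.DataLoader(train_ds, batch_size=bs, shuffle=True, num_workers=2)
xb, yb = next(iter(default_dl))
print(xb.shape, yb.shape)  # expected to be roughly [64, 784] and [64]
```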
# Counterfactual with Reinforcement Learning (CFRL) on Adult Census This method is described in [Model-agnostic and Scalable Counterfactual Explanations via Reinforcement Learning](https://arxiv.org/abs/2106.02597) and can generate counterfactual instances for any black-box model. The usual optimization procedure is transformed into a learnable process allowing to generate batches of counterfactual instances in a single forward pass even for high dimensional data. The training pipeline is model-agnostic and relies only on prediction feedback by querying the black-box model. Furthermore, the method allows target and feature conditioning. **We exemplify the use case for the TensorFlow backend. This means that all models: the autoencoder, the actor and the critic are TensorFlow models. Our implementation supports PyTorch backend as well.** CFRL uses [Deep Deterministic Policy Gradient (DDPG)](https://arxiv.org/abs/1509.02971) by interleaving a state-action function approximator called critic, with a learning an approximator called actor to predict the optimal action. The method assumes that the critic is differentiable with respect to the action argument, thus allowing to optimize the actor's parameters efficiently through gradient-based methods. The DDPG algorithm requires two separate networks, an actor $\mu$ and a critic $Q$. Given the encoded representation of the input instance $z = enc(x)$, the model prediction $y_M$, the target prediction $y_T$ and the conditioning vector $c$, the actor outputs the counterfactual’s latent representation $z_{CF} = \mu(z, y_M, y_T, c)$. The decoder then projects the embedding $z_{CF}$ back to the original input space, followed by optional post-processing. The training step consists of simultaneously optimizing the actor and critic networks. The critic regresses on the reward $R$ determined by the model prediction, while the actor maximizes the critic’s output for the given instance through $L_{max}$. The actor also minimizes two objectives to encourage the generation of sparse, in-distribution counterfactuals. The sparsity loss $L_{sparsity}$ operates on the decoded counterfactual $x_{CF}$ and combines the $L_1$ loss over the standardized numerical features and the $L_0$ loss over the categorical ones. The consistency loss $L_{consist}$ aims to encode the counterfactual $x_{CF}$ back to the same latent representation where it was decoded from and helps to produce in-distribution counterfactual instances. 
Formally, the actor's loss can be written as: $L_{actor} = L_{max} + \lambda_{1}L_{sparsity} + \lambda_{2}L_{consistency}$ This example will use the [xgboost](https://github.com/dmlc/xgboost) library, which can be installed with: ``` !pip install xgboost import os import numpy as np import pandas as pd from copy import deepcopy from typing import List, Tuple, Dict, Callable import tensorflow as tf import tensorflow.keras as keras from sklearn.compose import ColumnTransformer from sklearn.impute import SimpleImputer from sklearn.preprocessing import StandardScaler, OneHotEncoder from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score from xgboost import XGBClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.tree import DecisionTreeClassifier from sklearn.linear_model import LogisticRegression from alibi.explainers import CounterfactualRLTabular, CounterfactualRL from alibi.datasets import fetch_adult from alibi.models.tensorflow.autoencoder import HeAE from alibi.models.tensorflow.actor_critic import Actor, Critic from alibi.models.tensorflow.cfrl_models import ADULTEncoder, ADULTDecoder from alibi.explainers.cfrl_base import Callback from alibi.explainers.backends.cfrl_tabular import get_he_preprocessor, get_statistics, \ get_conditional_vector, apply_category_mapping ``` ### Load Adult Census Dataset ``` # Fetch adult dataset adult = fetch_adult() # Separate columns in numerical and categorical. categorical_names = [adult.feature_names[i] for i in adult.category_map.keys()] categorical_ids = list(adult.category_map.keys()) numerical_names = [name for i, name in enumerate(adult.feature_names) if i not in adult.category_map.keys()] numerical_ids = [i for i in range(len(adult.feature_names)) if i not in adult.category_map.keys()] # Split data into train and test X, Y = adult.data, adult.target X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=13) ``` ### Train black-box classifier ``` # Define numerical standard scaler. num_transf = StandardScaler() # Define categorical one-hot encoder. cat_transf = OneHotEncoder( categories=[range(len(x)) for x in adult.category_map.values()], handle_unknown="ignore" ) # Define column transformer preprocessor = ColumnTransformer( transformers=[ ("cat", cat_transf, categorical_ids), ("num", num_transf, numerical_ids), ], sparse_threshold=0 ) # Fit preprocessor. preprocessor.fit(X_train) # Preprocess train and test dataset. X_train_ohe = preprocessor.transform(X_train) X_test_ohe = preprocessor.transform(X_test) # Select one of the below classifiers. # clf = XGBClassifier(min_child_weight=0.5, max_depth=3, gamma=0.2) # clf = LogisticRegression(C=10) # clf = DecisionTreeClassifier(max_depth=10, min_samples_split=5) clf = RandomForestClassifier(max_depth=15, min_samples_split=10, n_estimators=50) # Fit the classifier. clf.fit(X_train_ohe, Y_train) ``` ### Define the predictor (black-box) Now that we've trained the classifier, we can define the black-box model. Note that the output of the black-box is a distribution which can be either a soft-label distribution (probabilities/logits for each class) or a hard-label distribution (one-hot encoding). Internally, CFRL takes the `argmax`. Moreover the output **DOES NOT HAVE TO BE DIFFERENTIABLE**. ``` # Define prediction function. predictor = lambda x: clf.predict_proba(preprocessor.transform(x)) # Compute accuracy. 
acc = accuracy_score(y_true=Y_test, y_pred=predictor(X_test).argmax(axis=1)) print("Accuracy: %.3f" % acc) ``` ### Define and train autoencoder Instead of directly modelling the perturbation vector in the potentially high-dimensional input space, we first train an autoencoder. The weights of the encoder are frozen and the actor applies the counterfactual perturbations in the latent space of the encoder. The pre-trained decoder maps the counterfactual embedding back to the input feature space. The autoencoder follows a standard design. The model is composed from two submodules, the encoder and the decoder. The forward pass consists of passing the input to the encoder, obtain the input embedding and pass the embedding through the decoder. ```python class HeAE(keras.Model): def __init__(self, encoder: keras.Model, decoder: keras.Model, **kwargs) -> None: super().__init__(**kwargs) self.encoder = encoder self.decoder = decoder def call(self, x: tf.Tensor, **kwargs): z = self.encoder(x) x_hat = self.decoder(z) return x_hat ``` The heterogeneous variant used in this example uses an additional type checking to ensure that the output of the decoder is a list of tensors. Heterogeneous dataset require special treatment. In this work we modeled the numerical features by normal distributions with constant standard deviation and categorical features by categorical distributions. Due to the choice of feature modeling, some numerical features can end up having different types than the original numerical features. For example, a feature like `Age` having the type of `int` can become a `float` due to the autoencoder reconstruction (e.g., `Age=26 -> Age=26.3`). This behavior can be undesirable. Thus we performed casting when process the output of the autoencoder (decoder component). ``` # Define attribute types, required for datatype conversion. feature_types = {"Age": int, "Capital Gain": int, "Capital Loss": int, "Hours per week": int} # Define data preprocessor and inverse preprocessor. The invers preprocessor include datatype conversions. heae_preprocessor, heae_inv_preprocessor = get_he_preprocessor(X=X_train, feature_names=adult.feature_names, category_map=adult.category_map, feature_types=feature_types) # Define trainset trainset_input = heae_preprocessor(X_train).astype(np.float32) trainset_outputs = { "output_1": X_train_ohe[:, :len(numerical_ids)] } for i, cat_id in enumerate(categorical_ids): trainset_outputs.update({ f"output_{i+2}": X_train[:, cat_id] }) trainset = tf.data.Dataset.from_tensor_slices((trainset_input, trainset_outputs)) trainset = trainset.shuffle(1024).batch(128, drop_remainder=True) # Define autoencoder path and create dir if it doesn't exist. heae_path = os.path.join("tensorflow", "ADULT_autoencoder") if not os.path.exists(heae_path): os.makedirs(heae_path) # Define constants. EPOCHS = 50 # epochs to train the autoencoder HIDDEN_DIM = 128 # hidden dimension of the autoencoder LATENT_DIM = 15 # define latent dimension # Define output dimensions. OUTPUT_DIMS = [len(numerical_ids)] OUTPUT_DIMS += [len(adult.category_map[cat_id]) for cat_id in categorical_ids] # Define the heterogeneous auto-encoder. heae = HeAE(encoder=ADULTEncoder(hidden_dim=HIDDEN_DIM, latent_dim=LATENT_DIM), decoder=ADULTDecoder(hidden_dim=HIDDEN_DIM, output_dims=OUTPUT_DIMS)) # Define loss functions. he_loss = [keras.losses.MeanSquaredError()] he_loss_weights = [1.] # Add categorical losses. 
for i in range(len(categorical_names)): he_loss.append(keras.losses.SparseCategoricalCrossentropy(from_logits=True)) he_loss_weights.append(1./len(categorical_names)) # Define metrics. metrics = {} for i, cat_name in enumerate(categorical_names): metrics.update({f"output_{i+2}": keras.metrics.SparseCategoricalAccuracy()}) # Compile model. heae.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3), loss=he_loss, loss_weights=he_loss_weights, metrics=metrics) if len(os.listdir(heae_path)) == 0: # Fit and save autoencoder. heae.fit(trainset, epochs=EPOCHS) heae.save(heae_path, save_format="tf") else: # Load the model. heae = keras.models.load_model(heae_path, compile=False) ``` ### Counterfactual with Reinforcement Learning ``` # Define constants COEFF_SPARSITY = 0.5 # sparisty coefficient COEFF_CONSISTENCY = 0.5 # consisteny coefficient TRAIN_STEPS = 10000 # number of training steps -> consider increasing the number of steps BATCH_SIZE = 100 # batch size ``` #### Define dataset specific attributes and constraints A desirable property of a method for generating counterfactuals is to allow feature conditioning. Real-world datasets usually include immutable features such as `Sex` or `Race`, which should remain unchanged throughout the counterfactual search procedure. Similarly, a numerical feature such as `Age` should only increase for a counterfactual to be actionable. ``` # Define immutable features. immutable_features = ['Marital Status', 'Relationship', 'Race', 'Sex'] # Define ranges. This means that the `Age` feature can not decrease. ranges = {'Age': [0.0, 1.0]} ``` #### Define and fit the explainer ``` explainer = CounterfactualRLTabular(predictor=predictor, encoder=heae.encoder, decoder=heae.decoder, latent_dim=LATENT_DIM, encoder_preprocessor=heae_preprocessor, decoder_inv_preprocessor=heae_inv_preprocessor, coeff_sparsity=COEFF_SPARSITY, coeff_consistency=COEFF_CONSISTENCY, category_map=adult.category_map, feature_names=adult.feature_names, ranges=ranges, immutable_features=immutable_features, train_steps=TRAIN_STEPS, batch_size=BATCH_SIZE, backend="tensorflow") # Fit the explainer. explainer = explainer.fit(X=X_train) ``` #### Test explainer ``` # Select some positive examples. X_positive = X_test[np.argmax(predictor(X_test), axis=1) == 1] X = X_positive[:1000] Y_t = np.array([0]) C = [{"Age": [0, 20], "Workclass": ["State-gov", "?", "Local-gov"]}] # Generate counterfactual instances. explanation = explainer.explain(X, Y_t, C) # Concat labels to the original instances. orig = np.concatenate( [explanation.data['orig']['X'], explanation.data['orig']['class']], axis=1 ) # Concat labels to the counterfactual instances. cf = np.concatenate( [explanation.data['cf']['X'], explanation.data['cf']['class']], axis=1 ) # Define new feature names and category map by including the label. feature_names = adult.feature_names + ["Label"] category_map = deepcopy(adult.category_map) category_map.update({feature_names.index("Label"): adult.target_names}) # Replace label encodings with strings. orig_pd = pd.DataFrame( apply_category_mapping(orig, category_map), columns=feature_names ) cf_pd = pd.DataFrame( apply_category_mapping(cf, category_map), columns=feature_names ) orig_pd.head(n=10) cf_pd.head(n=10) ``` #### Diversity ``` # Generate counterfactual instances. X = X_positive[1].reshape(1, -1) explanation = explainer.explain(X=X, Y_t=Y_t, C=C, diversity=True, num_samples=100, batch_size=10) # Concat label column. 
orig = np.concatenate( [explanation.data['orig']['X'], explanation.data['orig']['class']], axis=1 ) cf = np.concatenate( [explanation.data['cf']['X'], explanation.data['cf']['class']], axis=1 ) # Transfrom label encodings to string. orig_pd = pd.DataFrame( apply_category_mapping(orig, category_map), columns=feature_names, ) cf_pd = pd.DataFrame( apply_category_mapping(cf, category_map), columns=feature_names, ) orig_pd.head(n=5) cf_pd.head(n=5) ``` ### Logging Logging is clearly important when dealing with deep learning models. Thus, we provide an interface to write custom callbacks for logging purposes after each training step which we defined [here](../api/alibi.explainers.cfrl_base.rst#alibi.explainers.cfrl_base.Callback). In the following cells we provide some example to log in **Weights and Biases**. #### Logging reward callback ``` class RewardCallback(Callback): def __call__(self, step: int, update: int, model: CounterfactualRL, sample: Dict[str, np.ndarray], losses: Dict[str, float]): if (step + update) % 100 != 0: return # get the counterfactual and target Y_t = sample["Y_t"] X_cf = model.params["decoder_inv_preprocessor"](sample["X_cf"]) # get prediction label Y_m_cf = predictor(X_cf) # compute reward reward = np.mean(model.params["reward_func"](Y_m_cf, Y_t)) wandb.log({"reward": reward}) ``` #### Logging losses callback ``` class LossCallback(Callback): def __call__(self, step: int, update: int, model: CounterfactualRL, sample: Dict[str, np.ndarray], losses: Dict[str, float]): # Log training losses. if (step + update) % 100 == 0: wandb.log(losses) ``` #### Logging tables callback ``` class TablesCallback(Callback): def __call__(self, step: int, update: int, model: CounterfactualRL, sample: Dict[str, np.ndarray], losses: Dict[str, float]): # Log every 1000 steps if step % 1000 != 0: return # Define number of samples to be displayed. NUM_SAMPLES = 5 X = heae_inv_preprocessor(sample["X"][:NUM_SAMPLES]) # input instance X_cf = heae_inv_preprocessor(sample["X_cf"][:NUM_SAMPLES]) # counterfactual Y_m = np.argmax(sample["Y_m"][:NUM_SAMPLES], axis=1).astype(int).reshape(-1, 1) # input labels Y_t = np.argmax(sample["Y_t"][:NUM_SAMPLES], axis=1).astype(int).reshape(-1, 1) # target labels Y_m_cf = np.argmax(predictor(X_cf), axis=1).astype(int).reshape(-1, 1) # counterfactual labels # Define feature names and category map for input. feature_names = adult.feature_names + ["Label"] category_map = deepcopy(adult.category_map) category_map.update({feature_names.index("Label"): adult.target_names}) # Construct input array. inputs = np.concatenate([X, Y_m], axis=1) inputs = pd.DataFrame(apply_category_mapping(inputs, category_map), columns=feature_names) # Define feature names and category map for counterfactual output. feature_names += ["Target"] category_map.update({feature_names.index("Target"): adult.target_names}) # Construct output array. outputs = np.concatenate([X_cf, Y_m_cf, Y_t], axis=1) outputs = pd.DataFrame(apply_category_mapping(outputs, category_map), columns=feature_names) # Log table. wandb.log({ "Input": wandb.Table(dataframe=inputs), "Output": wandb.Table(dataframe=outputs) }) ``` Having defined the callbacks, we can define a new explainer that will include logging. ```python import wandb # Initialize wandb. wandb_project = "Adult Census Counterfactual with Reinforcement Learning" wandb.init(project=wandb_project) # Define explainer as before and include callbacks. 
explainer = CounterfactualRLTabular(..., callbacks=[LossCallback(), RewardCallback(), TablesCallback()])

# Fit the explainer.
explainer = explainer.fit(X=X_train)

# Close wandb.
wandb.finish()
```
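As an optional check (not in the original example), one can quantify how sparse the generated counterfactuals are by counting the number of changed features per instance. The sketch below assumes it is run right after the "Test explainer" section, where `orig_pd` and `cf_pd` contain one row per input instance.

```python
# Optional sketch: measure sparsity as the number of changed features per
# counterfactual. Assumes orig_pd and cf_pd from the "Test explainer" section
# (same shape and row order); "Label" is excluded since it is expected to change.
changed = (orig_pd.drop(columns=["Label"]) != cf_pd.drop(columns=["Label"]))
print("Average number of changed features:", changed.sum(axis=1).mean())
print("Most frequently changed features:")
print(changed.sum(axis=0).sort_values(ascending=False).head())
```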
# Multilayer Perceptrons with scikit-learn **XBUS-512: Introduction to AI and Deep Learning** In this exercise, we will see how to build a preliminary neural model using the familiar scikit-learn library. While scikit-learn is not a deep learning library, it does provide basic implementations of the multilayer perceptron (MLP) for both classification and regression. Thanks to [this team](https://github.com/Wall-eSociety/CommentVolumeML) for figuring out the labels for this dataset! ## Imports ``` import os import time import pickle import zipfile import requests import numpy as np import pandas as pd from sklearn.preprocessing import MinMaxScaler from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split as tts from sklearn.neural_network import MLPClassifier, MLPRegressor from yellowbrick.regressor import PredictionError, ResidualsPlot ``` ## Download the data ``` def fetch_data(url, fname): """ Helper method to retreive the data from the UCI ML Repository. """ response = requests.get(url) outpath = os.path.abspath(fname) with open(outpath, "wb") as f: f.write(response.content) return outpath # Fetch and unzip the data FIXTURES = os.path.join("..", "fixtures") if not os.path.exists(FIXTURES): os.makedirs(FIXTURES) URL = "https://archive.ics.uci.edu/ml/machine-learning-databases/00363/Dataset.zip" ZIPPED_FILES = "facebook_data.zip" UNZIPPED_FILES = "facebook_data" zipped_data = fetch_data(URL, os.path.join(FIXTURES, ZIPPED_FILES)) with zipfile.ZipFile(os.path.join(FIXTURES, ZIPPED_FILES), "r") as zfiles: zfiles.extractall(os.path.join(FIXTURES, UNZIPPED_FILES)) data = pd.read_csv( os.path.join( FIXTURES, UNZIPPED_FILES, "Dataset", "Training", "Features_Variant_2.csv" ), header=None ) data.columns = [ "likes", "views", "returns", "category", "derived_1", "derived_2", "derived_3", "derived_4", "derived_5", "derived_6", "derived_7", "derived_8", "derived_9", "derived_10", "derived_11", "derived_12", "derived_13", "derived_14", "derived_15", "derived_16", "derived_17", "derived_18", "derived_19", "derived_20", "derived_21", "derived_22", "derived_23", "derived_24", "derived_25", "cc_1", "cc_2", "cc_3", "cc_4", "cc_5", "base_time", "length", "shares", "status", "h_local", "sunday_post", "monday_post", "tuesday_post", "wednesday_post", "thursday_post", "friday_post", "saturday_post", "sunday_base", "monday_base", "tuesday_base", "wednesday_base", "thursday_base", "friday_base", "saturday_base", "target" ] data.describe() def prepare_for_regression(dataframe): """ Prepare the data for a regression problem where we will attempt to regress the number of comments that a Facebook post will get given other features of the data. 
Returns a tuple containing an nd array of features (X) and a 1d array for the target (y) """ features = [ "likes", "views", "returns", "category", "derived_1", "derived_2", "derived_3", "derived_4", "derived_5", "derived_6", "derived_7", "derived_8", "derived_9", "derived_10", "derived_11", "derived_12", "derived_13", "derived_14", "derived_15", "derived_16", "derived_17", "derived_18", "derived_19", "derived_20", "derived_21", "derived_22", "derived_23", "derived_24", "derived_25", "cc_1", "cc_2", "cc_3", "cc_4", "cc_5", "base_time", "length", "shares", "status", "h_local", "sunday_post", "monday_post", "tuesday_post", "wednesday_post", "thursday_post", "friday_post", "saturday_post", "sunday_base", "monday_base", "tuesday_base", "wednesday_base", "thursday_base", "friday_base", "saturday_base" ] target = "target" # MLP is sensitive to feature scaling! X = MinMaxScaler().fit_transform(dataframe[features].values) y = dataframe[target].values return X, y def prepare_for_classification(dataframe): """ Prepare the data for a classification problem where we will attempt to predict the category of a Facebook post given features of the data. Returns a tuple containing an nd array of features (X) and a 1d array for the target (y) """ features = [ "likes", "views", "returns", "derived_1", "derived_2", "derived_3", "derived_4", "derived_5", "derived_6", "derived_7", "derived_8", "derived_9", "derived_10", "derived_11", "derived_12", "derived_13", "derived_14", "derived_15", "derived_16", "derived_17", "derived_18", "derived_19", "derived_20", "derived_21", "derived_22", "derived_23", "derived_24", "derived_25", "cc_1", "cc_2", "cc_3", "cc_4", "cc_5", "base_time", "length", "shares", "status", "h_local", "sunday_post", "monday_post", "tuesday_post", "wednesday_post", "thursday_post", "friday_post", "saturday_post", "sunday_base", "monday_base", "tuesday_base", "wednesday_base", "thursday_base", "friday_base", "saturday_base", "target" ] target = "category" # MLP is sensitive to feature scaling! 
X = MinMaxScaler().fit_transform(dataframe[features].values) y = dataframe[target].values return X, y # Prepare the data and break in to training and test splits X, y = prepare_for_regression(data) X_train, X_test, y_train, y_test = tts(X, y, test_size=0.2, random_state=42) ``` ## Instantiate the model, set hyperparameters, and train ``` start = time.time() model = MLPRegressor( hidden_layer_sizes=(100, 50, 25), activation="relu", solver="adam", batch_size=2, max_iter=100, verbose=True ) model.fit(X_train, y_train) print("Training took {} seconds".format( time.time() - start )) pred_train = model.predict(X_train) print("Training error: {}".format( np.sqrt(mean_squared_error(y_train, pred_train)) )) pred = model.predict(X_test) print("Test error: {}".format( np.sqrt(mean_squared_error(y_test, pred)) )) ``` ## Visualize the results using Yellowbrick ``` visualizer = PredictionError(model) visualizer.fit(X_train, y_train) visualizer.score(X_test, y_test) visualizer.show() visualizer = ResidualsPlot(model) visualizer.fit(X_train, y_train) visualizer.score(X_test, y_test) visualizer.show() ``` ## Pickle the model ``` RESULTS = os.path.join("..", "results") if not os.path.exists(RESULTS): os.makedirs(RESULTS) filename = os.path.join(RESULTS, "sklearn_model.pkl") pickle.dump(model, open(filename, "wb")) ``` ## Restore the model and run live predictions ``` unpickled_model = pickle.load(open(filename, "rb")) new_prediction = unpickled_model.predict(X_test[10].reshape(1, -1)) print("Predicted value: ", new_prediction[0]) print("Actual value: ", y_test[10]) ``` ## Takeaways from our scikit-learn prototype: - sklearn API is convenient - can tune some hyperparams - easy to visualize & diagnose with Yellowbrick - tough to tune for overfit model... would be nice to have dropout, for instance - sloooooow
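On the overfitting point above: scikit-learn has no dropout, but `MLPRegressor` does expose L2 regularization and early stopping, which can rein in an overfit model somewhat. A rough sketch (the hyperparameter values below are arbitrary, not tuned for this dataset):

```
start = time.time()

regularized = MLPRegressor(
    hidden_layer_sizes=(100, 50, 25),
    activation="relu",
    solver="adam",
    alpha=0.01,              # L2 penalty on the weights
    early_stopping=True,     # hold out part of the training data, stop when it stops improving
    validation_fraction=0.1,
    n_iter_no_change=10,
    max_iter=200,
)
regularized.fit(X_train, y_train)

print("Training took {} seconds".format(time.time() - start))
print("Test error: {}".format(
    np.sqrt(mean_squared_error(y_test, regularized.predict(X_test)))
))
```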
**Appendix D – Autodiff** _This notebook contains toy implementations of various autodiff techniques, to explain how they work._ # Setup # Introduction Suppose we want to compute the gradients of the function $f(x,y)=x^2y + y + 2$ with regards to the parameters x and y: ``` def f(x,y): return x*x*y + y + 2 ``` One approach is to solve this analytically: $\dfrac{\partial f}{\partial x} = 2xy$ $\dfrac{\partial f}{\partial y} = x^2 + 1$ ``` def df(x,y): return 2*x*y, x*x + 1 ``` So for example $\dfrac{\partial f}{\partial x}(3,4) = 24$ and $\dfrac{\partial f}{\partial y}(3,4) = 10$. ``` df(3, 4) ``` Perfect! We can also find the equations for the second order derivatives (also called Hessians): $\dfrac{\partial^2 f}{\partial x \partial x} = \dfrac{\partial (2xy)}{\partial x} = 2y$ $\dfrac{\partial^2 f}{\partial x \partial y} = \dfrac{\partial (2xy)}{\partial y} = 2x$ $\dfrac{\partial^2 f}{\partial y \partial x} = \dfrac{\partial (x^2 + 1)}{\partial x} = 2x$ $\dfrac{\partial^2 f}{\partial y \partial y} = \dfrac{\partial (x^2 + 1)}{\partial y} = 0$ At x=3 and y=4, these Hessians are respectively 8, 6, 6, 0. Let's use the equations above to compute them: ``` def d2f(x, y): return [2*y, 2*x], [2*x, 0] d2f(3, 4) ``` Perfect, but this requires some mathematical work. It is not too hard in this case, but for a deep neural network, it is practically impossible to compute the derivatives this way. So let's look at various ways to automate this! # Numeric differentiation Here, we compute an approximation of the gradients using the equation: $\dfrac{\partial f}{\partial x} = \displaystyle{\lim_{\epsilon \to 0}}\dfrac{f(x+\epsilon, y) - f(x, y)}{\epsilon}$ (and there is a similar definition for $\dfrac{\partial f}{\partial y}$). ``` def gradients(func, vars_list, eps=0.0001): partial_derivatives = [] base_func_eval = func(*vars_list) for idx in range(len(vars_list)): tweaked_vars = vars_list[:] tweaked_vars[idx] += eps tweaked_func_eval = func(*tweaked_vars) derivative = (tweaked_func_eval - base_func_eval) / eps partial_derivatives.append(derivative) return partial_derivatives def df(x, y): return gradients(f, [x, y]) df(3, 4) ``` It works well! The good news is that it is pretty easy to compute the Hessians. First let's create functions that compute the first order derivatives (also called Jacobians): ``` def dfdx(x, y): return gradients(f, [x,y])[0] def dfdy(x, y): return gradients(f, [x,y])[1] dfdx(3., 4.), dfdy(3., 4.) ``` Now we can simply apply the `gradients()` function to these functions: ``` def d2f(x, y): return [gradients(dfdx, [x, y]), gradients(dfdy, [x, y])] d2f(3, 4) ``` So everything works well, but the result is approximate, and computing the gradients of a function with regards to $n$ variables requires calling that function at least $n$ times. In deep neural nets, there are often thousands of parameters to tweak using gradient descent (which requires computing the gradients of the loss function with regards to each of these parameters), so this approach would be much too slow. ## Implementing a Toy Computation Graph Rather than this numerical approach, let's implement some symbolic autodiff techniques. For this, we will need to define classes to represent constants, variables and operations. 
``` class Const(object): def __init__(self, value): self.value = value def evaluate(self): return self.value def __str__(self): return str(self.value) class Var(object): def __init__(self, name, init_value=0): self.value = init_value self.name = name def evaluate(self): return self.value def __str__(self): return self.name class BinaryOperator(object): def __init__(self, a, b): self.a = a self.b = b class Add(BinaryOperator): def evaluate(self): return self.a.evaluate() + self.b.evaluate() def __str__(self): return "{} + {}".format(self.a, self.b) class Mul(BinaryOperator): def evaluate(self): return self.a.evaluate() * self.b.evaluate() def __str__(self): return "({}) * ({})".format(self.a, self.b) ``` Good, now we can build a computation graph to represent the function $f$: ``` x = Var("x") y = Var("y") f = Add(Mul(Mul(x, x), y), Add(y, Const(2))) # f(x,y) = x²y + y + 2 ``` And we can run this graph to compute $f$ at any point, for example $f(3, 4)$. ``` x.value = 3 y.value = 4 f.evaluate() ``` Perfect, it found the ultimate answer. ## Computing gradients The autodiff methods we will present below are all based on the *chain rule*. Suppose we have two functions $u$ and $v$, and we apply them sequentially to some input $x$, and we get the result $z$. So we have $z = v(u(x))$, which we can rewrite as $z = v(s)$ and $s = u(x)$. Now we can apply the chain rule to get the partial derivative of the output $z$ with regards to the input $x$: $ \dfrac{\partial z}{\partial x} = \dfrac{\partial s}{\partial x} \cdot \dfrac{\partial z}{\partial s}$ Now if $z$ is the output of a sequence of functions which have intermediate outputs $s_1, s_2, ..., s_n$, the chain rule still applies: $ \dfrac{\partial z}{\partial x} = \dfrac{\partial s_1}{\partial x} \cdot \dfrac{\partial s_2}{\partial s_1} \cdot \dfrac{\partial s_3}{\partial s_2} \cdot \dots \cdot \dfrac{\partial s_{n-1}}{\partial s_{n-2}} \cdot \dfrac{\partial s_n}{\partial s_{n-1}} \cdot \dfrac{\partial z}{\partial s_n}$ In forward mode autodiff, the algorithm computes these terms "forward" (i.e., in the same order as the computations required to compute the output $z$), that is from left to right: first $\dfrac{\partial s_1}{\partial x}$, then $\dfrac{\partial s_2}{\partial s_1}$, and so on. In reverse mode autodiff, the algorithm computes these terms "backwards", from right to left: first $\dfrac{\partial z}{\partial s_n}$, then $\dfrac{\partial s_n}{\partial s_{n-1}}$, and so on. For example, suppose you want to compute the derivative of the function $z(x)=\sin(x^2)$ at x=3, using forward mode autodiff. The algorithm would first compute the partial derivative $\dfrac{\partial s_1}{\partial x}=\dfrac{\partial x^2}{\partial x}=2x=6$. Next, it would compute $\dfrac{\partial z}{\partial x}=\dfrac{\partial s_1}{\partial x}\cdot\dfrac{\partial z}{\partial s_1}= 6 \cdot \dfrac{\partial \sin(s_1)}{\partial s_1}=6 \cdot \cos(s_1)=6 \cdot \cos(3^2)\approx-5.46$. Let's verify this result using the `gradients()` function defined earlier: ``` from math import sin def z(x): return sin(x**2) gradients(z, [3]) ``` Look good. Now let's do the same thing using reverse mode autodiff. This time the algorithm would start from the right hand side so it would compute $\dfrac{\partial z}{\partial s_1} = \dfrac{\partial \sin(s_1)}{\partial s_1}=\cos(s_1)=\cos(3^2)\approx -0.91$. 
Next it would compute $\dfrac{\partial z}{\partial x}=\dfrac{\partial s_1}{\partial x}\cdot\dfrac{\partial z}{\partial s_1} \approx \dfrac{\partial s_1}{\partial x} \cdot -0.91 = \dfrac{\partial x^2}{\partial x} \cdot -0.91=2x \cdot -0.91 = 6\cdot-0.91=-5.46$. Of course both approaches give the same result (except for rounding errors), and with a single input and output they involve the same number of computations. But when there are several inputs or outputs, they can have very different performance. Indeed, if there are many inputs, the right-most terms will be needed to compute the partial derivatives with regards to each input, so it is a good idea to compute these right-most terms first. That means using reverse-mode autodiff. This way, the right-most terms can be computed just once and used to compute all the partial derivatives. Conversely, if there are many outputs, forward-mode is generally preferable because the left-most terms can be computed just once to compute the partial derivatives of the different outputs. In Deep Learning, there are typically thousands of model parameters, meaning there are lots of inputs, but few outputs. In fact, there is generally just one output during training: the loss. This is why reverse mode autodiff is used in TensorFlow and all major Deep Learning libraries. There's one additional complexity in reverse mode autodiff: the value of $s_i$ is generally required when computing $\dfrac{\partial s_{i+1}}{\partial s_i}$, and computing $s_i$ requires first computing $s_{i-1}$, which requires computing $s_{i-2}$, and so on. So basically, a first pass forward through the network is required to compute $s_1$, $s_2$, $s_3$, $\dots$, $s_{n-1}$ and $s_n$, and then the algorithm can compute the partial derivatives from right to left. Storing all the intermediate values $s_i$ in RAM is sometimes a problem, especially when handling images, and when using GPUs which often have limited RAM: to limit this problem, one can reduce the number of layers in the neural network, or configure TensorFlow to make it swap these values from GPU RAM to CPU RAM. Another approach is to only cache every other intermediate value, $s_1$, $s_3$, $s_5$, $\dots$, $s_{n-4}$, $s_{n-2}$ and $s_n$. This means that when the algorithm computes the partial derivatives, if an intermediate value $s_i$ is missing, it will need to recompute it based on the previous intermediate value $s_{i-1}$. This trades off CPU for RAM (if you are interested, check out [this paper](https://pdfs.semanticscholar.org/f61e/9fd5a4878e1493f7a6b03774a61c17b7e9a4.pdf)). ### Forward mode autodiff ``` Const.gradient = lambda self, var: Const(0) Var.gradient = lambda self, var: Const(1) if self is var else Const(0) Add.gradient = lambda self, var: Add(self.a.gradient(var), self.b.gradient(var)) Mul.gradient = lambda self, var: Add(Mul(self.a, self.b.gradient(var)), Mul(self.a.gradient(var), self.b)) x = Var(name="x", init_value=3.) y = Var(name="y", init_value=4.) 
f = Add(Mul(Mul(x, x), y), Add(y, Const(2))) # f(x,y) = x²y + y + 2 dfdx = f.gradient(x) # 2xy dfdy = f.gradient(y) # x² + 1 dfdx.evaluate(), dfdy.evaluate() ``` Since the output of the `gradient()` method is fully symbolic, we are not limited to the first order derivatives; we can also compute second order derivatives, and so on: ``` d2fdxdx = dfdx.gradient(x) # 2y d2fdxdy = dfdx.gradient(y) # 2x d2fdydx = dfdy.gradient(x) # 2x d2fdydy = dfdy.gradient(y) # 0 [[d2fdxdx.evaluate(), d2fdxdy.evaluate()], [d2fdydx.evaluate(), d2fdydy.evaluate()]] ``` Note that the result is now exact, not an approximation (up to the limit of the machine's float precision, of course). ### Forward mode autodiff using dual numbers A nice way to apply forward mode autodiff is to use [dual numbers](https://en.wikipedia.org/wiki/Dual_number). In short, a dual number $z$ has the form $z = a + b\epsilon$, where $a$ and $b$ are real numbers, and $\epsilon$ is an infinitesimal number, positive but smaller than all real numbers, and such that $\epsilon^2=0$. It can be shown that $f(x + \epsilon) = f(x) + \dfrac{\partial f}{\partial x}\epsilon$, so simply by computing $f(x + \epsilon)$ we get both the value of $f(x)$ and the partial derivative of $f$ with regards to $x$. Dual numbers have their own arithmetic rules, which are generally quite natural. For example: **Addition** $(a_1 + b_1\epsilon) + (a_2 + b_2\epsilon) = (a_1 + a_2) + (b_1 + b_2)\epsilon$ **Subtraction** $(a_1 + b_1\epsilon) - (a_2 + b_2\epsilon) = (a_1 - a_2) + (b_1 - b_2)\epsilon$ **Multiplication** $(a_1 + b_1\epsilon) \times (a_2 + b_2\epsilon) = (a_1 a_2) + (a_1 b_2 + a_2 b_1)\epsilon + b_1 b_2\epsilon^2 = (a_1 a_2) + (a_1b_2 + a_2b_1)\epsilon$ **Division** $\dfrac{a_1 + b_1\epsilon}{a_2 + b_2\epsilon} = \dfrac{a_1 + b_1\epsilon}{a_2 + b_2\epsilon} \cdot \dfrac{a_2 - b_2\epsilon}{a_2 - b_2\epsilon} = \dfrac{a_1 a_2 + (b_1 a_2 - a_1 b_2)\epsilon - b_1 b_2\epsilon^2}{{a_2}^2 + (a_2 b_2 - a_2 b_2)\epsilon - {b_2}^2\epsilon^2} = \dfrac{a_1}{a_2} + \dfrac{b_1 a_2 - a_1 b_2}{{a_2}^2}\epsilon$ **Power** $(a + b\epsilon)^n = a^n + (n a^{n-1}b)\epsilon$ etc. Let's create a class to represent dual numbers, and implement a few operations (addition and multiplication). You can try adding some more if you want. ``` class DualNumber(object): def __init__(self, value=0.0, eps=0.0): self.value = value self.eps = eps def __add__(self, b): return DualNumber(self.value + self.to_dual(b).value, self.eps + self.to_dual(b).eps) def __radd__(self, a): return self.to_dual(a).__add__(self) def __mul__(self, b): return DualNumber(self.value * self.to_dual(b).value, self.eps * self.to_dual(b).value + self.value * self.to_dual(b).eps) def __rmul__(self, a): return self.to_dual(a).__mul__(self) def __str__(self): if self.eps: return "{:.1f} + {:.1f}ε".format(self.value, self.eps) else: return "{:.1f}".format(self.value) def __repr__(self): return str(self) @classmethod def to_dual(cls, n): if hasattr(n, "value"): return n else: return cls(n) ``` $3 + (3 + 4 \epsilon) = 6 + 4\epsilon$ ``` 3 + DualNumber(3, 4) ``` $(3 + 4ε)\times(5 + 7ε)$ = $3 \times 5 + 3 \times 7ε + 4ε \times 5 + 4ε \times 7ε$ = $15 + 21ε + 20ε + 28ε^2$ = $15 + 41ε + 28 \times 0$ = $15 + 41ε$ ``` DualNumber(3, 4) * DualNumber(5, 7) ``` Now let's see if the dual numbers work with our toy computation framework: ``` x.value = DualNumber(3.0) y.value = DualNumber(4.0) f.evaluate() ``` Yep, sure works. 
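As suggested above, more operations can be bolted on. Here is a quick sketch (my addition, not in the original notebook) of subtraction and division, following the rules listed earlier:

```
def _dual_sub(self, b):
    b = self.to_dual(b)
    return DualNumber(self.value - b.value, self.eps - b.eps)

def _dual_truediv(self, b):
    # (a1 + b1ε) / (a2 + b2ε) = a1/a2 + (b1*a2 - a1*b2)/a2² ε
    b = self.to_dual(b)
    return DualNumber(self.value / b.value,
                      (self.eps * b.value - self.value * b.eps) / b.value**2)

DualNumber.__sub__ = _dual_sub
DualNumber.__truediv__ = _dual_truediv

DualNumber(3, 4) - DualNumber(1, 2), DualNumber(3, 4) / DualNumber(2, 0)  # expect 2.0 + 2.0ε and 1.5 + 2.0ε
```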
Now let's use this to compute the partial derivatives of $f$ with regards to $x$ and $y$ at x=3 and y=4: ``` x.value = DualNumber(3.0, 1.0) # 3 + ε y.value = DualNumber(4.0) # 4 dfdx = f.evaluate().eps x.value = DualNumber(3.0) # 3 y.value = DualNumber(4.0, 1.0) # 4 + ε dfdy = f.evaluate().eps dfdx dfdy ``` Great! However, in this implementation we are limited to first order derivatives. Now let's look at reverse mode. ### Reverse mode autodiff Let's rewrite our toy framework to add reverse mode autodiff: ``` class Const(object): def __init__(self, value): self.value = value def evaluate(self): return self.value def backpropagate(self, gradient): pass def __str__(self): return str(self.value) class Var(object): def __init__(self, name, init_value=0): self.value = init_value self.name = name self.gradient = 0 def evaluate(self): return self.value def backpropagate(self, gradient): self.gradient += gradient def __str__(self): return self.name class BinaryOperator(object): def __init__(self, a, b): self.a = a self.b = b class Add(BinaryOperator): def evaluate(self): self.value = self.a.evaluate() + self.b.evaluate() return self.value def backpropagate(self, gradient): self.a.backpropagate(gradient) self.b.backpropagate(gradient) def __str__(self): return "{} + {}".format(self.a, self.b) class Mul(BinaryOperator): def evaluate(self): self.value = self.a.evaluate() * self.b.evaluate() return self.value def backpropagate(self, gradient): self.a.backpropagate(gradient * self.b.value) self.b.backpropagate(gradient * self.a.value) def __str__(self): return "({}) * ({})".format(self.a, self.b) x = Var("x", init_value=3) y = Var("y", init_value=4) f = Add(Mul(Mul(x, x), y), Add(y, Const(2))) # f(x,y) = x²y + y + 2 result = f.evaluate() f.backpropagate(1.0) print(f) result x.gradient y.gradient ``` Again, in this implementation the outputs are just numbers, not symbolic expressions, so we are limited to first order derivatives. However, we could have made the `backpropagate()` methods return symbolic expressions rather than values (e.g., return `Add(2,3)` rather than 5). This would make it possible to compute second order gradients (and beyond). This is what TensorFlow does, as do all the major libraries that implement autodiff. ### Reverse mode autodiff using TensorFlow ``` import tensorflow as tf tf.reset_default_graph() x = tf.Variable(3., name="x") y = tf.Variable(4., name="y") f = x*x*y + y + 2 jacobians = tf.gradients(f, [x, y]) init = tf.global_variables_initializer() with tf.Session() as sess: init.run() f_val, jacobians_val = sess.run([f, jacobians]) f_val, jacobians_val ``` Since everything is symbolic, we can compute second order derivatives, and beyond. However, when we compute the derivative of a tensor with regards to a variable that it does not depend on, instead of returning 0.0, the `gradients()` function returns None, which cannot be evaluated by `sess.run()`. So beware of `None` values. Here we just replace them with zero tensors. ``` hessians_x = tf.gradients(jacobians[0], [x, y]) hessians_y = tf.gradients(jacobians[1], [x, y]) def replace_none_with_zero(tensors): return [tensor if tensor is not None else tf.constant(0.) for tensor in tensors] hessians_x = replace_none_with_zero(hessians_x) hessians_y = replace_none_with_zero(hessians_y) init = tf.global_variables_initializer() with tf.Session() as sess: init.run() hessians_x_val, hessians_y_val = sess.run([hessians_x, hessians_y]) hessians_x_val, hessians_y_val ``` And that's all folks! Hope you enjoyed this notebook.
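One last aside (my addition, not part of the original notebook): the idea mentioned above of having `backpropagate()` return symbolic expressions instead of numbers can be sketched with a couple of subclasses of the toy reverse-mode classes. The gradients then come out as expression trees, which could themselves be differentiated again:

```
class SymVar(Var):
    """A Var that accumulates its gradient as an expression instead of a number."""
    def __init__(self, name, init_value=0):
        super().__init__(name, init_value)
        self.gradient_expr = Const(0)
    def backpropagate(self, gradient_expr):
        self.gradient_expr = Add(self.gradient_expr, gradient_expr)

class SymMul(Mul):
    """A Mul that backpropagates expression nodes rather than numbers."""
    def backpropagate(self, gradient_expr):
        self.a.backpropagate(Mul(gradient_expr, self.b))
        self.b.backpropagate(Mul(gradient_expr, self.a))

# Add.backpropagate() simply forwards whatever it is given, so it can be reused as is.
x = SymVar("x", init_value=3)
y = SymVar("y", init_value=4)
f = Add(SymMul(SymMul(x, x), y), Add(y, Const(2)))      # f(x,y) = x²y + y + 2
f.evaluate()                                            # forward pass: 42
f.backpropagate(Const(1))
x.gradient_expr.evaluate(), y.gradient_expr.evaluate()  # 24 and 10, as before
```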
``` # Logistic Regression # Importing the libraries import numpy as np import matplotlib.pyplot as plt import pandas as pd # Importing the dataset dataset = pd.read_csv('Social_Network_Ads.csv') X = dataset.iloc[:, [2, 3]].values y = dataset.iloc[:, 4].values # Splitting the dataset into the Training set and Test set from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0) # Feature Scaling from sklearn.preprocessing import StandardScaler sc = StandardScaler() X_train = sc.fit_transform(X_train) X_test = sc.transform(X_test) # Fitting Logistic Regression to the Training set from sklearn.linear_model import LogisticRegression classifier = LogisticRegression(random_state = 0) classifier.fit(X_train, y_train) # Predicting the Test set results y_pred = classifier.predict(X_test) # Making the Confusion Matrix from sklearn.metrics import confusion_matrix cm = confusion_matrix(y_test, y_pred) # Visualising the Training set results from matplotlib.colors import ListedColormap X_set, y_set = X_train, y_train X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01), np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01)) plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape), alpha = 0.75, cmap = ListedColormap(('red', 'green'))) plt.xlim(X1.min(), X1.max()) plt.ylim(X2.min(), X2.max()) for i, j in enumerate(np.unique(y_set)): plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j) plt.title('Logistic Regression (Training set)') plt.xlabel('Age') plt.ylabel('Estimated Salary') plt.legend() plt.show() # Visualising the Test set results from matplotlib.colors import ListedColormap X_set, y_set = X_test, y_test X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01), np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01)) plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape), alpha = 0.75, cmap = ListedColormap(('red', 'green'))) plt.xlim(X1.min(), X1.max()) plt.ylim(X2.min(), X2.max()) for i, j in enumerate(np.unique(y_set)): plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j) plt.title('Logistic Regression (Test set)') plt.xlabel('Age') plt.ylabel('Estimated Salary') plt.legend() plt.show() ```
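The confusion matrix above is computed but never summarised. A small follow-up (my addition) turns it into the usual headline numbers:

```
# Summarise the test-set results with accuracy and per-class precision/recall
from sklearn.metrics import accuracy_score, classification_report
print(cm)
print('Accuracy: {:.3f}'.format(accuracy_score(y_test, y_pred)))
print(classification_report(y_test, y_pred))
```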
# Extract FVCOM time series from aggregated OPeNDAP endpoints ``` # Plot time series data from FVCOM model from list of lon,lat locations # (uses the nearest point, no interpolation) %matplotlib inline import numpy as np import matplotlib.pyplot as plt import netCDF4 import datetime as dt import pandas as pd from StringIO import StringIO # make dictionary of various model simulation endpoints models={} models['Massbay_forecast']='http://www.smast.umassd.edu:8080/thredds/dodsC/FVCOM/NECOFS/Forecasts/NECOFS_FVCOM_OCEAN_MASSBAY_FORECAST.nc' models['GOM3_Forecast']='http://www.smast.umassd.edu:8080/thredds/dodsC/FVCOM/NECOFS/Forecasts/NECOFS_GOM3_FORECAST.nc' models['Massbay_forecast_archive']='http://www.smast.umassd.edu:8080/thredds/dodsC/fvcom/archives/necofs_mb' models['GOM3_30_year_hindcast']='http://www.smast.umassd.edu:8080/thredds/dodsC/fvcom/hindcasts/30yr_gom3' def start_stop(url,tvar): nc = netCDF4.Dataset(url) ncv = nc.variables time_var = ncv[tvar] first = netCDF4.num2date(time_var[0],time_var.units) last = netCDF4.num2date(time_var[-1],time_var.units) print first.strftime('%Y-%b-%d %H:%M') print last.strftime('%Y-%b-%d %H:%M') tvar = 'time' for model,url in models.iteritems(): print model try: start_stop(url,tvar) except: print '[problem accessing data]' #model='Massbay_forecast_archive' model='Massbay_forecast' #model='GOM3_Forecast' #model='GOM3_30_year_hindcast' url=models[model] # Desired time for snapshot # ....right now (or some number of hours from now) ... start = dt.datetime.utcnow() - dt.timedelta(hours=72) stop = dt.datetime.utcnow() + dt.timedelta(hours=72) # ... or specific time (UTC) #start = dt.datetime(2004,9,1,0,0,0) #stop = dt.datetime(2004,11,1,0,0,0) def dms2dd(d,m,s): return d+(m+s/60.)/60. dms2dd(41,33,15.7) dms2dd(42,51,17.40) -dms2dd(70,30,20.2) -dms2dd(70,18,42.0) x = ''' Station, Lat, Lon Falmouth Harbor, 41.541575, -70.608020 Sage Lot Pond, 41.554361, -70.505611 ''' x = ''' Station, Lat, Lon Boston, 42.368186, -71.047984 Carolyn Seep Spot, 39.8083, -69.5917 Falmouth Harbor, 41.541575, -70.608020 ''' # Enter desired (Station, Lat, Lon) values here: x = ''' Station, Lat, Lon Boston, 42.368186, -71.047984 Scituate Harbor, 42.199447, -70.720090 Scituate Beach, 42.209973, -70.724523 Falmouth Harbor, 41.541575, -70.608020 Marion, 41.689008, -70.746576 Marshfield, 42.108480, -70.648691 Provincetown, 42.042745, -70.171180 Sandwich, 41.767990, -70.466219 Hampton Bay, 42.900103, -70.818510 Gloucester, 42.610253, -70.660570 ''' x = ''' Station, Lat, Lon Buoy A, 42.52280, -70.56535 Buoy B, 43.18089, -70.42788 Nets, 42.85483, -70.3116 DITP, 42.347 , -70.960 ''' # Create a Pandas DataFrame obs=pd.read_csv(StringIO(x.strip()), sep=",\s*",index_col='Station',engine='python') obs # find the indices of the points in (x,y) closest to the points in (xi,yi) def nearxy(x,y,xi,yi): ind = np.ones(len(xi),dtype=int) for i in np.arange(len(xi)): dist = np.sqrt((x-xi[i])**2+(y-yi[i])**2) ind[i] = dist.argmin() return ind nc=netCDF4.Dataset(url) # open NECOFS remote OPeNDAP dataset ncv = nc.variables # find closest NECOFS nodes to station locations obs['0-Based Index'] = nearxy(ncv['lon'][:],ncv['lat'][:],obs['Lon'],obs['Lat']) obs ncv['lon'][0:10] # get time values and convert to datetime objects time_var = ncv['time'] istart = netCDF4.date2index(start,time_var,select='nearest') istop = netCDF4.date2index(stop,time_var,select='nearest') jd = netCDF4.num2date(time_var[istart:istop],time_var.units) # get all time steps of water level from each station # NOTE: this takes a while.... 
nsta=len(obs) z = np.ones((len(jd),nsta)) layer = 0 # surface layer =0, bottom layer=-1 for i in range(nsta): z[:,i] = ncv['temp'][istart:istop,layer,obs['0-Based Index'][i]] # make a DataFrame out of the nearest-node time series at each location, converted from Celsius to Fahrenheit zvals=pd.DataFrame(z*9./5.+32.,index=jd,columns=obs.index) # list out a few values zvals.head() # plotting a DataFrame is easy! ax=zvals.plot(figsize=(16,4),grid=True, title=('NECOFS Forecast Surface Water Temperature from %s Grid' % model),legend=False); # the values were converted to Fahrenheit above, so label accordingly plt.ylabel('temperature (degF)') # plotting the legend outside the axis is a bit tricky box = ax.get_position() ax.set_position([box.x0, box.y0, box.width * 0.8, box.height]) ax.legend(loc='center left', bbox_to_anchor=(1, 0.5)); # make a new DataFrame of maximum water temperatures at all stations b=pd.DataFrame(zvals.idxmax(),columns=['time of max water temp (UTC)']) # create heading for new column containing max water temperature zmax_heading='tmax (degF)' # Add new column to DataFrame b[zmax_heading]=zvals.max() b ```
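Since the extraction uses the nearest model node with no interpolation, it can be worth checking how far each station sits from the node it was matched to. A quick sketch (my addition; distances are in degrees, the same rough metric `nearxy` uses):

```
# distance (in degrees) between each station and its nearest FVCOM node
lon_nodes = ncv['lon'][:]
lat_nodes = ncv['lat'][:]
obs['node distance (deg)'] = np.sqrt((lon_nodes[obs['0-Based Index']] - obs['Lon'])**2 +
                                     (lat_nodes[obs['0-Based Index']] - obs['Lat'])**2)
obs
```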
# Lost Luggage Distribution Problem ## Objective and Prerequisites In this example, you’ll learn how to use mathematical optimization to solve a vehicle routing problem with time windows, which involves helping a company figure out the minimum number of vans required to deliver pieces of lost or delayed baggage to their rightful owners and determining the optimal assignment of vans to customers. This model is example 27 from the fifth edition of Model Building in Mathematical Programming by H. Paul Williams on pages 287-289 and 343-344. This modeling example is at the advanced level, where we assume that you know Python and the Gurobi Python API and that you have advanced knowledge of building mathematical optimization models. Typically, the objective function and/or constraints of these examples are complex or require advanced features of the Gurobi Python API. **Download the Repository** <br /> You can download the repository containing this and other examples by clicking [here](https://github.com/Gurobi/modeling-examples/archive/master.zip). **Gurobi License** <br /> In order to run this Jupyter Notebook properly, you must have a Gurobi license. If you do not have one, you can request an [evaluation license](https://www.gurobi.com/downloads/request-an-evaluation-license/?utm_source=3PW&utm_medium=OT&utm_campaign=WW-MU-MUI-OR-O_LEA-PR_NO-Q3_FY20_WW_JPME_Lost_Luggage_Distribution_COM_EVAL_GitHub&utm_term=Lost%20Luggage%20Distribution&utm_content=C_JPM) as a *commercial user*, or download a [free license](https://www.gurobi.com/academia/academic-program-and-licenses/?utm_source=3PW&utm_medium=OT&utm_campaign=WW-MU-EDU-OR-O_LEA-PR_NO-Q3_FY20_WW_JPME_Lost_Luggage_Distribution_COM_EVAL_GitHub&utm_term=Lost%20Luggage%20Distribution&utm_content=C_JPM) as an *academic user*. ## Problem Description A small company with six vans has a contract with a number of airlines to pick up lost or delayed baggage, belonging to customers in the London area, from Heathrow airport at 6 p.m. each evening. The contract stipulates that each customer must have their baggage delivered by 8 p.m. The company requires a model to advise them what is the minimum number of vans they need to use and to which customers each van should deliver and in what order. There is no practical capacity limitation on each van. Each van can hold all baggage that needs to be delivered in a two-hour period. To solve this problem, we can formulate an optimization model that minimizes the number of vans that need to be used. ## Model Formulation ### Sets and Indices $i,j \in \text{Locations} \equiv L=\{0,1..(n-1)\}$: Set of locations where $0$ is the index for the single depot -Heathrow airport, and $n$ is the number of locations. $k \in \text{Vans} \equiv V=\{0..K-1\}$: Index and set of vans, where $K$ is the number of vans. $S_k \in S $: Tour of van $k$, i.e. subset of locations visited by the van. ### Parameters $t_{i,j} \in \mathbb{R}^+$: Travel time from location $i$ to location $j$. ### Decision Variables $x_{i,j,k} \in \{0,1 \}$: This binary variable is equal 1, if van $k$ visits and goes directly from location $i$ to location $j$, and zero otherwise. $y_{i,k} \in \{0,1 \}$: This binary variable is equal 1, if van $k$ visits location $i$, and zero otherwise. $z_{k} \in \{0,1 \}$: This binary variable is equal 1, if van $k \in \{1,2..K\}$ is used, and zero otherwise. ### Objective Function **Number of vans**: Minimize number of vans used. 
\begin{equation} \text{Minimize} \quad \sum_{k = 1}^{K} z_k \end{equation} ### Constraints **Van utilization**: For all locations different from the depot, i.e. $i > 0$, if the location is visited by van $k$, then it is used. \begin{equation} y_{i,k} \leq z_{k} \quad \forall i \in L \setminus \{0\}, \; k \in V \end{equation} **Travel time**: No van travels for more than 120 min. Note that we do not consider the travel time to return to the depot. \begin{equation} \sum_{i \in L} \sum_{j \in L \setminus \{0\}} t_{i,j} \cdot x_{i,j,k} \leq 120 \quad \forall k \in V \end{equation} **Visit all customers**: Each customer location is visited by exactly one van. \begin{equation} \sum_{k \in V} y_{i,k} = 1 \quad \forall i \in L \setminus \{0\} \end{equation} **Depot**: Heathrow is visited by every van used. \begin{equation} \sum_{k \in V} y_{1,k} \geq \sum_{k \in V} z_k \end{equation} **Arriving at a location**: If location $j$ is visited by van $k$, then the van is coming from another location $i$. \begin{equation} \sum_{i \in L} x_{i,j,k} = y_{j,k} \quad \forall j \in L, \; k \in V \end{equation} **Leaving a location**: If van $k$ leaves location $j$, then the van is going to another location $i$. \begin{equation} \sum_{i \in L} x_{j,i,k} = y_{j,k} \quad \forall j \in L, \; k \in V \end{equation} **Breaking symmetry**: \begin{equation} \sum_{i \in L} y_{i,k} \geq \sum_{i \in L} y_{i,k+1} \quad \forall k \in \{0..K-1\} \end{equation} **Subtour elimination**: These constraints ensure that for each van route, there is no cycle. \begin{equation} \sum_{(i,j) \in S_k}x_{i,j,k} \leq |S_k|-1 \quad \forall k \in K, \; S_k \subseteq L \end{equation} ## Python Implementation We import the Gurobi Python Module and other Python libraries. ``` import sys import math import random from itertools import permutations import gurobipy as gp from gurobipy import GRB # tested with Python 3.7.0 & Gurobi 9.1.0 ``` ## Input data We define all the input data for the model. The user defines the number of locations, including the depot, and the number of vans. We randomly determine the coordinates of each location and then calculate the Euclidean distance between each pair of locations. We assume a speed of 60 km/hr, which is 1 km/min. Hence travel time is equal to the distance. ``` # number of locations, including the depot. The index of the depot is 0 n = 17 locations = [*range(n)] # number of vans K = 6 vans = [*range(K)] # Create n random points # Depot is located at (0,0) coordinates random.seed(1) points = [(0, 0)] points += [(random.randint(0, 50), random.randint(0, 50)) for i in range(n-1)] # Dictionary of Euclidean distance between each pair of points # Assume a speed of 60 km/hr, which is 1 km/min. Hence travel time = distance time = {(i, j): math.sqrt(sum((points[i][k]-points[j][k])**2 for k in range(2))) for i in locations for j in locations if i != j} ``` ## Model Deployment We create a model and the variables. The decision variables determines the order in which each van visits a subset of custormers, which customer is visited by each van, and if a van is used or not. 
``` m = gp.Model('lost_luggage_distribution.lp') # Create variables: # x =1, if van k visits and goes directly from location i to location j x = m.addVars(time.keys(), vans, vtype=GRB.BINARY, name='FromToBy') # y = 1, if customer i is visited by van k y = m.addVars(locations, vans, vtype=GRB.BINARY, name='visitBy') # Number of vans used is a decision variable z = m.addVars(vans, vtype=GRB.BINARY, name='used') # Travel time per van t = m.addVars(vans, ub=120, name='travelTime') # Maximum travel time s = m.addVar(name='maxTravelTime') ``` ## Constraints For all locations different from depot, i.e. $i > 0$, if the location is visited by van $k$, then it is used. ``` # Van utilization constraint visitCustomer = m.addConstrs((y[i,k] <= z[k] for k in vans for i in locations if i > 0), name='visitCustomer' ) ``` No van travels for more than 120 min. We make a small change from the original H.P. Williams version to introduce a slack variable for the travel time for each van, t[k]. ``` # Travel time constraint # Exclude the time to return to the depot travelTime = m.addConstrs((gp.quicksum(time[i,j]*x[i,j,k] for i,j in time.keys() if j > 0) == t[k] for k in vans), name='travelTimeConstr' ) ``` Each customer location is visited by exactly one van ``` # Visit all customers visitAll = m.addConstrs((y.sum(i,'*') == 1 for i in locations if i > 0), name='visitAll' ) ``` Heathrow (depot) is visited by every van used. ``` # Depot constraint depotConstr = m.addConstr(y.sum(0,'*') >= z.sum(), name='depotConstr' ) ``` If location j is visited by van k , then the van is coming from another location i. ``` # Arriving at a customer location constraint ArriveConstr = m.addConstrs((x.sum('*',j,k) == y[j,k] for j,k in y.keys()), name='ArriveConstr' ) ``` If van k leaves location j , then the van is going to another location i. ``` # Leaving a customer location constraint LeaveConstr = m.addConstrs((x.sum(j,'*',k) == y[j,k] for j,k in y.keys()), name='LeaveConstr' ) ``` Breaking symmetry constraints. ``` breakSymm = m.addConstrs((y.sum('*',k-1) >= y.sum('*',k) for k in vans if k>0), name='breakSymm' ) ``` Relate the maximum travel time to the travel times of each van ``` maxTravelTime = m.addConstrs((t[k] <= s for k in vans), name='maxTravelTimeConstr') # Alternately, as a general constraint: # maxTravelTime = m.addConstr(s == gp.max_(t), name='maxTravelTimeConstr') ``` ### Objective Function We use two hierarchical objectives: - First, minimize the number of vans used - Then, minimize the maximum of the time limit constraints ``` m.ModelSense = GRB.MINIMIZE m.setObjectiveN(z.sum(), 0, priority=1, name="Number of vans") m.setObjectiveN(s, 1, priority=0, name="Travel time") ``` ### Callback Definition Subtour constraints prevent a van from visiting a set of destinations without starting or ending at the Heathrow depot. Because there are an exponential number of these constraints, we don't want to add them all to the model. Instead, we use a callback function to find violated subtour constraints and add them to the model as lazy constraints. 
``` # Callback - use lazy constraints to eliminate sub-tours def subtourelim(model, where): if where == GRB.Callback.MIPSOL: # make a list of edges selected in the solution vals = model.cbGetSolution(model._x) selected = gp.tuplelist((i,j) for i, j, k in model._x.keys() if vals[i, j, k] > 0.5) # find the shortest cycle in the selected edge list tour = subtour(selected) if len(tour) < n: for k in vans: model.cbLazy(gp.quicksum(model._x[i, j, k] for i, j in permutations(tour, 2)) <= len(tour)-1) # Given a tuplelist of edges, find the shortest subtour not containing depot (0) def subtour(edges): unvisited = list(range(1, n)) cycle = range(n+1) # initial length has 1 more city while unvisited: thiscycle = [] neighbors = unvisited while neighbors: current = neighbors[0] thiscycle.append(current) if current != 0: unvisited.remove(current) neighbors = [j for i, j in edges.select(current, '*') if j == 0 or j in unvisited] if 0 not in thiscycle and len(cycle) > len(thiscycle): cycle = thiscycle return cycle ``` ## Solve the model ``` # Verify model formulation m.write('lost_luggage_distribution.lp') # Run optimization engine m._x = x m.Params.LazyConstraints = 1 m.optimize(subtourelim) ``` ## Analysis The optimal route of each van used and the total lost luggage delivery time report follows. ``` # Print optimal routes for k in vans: route = gp.tuplelist((i,j) for i,j in time.keys() if x[i,j,k].X > 0.5) if route: i = 0 print(f"Route for van {k}: {i}", end='') while True: i = route.select(i, '*')[0][1] print(f" -> {i}", end='') if i == 0: break print(f". Travel time: {round(t[k].X,2)} min") print(f"Max travel time: {round(s.X,2)}") ``` ## References H. Paul Williams, Model Building in Mathematical Programming, fifth edition. Copyright © 2020 Gurobi Optimization, LLC
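As a brief addendum to the analysis above, the value of the first (higher-priority) objective can also be reported directly from the `z` variables:

```
# Report how many of the K available vans the optimal plan actually uses
vans_used = sum(1 for k in vans if z[k].X > 0.5)
print(f"Vans used: {vans_used} of {K}")
```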
Actually this is not PCA, but PCA-related questions in CS357. ``` import numpy as np import numpy.linalg as la def center_data(A): ''' Given a matrix A, we want every column to shift by their column mean ''' B = np.copy(A).astype(float) for i in range(B.shape[1]): B[:,i] -= np.mean(B[:,i]) return B # Generic Test Case for center_data X = np.array([[0.865, 1.043, -0.193], [-1.983, -1.250, 2.333], [0.305, 1.348, -0.024], [1.509, -0.069, -1.400]]) expected = np.array([ [ 0.691, 0.775, -0.372], [-2.157, -1.518, 2.154], [ 0.131, 1.08 , -0.203], [ 1.335, -0.337, -1.579] ]) actual = center_data(X) tol = 10**-4 assert la.norm(expected-actual) < tol def pc_needed(U, S, V, limit): ''' Given a singular value decomposition return the number of principal components needed to achieve a variance coverage equal to or higher than limit ''' var = S@S / np.sum(S@S) curr = 0.0 counter = 0 for i in range(len(S)): if curr >= limit: break curr += var[i,i] counter += 1 return counter # Generic Test Case for pc_needed limit = 0.57 U = np.array([[-0.1, 0.1, -0.1, -0.1, -0.1, 0.0, -0.3, 0.0, -0.6], [0.1, -0.5, 0.1, 0.1, 0.2, 0.1, -0.4, 0.2, 0.0], [0.2, 0.2, -0.1, 0.7, 0.3, -0.1, 0.3, 0.1, -0.4], [0.1, -0.1, -0.2, 0.0, -0.1, 0.0, 0.2, 0.1, 0.2], [-0.2, 0.0, -0.1, 0.4, -0.1, -0.2, -0.4, 0.1, 0.4], [0.1, -0.3, -0.1, -0.2, 0.2, -0.3, 0.0, 0.1, 0.0], [-0.4, 0.2, 0.0, -0.1, 0.2, 0.4, -0.3, 0.2, -0.1], [0.2, -0.3, -0.3, -0.2, 0.4, -0.2, 0.0, 0.1, 0.1], [0.6, 0.3, -0.2, 0.0, -0.2, 0.1, -0.3, 0.1, 0.1], [0.0, -0.1, 0.1, -0.3, -0.4, -0.2, 0.2, 0.5, -0.1], [-0.4, 0.1, -0.2, 0.0, 0.1, 0.0, 0.2, 0.4, 0.1], [0.3, 0.2, 0.3, -0.2, 0.3, 0.1, -0.1, 0.3, -0.1], [0.3, 0.2, 0.2, -0.2, -0.1, 0.0, 0.0, -0.4, 0.1], [-0.2, 0.0, 0.3, -0.2, 0.4, -0.4, 0.0, -0.2, 0.0], [-0.1, 0.3, -0.4, -0.1, -0.1, -0.4, -0.1, -0.1, -0.2], [0.0, 0.0, -0.6, -0.2, 0.2, 0.4, 0.1, -0.2, 0.0], [0.0, 0.0, -0.1, 0.0, -0.1, -0.4, -0.4, -0.1, -0.2], [-0.1, -0.5, 0.1, 0.1, -0.2, 0.2, 0.0, -0.3, -0.3]]) S = np.array([[39, 0, 0, 0, 0, 0, 0, 0, 0], [ 0, 36, 0, 0, 0, 0, 0, 0, 0], [ 0, 0, 32, 0, 0, 0, 0, 0, 0], [ 0, 0, 0, 25, 0, 0, 0, 0, 0], [ 0, 0, 0, 0, 24, 0, 0, 0, 0], [ 0, 0, 0, 0, 0, 23, 0, 0, 0], [ 0, 0, 0, 0, 0, 0, 17, 0, 0], [ 0, 0, 0, 0, 0, 0, 0, 12, 0], [ 0, 0, 0, 0, 0, 0, 0, 0, 6]]) V = np.array([[-0.5, -0.1, 0.6, 0.1, -0.2, 0.2, 0.5, -0.2, -0.1], [0.2, -0.1, 0.2, -0.5, -0.7, -0.4, 0.0, 0.0, 0.1], [0.0, -0.5, 0.1, 0.4, -0.1, 0.1, -0.3, -0.3, 0.6], [-0.5, 0.6, -0.3, 0.3, -0.2, -0.3, 0.0, 0.0, 0.3], [-0.1, -0.2, -0.7, -0.1, -0.3, 0.2, 0.3, -0.4, -0.2], [0.3, -0.1, 0.0, 0.0, 0.4, -0.3, 0.7, -0.2, 0.4], [0.1, -0.3, 0.0, 0.6, -0.1, -0.5, 0.0, 0.0, -0.5], [0.0, 0.2, 0.2, -0.2, 0.3, -0.2, -0.3, -0.8, -0.2], [0.6, 0.5, 0.1, 0.4, -0.3, 0.3, 0.1, -0.2, 0.0]]) expected = 3 actual = pc_needed(U, S, V, 0.57) assert expected == actual def singular_values_for_pca_mean(X): ''' Designed specifically for exam quesiton "Singular Values for PCA Mean" ''' X_zeroed = center_data(X) U, S, Vh = la.svd(X_zeroed) return X_zeroed, S[-1] # Workspace U = np.array([[0.0, -0.1, -0.4, -0.3, 0.4, -0.1, -0.1, -0.1, -0.1, 0.2], [-0.1, -0.1, 0.3, 0.4, 0.5, 0.0, -0.3, 0.0, 0.0, 0.2], [0.1, -0.2, -0.2, -0.1, -0.1, -0.4, -0.1, 0.4, 0.2, 0.1], [0.4, -0.3, -0.2, 0.1, 0.1, 0.3, -0.1, -0.3, -0.1, 0.0], [0.0, -0.3, -0.2, -0.1, -0.3, 0.1, -0.4, -0.3, 0.4, -0.2], [0.0, 0.3, 0.0, 0.1, -0.1, 0.0, -0.5, 0.0, -0.2, 0.5], [0.0, 0.6, -0.1, 0.0, -0.1, 0.1, -0.1, 0.0, 0.4, 0.1], [0.2, 0.0, -0.2, 0.3, 0.1, 0.3, 0.3, -0.1, 0.2, 0.0], [0.0, 0.1, -0.2, -0.4, 0.1, 0.4, 0.2, 0.2, -0.4, 0.0], [0.6, 0.2, -0.2, 0.3, 0.1, 
-0.3, 0.2, 0.0, 0.0, -0.1], [-0.1, -0.1, -0.1, -0.1, -0.1, -0.3, -0.3, -0.2, -0.2, -0.3], [0.1, 0.2, -0.5, 0.0, 0.0, -0.1, 0.0, -0.1, 0.1, 0.2], [-0.2, -0.2, 0.0, 0.2, -0.5, 0.0, 0.2, -0.2, -0.2, 0.5], [-0.4, -0.1, -0.3, 0.3, 0.2, 0.2, -0.1, 0.3, 0.3, 0.0], [-0.2, -0.3, 0.0, 0.0, 0.0, -0.4, 0.4, -0.1, 0.1, 0.2], [-0.5, 0.3, -0.3, 0.2, 0.1, -0.2, 0.1, -0.4, -0.2, -0.3], [0.0, -0.1, -0.3, 0.4, -0.3, 0.1, -0.2, 0.4, -0.4, -0.1]]) S = np.array([[32, 0, 0, 0, 0, 0, 0, 0, 0, 0], [ 0, 29, 0, 0, 0, 0, 0, 0, 0, 0], [ 0, 0, 28, 0, 0, 0, 0, 0, 0, 0], [ 0, 0, 0, 25, 0, 0, 0, 0, 0, 0], [ 0, 0, 0, 0, 22, 0, 0, 0, 0, 0], [ 0, 0, 0, 0, 0, 21, 0, 0, 0, 0], [ 0, 0, 0, 0, 0, 0, 19, 0, 0, 0], [ 0, 0, 0, 0, 0, 0, 0, 12, 0, 0], [ 0, 0, 0, 0, 0, 0, 0, 0, 9, 0], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 2]]) V = np.array([[0.0, 0.7, -0.2, 0.0, -0.6, 0.0, 0.2, -0.2, 0.2, -0.1], [-0.1, -0.3, -0.5, 0.0, -0.2, -0.1, -0.1, 0.2, 0.7, 0.0], [0.2, -0.2, -0.3, 0.4, -0.4, -0.4, -0.2, 0.0, -0.4, -0.4], [-0.6, 0.2, -0.1, -0.2, 0.0, -0.5, -0.1, 0.3, -0.2, 0.3], [0.3, 0.2, 0.0, -0.1, 0.1, -0.2, -0.8, -0.3, 0.1, 0.3], [-0.5, -0.3, 0.5, -0.1, -0.3, -0.1, -0.2, -0.5, 0.2, -0.3], [-0.2, 0.2, 0.1, 0.8, 0.3, -0.1, 0.0, -0.1, 0.3, 0.1], [-0.2, -0.1, -0.5, -0.2, 0.4, -0.1, 0.2, -0.6, -0.1, -0.1], [-0.3, 0.2, -0.1, 0.0, 0.2, 0.4, -0.5, 0.3, -0.1, -0.6], [-0.3, -0.2, -0.3, 0.2, -0.3, 0.5, -0.2, -0.1, -0.3, 0.5]]) pc_needed(U, S, V, 0.73) ``` Remember to check for negative signs! ``` # Workspace 2 X = np.array([[-0.273, -1.366, -0.906], [-0.940, -0.957, -0.019], [-2.025, 0.228, -0.696], [0.469, -0.329, -0.059]]) mat, sca = singular_values_for_pca_mean(X) print(mat) print(sca) ```
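As a sanity check on `pc_needed` (my addition), the cumulative variance coverage can be computed directly from the `S` matrix of the workspace above and the answer read off the curve:

```
sigma = np.diag(S)                                 # singular values from the diagonal matrix
coverage = np.cumsum(sigma**2) / np.sum(sigma**2)  # cumulative fraction of variance covered
print(np.round(coverage, 3))
print("Components needed for 73% coverage:", int(np.searchsorted(coverage, 0.73) + 1))  # should match pc_needed(U, S, V, 0.73)
```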
This is a collection of scratch work that i use to organize the tests for coddiwomple ``` from simtk import openmm from openmmtools.testsystems import HarmonicOscillator from coddiwomple.tests.utils import get_harmonic_testsystem from coddiwomple.tests.utils import HarmonicAlchemicalState from simtk import unit import numpy as np testsystem, period, collision_rate, timestep, alchemical_functions = get_harmonic_testsystem() testsystem.system.getNumForces() for force_index in range(testsystem.system.getNumForces()): force = testsystem.system.getForce(force_index) n_global_parameters = force.getNumGlobalParameters() print(n_global_parameters) for term in range(n_global_parameters): print(force.getGlobalParameterName(term)) def test_OpenMMPDFState(): """ conduct a class-wide test on coddiwomple.openmm.states.OpenMMPDFState with the `get_harmonic_testsystem` testsystem this will assert successes on __init__, set_parameters, get_parameters, reduced_potential methods """ temperature = 300 * unit.kelvin pressure = None from coddiwomple.openmm.states import OpenMMPDFState, OpenMMParticleState #create the default get_harmonic_testsystem testsystem, period, collision_rate, timestep, alchemical_functions = get_harmonic_testsystem(temperature = temperature) #test init method pdf_state = OpenMMPDFState(system = testsystem.system, alchemical_composability = HarmonicAlchemicalState, temperature = temperature, pressure = pressure) assert isinstance(pdf_state._internal_context, openmm.Context) print(f"pdf_state parameters: {pdf_state._parameters}") #test set_parameters new_parameters = {key : 1.0 for key, val in pdf_state._parameters.items() if val is not None} pdf_state.set_parameters(new_parameters) #this should set the new parameters, but now we have to make sure that the context actually has those parameters bound swig_parameters = pdf_state._internal_context.getParameters() context_parameters = {q: swig_parameters[q] for q in swig_parameters} assert context_parameters['testsystems_HarmonicOscillator_x0'] == 1. assert context_parameters['testsystems_HarmonicOscillator_U0'] == 1. #test get_parameters returnable_parameters = pdf_state.get_parameters() assert len(returnable_parameters) == 2 assert returnable_parameters['testsystems_HarmonicOscillator_x0'] == 1. assert returnable_parameters['testsystems_HarmonicOscillator_U0'] == 1. 
#test reduced_potential particle_state = OpenMMParticleState(positions = testsystem.positions) #make a particle state so that we can compute a reduced potential reduced_potential = pdf_state.reduced_potential(particle_state) externally_computed_reduced_potential = pdf_state._internal_context.getState(getEnergy=True).getPotentialEnergy()*pdf_state.beta assert np.isclose(reduced_potential, externally_computed_reduced_potential) def test_OpenMMParticleState(): """ conduct a class-wide test on coddiwomple.openmm.states.OpenMMParticleState with the `get_harmonic_testsystem` testsystem this will assert successes on __init__, as well as _all_ methods in the coddiwomple.particles.Particle class """ temperature = 300 * unit.kelvin pressure = None from coddiwomple.openmm.states import OpenMMPDFState, OpenMMParticleState from coddiwomple.particles import Particle #create the default get_harmonic_testsystem testsystem, period, collision_rate, timestep, alchemical_functions = get_harmonic_testsystem(temperature = temperature) #test __init__ method particle_state = OpenMMParticleState(positions = testsystem.positions) #make a particle state particle = Particle(index = 0, record_state=False, iteration = 0) #test update_state assert particle.state is None assert not particle._record_states particle.update_state(particle_state) assert particle.state is not None #test update_iteration assert particle.iteration == 0 particle.update_iteration() assert particle.iteration == 1 #test update ancestry assert particle.ancestry == [0] particle.update_ancestry(1) assert particle.ancestry == [0,1] #the rest of the methods are trivial or would be redundant to test test_OpenMMPDFState() test_OpenMMParticleState() def test_OpenMMReporter(): """ test the OpenMMReporter object for its ability to make appropriate trajectory writes for particles. use the harmonic oscillator testsystem NOTE : this class will conduct dynamics on 5 particles defined by the harmonic oscillator testsystem in accordance with the coddiwomple.openmm.propagators.OMMBIP equipped with the coddiwomple.openmm.integrators.OMMLI integrator, but will NOT explicitly conduct a full test on the propagators or integrators. 
""" import os from coddiwomple.openmm.propagators import OMMBIP from coddiwomple.openmm.integrators import OMMLI temperature = 300 * unit.kelvin pressure = None from coddiwomple.openmm.states import OpenMMPDFState, OpenMMParticleState from coddiwomple.particles import Particle from coddiwomple.openmm.reporters import OpenMMReporter import shutil #create the default get_harmonic_testsystem testsystem, period, collision_rate, timestep, alchemical_functions = get_harmonic_testsystem(temperature = temperature) #create a particle state and 5 particles particles = [] for i in range(5): particle_state = OpenMMParticleState(positions = testsystem.positions) #make a particle state particle = Particle(index = i, record_state=False, iteration = 0) particle.update_state(particle_state) particles.append(particle) #since we are copying over the positions, we need a simple assert statement to make sure that the id(hex(particle_state.positions)) are separate in memory position_hexes = [hex(id(particle.state.positions)) for particle in particles] assert len(position_hexes) == len(list(set(position_hexes))), f"positions are copied identically; this is a problem" #create a pdf_state pdf_state = OpenMMPDFState(system = testsystem.system, alchemical_composability = HarmonicAlchemicalState, temperature = temperature, pressure = pressure) #create an integrator integrator = OMMLI(temperature=temperature, collision_rate=collision_rate, timestep=timestep) #create a propagator propagator = OMMBIP(openmm_pdf_state = pdf_state, integrator = integrator) steps_per_application = 100 #the only thing we want to do here is to run independent md for each of the particles and save trajectories; at the end, we will delete the directory and the traj files temp_traj_dir, temp_traj_prefix = os.path.join(os.getcwd(), 'test_dir'), 'traj_prefix' reporter = OpenMMReporter(trajectory_directory = 'test_dir', trajectory_prefix='traj_prefix', md_topology=testsystem.mdtraj_topology) assert reporter.write_traj num_applications=10 for application_index in range(num_applications): returnables = [propagator.apply(particle.state, n_steps=100, reset_integrator=True, apply_pdf_to_context=True, randomize_velocities=True) for particle in particles] _save=True if application_index == num_applications-1 else False reporter.record(particles, save_to_disk=_save) assert reporter.hex_counter == len(reporter.hex_dict) assert os.path.exists(temp_traj_dir) assert os.listdir(temp_traj_dir) is not None #then we can delete shutil.rmtree(temp_traj_dir) def test_OMMLI(): """ test OMMLI (OpenMMLangevinIntegrator) in the baoab regime on the harmonic test system; Specifically, we run MD to convergence and assert that the potential energy of the system and the standard deviation thereof is within a specified threshold. 
We also check the accumulation of shadow, proposal works, as well as the ability to reset, initialize, and subsume the integrator into an OMMBIP propagator """ from coddiwomple.openmm.propagators import OMMBIP from coddiwomple.openmm.integrators import OMMLI import tqdm temperature = 300 * unit.kelvin pressure = None from coddiwomple.openmm.states import OpenMMPDFState, OpenMMParticleState from coddiwomple.particles import Particle #create the default get_harmonic_testsystem testsystem, period, collision_rate, timestep, alchemical_functions = get_harmonic_testsystem(temperature = temperature) particle_state = OpenMMParticleState(positions = testsystem.positions) #make a particle state particle = Particle(index = 0, record_state=False, iteration = 0) particle.update_state(particle_state) num_applications = 100 #create a pdf_state pdf_state = OpenMMPDFState(system = testsystem.system, alchemical_composability = HarmonicAlchemicalState, temperature = temperature, pressure = pressure) #create an integrator integrator = OMMLI(temperature=temperature, collision_rate=collision_rate, timestep=timestep) #create a propagator propagator = OMMBIP(openmm_pdf_state = pdf_state, integrator = integrator) #expected reduced potential mean_reduced_potential = testsystem.get_potential_expectation(pdf_state) * pdf_state.beta std_dev_reduced_potential = testsystem.get_potential_standard_deviation(pdf_state) * pdf_state.beta reduced_pe = [] #some sanity checks for propagator: global_integrator_variables_before_integration = propagator._get_global_integrator_variables() print(f"starting integrator variables: {global_integrator_variables_before_integration}") #some sanity checks for integrator: start_proposal_work = propagator.integrator._get_energy_with_units('proposal_work', dimensionless=True) start_shadow_work = propagator.integrator._get_energy_with_units('shadow_work', dimensionless=True) assert start_proposal_work == global_integrator_variables_before_integration['proposal_work'] assert start_shadow_work == global_integrator_variables_before_integration['shadow_work'] for app_num in tqdm.trange(num_applications): particle_state, proposal_work = propagator.apply(particle_state, n_steps=20, reset_integrator=False, apply_pdf_to_context=False, randomize_velocities=True) assert proposal_work==0. #this must be the case since we did not pass a 'returnable_key' #sanity checks for inter-application methods assert propagator.integrator._get_energy_with_units('proposal_work', dimensionless=True) != 0. #this cannot be zero after a step of MD without resets assert propagator.integrator._get_energy_with_units('shadow_work', dimensionless=True) != 0. 
#this cannot be zero after a step of MD without resets reduced_pe.append(pdf_state.reduced_potential(particle_state)) tol=6 * std_dev_reduced_potential calc_mean_reduced_pe = np.mean(reduced_pe) calc_stddev_reduced_pe = np.std(reduced_pe) assert calc_mean_reduced_pe < mean_reduced_potential + tol and calc_mean_reduced_pe > mean_reduced_potential - tol, f"the mean reduced energy and standard deviation ({calc_mean_reduced_pe}, {calc_stddev_reduced_pe}) is outside the tolerance \ of a theoretical mean potential energy of {mean_reduced_potential} +/- {tol}" print(f"the mean reduced energy/standard deviation is {calc_mean_reduced_pe, calc_stddev_reduced_pe} and the theoretical mean reduced energy and stddev are {mean_reduced_potential}") #some cleanup of the integrator propagator.integrator.reset() #this should reset proposal, shadow, and ghmc staticstics (we omit ghmc stats) assert propagator.integrator._get_energy_with_units('proposal_work', dimensionless=True) == 0. #this should be zero after a reset assert propagator.integrator._get_energy_with_units('shadow_work', dimensionless=True) == 0. #this should be zero after a reset def test_OMMBIP(): """ test OMMBIP (OpenMMBaseIntegratorPropagator) in the baoab regime on the harmonic test system; specifically, we validate the init, apply, _get_global_integrator_variables, _get_context_parameters methods. For the sake of testing all of the internal methods, we equip an OMMLI integrator """ from coddiwomple.openmm.propagators import OMMBIP from coddiwomple.openmm.integrators import OMMLI temperature = 300 * unit.kelvin pressure = None from coddiwomple.openmm.states import OpenMMPDFState, OpenMMParticleState #create the default get_harmonic_testsystem testsystem, period, collision_rate, timestep, alchemical_functions = get_harmonic_testsystem(temperature = temperature) particle_state = OpenMMParticleState(positions = testsystem.positions) #make a particle state num_applications = 100 #create a pdf_state pdf_state = OpenMMPDFState(system = testsystem.system, alchemical_composability = HarmonicAlchemicalState, temperature = temperature, pressure = pressure) #create an integrator integrator = OMMLI(temperature=temperature, collision_rate=collision_rate, timestep=timestep) #create a propagator propagator = OMMBIP(openmm_pdf_state = pdf_state, integrator = integrator) #check the __init__ method for appropriate equipment assert hex(id(propagator.pdf_state)) == hex(id(pdf_state)) #the defined pdf state is tethered to the propagator (this is VERY important for SMC) #conduct null application prior_reduced_potential = pdf_state.reduced_potential(particle_state) return_state, proposal_work = propagator.apply(particle_state, n_steps=0) assert proposal_work == 0. #there is no proposal work if returnable_key is None assert pdf_state.reduced_potential(particle_state) == prior_reduced_potential propagator_state = propagator.context.getState(getEnergy=True) assert np.isclose(propagator_state.getPotentialEnergy()*pdf_state.beta, pdf_state.reduced_potential(particle_state)) #check context update internals prior_reduced_potential = pdf_state.reduced_potential(particle_state) parameters = pdf_state.get_parameters() #change an alchemical parameter parameters['testsystems_HarmonicOscillator_U0'] = 1. 
#update parameter dict pdf_state.set_parameters(parameters) #set new params _ = propagator.apply(particle_state, n_steps=0, apply_pdf_to_context=False) # if we do not apply to context, then the internal_context should not be modified assert propagator._get_context_parameters()['testsystems_HarmonicOscillator_U0'] == 0. assert np.isclose(propagator.context.getState(getEnergy=True).getPotentialEnergy()*pdf_state.beta, prior_reduced_potential) _ = propagator.apply(particle_state, n_steps=0, apply_pdf_to_context=True) # if we do apply to context, then the internal_context should be modified assert propagator._get_context_parameters()['testsystems_HarmonicOscillator_U0'] == 1. assert np.isclose(prior_reduced_potential + 1.0 * unit.kilojoules_per_mole * pdf_state.beta, propagator.context.getState(getEnergy=True).getPotentialEnergy()*pdf_state.beta) #check gettable integrator variables integrator_vars = propagator._get_global_integrator_variables() #check propagator stability with integrator reset and velocity randomization _ = propagator.apply(particle_state, n_steps=1000, reset_integrator=True, apply_pdf_to_context=True, randomize_velocities=True) test_OMMBIP() ```
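The statistical check above (sample mean of the reduced potential lying within six theoretical standard deviations of the analytical expectation) is a pattern that recurs in these tests. Below is a minimal, self-contained sketch of the same idea using only NumPy and synthetic data; the helper name and the 6-sigma default are choices made here for illustration, not part of coddiwomple.

```
import numpy as np

def assert_mean_within_tolerance(samples, theoretical_mean, theoretical_std, n_sigma=6):
    """Assert that the sample mean lies within n_sigma theoretical standard
    deviations of the theoretical mean, mirroring the check in the test above."""
    sample_mean = np.mean(samples)
    tol = n_sigma * theoretical_std
    assert abs(sample_mean - theoretical_mean) < tol, (
        f"sample mean {sample_mean} outside {theoretical_mean} +/- {tol}")

# synthetic usage: draws from a normal distribution with known parameters
rng = np.random.default_rng(2021)
assert_mean_within_tolerance(rng.normal(loc=1.5, scale=0.5, size=100),
                             theoretical_mean=1.5, theoretical_std=0.5)
```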
# ElasticNet with MinMaxScaler & Polynomial Features This Code template is for Regression tasks using a ElasticNet based on the Regression linear model Technique with MinMaxScaler and feature transformation technique Polynomial Features in a pipeline. ### Required Packages ``` import warnings as wr import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from sklearn.preprocessing import LabelEncoder from sklearn.pipeline import make_pipeline from sklearn.preprocessing import MinMaxScaler, PolynomialFeatures from sklearn.model_selection import train_test_split from sklearn.linear_model import ElasticNet from sklearn.metrics import mean_squared_error, r2_score,mean_absolute_error wr.filterwarnings('ignore') ``` ### Initialization Filepath of CSV file ``` #filepath file_path= '' ``` List of features which are required for model training . ``` #x_values features=[] ``` Target feature for prediction. ``` #y_value target='' ``` ### Data Fetching Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools. We will use panda's library to read the CSV file using its storage path.And we use the head function to display the initial row or entry. ``` df=pd.read_csv(file_path) #reading file df.head()#displaying initial entries print('Number of rows are :',df.shape[0], ',and number of columns are :',df.shape[1]) df.columns.tolist() ``` ### Data Preprocessing Since the majority of the machine learning models in the Sklearn library doesn't handle string category data and Null value, we have to explicitly remove or replace null values. The below snippet have functions, which removes the null value if any exists. And convert the string classes data in the datasets by encoding them to integer classes. ``` def NullClearner(df): if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])): df.fillna(df.mean(),inplace=True) return df elif(isinstance(df, pd.Series)): df.fillna(df.mode()[0],inplace=True) return df else:return df def EncodeX(df): return pd.get_dummies(df) ``` #### Correlation Map In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns. ``` plt.figure(figsize = (15, 10)) corr = df.corr() mask = np.triu(np.ones_like(corr, dtype = bool)) sns.heatmap(corr, mask = mask, linewidths = 1, annot = True, fmt = ".2f") plt.show() correlation = df[df.columns[1:]].corr()[target][:] correlation ``` ### Feature Selections It is the process of reducing the number of input variables when developing a predictive model. Used to reduce the number of input variables to both reduce the computational cost of modelling and, in some cases, to improve the performance of the model. We will assign all the required input features to X and target/outcome to Y. ``` X=df[features] Y=df[target] ``` Calling preprocessing functions on the feature and target set. ``` x=X.columns.to_list() for i in x: X[i]=NullClearner(X[i]) X=EncodeX(X) Y=NullClearner(Y) X.head() ``` ### Data Splitting The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data. 
``` #we can choose randomstate and test_size as over requerment X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2, random_state = 1) #performing datasplitting ``` ## Model ### Data Scaling **Used MinMaxScaler** * Transform features by scaling each feature to a given range. * This estimator scales and translates each feature individually such that it is in the given range on the training set, e.g. between zero and one. ### Feature Transformation Generate polynomial and interaction features. Generate a new feature matrix consisting of all polynomial combinations of the features with degree less than or equal to the specified degree. Refer [API](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PolynomialFeatures.html) for parameters. ### ElasticNet Elastic Net first emerged as a result of critique on Lasso, whose variable selection can be too dependent on data and thus unstable. The solution is to combine the penalties of Ridge regression and Lasso to get the best of both worlds. **Features of ElasticNet Regression-** * It combines the L1 and L2 approaches. * It performs a more efficient regularization process. * It has two parameters to be set, λ and α. #### Model Tuning Parameters 1. alpha : float, default=1.0 > Constant that multiplies the penalty terms. Defaults to 1.0. See the notes for the exact mathematical meaning of this parameter. alpha = 0 is equivalent to an ordinary least square, solved by the LinearRegression object. For numerical reasons, using alpha = 0 with the Lasso object is not advised. Given this, you should use the LinearRegression object. 2. l1_ratio : float, default=0.5 > The ElasticNet mixing parameter, with 0 <= l1_ratio <= 1. For l1_ratio = 0 the penalty is an L2 penalty. For l1_ratio = 1 it is an L1 penalty. For 0 < l1_ratio < 1, the penalty is a combination of L1 and L2. 3. normalize : bool, default=False >This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False. 4. max_iter : int, default=1000 >The maximum number of iterations. 5. tol : float, default=1e-4 >The tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol. 6. selection : {‘cyclic’, ‘random’}, default=’cyclic’ >If set to ‘random’, a random coefficient is updated every iteration rather than looping over features sequentially by default. This (setting to ‘random’) often leads to significantly faster convergence especially when tol is higher than 1e-4. ``` model = make_pipeline(MinMaxScaler(),PolynomialFeatures(),ElasticNet(random_state = 5)) model.fit(X_train,y_train) ``` #### Model Accuracy score() method return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. ``` print("Accuracy score {:.2f} %\n".format(model.score(X_test,y_test)*100)) #prediction on testing set prediction=model.predict(X_test) ``` ### Model evolution **r2_score:** The r2_score function computes the percentage variablility explained by our model, either the fraction or the count of correct predictions. 
**MAE:** The mean absolute error function measures the total error as the average absolute distance between the real data and the predicted data.

**MSE:** The mean squared error function squares the errors, which penalizes the model more heavily for large errors.

```
print('Mean Absolute Error:', mean_absolute_error(y_test, prediction))
print('Mean Squared Error:', mean_squared_error(y_test, prediction))
print('Root Mean Squared Error:', np.sqrt(mean_squared_error(y_test, prediction)))
print("R-squared score : ",r2_score(y_test,prediction))

# plotting actual and predicted values
red = plt.scatter(np.arange(0,80,5),prediction[0:80:5],color = "red")
green = plt.scatter(np.arange(0,80,5),y_test[0:80:5],color = "green")
plt.title("Comparison of Regression Algorithms")
plt.xlabel("Index of Candidate")
plt.ylabel("target")
plt.legend((red,green),('ElasticNet', 'REAL'))
plt.show()
```

### Prediction Plot

First, we plot the first few actual observations from the test set, with the record number on the x-axis and the true target value on the y-axis. We then overlay the model's predictions for the same test records, so the predicted and true values can be compared directly.

```
plt.figure(figsize=(10,6))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(X_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
```

#### Creator: Snehaan Bhawal, Github: [Profile](https://github.com/Sbhawal)
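The tuning parameters listed earlier (alpha, l1_ratio, max_iter, tol, selection) are left at their defaults in this template. The sketch below shows how some of them could be passed into the same pipeline; the specific values are illustrative assumptions, not recommendations, and should be tuned (for example with cross-validation) for your dataset.

```
# Reuses make_pipeline, MinMaxScaler, PolynomialFeatures, ElasticNet and the
# train/test split defined earlier in this template.
# The hyperparameter values below are placeholders for illustration only.
tuned_model = make_pipeline(
    MinMaxScaler(),
    PolynomialFeatures(degree=2),        # degree of the polynomial expansion
    ElasticNet(alpha=0.5,                # overall penalty strength
               l1_ratio=0.3,             # mix between L1 (lasso) and L2 (ridge)
               max_iter=5000,            # allow more iterations to converge
               random_state=5)
)
tuned_model.fit(X_train, y_train)
print("R-squared on test set: {:.3f}".format(tuned_model.score(X_test, y_test)))
```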
# RadiusNeighborsClassifier with Power Transformer This Code template is for the Classification task using a simple Radius Neighbor Classifier with pipeline and PowerTransformer Feature Transformation. It implements learning based on the number of neighbors within a fixed radius r of each training point, where r is a floating-point value specified by the user. ## Required Packages ``` !pip install imblearn import numpy as np import pandas as pd import seaborn as se import warnings import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split from sklearn.neighbors import RadiusNeighborsClassifier from imblearn.over_sampling import RandomOverSampler from sklearn.pipeline import make_pipeline from sklearn.preprocessing import LabelEncoder, PowerTransformer from sklearn.metrics import classification_report,plot_confusion_matrix warnings.filterwarnings('ignore') ``` ## Initialization Filepath of CSV file ``` #filepath file_path= "" ``` List of features which are required for model training ``` #x_values features=[] ``` Target feature for prediction ``` #y_value target='' ``` ## Data Fetching Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools. We will use panda's library to read the CSV file using its storage path.And we use the head function to display the initial row or entry. ``` df=pd.read_csv(file_path) df.head() ``` ## Feature Selections It is the process of reducing the number of input variables when developing a predictive model. Used to reduce the number of input variables to both reduce the computational cost of modelling and, in some cases, to improve the performance of the model. We will assign all the required input features to X and target/outcome to Y. ``` X=df[features] Y=df[target] ``` ## Data Preprocessing Since the majority of the machine learning models in the Sklearn library doesn't handle string category data and Null value, we have to explicitly remove or replace null values. The below snippet have functions, which removes the null value if any exists. And convert the string classes data in the datasets by encoding them to integer classes ``` def NullClearner(df): if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])): df.fillna(df.mean(),inplace=True) return df elif(isinstance(df, pd.Series)): df.fillna(df.mode()[0],inplace=True) return df else:return df def EncodeX(df): return pd.get_dummies(df) def EncodeY(df): if len(df.unique())<=2: return df else: un_EncodedT=np.sort(pd.unique(df), axis=-1, kind='mergesort') df=LabelEncoder().fit_transform(df) EncodedT=[xi for xi in range(len(un_EncodedT))] print("Encoded Target: {} to {}".format(un_EncodedT,EncodedT)) return df ``` Calling preprocessing functions on the feature and target set. ``` x=X.columns.to_list() for i in x: X[i]=NullClearner(X[i]) X=EncodeX(X) Y=EncodeY(NullClearner(Y)) X.head() ``` ## Correlation Map In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns. ``` f,ax = plt.subplots(figsize=(18, 18)) matrix = np.triu(X.corr()) se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix) plt.show() ``` ## Distribution Of Target Variable ``` plt.figure(figsize = (10,6)) se.countplot(Y) ``` ## Data Splitting The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. 
The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data. ``` x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123) ``` ## Handling Target Imbalance The challenge of working with imbalanced datasets is that most machine learning techniques will ignore, and in turn have poor performance on, the minority class, although typically it is performance on the minority class that is most important. One approach to addressing imbalanced datasets is to oversample the minority class. The simplest approach involves duplicating examples in the minority class.We will perform overspampling using imblearn library. ``` x_train,y_train = RandomOverSampler(random_state=123).fit_resample(x_train, y_train) ``` ## Feature Transformation Power transforms are a family of parametric, monotonic transformations that are applied to make data more Gaussian-like. This is useful for modeling issues related to heteroscedasticity (non-constant variance), or other situations where normality is desired. <a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PowerTransformer.html">More about Power Transformer module</a> ## Model RadiusNeighborsClassifier implements learning based on the number of neighbors within a fixed radius of each training point, where is a floating-point value specified by the user. In cases where the data is not uniformly sampled, radius-based neighbors classification can be a better choice. ### Tuning parameters **radius:** Range of parameter space to use by default for radius_neighbors queries. **algorithm:** Algorithm used to compute the nearest neighbors: **leaf_size:** Leaf size passed to BallTree or KDTree. **p:** Power parameter for the Minkowski metric. **metric:** the distance metric to use for the tree. **outlier_label:** label for outlier samples **weights:** weight function used in prediction. <br><br>FOR MORE INFO : <a href="https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.RadiusNeighborsClassifier.html">API</a> ``` model = make_pipeline(PowerTransformer(),RadiusNeighborsClassifier()) model.fit(x_train, y_train) ``` ## Model Accuracy score() method return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. ``` print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100)) ``` ## Confusion Matrix A confusion matrix is utilized to understand the performance of the classification model or algorithm in machine learning for a given test set where results are known. ``` plot_confusion_matrix(model,x_test,y_test,cmap=plt.cm.Blues) ``` ## Classification Report A Classification report is used to measure the quality of predictions from a classification algorithm. How many predictions are True, how many are False. where: - Precision:- Accuracy of positive predictions. - Recall:- Fraction of positives that were correctly identified. - f1-score:- percent of positive predictions were correct - support:- Support is the number of actual occurrences of the class in the specified dataset. ``` print(classification_report(y_test,model.predict(x_test))) ``` ## Creator: Abhishek Garg, Github: <a href="https://github.com/abhishek-252">Profile</a>
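The tuning parameters listed above are left at their defaults in this template. The sketch below shows how some of them could be set explicitly in the same pipeline; the values are illustrative assumptions only, and `radius` in particular has to be chosen relative to the scale of the transformed features.

```
# Reuses make_pipeline, PowerTransformer, RadiusNeighborsClassifier and the
# train/test split defined earlier. The parameter values are placeholders.
tuned_model = make_pipeline(
    PowerTransformer(),
    RadiusNeighborsClassifier(radius=2.0,                     # neighbourhood radius
                              weights='distance',             # closer neighbours count more
                              outlier_label='most_frequent')  # label for points with no neighbours
)
tuned_model.fit(x_train, y_train)
print("Accuracy score {:.2f} %".format(tuned_model.score(x_test, y_test) * 100))
```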
<a href="https://githubtocolab.com/giswqs/geemap/blob/master/examples/notebooks/28_voila.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"/></a> Uncomment the following line to install [geemap](https://geemap.org) if needed. ``` # !pip install geemap ``` ## Deploy Earth Engine Apps using Voila and ngrok **Steps to deploy an Earth Engine App:** 1. Install ngrok by following the [instruction](https://ngrok.com/download) 2. Install voila by following the [instruction](https://voila.readthedocs.io/en/stable/install.html) 3. Download the notebook [28_voila.ipynb](https://github.com/giswqs/geemap/blob/master/examples/notebooks/28_voila.ipynb) 4. Run this from the command line: `voila --no-browser 28_voila.ipynb` 5. Run this from the command line: `ngrok http 8866` 6. Copy the link from the ngrok terminal window. The links looks like the following: https://randomstring.ngrok.io 7. Share the link with anyone. **Optional steps:** * To show code cells from you app, run this from the command line: `voila --no-browser --strip_sources=False 28_voila.ipynb` * To protect your app with a password, run this: `ngrok http -auth="username:password" 8866` * To run python simple http server in the directory, run this:`sudo python -m http.server 80` ``` import os import ee import geemap import ipywidgets as widgets Map = geemap.Map() Map.add_basemap('HYBRID') Map style = {'description_width': 'initial'} title = widgets.Text( description='Title:', value='Landsat Timelapse', width=200, style=style ) bands = widgets.Dropdown( description='Select RGB Combo:', options=[ 'Red/Green/Blue', 'NIR/Red/Green', 'SWIR2/SWIR1/NIR', 'NIR/SWIR1/Red', 'SWIR2/NIR/Red', 'SWIR2/SWIR1/Red', 'SWIR1/NIR/Blue', 'NIR/SWIR1/Blue', 'SWIR2/NIR/Green', 'SWIR1/NIR/Red', ], value='NIR/Red/Green', style=style, ) hbox1 = widgets.HBox([title, bands]) hbox1 speed = widgets.IntSlider( description=' Frames per second:', tooltip='Frames per second:', value=10, min=1, max=30, style=style, ) cloud = widgets.Checkbox( value=True, description='Apply fmask (remove clouds, shadows, snow)', style=style ) hbox2 = widgets.HBox([speed, cloud]) hbox2 start_year = widgets.IntSlider( description='Start Year:', value=1984, min=1984, max=2020, style=style ) end_year = widgets.IntSlider( description='End Year:', value=2020, min=1984, max=2020, style=style ) start_month = widgets.IntSlider( description='Start Month:', value=5, min=1, max=12, style=style ) end_month = widgets.IntSlider( description='End Month:', value=10, min=1, max=12, style=style ) hbox3 = widgets.HBox([start_year, end_year, start_month, end_month]) hbox3 font_size = widgets.IntSlider( description='Font size:', value=30, min=10, max=50, style=style ) font_color = widgets.ColorPicker( concise=False, description='Font color:', value='white', style=style ) progress_bar_color = widgets.ColorPicker( concise=False, description='Progress bar color:', value='blue', style=style ) hbox4 = widgets.HBox([font_size, font_color, progress_bar_color]) hbox4 create_gif = widgets.Button( description='Create timelapse', button_style='primary', tooltip='Click to create timelapse', style=style, ) download_gif = widgets.Button( description='Download GIF', button_style='primary', tooltip='Click to download timelapse', disabled=False, style=style, ) output = widgets.Output() hbox5 = widgets.HBox([create_gif]) hbox5 def submit_clicked(b): with output: output.clear_output() if start_year.value > end_year.value: print('The end year must be great than the start 
year.') return if start_month.value > end_month.value: print('The end month must be great than the start month.') return if start_year.value == end_year.value: add_progress_bar = False else: add_progress_bar = True start_date = str(start_month.value).zfill(2) + '-01' end_date = str(end_month.value).zfill(2) + '-30' print('Computing...') Map.add_landsat_ts_gif( roi=Map.user_roi, label=title.value, start_year=start_year.value, end_year=end_year.value, start_date=start_date, end_date=end_date, bands=bands.value.split('/'), font_color=font_color.value, frames_per_second=speed.value, font_size=font_size.value, add_progress_bar=add_progress_bar, progress_bar_color=progress_bar_color.value, download=True, apply_fmask=cloud.value, ) create_gif.on_click(submit_clicked) output ```
# **Classification of iris varieties within the same species**

## Introduction

The aim of this notebook is to use the AI Training product to train a simple model on the Iris dataset with the PyTorch library. It is an example of a neural network for data classification.

## Code

The neural network is set up in several steps. First, the libraries are imported. Next, the neural network model is defined and the dataset is split. Then, the model is trained. Finally, the loss curve is displayed.

### Step 1 - Import libraries (and install them if required)

```
pip install pandas sklearn matplotlib

%matplotlib inline
import matplotlib.pyplot as plt

import torch
import torch.nn as nn
import torch.nn.functional as F

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.datasets import load_iris
```

### Step 2 - Define the neural network model

```
class Model(nn.Module):
    def __init__(self):
        super().__init__()
        # fully connected layer: 4 input features for the 4 parameters in X
        self.layer1 = nn.Linear(in_features=4, out_features=16)
        # fully connected layer
        self.layer2 = nn.Linear(in_features=16, out_features=12)
        # output layer: 3 output features for the 3 species
        self.output = nn.Linear(in_features=12, out_features=3)

    def forward(self, x):
        # activation function: ReLU
        x = F.relu(self.layer1(x))
        x = F.relu(self.layer2(x))
        x = self.output(x)
        return x
```

### Step 3 - Load and split the Iris dataset

```
# load the data
dataset = load_iris()
# input of the neural network
X = dataset.data
# output of the neural network
y = dataset.target
Y = y.astype("float64")

# train and test split: 20 % for testing and 80 % for training
X_train,X_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=0)

# convert the split data from numpy arrays to PyTorch tensors
X_train = torch.FloatTensor(X_train)
X_test = torch.FloatTensor(X_test)
y_train = torch.LongTensor(y_train)
y_test = torch.LongTensor(y_test)
```

### Step 4 - Train the model

```
# display the model architecture
model = Model()
print("Model display: ", model)

# loss function
criterion = nn.CrossEntropyLoss()
# Adam optimizer with a learning rate of 0.01
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

# the model will be trained for 100 epochs
epochs = 100
epoch_list = []
loss_list = []

print("The loss is printed for each epoch: ")
for i in range(epochs):
    optimizer.zero_grad()
    y_pred = model.forward(X_train)
    loss = criterion(y_pred, y_train)
    # store the scalar loss value so it can be plotted later
    loss_list.append(loss.item())
    loss.backward()
    epoch_list.append(i)
    optimizer.step()
    # the loss is printed for each epoch
    print(f'Epoch: {i} Loss: {loss.item()}')
```

### Step 5 - Prediction and loss display

```
# print the last loss value
last_loss = loss_list[-1]
print('Last value of loss: ', round(last_loss, 3))

# make predictions
predict_out = model(X_test)
_, predict_y = torch.max(predict_out, 1)

# print the accuracy
print('Prediction accuracy: ', accuracy_score(y_test.data, predict_y.data))

# display the loss curve
plt.plot(epoch_list, loss_list)
plt.title('Evolution of the loss according to the number of epochs')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.show()
```

## Conclusion

- The final loss of this neural network is very low (around 0.05).
- The accuracy of the prediction is 100 %, which means that every test sample is classified correctly by this model.
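As a possible extension (not part of the original notebook), inference is normally run with the model in evaluation mode and with gradient tracking disabled, and the trained weights can be saved for later reuse. A minimal sketch, assuming the `model`, `X_test` and `y_test` objects defined above; the file name is an arbitrary choice.

```
# run inference without building a computation graph
model.eval()
with torch.no_grad():
    logits = model(X_test)
    predicted_classes = torch.argmax(logits, dim=1)
print('Prediction accuracy:', accuracy_score(y_test, predicted_classes))

# save the trained weights; load them later with model.load_state_dict(...)
torch.save(model.state_dict(), 'iris_model.pt')
```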
``` # Copyright 2021 Google LLC # Use of this source code is governed by an MIT-style # license that can be found in the LICENSE file or at # https://opensource.org/licenses/MIT. # Notebook authors: Kevin P. Murphy (murphyk@gmail.com) # and Mahmoud Soliman (mjs@aucegypt.edu) # This notebook reproduces figures for chapter 2 from the book # "Probabilistic Machine Learning: An Introduction" # by Kevin Murphy (MIT Press, 2021). # Book pdf is available from http://probml.ai ``` <a href="https://opensource.org/licenses/MIT" target="_parent"><img src="https://img.shields.io/github/license/probml/pyprobml"/></a> <a href="https://colab.research.google.com/github/probml/pml-book/blob/main/pml1/figure_notebooks/chapter2_probability_univariate_models_figures.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ## Figure 2.1:<a name='2.1'></a> <a name='multinom'></a> Some discrete distributions on the state space $\mathcal X =\ 1,2,3,4\ $. (a) A uniform distribution with $p(x=k)=1/4$. (b) A degenerate distribution (delta function) that puts all its mass on $x=1$. Figure(s) generated by [discrete_prob_dist_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/discrete_prob_dist_plot.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n discrete_prob_dist_plot.py ``` ## Figure 2.2:<a name='2.2'></a> <a name='gaussianPdf'></a> (a) Plot of the cdf for the standard normal, $\mathcal N (0,1)$. Figure(s) generated by [gauss_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/gauss_plot.py) [quantile_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/quantile_plot.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n gauss_plot.py try_deimport() %run -n quantile_plot.py ``` ## Figure 2.3:<a name='2.3'></a> <a name='roweis-xtimesy'></a> Computing $p(x,y) = p(x) p(y)$, where $ X \perp Y $. Here $X$ and $Y$ are discrete random variables; $X$ has 6 possible states (values) and $Y$ has 5 possible states. A general joint distribution on two such variables would require $(6 \times 5) - 1 = 29$ parameters to define it (we subtract 1 because of the sum-to-one constraint). 
By assuming (unconditional) independence, we only need $(6-1) + (5-1) = 9$ parameters to define $p(x,y)$ ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') ``` <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_2.3.png" width="256"/> ## Figure 2.4:<a name='2.4'></a> <a name='bimodal'></a> Illustration of a mixture of two 1d Gaussians, $p(x) = 0.5 \mathcal N (x|0,0.5) + 0.5 \mathcal N (x|2,0.5)$. Figure(s) generated by [bimodal_dist_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/bimodal_dist_plot.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n bimodal_dist_plot.py ``` ## Figure 2.5:<a name='2.5'></a> <a name='anscombe'></a> Illustration of Anscombe's quartet. All of these datasets have the same low order summary statistics. Figure(s) generated by [anscobmes_quartet.py](https://github.com/probml/pyprobml/blob/master/scripts/anscobmes_quartet.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n anscobmes_quartet.py ``` ## Figure 2.6:<a name='2.6'></a> <a name='datasaurus'></a> Illustration of the Datasaurus Dozen. All of these datasets have the same low order summary statistics. Adapted from Figure 1 of <a href='#Matejka2017'>[JG17]</a> . Figure(s) generated by [datasaurus_dozen.py](https://github.com/probml/pyprobml/blob/master/scripts/datasaurus_dozen.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n datasaurus_dozen.py ``` ## Figure 2.7:<a name='2.7'></a> <a name='boxViolin'></a> Illustration of 7 different datasets (left), the corresponding box plots (middle) and violin box plots (right). 
From Figure 8 of https://www.autodesk.com/research/publications/same-stats-different-graphs . Used with kind permission of Justin Matejka ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') ``` <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_2.7.png" width="256"/> ## Figure 2.8:<a name='2.8'></a> <a name='3d2d'></a> Any planar line-drawing is geometrically consistent with infinitely many 3-D structures. From Figure 11 of <a href='#Sinha1993'>[PE93]</a> . Used with kind permission of Pawan Sinha ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') ``` <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_2.8.png" width="256"/> ## Figure 2.9:<a name='2.9'></a> <a name='binomDist'></a> Illustration of the binomial distribution with $N=10$ and (a) $\theta =0.25$ and (b) $\theta =0.9$. Figure(s) generated by [binom_dist_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/binom_dist_plot.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n binom_dist_plot.py ``` ## Figure 2.10:<a name='2.10'></a> <a name='sigmoidHeaviside'></a> (a) The sigmoid (logistic) function $\bm \sigma (a)=(1+e^ -a )^ -1 $. (b) The Heaviside function $\mathbb I \left ( a>0 \right )$. Figure(s) generated by [activation_fun_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/activation_fun_plot.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n activation_fun_plot.py ``` ## Figure 2.11:<a name='2.11'></a> <a name='iris-logreg-1d'></a> Logistic regression applied to a 1-dimensional, 2-class version of the Iris dataset. 
Figure(s) generated by [iris_logreg.py](https://github.com/probml/pyprobml/blob/master/scripts/iris_logreg.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n iris_logreg.py ``` ## Figure 2.12:<a name='2.12'></a> <a name='softmaxDemo'></a> Softmax distribution $\mathcal S ( \bm a /T)$, where $ \bm a =(3,0,1)$, at temperatures of $T=100$, $T=2$ and $T=1$. When the temperature is high (left), the distribution is uniform, whereas when the temperature is low (right), the distribution is ``spiky'', with most of its mass on the largest element. Figure(s) generated by [softmax_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/softmax_plot.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n softmax_plot.py ``` ## Figure 2.13:<a name='2.13'></a> <a name='iris-logistic-2d-3class-prob'></a> Logistic regression on the 3-class, 2-feature version of the Iris dataset. Adapted from Figure of 4.25 <a href='#Geron2019'>[Aur19]</a> . Figure(s) generated by [iris_logreg.py](https://github.com/probml/pyprobml/blob/master/scripts/iris_logreg.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n iris_logreg.py ``` ## Figure 2.14:<a name='2.14'></a> <a name='hetero'></a> Linear regression using Gaussian output with mean $\mu (x)=b + w x$ and (a) fixed variance $\sigma ^2$(homoskedastic) or (b) input-dependent variance $\sigma (x)^2$(heteroscedastic). 
Figure(s) generated by [linreg_1d_hetero_tfp.py](https://github.com/probml/pyprobml/blob/master/scripts/linreg_1d_hetero_tfp.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n linreg_1d_hetero_tfp.py ``` ## Figure 2.15:<a name='2.15'></a> <a name='studentPdf'></a> (a) The pdf's for a $\mathcal N (0,1)$, $\mathcal T (\mu =0,\sigma =1,\nu =1)$, $\mathcal T (\mu =0,\sigma =1,\nu =2)$, and $\mathrm Lap (0,1/\sqrt 2 )$. The mean is 0 and the variance is 1 for both the Gaussian and Laplace. When $\nu =1$, the Student is the same as the Cauchy, which does not have a well-defined mean and variance. (b) Log of these pdf's. Note that the Student distribution is not log-concave for any parameter value, unlike the Laplace distribution. Nevertheless, both are unimodal. Figure(s) generated by [student_laplace_pdf_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/student_laplace_pdf_plot.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n student_laplace_pdf_plot.py ``` ## Figure 2.16:<a name='2.16'></a> <a name='robustDemo'></a> Illustration of the effect of outliers on fitting Gaussian, Student and Laplace distributions. (a) No outliers (the Gaussian and Student curves are on top of each other). (b) With outliers. We see that the Gaussian is more affected by outliers than the Student and Laplace distributions. Adapted from Figure 2.16 of <a href='#BishopBook'>[Bis06]</a> . Figure(s) generated by [robust_pdf_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/robust_pdf_plot.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n robust_pdf_plot.py ``` ## Figure 2.17:<a name='2.17'></a> <a name='gammaDist'></a> (a) Some beta distributions. If $a<1$, we get a ``spike'' on the left, and if $b<1$, we get a ``spike'' on the right. if $a=b=1$, the distribution is uniform. If $a>1$ and $b>1$, the distribution is unimodal. 
Figure(s) generated by [beta_dist_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/beta_dist_plot.py) [gamma_dist_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/gamma_dist_plot.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n beta_dist_plot.py try_deimport() %run -n gamma_dist_plot.py ``` ## Figure 2.18:<a name='2.18'></a> <a name='empiricalDist'></a> Illustration of the (a) empirical pdf and (b) empirical cdf derived from a set of $N=5$ samples. From https://bit.ly/3hFgi0e . Used with kind permission of Mauro Escudero ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') ``` <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_2.18_A.png" width="256"/> <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_2.18_B.png" width="256"/> ## Figure 2.19:<a name='2.19'></a> <a name='changeOfVar1d'></a> (a) Mapping a uniform pdf through the function $f(x) = 2x + 1$. (b) Illustration of how two nearby points, $x$ and $x+dx$, get mapped under $f$. If $\frac dy dx >0$, the function is locally increasing, but if $\frac dy dx <0$, the function is locally decreasing. From <a href='#JangBlog'>[Jan18]</a> . Used with kind permission of Eric Jang ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') ``` <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_2.19_A.png" width="256"/> <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_2.19_B.png" width="256"/> ## Figure 2.20:<a name='2.20'></a> <a name='affine2d'></a> Illustration of an affine transformation applied to a unit square, $f( \bm x ) = \mathbf A \bm x + \bm b $. (a) Here $\mathbf A =\mathbf I $. (b) Here $ \bm b = \bm 0 $. From <a href='#JangBlog'>[Jan18]</a> . 
Used with kind permission of Eric Jang ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') ``` <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_2.20.png" width="256"/> ## Figure 2.21:<a name='2.21'></a> <a name='polar'></a> Change of variables from polar to Cartesian. The area of the shaded patch is $r \tmspace +\thickmuskip .2777em dr \tmspace +\thickmuskip .2777em d\theta $. Adapted from Figure 3.16 of <a href='#Rice95'>[Ric95]</a> ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') ``` <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_2.21.png" width="256"/> ## Figure 2.22:<a name='2.22'></a> <a name='bellCurve'></a> Distribution of the sum of two dice rolls, i.e., $p(y)$ where $y=x_1 + x_2$ and $x_i \sim \mathrm Unif (\ 1,2,\ldots ,6\ )$. From https://en.wikipedia.org/wiki/Probability\_distribution . Used with kind permission of Wikipedia author Tim Stellmach ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') ``` <img src="https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_2.22.png" width="256"/> ## Figure 2.23:<a name='2.23'></a> <a name='clt'></a> The central limit theorem in pictures. We plot a histogram of $ \mu _N^s = \frac 1 N \DOTSB \sum@ \slimits@ _ n=1 ^Nx_ ns $, where $x_ ns \sim \mathrm Beta (1,5)$, for $s=1:10000$. As $N\rightarrow \infty $, the distribution tends towards a Gaussian. (a) $N=1$. (b) $N=5$. Adapted from Figure 2.6 of <a href='#BishopBook'>[Bis06]</a> . 
Figure(s) generated by [centralLimitDemo.py](https://github.com/probml/pyprobml/blob/master/scripts/centralLimitDemo.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n centralLimitDemo.py ``` ## Figure 2.24:<a name='2.24'></a> <a name='changeOfVars'></a> Computing the distribution of $y=x^2$, where $p(x)$ is uniform (left). The analytic result is shown in the middle, and the Monte Carlo approximation is shown on the right. Figure(s) generated by [change_of_vars_demo1d.py](https://github.com/probml/pyprobml/blob/master/scripts/change_of_vars_demo1d.py) ``` #@title Click me to run setup { display-mode: "form" } try: if PYPROBML_SETUP_ALREADY_RUN: print('skipping setup') except: PYPROBML_SETUP_ALREADY_RUN = True print('running setup...') !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null %cd -q /pyprobml/scripts %reload_ext autoreload %autoreload 2 !pip install superimport deimport -qqq import superimport def try_deimport(): try: from deimport.deimport import deimport deimport(superimport) except Exception as e: print(e) print('finished!') try_deimport() %run -n change_of_vars_demo1d.py ``` ## References: <a name='Geron2019'>[Aur19]</a> G. Aur'elien "Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques for BuildingIntelligent Systems (2nd edition)". (2019). <a name='BishopBook'>[Bis06]</a> C. Bishop "Pattern recognition and machine learning". (2006). <a name='Matejka2017'>[JG17]</a> M. Justin and F. George. "Same Stats, Different Graphs: Generating Datasets with VariedAppearance and Identical Statistics through Simulated Annealing". (2017). <a name='JangBlog'>[Jan18]</a> E. Jang "Normalizing Flows Tutorial". (2018). <a name='Sinha1993'>[PE93]</a> S. P and A. E. "Recovering reflectance and illumination in a world of paintedpolyhedra". (1993). <a name='Rice95'>[Ric95]</a> J. Rice "Mathematical statistics and data analysis". (1995).
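To reproduce the central limit theorem experiment of Figure 2.23 without cloning the repository, a minimal self-contained sketch is given below. The Beta(1,5) distribution, the two values of $N$ and the 10000 repetitions follow the figure caption; the bin count and random seed are arbitrary choices.

```
import numpy as np
import matplotlib.pyplot as plt

S = 10000                      # number of repetitions s = 1..S
rng = np.random.default_rng(0)

fig, axes = plt.subplots(1, 2, figsize=(8, 3), sharey=True)
for ax, N in zip(axes, (1, 5)):
    # sample mean of N draws from Beta(1, 5), repeated S times
    means = rng.beta(1, 5, size=(S, N)).mean(axis=1)
    ax.hist(means, bins=40, density=True)
    ax.set_title(f'N = {N}')
plt.tight_layout()
plt.show()
```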
# Parametric Dynamic Mode Decomposition In this tutorial we explore the usage of the class `pydmd.ParametricDMD`, which is implemented following the work presented in [arXiv:2110.09155](https://arxiv.org/pdf/2110.09155.pdf) (Andreuzzi, Demo, Rozza. _A dynamic mode decomposition extension for the forecasting of parametric dynamical systems_, 2021). The approach is an attempt to extend Dynamic Mode Decomposition to parametric problems, in order to obtain predictions for future time instants in untested parameters. In this tutorial we apply the parametric approach to a simple parametric time-dependent problem, for which we are going to construct a dataset _on the fly_. $$\begin{cases} f_1(x,t) &:= e^{2.3i*t} \cosh(x+3)^{-1}\\ f_2(x,t) &:= 2 * e^{2.8j*t} \tanh(x) \cosh(x)^{-1}\\ f^{\mu}(x,t) &:= \mu f_1(x,t) + (1-\mu) f_2(x,t), \qquad \mu \in [0,1] \end{cases}$$ First of all we import the modules needed for the tutorial, which include: + Several classes from `pydmd` (in addition to `ParametricDMD` we import also the class `DMD` which actually performs the Dynamic Mode Decomposition); + The classes `POD` and `RBF` from `ezyrb`, which respectively are used to reduce the dimensionality before the interpolation and to perform the actual interpolation (see the reference for more details); + `NumPy` and `Matplotlib`. ``` import warnings warnings.filterwarnings('ignore') from pydmd import ParametricDMD, DMD, HankelDMD from ezyrb import POD, RBF import numpy as np import matplotlib.pyplot as plt import matplotlib.colors as colors ``` First of all we define several functions to construct our system and gather the data needed to train the algorithm: ``` def f1(x,t): return 1./np.cosh(x+3)*np.exp(2.3j*t) def f2(x,t): return 2./np.cosh(x)*np.tanh(x)*np.exp(2.8j*t) def f(mu): def fmu(x,t): return mu*f1(x,t) + (1-mu)*f2(x,t) return fmu ``` Then we construct a discrete space-time grid with an acceptable number of sample points in both the dimensions: ``` N = 160 m = 500 x = np.linspace(-5, 5, m) t = np.linspace(0, 4*np.pi, N) xgrid, tgrid = np.meshgrid(x, t) ``` We can now construct our dataset by computing the value of `f` for several known parameters (since our problem is quite simple we consider only 10 samples): ``` training_params = np.round(np.linspace(0,1,10),1) print(training_params) training_snapshots = np.array([f(p)(xgrid, tgrid) for p in training_params]) print(training_snapshots.shape) ``` After defining several utility functions which we are going to use in the following sections, we visualize our dataset for several values of $\mu$: ``` def title(param): return '$\mu$={}'.format(param) def visualize(X, param, ax, log=False, labels_func=None): ax.set_title(title(param)) if labels_func != None: labels_func(ax) if log: return ax.pcolormesh(X.real, norm=colors.LogNorm(vmin=X.min(), vmax=X.max())) else: return ax.pcolormesh(X.real) def visualize_multiple(Xs, params, log=False, figsize=(20,6), labels_func=None): if log: Xs[Xs == 0] = np.min(Xs[Xs != 0]) fig = plt.figure(figsize=figsize) axes = fig.subplots(nrows=1, ncols=5, sharey=True) if labels_func is None: def labels_func_default(ax): ax.set_yticks([0, N//2, N]) ax.set_yticklabels(['0', '$\pi$', '2$\pi$']) ax.set_xticks([0, m//2, m]) ax.set_xticklabels(['-5', '0', '5']) labels_func = labels_func_default im = [visualize(X, param, ax, log, labels_func) for X, param, ax in zip(Xs, params, axes)][-1] fig.colorbar(im, ax=axes) plt.show() idxes = [0,2,4,6,8] visualize_multiple(training_snapshots[idxes], training_params[idxes]) ``` As you can see the 
parameter is 1-dimensional, but the approach works also with parameters living in multi-dimensional spaces. It is important to provide a sufficient number of _training_ parameters, otherwise the algorithm won't be able to explore the solution manifold in an acceptable way. We now select several _unknown_ (or _testing_) parameters in order to assess the results obtained using the parametric approach. As you can see we consider testing parameters having dishomogeneous distances from our training parameters. ``` similar_testing_params = [1,3,5,7,9] testing_params = training_params[similar_testing_params] + np.array([5*pow(10,-i) for i in range(2,7)]) testing_params_labels = [str(training_params[similar_testing_params][i-2]) + '+$10^{{-{}}}$'.format(i) for i in range(2,7)] step = t[1]-t[0] N_predict = 40 N_nonpredict = 40 t2 = np.array([4*np.pi + i*step for i in range(-N_nonpredict+1,N_predict+1)]) print(t2.shape) xgrid2, tgrid2 = np.meshgrid(x, t2) testing_snapshots = np.array([f(p)(xgrid2, tgrid2) for p in testing_params]) plt.figure(figsize=(8,2)) plt.scatter(training_params, np.zeros(len(training_params)), label='training') plt.scatter(testing_params, np.zeros(len(testing_params)), label='testing') plt.legend() plt.title('Distribution of the parameters'); plt.xlabel('$\mu$') plt.yticks([],[]); ``` ## Mathematical formulation The reference mentioned above proposes two possible ways to achieve the parametrization of DMD, namely _monolithic_ and _partitioned_ approach. We briefly present each one before demonstrating how to use the class `ParametricDMD` in the two cases. The two methods share a common part, which consists in assembling the matrix $$\mathbf{X} = \begin{bmatrix} X_{\mu_1} & \dots & X_{\mu_p}\\ \end{bmatrix} \in \mathbb{R}^{m \times N p}$$ where $\mu_1, \dots, \mu_p$ are the parameters in the training set, $X_{\mu_i} \in \mathbb{R}^{m \times N}$ is the set of $N$ snapshots of the time-dependent system computed with the parameter $\mu_i$ represented by a vector with $m$ components (which may be a sampling of a continuous distribution like the pressure field on a surface). In our formulation $m$ is the dimension of the space and $N$ is the number of known time instants for each instance of the parametric problem. The dimensionality of the problem is reduced using Proper Orthogonal Decomposition (POD) (retaining the first $n$ POD modes using the singular values criteria), thus obtaining the (reduced) matrix $$\tilde{\mathbf{X}} = \begin{bmatrix} \tilde{X}_{\mu_1} & \dots & \tilde{X}_{\mu_p}\\ \end{bmatrix} \in \mathbb{R}^{n \times N p}$$ We now examine two ways to treat the reduced matrices $\tilde{X}_{\mu_1}, \dots, \tilde{X}_{\mu_p}$ to obtain an approximation of the system for untested parameters and future time instants. ## Monolithic approach This method consists in the application of a single DMD altogether to the matrix $$\begin{bmatrix} \tilde{X}_{\mu_1}, \dots, \tilde{X}_{\mu_p} \end{bmatrix}^T \in \mathbb{R}^{np \times N}$$ Note that the index of the time instant increases along the rows of this matrix. For this reason the constructor requires, in this case, only one instance of `DMD`, since the reduced representations of the snapshots in the testing dataset are treated as one single time-dependent non-parametric system. This allows us to obtain an approximation of the POD coefficients for the parameters in the training set in future time instants. 
The POD coefficients are then interpolated separately for each unknown parameter, and the approximated full-dimensional system is reconstructed via a matrix multiplication with the matrix of POD modes.

We choose to retain the first 20 POD modes, and set `svd_rank=-1` for our DMD instance in order to protect against divergent DMD modes which may ruin the results. We also provide an instance of an `RBF` interpolator to be used for the interpolation of the POD coefficients.

```
pdmd = ParametricDMD(DMD(svd_rank=-1), POD(rank=20), RBF())
pdmd.fit(training_snapshots, training_params)
```

We can now set the testing parameters by changing the property `parameters` of our instance of `ParametricDMD`, as well as the time-frame via the property `dmd_time` (see the other tutorials for an overview of the latter):

```
pdmd.parameters = testing_params

pdmd.dmd_time['t0'] = pdmd.original_time['tend'] - N_nonpredict + 1
pdmd.dmd_time['tend'] = pdmd.original_time['tend'] + N_nonpredict
print(len(pdmd.dmd_timesteps), pdmd.dmd_timesteps)

approximation = pdmd.reconstructed_data
approximation.shape
```

As you can see above, we stored the result of the approximation (which comprises both the reconstruction of known time instants and the prediction of future time instants) in the variable `approximation`.

```
# this is needed to visualize the time/space axes in the appropriate way
def labels_func(ax):
    l = len(pdmd.dmd_timesteps)
    ax.set_yticks([0, l//2, l])
    ax.set_yticklabels(['3$\pi$', '4$\pi$', '5$\pi$'])
    ax.set_xticks([0, m//2, m])
    ax.set_xticklabels(['-5', '0', '5'])

print('Approximation')
visualize_multiple(approximation, testing_params_labels, figsize=(20,2.5), labels_func=labels_func)
print('Truth')
visualize_multiple(testing_snapshots, testing_params_labels, figsize=(20,2.5), labels_func=labels_func)
print('Absolute error')
visualize_multiple(np.abs(testing_snapshots.real - approximation.real), testing_params_labels, figsize=(20,2.5), labels_func=labels_func)
```

Below we plot the dependency of the mean point-wise error of the reconstruction on the distance between the (untested) parameter and the nearest tested parameter in the training set:

```
distances = np.abs(testing_params - training_params[similar_testing_params])
errors = np.mean(np.abs(testing_snapshots.real - approximation.real), axis=(1,2))

plt.loglog(distances, errors, 'ro-')
plt.xlabel('Distance from nearest parameter')
plt.ylabel('Mean point-wise error');
```

## Partitioned approach

We now consider the second possible approach implemented in `ParametricDMD`. We consider again the matrices $\tilde{X}_{\mu_1}, \dots, \tilde{X}_{\mu_p}$ which we defined in [Mathematical formulation](#Mathematical-formulation). Unlike the *monolithic* approach, we now perform $p$ separate DMDs, one for each $\tilde{X}_{\mu_i}$, and then predict the POD coefficients in future time instants. The reconstruction phase is then the same as in the monolithic approach; for the details, see the reference mentioned in the introduction.

In order to apply this approach in `PyDMD`, you just need to pass a list of DMD instances to the constructor of `ParametricDMD`. Clearly you will need $p$ instances, where $p$ is the number of parameters in the training set.
```
dmds = [DMD(svd_rank=-1) for _ in range(len(training_params))]
p_pdmd = ParametricDMD(dmds, POD(rank=20), RBF())
p_pdmd.fit(training_snapshots, training_params)
```

We set the untested parameters and the time frame in which we want to reconstruct the system in the same way we did for the monolithic approach:

```
# setting unknown parameters and time
p_pdmd.parameters = testing_params
p_pdmd.dmd_time['t0'] = p_pdmd.original_time['tend'] - N_nonpredict + 1
p_pdmd.dmd_time['tend'] = p_pdmd.original_time['tend'] + N_nonpredict
```

**Important**: don't pass the same DMD instance $p$ times, since that would mean the same object is trained $p$ times on $p$ different training sets, and only the last training would be retained when the reconstruction is computed.

```
approximation = p_pdmd.reconstructed_data
approximation.shape
```

Below we plot the point-wise absolute error:

```
visualize_multiple(np.abs(testing_snapshots.real - approximation.real), testing_params_labels, figsize=(20,2.5))
```

As you can see there's not much difference in the absolute error, but for more complex problems (i.e. when new frequencies/modes turn on or off as the parameter moves in the domain) there are documented improvements when using the partitioned approach.
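If you want to compare the two approaches quantitatively, one possible extension is to keep the monolithic reconstruction in its own variable and compute the mean point-wise error per testing parameter for both. A hedged sketch under that assumption (`mono_approx` and `part_approx` are hypothetical names for the two stored reconstructions):

```
import numpy as np

# mono_approx: pdmd.reconstructed_data saved before being overwritten above
# part_approx: p_pdmd.reconstructed_data from the partitioned approach
mono_err = np.mean(np.abs(testing_snapshots.real - mono_approx.real), axis=(1, 2))
part_err = np.mean(np.abs(testing_snapshots.real - part_approx.real), axis=(1, 2))

for label, em, ep in zip(testing_params_labels, mono_err, part_err):
    print("{}: monolithic {:.3e}, partitioned {:.3e}".format(label, em, ep))
```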
true
code
0.59131
null
null
null
null
# LogisticRegression with StandardScaler & Polynomial Features This Code template is for the Classification task using the LogisticRegression with StandardScaler feature scaling technique and PolynomialFeatures as Feature Transformation Technique in a pipeline. ### Required Packages ``` !pip install imblearn --q import warnings import numpy as np import pandas as pd import seaborn as se import matplotlib.pyplot as plt from imblearn.over_sampling import RandomOverSampler from sklearn.pipeline import make_pipeline from sklearn.preprocessing import LabelEncoder,StandardScaler,PolynomialFeatures from sklearn.model_selection import train_test_split from sklearn.linear_model import LogisticRegression from sklearn.metrics import classification_report,plot_confusion_matrix warnings.filterwarnings('ignore') ``` ### Initialization Filepath of CSV file ``` #filepath file_path= "" ``` List of features which are required for model training . ``` #x_values features=[''] ``` Target feature for prediction. ``` #y_value target='' ``` ### Data Fetching Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools. We will use panda's library to read the CSV file using its storage path.And we use the head function to display the initial row or entry. ``` df=pd.read_csv(file_path) df.head() ``` ### Feature Selections It is the process of reducing the number of input variables when developing a predictive model. Used to reduce the number of input variables to both reduce the computational cost of modelling and, in some cases, to improve the performance of the model. We will assign all the required input features to X and target/outcome to Y. ``` X = df[features] Y = df[target] ``` ### Data Preprocessing Since the majority of the machine learning models in the Sklearn library doesn't handle string category data and Null value, we have to explicitly remove or replace null values. The below snippet have functions, which removes the null value if any exists. And convert the string classes data in the datasets by encoding them to integer classes. ``` def NullClearner(df): if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])): df.fillna(df.mean(),inplace=True) return df elif(isinstance(df, pd.Series)): df.fillna(df.mode()[0],inplace=True) return df else:return df def EncodeX(df): return pd.get_dummies(df) def EncodeY(df): if len(df.unique())<=2: return df else: un_EncodedT=np.sort(pd.unique(df), axis=-1, kind='mergesort') df=LabelEncoder().fit_transform(df) EncodedT=[xi for xi in range(len(un_EncodedT))] print("Encoded Target: {} to {}".format(un_EncodedT,EncodedT)) return df x=X.columns.to_list() for i in x: X[i]=NullClearner(X[i]) X=EncodeX(X) Y=EncodeY(NullClearner(Y)) X.head() ``` #### Correlation Map In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns. ``` f,ax = plt.subplots(figsize=(18, 18)) matrix = np.triu(X.corr()) se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix) plt.show() ``` #### Distribution Of Target Variable ``` plt.figure(figsize = (10,6)) se.countplot(Y) ``` ### Data Splitting The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. 
The main motive is to estimate the performance of the model on new data. ``` x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123) ``` #### Handling Target Imbalance The challenge of working with imbalanced datasets is that most machine learning techniques will ignore, and in turn have poor performance on, the minority class, although typically it is performance on the minority class that is most important. One approach to addressing imbalanced datasets is to oversample the minority class. The simplest approach involves duplicating examples in the minority class.We will perform overspampling using imblearn library. ``` x_train,y_train = RandomOverSampler(random_state=123).fit_resample(x_train, y_train) ``` ### Model Logistic regression is a statistical model that in its basic form uses a logistic function to model a binary dependent variable, although many more complex extensions exist. In regression analysis, logistic regression (or logit regression) is estimating the parameters of a logistic model (a form of binary regression). This can be extended to model several classes of events. #### Model Tuning Parameters 1. penalty : {‘l1’, ‘l2’, ‘elasticnet’, ‘none’}, default=’l2’ > Used to specify the norm used in the penalization. The ‘newton-cg’, ‘sag’ and ‘lbfgs’ solvers support only l2 penalties. ‘elasticnet’ is only supported by the ‘saga’ solver. If ‘none’ (not supported by the liblinear solver), no regularization is applied. 2. C : float, default=1.0 > Inverse of regularization strength; must be a positive float. Like in support vector machines, smaller values specify stronger regularization. 3. tol : float, default=1e-4 > Tolerance for stopping criteria. 4. solver : {‘newton-cg’, ‘lbfgs’, ‘liblinear’, ‘sag’, ‘saga’}, default=’lbfgs’ > Algorithm to use in the optimization problem. For small datasets, ‘<code>liblinear</code>’ is a good choice, whereas ‘<code>sag</code>’ and ‘<code>saga</code>’ are faster for large ones. For multiclass problems, only ‘<code>newton-cg</code>’, ‘<code>sag</code>’, ‘<code>saga</code>’ and ‘<code>lbfgs</code>’ handle multinomial loss; ‘<code>liblinear</code>’ is limited to one-versus-rest schemes. * ‘<code>newton-cg</code>’, ‘<code>lbfgs</code>’, ‘<code>sag</code>’ and ‘<code>saga</code>’ handle L2 or no penalty. * ‘<code>liblinear</code>’ and ‘<code>saga</code>’ also handle L1 penalty. * ‘<code>saga</code>’ also supports ‘<code>elasticnet</code>’ penalty. * ‘<code>liblinear</code>’ does not support setting <code>penalty='none'</code>. 5. random_state : int, RandomState instance, default=None > Used when <code>solver</code> == ‘sag’, ‘saga’ or ‘liblinear’ to shuffle the data. 6. max_iter : int, default=100 > Maximum number of iterations taken for the solvers to converge. 7. multi_class : {‘auto’, ‘ovr’, ‘multinomial’}, default=’auto’ > If the option chosen is ‘<code>ovr</code>’, then a binary problem is fit for each label. For ‘<code>multinomial</code>’ the loss minimised is the multinomial loss fit across the entire probability distribution, even when the data is binary. ‘<code>multinomial</code>’ is unavailable when <code>solver</code>=’<code>liblinear</code>’. ‘auto’ selects ‘ovr’ if the data is binary, or if <code>solver</code>=’<code>liblinear</code>’, and otherwise selects ‘<code>multinomial</code>’. 8. verbose : int, default=0 > For the liblinear and lbfgs solvers set verbose to any positive number for verbosity. 9. n_jobs : int, default=None > Number of CPU cores used when parallelizing over classes if multi_class=’ovr’”. 
This parameter is ignored when the <code>solver</code> is set to ‘liblinear’ regardless of whether ‘multi_class’ is specified or not. <code>None</code> means 1 unless in a joblib.parallel_backend context. <code>-1</code> means using all processors ``` # Build Model here model = make_pipeline(StandardScaler(),PolynomialFeatures(),LogisticRegression(random_state = 123,n_jobs = -1)) model.fit(x_train, y_train) ``` #### Model Accuracy score() method return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. ``` print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100)) ``` #### Confusion Matrix A confusion matrix is utilized to understand the performance of the classification model or algorithm in machine learning for a given test set where results are known. ``` plot_confusion_matrix(model,x_test,y_test,cmap=plt.cm.Blues) ``` #### Classification Report A Classification report is used to measure the quality of predictions from a classification algorithm. How many predictions are True, how many are False. * **where**: - Precision:- Accuracy of positive predictions. - Recall:- Fraction of positives that were correctly identified. - f1-score:- percent of positive predictions were correct - support:- Support is the number of actual occurrences of the class in the specified dataset. ``` print(classification_report(y_test,model.predict(x_test))) ``` #### Creator: Ganapathi Thota , Github: [Profile](https://github.com/Shikiz)
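As an optional extension beyond this template, the tuning parameters listed above could be searched over the same pipeline with a grid search. This is only a hedged sketch (the parameter values are illustrative); the step names follow `make_pipeline`'s convention of lower-cased class names:

```
from sklearn.model_selection import GridSearchCV

# Illustrative grid over two of the tuning parameters discussed above
param_grid = {
    "polynomialfeatures__degree": [1, 2],
    "logisticregression__C": [0.01, 0.1, 1.0, 10.0],
}
search = GridSearchCV(model, param_grid, cv=5, scoring="accuracy")
search.fit(x_train, y_train)
print(search.best_params_, search.best_score_)
```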
true
code
0.282196
null
null
null
null
The objective of this experiment is to see that words with similar or different meanings are equally far apart in a Bag-of-Words (BoW) representation, whereas the semantics (meaning) of a word is preserved in Word2Vec (W2V). In this experiment we will be using a large dataset known as the 20 newsgroups classification dataset. This data set consists of 20000 messages taken from 20 newsgroups. ### Datasource http://archive.ics.uci.edu/ml/datasets/Twenty+Newsgroups To get a sense of our data, let us first start by counting the frequencies of the target classes in our news articles in the training set. #### Keywords * Numpy * Collections * Gensim * Bag-of-Words (Word Frequency, Pre-Processing) * Bag-of-Words representation ### Setup Steps ``` #@title Run this cell to complete the setup for this Notebook from IPython import get_ipython ipython = get_ipython() ipython.magic("sx wget https://www.dropbox.com/s/ir5kph0ocvibaqm/Setup_W1_D1_Exp1.sh?dl=1") ipython.magic("sx mv Setup_W1_D1_Exp1.sh?dl=1 Setup_W1_D1_Exp1.sh") ipython.magic("sx bash Setup_W1_D1_Exp1.sh -u standard -p pass123") ipython.magic("sx pip3 install gensim") # Importing required Packages import pickle import re import operator from collections import defaultdict import matplotlib.pyplot as plt import numpy as np import math import collections import gensim from nltk import ngrams import warnings warnings.filterwarnings("ignore") # Loading the dataset dataset = pickle.load(open('week1_exp1/AIML_DS_NEWSGROUPS_PICKELFILE.pkl','rb')) print(dataset.keys()) # Print frequencies of dataset print("Class : count") print("--------------") number_of_documents = 0 for key in dataset: print(key, ':', len(dataset[key])) ``` Next, let us split our dataset, which consists of 1000 samples per class, into training and test sets. We use 950 samples from each class in the training set, and the remaining 50 in the test set. As a mental exercise you should try reasoning about why it is important to ensure a nearly equal distribution of classes in your training and test sets. ``` train_set = {} test_set = {} # Clean dataset for text encoding issues :- Very useful when dealing with non-unicode characters for key in dataset: dataset[key] = [[i.decode('utf-8', errors='replace').lower() for i in f] for f in dataset[key]] # Break dataset into 95-5 split for training and testing n_train = 0 n_test = 0 for k in dataset: split = int(0.95*len(dataset[k])) train_set[k] = dataset[k][0:split] test_set[k] = dataset[k][split:-1] n_train += len(train_set[k]) n_test += len(test_set[k]) for key in train_set: print(key) ``` ## 1. Bag-of-Words Let us begin our journey into text classification with one of the simplest but most commonly used feature representations for news documents - Bag-of-Words. As you might have realized, machine learning algorithms need good feature representations of different inputs. Concretely, we would like to represent each news article $D$ in terms of a feature vector $V$, which can be used for classification. The feature vector $V$ is made up of the number of occurrences of each word in the vocabulary. Let us begin by counting the number of occurrences of every word in the news documents in the training set. ### 1.1 Word frequency ### Let us try understanding the kind of words that appear frequently, and those that occur rarely. We now count the frequencies of words: ``` def frequency_words(train_set): frequency = defaultdict(int) for key in train_set: for f in train_set[key]: # Find all words which consist only of capital and lowercase characters and are between length of 2-9.
# We ignore all special characters such as !.$ and words containing numbers words = re.findall(r'(\b[A-Za-z][a-z]{2,9}\b)', ' '.join(f)) for word in words: frequency[word] += 1 return frequency frequency_of_words = frequency_words(train_set) sorted_words = sorted(frequency_of_words.items(), key=operator.itemgetter(1), reverse=True) print("Top-10 most frequent words:") for word in sorted_words[:10]: print(word) print('----------------------------') print("10 least frequent words:") for word in sorted_words[-10:-1]: print(word) ``` Next, we attempt to plot a histogram of the counts of various words in descending order. Could you comment about the relationship between the frequency of the most frequent word to the second frequent word? And what about the third most frequent word? (Hint - Check the relative frequencies of the first, second and third most frequent words) (After answering, you can visit https://en.wikipedia.org/wiki/Zipf%27s_law for further Reading) ``` %matplotlib inline fig = plt.figure() fig.set_size_inches(20,10) plt.bar(range(len(sorted_words[:100])), [v for k, v in sorted_words[:100]] , align='center') plt.xticks(range(len(sorted_words[:100])), [k for k, v in sorted_words[:100]]) locs, labels = plt.xticks() plt.setp(labels, rotation=90) plt.show() ``` ### 1.2 Pre-processing to remove most and least frequent words We can see that different words appear with different frequencies. The most common words appear in almost all documents. Hence, for a classification task, having information about those words' frequencies does not mater much since they appear frequently in every type of document. To get a good feature representation, we eliminate them since they do not add too much value. Additionally, notice how the least frequent words appear so rarely that they might not be useful either. Let us pre-process our news articles now to remove the most frequent and least frequent words by thresholding their counts: ``` def cleaning_vocabulary_words(list_of_grams): valid_words = defaultdict(int) print('Number of words before preprocessing:', len(list_of_grams)) # Ignore the 25 most frequent words, and the words which appear less than 100 times ignore_most_frequent = 25 freq_thresh = 100 feature_number = 0 for word, word_frequency in list_of_grams[ignore_most_frequent:]: if word_frequency > freq_thresh: valid_words[word] = feature_number feature_number += 1 elif '_' in word: valid_words[word] = feature_number feature_number += 1 print('Number of words after preprocessing:', len(valid_words)) vector_size = len(valid_words) return valid_words, vector_size valid_words, number_of_words = cleaning_vocabulary_words(sorted_words) dictionary = valid_words.keys() ``` ### 1.3 Bag-of-Words representation The simplest way to represent a document $D$ as a vector $V$ would be to now count the relevant words in the document. For each document, make a vector of the count of each of the words in the vocabulary (excluding the words removed in the previous step - the "stopwords"). 
```
def convert_to_BoW(dataset, number_of_documents):
    bow_representation = np.zeros((number_of_documents, number_of_words))
    labels = np.zeros((number_of_documents, 1))
    i = 0
    for label, class_name in enumerate(dataset):
        # For each file
        for f in dataset[class_name]:
            # words = re.findall(r'(\b[A-Za-z][a-z]{2,9}\b)', ' '.join(f))
            for w in f:
                words = w.split()
                for word in words:
                    if word in dictionary:
                        bow_representation[i][valid_words[word]] += 1
            labels[i] = label   # record the class label of this document
            i += 1
    return bow_representation, labels

# Convert the dataset into the bag of words representation, treating train and test separately
train_bow_set, train_bow_labels = convert_to_BoW(train_set, n_train)
test_bow_set, test_bow_labels = convert_to_BoW(test_set, n_test)

print(train_bow_set)
```

### 1.4 Document classification using Bag-of-Words

For the test documents, use your favorite distance metric (Cosine, Euclidean, etc.) to find similar news articles from your training set and classify using kNN.

```
# Optimized K-NN:- This does the same thing as you've learned but in an optimized manner
def dist(train_features, given_feature):
    # Calculate euclidean distances between the training feature set and the given feature
    # Try and optimise this calculation using built-in numpy functions rather than for loops
    # Sum over the feature axis (axis=1) so that we get one distance per training document
    distances = np.sqrt(np.sum(np.square(np.abs(train_features - given_feature)), axis=1))
    return distances

'''
Optimized K-NN code. This code is the same as what you've already seen,
but trades off memory efficiency for computational efficiency.
'''
def kNN(k, train_features, train_labels, given_feature):
    distances = []
    n = train_features.shape[0]
    # np.tile function repeats the given_feature n times.
    given_feature = np.tile(given_feature, (n, 1))
    # Compute distance
    distances = dist(train_features, given_feature)
    sort_neighbors = np.argsort(distances)
    return np.concatenate((distances[sort_neighbors][:k].reshape(-1, 1),
                           train_labels[sort_neighbors][:k].reshape(-1, 1)), axis = 1)

def kNN_classify(k, train_features, train_labels, given_feature):
    tally = collections.Counter()
    tally.update(str(int(nn[1])) for nn in kNN(k, train_features, train_labels, given_feature))
    return int(tally.most_common(1)[0][0])
```

For example, using 3 nearest neighbours, the $0^{th}$ test document is classified as:

Computing accuracy for the bag-of-words features on the full test set:

```
accuracy = 0
for i, given_feature in enumerate(test_bow_set):
    print("Progress: {0:.04f}".format((i+1)/len(test_bow_set)), end="\r")
    predicted_class = kNN_classify(3, train_bow_set, train_bow_labels, given_feature)
    if predicted_class == int(test_bow_labels[i]):
        accuracy += 1

BoW_accuracy = (accuracy / len(test_bow_set))
print(BoW_accuracy)
```
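If you would rather experiment with cosine distance, mentioned above as an alternative metric, a minimal sketch of a drop-in replacement for `dist` could look like the following (a hypothetical helper, not part of the original notebook; like `dist`, it is called after the query vector has been tiled inside `kNN`):

```
def cosine_dist(train_features, given_feature):
    # 1 - cosine similarity between each training document and the query document;
    # the small epsilon guards against division by zero for empty documents
    eps = 1e-10
    dots = np.sum(train_features * given_feature, axis=1)
    norms = np.linalg.norm(train_features, axis=1) * np.linalg.norm(given_feature, axis=1)
    return 1.0 - dots / (norms + eps)
```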
true
code
0.532911
null
null
null
null
## Computer vision data ``` %matplotlib inline from fastai.gen_doc.nbdoc import * from fastai.vision import * ``` This module contains the classes that define datasets handling [`Image`](/vision.image.html#Image) objects and their transformations. As usual, we'll start with a quick overview, before we get in to the detailed API docs. Before any work can be done a dataset needs to be converted into a [`DataBunch`](/basic_data.html#DataBunch) object, and in the case of the computer vision data - specifically into an [`ImageDataBunch`](/vision.data.html#ImageDataBunch) subclass. This is done with the help of [data block API](/data_block.html) and the [`ImageList`](/vision.data.html#ImageList) class and its subclasses. However, there is also a group of shortcut methods provided by [`ImageDataBunch`](/vision.data.html#ImageDataBunch) which reduce the multiple stages of the data block API, into a single wrapper method. These shortcuts methods work really well for: - Imagenet-style of datasets ([`ImageDataBunch.from_folder`](/vision.data.html#ImageDataBunch.from_folder)) - A pandas `DataFrame` with a column of filenames and a column of labels which can be strings for classification, strings separated by a `label_delim` for multi-classification or floats for a regression problem ([`ImageDataBunch.from_df`](/vision.data.html#ImageDataBunch.from_df)) - A csv file with the same format as above ([`ImageDataBunch.from_csv`](/vision.data.html#ImageDataBunch.from_csv)) - A list of filenames and a list of targets ([`ImageDataBunch.from_lists`](/vision.data.html#ImageDataBunch.from_lists)) - A list of filenames and a function to get the target from the filename ([`ImageDataBunch.from_name_func`](/vision.data.html#ImageDataBunch.from_name_func)) - A list of filenames and a regex pattern to get the target from the filename ([`ImageDataBunch.from_name_re`](/vision.data.html#ImageDataBunch.from_name_re)) In the last five factory methods, a random split is performed between train and validation, in the first one it can be a random split or a separation from a training and a validation folder. If you're just starting out you may choose to experiment with these shortcut methods, as they are also used in the first lessons of the fastai deep learning course. However, you can completely skip them and start building your code using the data block API from the very beginning. Internally, these shortcuts use this API anyway. The first part of this document is dedicated to the shortcut [`ImageDataBunch`](/vision.data.html#ImageDataBunch) factory methods. Then all the other computer vision data-specific methods that are used with the data block API are presented. ## Quickly get your data ready for training To get you started as easily as possible, the fastai provides two helper functions to create a [`DataBunch`](/basic_data.html#DataBunch) object that you can directly use for training a classifier. To demonstrate them you'll first need to download and untar the file by executing the following cell. This will create a data folder containing an MNIST subset in `data/mnist_sample`. ``` path = untar_data(URLs.MNIST_SAMPLE); path ``` There are a number of ways to create an [`ImageDataBunch`](/vision.data.html#ImageDataBunch). 
One common approach is to use *Imagenet-style folders* (see a ways down the page below for details) with [`ImageDataBunch.from_folder`](/vision.data.html#ImageDataBunch.from_folder): ``` tfms = get_transforms(do_flip=False) data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ``` Here the datasets will be automatically created in the structure of *Imagenet-style folders*. The parameters specified: - the transforms to apply to the images in `ds_tfms` (here with `do_flip`=False because we don't want to flip numbers), - the target `size` of our pictures (here 24). As with all [`DataBunch`](/basic_data.html#DataBunch) usage, a `train_dl` and a `valid_dl` are created that are of the type PyTorch [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader). If you want to have a look at a few images inside a batch, you can use [`DataBunch.show_batch`](/basic_data.html#DataBunch.show_batch). The `rows` argument is the number of rows and columns to display. ``` data.show_batch(rows=3, figsize=(5,5)) ``` The second way to define the data for a classifier requires a structure like this: ``` path\ train\ test\ labels.csv ``` where the labels.csv file defines the label(s) of each image in the training set. This is the format you will need to use when each image can have multiple labels. It also works with single labels: ``` pd.read_csv(path/'labels.csv').head() ``` You can then use [`ImageDataBunch.from_csv`](/vision.data.html#ImageDataBunch.from_csv): ``` data = ImageDataBunch.from_csv(path, ds_tfms=tfms, size=28) data.show_batch(rows=3, figsize=(5,5)) ``` An example of multiclassification can be downloaded with the following cell. It's a sample of the [planet dataset](https://www.google.com/search?q=kaggle+planet&rlz=1C1CHBF_enFR786FR786&oq=kaggle+planet&aqs=chrome..69i57j0.1563j0j7&sourceid=chrome&ie=UTF-8). ``` planet = untar_data(URLs.PLANET_SAMPLE) ``` If we open the labels files, we seach that each image has one or more tags, separated by a space. ``` df = pd.read_csv(planet/'labels.csv') df.head() data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim=' ', ds_tfms=get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.)) ``` The `show_batch`method will then print all the labels that correspond to each image. ``` data.show_batch(rows=3, figsize=(10,8), ds_type=DatasetType.Valid) ``` You can find more ways to build an [`ImageDataBunch`](/vision.data.html#ImageDataBunch) without the factory methods in [`data_block`](/data_block.html#data_block). ``` show_doc(ImageDataBunch) ``` This is the same initialization as a regular [`DataBunch`](/basic_data.html#DataBunch) so you probably don't want to use this directly, but one of the factory methods instead. ### Factory methods If you quickly want to get a [`ImageDataBunch`](/vision.data.html#ImageDataBunch) and train a model, you should process your data to have it in one of the formats the following functions handle. ``` show_doc(ImageDataBunch.from_folder) ``` Refer to [`create_from_ll`](#ImageDataBunch.create_from_ll) to see all the `**kwargs` arguments. "*Imagenet-style*" datasets look something like this (note that the test folder is optional): ``` path\ train\ clas1\ clas2\ ... valid\ clas1\ clas2\ ... test\ ``` For example: ``` data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ``` Note that this (and all factory methods in this section) pass any `kwargs` to [`DataBunch.create`](/basic_data.html#DataBunch.create). 
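If you prefer the data block API mentioned earlier, a roughly equivalent way to build the same kind of `ImageDataBunch` is sketched below (this is only an approximation of what the factory method does internally, not its exact implementation):

```
data_block = (ImageList.from_folder(path)      # collect the image files under path
              .split_by_folder()               # split using the train/valid folders
              .label_from_folder()             # label each image by its parent folder name
              .transform(tfms, size=24)        # apply the transforms defined earlier
              .databunch())                    # wrap everything into an ImageDataBunch
```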
``` show_doc(ImageDataBunch.from_csv) ``` Refer to [`create_from_ll`](#ImageDataBunch.create_from_ll) to see all the `**kwargs` arguments. Create an [`ImageDataBunch`](/vision.data.html#ImageDataBunch) from `path` by splitting the data in `folder` and labelled in a file `csv_labels` between a training and validation set. Use `valid_pct` to indicate the percentage of the total images to use as the validation set. An optional `test` folder contains unlabelled data and `suffix` contains an optional suffix to add to the filenames in `csv_labels` (such as '.jpg'). `fn_col` is the index (or the name) of the the column containing the filenames and `label_col` is the index (indices) (or the name(s)) of the column(s) containing the labels. Use [`header`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html#pandas-read-csv) to specify the format of the csv header, and `delimiter` to specify a non-standard csv-field separator. In case your csv has no header, column parameters can only be specified as indices. If `label_delim` is passed, split what's in the label column according to that separator. For example: ``` data = ImageDataBunch.from_csv(path, ds_tfms=tfms, size=24); show_doc(ImageDataBunch.from_df) ``` Refer to [`create_from_ll`](#ImageDataBunch.create_from_ll) to see all the `**kwargs` arguments. Same as [`ImageDataBunch.from_csv`](/vision.data.html#ImageDataBunch.from_csv), but passing in a `DataFrame` instead of a csv file. e.g ``` df = pd.read_csv(path/'labels.csv', header='infer') df.head() data = ImageDataBunch.from_df(path, df, ds_tfms=tfms, size=24) ``` Different datasets are labeled in many different ways. The following methods can help extract the labels from the dataset in a wide variety of situations. The way they are built in fastai is constructive: there are methods which do a lot for you but apply in specific circumstances and there are methods which do less for you but give you more flexibility. In this case the hierarchy is: 1. [`ImageDataBunch.from_name_re`](/vision.data.html#ImageDataBunch.from_name_re): Gets the labels from the filenames using a regular expression 2. [`ImageDataBunch.from_name_func`](/vision.data.html#ImageDataBunch.from_name_func): Gets the labels from the filenames using any function 3. [`ImageDataBunch.from_lists`](/vision.data.html#ImageDataBunch.from_lists): Labels need to be provided as an input in a list ``` show_doc(ImageDataBunch.from_name_re) ``` Refer to [`create_from_ll`](#ImageDataBunch.create_from_ll) to see all the `**kwargs` arguments. Creates an [`ImageDataBunch`](/vision.data.html#ImageDataBunch) from `fnames`, calling a regular expression (containing one *re group*) on the file names to get the labels, putting aside `valid_pct` for the validation. In the same way as [`ImageDataBunch.from_csv`](/vision.data.html#ImageDataBunch.from_csv), an optional `test` folder contains unlabelled data. Our previously created dataframe contains the labels in the filenames so we can leverage it to test this new method. [`ImageDataBunch.from_name_re`](/vision.data.html#ImageDataBunch.from_name_re) needs the exact path of each file so we will append the data path to each filename before creating our [`ImageDataBunch`](/vision.data.html#ImageDataBunch) object. 
``` fn_paths = [path/name for name in df['name']]; fn_paths[:2] pat = r"/(\d)/\d+\.png$" data = ImageDataBunch.from_name_re(path, fn_paths, pat=pat, ds_tfms=tfms, size=24) data.classes show_doc(ImageDataBunch.from_name_func) ``` Refer to [`create_from_ll`](#ImageDataBunch.create_from_ll) to see all the `**kwargs` arguments. Works in the same way as [`ImageDataBunch.from_name_re`](/vision.data.html#ImageDataBunch.from_name_re), but instead of a regular expression it expects a function that will determine how to extract the labels from the filenames. (Note that `from_name_re` uses this function in its implementation). To test it we could build a function with our previous regex. Let's try another, similar approach to show that the labels can be obtained in a different way. ``` def get_labels(file_path): return '3' if '/3/' in str(file_path) else '7' data = ImageDataBunch.from_name_func(path, fn_paths, label_func=get_labels, ds_tfms=tfms, size=24) data.classes show_doc(ImageDataBunch.from_lists) ``` Refer to [`create_from_ll`](#ImageDataBunch.create_from_ll) to see all the `**kwargs` arguments. The most flexible factory function; pass in a list of `labels` that correspond to each of the filenames in `fnames`. To show an example we have to build the labels list outside our [`ImageDataBunch`](/vision.data.html#ImageDataBunch) object and give it as an argument when we call `from_lists`. Let's use our previously created function to create our labels list. ``` labels_ls = list(map(get_labels, fn_paths)) data = ImageDataBunch.from_lists(path, fn_paths, labels=labels_ls, ds_tfms=tfms, size=24) data.classes show_doc(ImageDataBunch.create_from_ll) ``` Use `bs`, `num_workers`, `collate_fn` and a potential `test` folder. `ds_tfms` is a tuple of two lists of transforms to be applied to the training and the validation (plus test optionally) set. `tfms` are the transforms to apply to the [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader). The `size` and the `kwargs` are passed to the transforms for data augmentation. ``` show_doc(ImageDataBunch.single_from_classes) jekyll_note('This method is deprecated, you should use DataBunch.load_empty now.') ``` ### Other methods In the next few methods we will use another dataset, CIFAR. This is because the second method will get the statistics for our dataset and we want to be able to show different statistics per channel. If we were to use MNIST, these statistics would be the same for every channel. White pixels are [255,255,255] and black pixels are [0,0,0] (or in normalized form [1,1,1] and [0,0,0]) so there is no variance between channels. ``` path = untar_data(URLs.CIFAR); path show_doc(channel_view) data = ImageDataBunch.from_folder(path, ds_tfms=tfms, valid='test', size=24) def channel_view(x:Tensor)->Tensor: "Make channel the first axis of `x` and flatten remaining axes" return x.transpose(0,1).contiguous().view(x.shape[1],-1) ``` This function takes a tensor and flattens all dimensions except the channels, which it keeps as the first axis. This function is used to feed [`ImageDataBunch.batch_stats`](/vision.data.html#ImageDataBunch.batch_stats) so that it can get the pixel statistics of a whole batch. Let's take as an example the dimensions our MNIST batches: 128, 3, 24, 24. 
``` t = torch.Tensor(128, 3, 24, 24) t.size() tensor = channel_view(t) tensor.size() show_doc(ImageDataBunch.batch_stats) data.batch_stats() show_doc(ImageDataBunch.normalize) ``` In the fast.ai library we have `imagenet_stats`, `cifar_stats` and `mnist_stats` so we can add normalization easily with any of these datasets. Let's see an example with our dataset of choice: MNIST. ``` data.normalize(cifar_stats) data.batch_stats() ``` ## Data normalization You may also want to normalize your data, which can be done by using the following functions. ``` show_doc(normalize) show_doc(denormalize) show_doc(normalize_funcs) ``` On MNIST the mean and std are 0.1307 and 0.3081 respectively (looked on Google). If you're using a pretrained model, you'll need to use the normalization that was used to train the model. The imagenet norm and denorm functions are stored as constants inside the library named <code>imagenet_norm</code> and <code>imagenet_denorm</code>. If you're training a model on CIFAR-10, you can also use <code>cifar_norm</code> and <code>cifar_denorm</code>. You may sometimes see warnings about *clipping input data* when plotting normalized data. That's because even although it's denormalized when plotting automatically, sometimes floating point errors may make some values slightly out or the correct range. You can safely ignore these warnings in this case. ``` data = ImageDataBunch.from_folder(untar_data(URLs.MNIST_SAMPLE), ds_tfms=tfms, size=24) data.normalize() data.show_batch(rows=3, figsize=(6,6)) show_doc(get_annotations) ``` To use this dataset and collate samples into batches, you'll need to following function: ``` show_doc(bb_pad_collate) ``` Finally, to apply transformations to [`Image`](/vision.image.html#Image) in a [`Dataset`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset), we use this last class. ## ItemList specific to vision The vision application adds a few subclasses of [`ItemList`](/data_block.html#ItemList) specific to images. ``` show_doc(ImageList, title_level=3) ``` It inherits from [`ItemList`](/data_block.html#ItemList) and overwrite [`ItemList.get`](/data_block.html#ItemList.get) to call [`open_image`](/vision.image.html#open_image) in order to turn an image file in `Path` object into an [`Image`](/vision.image.html#Image) object. `label_cls` can be specified for the labels, `xtra` contains any extra information (usually in the form of a dataframe) and `processor` is applied to the [`ItemList`](/data_block.html#ItemList) after splitting and labelling. How [`ImageList.__init__`](/vision.data.html#ImageList.__init__) overwrites on [`ItemList.__init__`](/data_block.html#ItemList.__init__)? [`ImageList.__init__`](/vision.data.html#ImageList.__init__) creates additional attributes like `convert_mode`, `after_open`, `c`, `size` upon [`ItemList.__init__`](/data_block.html#ItemList.__init__); and `convert_mode` and `sizes` in particular are necessary to make use of [`ImageList.get`](/vision.data.html#ImageList.get) (which also overwrites on [`ItemList.get`](/data_block.html#ItemList.get)) and [`ImageList.open`](/vision.data.html#ImageList.open). ``` show_doc(ImageList.from_folder) ``` How [`ImageList.from_folder`](/vision.data.html#ImageList.from_folder) overwrites on [`ItemList.from_folder`](/data_block.html#ItemList.from_folder)? 
[`ImageList.from_folder`](/vision.data.html#ImageList.from_folder) adds some constraints on `extensions` upon [`ItemList.from_folder`](/data_block.html#ItemList.from_folder), to work with image files specifically; and can take additional input arguments like `convert_mode` and `after_open` which are not available to [`ItemList`](/data_block.html#ItemList). ``` show_doc(ImageList.from_df) show_doc(get_image_files) show_doc(ImageList.open) ``` Let's get a feel of how `open` is used with the following example. ``` from fastai.vision import * path_data = untar_data(URLs.PLANET_TINY); path_data.ls() imagelistRGB = ImageList.from_folder(path_data/'train'); imagelistRGB ``` `open` takes only one input `fn` as `filename` in the type of `Path` or `String`. ``` imagelistRGB.items[10] imagelistRGB.open(imagelistRGB.items[10]) imagelistRGB[10] print(imagelistRGB[10]) ``` The reason why `imagelistRGB[10]` print out an image, is because behind the scene we have [`ImageList.get`](/vision.data.html#ImageList.get) calls [`ImageList.open`](/vision.data.html#ImageList.open) which calls [`open_image`](/vision.image.html#open_image) which uses ` PIL.Image.open(fn).convert(convert_mode)` to open an image file (how we print the image), and finally turns it into an Image object with shape (3, 128, 128) Internally, [`ImageList.open`](/vision.data.html#ImageList.open) passes `ImageList.convert_mode` and `ImageList.after_open` to [`open_image`](/vision.image.html#open_image) to adjust the appearance of the Image object. For example, setting `convert_mode` to `L` can make images black and white. ``` imagelistRGB.convert_mode = 'L' imagelistRGB.open(imagelistRGB.items[10]) show_doc(ImageList.show_xys) show_doc(ImageList.show_xyzs) show_doc(ObjectCategoryList, title_level=3) show_doc(ObjectItemList, title_level=3) show_doc(SegmentationItemList, title_level=3) show_doc(SegmentationLabelList, title_level=3) show_doc(PointsLabelList, title_level=3) show_doc(PointsItemList, title_level=3) show_doc(ImageImageList, title_level=3) ``` ## Building your own dataset This module also contains a few helper functions to allow you to build you own dataset for image classification. ``` show_doc(download_images) show_doc(verify_images) ``` It will try if every image in this folder can be opened and has `n_channels`. If `n_channels` is 3 – it'll try to convert image to RGB. If `delete=True`, it'll be removed it this fails. If `resume` – it will skip already existent images in `dest`. If `max_size` is specified, image is resized to the same ratio so that both sizes are less than `max_size`, using `interp`. Result is stored in `dest`, `ext` forces an extension type, `img_format` and `kwargs` are passed to PIL.Image.save. Use `max_workers` CPUs. 
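For instance, a typical usage could look like the following hedged sketch, where `urls.txt` is a hypothetical text file with one image URL per line (adjust names and limits to your own data):

```
# Hypothetical example of building a small dataset from a list of URLs
path = Path('data/my_dataset')
download_images(path/'urls.txt', path/'images', max_pics=200)
verify_images(path/'images', delete=True, max_size=500)
```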
## Undocumented Methods - Methods moved below this line will intentionally be hidden ``` show_doc(PointsItemList.get) show_doc(SegmentationLabelList.new) show_doc(ImageList.from_csv) show_doc(ObjectCategoryList.get) show_doc(ImageList.get) show_doc(SegmentationLabelList.reconstruct) show_doc(ImageImageList.show_xys) show_doc(ImageImageList.show_xyzs) show_doc(ImageList.open) show_doc(PointsItemList.analyze_pred) show_doc(SegmentationLabelList.analyze_pred) show_doc(PointsItemList.reconstruct) show_doc(SegmentationLabelList.open) show_doc(ImageList.reconstruct) show_doc(resize_to) show_doc(ObjectCategoryList.reconstruct) show_doc(PointsLabelList.reconstruct) show_doc(PointsLabelList.analyze_pred) show_doc(PointsLabelList.get) ``` ## New Methods - Please document or move to the undocumented section ``` show_doc(ObjectCategoryList.analyze_pred) ```
true
code
0.695777
null
null
null
null
``` %matplotlib inline ``` ============================= Model Surface Output ============================= Plot an surface map with mean sea level pressure (MSLP), 2m Temperature (F), and Wind Barbs (kt). Imports ``` from datetime import datetime import cartopy.crs as ccrs import cartopy.feature as cfeature import matplotlib.pyplot as plt from metpy.units import units from netCDF4 import num2date import numpy as np import scipy.ndimage as ndimage from siphon.ncss import NCSS ``` Helper functions ``` # Helper function for finding proper time variable def find_time_var(var, time_basename='time'): for coord_name in var.coordinates.split(): if coord_name.startswith(time_basename): return coord_name raise ValueError('No time variable found for ' + var.name) ``` Create NCSS object to access the NetcdfSubset --------------------------------------------- Data from NCEI GFS 0.5 deg Analysis Archive ``` base_url = 'https://www.ncei.noaa.gov/thredds/ncss/grid/gfs-g4-anl-files/' dt = datetime(2018, 1, 4, 12) ncss = NCSS('{}{dt:%Y%m}/{dt:%Y%m%d}/gfsanl_4_{dt:%Y%m%d}' '_{dt:%H}00_000.grb2'.format(base_url, dt=dt)) # Create lat/lon box for location you want to get data for query = ncss.query().time(dt) query.lonlat_box(north=65, south=15, east=310, west=220) query.accept('netcdf') # Request data for model "surface" data query.variables('Pressure_reduced_to_MSL_msl', 'Apparent_temperature_height_above_ground', 'u-component_of_wind_height_above_ground', 'v-component_of_wind_height_above_ground') data = ncss.get_data(query) ``` Begin data maipulation ----------------------- Data for the surface from a model is a bit complicated. The variables come from different levels and may have different data array shapes. MSLP: Pressure_reduced_to_MSL_msl (time, lat, lon) 2m Temp: Apparent_temperature_height_above_ground (time, level, lat, lon) 10m Wind: u/v-component_of_wind_height_above_ground (time, level, lat, lon) Height above ground Temp from GFS has one level (2m) Height above ground Wind from GFS has three levels (10m, 80m, 100m) ``` # Pull out variables you want to use mslp = data.variables['Pressure_reduced_to_MSL_msl'][:].squeeze() temp = units.K * data.variables['Apparent_temperature_height_above_ground'][:].squeeze() u_wind = units('m/s') * data.variables['u-component_of_wind_height_above_ground'][:].squeeze() v_wind = units('m/s') * data.variables['v-component_of_wind_height_above_ground'][:].squeeze() lat = data.variables['lat'][:].squeeze() lon = data.variables['lon'][:].squeeze() time_var = data.variables[find_time_var(data.variables['Pressure_reduced_to_MSL_msl'])] # Convert winds to knots u_wind.ito('kt') v_wind.ito('kt') # Convert number of hours since the reference time into an actual date time = num2date(time_var[:].squeeze(), time_var.units) lev_10m = np.where(data.variables['height_above_ground3'][:] == 10)[0][0] u_wind_10m = u_wind[lev_10m] v_wind_10m = v_wind[lev_10m] # Combine 1D latitude and longitudes into a 2D grid of locations lon_2d, lat_2d = np.meshgrid(lon, lat) # Smooth MSLP a little # Be sure to only put in a 2D lat/lon or Y/X array for smoothing smooth_mslp = ndimage.gaussian_filter(mslp, sigma=3, order=0) * units.Pa smooth_mslp.ito('hPa') ``` Begin map creation ------------------ ``` # Set Projection of Data datacrs = ccrs.PlateCarree() # Set Projection of Plot plotcrs = ccrs.LambertConformal(central_latitude=[30, 60], central_longitude=-100) # Create new figure fig = plt.figure(figsize=(11, 8.5)) # Add the map and set the extent ax = plt.subplot(111, projection=plotcrs) 
plt.title('GFS Analysis MSLP, 2m Temperature (F), Wind Barbs (kt)' ' {0:%d %B %Y %H:%MZ}'.format(time), fontsize=16) ax.set_extent([235., 290., 20., 55.]) # Add state boundaries to plot states_provinces = cfeature.NaturalEarthFeature(category='cultural', name='admin_1_states_provinces_lakes', scale='50m', facecolor='none') ax.add_feature(states_provinces, edgecolor='black', linewidth=1) # Add country borders to plot country_borders = cfeature.NaturalEarthFeature(category='cultural', name='admin_0_countries', scale='50m', facecolor='none') ax.add_feature(country_borders, edgecolor='black', linewidth=1) # Plot MSLP Contours clev_mslp = np.arange(0, 1200, 4) cs = ax.contour(lon_2d, lat_2d, smooth_mslp, clev_mslp, colors='black', linewidths=1.5, linestyles='solid', transform=datacrs) plt.clabel(cs, fontsize=10, inline=1, inline_spacing=10, fmt='%i', rightside_up=True, use_clabeltext=True) # Plot 2m Temperature Contours clevtemp = np.arange(-60, 101, 10) cs2 = ax.contour(lon_2d, lat_2d, temp.to(units('degF')), clevtemp, colors='tab:red', linewidths=1.25, linestyles='dotted', transform=datacrs) plt.clabel(cs2, fontsize=10, inline=1, inline_spacing=10, fmt='%i', rightside_up=True, use_clabeltext=True) # Plot 10m Wind Barbs ax.barbs(lon_2d, lat_2d, u_wind_10m.magnitude, v_wind_10m.magnitude, length=6, regrid_shape=20, pivot='middle', transform=datacrs) plt.show() ```
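As an optional follow-up (not part of the original example), the 10 m wind speed can be derived from the u/v components and shaded on the same projection. This sketch assumes a recent MetPy version in which the function is named `wind_speed` (older releases used `get_wind_speed`):

```
import metpy.calc as mpcalc

# Wind speed magnitude from the 10 m components (still in knots)
wind_speed_10m = mpcalc.wind_speed(u_wind_10m, v_wind_10m)

fig = plt.figure(figsize=(11, 8.5))
ax = plt.subplot(111, projection=plotcrs)
ax.set_extent([235., 290., 20., 55.])
cf = ax.contourf(lon_2d, lat_2d, wind_speed_10m.magnitude, np.arange(0, 55, 5),
                 cmap='BuPu', transform=datacrs)
plt.colorbar(cf, ax=ax, orientation='horizontal', pad=0.05, label='10 m wind speed (kt)')
plt.show()
```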
true
code
0.60092
null
null
null
null
```
import sys
import os
from IPython.display import Image

os.getcwd()
sys.path.append('Your path to Snowmodel') # insert path to Snowmodel

import numpy as np
from model import *

sys.path
```

#### Initialize model geometry

In set_up_model_geometry you can choose several initial geometries via *geom* that are described in the docstring. 'FieldScale0.5m' is a snowpack with an initial height *Z* of 0.5 m and 101 computational nodes *nz*, which means a node distance *dz* of 0.005 m. *coord* contains the exact z-coordinates of all computational nodes.

```
geom = 'FieldScale0.5m'
[nz, dz, Z, coord] = set_up_model_geometry(geom)
```

#### Initialize time step and maximum iteration number

*it* is the maximum number of iterations. The first time step *dt* is set to 0.01 s and the time passed *t_passed* is 0 s. *iter_max* is the maximum iteration number.

```
it = 7000
[iter_max, dt, t_passed] = set_up_iter(it)
```

#### Set initial conditions for temperature and snow density

set_initial_conditions defines the initial conditions for temperature *T* and snow density *rho_eff*. *RHO_ini* and *T_ini* can be replaced by any of the options listed in the docstring of *set_initial_conditions*. *RHO_2Layer_Continuous_smooth* reflects a snowpack with two equally thick snow layers of which the lower one is denser (150 kgm$^{-3}$) and the upper one is lighter (75 kgm$^{-3}$). The ice volume fraction *phi* is then derived from the snow density *rho_eff* with *retrieve_phi_from_rho_eff*.

```
T_ini = 'T_const_263'
RHO_ini = 'RHO_2Layer_Continuous_smooth'
[T, rho_eff] = set_initial_conditions(nz, Z, RHO_ini, T_ini)
phi = retrieve_phi_from_rho_eff (nz, rho_eff)
```

#### Set up matrices to store results for each time step

```
[all_D_eff, all_k_eff, all_FN, all_rhoC_eff, all_rho_v, all_T,all_c, all_phi, all_rho_eff,all_coord, \
 all_v, all_sigma, all_t_passed, all_dz] = set_up_matrices(iter_max, nz)
```

#### Initialize mesh fourier number *FN* and deposition rate *c*

```
FN = np.zeros(nz)
c = np.zeros(nz)
```

#### Initialize model parameters

These are the diffusion coefficient *D_eff*, the thermal conductivity *k_eff*, the heat capacity *rhoC_eff*, the saturation water vapor density *rho_v*, and *rho_v_dT*, the temperature derivative of the saturation water vapor density. *SWVD* stands for saturation water vapor density and selects the equation used for its computation. We choose 'Libbrecht'.

```
SWVD = 'Libbrecht'
[D_eff, k_eff, rhoC_eff, rho_v, rho_v_dT] =\
update_model_parameters(phi, T, nz, coord, SWVD)
```

#### Initialize settling velocity

*v* is the settling velocity, *v_dz* is the z-derivative of the velocity, and *sigma* is the vertical stress at each grid node. *SetVel* can be set to 'Y' or 'N' to include or exclude settling, respectively. *v_opt* selects the option for the velocity computation; the options are described in the docstring. We choose *continuous*. *viscosity* is an option for the viscosity computation. We choose *eta_constant_n1*.

\begin{equation}\label{eq:vvsstrainrate}
\nabla v = \dot{\epsilon},
\end{equation}

\begin{equation}
\dot{\epsilon} = \frac{1}{\eta} \sigma^m,
\end{equation}

\begin{equation}
\partial_z v = \frac{1}{\eta} \left( g \int_z^{H(t)} \phi_i\left(\zeta \right) \rho_i \, d \zeta \right)^{m}.
\end{equation} \begin{equation}\label{equ:velocity} v (z) = - \int_0^z \frac{1}{\eta} \left( g \int_{\tilde z}^{H(t)} \phi_i(\zeta)\rho_i d \zeta \right)^{m} \, d\tilde{z}, \end{equation} ``` SetVel = 'Y' v_opt = 'continuous' viscosity = 'eta_constant_n1' [v, v_dz, sigma] =\ settling_vel(T, nz, coord, phi, SetVel, v_opt, viscosity) Eterms = True import matplotlib.pyplot as plt plt.style.use('seaborn-colorblind') plt.style.use('seaborn-whitegrid') ``` Ice mass balance: \begin{equation} \partial_t \phi_i + \nabla \cdot (\mathbf{v}\,\phi_i) = \frac{c}{\rho_i}, \label{equ:icemassbalance} \end{equation} Water vapor transport: \begin{equation} \label{eq:vapormassbalance} \partial_t \left( \rho_v \, (1- \phi_i) \right) - \nabla \cdot \left( D_{eff} \, \nabla \rho_v \right) + \rho_v \, \nabla \cdot \left(v \,\phi_i \right) = -c, \end{equation} ``` %matplotlib notebook fig,(ax1, ax2, ax3, ax4,ax5) = plt.subplots(5,1, figsize=(10,13)) for t in range(iter_max): # if t_passed > 3600*(24*2) : # e.g. 2 days # to_stop = 5 #print(t) if t%500 == 0: visualize_juypter(fig, ax1, ax2, ax3, ax4,ax5, T, c, phi, rho_v, v, t) [all_D_eff, all_k_eff, all_FN, all_rhoC_eff, all_rho_v, all_T,all_c,all_phi, all_rho_eff, all_coord, all_v, all_sigma, all_t_passed, all_dz] \ = store_results(all_D_eff, all_k_eff, all_FN, all_rhoC_eff, all_rho_v, all_T, all_c,all_phi, all_rho_eff, all_coord, all_v, all_sigma, all_t_passed,all_dz, D_eff, k_eff, FN, phi, rhoC_eff, rho_v, T, c, rho_eff, coord, v, sigma, t, iter_max, nz,dz,t_passed) T_prev = T # Module I solves for temperature - Diffusion (T, a, b) = solve_for_T(T, rho_v_dT, k_eff, D_eff, rhoC_eff, phi, nz, dt, dz, Eterms) # Module II solves for deposition rate - Diffusion c = solve_for_c(T, T_prev, phi, D_eff, rho_v_dT, nz, dt, dz, Eterms) # Module III solves for ice volume fraction and coordinate update - Advection (phi, coord, dz, v_dz, v, sigma) = coupled_update_phi_coord(T, c, dt, nz, phi, v_dz, coord, SetVel, v_opt, viscosity) [D_eff, k_eff, rhoC_eff, rho_v, rho_v_dT] = update_model_parameters(phi, T, nz, coord, SWVD) t_passed = t_total(t_passed, dt) #print(t_passed) ## find iteration number for specific time by placing a breakpoint at line 58: # activate next line if Module I and II are deactivated #dt = 100 # deactivate next line if Module I and/or II are deactivated [dt, FN] = comp_dt(t_passed, dz, a, b, v) def visualize_juypter(fig, ax1, ax2, ax3, ax4,ax5, T, c, phi, rho_v, v, t): ax1.plot(phi, coord, label='phi profile at t = ' + str(t)) ax1.set_xlabel('Liquid fraction [-]') ax1.set_ylabel('Height [m]' ) ax2.plot(T, coord, label='T profile at t = ' + str(t)) ax2.set_xlabel('Temperature [K]') ax2.set_ylabel('Height [m]' ) ax3.plot(c, coord, label='c at t = ' + str(t)) ax3.set_xlabel('Deposition rate') ax3.set_ylabel('Height [m]' ) ax4.plot(rho_v, coord, label ='rho_v t = ' + str(t)) ax4.set_xlabel('Water vapor density') ax4.set_ylabel('Height [m]' ) ax5.plot(v, coord, label='v profile at t = ' + str(t)) ax5.set_xlabel('Vertical velocity [m/s]') ax5.set_ylabel('Height [m]' ) fig.canvas.draw() ```
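To see how the settling-velocity formulation above translates into numbers, here is an illustrative NumPy sketch that discretizes the two integrals with cumulative sums on the node grid. It is not the model's own implementation in *settling_vel*; the constant viscosity, the ice density and the stress exponent *m* = 1 are assumptions made only for this illustration.

```
import numpy as np

g, rho_i = 9.81, 917.0        # gravity [m s^-2] and ice density [kg m^-3] (assumed values)
eta, m_exp = 1e7, 1           # assumed constant viscosity [Pa s] and stress exponent

z = coord                     # node coordinates from the final model state
dz_node = np.gradient(z)

# overburden stress: sigma(z) ~ g * integral_z^H phi_i * rho_i dzeta (cumulative sum from the top)
overburden = g * rho_i * np.cumsum((phi * dz_node)[::-1])[::-1]

# strain rate and settling velocity: v(z) = - integral_0^z sigma^m / eta dz~
strain_rate = (overburden ** m_exp) / eta
v_sketch = -np.cumsum(strain_rate * dz_node)
print(v_sketch[:5])
```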
true
code
0.541773
null
null
null
null
# Visualizing COVID-19 Data at the State and County Levels in Python ## Part I: Downloading and Organizing Data From casual observation, I surmise that the widespread stay-at-home orders initiated in March 2020 have left data scientists with a bit of extra time as, with each passing week, I find new sources for COVID-19 data and data visualizations. I have written before about the [proper](https://www.ndsu.edu/centers/pcpe/news/detail/58432/) and [improper](https://www.aier.org/article/visualizations-are-powerful-but-often-misleading/) uses of data. In this post, my purpose is pedagogical. I intend to teach the reader how to download and organize COVID-19 data and how to honestly and meaningfully visualize this data. First, a confession. I am a self-taught programmer. Like many of us, much of what I write can be described as *spaghetti code*, at least initially. I don't thoroughly plan a program before I write it. I roll up my sleeves and get to coding. This has its benefits. Rarely is the perfect the enemy of the good when I first construct a program. Since I'm not writing my code for commercial use, I am able to efficiently produce results. One benefit of building code on the fly is that you may not know at the start of a project what sorts of qualities will be useful to include. As with all creative processes, _discovery_ is often a process that is dependent upon context that emerges from the process of development. _Spaghetti code_ can also be swiftly repurposed, usually by creation of a new copy of the script and making some marginal adjustments. However, the more spaghetti code you write for a particular project and the more kinds of output demanded by the project, the greater the difficulty of maintaining quality code that is easy to read. At some point, you've got to standardize components of your code. When I find myself returning time and again to a particular template, I eventually consolidate the scripts that I have developed so as to minimize costs of editing code by modularizing the code. Standardizing different blocks of code by creating and implementing functions allows the script to produce a variety of outputs. The script in this example that is in the middle stages of this process of development, revision, and standardization. Much of the information concerning how the program works is included as notes in the code. These notes are preceded by the _#_ symbol. ### Downloading the COVID-19 data We will use two datasets. First, we will import a shapefile to use with *geopandas*, which we will later use to generate a county level map that tracks COVID-19. The shapefile is provide for you in the Github folder housing this post. You can also download shapefiles from the U.S. Census [website](https://www.census.gov/geographies/mapping-files/time-series/geo/carto-boundary-file.html). We will create maps in [Part III](https://github.com/jlcatonjr/Learn-Python-for-Stats-and-Econ/blob/master/Projects/COVID19/Visualizing%20COVID-19%20Data%20at%20the%20State%20and%20County%20Levels%20in%20Python%20-%20Part%20III.ipynb). We will download Johns Hopkins's COVID-19 data from the Associated Press's [account](https://data.world/associatedpress/johns-hopkins-coronavirus-case-tracker) at data.world using their [Python module](https://data.world/integrations/python). Follow [these instructions](https://github.com/datadotworld/data.world-py/) to install the *datadotworld* module and access their API. Once we have installed the _datadotworld_ module, we can get to work. 
First, we will need to import our modules. While not all of these modules will be used in _Part I_ of this series, it will be convenient to import them now so that we can use them later. ``` #createCOVID19StateAndCountyVisualization.py import geopandas import numpy as np import pandas as pd # We won't actually use datetime directly. Since the dataframe index will use # data formatted as datetime64, I import it in case I need to use the datetime # module to troubleshoot later import datetime # you could technically call many of the submodules from matplotlib using mpl., #but for convenience we explicitly import submodules. These will be used for # constructing visualizations import matplotlib as mpl import matplotlib.pyplot as plt from matplotlib.animation import FuncAnimation from mpl_toolkits.axes_grid1 import make_axes_locatable from matplotlib.backends.backend_pdf import PdfPages import matplotlib.ticker as mtick import datadotworld as dw ``` Now we are ready to import the shapefile and download the COVID-19 data. Let's start by creating a function to import the shapefile. ``` def import_geo_data(filename, index_col = "Date", FIPS_name = "FIPS"): # import county level shapefile map_data = geopandas.read_file(filename = filename, index_col = index_col) # rename fips code to match variable name in COVID-19 data map_data.rename(columns={"State":"state"}, inplace = True) # Combine statefips and county fips to create a single fips value # that identifies each particular county without referencing the # state separately map_data[FIPS_name] = map_data["STATEFP"].astype(str) + \ map_data["COUNTYFP"].astype(str) map_data[FIPS_name] = map_data[FIPS_name].astype(np.int64) # set FIPS as index map_data.set_index(FIPS_name, inplace=True) return map_data ``` Next we create a function to download the COVID-19 data using the datadotworld API. ``` def import_covid_data(filename, FIPS_name): # Load COVID19 county data using datadotworld API # Data provided by Johns Hopkins, file provided by Associated Press dataset = dw.load_dataset("associatedpress/johns-hopkins-coronavirus-case-tracker", auto_update = True) # the dataset includes multiple dataframes. We will only use #2 covid_data = dataset.dataframes["2_cases_and_deaths_by_county_timeseries"] # Include only oberservation for political entities within states # i.e., not territories, etc... drop any nan fip values with covid_data[FIPS_name] > 0 covid_data = covid_data[covid_data[FIPS_name] < 57000] covid_data = covid_data[covid_data[FIPS_name] > 0] # Transform FIPS codes into integers (not floats) covid_data[FIPS_name] = covid_data[FIPS_name].astype(int) covid_data.set_index([FIPS_name, "date"], inplace = True) # Prepare a column for state abbreviations. We will draw these from a # dictionary created in the next step. covid_data["state_abr"] = "" for state, abr in state_dict.items(): covid_data.loc[covid_data["state"] == state, "state_abr"] = abr # Create "Location" which concatenates county name and state abbreviation covid_data["Location"] = covid_data["location_name"] + ", " + \ covid_data["state_abr"] return covid_data ``` Finally, create script that will employ the functions created above. ``` # I include this dictionary to convenienlty cross reference state names and # state abbreviations. 
state_dict = {
    'Alabama': 'AL', 'Alaska': 'AK', 'Arizona': 'AZ', 'Arkansas': 'AR',
    'California': 'CA', 'Colorado': 'CO', 'Connecticut': 'CT', 'Delaware': 'DE',
    'District of Columbia': 'DC', 'Florida': 'FL', 'Georgia': 'GA', 'Hawaii': 'HI',
    'Idaho': 'ID', 'Illinois': 'IL', 'Indiana': 'IN', 'Iowa': 'IA', 'Kansas': 'KS',
    'Kentucky': 'KY', 'Louisiana': 'LA', 'Maine': 'ME', 'Maryland': 'MD',
    'Massachusetts': 'MA', 'Michigan': 'MI', 'Minnesota': 'MN', 'Mississippi': 'MS',
    'Missouri': 'MO', 'Montana': 'MT', 'Nebraska': 'NE', 'Nevada': 'NV',
    'New Hampshire': 'NH', 'New Jersey': 'NJ', 'New Mexico': 'NM', 'New York': 'NY',
    'North Carolina': 'NC', 'North Dakota': 'ND', 'Ohio': 'OH', 'Oklahoma': 'OK',
    'Oregon': 'OR', 'Pennsylvania': 'PA', 'Rhode Island': 'RI', 'South Carolina': 'SC',
    'South Dakota': 'SD', 'Tennessee': 'TN', 'Texas': 'TX', 'Utah': 'UT',
    'Vermont': 'VT', 'Virginia': 'VA', 'Washington': 'WA', 'West Virginia': 'WV',
    'Wisconsin': 'WI', 'Wyoming': 'WY'
}
# When we complete our script, we will add an if statement that ensures that we
# only download the data one time. This will prevent us from rudely wasting
# bandwidth from data.world.
# ignore warnings from NA values upon import.
fips_name = "fips_code"
covid_filename = "COVID19DataAP.csv"
# rename_FIPS matches map_data FIPS with COVID19 FIPS name
map_data = import_geo_data(filename = "countiesWithStatesAndPopulation.shp",
                           index_col = "Date", FIPS_name= fips_name)
covid_data = import_covid_data(filename = covid_filename, FIPS_name = fips_name)
```

Call both dataframes in the console to check that everything loaded properly.

```
map_data.iloc[:15]
covid_data.iloc[:15]
```

### Reconstructing the Data

Next we will generate state level data by summing the county level data. This is largely a pedagogical exercise, as we could download state data directly. It is helpful, however, to understand how the .sum() and .groupby() functions work in _pandas_, and the process for summing county level data is straightforward.

```
def create_state_dataframe(covid_data):
    # the keys of state_dict are the names of the states
    states = list(state_dict.keys())
    # D.C. is included in the county level data, so I elect to remove D.C.
    # if you do not remove D.C., it will be called as a Series (i.e., not a DF),
    # and will require an extra step in the script
    states.remove("District of Columbia")
    # We want to sum data within each state by summing the county values for each
    # date
    state_data = covid_data.reset_index().set_index(["date", "state","fips_code"])\
        .groupby(["state", "date"]).sum(numeric_only = True, ignore_index = False)
    # These values will be recalculated since the sum of the county values
    # would need to be weighted to be meaningful
    drop_cols = ['uid', 'location_name', 'cumulative_cases_per_100_000',
                 'cumulative_deaths_per_100_000', 'new_cases_per_100_000',
                 'new_deaths_per_100_000', 'new_cases_7_day_rolling_avg',
                 'new_deaths_7_day_rolling_avg']
    state_data.drop(drop_cols, axis = 1, inplace = True)
    # .sum() concatenated the strings in the dataframe, so we must correct for this
    # by redefining these values
    state_data["location_type"] = "state"
    for state in states:
        state_data.loc[state_data.index.get_level_values("state") == state,
                       "Location"] = state
        state_data.loc[state_data.index.get_level_values("state") == state,
                       "state_abr"] = state_dict[state]

    return state_data
```

At the bottom of the script, after the line where *covid_data* is defined, create *state_data*.
```
state_data = create_state_dataframe(covid_data)
```

Call the result to check that *state_data* was correctly constructed.

```
state_data[:15]
```

Now it is time to merge the COVID-19 data with the data from the U.S. Census shapefile. We created *state_data* before merging the county level data with the shapefile data since the state level dataframe does not need to include the data from the shapefile.

```
def create_covid_geo_dataframe(covid_data, map_data, dates):
    # create geopandas dataframe with multiindex for date
    # original geopandas dataframe had no dates, so copies of the df are
    # stacked vertically, with a new copy for each date in the covid_data index
    # (dates is a global)
    i = 0
    for date in dates:
        # select county observations from each date in dates
        df = covid_data[covid_data.index.get_level_values("date")==date]
        # use the fips_codes from the slice of covid_data to select counties
        # from the map_data index, making sure that the map_data index matches
        # the covid_data index
        counties = df.index.get_level_values("fips_code")
        agg_df = map_data.loc[counties]
        # each row of agg_df will reflect that date
        agg_df["date"] = date
        if i == 0:
            # create the geodataframe, select coordinate system (.crs) to
            # match map_data.crs
            matching_gpd = geopandas.GeoDataFrame(agg_df, crs = map_data.crs)
            i += 1
        else:
            # after initial geodataframe is created, stack a dataframe for
            # each date in dates. Once completed, index of matching_gpd
            # will match index of covid_data
            matching_gpd = matching_gpd.append(agg_df, ignore_index = False)
    # Set matching_gpd index as ["fips_code", "date"], like the covid_data index
    matching_gpd.reset_index(inplace=True)
    matching_gpd.set_index(["fips_code","date"], inplace = True)
    # add each column from covid_data to matching_gpd
    for key, val in covid_data.items():
        matching_gpd[key] = val

    return matching_gpd

# dates will be used to create a geopandas DataFrame with multiindex
dates = sorted(list(set(covid_data.index.get_level_values("date"))))
covid_data = create_covid_geo_dataframe(covid_data, map_data, dates)
```

As before, let's check the result by calling the covid_data that we have redefined to include the shapefile.

```
covid_data.iloc[-15:]
```

The result is that covid_data is now a geodataframe that can be used to generate maps that reflect data at the county level. We will create these maps in Part III. Next we will generate data that normalizes the number of Cases per Million and Deaths per Million. For daily rates of both variables, we will create a 7 day moving average.
```
def create_new_vars(covid_data, moving_average_days):
    # use a for loop that performs the same operations on data for cases and for deaths
    for key in ["cases", "deaths"]:
        # create a version of the key with the first letter capitalized
        cap_key = key.title()
        covid_data[cap_key + " per Million"] = covid_data["cumulative_" + key]\
            .div(covid_data["total_population"]).mul(10 ** 6)
        # generate daily data normalized per million population by taking the daily difference within each
        # entity (covid_data.index.names[0]), dividing this value by population and multiplying that value
        # by 1 million 10 ** 6
        covid_data["Daily " + cap_key + " per Million"] = \
            covid_data["cumulative_" + key ].groupby(covid_data.index.names[0])\
            .diff(1).div(covid_data["total_population"]).mul(10 ** 6)
        # taking the rolling average; choice of number of days is passed as moving_average_days
        covid_data["Daily " + cap_key + " per Million MA"] = covid_data["Daily " + \
            cap_key + " per Million"].rolling(moving_average_days).mean()
```

At the bottom of the script, define the number of days for the rolling moving average. Call *create_new_vars()* to create new variables for *covid_data* and *state_data*.

```
moving_average_days = 7
create_new_vars(covid_data, moving_average_days)
create_new_vars(state_data, moving_average_days)
```

Now check the dataframes for the new variables.

```
covid_data.iloc[-15:]
state_data.iloc[-15:]
```

The new variables have been created successfully. You might notice that the value of Daily Deaths per Million MA is not exactly zero. This is a floating-point technicality: values that should be exactly zero may be stored as arbitrarily small floats.

The last step will be to compare data from each geographic entity by aligning the data in relation to the first day that Cases per Million or Deaths per Million passed a given threshold. This aligned data will be recorded in the *zero_day_dict*.

```
def create_zero_day_dict(covid_data, start_date):
    # Data from each entity will be stored in the dictionary
    zero_day_dict = {}
    # The dictionary will have a total of 4 keys
    # "Cases per Million", "Daily Cases per Million MA",
    # "Deaths per Million", "Daily Deaths per Million MA"
    for key in ["Cases", "Deaths"]:
        zero_day_dict[key + " per Million"] = {}
        zero_day_dict["Daily " + key + " per Million MA"] = {}
    # Each key is associated with a minimal value that identifies day zero
    # For deaths, the value is drawn from "Deaths per Million"
    # For cases, the value is drawn from "Cases per Million"
    day_zero_val = {}
    for key in zero_day_dict:
        day_zero_val[key] = 2 if "Deaths" in key else 10
    # create a list of entities (states or counties)
    entities = sorted(list(set(covid_data.index.get_level_values(0))))
    # for each key, identify the full set of values
    for key in zero_day_dict.keys():
        vals = covid_data[key]
        # select values that will be used to identify day zero
        thresh_vals = covid_data["Deaths per Million"] if "Deaths" in key else \
            covid_data["Cases per Million"]
        # for each entity, select the slice of values greater than the minimum value
        for entity in entities:
            dpc = vals[vals.index.get_level_values(0) == entity][thresh_vals > day_zero_val[key]]
            zero_day_dict[key][entity] = dpc.copy()

    return zero_day_dict, day_zero_val

start_date = "03-15-2020"
end_date = dates[-1]
county_zero_day_dict, day_zero_val = create_zero_day_dict(covid_data, start_date)
state_zero_day_dict, day_zero_val = create_zero_day_dict(state_data, start_date)
```

Check a key from each dictionary to make sure that the data has actually been aligned.
We will call only one element from each dictionary to check that the results are as expected.

```
state_zero_day_dict["Deaths per Million"]["New York"].iloc[-15:]
county_zero_day_dict["Daily Deaths per Million MA"][1001].iloc[-15:]
```

If you call other fips values, you may notice that in the county level dictionary, sometimes a key will point to an empty Series. In that case, the data for that county never passed the threshold value for day zero in the realigned data.

You have completed the last step. All that is left now is to create a feature that does not unnecessarily download and process the data again once all steps have been completed. We will add a variable, *data_processed*, that confirms when the data has been processed. At the beginning of the main program - after creating *state_dict* - we include an if statement that checks whether this variable has been created. If you want to redownload the data, simply delete *data_processed* or restart the kernel.

```
if "data_processed" not in locals():
    fips_name = "fips_code"
    covid_filename = "COVID19DataAP.csv"
    # rename_FIPS matches map_data FIPS with COVID19 FIPS name
    map_data = import_geo_data(filename = "countiesWithStatesAndPopulation.shp",
                               index_col = "Date", FIPS_name= fips_name)
    covid_data = import_covid_data(filename = covid_filename, FIPS_name = fips_name)
    state_data = create_state_dataframe(covid_data)
    # dates will be used to create a geopandas DataFrame with multiindex
    dates = sorted(list(set(covid_data.index.get_level_values("date"))))
    covid_data = create_covid_geo_dataframe(covid_data, map_data, dates)
    moving_average_days = 7
    create_new_vars(covid_data, moving_average_days)
    create_new_vars(state_data, moving_average_days)
    start_date = "03-15-2020"
    end_date = dates[-1]
    county_zero_day_dict, day_zero_val = create_zero_day_dict(covid_data, start_date)
    state_zero_day_dict, day_zero_val = create_zero_day_dict(state_data, start_date)
    # once data is processed, it is saved in the memory
    # the if statement at the top of this block of code instructs the computer
    # not to repeat these operations
    data_processed = True
```

## Part II: Visualizing Aligned Data

Next we will create visualizations of the data in *county_zero_day_dict* and *state_zero_day_dict* like the examples shown at the end of this post. The challenge will be to create a single script that can accommodate both kinds of visualizations. We will make efficient use of generators and trailing if statements to switch between types of visualizations. In the process, you will not only create informative visualizations of COVID-19 data, you will learn how to exercise control over the attributes of your visualizations.

We will need to indicate whether we are plotting daily rates or cumulative totals. We will also need to distinguish between cases and deaths as we visualize both kinds of data in a single figure. This requires preparation. Below I present the main function along with three functions that execute procedures within the main function. These subfunctions are *plot_double_lines()*, *identify_plot_locs()*, and *plot_lines_and_text()*. Including them helps improve the readability of the code, as they isolate the features that distinguish the plots of daily rates from the plots of cumulative totals.
``` def plot_zero_day_data(state_name, state, covid_data, zero_day_dict, day_zero_val, keys, entity_type, entities, pp, n_largest = 10, bold_entities = None, daily = False): # initialize figure that will hold two plots, stacked vertically fig, a = plt.subplots(2,1, figsize = (48, 32)) for key in keys: # if daily is true, plot moving average of daily rates for key, # else plot values identified by key val_key = "Daily " + key + " MA" if daily else key # only plot if there are actually values in day_zero_dict if len(entities) > 0: # i will track color of bolded entities # j will track color of non-bolded entities i = 0 j = 0 # select the upper part - [0] - of the figure to plot "Cases" # and the lower part - [1] - to plot "Deaths" ax = a[0] if "Cases" in key else a[1] # For plotting levels, we will include lines that indicate a doubling of # cases or deaths from day zero # Function also identifies the maximum x and y values to determine size of plot max_x, max_y = plot_double_lines(ax, zero_day_dict, day_zero_val, val_key, entities, daily) # select entities to be visualized. # top_locs are either the n most populous counties or states selected # entities in top_locs will be bolded locs, top_locs = identify_plot_locs(state_name, covid_data, bold_entities, n_largest) # cycle through each entity within the set (states within the U.S. # or counties within states) for entity in entities: vals = zero_day_dict[val_key][entity] # D.C. only has series as value # you might include D.C. by using an if statement that does not # call a subset of entities if there is only one entity if len(vals) > 0 and entity != "District of Columbia": # select only observations that include entity loc = locs[locs.index.get_level_values(entity_type) == entity]["Location"][0] # plot lines and increase the i if entity in top_locs, else increase j i, j = plot_lines_and_text(ax, vals, state, state_dict, loc, top_locs, colors_dict, i, j) # set plot attributes if daily: # provide a bit of *breating room* at the top of the visualization ax.set_ylim(bottom = 0, top = max_y * 1.08) else: # if observing totals, log the y-axis so you can compare rates. 
ax.set_yscale('log') # In some cases, max_y reads as np.nan, so this exception was necessary if max_y is not np.nan: ax.set_ylim(bottom = np.e ** (np.log(day_zero_val[key])), top = np.e ** (np.log(max_y * 4) )) # make sure that axis is labeled with integers, not floats vals = ax.get_yticks() ax.set_yticklabels([int(y) if y >= 1 else round(y,1) for y in vals]) ax.set_ylabel(val_key) # provide space for entity names on the edge of the plot: see plot_lines_and_text() ax.set_xlim(right = max_x + 10) # key (not val_key) provides basis for index values that align data at day zero ax.set_xlabel("Days Since " + key + " Exceeded " + str(day_zero_val[key])) title = str(end_date)[:10] + "\n7 Day Moving Average" + "\nCOVID-19 in " + state_name \ if daily else str(end_date)[:10] + "\nCOVID-19 in " + state_name # title for daily data takes up 3 lines instead of two, so move y_pos higher for daily data y_pos = .987 if daily else .95 fig.suptitle(title , y=y_pos, fontsize = 75) pp.savefig(fig, bbox_inches = "tight") plt.savefig("statePlots/" + state + " " + val_key + ".png", bbox_inches = "tight") plt.show() plt.close() # this function creates lines that indicate a doubling of the day_zero_value every X number of days def plot_double_lines(ax, zero_day_dict, day_zero_val, key, entities, daily): # function also checks if the lines indicating doubling reach the maximum extent of the plot max_x = max([len(zero_day_dict[key][entity]) for entity in entities]) max_y = max([zero_day_dict[key][entity].max() for entity in entities]) # Do not including doubling lines for rates if not daily: double_lines ={} for i in [2,3,5]: double_lines[i] = [day_zero_val[key] * 2 ** (k/i) for k in range(9 * i)] ax.plot(double_lines[i], label = None, alpha = .2, color = "k", linewidth = 5) # labels are placed at the end of each doubling line ax.text(len(double_lines[i]), double_lines[i][len(double_lines[i])-1], "X2 every \n" + str(i) + " days", alpha = .2) # check if doubling lines press outside current max_x, max_y max_x2 = max(len(val) for val in double_lines.values()) max_y2 = max(val[-1] for val in double_lines.values()) max_x = max_x if max_x > max_x2 else max_x2 max_y = max_y if max_y > max_y2 else max_y2 return max_x, max_y # the program must select which entities to plot with bold lines and larger text # for counties, this selection occurs in light of population def identify_plot_locs(state_name, covid_data, bold_entities, n_largest): if state_name not in state_dict.keys(): # select states within the U.S. to bold locs = covid_data top_locs = covid_data[covid_data["state_abr"].isin(bold_entities)] else: # select counties within state to bold locs = covid_data[covid_data["state"] == state_name][["Location", "state_abr", "total_population"]] top_locs = locs[locs.index.get_level_values("date")==locs.index.get_level_values("date")[0]] top_locs = top_locs[top_locs["total_population"] >= top_locs["total_population"]\ .nlargest(n_largest).min()]["Location"] return locs, top_locs def plot_lines_and_text(ax, vals, state, state_dict, loc, top_locs, colors_dict, i, j): # procedure to select color def select_color(loc, top_locs, colors_dict, colors, i, j): val = i if loc in top_locs.values else 7#j # if loc not in colors_dict.keys(): colors_dict[loc] = colors[val % 10] color = colors_dict[loc] if loc in top_locs.values: i += 1 else: j += 1 return color, i, j color, i, j = select_color(loc, top_locs, colors_dict, colors, i, j) # counties are in form: San Bernardino, CA. To select county name from loc, # remove the last 4 characters. 
# state abbreviations are selected for comparison between states label = state_dict[loc] if state not in state_dict.values() else loc[:-4].replace(" ", "\n") # choose different sets of charactersitics for bolded entities vs. entities not emphasized linewidth, ls, fontsize, alpha = (7, "-", 36, 1) if loc in top_locs.values else (3, "--", 24, .5) ax.plot(vals.values, label = label, ls = ls, linewidth = linewidth, alpha = alpha, color = color) # write text at the end of the line ax.text(x = len(vals.values) - 1, y = vals.values[-1], s = label, fontsize = fontsize, color = color, alpha = alpha) return i, j ``` The functions are the backbone of the program that create data visualizations. Be sure to read carefully through the script to understand its structure. Now we will call the main function, plot_zero_day_data(), distinguishing between county level and state level data and also distinguishing between daily rates and cumulative totals. We will use a boolean variable - _daily_ - to identify whether rates or totals should be ploted. In this case, I have chosen to plot states in the Upper Midwest. ``` plt.rcParams['axes.xmargin'] = 0 plt.rcParams.update({'font.size': 32}) keys = ["Cases per Million", "Deaths per Million"] lines= {} colors = ["C" + str(i) for i in range(10)] colors_dict = {} pp = PdfPages("covidDataByState.pdf") n_largest = 10 for daily in [True, False]: if not daily: for state_name, state in state_dict.items(): state_fips = sorted(list(set(covid_data[covid_data["state_abr"] == state].index.get_level_values("fips_code").copy()))) plot_zero_day_data(state_name, state, covid_data, county_zero_day_dict, day_zero_val, keys, "fips_code", state_fips, pp, n_largest, daily = daily) else: plot_zero_day_data("Upper Midwest", "Upper Midwest", state_data, state_zero_day_dict, day_zero_val, keys, "state", state_dict.keys(), pp, bold_entities = ["IA", "MN", "NE", "ND","SD", "WI"], daily = daily) plot_zero_day_data("Southwest", "Southwest", state_data, state_zero_day_dict, day_zero_val, keys, "state", state_dict.keys(), pp, bold_entities = ["AZ", "CA", "CO", "NM", "NV", "TX", "UT"], daily = daily) plot_zero_day_data("Northwest", "Northwest", state_data, state_zero_day_dict, day_zero_val, keys, "state", state_dict.keys(), pp, bold_entities = ["ID", "MT", "OR", "WA", "WY"], daily = daily) plot_zero_day_data("Midwest", "Midwest", state_data, state_zero_day_dict, day_zero_val, keys, "state", state_dict.keys(), pp, bold_entities = ["IL", "IN", "KS", "MI", "MO", "OH","OK"], daily = daily) plot_zero_day_data("South", "South", state_data, state_zero_day_dict, day_zero_val, keys, "state", state_dict.keys(), pp, bold_entities = ["AL","AR", "KY", "LA", "MS", "TN"], daily = daily) plot_zero_day_data("Southeast", "Southeast", state_data, state_zero_day_dict, day_zero_val, keys, "state", state_dict.keys(), pp, bold_entities = ["FL","GA", "MD", "NC", "SC", "VA", "WV"], daily = daily) plot_zero_day_data("Northeast", "Northeast", state_data, state_zero_day_dict, day_zero_val, keys, "state", state_dict.keys(), pp, bold_entities = ["CT", "DE", "MA","ME", "NH", "NJ", "NY", "PA", "RI", "VT"], daily = daily) pp.close() ``` Output will be generated in the following format: <img src="https://github.com/jlcatonjr/Learn-Python-for-Stats-and-Econ/blob/master/Projects/COVID19/statePlots/Upper%20Midwest%20Daily%20Deaths%20per%20Million%20MA.png?raw=true" alt="U.S. 
Covid Rates" title="" />

<img src="https://github.com/jlcatonjr/Learn-Python-for-Stats-and-Econ/blob/master/Projects/COVID19/statePlots/NY%20Deaths%20per%20Million.png?raw=true" alt="New York Covid Levels" title="" />

If you would like to view the full set of output, go to the statePlots folder in the GitHub directory where this post is stored.

### Conclusion

Here we learned to organize COVID-19 data for the purpose of plotting. The result is that we created visualizations that convey two kinds of data - Cases and Deaths - in two different formats - levels and daily rates - for two different levels of analysis - states and counties. We have identified opportunities for improvement in efficiency along the way that may be useful to develop if we continue to repurpose this script. In the next post, we will plot the data using a dynamic map where color reflects the level of Cases per Million and Deaths per Million. These visualizations will help clarify the nature of the spread that became evident in mid-March.
# Map Benign Mutations to 3D Structure

This notebook maps a dataset of 63,197 missense mutations with allele frequencies >=1% and <25% extracted from the ExAC database to 3D structures in the Protein Data Bank. The dataset is described in:

[1] Niroula A, Vihinen M (2019) How good are pathogenicity predictors in detecting benign variants? PLoS Comput Biol 15(2): e1006481. doi: [10.1371/journal.pcbi.1006481](https://doi.org/10.1371/journal.pcbi.1006481)

```
# Disable Numba: temporary workaround for https://github.com/sbl-sdsc/mmtf-pyspark/issues/288
import os
os.environ['NUMBA_DISABLE_JIT'] = "1"

from pyspark.sql import SparkSession
from mmtfPyspark.datasets import dbSnpDataset, pdbjMineDataset
from ipywidgets import interact, IntSlider
import pandas as pd
import py3Dmol
```

#### Initialize Spark

```
spark = SparkSession.builder.appName("BenignMutationsTo3DStructure").getOrCreate()

# Enable Arrow-based columnar data transfers between Spark and Pandas dataframes
spark.conf.set("spark.sql.execution.arrow.enabled", "true")
```

## Read ExAC_AAS dataset [1]

```
df = pd.read_excel('http://structure.bmc.lu.se/VariBench/ExAC_AAS_20171214.xlsx', dtype=str, nrows=63198)
df = df[df.RSID.str.startswith('rs')]        # keep only rows that contain rs ids
df = df[df.RSID.str.contains(';') == False]  # skip rows with a ';' in the RSID column
df['rs_id'] = df.RSID.str[2:].astype('int')  # create integer column of rs ids
df.head()
```

Convert the Pandas dataframe to a Spark dataframe.

```
ds = spark.createDataFrame(df)
```

## Read file with dbSNP info

The following dataset was created from the NCBI dbSNP SNP3D_PDB_GRCH37 dataset by mapping non-synonymous SNPs to human proteins with >= 95% sequence identity in the PDB.

```
dn = dbSnpDataset.get_cached_dataset()
```

## Find the intersection between the two dataframes

```
pd.set_option('display.max_columns', None)  # show all columns
dp = dn.join(ds, dn.snp_id == ds.rs_id).toPandas()
dp = dp.sort_values(['chr', 'pos'])
dp.head()
```

## View mutations grouped by protein chain

Use the slider to view each protein chain.
``` chains = dp.groupby('pdbChainId') def view_grouped_mutations(grouped_df, *args): chainIds = list(grouped_df.groups.keys()) def view3d(show_bio_assembly=False, show_surface=False, show_labels=True, i=0): group = grouped_df.get_group(chainIds[i]) pdb_id, chain_id = chainIds[i].split('.') viewer = py3Dmol.view(query='pdb:' + pdb_id, options={'doAssembly': show_bio_assembly}) # # polymer style viewer.setStyle({'cartoon': {'colorscheme': 'chain', 'width': 0.6, 'opacity':0.9}}) # # non-polymer style viewer.setStyle({'hetflag': True}, {'stick':{'radius': 0.3, 'singleBond': False}}) # highlight chain of interest in blue viewer.setStyle({'chain': chain_id},{'cartoon': {'color': 'blue'}}) rows = group.shape[0] for j in range(0, rows): res_num = str(group.iloc[j]['pdbResNum']) mod_res = {'resi': res_num, 'chain': chain_id} col = 'red' c_col = col + 'Carbon' viewer.addStyle(mod_res, {'stick':{'colorscheme':c_col, 'radius': 0.2}}) viewer.addStyle(mod_res, {'sphere':{'color':col, 'opacity': 0.6}}) if show_labels: label = 'rs' + str(group.iloc[j]['rs_id']) viewer.addLabel(label, {'fontSize':10,'fontColor': 'black','backgroundColor':'ivory'}, {'resi': res_num, 'chain': chain_id}) #print header print("PDB Id: " + pdb_id + " chain Id: " + chain_id) # print any specified additional columns from the dataframe for a in args: print(a + ": " + group.iloc[0][a]) viewer.zoomTo({'chain': chain_id}) if show_surface: viewer.addSurface(py3Dmol.SES,{'opacity':0.8,'color':'lightblue'},{'chain': chain_id}) return viewer.show() s_widget = IntSlider(min=0, max=len(chainIds)-1, description='Structure', continuous_update=False) return interact(view3d, show_bio_assembly=False, show_surface=False, show_labels=True, i=s_widget) view_grouped_mutations(chains, 'uniprotId','Chromosome'); def view_single_mutation(df, distance_cutoff, *args): def view3d(show_bio_assembly=False, show_surface=False, show_labels=True, i=0): pdb_id, chain_id = df.iloc[i]['pdbChainId'].split('.') viewer = py3Dmol.view(query='pdb:' + pdb_id, options={'doAssembly': show_bio_assembly}) # polymer style viewer.setStyle({'cartoon': {'colorscheme': 'chain', 'width': 0.6, 'opacity':0.9}}) # non-polymer style viewer.setStyle({'hetflag': True}, {'stick':{'radius': 0.3, 'singleBond': False}}) # highlight chain of interest in green viewer.setStyle({'chain': chain_id},{'cartoon': {'color': 'blue', 'opacity':0.5}}) # res_num = str(df.iloc[i]['pdbResNum']) label = 'rs' + str(df.iloc[i]['rs_id']) mod_res = {'resi': res_num, 'chain': chain_id} col = 'red' c_col = col + 'Carbon' viewer.addStyle(mod_res, {'stick':{'colorscheme':c_col, 'radius': 0.2}}) viewer.addStyle(mod_res, {'sphere':{'color':col, 'opacity': 0.8}}) if show_labels: viewer.addLabel(label, {'fontSize':12,'fontColor': 'black','backgroundColor':'ivory'}, {'resi': res_num, 'chain': chain_id}) # select neigboring residues by distance surroundings = {'chain': chain_id, 'resi': res_num, 'byres': True, 'expand': distance_cutoff} # residues surrounding mutation positions viewer.addStyle(surroundings,{'stick':{'colorscheme':'orangeCarbon', 'radius': 0.15}}) viewer.zoomTo(surroundings) if show_surface: viewer.addSurface(py3Dmol.SES, {'opacity':0.8,'color':'lightblue'}, {'chain': chain_id}) #print header print("PDB Id:", pdb_id, "chain Id:" , chain_id, "residue:", res_num, "mutation:", label) # print any specified additional columns from the dataframe for a in args: print(a + ": " + str(df.iloc[i][a])) return viewer.show() s_widget = IntSlider(min=0, max=len(df)-1, description='Structure', continuous_update=False) 
return interact(view3d, show_bio_assembly=False, show_surface=False, show_labels=True, i=s_widget) ``` ## View one mutation at a time Use the slider to view each mutation. Interacting residues within the distance_cutoff of 8 A are rendered as orange sticks. ``` distance_cutoff = 8 view_single_mutation(dp, distance_cutoff, 'uniprotId','Chromosome','Position','Reference_allele','Altered_allele','Reference_AA','Altered_AA','clinsig', 'AF_Adj'); spark.stop() ```
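Before stepping through structures one at a time, it can also be useful to see how many mapped mutations each chain carries. The sketch below uses only the pandas dataframe `dp` built above (so it still works after the Spark session is stopped) and the column names shown earlier in this notebook.

```
# Quick summary of the join result: number of mapped mutations per PDB chain
mutation_counts = dp.groupby('pdbChainId').size().sort_values(ascending=False)
print(mutation_counts.head(10))              # chains with the most mapped variants
print("Total chains with mutations:", mutation_counts.shape[0])
```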
## NumPy for Performance ### NumPy constructors We saw previously that NumPy's core type is the `ndarray`, or N-Dimensional Array: ``` import numpy as np np.zeros([3, 4, 2, 5])[2, :, :, 1] ``` The real magic of numpy arrays is that most python operations are applied, quickly, on an elementwise basis: ``` x = np.arange(0, 256, 4).reshape(8, 8) y = np.zeros((8, 8)) %%timeit for i in range(8): for j in range(8): y[i][j] = x[i][j] + 10 x + 10 ``` Numpy's mathematical functions also happen this way, and are said to be "vectorized" functions. ``` np.sqrt(x) ``` Numpy contains many useful functions for creating matrices. In our earlier lectures we've seen `linspace` and `arange` for evenly spaced numbers. ``` np.linspace(0, 10, 21) np.arange(0, 10, 0.5) ``` Here's one for creating matrices like coordinates in a grid: ``` xmin = -1.5 ymin = -1.0 xmax = 0.5 ymax = 1.0 resolution = 300 xstep = (xmax - xmin) / resolution ystep = (ymax - ymin) / resolution ymatrix, xmatrix = np.mgrid[ymin:ymax:ystep, xmin:xmax:xstep] print(ymatrix) ``` We can add these together to make a grid containing the complex numbers we want to test for membership in the Mandelbrot set. ``` values = xmatrix + 1j * ymatrix print(values) ``` ### Arraywise Algorithms We can use this to apply the mandelbrot algorithm to whole *ARRAYS* ``` z0 = values z1 = z0 * z0 + values z2 = z1 * z1 + values z3 = z2 * z2 + values print(z3) ``` So can we just apply our `mandel1` function to the whole matrix? ``` def mandel1(position,limit=50): value = position while abs(value) < 2: limit -= 1 value = value**2 + position if limit < 0: return 0 return limit mandel1(values) ``` No. The *logic* of our current routine would require stopping for some elements and not for others. We can ask numpy to **vectorise** our method for us: ``` mandel2 = np.vectorize(mandel1) data5 = mandel2(values) from matplotlib import pyplot as plt %matplotlib inline plt.imshow(data5, interpolation='none') ``` Is that any faster? ``` %%timeit data5 = mandel2(values) ``` This is not significantly faster. When we use *vectorize* it's just hiding an plain old python for loop under the hood. We want to make the loop over matrix elements take place in the "**C Layer**". What if we just apply the Mandelbrot algorithm without checking for divergence until the end: ``` def mandel_numpy_explode(position, limit=50): value = position while limit > 0: limit -= 1 value = value**2 + position diverging = abs(value) > 2 return abs(value) < 2 data6 = mandel_numpy_explode(values) ``` OK, we need to prevent it from running off to $\infty$ ``` def mandel_numpy(position, limit=50): value = position while limit > 0: limit -= 1 value = value**2 + position diverging = abs(value) > 2 # Avoid overflow value[diverging] = 2 return abs(value) < 2 data6 = mandel_numpy(values) %%timeit data6 = mandel_numpy(values) from matplotlib import pyplot as plt %matplotlib inline plt.imshow(data6, interpolation='none') ``` Wow, that was TEN TIMES faster. There's quite a few NumPy tricks there, let's remind ourselves of how they work: ``` diverging = abs(z3) > 2 z3[diverging] = 2 ``` When we apply a logical condition to a NumPy array, we get a logical array. ``` x = np.arange(10) y = np.ones([10]) * 5 z = x > y x y print(z) ``` Logical arrays can be used to index into arrays: ``` x[x>3] x[np.logical_not(z)] ``` And you can use such an index as the target of an assignment: ``` x[z] = 5 x ``` Note that we didn't compare two arrays to get our logical array, but an array to a scalar integer -- this was broadcasting again. 
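Broadcasting deserves a second look on its own, since the same rule lets arrays of different shapes combine. A minimal illustration with throwaway arrays (not part of the Mandelbrot calculation):

```
a = np.arange(3)                 # shape (3,)
b = np.arange(4).reshape(4, 1)   # shape (4, 1)

print(a + 10)                    # the scalar is broadcast across all three elements
print((a + b).shape)             # (4, 3): the length-1 axis of b is stretched to match a
print(a + b)                     # element [i, j] is b[i, 0] + a[j]
```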
### More Mandelbrot Of course, we didn't calculate the number-of-iterations-to-diverge, just whether the point was in the set. Let's correct our code to do that: ``` def mandel4(position,limit=50): value = position diverged_at_count = np.zeros(position.shape) while limit > 0: limit -= 1 value = value**2 + position diverging = abs(value) > 2 first_diverged_this_time = np.logical_and(diverging, diverged_at_count == 0) diverged_at_count[first_diverged_this_time] = limit value[diverging] = 2 return diverged_at_count data7 = mandel4(values) plt.imshow(data7, interpolation='none') %%timeit data7 = mandel4(values) ``` Note that here, all the looping over mandelbrot steps was in Python, but everything below the loop-over-positions happened in C. The code was amazingly quick compared to pure Python. Can we do better by avoiding a square root? ``` def mandel5(position, limit=50): value = position diverged_at_count = np.zeros(position.shape) while limit > 0: limit -= 1 value = value**2 + position diverging = value * np.conj(value) > 4 first_diverged_this_time = np.logical_and(diverging, diverged_at_count == 0) diverged_at_count[first_diverged_this_time] = limit value[diverging] = 2 return diverged_at_count %%timeit data8 = mandel5(values) ``` Probably not worth the time I spent thinking about it! ### NumPy Testing Now, let's look at calculating those residuals, the differences between the different datasets. ``` data8 = mandel5(values) data5 = mandel2(values) np.sum((data8 - data5)**2) ``` For our non-numpy datasets, numpy knows to turn them into arrays: ``` xmin = -1.5 ymin = -1.0 xmax = 0.5 ymax = 1.0 resolution = 300 xstep = (xmax-xmin)/resolution ystep = (ymax-ymin)/resolution xs = [(xmin + (xmax - xmin) * i / resolution) for i in range(resolution)] ys = [(ymin + (ymax - ymin) * i / resolution) for i in range(resolution)] data1 = [[mandel1(complex(x, y)) for x in xs] for y in ys] sum(sum((data1 - data7)**2)) ``` But this doesn't work for pure non-numpy arrays ``` data2 = [] for y in ys: row = [] for x in xs: row.append(mandel1(complex(x, y))) data2.append(row) data2 - data1 ``` So we have to convert to NumPy arrays explicitly: ``` sum(sum((np.array(data2) - np.array(data1))**2)) ``` NumPy provides some convenient assertions to help us write unit tests with NumPy arrays: ``` x = [1e-5, 1e-3, 1e-1] y = np.arccos(np.cos(x)) y np.testing.assert_allclose(x, y, rtol=1e-6, atol=1e-20) np.testing.assert_allclose(data7, data1) ``` ### Arraywise operations are fast Note that we might worry that we carry on calculating the mandelbrot values for points that have already diverged. ``` def mandel6(position, limit=50): value = np.zeros(position.shape) + position calculating = np.ones(position.shape, dtype='bool') diverged_at_count = np.zeros(position.shape) while limit > 0: limit -= 1 value[calculating] = value[calculating]**2 + position[calculating] diverging_now = np.zeros(position.shape, dtype='bool') diverging_now[calculating] = value[calculating] * \ np.conj(value[calculating])>4 calculating = np.logical_and(calculating, np.logical_not(diverging_now)) diverged_at_count[diverging_now] = limit return diverged_at_count data8 = mandel6(values) %%timeit data8 = mandel6(values) plt.imshow(data8, interpolation='none') ``` This was **not faster** even though it was **doing less work** This often happens: on modern computers, **branches** (if statements, function calls) and **memory access** is usually the rate-determining step, not maths. 
Complicating your logic to avoid calculations sometimes therefore slows you down. The only way to know is to **measure** ### Indexing with arrays We've been using Boolean arrays a lot to get access to some elements of an array. We can also do this with integers: ``` x = np.arange(64) y = x.reshape([8,8]) y y[[2, 5]] y[[0, 2, 5], [1, 2, 7]] ``` We can use a : to indicate we want all the values from a particular axis: ``` y[0:4:2, [0, 2]] ``` We can mix array selectors, boolean selectors, :s and ordinary array seqeuencers: ``` z = x.reshape([4, 4, 4]) z z[:, [1, 3], 0:3] ``` We can manipulate shapes by adding new indices in selectors with np.newaxis: ``` z[:, np.newaxis, [1, 3], 0].shape ``` When we use basic indexing with integers and : expressions, we get a **view** on the matrix so a copy is avoided: ``` a = z[:, :, 2] a[0, 0] = -500 z ``` We can also use ... to specify ": for as many as possible intervening axes": ``` z[1] z[...,2] ``` However, boolean mask indexing and array filter indexing always causes a copy. Let's try again at avoiding doing unnecessary work by using new arrays containing the reduced data instead of a mask: ``` def mandel7(position, limit=50): positions = np.zeros(position.shape) + position value = np.zeros(position.shape) + position indices = np.mgrid[0:values.shape[0], 0:values.shape[1]] diverged_at_count = np.zeros(position.shape) while limit > 0: limit -= 1 value = value**2 + positions diverging_now = value * np.conj(value) > 4 diverging_now_indices = indices[:, diverging_now] carry_on = np.logical_not(diverging_now) value = value[carry_on] indices = indices[:, carry_on] positions = positions[carry_on] diverged_at_count[diverging_now_indices[0,:], diverging_now_indices[1,:]] = limit return diverged_at_count data9 = mandel7(values) plt.imshow(data9, interpolation='none') %%timeit data9 = mandel7(values) ``` Still slower. Probably due to lots of copies -- the point here is that you need to *experiment* to see which optimisations will work. Performance programming needs to be empirical. ## Profiling We've seen how to compare different functions by the time they take to run. However, we haven't obtained much information about where the code is spending more time. For that we need to use a profiler. IPython offers a profiler through the `%prun` magic. Let's use it to see how it works: ``` %prun mandel7(values) ``` `%prun` shows a line per each function call ordered by the total time spent on each of these. However, sometimes a line-by-line output may be more helpful. For that we can use the `line_profiler` package (you need to install it using `pip`). Once installed you can activate it in any notebook by running: ``` %load_ext line_profiler ``` And the `%lprun` magic should be now available: ``` %lprun -f mandel7 mandel7(values) ``` Here, it is clearer to see which operations are keeping the code busy.
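If you are working outside IPython, the standard-library `cProfile` and `pstats` modules give similar per-function output. A minimal sketch, assuming `mandel7` and `values` are defined as above:

```
import cProfile
import pstats

profiler = cProfile.Profile()
profiler.enable()
mandel7(values)
profiler.disable()

# Sort by cumulative time and show the ten most expensive calls
stats = pstats.Stats(profiler)
stats.sort_stats("cumulative").print_stats(10)
```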
## UCI Adult Data Set ### Dataset URL: https://archive.ics.uci.edu/ml/datasets/adult Predict whether income exceeds $50K/yr based on census data. Also known as "Census Income" dataset. ``` import shutil import math from datetime import datetime import multiprocessing import pandas as pd import numpy as np import tensorflow as tf from tensorflow import data from tensorflow.python.feature_column import feature_column print(tf.__version__) MODEL_NAME = 'cenus-model-01' TRAIN_DATA_FILES_PATTERN = 'data/adult.data.csv' TEST_DATA_FILES_PATTERN = 'data/adult.test.csv' RESUME_TRAINING = False PROCESS_FEATURES = True EXTEND_FEATURE_COLUMNS = True MULTI_THREADING = True ``` ## Define Dataset Metadata ``` HEADER = ['age', 'workclass', 'fnlwgt', 'education', 'education_num', 'marital_status', 'occupation', 'relationship', 'race', 'gender', 'capital_gain', 'capital_loss', 'hours_per_week', 'native_country', 'income_bracket'] HEADER_DEFAULTS = [[0], [''], [0], [''], [0], [''], [''], [''], [''], [''], [0], [0], [0], [''], ['']] NUMERIC_FEATURE_NAMES = ['age', 'education_num', 'capital_gain', 'capital_loss', 'hours_per_week'] CATEGORICAL_FEATURE_NAMES_WITH_VOCABULARY = { 'gender': ['Female', 'Male'], 'race': ['Amer-Indian-Eskimo', 'Asian-Pac-Islander', 'Black', 'Other', 'White'], 'education': ['Bachelors', 'HS-grad', '11th', 'Masters', '9th', 'Some-college', 'Assoc-acdm', 'Assoc-voc', '7th-8th', 'Doctorate', 'Prof-school', '5th-6th', '10th', '1st-4th', 'Preschool', '12th'], 'marital_status': ['Married-civ-spouse', 'Divorced', 'Married-spouse-absent', 'Never-married', 'Separated', 'Married-AF-spouse', 'Widowed'], 'relationship': ['Husband', 'Not-in-family', 'Wife', 'Own-child', 'Unmarried', 'Other-relative'], 'workclass': ['Self-emp-not-inc', 'Private', 'State-gov', 'Federal-gov', 'Local-gov', '?', 'Self-emp-inc', 'Without-pay', 'Never-worked'] } CATEGORICAL_FEATURE_NAMES_WITH_BUCKET_SIZE = { 'occupation': 50, 'native_country' : 100 } CATEGORICAL_FEATURE_NAMES = list(CATEGORICAL_FEATURE_NAMES_WITH_VOCABULARY.keys()) + list(CATEGORICAL_FEATURE_NAMES_WITH_BUCKET_SIZE.keys()) FEATURE_NAMES = NUMERIC_FEATURE_NAMES + CATEGORICAL_FEATURE_NAMES TARGET_NAME = 'income_bracket' TARGET_LABELS = ['<=50K', '>50K'] WEIGHT_COLUMN_NAME = 'fnlwgt' UNUSED_FEATURE_NAMES = list(set(HEADER) - set(FEATURE_NAMES) - {TARGET_NAME} - {WEIGHT_COLUMN_NAME}) print("Header: {}".format(HEADER)) print("Numeric Features: {}".format(NUMERIC_FEATURE_NAMES)) print("Categorical Features: {}".format(CATEGORICAL_FEATURE_NAMES)) print("Target: {} - labels: {}".format(TARGET_NAME, TARGET_LABELS)) print("Unused Features: {}".format(UNUSED_FEATURE_NAMES)) ``` ## Load and Analyse Dataset ``` TRAIN_DATA_SIZE = 32561 TEST_DATA_SIZE = 16278 train_data = pd.read_csv(TRAIN_DATA_FILES_PATTERN, header=None, names=HEADER ) train_data.head(10) train_data.describe() ``` ### Compute Scaling Statistics for Numeric Columns ``` means = train_data[NUMERIC_FEATURE_NAMES].mean(axis=0) stdvs = train_data[NUMERIC_FEATURE_NAMES].std(axis=0) maxs = train_data[NUMERIC_FEATURE_NAMES].max(axis=0) mins = train_data[NUMERIC_FEATURE_NAMES].min(axis=0) df_stats = pd.DataFrame({"mean":means, "stdv":stdvs, "max":maxs, "min":mins}) df_stats.head(15) ``` ### Save Scaling Statistics ``` df_stats.to_csv(path_or_buf="data/adult.stats.csv", header=True, index=True) ``` ## Define Data Input Function ### a. 
Parsing and preprocessing logic ``` def parse_csv_row(csv_row): columns = tf.decode_csv(csv_row, record_defaults=HEADER_DEFAULTS) features = dict(zip(HEADER, columns)) for column in UNUSED_FEATURE_NAMES: features.pop(column) target = features.pop(TARGET_NAME) return features, target def process_features(features): capital_indicator = features['capital_gain'] > features['capital_loss'] features['capital_indicator'] = tf.cast(capital_indicator, dtype=tf.int32) return features ``` ### b. Data pipeline input function ``` def csv_input_fn(files_name_pattern, mode=tf.estimator.ModeKeys.EVAL, skip_header_lines=0, num_epochs=None, batch_size=200): shuffle = True if mode == tf.estimator.ModeKeys.TRAIN else False num_threads = multiprocessing.cpu_count() if MULTI_THREADING else 1 print("") print("* data input_fn:") print("================") print("Input file(s): {}".format(files_name_pattern)) print("Batch size: {}".format(batch_size)) print("Epoch Count: {}".format(num_epochs)) print("Mode: {}".format(mode)) print("Thread Count: {}".format(num_threads)) print("Shuffle: {}".format(shuffle)) print("================") print("") file_names = tf.matching_files(files_name_pattern) dataset = data.TextLineDataset(filenames=file_names) dataset = dataset.skip(skip_header_lines) if shuffle: dataset = dataset.shuffle(buffer_size=2 * batch_size + 1) dataset = dataset.batch(batch_size) dataset = dataset.map(lambda csv_row: parse_csv_row(csv_row), num_parallel_calls=num_threads) if PROCESS_FEATURES: dataset = dataset.map(lambda features, target: (process_features(features), target), num_parallel_calls=num_threads) dataset = dataset.repeat(num_epochs) iterator = dataset.make_one_shot_iterator() features, target = iterator.get_next() return features, target features, target = csv_input_fn(files_name_pattern="") print("Features in CSV: {}".format(list(features.keys()))) print("Target in CSV: {}".format(target)) ``` ## Define Feature Columns ### a. Load scaling params ``` df_stats = pd.read_csv("data/adult.stats.csv", header=0, index_col=0) df_stats['feature_name'] = NUMERIC_FEATURE_NAMES df_stats.head(10) ``` ### b. 
Create feature columns

```
def extend_feature_columns(feature_columns, hparams):

    age_buckets = tf.feature_column.bucketized_column(
        feature_columns['age'], boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])

    education_X_occupation = tf.feature_column.crossed_column(
        ['education', 'occupation'], hash_bucket_size=int(1e4))

    age_buckets_X_race = tf.feature_column.crossed_column(
        [age_buckets, feature_columns['race']], hash_bucket_size=int(1e4))

    native_country_X_occupation = tf.feature_column.crossed_column(
        ['native_country', 'occupation'], hash_bucket_size=int(1e4))

    native_country_embedded = tf.feature_column.embedding_column(
        feature_columns['native_country'], dimension=hparams.embedding_size)

    occupation_embedded = tf.feature_column.embedding_column(
        feature_columns['occupation'], dimension=hparams.embedding_size)

    education_X_occupation_embedded = tf.feature_column.embedding_column(
        education_X_occupation, dimension=hparams.embedding_size)

    native_country_X_occupation_embedded = tf.feature_column.embedding_column(
        native_country_X_occupation, dimension=hparams.embedding_size)

    feature_columns['age_buckets'] = age_buckets
    feature_columns['education_X_occupation'] = education_X_occupation
    feature_columns['age_buckets_X_race'] = age_buckets_X_race
    feature_columns['native_country_X_occupation'] = native_country_X_occupation
    feature_columns['native_country_embedded'] = native_country_embedded
    feature_columns['occupation_embedded'] = occupation_embedded
    feature_columns['education_X_occupation_embedded'] = education_X_occupation_embedded
    feature_columns['native_country_X_occupation_embedded'] = native_country_X_occupation_embedded

    return feature_columns


def standard_scaler(x, mean, stdv):
    return (x-mean)/(stdv)


def maxmin_scaler(x, max_value, min_value):
    return (x-min_value)/(max_value-min_value)


def get_feature_columns(hparams):

    numeric_columns = {}

    for feature_name in NUMERIC_FEATURE_NAMES:
        feature_mean = df_stats[df_stats.feature_name == feature_name]['mean'].values[0]
        feature_stdv = df_stats[df_stats.feature_name == feature_name]['stdv'].values[0]
        # Bind the current statistics as default arguments: a bare closure here would
        # evaluate feature_mean and feature_stdv lazily and end up normalizing every
        # column with the statistics of the last feature in the loop.
        normalizer_fn = lambda x, mean=feature_mean, stdv=feature_stdv: standard_scaler(x, mean, stdv)

        numeric_columns[feature_name] = tf.feature_column.numeric_column(feature_name,
                                                                         normalizer_fn=normalizer_fn)

    CONSTRUCTED_NUMERIC_FEATURES_NAMES = []

    if PROCESS_FEATURES:
        for feature_name in CONSTRUCTED_NUMERIC_FEATURES_NAMES:
            numeric_columns[feature_name] = tf.feature_column.numeric_column(feature_name)

    categorical_column_with_vocabulary = \
        {item[0]: tf.feature_column.categorical_column_with_vocabulary_list(item[0], item[1])
         for item in CATEGORICAL_FEATURE_NAMES_WITH_VOCABULARY.items()}

    CONSTRUCTED_INDICATOR_FEATURES_NAMES = ['capital_indicator']

    categorical_column_with_identity = {}
    for feature_name in CONSTRUCTED_INDICATOR_FEATURES_NAMES:
        categorical_column_with_identity[feature_name] = tf.feature_column.categorical_column_with_identity(feature_name,
                                                                                                            num_buckets=2,
                                                                                                            default_value=0)

    categorical_column_with_hash_bucket = \
        {item[0]: tf.feature_column.categorical_column_with_hash_bucket(item[0], item[1], dtype=tf.string)
         for item in CATEGORICAL_FEATURE_NAMES_WITH_BUCKET_SIZE.items()}

    feature_columns = {}

    if numeric_columns is not None:
        feature_columns.update(numeric_columns)

    if categorical_column_with_vocabulary is not None:
        feature_columns.update(categorical_column_with_vocabulary)

    if categorical_column_with_identity is not None:
        feature_columns.update(categorical_column_with_identity)

    if categorical_column_with_hash_bucket is not None:
feature_columns.update(categorical_column_with_hash_bucket) if EXTEND_FEATURE_COLUMNS: feature_columns = extend_feature_columns(feature_columns, hparams) return feature_columns feature_columns = get_feature_columns(tf.contrib.training.HParams(num_buckets=5,embedding_size=3)) print("Feature Columns: {}".format(feature_columns)) ``` ## Define a DNN Estimator Creation Function ### a. Get wide and deep feature columns ``` def get_wide_deep_columns(): feature_columns = list(get_feature_columns(hparams).values()) dense_columns = list( filter(lambda column: isinstance(column, feature_column._NumericColumn) | isinstance(column, feature_column._EmbeddingColumn), feature_columns ) ) categorical_columns = list( filter(lambda column: isinstance(column, feature_column._VocabularyListCategoricalColumn) | isinstance(column, feature_column._IdentityCategoricalColumn) | isinstance(column, feature_column._BucketizedColumn), feature_columns) ) sparse_columns = list( filter(lambda column: isinstance(column,feature_column._HashedCategoricalColumn) | isinstance(column, feature_column._CrossedColumn), feature_columns) ) indicator_columns = list( map(lambda column: tf.feature_column.indicator_column(column), categorical_columns) ) deep_feature_columns = dense_columns + indicator_columns wide_feature_columns = categorical_columns + sparse_columns return wide_feature_columns, deep_feature_columns ``` ### b. Define the estimator ``` def create_DNNComb_estimator(run_config, hparams, print_desc=False): wide_feature_columns, deep_feature_columns = get_wide_deep_columns() estimator = tf.estimator.DNNLinearCombinedClassifier( n_classes=len(TARGET_LABELS), label_vocabulary=TARGET_LABELS, dnn_feature_columns = deep_feature_columns, linear_feature_columns = wide_feature_columns, weight_column=WEIGHT_COLUMN_NAME, dnn_hidden_units= hparams.hidden_units, dnn_optimizer= tf.train.AdamOptimizer(), dnn_activation_fn= tf.nn.relu, config= run_config ) if print_desc: print("") print("*Estimator Type:") print("================") print(type(estimator)) print("") print("*deep columns:") print("==============") print(deep_feature_columns) print("") print("wide columns:") print("=============") print(wide_feature_columns) print("") return estimator ``` ## 6. Run Experiment ### a. Set HParam and RunConfig ``` TRAIN_SIZE = TRAIN_DATA_SIZE NUM_EPOCHS = 100 BATCH_SIZE = 500 EVAL_AFTER_SEC = 60 TOTAL_STEPS = (TRAIN_SIZE/BATCH_SIZE)*NUM_EPOCHS hparams = tf.contrib.training.HParams( num_epochs = NUM_EPOCHS, batch_size = BATCH_SIZE, embedding_size = 4, hidden_units= [64, 32, 16], max_steps = TOTAL_STEPS ) model_dir = 'trained_models/{}'.format(MODEL_NAME) run_config = tf.estimator.RunConfig( log_step_count_steps=5000, tf_random_seed=19830610, model_dir=model_dir ) print(hparams) print("Model Directory:", run_config.model_dir) print("") print("Dataset Size:", TRAIN_SIZE) print("Batch Size:", BATCH_SIZE) print("Steps per Epoch:",TRAIN_SIZE/BATCH_SIZE) print("Total Steps:", TOTAL_STEPS) print("That is 1 evaluation step after each",EVAL_AFTER_SEC," training seconds") ``` ### b. 
Define TrainSpec and EvaluSpec ``` train_spec = tf.estimator.TrainSpec( input_fn = lambda: csv_input_fn( TRAIN_DATA_FILES_PATTERN, mode = tf.estimator.ModeKeys.TRAIN, num_epochs=hparams.num_epochs, batch_size=hparams.batch_size ), max_steps=hparams.max_steps, hooks=None ) eval_spec = tf.estimator.EvalSpec( input_fn = lambda: csv_input_fn( TRAIN_DATA_FILES_PATTERN, mode=tf.estimator.ModeKeys.EVAL, num_epochs=1, batch_size=hparams.batch_size, ), throttle_secs = EVAL_AFTER_SEC, steps=None ) ``` ### c. Run Experiment via train_and_evaluate ``` if not RESUME_TRAINING: print("Removing previous artifacts...") shutil.rmtree(model_dir, ignore_errors=True) else: print("Resuming training...") tf.logging.set_verbosity(tf.logging.INFO) time_start = datetime.utcnow() print("Experiment started at {}".format(time_start.strftime("%H:%M:%S"))) print(".......................................") estimator = create_DNNComb_estimator(run_config, hparams, True) tf.estimator.train_and_evaluate( estimator=estimator, train_spec=train_spec, eval_spec=eval_spec ) time_end = datetime.utcnow() print(".......................................") print("Experiment finished at {}".format(time_end.strftime("%H:%M:%S"))) print("") time_elapsed = time_end - time_start print("Experiment elapsed time: {} seconds".format(time_elapsed.total_seconds())) ``` ## Evaluate the Model ``` TRAIN_SIZE = TRAIN_DATA_SIZE TEST_SIZE = TEST_DATA_SIZE train_input_fn = lambda: csv_input_fn(files_name_pattern= TRAIN_DATA_FILES_PATTERN, mode= tf.estimator.ModeKeys.EVAL, batch_size= TRAIN_SIZE) test_input_fn = lambda: csv_input_fn(files_name_pattern= TEST_DATA_FILES_PATTERN, mode= tf.estimator.ModeKeys.EVAL, batch_size= TEST_SIZE) estimator = create_DNNComb_estimator(run_config, hparams) train_results = estimator.evaluate(input_fn=train_input_fn, steps=1) print() print("######################################################################################") print("# Train Measures: {}".format(train_results)) print("######################################################################################") test_results = estimator.evaluate(input_fn=test_input_fn, steps=1) print() print("######################################################################################") print("# Test Measures: {}".format(test_results)) print("######################################################################################") ``` ## Prediction ``` import itertools predict_input_fn = lambda: csv_input_fn(TEST_DATA_FILES_PATTERN, mode= tf.estimator.ModeKeys.PREDICT, batch_size= 10) predictions = list(itertools.islice(estimator.predict(input_fn=predict_input_fn),10)) print("") print("* Predicted Classes: {}".format(list(map(lambda item: item["class_ids"][0] ,predictions)))) print("* Predicted Probabilities: {}".format(list(map(lambda item: list(item["probabilities"]) ,predictions)))) ```
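The `class_ids` printed above are indices into `TARGET_LABELS`, so a small post-processing step turns the raw predictions into readable income brackets. A sketch reusing the `predictions` list from the previous cell:

```
# Map predicted class indices back to the label strings in TARGET_LABELS
predicted_labels = [TARGET_LABELS[item["class_ids"][0]] for item in predictions]
predicted_confidence = [max(item["probabilities"]) for item in predictions]

for label, confidence in zip(predicted_labels, predicted_confidence):
    print("{} (p={:.3f})".format(label, confidence))
```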
``` import pandas as pd import matplotlib.pyplot as plt from matplotlib.colors import ListedColormap from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score import numpy as np from scipy.spatial import distance from sklearn.tree import DecisionTreeClassifier d_tree = DecisionTreeClassifier() from sklearn.naive_bayes import GaussianNB gnb = GaussianNB() plt.rcParams["figure.figsize"] = (12,10) plt.rcParams.update({'font.size': 12}) ``` # **K-Nearest Neighbors Algorithm** **K-nearest neighbors is a non-parametric algorithm that can be utilized for classification or regression. For classification, the algorithm predicts the classification for a given input by finding the closest point or points to that input.** **The algorithm requires a distance calculation to measure between points and the assumption that points that are close together are similar.** <sup>Reference: [Machine Learning A Probabilistic Perspective by Kevin Murphy](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwiossWtlvXyAhVvhOAKHaHYDNUQFnoECAQQAQ&url=http%3A%2F%2Fnoiselab.ucsd.edu%2FECE228%2FMurphy_Machine_Learning.pdf&usg=AOvVaw0ivnxQoBAr1Kn4BwTBbNxe)</sup> <sup>Reference: [Data Science from Scratch First Principles with Python by Joel Grus](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwjO1s20-f7yAhW3GFkFHZwsBcEQFnoECAIQAQ&url=http%3A%2F%2Fmath.ecnu.edu.cn%2F~lfzhou%2Fseminar%2F%5BJoel_Grus%5D_Data_Science_from_Scratch_First_Princ.pdf&usg=AOvVaw3bJ0pcZM201kEXZjeTiLrr)</sup> ``` ``` ## **KNN from scratch** ``` ``` ### **Euclidean Distance between 2 Points** $d = \sqrt{(q_1 - p_1)^2 + (q_2 - p_2)^2}$ ### **Small Data Set Example** ``` ``` ### **Coding the KNN algorithm as a function** ``` ``` ### **Asteroid Data** ``` ``` #### **Measuring how well KNN predicts values** $\text{Accuracy Score =} \frac{\text{True Positive }+\text{ True Negative}}{\text{Total Number of Observed Values}}$ ``` ``` ## **KNN using Scikit-Learn** ``` X = df_features.to_numpy() y = df_target.to_numpy() cmap_light = ListedColormap(['orange', 'cyan']) cmap_bold = ListedColormap(['#FF0000', '#00FF00']) h = .02 x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) classifier.fit(X,y) Z = classifier.predict(np.c_[xx.ravel(), yy.ravel()]) Z = Z.reshape(xx.shape) plt.figure() plt.pcolormesh(xx, yy, Z, cmap=cmap_light) plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold) plt.xlim(xx.min(), xx.max()) plt.ylim(yy.min(), yy.max()) plt.title(f'KNN Classifier - Neighbors = {knn_neighbors}') plt.show() ``` <sup>Source: [Nearest Neighbors Classification from scikit-learn](https://scikit-learn.org/stable/auto_examples/neighbors/plot_classification.html#sphx-glr-auto-examples-neighbors-plot-classification-py)</sup> # **Issues with KNN** **There are two significant drawbacks to the KNN algorithm.** **1. The algorithm suffers from the "curse of dimensionality", meaning that the algorithm cannot handle data sets with a large number of features well.** **2. The KNN algorithm is slow at classifying variables relative to other algorithms.** ## **Curse of dimensionality** ### **Euclidean Distance for higher dimensions** $d = \sqrt{(q_1 - p_1)^2 + (q_2 - p_2)^2 + ... + (q_i - p_i)^2 + ... 
+ (q_n - p_n)^2}$ ``` ``` <sup>Reference: [Data Science from Scratch First Principles with Python by Joel Grus](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwjO1s20-f7yAhW3GFkFHZwsBcEQFnoECAIQAQ&url=http%3A%2F%2Fmath.ecnu.edu.cn%2F~lfzhou%2Fseminar%2F%5BJoel_Grus%5D_Data_Science_from_Scratch_First_Princ.pdf&usg=AOvVaw3bJ0pcZM201kEXZjeTiLrr)</sup> ## **Comparing Time to Run** ``` ``` # **References and Additional Learning** ## **Data Sets** - **[NASA JPL Asteroid Data set from Kaggle](https://www.kaggle.com/sakhawat18/asteroid-dataset)** - **[Microsoft Malware Data Set from Kaggle](https://www.kaggle.com/c/microsoft-malware-prediction)** ## **Textbooks** - **[Machine Learning A Probabilistic Perspective by Kevin Murphy](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwiossWtlvXyAhVvhOAKHaHYDNUQFnoECAQQAQ&url=http%3A%2F%2Fnoiselab.ucsd.edu%2FECE228%2FMurphy_Machine_Learning.pdf&usg=AOvVaw0ivnxQoBAr1Kn4BwTBbNxe)** - **[Understanding Machine Learning: From Theory to Algorithms](https://www.cs.huji.ac.il/~shais/UnderstandingMachineLearning/understanding-machine-learning-theory-algorithms.pdf)** - **[Data Science from Scratch First Principles with Python by Joel Grus](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwjO1s20-f7yAhW3GFkFHZwsBcEQFnoECAIQAQ&url=http%3A%2F%2Fmath.ecnu.edu.cn%2F~lfzhou%2Fseminar%2F%5BJoel_Grus%5D_Data_Science_from_Scratch_First_Princ.pdf&usg=AOvVaw3bJ0pcZM201kEXZjeTiLrr)** ## **Videos** - **[StatQuest: K-nearest neighbors, Clearly Explained by Josh Starmer](https://www.youtube.com/watch?v=HVXime0nQeI&t=219s)** - **[K-Nearest Neighbor by ritvikmath](https://www.youtube.com/watch?v=UR2ag4lbBtc&t=196s)**
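The "Comparing Time to Run" cell above is left blank; the sketch below shows one way it could be filled in. Because the asteroid feature and target frames are built in cells that are not shown here, it times predictions on a synthetic data set from `make_classification` instead.

```
import time

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the asteroid data, just to compare prediction speed.
X, y = make_classification(n_samples=20000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, model in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                    ("Decision tree", DecisionTreeClassifier(random_state=0))]:
    model.fit(X_train, y_train)
    start = time.perf_counter()
    model.predict(X_test)
    print(f"{name}: prediction time = {time.perf_counter() - start:.4f} s")
```

Because KNN defers all work to prediction time (it must search for neighbors of every query point), its prediction step is typically much slower than the tree's, which illustrates drawback 2 above.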
# Chernoff Faces, Deep Learning In this notebook, we use convolutional neural networks (CNNs) to classify the Chernoff faces generated from [chernoff-faces.ipynb](chernoff-faces.ipynb). We want to see if framing a numerical problem as an image problem and using CNNs to classify the data (images) would be a promising approach. ## Boilerplate code Below are the boilerplate code to get data loaders, train the model, and assess its different performance measures. ``` import torch import torch.nn as nn import torch.optim as optim from torch.optim import lr_scheduler import numpy as np import torchvision from torchvision import datasets, models, transforms import matplotlib.pyplot as plt import time import os import copy from collections import namedtuple from sklearn.metrics import multilabel_confusion_matrix from collections import namedtuple import random def get_dataloaders(input_size=256, batch_size=4): data_transforms = { 'train': transforms.Compose([ transforms.Resize(input_size), transforms.CenterCrop(input_size), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) ]), 'test': transforms.Compose([ transforms.Resize(input_size), transforms.CenterCrop(input_size), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) ]), 'valid': transforms.Compose([ transforms.Resize(input_size), transforms.CenterCrop(input_size), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) ]) } shuffles = { 'train': True, 'test': True, 'valid': False } data_dir = './faces' samples = ['train', 'test', 'valid'] image_datasets = { x: datasets.ImageFolder(os.path.join(data_dir, x), transform=data_transforms[x]) for x in samples } dataloaders = { x: torch.utils.data.DataLoader(image_datasets[x], batch_size=batch_size, shuffle=shuffles[x], num_workers=4) for x in samples } dataset_sizes = { x: len(image_datasets[x]) for x in samples } class_names = image_datasets['train'].classes return dataloaders, dataset_sizes, class_names, len(class_names) def train_model(model, criterion, optimizer, scheduler, dataloaders, dataset_sizes, num_epochs=25, is_inception=False): since = time.time() best_model_wts = copy.deepcopy(model.state_dict()) best_acc = 0.0 for epoch in range(num_epochs): results = [] # Each epoch has a training and validation phase for phase in ['train', 'test']: if phase == 'train': optimizer.step() scheduler.step() model.train() # Set model to training mode else: model.eval() # Set model to evaluate mode running_loss = 0.0 running_corrects = 0 # Iterate over data. 
for inputs, labels in dataloaders[phase]: inputs = inputs.to(device) labels = labels.to(device) # zero the parameter gradients optimizer.zero_grad() # forward # track history if only in train with torch.set_grad_enabled(phase == 'train'): if is_inception and phase == 'train': outputs, aux_outputs = model(inputs) loss1 = criterion(outputs, labels) loss2 = criterion(aux_outputs, labels) loss = loss1 + 0.4*loss2 else: outputs = model(inputs) loss = criterion(outputs, labels) _, preds = torch.max(outputs, 1) # backward + optimize only if in training phase if phase == 'train': loss.backward() optimizer.step() # statistics running_loss += loss.item() * inputs.size(0) running_corrects += torch.sum(preds == labels.data) epoch_loss = running_loss / dataset_sizes[phase] epoch_acc = running_corrects.double() / dataset_sizes[phase] result = Result(phase, epoch_loss, float(str(epoch_acc.cpu().numpy()))) results.append(result) # deep copy the model if phase == 'test' and epoch_acc > best_acc: best_acc = epoch_acc best_model_wts = copy.deepcopy(model.state_dict()) results = ['{} loss: {:.4f} acc: {:.4f}'.format(r.phase, r.loss, r.acc) for r in results] results = ' | '.join(results) print('Epoch {}/{} | {}'.format(epoch, num_epochs - 1, results)) time_elapsed = time.time() - since print('Training complete in {:.0f}m {:.0f}s'.format( time_elapsed // 60, time_elapsed % 60)) print('Best val Acc: {:4f}'.format(best_acc)) # load best model weights model.load_state_dict(best_model_wts) return model def get_metrics(model, dataloaders, class_names): y_true = [] y_pred = [] was_training = model.training model.eval() with torch.no_grad(): for i, (inputs, labels) in enumerate(dataloaders['valid']): inputs = inputs.to(device) labels = labels.to(device) cpu_labels = labels.cpu().numpy() outputs = model(inputs) _, preds = torch.max(outputs, 1) for j in range(inputs.size()[0]): cpu_label = f'{cpu_labels[j]:02}' clazz_name = class_names[preds[j]] y_true.append(cpu_label) y_pred.append(clazz_name) model.train(mode=was_training) cmatrices = multilabel_confusion_matrix(y_true, y_pred, labels=class_names) metrics = [] for clazz in range(len(cmatrices)): cmatrix = cmatrices[clazz] tn, fp, fn, tp = cmatrix[0][0], cmatrix[0][1], cmatrix[1][0], cmatrix[1][1] sen = tp / (tp + fn) spe = tn / (tn + fp) acc = (tp + tn) / (tp + fp + fn + tn) f1 = (2.0 * tp) / (2 * tp + fp + fn) mcc = (tp * tn - fp * fn) / np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) metric = Metric(clazz, tn, fp, fn, tp, sen, spe, acc, f1, mcc) metrics.append(metric) return metrics def print_metrics(metrics): for m in metrics: print('{}: sen = {:.5f}, spe = {:.5f}, acc = {:.5f}, f1 = {:.5f}, mcc = {:.5f}' .format(m.clazz, m.sen, m.spe, m.acc, m.f1, m.mcc)) random.seed(1299827) torch.manual_seed(1299827) torch.backends.cudnn.deterministic = True torch.backends.cudnn.benchmark = False device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") print('device = {}'.format(device)) Result = namedtuple('Result', 'phase loss acc') Metric = namedtuple('Metric', 'clazz tn fp fn tp sen spe acc f1 mcc') ``` ## Train, Test, Validate Below, we applied different image classification networks to the data and perserve the results in the comments. The different networks tried were * ResNet-18 * ResNet-152 * AlexNet * VGG-19 with batch normalization * SqueezeNet 1.1 * Inception v3 * Densenet-201 * GoogleNet * ShuffleNet V2 * MobileNet V2 * ResNeXt-101-32x8d The most promising initial results were with Inception V3 and so we used that network to learn. 
You will notice that we use transfer learning (the model with pre-trained weights) to bootstrap learning the weights. Also, we do 2 rounds of 50 epoch learning. ``` # Best val Acc: 0.900000 # 0: sen = 1.00000, spe = 1.00000, acc = 1.00000, f1 = 1.00000, mcc = 1.00000 # 1: sen = 1.00000, spe = 0.96000, acc = 0.97000, f1 = 0.94340, mcc = 0.92582 # 2: sen = 0.64000, spe = 0.94667, acc = 0.87000, f1 = 0.71111, mcc = 0.63509 # 3: sen = 0.72000, spe = 0.88000, acc = 0.84000, f1 = 0.69231, mcc = 0.58521 # dataloaders, dataset_sizes, class_names, num_classes = get_dataloaders(input_size=224) # model = models.resnet18(pretrained=True) # model.fc = nn.Linear(model.fc.in_features, num_classes) # is_inception = False # Best val Acc: 0.900000 # 0: sen = 1.00000, spe = 1.00000, acc = 1.00000, f1 = 1.00000, mcc = 1.00000 # 1: sen = 1.00000, spe = 0.97333, acc = 0.98000, f1 = 0.96154, mcc = 0.94933 # 2: sen = 0.72000, spe = 0.90667, acc = 0.86000, f1 = 0.72000, mcc = 0.62667 # 3: sen = 0.64000, spe = 0.90667, acc = 0.84000, f1 = 0.66667, mcc = 0.56249 # dataloaders, dataset_sizes, class_names, num_classes = get_dataloaders(input_size=224) # model = models.resnet152(pretrained=True) # model.fc = nn.Linear(model.fc.in_features, num_classes) # is_inception = False # Best val Acc: 0.730000 # 0: sen = 1.00000, spe = 0.90667, acc = 0.93000, f1 = 0.87719, mcc = 0.84163 # 1: sen = 1.00000, spe = 0.93333, acc = 0.95000, f1 = 0.90909, mcc = 0.88192 # 2: sen = 0.20000, spe = 0.92000, acc = 0.74000, f1 = 0.27778, mcc = 0.16607 # 3: sen = 0.44000, spe = 0.78667, acc = 0.70000, f1 = 0.42308, mcc = 0.22108 # dataloaders, dataset_sizes, class_names, num_classes = get_dataloaders(input_size=224) # model = models.alexnet(pretrained=True) # model.classifier[6] = nn.Linear(4096, num_classes) # is_inception = False # Best val Acc: 0.910000 # 0: sen = 1.00000, spe = 1.00000, acc = 1.00000, f1 = 1.00000, mcc = 1.00000 # 1: sen = 1.00000, spe = 0.98667, acc = 0.99000, f1 = 0.98039, mcc = 0.97402 # 2: sen = 0.68000, spe = 0.96000, acc = 0.89000, f1 = 0.75556, mcc = 0.69282 # 3: sen = 0.84000, spe = 0.89333, acc = 0.88000, f1 = 0.77778, mcc = 0.69980 # dataloaders, dataset_sizes, class_names, num_classes = get_dataloaders(input_size=224) # model = models.vgg19_bn(pretrained=True) # model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes) # is_inception = False # Best val Acc: 0.780000 # 0: sen = 1.00000, spe = 1.00000, acc = 1.00000, f1 = 1.00000, mcc = 1.00000 # 1: sen = 1.00000, spe = 0.88000, acc = 0.91000, f1 = 0.84746, mcc = 0.80440 # 2: sen = 0.40000, spe = 0.90667, acc = 0.78000, f1 = 0.47619, mcc = 0.35351 # 3: sen = 0.48000, spe = 0.84000, acc = 0.75000, f1 = 0.48980, mcc = 0.32444 # dataloaders, dataset_sizes, class_names, num_classes = get_dataloaders(input_size=224) # model = models.squeezenet1_1(pretrained=True) # model.classifier[1] = nn.Conv2d(512, num_classes, kernel_size=(1,1), stride=(1,1)) # model.num_classes = num_classes # is_inception = False # Best val Acc: 0.920000 # 0: sen = 1.00000, spe = 1.00000, acc = 1.00000, f1 = 1.00000, mcc = 1.00000 # 1: sen = 1.00000, spe = 0.97333, acc = 0.98000, f1 = 0.96154, mcc = 0.94933 # 2: sen = 0.76000, spe = 0.93333, acc = 0.89000, f1 = 0.77551, mcc = 0.70296 # 3: sen = 0.72000, spe = 0.92000, acc = 0.87000, f1 = 0.73469, mcc = 0.64889 dataloaders, dataset_sizes, class_names, num_classes = get_dataloaders(input_size=299) model = models.inception_v3(pretrained=True) model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, num_classes) model.fc = 
nn.Linear(model.fc.in_features, num_classes) is_inception = True # Best val Acc: 0.910000 # 0: sen = 1.00000, spe = 1.00000, acc = 1.00000, f1 = 1.00000, mcc = 1.00000 # 1: sen = 1.00000, spe = 0.96000, acc = 0.97000, f1 = 0.94340, mcc = 0.92582 # 2: sen = 0.72000, spe = 0.94667, acc = 0.89000, f1 = 0.76596, mcc = 0.69687 # 3: sen = 0.72000, spe = 0.90667, acc = 0.86000, f1 = 0.72000, mcc = 0.62667 # dataloaders, dataset_sizes, class_names, num_classes = get_dataloaders(input_size=224) # model = models.densenet201(pretrained=True) # model.classifier = nn.Linear(model.classifier.in_features, num_classes) # is_inception = False # Best val Acc: 0.900000 # 0: sen = 1.00000, spe = 1.00000, acc = 1.00000, f1 = 1.00000, mcc = 1.00000 # 1: sen = 0.96000, spe = 1.00000, acc = 0.99000, f1 = 0.97959, mcc = 0.97333 # 2: sen = 0.64000, spe = 0.93333, acc = 0.86000, f1 = 0.69565, mcc = 0.60952 # 3: sen = 0.84000, spe = 0.88000, acc = 0.87000, f1 = 0.76364, mcc = 0.68034 # dataloaders, dataset_sizes, class_names, num_classes = get_dataloaders(input_size=224) # model = models.googlenet(pretrained=True) # model.fc = nn.Linear(model.fc.in_features, num_classes) # is_inception = False # Best val Acc: 0.500000 # 0: sen = 1.00000, spe = 0.49333, acc = 0.62000, f1 = 0.56818, mcc = 0.44246 # 1: sen = 0.96000, spe = 0.82667, acc = 0.86000, f1 = 0.77419, mcc = 0.70554 # 2: sen = 0.00000, spe = 1.00000, acc = 0.75000, f1 = 0.00000, mcc = nan # 3: sen = 0.00000, spe = 1.00000, acc = 0.75000, f1 = 0.00000, mcc = nan # dataloaders, dataset_sizes, class_names, num_classes = get_dataloaders(input_size=224) # model = models.shufflenet_v2_x0_5(pretrained=True) # model.fc = nn.Linear(model.fc.in_features, num_classes) # is_inception = False # Best val Acc: 0.910000 # 0: sen = 1.00000, spe = 1.00000, acc = 1.00000, f1 = 1.00000, mcc = 1.00000 # 1: sen = 0.88000, spe = 0.98667, acc = 0.96000, f1 = 0.91667, mcc = 0.89175 # 2: sen = 0.72000, spe = 0.92000, acc = 0.87000, f1 = 0.73469, mcc = 0.64889 # 3: sen = 0.76000, spe = 0.88000, acc = 0.85000, f1 = 0.71698, mcc = 0.61721 # dataloaders, dataset_sizes, class_names, num_classes = get_dataloaders(input_size=224) # model = models.mobilenet_v2(pretrained=True) # model.classifier[1] = nn.Linear(model.classifier[1].in_features, num_classes) # is_inception = False # Best val Acc: 0.890000 # 0: sen = 1.00000, spe = 1.00000, acc = 1.00000, f1 = 1.00000, mcc = 1.00000 # 1: sen = 1.00000, spe = 0.90667, acc = 0.93000, f1 = 0.87719, mcc = 0.84163 # 2: sen = 0.76000, spe = 0.90667, acc = 0.87000, f1 = 0.74510, mcc = 0.65812 # 3: sen = 0.44000, spe = 0.92000, acc = 0.80000, f1 = 0.52381, mcc = 0.41499 # dataloaders, dataset_sizes, class_names, num_classes = get_dataloaders(input_size=224) # model = models.resnext101_32x8d(pretrained=True) # model.fc = nn.Linear(model.fc.in_features, num_classes) # is_inception = False model = model.to(device) criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9) scheduler = lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1) model = train_model(model, criterion, optimizer, scheduler, dataloaders, dataset_sizes, num_epochs=50, is_inception=is_inception) print_metrics(get_metrics(model, dataloaders, class_names)) ```
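As an optional follow-up (a small sketch, not part of the original experiments), the best weights returned by `train_model` can be persisted so the metrics above can be reproduced without retraining; the file name below is purely illustrative.

```
# Persist the best-performing weights (file name is illustrative).
torch.save(model.state_dict(), 'inception_v3_chernoff_best.pt')

# They can later be restored into a freshly constructed Inception v3 with the same head:
# model.load_state_dict(torch.load('inception_v3_chernoff_best.pt'))
# model.eval()
```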
```
import numpy as np
import matplotlib.pyplot as plt
import scipy.interpolate
```

# Polynomial interpolation

Suppose we know the values $f_k=f(x_k)$ of some function $f(x)$ only on some set of arguments $x_k\in\mathbb R$, $k=1..K$. We want to evaluate $f$ at points $x$ lying between the interpolation nodes $x_k$, and for this we construct an interpolating function $p(x)$ that is defined for all values of $x$ and coincides with $f$ at the interpolation nodes: $f(x_k)=p(x_k)$, $k=1..K$.

A common (but not the only) choice for the function $p$ is a polynomial of degree $K-1$:
$$p(x)=\sum_{n=0}^{K-1} p_n x^n,$$
whose number of coefficients equals the number of known function values, which allows these coefficients to be determined uniquely. Formally, the coefficients are found from the system of equations:
$$p(x_k)=f_k=\sum_{n=0}^{K-1} p_n x_k^n,$$
or in matrix form $MP=F$, where
$$ F=\begin{pmatrix}f_1\\\vdots\\f_K\end{pmatrix},\quad P=\begin{pmatrix}p_0\\\vdots\\p_{K-1}\end{pmatrix},\quad M=\begin{pmatrix} x_1^0 & \cdots & x_1^{K-1} \\ \vdots & \ddots & \vdots \\ x_K^0 & \cdots & x_K^{K-1} \\ \end{pmatrix}. $$
The matrix $M$ is called the [Vandermonde matrix](https://ru.wikipedia.org/wiki/%D0%9E%D0%BF%D1%80%D0%B5%D0%B4%D0%B5%D0%BB%D0%B8%D1%82%D0%B5%D0%BB%D1%8C_%D0%92%D0%B0%D0%BD%D0%B4%D0%B5%D1%80%D0%BC%D0%BE%D0%BD%D0%B4%D0%B0). Let us try to construct the interpolation polynomial in this way.

```
# Numpy already has a class for polynomials: numpy.poly1d.
# For completeness we implement our own class.
class Poly():
    def __init__(self, pn):
        """
        Creates a polynomial with coefficients pn:
        self(x) = sum_n pn[n] * x**n.
        The coefficients pn are listed in order of increasing monomial degree.
        """
        self.pn = pn
    def __call__(self, x):
        """
        Evaluates the polynomial on a vector of values x.
        """
        a = 1. # Here we accumulate the powers x**n.
        p = 0. # Here we accumulate the sum of monomials.
        for pn in self.pn:
            p += a*pn # Add the next monomial.
            a *= x    # Increase the monomial degree.
        return p

# Helper function for computing the Vandermonde matrix.
def vandermonde(xn):
    return np.power(xn[:,None], np.arange(len(xn))[None,:])

# A function that finds the interpolation polynomial by solving the linear system.
def interp_naive(xn, fn):
    """
    Returns the interpolation polynomial taking the values fn at the points xn.
    """
    M = vandermonde(xn)
    # We use the numpy routine for solving linear systems.
    # Methods for solving linear systems are discussed in another lab.
    pn = np.linalg.solve(M, fn)
    return Poly(pn)

# Take a logarithmic grid on the interval [1E-6,1] and see how accurately we recover a polynomial.
N = 8 # Number of interpolation nodes.
x = np.logspace(-6,0,N) # Points of the logarithmic grid.
# We will interpolate a polynomial of degree N-1 with random coefficients.
f = Poly(np.random.randn(N))
y = f(x) # Values of the polynomial on the grid.
p = interp_naive(x, y) # Construct the interpolation polynomial.
z = p(x) # Values of the interpolation polynomial on the grid.
print("Absolute error of values", np.linalg.norm(z-y))
print("Absolute error of coefficients", np.linalg.norm(f.pn-p.pn))
```

The constructed interpolation polynomial takes values close to the prescribed values at the nodes. But although the values at the nodes should determine the polynomial uniquely, the interpolation polynomial and the interpolated polynomial have significantly different coefficients. The values of the two polynomials between the nodes also differ significantly.
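One quick way to see where this loss of accuracy comes from (a short illustration added here; task 4 below asks for a more thorough analysis) is to look at the condition number of the Vandermonde matrix on this grid:

```
# Illustration (not part of the numbered tasks): the Vandermonde matrix on this
# logarithmic grid is extremely ill-conditioned, so tiny rounding errors are
# hugely amplified when solving for the coefficients.
print("Condition number of M:", np.linalg.cond(vandermonde(x)))
```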
```
t = np.linspace(0,1,100)
_, ax = plt.subplots(figsize=(10,7))
ax.plot(t, f(t), '-k')
ax.plot(t, p(t), '-r')
ax.plot(x, f(x), '.')
ax.set_xlabel("Argument")
ax.set_ylabel("Function")
ax.legend(["$f(x)$", "$p(x)$"])
plt.show()
```

## Tasks

1. Modify the `__call__` method so that it implements [Horner's scheme](https://ru.wikipedia.org/wiki/Схема_Горнера). Why is this scheme better?
2. Why does finding the coefficients of the interpolation polynomial by solving the linear system give an erroneous answer?
3. Find the determinant of the Vandermonde matrix theoretically and numerically.
4. Find the condition numbers of the Vandermonde matrix. Compare the experimentally obtained errors of the solution of the system, and the residual, with the theoretical prediction.

In practice the interpolation polynomial is usually written in the form of the [Lagrange polynomial](https://ru.wikipedia.org/wiki/Интерполяционный_многочлен_Лагранжа):
$$ p(x)=\sum_{k=1}^K f_k L_k(x),\;\text{where}\; L_k(x)=\prod_{j\neq k}\frac{x-x_j}{x_k-x_j}. $$
To speed up the evaluation of the Lagrange polynomial, [Aitken's scheme](https://ru.wikipedia.org/wiki/Схема_Эйткена), based on recursion, is used. Denote by $p_{i,\ldots,j}$ the Lagrange polynomial built on the interpolation nodes $(x_i,f_i),\ldots,(x_j,f_j)$; in particular the desired polynomial is $p=p_{1,\ldots,K}$. The following relation holds, expressing the interpolation polynomial through the same kind of polynomial with fewer nodes:
$$ p_{i,\ldots,j}(x)=\frac{(x-x_i)p_{i+1,\ldots,j}(x)-(x-x_j)p_{i,\ldots,j-1}(x)}{x_j-x_i}. $$
The base of the recursion is given by the obvious equality $p_{i}(x)=f_i$.

[Newton's interpolation formulas](ru.wikipedia.org/wiki/Интерполяционные_формулы_Ньютона) give another popular way to write the interpolation polynomial:
$$ p(x)=\sum_{k=1}^K[x_1,\ldots,x_k]f\prod_{j=1}^{k-1}(x-x_j), $$
where the divided differences $[\ldots]f$ are defined recursively:
$$ [x_1,\ldots,x_k,x]f=\frac{[x_1,\ldots,x_{k-1},x]f-[x_1,\ldots,x_{k-1},x_k]f}{x-x_k}. $$
These formulas can be viewed as a discrete analogue of Taylor's formula. Based on Newton's formulas, [Neville's algorithm](https://en.wikipedia.org/wiki/Neville%27s_algorithm) was developed for evaluating the interpolation polynomial; it is essentially equivalent to Aitken's scheme.

## Tasks

5. Implement Aitken's scheme for evaluating the interpolation polynomial.
6. If we try to recover a polynomial from its values at points, as in task 2, will Aitken's scheme give a more accurate answer than solving the linear system?
7. Scipy contains a ready implementation of the Lagrange interpolation polynomial, [`scipy.interpolate.lagrange`](docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.interpolate.lagrange.html). The documentation notes that the method is numerically unstable. What does this mean?
8. Errors in the input data used to build the interpolation polynomial cause errors when evaluating it at intermediate points. For which placement of the nodes does Lagrange interpolation have the smallest error? How is this related to numerical stability?

Let us now consider how well the interpolation polynomial approximates the interpolated function.

```
# As the function to interpolate we take f(x) = x*sin(2x).
def f(x):
    return x*np.sin(2*x)
# We will interpolate the function on the interval [x0-r,x0+r], where
x0 = 10
r = 1
# As interpolation nodes we take a uniform grid of N nodes.
N = 5
xn = np.linspace(x0-r, x0+r, N)
# Construct the interpolation polynomial.
p = interp_naive(xn, f(xn))
# Estimate the accuracy of the polynomial approximation as the maximum
# deviation of the polynomial values from the function values on the interval.
# Since we cannot consider all points, we restrict ourselves
# to a dense grid.
tn = np.linspace(x0-r, x0+r, 10000)
error = np.abs(f(tn)-p(tn))
print("Error", np.max(error))
_, ax = plt.subplots(figsize=(10,5))
ax.semilogy(tn, error)
ax.set_xlabel("Argument")
ax.set_ylabel("Absolute error")
plt.show()
```

## Tasks

9. Find the error of the approximation of the function $f$ by the interpolation polynomial $p$ for $x0=10, 100, 1000$ and for $N=5, 10, 15$. Explain the results you obtain.
10. Plot the dependence of the error on the number of interpolation nodes $N$ for $x0=100$ and $r=5$ in the range $5\leq N \leq 50$.
11. Repeat tasks 9 and 10 for the Chebyshev interpolation nodes: $$x_n=x0+r\cos\left(\frac{\pi}{2}\frac{2n-1}{N}\right),\; n=1\ldots N.$$
12. Compare the distribution of the error inside the interval $x\in[x0-r,x0+r]$ for uniformly spaced nodes and for Chebyshev nodes.
13. Repeat tasks 9 and 10 for the function $f(x)=|x-1|$, $x0=1$, $r=1$. Explain the differences you observe.

Using an interpolation polynomial of very high degree often leads to the approximation error being very large at some points. Instead of a single high-degree polynomial approximating the function on the whole interval, one can use several polynomials of lower degree, each of which approximates the function only on a subinterval. If the function has some degree of smoothness, for example several of its derivatives are continuous, it is natural to require the same smoothness from the resulting family of interpolation polynomials, which imposes constraints on their coefficients. The resulting piecewise-polynomial function is called a [spline](https://ru.wikipedia.org/wiki/%D0%A1%D0%BF%D0%BB%D0%B0%D0%B9%D0%BD).

A [cubic spline](https://ru.wikipedia.org/wiki/%D0%9A%D1%83%D0%B1%D0%B8%D1%87%D0%B5%D1%81%D0%BA%D0%B8%D0%B9_%D1%81%D0%BF%D0%BB%D0%B0%D0%B9%D0%BD) of defect 1 is a function that:
1. on each interval $[x_{k-1}, x_k]$ is a polynomial of degree three (or less);
2. has continuous first and second derivatives at all points;
3. coincides with the interpolated function at the nodes $x_k$.

## Tasks

14. For the function from task 9, construct a cubic spline of defect 1 with the nodes from task 9. You may use the functions `scipy.interpolate.splrep` and `scipy.interpolate.splev` (a minimal usage sketch follows this task list) or implement your own analogues.
15. Study how the error of the spline approximation depends on the number of interpolation nodes. Compare with the result of task 10. When do the errors coincide?
16. How can the interpolation methods studied here be generalized to curves in a multidimensional space?
17. How can functions of several variables be interpolated?
18. What other interpolation methods exist?
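As a starting point for task 14, here is a minimal usage sketch of the scipy routines mentioned there; it reuses `f`, `xn` and the dense grid `tn` defined in the cells above, and leaves the exercise itself to the reader.

```
# Build a cubic interpolating spline through the nodes and evaluate it on the dense grid.
tck = scipy.interpolate.splrep(xn, f(xn), k=3)
spline_values = scipy.interpolate.splev(tn, tck)
print("Spline error", np.max(np.abs(f(tn) - spline_values)))
```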
# Bis438 Final Project Problem 2 ## Import Python Libraries ``` import numpy as np import deepchem as dc from MPP.model import GCN, MLP from MPP.utils import process_prediction, make_feature, split_data ``` ## Build GraphConv Model ``` batch_size = 50 gcn_model = GCN(batch_size=batch_size) # build model ``` ## Training GraphConv Model and Calculate ROC-AUC ``` # define metric as roc_auc_score metric = dc.metrics.Metric(dc.metrics.roc_auc_score, task_averager=np.mean, verbose=False, mode='classification') num_models = 10 # the number of iteration roc_auc_train = list() # save roc_auc value for training dataset roc_auc_valid = list() # save roc_auc value for validation dataset roc_auc_test = list() # save roc_auc value for test dataset # Do featurization conv_feature = make_feature(data_name='BACE', feature_name='GraphConv') for i in range(num_models): # Load ith dataset with GraphConv Featurizer and random split train_dataset, valid_dataset, test_dataset = split_data(conv_feature) # Fitting ith model with training dataset gcn_model.fit(train_dataset, epochs=3) # fitting with training epoch 3 # Evaluating model # save roc_auc for training dataset pred_train = gcn_model.predict(train_dataset) pred_train = process_prediction(y_true=train_dataset.y, y_pred=pred_train) train_scores = metric.compute_metric(y_true=train_dataset.y, y_pred=pred_train, w=train_dataset.w) roc_auc_train.append(train_scores) # save roc_auc for valid dataset pred_valid = gcn_model.predict(valid_dataset) pred_valid = process_prediction(y_true=valid_dataset.y, y_pred=pred_valid) valid_scores = metric.compute_metric(y_true=valid_dataset.y, y_pred=pred_valid, w=valid_dataset.w) roc_auc_valid.append(valid_scores) # save roc_auc for test dataset pred_test = gcn_model.predict(test_dataset) pred_test = process_prediction(y_true=test_dataset.y, y_pred=pred_test) test_scores = metric.compute_metric(y_true=test_dataset.y, y_pred=pred_test, w=test_dataset.w) roc_auc_test.append(test_scores) # print roc_auc result print(f'\nEvaluating model number {i+1:02d}.') # 1-based indexing of model number. 
print(f'Train ROC-AUC Score: {train_scores:.3f}, ' f'Valid ROC-AUC Score: {valid_scores:.3f}, Test ROC-AUC Score: {test_scores:.3f}.\n') ``` ## Calculate mean value of ROC-AUC and use std1 for error bar in GCN model ``` gcn_values = list() gcn_values.append(np.mean(roc_auc_train)) gcn_values.append(np.mean(roc_auc_valid)) gcn_values.append(np.mean(roc_auc_test)) gcn_stds = list() gcn_stds.append(np.std(roc_auc_train)) gcn_stds.append(np.std(roc_auc_valid)) gcn_stds.append(np.std(roc_auc_test)) ``` ## Build Multi Layer Perceptron using keras ``` batch_size = 50 dense_model = MLP(batch_size=batch_size) ``` ## Training Multi Layer Percpetron Model and Calculate ROC-AUC ``` # define metric as roc_auc_score metric = dc.metrics.Metric(dc.metrics.roc_auc_score, task_averager=np.mean, verbose=False, mode='classification') num_models = 10 # the number of iteration roc_auc_train = list() # save roc_auc value for training dataset roc_auc_valid = list() # save roc_auc value for validation dataset roc_auc_test = list() # save roc_auc value for test dataset # Do featurization ecfp_feature = make_feature(data_name='BACE', feature_name='ECFP') for i in range(num_models): # Load ith dataset with GraphConv Featurizer and random split train_dataset, valid_dataset, test_dataset = split_data(ecfp_feature) # Fitting ith model with training dataset dense_model.fit(train_dataset, epochs=3) # fitting with training epoch 3 # Evaluating model # save roc_auc for training dataset pred_train = dense_model.predict(train_dataset) pred_train = process_prediction(y_true=train_dataset.y, y_pred=pred_train) train_scores = metric.compute_metric(y_true=train_dataset.y, y_pred=pred_train) roc_auc_train.append(train_scores) # save roc_auc for valid dataset pred_valid = dense_model.predict(valid_dataset) pred_valid = process_prediction(y_true=valid_dataset.y, y_pred=pred_valid) valid_scores = metric.compute_metric(y_true=valid_dataset.y, y_pred=pred_valid) roc_auc_valid.append(valid_scores) # save roc_auc for test dataset pred_test = dense_model.predict(test_dataset) pred_test = process_prediction(y_true=test_dataset.y, y_pred=pred_test) test_scores = metric.compute_metric(y_true=test_dataset.y, y_pred=pred_test) roc_auc_test.append(test_scores) # print roc_auc result print(f'\nEvaluating model number {i+1:02d}.') print(f'Train ROC-AUC Score: {train_scores:.3f}, ' f'Valid ROC-AUC Score: {valid_scores:.3f}, Test ROC-AUC Score: {test_scores:.3f}.\n') ``` ## Calculate mean value of ROC-AUC and use std1 for error bar in MLP model ``` mlp_values = list() mlp_values.append(np.mean(roc_auc_train)) mlp_values.append(np.mean(roc_auc_valid)) mlp_values.append(np.mean(roc_auc_test)) mlp_stds = list() mlp_stds.append(np.std(roc_auc_train)) mlp_stds.append(np.std(roc_auc_valid)) mlp_stds.append(np.std(roc_auc_test)) ``` ## Plot ROC-AUC Score ``` from matplotlib import pyplot as plt %matplotlib inline topics = ['train', 'valid', 'test'] def create_x(t, w, n, d): return [t*x + w*n for x in range(d)] gcn_values_x = create_x(2, 0.8, 1, 3) mlp_values_x = create_x(2, 0.8, 2, 3) ax = plt.subplot() p1 = ax.bar(gcn_values_x, gcn_values, yerr=gcn_stds, capsize=1) p2 = ax.bar(mlp_values_x, mlp_values, yerr=mlp_stds, capsize=1) middle_x = [(a+b)/2 for (a,b) in zip(gcn_values_x, mlp_values_x)] ax.set_title('Mean ROC-AUC Score for GCN/MLP Model') ax.set_xlabel('Dataset') ax.set_ylabel('ROC-AUC Score') ax.legend((p1[0], p2[0]), ('GCN', 'MLP'), fontsize=15) ax.set_xticks(middle_x) ax.set_xticklabels(topics) plt.show() ```
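A small optional sketch that tabulates the same numbers used for the bar chart; it assumes pandas is available, which is not imported elsewhere in this notebook.

```
import pandas as pd

# Mean and standard deviation of ROC-AUC per split for both models.
summary = pd.DataFrame({'GCN mean': gcn_values, 'GCN std': gcn_stds,
                        'MLP mean': mlp_values, 'MLP std': mlp_stds},
                       index=topics)
print(summary.round(3))
```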
``` # default_exp utils.clusterization ! pip install pyclustering ``` ## clusterization ``` #export import logging import sentencepiece as sp from pyclustering.cluster.kmedoids import kmedoids from pyclustering.utils.metric import euclidean_distance_square, euclidean_distance from pyclustering.cluster.silhouette import silhouette, silhouette_ksearch_type, silhouette_ksearch from sklearn.cluster import KMeans from sklearn.decomposition import PCA from sklearn.manifold import TSNE from sklearn.metrics import silhouette_score, pairwise_distances_argmin_min import umap import numpy as np from abc import ABC from typing import Tuple, Optional # Configs logger = logging.getLogger(__name__) logger.setLevel(logging.INFO) ``` ## Distance metrics In order to allow flexible implementation of several clustering techniques, a base CustomDistance class is defined. ``` # export class CustomDistance(ABC): def compute_distance(self, x, y) -> float: """ Computes the distance between 2 vectors according to a particular distance metric :param x: Vector :param y: Vector :return: """ pass ``` Euclidean distance is the most commonly used distance metric. This distance is used for default in some of the methods ``` # export class EuclideanSquareDistance(CustomDistance): """Euclidean (square) distance.""" def compute_distance(self, x, y) -> float: return euclidean_distance_square(x, y) # export class EuclideanDistance(CustomDistance): """Euclidean distance.""" def compute_distance(self, x, y) -> float: return euclidean_distance(x, y) ``` ## Utils ## Dimensionality reduction ``` # export # Uses PCA first and then t-SNE def reduce_dims_pca_tsne(feature_vectors, dims = 2): """ """ # hyperparameters from https://towardsdatascience.com/visualising-high-dimensional-datasets-using-pca-and-t-sne-in-python-8ef87e7915b pca = PCA(n_components=50) pca_features = pca.fit_transform(feature_vectors) logging.info("Reduced dims via PCA.") tsne = TSNE(n_components=dims, verbose=1, perplexity=40, n_iter=300) tsne_features = tsne.fit_transform(pca_features) logging.info("Reduced dims via t-SNE.") return tsne_features # export def reduce_dims_tsne(vectors, dims=2): """ Perform dimensionality reduction using t-SNE (from sklearn) :param vectors: Data vectors to be reduced :param dims: Optional[int] indicating the number of dimensions of the desired space :return: Vectors with the desired dimensionality """ tsne = TSNE(n_components=dims, verbose=1, perplexity=40, n_iter=300) tsne_feats = tsne.fit_transform(vectors) logging.info("Reduced dims via t-SNE") return tsne_feats # export def reduce_dims_pca(vectors, dims=2): """ Perform dimensionality reduction using PCA (from sklearn) :param vectors: Data vectors to be reduced :param dims: Optional[int] indicating the number of dimensions of the desired space :return: Vectors with the desired dimensionality """ pca = PCA(n_components=dims) pca_feats = pca.fit_transform(vectors) logging.info("Reduced dims via PCA.") return pca_feats # export def get_silhouette(samples1, samples2): cluster1, medoid_id1, kmedoid_instance1 = run_kmedoids(samples1, 1) cluster2, medoid_id2, kmedoid_instance12 = run_kmedoids(samples2, 1) cluster2 = np.array([[len(samples1) + x for x in cluster2[0]]]) samples = np.concatenate((samples1, samples2), axis=0) clusters = np.concatenate((cluster1, cluster2), axis=0) score = sum(silhouette(samples, clusters).process().get_score()) / len(samples) return score ``` Check UMAP details at [the official documentation](https://umap-learn.readthedocs.io/en/latest/) ``` # export def 
reduce_dims_umap(vectors, n_neighbors: Optional[int]=15, min_dist: Optional[float]=0.1, dims: Optional[int]=2, metric: Optional[str]='euclidean') -> np.ndarray: """ Perform dimensionality reduction using UMAP :param vectors: Data vectors to be reduced :param dims: Optional[int] indicating the number of dimensions of the desired space :return: Vectors with the desired dimensionality """ reducer = umap.UMAP( n_neighbors=n_neighbors, min_dist=min_dist, n_components=dims, metric=metric ) umap_vectors = reducer.fit_transform(vectors) return umap_vectors ``` ## k-means ``` # export def k_means(feature_vectors, k_range=[2, 3]): # finding best k bst_k = k_range[0] bst_silhouette = -1 bst_labels = None bst_centroids = None bst_kmeans = None for k in k_range: kmeans = KMeans(n_clusters = k) kmeans.fit(feature_vectors) labels = kmeans.predict(feature_vectors) centroids = kmeans.cluster_centers_ silhouette_avg = silhouette_score(feature_vectors, labels) if silhouette_avg > bst_silhouette: bst_k = k bst_silhouette = silhouette_avg bst_labels = labels bst_centroids = centroids bst_kmeans = kmeans logger.info(f'Best k = {bst_k} with a silhouette score of {bst_silhouette}') centroid_mthds = pairwise_distances_argmin_min(bst_centroids, feature_vectors) return bst_labels, bst_centroids, bst_kmeans, centroid_mthds # export def clusterize(feature_vecs, k_range = [2], dims = 2): # feature_vectors = reduce_dims(np.array(list(zip(*feature_vecs))[1]), dims = dims) feature_vectors = reduce_dims_umap(np.array(list(zip(*feature_vecs))[1]), dims=dims) experimental_vectors = feature_vectors#[:len(feature_vectors) * 0.1] labels, centroids, kmeans, centroid_mthds = k_means(experimental_vectors, k_range = k_range) return (feature_vectors, centroid_mthds, labels, centroids, kmeans) # export def find_best_k(samples): logging.info("Searching best k for clustering.") search_instance = silhouette_ksearch(samples, 2, 10, algorithm=silhouette_ksearch_type.KMEDOIDS).process() amount = search_instance.get_amount() scores = search_instance.get_scores() logging.info(f"Best Silhouette Score for k = {amount}: {scores[amount]}") return amount # export def run_kmedoids(samples, k): initial_medoids = list(range(k)) # Create instance of K-Medoids algorithm. kmedoids_instance = kmedoids(samples, initial_medoids) kmedoids_instance.process() clusters = kmedoids_instance.get_clusters() medoid_ids = kmedoids_instance.get_medoids() return clusters, medoid_ids, kmedoids_instance # export def perform_clusterize_kmedoids(data: np.array, reduct_dist='euclidean', dims: int = 2) -> Tuple: """ Perform clusterization of the dataset by means of k-medoids :param data: Data to be clusterized :param reduct_dist: Distance metric to be used for dimensionality reduction :param dims: Number of dims to get with umap before clustering :return: Tuple (reduced_vectors, clusters, medoid_ids, pyclustering kmedoids instance) """ reduced_data = reduce_dims_umap(data, dims = dims) k = find_best_k(reduced_data) clusters, medoid_ids, kmedoids_instance = run_kmedoids(reduced_data, k) return reduced_data, clusters, medoid_ids, kmedoids_instance # export def clusterize_kmedoids(data: np.array, distance_metric='euclidean', dims: int = 2) -> Tuple: """ Performs clusterization (k-medoids) using UMAP for dim. 
reduction """ reduced_data = reduce_dims_umap(data, dims = dims, metric=distance_metric) logging.info('Reduced dimensionality via UMAP') k = find_best_k(reduced_data) clusters, medoid_ids, kmedoids_instance = run_kmedoids(reduced_data, k) return reduced_data, clusters, medoid_ids, kmedoids_instance # export def new_clusterize_kmedoids(h_samples, m1_samples, m2_samples, m3_samples, dims = 2): samples = np.concatenate((h_samples, m1_samples, m2_samples, m3_samples), axis=0) samples = reduce_dims(samples, dims = dims) # np.array(list(zip(*samples)))[0], dims = dims) h_samples, m1_samples, m2_samples, m3_samples = samples[:len(h_samples)], samples[len(h_samples):len(h_samples) + len(m1_samples)], samples[len(h_samples) + len(m1_samples):len(h_samples) + len(m1_samples) + len(m2_samples)], samples[len(h_samples) + len(m1_samples) + len(m2_samples):] h_k = find_best_k(h_samples) h_clusters, h_medoid_ids, h_kmedoids_instance = run_kmedoids(h_samples, h_k) m1_k = find_best_k(m1_samples) m1_clusters, m1_medoid_ids, m1_kmedoids_instance = run_kmedoids(m1_samples, m1_k) m2_k = find_best_k(m2_samples) m2_clusters, m2_medoid_ids, m2_kmedoids_instance = run_kmedoids(m2_samples, m2_k) m3_k = find_best_k(m3_samples) m3_clusters, m3_medoid_ids, m3_kmedoids_instance = run_kmedoids(m3_samples, m3_k) return ( (h_samples, h_clusters, h_medoid_ids, h_kmedoids_instance), (m1_samples, m1_clusters, m1_medoid_ids, m1_kmedoids_instance), (m2_samples, m2_clusters, m2_medoid_ids, m2_kmedoids_instance), (m3_samples, m3_clusters, m3_medoid_ids, m3_kmedoids_instance) ) ``` ## Prototypes and criticisms ``` # export def gen_criticisms(samples, prototypes, n = None, distance = None): if n is None: n = len(prototypes) if distance is None: distance = EuclideanDistance() crits = [] for x in samples: mean_dist_x = 0. for x_i in samples: mean_dist_x += distance.compute_distance(x, x_i) mean_dist_x = mean_dist_x / len(x) mean_dist_proto = 0. for z_j in prototypes: mean_dist_proto += distance.compute_distance(x, z_j) mean_dist_proto = mean_dist_proto / len(prototypes) crits.append(mean_dist_x - mean_dist_proto) crits = np.array(crits) crit_ids = crits.argsort()[-n:][::-1] return crits, crit_ids from nbdev.export import notebook2script notebook2script() ```
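A small usage sketch of the utilities above (not exported to the module): it clusters two synthetic Gaussian blobs and then picks criticisms relative to the medoid prototypes. It assumes `umap-learn` and `pyclustering` are installed, as required by the imports at the top.

```
# Two well-separated Gaussian blobs in 16 dimensions.
data = np.vstack([np.random.randn(50, 16), np.random.randn(50, 16) + 5.0])

reduced, clusters, medoid_ids, kmedoids_instance = clusterize_kmedoids(data, dims=2)
prototypes = reduced[medoid_ids]

# Criticisms: the points least well represented by the medoid prototypes.
crits, crit_ids = gen_criticisms(reduced, prototypes, n=3)
print("Medoid ids:", medoid_ids)
print("Criticism ids:", crit_ids)
```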
# Ames Housing Prices - Step 4: Modeling We are now ready to begin building our regression model to predict prices. This notebook demonstrates how to use the previous work (cleaning, feature prep) to quickly build up the engineered features we need to train our ML model. ``` # Basic setup %run config.ipynb # Connect to Cortex 5 and create a Builder instance cortex = Cortex.client() builder = cortex.builder() ``` ### Training Data We will start with the training dataset from our previous steps and run the _features_ pipeline to get cleaned and prepared data ``` train_ds = cortex.dataset('kaggle/ames-housing-train') train_df = train_ds.as_pandas() pipeline = train_ds.pipeline('features') train_df = pipeline.run(train_df) ``` ### Feature Framing We now need to split out our target variable from the training data and convert our categorical values into _dummies_. ``` y = train_df['SalePrice'] train_df.shape def drop_target(pipeline, df): df.drop('SalePrice', 1, inplace=True) def get_dummies(pipeline, df): return pd.get_dummies(df) pipeline = train_ds.pipeline('engineer', depends=['features']) pipeline.reset() pipeline.add_step(drop_target) pipeline.add_step(get_dummies) # Run the feature engineering pipeline to prepare for model training train_df = pipeline.run(train_ds.as_pandas()) # Remember the full set of engineered columns we need to produce for the model pipeline.set_context('columns', train_df.columns.tolist()) # Save the dataset to persist pipeline changes train_ds.save() print('\nTrain shape: (%d, %d)' % train_df.shape) ``` ## Model Training, Validation, and Experimentation We are going to try a variety of alogithms and parameters to achieve optimal results. This will be an iterative process that Cortex 5 will help us track and reproduce in the future by recording the data pipeline used, the model parameters, model metrics, and model artifacts in Experiments. ``` from sklearn.linear_model import LinearRegression, RidgeCV, LassoCV, ElasticNetCV, Ridge, Lasso, ElasticNet from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split, GridSearchCV def train(x, y, **kwargs): alphas = kwargs.get('alphas', [1, 0.1, 0.001, 0.0001]) # Select alogrithm mtype = kwargs.get('model_type') if mtype == 'Lasso': model = LassoCV(alphas=alphas) elif mtype == 'Ridge': # model = RidgeCV(alphas=alphas) model = GridSearchCV(Ridge(), param_grid={'alpha': np.logspace(0, 1, num=10), 'normalize': [True, False], 'solver': ['auto', 'svd']}, scoring=['explained_variance', 'r2', 'neg_mean_squared_error'], n_jobs=-1, cv=10, refit='neg_mean_squared_error') elif mtype == 'ElasticNet': model = ElasticNetCV(alphas=alphas) else: model = LinearRegression() # Train model model.fit(x, y) if hasattr(model, 'best_estimator_'): return model.best_estimator_, model.best_params_ return model, alphas def predict_and_score(model, x, y): predictions = model.predict(x) rmse = np.sqrt(mean_squared_error(predictions, y)) return [predictions, rmse] X_train, X_test, y_train, y_test = train_test_split(train_df, y.values, test_size=0.20, random_state=10) ``` ### Experiment Management We are ready to run our train and validation loop and select the optimal model. As we run our experiment, Cortex will track each run and record the key params, metrics, and artifacts needed to reproduce and/or deploy the model later. 
``` %%time best_model = None best_model_type = None best_rmse = 1.0 exp = cortex.experiment('kaggle/ames-housing-regression') # exp.reset() exp.set_meta('style', 'supervised') exp.set_meta('function', 'regression') with exp.start_run() as run: alphas = [1, 0.1, 0.001, 0.0005] for model_type in ['Linear', 'Lasso', 'Ridge', 'ElasticNet']: print('---'*30) print('Training model using {} regression algorithm'.format(model_type)) model, params = train(X_train, y_train, model_type=model_type, alphas=alphas) print('Params: ', params) [predictions, rmse] = predict_and_score(model, X_train, y_train) print('Training error:', rmse) [predictions, rmse] = predict_and_score(model, X_test, y_test) print('Testing error:', rmse) if rmse < best_rmse: best_rmse = rmse best_model = model best_model_type = model_type r2 = best_model.score(X_test, y_test) run.log_metric('r2', r2) run.log_metric('rmse', best_rmse) run.log_param('model_type', best_model_type) run.log_param('alphas', alphas) run.log_artifact('model', best_model) print('---'*30) print('Best model: ' + best_model_type) print('Best testing error: %.6f' % best_rmse) print('R2 score: %.6f' % r2) exp ```
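An optional follow-up sketch using only scikit-learn and pandas (no Cortex APIs): inspect which engineered features the winning linear model weights most heavily. It assumes the loop above has run, so `best_model` exposes `coef_` (true for all four estimator types tried here).

```
import pandas as pd

# Rank engineered features by the absolute magnitude of their learned coefficients.
coefs = pd.Series(best_model.coef_, index=train_df.columns)
print(coefs.abs().sort_values(ascending=False).head(10))
```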
<a href="https://colab.research.google.com/github/dcshapiro/AI-Feynman/blob/master/AI_Feynman_2_0.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # AI Feynman 2.0: Learning Regression Equations From Data ### Clone repository and install dependencies ``` !git clone https://github.com/SJ001/AI-Feynman.git ``` Look at what we downloaded ``` !ls /content/AI-Feynman # %pycat AI-Feynman/requirements.txt if you need to fix the dependencies ``` Fix broken requirements file (may not be needed if later versions fix this). ``` %%writefile AI-Feynman/requirements.txt torch>=1.4.0 matplotlib sympy==1.4 pandas scipy sortedcontainers ``` Install dependencies not already installed in Google Collab ``` !pip install -r AI-Feynman/requirements.txt ``` Check that fortran is installed ``` !gfortran --version ``` Check the OS version ``` !lsb_release -a ``` Install the csh shell ``` !sudo apt-get install csh ``` Set loose permissions to avoid some reported file permissions issues ``` !chmod +777 /content/AI-Feynman/Code/* ``` ### Compile the fortran code Look at the code directory ``` !ls -l /content/AI-Feynman/Code ``` Compile .f files into .x files ``` !cd /content/AI-Feynman/Code/ && ./compile.sh ``` ### Run the first example from the AI-Feynman repository Change working directory to the Code directory ``` import os os.chdir("/content/AI-Feynman/Code/") print(os.getcwd()) !pwd ``` Check that the bruteforce code runs without errors ``` from S_brute_force import brute_force brute_force("/content/AI-Feynman/example_data/","example1.txt",30,"14ops.txt") ``` Look at the first line of the example 1 file ``` !head -n 1 /content/AI-Feynman/example_data/example1.txt # Example 1 has data generated from an equation, where the last column is the regression target, and the rest of the columns are the input data # The following example shows the relationship between the first line of the file example1.txt and the formula used to make the data x=[1.6821347439986711,1.1786188905177983,4.749225735259924,1.3238356535004034,3.462199507094163] x0,x1,x2,x3=x[0],x[1],x[2],x[3] (x0**2 - 2*x0*x1 + x1**2 + x2**2 - 2*x2*x3 + x3**2)**0.5 ``` Run the code. It takes a long time, so go get some coffee. 
``` from S_run_aifeynman import run_aifeynman # Run example 1 as the regression dataset run_aifeynman("/content/AI-Feynman/example_data/","example1.txt",30,"14ops.txt", polyfit_deg=3, NN_epochs=400) ``` ### Assess the results ``` !cat results.dat ``` We found a candidate with an excellent fit, let's see what we got ``` !ls -l /content/AI-Feynman/Code/results/ !ls -l /content/AI-Feynman/Code/results/NN_trained_models/models !cat /content/AI-Feynman/Code/results/solution_example1.txt ``` Note in the cell above that the solution with the lowest error is the formula this data was generated from ### Try our own dataset generation and equation learning The code below generates our regression example dataset We generate points for 4 columns, where x0 is from the same equation as x1, and x2 is from the same equation as x3 The last column is Y ``` import os import random os.chdir("/content/AI-Feynman/example_data") def getY(x01,x23): y = -0.5*x01+0.5*x23+3 return y def getRow(): [x0,x2]=[random.random() for x in range(2)] x1=x0 x3=x2 y=getY(x1,x3) return str(x0)+" "+str(x1)+" "+str(x2)+" "+str(x3)+" "+str(y)+"\n" with open("duplicateVarsExample.txt", "w") as f: for _ in range(10000): f.write(getRow()) f.close() # switch back to the code directory os.chdir("/content/AI-Feynman/Code") ``` Let's look at our data ``` !head -n 10 ../example_data/duplicateVarsExample.txt ``` Let's also plot the data for x01 and x23 against Y ``` %matplotlib inline import matplotlib.pyplot as plt import pandas as pd plt.style.use('seaborn-whitegrid') import numpy as np df=pd.read_csv("../example_data/duplicateVarsExample.txt",sep=" ",header=None) df.plot.scatter(x=0, y=4) df.plot.scatter(x=2, y=4) ``` Now we run the experiment, and go get more coffee, because this is not going to be fast... ``` from S_run_aifeynman import run_aifeynman run_aifeynman("/content/AI-Feynman/example_data/","duplicateVarsExample.txt",30,"14ops.txt", polyfit_deg=3, NN_epochs=400) ``` Initial models quickly mapped to x0 and x2 (the system realized x1 and x3 are duplicates and so not needed) Later on the system found 3.000000000000+log(sqrt(exp((x2-x1)))) which is a bit crazy but looks like a plane We can see on Wolfram alpha that an equivalent form of this equation is: (x2 - x1)/2 + 3.000000000000 which is what we used to generate the dataset! Link: https://www.wolframalpha.com/input/?i=3.000000000000%2Blog%28sqrt%28exp%28%28x2-x1%29%29%29%29 ``` !ls -l /content/AI-Feynman/Code/results/ !cat /content/AI-Feynman/Code/results/solution_duplicateVarsExample.txt ``` The solver settled on *log(sqrt(exp(-x1 + x3))) + 3.0* which we know is correct Now, that was a bit of a softball problem as it has an exact solution. 
Let's now add noise to the dataset and see how the library holds up ### Let's add small amount of noise to every variabe and see the fit quality We do the same thing as before, but now we add or subtract noise to x0,x1,x2,x3 after generating y ``` import os import random import numpy as np os.chdir("/content/AI-Feynman/example_data") def getY(x01,x23): y = -0.5*x01+0.5*x23+3 return y def getRow(): x=[random.random() for x in range(4)] x[1]=x[0] x[3]=x[2] y=getY(x[1],x[3]) mu=0 sigma=0.05 noise=np.random.normal(mu, sigma, 4) x=x+noise return str(x[0])+" "+str(x[1])+" "+str(x[2])+" "+str(x[3])+" "+str(y)+"\n" with open("duplicateVarsWithNoise100k.txt", "w") as f: for _ in range(100000): f.write(getRow()) f.close() # switch back to the code directory os.chdir("/content/AI-Feynman/Code") ``` Let's have a look at the data ``` !head -n 20 ../example_data/duplicateVarsWithNoise100k.txt ``` Now let's plot the data ``` %matplotlib inline import matplotlib.pyplot as plt import pandas as pd plt.style.use('seaborn-whitegrid') import numpy as np df=pd.read_csv("../example_data/duplicateVarsWithNoise100k.txt",sep=" ",header=None) df.plot.scatter(x=0, y=4) df.plot.scatter(x=1, y=4) df.plot.scatter(x=2, y=4) df.plot.scatter(x=3, y=4) from S_run_aifeynman import run_aifeynman run_aifeynman("/content/AI-Feynman/example_data/","duplicateVarsWithNoise100k.txt",30,"14ops.txt", polyfit_deg=3, NN_epochs=600) !cat /content/AI-Feynman/Code/results/solution_duplicateVarsWithNoise100k.txt !cp -r /content/AI-Feynman /content/gdrive/My\ Drive/Lemay.ai_research/ # from S_run_aifeynman import run_aifeynman # run_aifeynman("/content/gdrive/My Drive/Lemay.ai_research/AI-Feynman/example_data/","duplicateVarsWithNoise.txt",30,"19ops.txt", polyfit_deg=3, NN_epochs=1000) import os import random import numpy as np os.chdir("/content/gdrive/My Drive/Lemay.ai_research/AI-Feynman/example_data") def getY(x01,x23): y = -0.5*x01+0.5*x23+3 return y def getRow(): x=[0 for x in range(4)] x[1]=random.random() x[3]=random.random() y=getY(x[1],x[3]) mu=0 sigma=0.05 noise=np.random.normal(mu, sigma, 4) x=x+noise return str(x[1])+" "+str(x[3])+" "+str(y)+"\n" with open("varsWithNoise.txt", "w") as f: for _ in range(100000): f.write(getRow()) f.close() # switch back to the code directory os.chdir("/content/gdrive/My Drive/Lemay.ai_research/AI-Feynman/Code") %matplotlib inline import matplotlib.pyplot as plt import pandas as pd plt.style.use('seaborn-whitegrid') import numpy as np df=pd.read_csv("../example_data/varsWithNoise.txt",sep=" ",header=None) df.plot.scatter(x=0, y=2) df.plot.scatter(x=1, y=2) from S_run_aifeynman import run_aifeynman run_aifeynman("/content/gdrive/My Drive/Lemay.ai_research/AI-Feynman/example_data/","varsWithNoise.txt",30,"14ops.txt", polyfit_deg=3, NN_epochs=1000) ```
## AI for Medicine Course 1 Week 1 lecture exercises # Data Exploration In the first assignment of this course, you will work with chest x-ray images taken from the public [ChestX-ray8 dataset](https://arxiv.org/abs/1705.02315). In this notebook, you'll get a chance to explore this dataset and familiarize yourself with some of the techniques you'll use in the first graded assignment. <img src="xray-image.png" alt="U-net Image" width="300" align="middle"/> The first step before jumping into writing code for any machine learning project is to explore your data. A standard Python package for analyzing and manipulating data is [pandas](https://pandas.pydata.org/docs/#). With the next two code cells, you'll import `pandas` and a package called `numpy` for numerical manipulation, then use `pandas` to read a csv file into a dataframe and print out the first few rows of data. ``` # Import necessary packages import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline import os import seaborn as sns sns.set() # Read csv file containing training datadata train_df = pd.read_csv("nih/train-small.csv") # Print first 5 rows print(f'There are {train_df.shape[0]} rows and {train_df.shape[1]} columns in this data frame') train_df.head() ``` Have a look at the various columns in this csv file. The file contains the names of chest x-ray images ("Image" column) and the columns filled with ones and zeros identify which diagnoses were given based on each x-ray image. ### Data types and null values check Run the next cell to explore the data types present in each column and whether any null values exist in the data. ``` # Look at the data type of each column and whether null values are present train_df.info() ``` ### Unique IDs check "PatientId" has an identification number for each patient. One thing you'd like to know about a medical dataset like this is if you're looking at repeated data for certain patients or whether each image represents a different person. ``` print(f"The total patient ids are {train_df['PatientId'].count()}, from those the unique ids are {train_df['PatientId'].value_counts().shape[0]} ") ``` As you can see, the number of unique patients in the dataset is less than the total number so there must be some overlap. For patients with multiple records, you'll want to make sure they do not show up in both training and test sets in order to avoid data leakage (covered later in this week's lectures). ### Explore data labels Run the next two code cells to create a list of the names of each patient condition or disease. ``` columns = train_df.keys() columns = list(columns) print(columns) # Remove unnecesary elements columns.remove('Image') columns.remove('PatientId') # Get the total classes print(f"There are {len(columns)} columns of labels for these conditions: {columns}") ``` Run the next cell to print out the number of positive labels (1's) for each condition ``` # Print out the number of positive labels for each class for column in columns: print(f"The class {column} has {train_df[column].sum()} samples") ``` Have a look at the counts for the labels in each class above. Does this look like a balanced dataset? ### Data Visualization Using the image names listed in the csv file, you can retrieve the image associated with each row of data in your dataframe. Run the cell below to visualize a random selection of images from the dataset. 
``` # Extract numpy values from Image column in data frame images = train_df['Image'].values # Extract 9 random images from it random_images = [np.random.choice(images) for i in range(9)] # Location of the image dir img_dir = 'nih/images-small/' print('Display Random Images') # Adjust the size of your images plt.figure(figsize=(20,10)) # Iterate and plot random images for i in range(9): plt.subplot(3, 3, i + 1) img = plt.imread(os.path.join(img_dir, random_images[i])) plt.imshow(img, cmap='gray') plt.axis('off') # Adjust subplot parameters to give specified padding plt.tight_layout() ``` ### Investigate a single image Run the cell below to look at the first image in the dataset and print out some details of the image contents. ``` # Get the first image that was listed in the train_df dataframe sample_img = train_df.Image[0] raw_image = plt.imread(os.path.join(img_dir, sample_img)) plt.imshow(raw_image, cmap='gray') plt.colorbar() plt.title('Raw Chest X Ray Image') print(f"The dimensions of the image are {raw_image.shape[0]} pixels width and {raw_image.shape[1]} pixels height, one single color channel") print(f"The maximum pixel value is {raw_image.max():.4f} and the minimum is {raw_image.min():.4f}") print(f"The mean value of the pixels is {raw_image.mean():.4f} and the standard deviation is {raw_image.std():.4f}") ``` ### Investigate pixel value distribution Run the cell below to plot up the distribution of pixel values in the image shown above. ``` # Plot a histogram of the distribution of the pixels sns.distplot(raw_image.ravel(), label=f'Pixel Mean {np.mean(raw_image):.4f} & Standard Deviation {np.std(raw_image):.4f}', kde=False) plt.legend(loc='upper center') plt.title('Distribution of Pixel Intensities in the Image') plt.xlabel('Pixel Intensity') plt.ylabel('# Pixels in Image') ``` <a name="image-processing"></a> # Image Preprocessing in Keras Before training, you'll first modify your images to be better suited for training a convolutional neural network. For this task you'll use the Keras [ImageDataGenerator](https://keras.io/preprocessing/image/) function to perform data preprocessing and data augmentation. Run the next two cells to import this function and create an image generator for preprocessing. ``` # Import data generator from keras from keras.preprocessing.image import ImageDataGenerator # Normalize images image_generator = ImageDataGenerator( samplewise_center=True, #Set each sample mean to 0. samplewise_std_normalization= True # Divide each input by its standard deviation ) ``` ### Standardization The `image_generator` you created above will act to adjust your image data such that the new mean of the data will be zero, and the standard deviation of the data will be 1. In other words, the generator will replace each pixel value in the image with a new value calculated by subtracting the mean and dividing by the standard deviation. $$\frac{x_i - \mu}{\sigma}$$ Run the next cell to pre-process your data using the `image_generator`. In this step you will also be reducing the image size down to 320x320 pixels. 
``` # Flow from directory with specified batch size and target image size generator = image_generator.flow_from_dataframe( dataframe=train_df, directory="nih/images-small/", x_col="Image", # features y_col= ['Mass'], # labels class_mode="raw", # 'Mass' column should be in train_df batch_size= 1, # images per batch shuffle=False, # shuffle the rows or not target_size=(320,320) # width and height of output image ) ``` Run the next cell to plot an example of a pre-processed image. ``` # Plot a processed image sns.set_style("white") generated_image, label = generator.__getitem__(0) plt.imshow(generated_image[0], cmap='gray') plt.colorbar() plt.title('Pre-processed Chest X Ray Image') print(f"The dimensions of the image are {generated_image.shape[1]} pixels width and {generated_image.shape[2]} pixels height") print(f"The maximum pixel value is {generated_image.max():.4f} and the minimum is {generated_image.min():.4f}") print(f"The mean value of the pixels is {generated_image.mean():.4f} and the standard deviation is {generated_image.std():.4f}") ``` Run the cell below to see a comparison of the distribution of pixel values in the new pre-processed image versus the raw image. ``` # Include a histogram of the distribution of the pixels sns.set() plt.figure(figsize=(10, 7)) # Plot histogram for original image sns.distplot(raw_image.ravel(), label=f'Original Image: mean {np.mean(raw_image):.4f} - Standard Deviation {np.std(raw_image):.4f} \n ' f'Min pixel value {np.min(raw_image):.4} - Max pixel value {np.max(raw_image):.4}', color='blue', kde=False) # Plot histogram for generated image sns.distplot(generated_image[0].ravel(), label=f'Generated Image: mean {np.mean(generated_image[0]):.4f} - Standard Deviation {np.std(generated_image[0]):.4f} \n' f'Min pixel value {np.min(generated_image[0]):.4} - Max pixel value {np.max(generated_image[0]):.4}', color='red', kde=False) # Place legends plt.legend() plt.title('Distribution of Pixel Intensities in the Image') plt.xlabel('Pixel Intensity') plt.ylabel('# Pixels') ``` #### That's it for this exercise, you should now be a bit more familiar with the dataset you'll be using in this week's assignment!
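As an optional extra check on the standardization step above, the same per-image transformation can be reproduced directly with `numpy`. This is a minimal sketch (not part of the original exercise) that assumes the `raw_image` array loaded earlier is still in scope; note the generator also resizes to 320x320, so its output will not match this array element-for-element.

```python
import numpy as np

# Manually apply (x - mean) / std to the raw image, mirroring what
# samplewise_center and samplewise_std_normalization do in the generator.
manual_standardized = (raw_image - raw_image.mean()) / raw_image.std()

# After standardization the mean should be close to 0 and the std close to 1.
print(f"Mean after standardization: {manual_standardized.mean():.4f}")
print(f"Std after standardization: {manual_standardized.std():.4f}")
```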
``` !nvidia-smi ``` # **Intro to Generative Adversarial Networks (GANs)** Generative adversarial networks (GANs) are algorithmic architectures that use two neural networks, pitting one against the other (thus the “adversarial”) in order to generate new, synthetic instances of data that can pass for real data. They are used widely in image generation, video generation and voice generation. GANs were introduced in [a paper by Ian Goodfellow](https://arxiv.org/abs/1406.2661) and other researchers at the University of Montreal, including Yoshua Bengio, in 2014. Referring to GANs, Facebook’s AI research director Yann LeCun called adversarial training “the most interesting idea in the last 10 years in ML.” ## **Some cool demos**: * Progress over the last several years, from [Ian Goodfellow tweet](https://twitter.com/goodfellow_ian/status/1084973596236144640) <img src='http://drive.google.com/uc?export=view&id=1PSfze4ZHgAn4BAjLuZhqAZO_HJQ1NEHX' width=1000 height=350/> A generative adversarial network (GAN) has two parts: * The **generator** learns to generate plausible data. The generated instances become negative training examples for the discriminator. * The **discriminator** learns to distinguish the generator's fake data from real data. The discriminator penalizes the generator for producing implausible results. When training begins, the generator produces obviously fake data, and the discriminator quickly learns to tell that it's fake: <img src='http://drive.google.com/uc?export=view&id=1Auxzsi3395vL0K80GfYlAEvWufTMTZ59' width=1000 height=350/> # **<font color='Darkblue'>Import Required Libraries:</font>** ``` from __future__ import print_function #%matplotlib inline import random import torch from torch import nn from tqdm.auto import tqdm from torchvision import transforms from torchvision import datasets # Training dataset from torchvision.utils import make_grid from torchvision import utils from torch.utils.data import DataLoader import matplotlib.pyplot as plt import numpy as np import matplotlib.animation as animation from IPython.display import HTML # Decide which device we want to run on device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") print("My device: => ", device) # Set random seed for reproducibility my_seed = 123 random.seed(my_seed) torch.manual_seed(my_seed); ``` # **Fashion-MNIST Dataset:** `Fashion-MNIST` is a dataset of [Zalando](https://jobs.zalando.com/en/tech/?gh_src=281f2ef41us)'s article images, consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. Fashion-MNIST is intended to serve as a direct drop-in replacement for the original [MNIST dataset](http://yann.lecun.com/exdb/mnist/) for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits. 
<img src='https://raw.githubusercontent.com/zalandoresearch/fashion-mnist/master/doc/img/fashion-mnist-sprite.png' width=1000 height=700/> ``` batch_size = 128 transform = transforms.Compose([transforms.ToTensor()]) data_train = datasets.FashionMNIST('./data', download=True, train=True, transform=transform) train_loader = DataLoader(data_train, batch_size=batch_size, shuffle=True) classes = ['T-shirt/top','Trouser','Pullover','Dress','Coat','Sandal','Shirt','Sneaker','Bag','Ankle Boot'] dataiter = iter(train_loader) images, labels = dataiter.next() images_arr = [] labels_arr = [] for i in range(0, 30): images_arr.append(images[i].unsqueeze(0)) labels_arr.append(labels[i].item()) fig = plt.figure(figsize=(25, 10)) for i in range(30): ax = fig. add_subplot(3, 10, i+1, xticks=[], yticks=[]) ax.imshow(images_arr[i].resize_(1, 28, 28).numpy().squeeze(), cmap='gray') ax.set_title("{}".format(classes[labels_arr[i]]), color=("blue")) ``` # **<font color='darkorange'>Generator Part:</font>** The generator part of a GAN learns to create fake data by incorporating feedback from the discriminator. It learns to make the discriminator classify its output as real. Generator training requires tighter integration between the generator and the discriminator than discriminator training requires. The portion of the GAN that trains the generator includes: * random input * generator network, which transforms the random input into a data instance * discriminator network, which classifies the generated data * discriminator output * generator loss, which penalizes the generator for failing to fool the discriminator <img src='http://drive.google.com/uc?export=view&id=1dbk5FmAHE3LHwspYm8qxL-qHBLXBq29i' width=1000 height=350/> ### **Generator Block:** ``` def get_generator_block(input_dim, output_dim): seq = nn.Sequential( nn.Linear(input_dim, output_dim), nn.BatchNorm1d(output_dim), nn.LeakyReLU(negative_slope=0.2, inplace=False), nn.Dropout(0.3), ) return seq ``` ### **Generator Class:** ``` class Generator(nn.Module): def __init__(self, z_dim=10, img_dim=28*28, hidden_dim=128): super(Generator, self).__init__() self.gen = nn.Sequential( get_generator_block(z_dim, hidden_dim), get_generator_block(hidden_dim, hidden_dim * 2), get_generator_block(hidden_dim * 2, hidden_dim * 4), get_generator_block(hidden_dim * 4, hidden_dim * 8), nn.Linear(hidden_dim * 8, img_dim), nn.Sigmoid(), ) def forward(self, noise): gen_output = self.gen(noise) return gen_output # Generate Noise: def get_generator_noise(n_sample, z_dim, device='cpu'): my_noise = torch.randn(n_sample, z_dim, device=device) return my_noise ``` # **<font color='darkorange'>Discriminator Part:</font>** The discriminator in a GAN is simply a classifier. It tries to distinguish real data from the data created by the generator. It could use any network architecture appropriate to the type of data it's classifying. The discriminator's training data comes from two sources: * **Real data** instances, such as real pictures of people. The discriminator uses these instances as positive examples during training. * **Fake data** instances created by the generator. The discriminator uses these instances as negative examples during training. 
<img src='http://drive.google.com/uc?export=view&id=1A3_gYqcPORqXFio1wNHAsc8ZndY3zpIP' width=1000 height=350/> ``` # Discriminator Block def get_discriminator_block(input_dim, output_dim): seq = nn.Sequential( nn.Linear(input_dim, output_dim), nn.LeakyReLU(negative_slope=0.2, inplace=False), ) return seq ``` <img src='https://miro.medium.com/max/1400/1*siH_yCvYJ9rqWSUYeDBiRA.png' width=800 height=400/> ``` # Discriminator Class: class Discriminator(nn.Module): def __init__(self, img_dim=28*28, hidden_dim=128): super(Discriminator, self).__init__() self.disc = nn.Sequential( get_discriminator_block(img_dim, hidden_dim * 4), get_discriminator_block(hidden_dim * 4, hidden_dim * 2), get_discriminator_block(hidden_dim * 2, hidden_dim), nn.Linear(hidden_dim, 1), ) def forward(self, image): state = self.disc(image) return state ``` # **<font color='deepskyblue'>Training Process:</font>** Because a GAN contains two separately trained networks, its training algorithm must address two complications: * GANs must juggle two different kinds of training (generator and discriminator). * GAN convergence is hard to identify. ### **Set Hyperparameters:** ``` # Set your parameters criterion = nn.BCEWithLogitsLoss() num_epochs = 51 z_dim = 64 display_step = 100 lr = 0.0001 size = (1, 28, 28) device = 'cuda' # Generator: generator = Generator(z_dim).to(device) gen_optimizer = torch.optim.Adam(generator.parameters(), lr=lr) # Discriminator: discriminator = Discriminator().to(device) disc_optimizer = torch.optim.Adam(discriminator.parameters(), lr=lr) # Discriminator Loss: def get_discriminator_loss(gen, disc, criterion, real, num_images, z_dim, device): noise = get_generator_noise(num_images, z_dim, device=device) gen_output = gen(noise) disc_out_fake = disc(gen_output.detach()) disc_loss_fake = criterion(disc_out_fake, torch.zeros_like(disc_out_fake)) disc_out_real = disc(real) disc_loss_real = criterion(disc_out_real, torch.ones_like(disc_out_real)) disc_loss = (disc_loss_fake + disc_loss_real) / 2 return disc_loss # Generator Loss: def get_generator_loss(gen, disc, criterion, num_images, z_dim, device): noise = get_generator_noise(num_images, z_dim, device=device) gen_output = gen(noise) disc_preds = disc(gen_output) # gen_output.detach() gen_loss = criterion(disc_preds, torch.ones_like(disc_preds)) return gen_loss # Show Images Function: def show_tensor_images(real, fake, num_images=25, size=(1, 28, 28)): plt.figure(figsize=(15,15)) image_unflat_real = real.detach().cpu().view(-1, *size) image_grid_real = make_grid(image_unflat_real[:num_images], nrow=5, normalize=True, padding=2) plt.subplot(1,2,1) plt.axis("off") plt.title("Real Images") plt.imshow(image_grid_real.permute(1, 2, 0).squeeze()) image_unflat_fake = fake.detach().cpu().view(-1, *size) image_grid_fake = make_grid(image_unflat_fake[:num_images], nrow=5, normalize=True, padding=2) plt.subplot(1,2,2) plt.axis("off") plt.title("Fake Images") plt.imshow(image_grid_fake.permute(1, 2, 0).squeeze()) plt.show() # Training Loop img_list = [] G_losses = [] D_losses = [] iters = 0 cur_step = 0 img_show = 3 mean_generator_loss = 0 mean_discriminator_loss = 0 for epoch in range(num_epochs): for real, _ in tqdm(train_loader): cur_batch_size = len(real) real = real.view(cur_batch_size, -1).to(device) disc_optimizer.zero_grad() disc_loss = get_discriminator_loss(generator, discriminator, criterion, real, cur_batch_size, z_dim, device) disc_loss.backward() disc_optimizer.step() gen_optimizer.zero_grad() gen_loss = get_generator_loss(generator, discriminator, 
criterion, cur_batch_size, z_dim, device) gen_loss.backward() gen_optimizer.step() mean_discriminator_loss += disc_loss.item() / cur_batch_size mean_generator_loss += gen_loss.item() / cur_batch_size G_losses.append(mean_generator_loss) D_losses.append(mean_discriminator_loss) if cur_step % display_step == 0 and cur_step >= 0: print(f"[Epoch: {epoch}/{num_epochs}] | [Step: {cur_step}/{num_epochs*len(train_loader)}], Generator Loss: {mean_generator_loss}, Discriminator Loss: {mean_discriminator_loss}") fake_noise = get_generator_noise(cur_batch_size, z_dim, device=device) fake = generator(fake_noise) img_list.append(make_grid(fake.detach().cpu().view(-1, *size)[:36], nrow=6, normalize=True, padding=2)) mean_discriminator_loss = 0 mean_generator_loss = 0 cur_step += 1 if epoch % img_show == 0: fake_noise = get_generator_noise(cur_batch_size, z_dim, device=device) fake = generator(fake_noise) show_tensor_images(real, fake) ``` # **Visualization:** ``` plt.figure(figsize=(20, 7)) plt.title("Generator and Discriminator Loss During Training") plt.plot(G_losses, label="Generator") plt.plot(D_losses, label="Discriminator") plt.xlabel("steps") plt.ylabel("Loss") plt.legend() plt.show() fig = plt.figure(figsize=(8, 8)) plt.axis("off") imgs = [[plt.imshow(np.transpose(img, (1,2,0)), animated=True)] for img in img_list] anim = animation.ArtistAnimation(fig, imgs, interval=100, repeat_delay=1000, blit=True) HTML(anim.to_jshtml()) ```
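As a follow-up, here is a minimal sketch of how new images could be drawn from the trained generator once the loop above has finished. It assumes the `generator`, `get_generator_noise`, `z_dim`, `size` and `device` objects defined earlier are still in scope.

```python
# Put the generator in eval mode so BatchNorm/Dropout behave deterministically,
# then sample fresh noise and generate a grid of images without tracking gradients.
generator.eval()
with torch.no_grad():
    sample_noise = get_generator_noise(25, z_dim, device=device)
    samples = generator(sample_noise).cpu().view(-1, *size)

grid = make_grid(samples, nrow=5, normalize=True, padding=2)
plt.figure(figsize=(6, 6))
plt.axis("off")
plt.title("Samples from the trained generator")
plt.imshow(grid.permute(1, 2, 0).squeeze())
plt.show()

# Switch back with generator.train() if you want to continue training afterwards.
```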
# Iris Training and Prediction with Sagemaker Scikit-learn ### Modified Version of AWS Example: https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/scikit_learn_iris/Scikit-learn%20Estimator%20Example%20With%20Batch%20Transform.ipynb Following modifications were made: 1. Incorporated scripts for local mode hosting 2. Added Train and Test Channels 3. Visualize results (confusion matrix and reports) 4. Added steps to deploy using model artifacts stored in S3 Following Script changes were made: 1. RandomForest Algorithm 2. Refactored script to follow the template provided in tensorflow example: https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/tensorflow_script_mode_training_and_serving/tensorflow_script_mode_training_and_serving.ipynb This tutorial shows you how to use [Scikit-learn](https://scikit-learn.org/stable/) with Sagemaker by utilizing the pre-built container. Scikit-learn is a popular Python machine learning framework. It includes a number of different algorithms for classification, regression, clustering, dimensionality reduction, and data/feature pre-processing. The [sagemaker-python-sdk](https://github.com/aws/sagemaker-python-sdk) module makes it easy to take existing scikit-learn code, which we will show by training a model on the IRIS dataset and generating a set of predictions. For more information about the Scikit-learn container, see the [sagemaker-scikit-learn-containers](https://github.com/aws/sagemaker-scikit-learn-container) repository and the [sagemaker-python-sdk](https://github.com/aws/sagemaker-python-sdk) repository. For more on Scikit-learn, please visit the Scikit-learn website: <http://scikit-learn.org/stable/>. ### Table of contents * [Upload the data for training](#upload_data) * [Create a Scikit-learn script to train with](#create_sklearn_script) * [Create the SageMaker Scikit Estimator](#create_sklearn_estimator) * [Train the SKLearn Estimator on the Iris data](#train_sklearn) * [Using the trained model to make inference requests](#inferece) * [Deploy the model](#deploy) * [Choose some data and use it for a prediction](#prediction_request) * [Endpoint cleanup](#endpoint_cleanup) * [Batch Transform](#batch_transform) * [Prepare Input Data](#prepare_input_data) * [Run Transform Job](#run_transform_job) * [Check Output Data](#check_output_data) First, lets create our Sagemaker session and role, and create a S3 prefix to use for the notebook example. 
### Local Mode Execution - requires docker compose configured ### The below setup script is from AWS SageMaker Python SDK Examples : tf-eager-sm-scriptmode.ipynb ``` !/bin/bash ./setup.sh import os import sys import sagemaker from sagemaker import get_execution_role import pandas as pd import numpy as np import matplotlib.pyplot as plt import itertools from sklearn import preprocessing from sklearn.metrics import classification_report, confusion_matrix # SageMaker SKLearn Estimator from sagemaker.sklearn.estimator import SKLearn sagemaker_session = sagemaker.Session() role = get_execution_role() region = sagemaker_session.boto_session.region_name ``` ## Training Data ``` column_list_file = 'iris_train_column_list.txt' train_file = 'iris_train.csv' test_file = 'iris_validation.csv' columns = '' with open(column_list_file,'r') as f: columns = f.read().split(',') # Specify your bucket name bucket_name = 'chandra-ml-sagemaker' training_folder = r'iris/train' test_folder = r'iris/test' model_folder = r'iris/model/' training_data_uri = r's3://' + bucket_name + r'/' + training_folder testing_data_uri = r's3://' + bucket_name + r'/' + test_folder model_data_uri = r's3://' + bucket_name + r'/' + model_folder training_data_uri,testing_data_uri,model_data_uri sagemaker_session.upload_data(train_file, bucket=bucket_name, key_prefix=training_folder) sagemaker_session.upload_data(test_file, bucket=bucket_name, key_prefix=test_folder) ``` Once we have the data locally, we can use use the tools provided by the SageMaker Python SDK to upload the data to a default bucket. ## Create a Scikit-learn script to train with <a class="anchor" id="create_sklearn_script"></a> SageMaker can now run a scikit-learn script using the `SKLearn` estimator. When executed on SageMaker a number of helpful environment variables are available to access properties of the training environment, such as: * `SM_MODEL_DIR`: A string representing the path to the directory to write model artifacts to. Any artifacts saved in this folder are uploaded to S3 for model hosting after the training job completes. * `SM_OUTPUT_DIR`: A string representing the filesystem path to write output artifacts to. Output artifacts may include checkpoints, graphs, and other files to save, not including model artifacts. These artifacts are compressed and uploaded to S3 to the same S3 prefix as the model artifacts. Supposing two input channels, 'train' and 'test', were used in the call to the `SKLearn` estimator's `fit()` method, the following environment variables will be set, following the format `SM_CHANNEL_[channel_name]`: * `SM_CHANNEL_TRAIN`: A string representing the path to the directory containing data in the 'train' channel * `SM_CHANNEL_TEST`: Same as above, but for the 'test' channel. A typical training script loads data from the input channels, configures training with hyperparameters, trains a model, and saves a model to model_dir so that it can be hosted later. Hyperparameters are passed to your script as arguments and can be retrieved with an `argparse.ArgumentParser` instance. For example, the script that we will run in this notebook is the below: ``` !pygmentize 'scikit_learn_iris.py' ``` Because the Scikit-learn container imports your training script, you should always put your training code in a main guard `(if __name__=='__main__':)` so that the container does not inadvertently run your training code at the wrong point in execution. 
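In case the `pygmentize` cell above is not convenient to run, the skeleton below illustrates the structure such a training script typically follows. It is only a sketch: the argument names, file name and model settings are illustrative assumptions, and the actual `scikit_learn_iris.py` in this repository may differ (for example, it also needs a `model_fn` for hosting).

```python
# Illustrative skeleton of a SageMaker scikit-learn training script (assumption,
# not the actual scikit_learn_iris.py).
import argparse
import os

import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

if __name__ == '__main__':
    parser = argparse.ArgumentParser()

    # Hyperparameters arrive as named command-line arguments
    parser.add_argument('--n_estimators', type=int, default=50)
    parser.add_argument('--max_depth', type=int, default=5)

    # Paths are exposed through SageMaker environment variables
    parser.add_argument('--model-dir', type=str, default=os.environ.get('SM_MODEL_DIR'))
    parser.add_argument('--train', type=str, default=os.environ.get('SM_CHANNEL_TRAINING'))
    args = parser.parse_args()

    # Load the training data from the 'training' channel (file name is an assumption)
    train_df = pd.read_csv(os.path.join(args.train, 'iris_train.csv'), header=None)
    X, y = train_df.iloc[:, 1:], train_df.iloc[:, 0]

    # Fit the model and save it to SM_MODEL_DIR so SageMaker uploads it to S3
    model = RandomForestClassifier(n_estimators=args.n_estimators, max_depth=args.max_depth)
    model.fit(X, y)
    joblib.dump(model, os.path.join(args.model_dir, 'model.joblib'))
```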
For more information about training environment variables, please visit https://github.com/aws/sagemaker-containers. ## Create SageMaker Scikit Estimator <a class="anchor" id="create_sklearn_estimator"></a> To run our Scikit-learn training script on SageMaker, we construct a `sagemaker.sklearn.estimator.sklearn` estimator, which accepts several constructor arguments: * __entry_point__: The path to the Python script SageMaker runs for training and prediction. * __role__: Role ARN * __train_instance_type__ *(optional)*: The type of SageMaker instances for training. __Note__: Because Scikit-learn does not natively support GPU training, Sagemaker Scikit-learn does not currently support training on GPU instance types. * __sagemaker_session__ *(optional)*: The session used to train on Sagemaker. * __hyperparameters__ *(optional)*: A dictionary passed to the train function as hyperparameters. To see the code for the SKLearn Estimator, see here: https://github.com/aws/sagemaker-python-sdk/tree/master/src/sagemaker/sklearn ``` #instance_type='ml.m5.xlarge' instance_type='local' # Reference: http://sagemaker.readthedocs.io/en/latest/estimators.html # SDK 2.x version does not require train prefix for instance count and type # Specify framework and python Version estimator = SKLearn(entry_point='scikit_learn_iris.py', framework_version = "0.20.0", py_version = 'py3', instance_type= instance_type, role=role, output_path=model_data_uri, base_job_name='sklearn-iris', hyperparameters={'n_estimators': 50,'max_depth':5}) ``` ## Train SKLearn Estimator on Iris data <a class="anchor" id="train_sklearn"></a> Training is very simple, just call `fit` on the Estimator! This will start a SageMaker Training job that will download the data for us, invoke our scikit-learn code (in the provided script file), and save any model artifacts that the script creates. ``` estimator.fit({'training':training_data_uri,'testing':testing_data_uri}) estimator.latest_training_job.job_name estimator.model_data ``` ## Using the trained model to make inference requests <a class="anchor" id="inference"></a> ### Deploy the model <a class="anchor" id="deploy"></a> Deploying the model to SageMaker hosting just requires a `deploy` call on the fitted model. This call takes an instance count and instance type. ``` predictor = estimator.deploy(initial_instance_count=1, instance_type=instance_type) ``` ### Choose some data and use it for a prediction <a class="anchor" id="prediction_request"></a> In order to do some predictions, we'll extract some of the data we used for training and do predictions against it. This is, of course, bad statistical practice, but a good way to see how the mechanism works. ``` df = pd.read_csv(test_file, names=columns) from sklearn import preprocessing from sklearn.metrics import classification_report, confusion_matrix # Encode Class Labels to integers # Labeled Classes labels=[0,1,2] classes = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica'] le = preprocessing.LabelEncoder() le.fit(classes) df.head() X_test = df.iloc[:,1:] print(X_test[:5]) result = predictor.predict(X_test) result df['predicted_class'] = result df.head() ``` <h2>Confusion Matrix</h2> Confusion Matrix is a table that summarizes performance of classification model.<br><br> ``` # Reference: # https://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues): """ This function prints and plots the confusion matrix. 
Normalization can be applied by setting `normalize=True`. """ if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] #print("Normalized confusion matrix") #else: # print('Confusion matrix, without normalization') #print(cm) plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(classes)) plt.xticks(tick_marks, classes, rotation=45) plt.yticks(tick_marks, classes) fmt = '.2f' if normalize else 'd' thresh = cm.max() / 2. for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): plt.text(j, i, format(cm[i, j], fmt), horizontalalignment="center", color="white" if cm[i, j] > thresh else "black") plt.ylabel('True label') plt.xlabel('Predicted label') plt.tight_layout() # Compute confusion matrix cnf_matrix = confusion_matrix(df['encoded_class'], df['predicted_class'],labels=labels) cnf_matrix # Plot confusion matrix plt.figure() plot_confusion_matrix(cnf_matrix, classes=classes, title='Confusion matrix - Count') # Plot confusion matrix plt.figure() plot_confusion_matrix(cnf_matrix, classes=classes, title='Confusion matrix - Count',normalize=True) print(classification_report( df['encoded_class'], df['predicted_class'], labels=labels, target_names=classes)) ``` ### Endpoint cleanup <a class="anchor" id="endpoint_cleanup"></a> When you're done with the endpoint, you'll want to clean it up. ``` # SDK 2 predictor.delete_endpoint() ``` ## Another way to deploy endpoint ## Using trained model artifacts https://sagemaker.readthedocs.io/en/stable/sagemaker.sklearn.html#scikit-learn-predictor https://sagemaker.readthedocs.io/en/stable/using_sklearn.html#working-with-existing-model-data-and-training-jobs ``` model_data = estimator.model_data model_data import sagemaker.sklearn model = sagemaker.sklearn.model.SKLearnModel(model_data=model_data, role=role, entry_point='scikit_learn_iris.py', framework_version = "0.20.0", py_version = 'py3') predictor_2 = model.deploy(initial_instance_count=1, instance_type=instance_type) predictor_2.predict(X_test[:5]) predictor_2.delete_endpoint() ```
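As a side note on the confusion matrix section above: recent scikit-learn versions (0.22 and later) ship a built-in display helper that can replace the custom `plot_confusion_matrix` function. A minimal sketch, assuming `cnf_matrix` and `classes` from the cells above are in scope:

```python
from sklearn.metrics import ConfusionMatrixDisplay

# Equivalent count plot using scikit-learn's built-in display helper
disp = ConfusionMatrixDisplay(confusion_matrix=cnf_matrix, display_labels=classes)
disp.plot(cmap=plt.cm.Blues, xticks_rotation=45)
plt.title('Confusion matrix - Count')
plt.show()
```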
# Quantum State Tomography (Unsupervised Learning) Quantum state tomography (QST) is a machine learning task which aims to reconstruct the full quantum state from measurement results. __Aim__: Given a variational ansatz $\Psi(\lbrace \boldsymbol{\beta} \rbrace)$ and a set of measurement results, we want to find the parameters $\boldsymbol{\beta}$ which best reproduce the probability distribution of the measurements. __Training Data__: A set of single shot measurements in some basis, e.g. $\lbrace(100010, \textrm{XZZZYZ}), (011001, \textrm{ZXXYZZ}), \dots \rbrace$ ``` import netket as nk import numpy as np import matplotlib.pyplot as plt import math import sys ``` While in practice, the measurement results can be experimental data e.g. from a quantum computer/simulator, for our purpose, we shall construct our data set by making single shot measurement on a wavefunction that we have obtained via exact diagonalisation. For this tutorial, we shall focus on the one-dimensional anti-ferromagnetic transverse-field Ising model defined by $$H = \sum_{i} Z_{i}Z_{i+1} + h \sum_{i} X_{i}$$ ``` # Define the Hamiltonian N = 8 g = nk.graph.Hypercube(length=N, n_dim=1, pbc=False) hi = nk.hilbert.Spin(g, s=0.5) ha = nk.operator.Ising(hi, h=1.0) ``` Next, perform exact diagonalisation to obtain the ground state wavefunction. ``` # Obtain the ED wavefunction res = nk.exact.lanczos_ed(ha, first_n=1, compute_eigenvectors=True) psi = res.eigenvectors[0] print("Ground state energy =", res.eigenvalues[0]) ``` Finally, to construct the dataset, we will make single shot measurements in various bases. To obtain a single shot measurement, we need to sample from the wavefunction in the relevant basis. Since the wavefunction we obtained is in the computational basis (i.e. $Z$ basis), to obtain a single shot measurement in another basis, one would need to transform the wavefunction as follows (this is similar to how one would do a measurement in a different basis on a quantum computer): X Basis: $$ |{\Psi}\rangle \rightarrow I_n \otimes \frac{1}{\sqrt{2}}\pmatrix{1 & 1 \\ 1& -1} \otimes I_m \ |\Psi\rangle$$ Y Basis: $$ |{\Psi}\rangle \rightarrow I_n \otimes \frac{1}{\sqrt{2}}\pmatrix{1 & -i \\ 1& i} \otimes I_m \ |\Psi\rangle$$ ``` def build_rotation(hi, basis): localop = nk.operator.LocalOperator(hi, 1.0) U_X = 1.0 / (math.sqrt(2)) * np.asarray([[1.0, 1.0], [1.0, -1.0]]) U_Y = 1.0 / (math.sqrt(2)) * np.asarray([[1.0, -1j], [1.0, 1j]]) N = hi.size assert len(basis) == hi.size for j in range(hi.size): if basis[j] == "X": localop *= nk.operator.LocalOperator(hi, U_X, [j]) if basis[j] == "Y": localop *= nk.operator.LocalOperator(hi, U_Y, [j]) return localop n_basis = 2*N n_shots = 1000 rotations = [] training_samples = [] training_bases = [] np.random.seed(1234) for m in range(n_basis): basis = np.random.choice( list("XYZ"), size=N, p=[1.0 / N, 1.0 / N, (N - 2.0) / N] ) psi_rotated = np.copy(psi) if 'X' in basis or 'Y' in basis: rotation = build_rotation(hi, basis) psi_rotated = rotation.to_sparse().dot(psi_rotated) psi_square = np.square(np.absolute(psi_rotated)) rand_n = np.random.choice(hi.n_states, p=psi_square, size=n_shots) for rn in rand_n: training_samples.append(hi.number_to_state(rn)) training_bases += [m] * n_shots rotations.append(rotation) print('Number of bases:', n_basis) print('Number of shots:', n_shots) print('Total size of the dataset:', n_basis*n_shots) print('Some single shot results: (sample, basis)\n', list(zip(training_samples[:3], training_bases[:3]))) ``` The basis rotations are contained in 
``rotations`` and the single shot measurements are stored in ``training_samples``. ``training_bases`` is a list of integers which labels each samples in ``training_samples`` according to their basis. Having obtained the dataset, we can proceed to define the variational ansatz one wishes to train. We shall simply use the Restricted Boltzmann Machine (RBM) with real parameters defined as: $$ \tilde\psi (\boldsymbol{\sigma}) = p_{\boldsymbol{\lambda}}(\boldsymbol{\sigma}) e^{i \phi_{\boldsymbol{\mu}}(\boldsymbol{\sigma})} $$ where $\phi_{\boldsymbol{\mu}}(\boldsymbol{\sigma}) = \log p_{\boldsymbol{\mu}}(\boldsymbol{\sigma})$ and $p_{\boldsymbol{\lambda/\mu}}$ are standard RBM real probability distributions. Notice that the amplitude part $p_{\boldsymbol{\lambda}}$ completely defines the measurements in the Z basis and vice versa. ``` # Define the variational wavefunction ansatz ma = nk.machine.RbmSpinPhase(hilbert=hi, alpha=1) ``` With the variational ansatz as well as the dataset, the quantum state tomography can now be performed. Recall that the aim is to reconstruct a wavefunction $|\Psi_{U}\rangle$ (for our case, the ground state of the 1D TFIM) given single shot measurements of the wavefunction in various bases. The single shot measurements are governed by a probability distribution $$P_{b}(\boldsymbol{\sigma}) = | \langle \boldsymbol{\sigma}| \hat{U}_{b} |\Psi_{U}\rangle|^{2}$$ which depends on $|\Psi_{U}\rangle$ and the basis $b$ in which the measurement is performed. $\hat{U}_{b}$ is simply the unitary which rotates the wavefunction into the corresponding basis. Similarly, the variational wavefunction $|\tilde\psi\rangle$ also defines a set of basis dependent probability distributions $\tilde P_{b}(\boldsymbol{\sigma})$. The optimisation procedure is then basically a minimisation task to find the set of parameters $\boldsymbol{\kappa}$ which minimises the total Kullback–Leibler divergence between $\tilde P_{b}$ and $P_{b}$, i.e. $$ \Xi(\boldsymbol{\kappa}) = \sum_{b} \mathbb{KL}_{b}({\kappa}) $$ where $$\mathbb{KL}_{b}({\boldsymbol{\kappa}}) = \sum_{\boldsymbol{\sigma}} P_{b}(\boldsymbol{\sigma}) \log \frac{ P_{b}(\boldsymbol{\sigma})}{\tilde P_{b}(\boldsymbol{\sigma})}$$. This minimisation can be achieved by gradient descent. Although one does not have access to the underlying probability distributions $P_{b}(\boldsymbol{\sigma})$, the total KL divergence can be estimated by summing over the dataset $\lbrace D_{b} \rbrace$ $$ \Xi(\boldsymbol{\kappa}) = -\sum_{b} \sum_{\boldsymbol{\sigma} \in D_{b}} \log \tilde P_{b}(\boldsymbol{\sigma}) + S $$ where $S$ is the constant entropy of $P_{b}(\boldsymbol{\sigma})$ which can be ignored. The details regarding the computation of the gradients can be found in arXiv:1703.05334. In addition to reconstructing the quantum state, we would also like to investigate how the size of the dataset affects the quality of the reconstruction. To that end, we shall run the optimisation with different dataset sizes. 
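To make the estimator above concrete, here is a toy `numpy` illustration (with hypothetical numbers, independent of the NetKet workflow): up to the constant entropy term $S$, the quantity being minimised is the summed negative log-probability that the ansatz assigns to the measured configurations in each basis.

```python
import numpy as np

# Hypothetical model probabilities P~_b(sigma) evaluated at five measured samples
p_model_at_samples = np.array([0.12, 0.08, 0.20, 0.05, 0.15])

# Data-dependent part of the total KL divergence: a negative log-likelihood
# over the observed outcomes (averaging instead of summing only rescales it).
nll_estimate = -np.mean(np.log(p_model_at_samples))
print(f"Estimated objective (up to a constant): {nll_estimate:.4f}")
```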
``` # Shuffle our datasets import random temp = list(zip(training_bases, training_samples)) random.shuffle(temp) training_bases, training_samples = zip(*temp) # Sampler sa = nk.sampler.MetropolisLocal(machine=ma) # Optimizer op = nk.optimizer.AdaDelta() dataset_sizes = [2000, 4000, 8000, 16000] # During the optimisation, we would like to keep track # of the fidelities and energies fidelities = {} energies = {} for size in dataset_sizes: # First remember to reinitialise the machine ma.init_random_parameters(seed=1234, sigma=0.01) # Quantum State Tomography object qst = nk.unsupervised.Qsr( sampler=sa, optimizer=op, batch_size=300, n_samples=300, rotations=tuple(rotations[:size]), samples=training_samples[:size], bases=training_bases, method="Gd" ) qst.add_observable(ha, "Energy") # Run the optimisation using the iterator print("Starting optimisation for dataset size", size) fidelities_temp = [] energies_temp = [] for step in qst.iter(2000, 100): # Compute fidelity with exact state psima = ma.to_array() fidelity = np.abs(np.vdot(psima, psi)) fidelities_temp.append(fidelity) fidelities[size] = fidelities_temp energies[size] = energies_temp for size in fidelities: plt.plot(np.arange(1,21,1),fidelities[size], label='dataset size = '+str(size)) plt.axhline(y=1, xmin=0, xmax=20, linewidth=3, color='k', label='Perfect Fidelity') plt.ylabel('Fidelity') plt.xlabel('Iteration') plt.xticks(range(0,21,4)) plt.legend() plt.show() ``` As expected, it is relatively clear that with increasing dataset size, the final fidelity does increase.
## Result Visualizations with Comparison Analysis ### CHAPTER 02 - *Model Explainability Methods* From **Applied Machine Learning Explainability Techniques** by [**Aditya Bhattacharya**](https://www.linkedin.com/in/aditya-bhattacharya-b59155b6/), published by **Packt** ### Objective In this notebook, we will try to implement some of the concepts related to Comparison Analysis part of the result visualization based explainability methods discussed in Chapter 2 - Model Explainability Methods. ### Installing the modules Install the following libraries in Google Colab or your local environment, if not already installed. ``` !pip install --upgrade pandas numpy matplotlib seaborn scikit-learn statsmodels ``` ### Loading the modules ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns sns.set(style="whitegrid") import warnings warnings.filterwarnings("ignore") np.random.seed(5) from sklearn import preprocessing from sklearn.model_selection import train_test_split from sklearn.manifold import TSNE from sklearn.cluster import KMeans from statsmodels.tsa.arima.model import ARIMA import random ``` ### About the data Kaggle Data Source Link - [Kaggle | Pima Indians Diabetes Database](https://www.kaggle.com/uciml/pima-indians-diabetes-database?select=diabetes.csv) The Pima Indian Diabetes dataset is used to predict whether or not the diagnosed patient has diabetes, which is also a Binary Classification problem, based on the various diagnostic feature values provided. The dataset used for this analysis is obtained from Kaggle. Although the dataset is originally from the National Institute of Diabetes and Digestive and Kidney Diseases. The patient dynamics provided in this dataset is that of female patients who are at least 21 years old and of Pima Indian heritage. The datasets used might be derived and transformed datasets from original datasets. The sources of the original datasets will be mentioned and I would strongly recommend to look at the original data for more details on the data description and for a more detailed analysis. Kaggle Data Source Link - [Kaggle | Daily Female Births Dataset](https://www.kaggle.com/dougcresswell/daily-total-female-births-in-california-1959) This is a time series dataset used for building predictive models. The data is about total number of female births recorded in California, USA on 1959. It consists of the time index and the count of female birth and consist around 365 records. ### Loading the data ``` data = pd.read_csv('Datasets/diabetes.csv') data.head() data.shape ``` ### Data Preprocessing We will perform some preliminary pre-processing and very basic exploration as our main focus is on the comparison analysis. And since some of these methods are already covered in sufficient details in other notebook tutorials provided, I will try to jump to the important steps for the comparison analysis. ``` data[(data['BMI'] == 0) & (data['Glucose'] == 0) & (data['BloodPressure'] == 0)] data[(data['Glucose'] == 0)] ``` From the above observation, it looks like the data does have alot of noise, as there are multiple cases where some of the key features are 0. But, following human intuition, since blood glucose level is one of the key features to observe diabetes, I would consider dropping all records where Glucose value is 0. 
``` cleaned_data = data[(data['Glucose'] != 0)] cleaned_data.shape feature_engg_data = cleaned_data.copy() outlier_data = cleaned_data.copy() factor = 3 # Include this only for columns with suspected outliers # Using a factor of 3, following Nelson's rule 1 to remove outliers - https://en.wikipedia.org/wiki/Nelson_rules # Only for non-categorical fields columns_to_include = ['Pregnancies','Glucose','BloodPressure','SkinThickness','Insulin','BMI','DiabetesPedigreeFunction'] for column in columns_to_include: upper_lim = feature_engg_data[column].mean () + feature_engg_data[column].std () * factor lower_lim = feature_engg_data[column].mean () - feature_engg_data[column].std () * factor feature_engg_data = feature_engg_data[(feature_engg_data[column] < upper_lim) & (feature_engg_data[column] > lower_lim)] outlier_data = pd.concat([outlier_data, feature_engg_data]).drop_duplicates(keep=False) print(feature_engg_data.shape) print(outlier_data.shape) ``` In the following section in-order to build the model, we will need to normalize the data and split the data into train, validation and test dataset. The outlier data that we have, we will keep it separate, just in case to see how does our model performs on the outlier dataset. ``` def normalize_data(df): val = df.values min_max_normalizer = preprocessing.MinMaxScaler() norm_val = min_max_normalizer.fit_transform(val) df2 = pd.DataFrame(norm_val, columns=df.columns) return df2 norm_feature_engg_data = normalize_data(feature_engg_data) norm_outlier_data = normalize_data(outlier_data) ``` In the previous steps we have done some fundamental steps to understand and prepare the data so that it can be used for further modeling. Let's split the data and then we will try to apply the comparison analysis for result visualization based explainability. ### Splitting the data ``` input_data = norm_feature_engg_data.drop(['Outcome'],axis='columns') targets =norm_feature_engg_data.filter(['Outcome'],axis='columns') x, x_test, y, y_test = train_test_split(input_data,targets,test_size=0.1,train_size=0.9, random_state=5) x_train, x_valid, y_train, y_valid = train_test_split(x,y,test_size = 0.22,train_size =0.78, random_state=5) ``` ### t-SNE based visualization Now, to compare the classes and the formation of the clusters, we will perform t-SNE based visualization and observe the goodness of the clusters. If the clusters are not compact and well separated, it is highly possible that any classification algorithm will not work effectively because of the data formation.t-Distributed Stochastic Neighbor Embedding (t-SNE) is a dimensionality reduction method, which is often used with clustering. To find out more on this method, please refer this link: https://towardsdatascience.com/an-introduction-to-t-sne-with-python-example-5a3a293108d1. 
``` def visualize_clusters(x, labels, hue = "hls"): ''' Visualization of clusters using t-SNE plots ''' tsne_transformed = TSNE(n_components=2, random_state = 0).fit_transform(x) df_tsne_subset = pd.DataFrame() df_tsne_subset['tsne-one'] = tsne_transformed[:,0] df_tsne_subset['tsne-two'] = tsne_transformed[:,1] df_tsne_subset['y'] = labels plt.figure(figsize=(6,4)) sns.scatterplot( x="tsne-one", y="tsne-two", hue="y", palette=sns.color_palette(hue,df_tsne_subset['y'].nunique()), data=df_tsne_subset, legend="full", alpha=1.0 ) plt.show() # K-means model = KMeans(n_clusters=2, init='k-means++', random_state=0) model.fit(x) km_labels = model.predict(x) visualize_clusters(x, km_labels) ``` As we can see, overall the clusters have a compact shape and can be separated. If the clusters were not properly formed and the t-SNE transformed data points were sparse and spread out, we could have hypothesized that any classification algorithm might also fail. But in this case, apart from a few data points, most of the points belong to the two distinguishable clusters that are formed. A more detailed comparison analysis could be done to identify key information about the points that do not fall into the correct clusters, but we will not cover that in this notebook to keep things simple! ### Comparison Analysis for time series data ``` plt.rcParams["figure.figsize"] = (15,5) series = pd.read_csv('Datasets/daily-total-female-births.csv', header=0, index_col=0) series.head() series.plot(color = 'g') X = series.values X = X.astype('float32') size = len(X) - 1 train, test = X[0:size], X[size:] # fit an ARIMA model model = ARIMA(train, order=(2,1,1)) # Simple ARIMA time series forecast model model_fit = model.fit() # forecast forecast = model_fit.predict(start=366, end=466) for i in range(len(forecast)): forecast[i] = random.random() * 10 + forecast[i] result = model_fit.get_forecast() con_interval = result.conf_int(0.05) forecast_ub = forecast + 0.5 * con_interval[0][1] # Upper bound and lower bound of confidence interval forecast_lb = forecast - 0.5 * con_interval[0][0] plt.plot(series.index, series['Births'], color = 'g') plt.plot(list(range(len(series.index), len(series.index)+101)),forecast, color = 'b') plt.fill_between(list(range(len(series.index), len(series.index)+101)), forecast_lb, forecast_ub, alpha = 0.3, color = 'pink') plt.xlabel('Time Period') plt.xticks([0, 50, 120, 200, 300, 360],[series.index[0],series.index[50], series.index[120], series.index[200], series.index[300], series.index[360]] ,rotation=45) plt.ylabel('Births') plt.show() ``` From the above plot, we can see the confidence band around the predicted values. Although our model isn't very accurate, our focus is on understanding the importance of the comparison analysis. The confidence band gives us a clear indication of the range of values that the forecast can take. So it helps in setting the right expectation for the best-case and worst-case scenarios, and is far more informative than a single point estimate. Even if the actual model prediction is incorrect, the confidence interval showing the possible range of values can prevent an unpleasant surprise for end stakeholders and can eventually make the model more trustworthy. ### Final Thoughts The methods explored in this notebook are quite simple and helpful for complete black-box models. I strongly recommend trying out more examples and more problems to understand these approaches much better. ### Reference 1. 
[Kaggle | Pima Indians Diabetes Database](https://www.kaggle.com/uciml/pima-indians-diabetes-database?select=diabetes.csv) 2. [Kaggle | Daily Female Births Dataset](https://www.kaggle.com/dougcresswell/daily-total-female-births-in-california-1959) 3. An Introduction to t-SNE with Python Example, by Andre Violante - https://towardsdatascience.com/an-introduction-to-t-sne-with-python-example-5a3a293108d1 4. Some of the utility functions and code are taken from the GitHub Repository of the author - Aditya Bhattacharya https://github.com/adib0073
# Novel Fraud Analysis We show that hybrid model with exploration detects novel frauds better (e.g., trades from new HS6 and new import ID) ``` import numpy as np import pandas as pd import glob import csv import traceback import datetime import os pd.options.display.max_columns=50 ``` ### Basic statistics and Novel fraud statistics Number of test weeks: * dfm: 196 weeks * dfn: 257 weeks * dft: 257 weeks * dfs: 48 weeks ``` def firstCheck(df): """ Sorting and indexing necessary for data preparation """ df = df.dropna(subset=["illicit"]) df = df.sort_values("sgd.date") df = df.reset_index(drop=True) return df dfn = firstCheck(pd.read_csv('../data/ndata.csv')) dft = firstCheck(pd.read_csv('../data/tdata.csv')) dfm = firstCheck(pd.read_csv('../data/mdata.csv')) dfs = firstCheck(pd.read_csv('../data/synthetic-imports-declarations.csv')) for df in [dfm, dfn, dft]: print(df['importer.id'].nunique(), df['tariff.code'].nunique(), df['country'].nunique()) ``` #### Average illicit rates ``` # Malawi date_begin = '20130101' test_length = 7 start_day = datetime.date(int(date_begin[:4]), int(date_begin[4:6]), int(date_begin[6:8])) period = datetime.timedelta(days=test_length) end_day = start_day + datetime.timedelta(days=test_length) old_IID = set() new_proportions = [] avg_illicit_rates_m = [] num_trades_m = [] for week in range(208): weekly_trade = dfm[(dfm['sgd.date'] < end_day.strftime('%y-%m-%d')) & (dfm['sgd.date'] >= start_day.strftime('%y-%m-%d'))] start_day = end_day end_day = start_day + datetime.timedelta(days=test_length) avg_illicit_rates_m.append(np.mean(weekly_trade['illicit'])) num_trades_m.append(len(weekly_trade)) import matplotlib.pyplot as plt % matplotlib inline avg_illicit_rates_m = avg_illicit_rates_m[:-13] plt.style.use('seaborn-whitegrid') f = plt.figure() plt.plot(pd.Series(avg_illicit_rates_m).rolling(4).mean(), color='red') plt.title('Country M', fontsize=25) plt.xlabel('Year', fontsize=25) plt.ylabel('Illicit rate', fontsize=25) plt.xticks(ticks=[0,51,103,155,207], labels=['2013', 14, 15, 16,17], fontsize=25) f.savefig("illicit_rate_m.pdf", bbox_inches='tight') plt.style.use('seaborn-whitegrid') f = plt.figure() plt.plot(pd.Series(num_trades_m).rolling(4).mean(), color='blue') plt.title('Country M', fontsize=25) plt.xlabel('Year', fontsize=25) plt.ylabel('# of weekly trades', fontsize=25) plt.xticks(ticks=[0,51,103,155,207], labels=['2013', 14, 15, 16,17], fontsize=25) f.savefig("num_weekly_trades_m.pdf", bbox_inches='tight') # Nigeria date_begin = '20130101' test_length = 7 start_day = datetime.date(int(date_begin[:4]), int(date_begin[4:6]), int(date_begin[6:8])) period = datetime.timedelta(days=test_length) end_day = start_day + datetime.timedelta(days=test_length) old_IID = set() new_proportions = [] avg_illicit_rates_n = [] num_trades_n = [] for week in range(260): weekly_trade = dfn[(dfn['sgd.date'] < end_day.strftime('%y-%m-%d')) & (dfn['sgd.date'] >= start_day.strftime('%y-%m-%d'))] start_day = end_day end_day = start_day + datetime.timedelta(days=test_length) avg_illicit_rates_n.append(np.mean(weekly_trade['illicit'])) num_trades_n.append(len(weekly_trade)) plt.style.use('seaborn-whitegrid') f = plt.figure() plt.plot(pd.Series(avg_illicit_rates_n).rolling(4).mean(), color='red') plt.title('Country N', fontsize=25) plt.xlabel('Year', fontsize=25) plt.ylabel('Illicit rate', fontsize=25) plt.xticks(ticks=[0,51,103,155,207,259], labels=['2013', 14, 15, 16, 17, 18]) f.savefig("illicit_rate_n.pdf", bbox_inches='tight') plt.style.use('seaborn-whitegrid') f = plt.figure() 
plt.plot(pd.Series(num_trades_n).rolling(4).mean(), color='blue') plt.title('Country N', fontsize=25) plt.xlabel('Year', fontsize=25) plt.ylabel('# of weekly trades', fontsize=25) plt.xticks(ticks=[0,51,103,155,207,259], labels=['2013', 14, 15, 16, 17, 18]) f.savefig("num_weekly_trades_n.pdf", bbox_inches='tight') # Tunisia date_begin = '20150101' test_length = 7 start_day = datetime.date(int(date_begin[:4]), int(date_begin[4:6]), int(date_begin[6:8])) period = datetime.timedelta(days=test_length) end_day = start_day + datetime.timedelta(days=test_length) old_IID = set() new_proportions = [] avg_illicit_rates_t = [] num_trades_t = [] for week in range(260): weekly_trade = dft[(dft['sgd.date'] < end_day.strftime('%y-%m-%d')) & (dft['sgd.date'] >= start_day.strftime('%y-%m-%d'))] start_day = end_day end_day = start_day + datetime.timedelta(days=test_length) avg_illicit_rates_t.append(np.mean(weekly_trade['illicit'])) num_trades_t.append(len(weekly_trade)) plt.style.use('seaborn-whitegrid') f = plt.figure() plt.plot(pd.Series(avg_illicit_rates_t).rolling(4).mean(), color='red') plt.title('Country T', fontsize=25) plt.xlabel('Year', fontsize=25) plt.ylabel('Illicit rate', fontsize=25) plt.xticks(ticks=[0,51,103,155,207,259], labels=['2015', 16, 17, 18, 19, 20]) f.savefig("illicit_rate_t.pdf", bbox_inches='tight') plt.style.use('seaborn-whitegrid') f = plt.figure() plt.plot(pd.Series(num_trades_t).rolling(4).mean(), color='blue') plt.title('Country T', fontsize=25) plt.xlabel('Year', fontsize=25) plt.ylabel('# of weekly trades', fontsize=25) plt.xticks(ticks=[0,51,103,155,207,259], labels=['2015', 16, 17, 18,19, 20]) f.savefig("num_weekly_trades_t.pdf", bbox_inches='tight') results = glob.glob('../results/performances/fld7-result-*') # quick- or www21- or fld- list1, list2 = zip(*sorted(zip([os.stat(result).st_size for result in results], results))) # Retrieving results num_logs = len([i for i in list1 if i > 1000]) count= 0 summary = [] for i in range(1,num_logs+1): rslt = pd.read_csv(list2[-i]) dic = rslt[['runID','data','sampling','subsamplings','numWeek','current_inspection_rate','test_start','test_end']].iloc[len(rslt)-1].to_dict() run_id = round(dic['runID'], 3) data = dic['data'] subsamplings = dic['subsamplings'].replace('/','+') strategy = dic['sampling'] cir = dic['current_inspection_rate'] summary.append(dic) summary = pd.DataFrame(summary) # Index will be used later summary[summary.data == 'real-t'] ### Previous code def performanceOnNovel(exp_a): run_id = round(exp_a['runID'], 3) strategy = exp_a['sampling'] subsamplings = exp_a['subsamplings'].replace('/','+') cir = exp_a['current_inspection_rate'] week = exp_a['numWeek'] measure_start = 0 measure_end = week novelty = {} old_IID = set() for week in range(measure_start,measure_end): filename = glob.glob(f'results/query_indices/{run_id}-{strategy}-{subsamplings}-*-scratch-week-{week}.csv')[0] with open(filename, "r") as f: reader = csv.reader(f, delimiter=",") expid = next(reader)[1] dataset = next(reader)[1] episode = next(reader)[1] start_day = next(reader)[1] end_day = next(reader)[1] start_day = datetime.date(int(start_day[:4]), int(start_day[5:7]), int(start_day[8:10])).strftime('%y-%m-%d') end_day = datetime.date(int(end_day[:4]), int(end_day[5:7]), int(end_day[8:10])).strftime('%y-%m-%d') if week == measure_start: if dataset == 'real-m': df = dfm elif dataset == 'synthetic': df = dfs elif dataset == 'real-n': df = dfn elif dataset == 'real-t': df = dft alldata = df[(df['sgd.date'] < end_day) & (df['sgd.date'] >= 
start_day)].loc[:, ['illicit', 'revenue', 'importer.id']] alldata = alldata[~alldata['importer.id'].isin(old_IID)] if alldata.empty: continue all_indices = [] all_samps = '' while True: try: indices = next(reader) samp = indices[0] indices = indices[1:] indices = list(map(int, indices)) all_indices.extend(indices) all_samps = all_samps + (samp + '-') except StopIteration: break if week == measure_start: novelty[f'{all_samps}-pre'] = [] novelty[f'{all_samps}-rec'] = [] novelty[f'{all_samps}-rev'] = [] chosen = df.iloc[all_indices].loc[:, ['illicit', 'revenue', 'importer.id']] chosen = chosen[~chosen['importer.id'].isin(old_IID)] # Recall and revenue try: pre = sum(chosen['illicit'])/chosen['illicit'].count() rec = sum(chosen['illicit'])/sum(alldata['illicit']) rev = sum(chosen['revenue'])/sum(alldata['revenue']) except: continue novelty[f'{all_samps}-pre'].append(pre) novelty[f'{all_samps}-rec'].append(rec) novelty[f'{all_samps}-rev'].append(rev) old_IID = old_IID.union(set(alldata['importer.id'].values)) print(f'# indices = {len(all_indices)}, # old_ID: {len(old_IID)}, # new trades: {len(chosen)}') return pd.DataFrame(novelty) exp1, exp2, exp3, exp4, exp5 = 18, 23, 32, 38, 44 rival1 = performanceOnNovel(summary.loc[exp1]) print('!!!!!!!') rival2 = performanceOnNovel(summary.loc[exp2]) print('!!!!!!!') rival3 = performanceOnNovel(summary.loc[exp3]) print('!!!!!!!') rival4 = performanceOnNovel(summary.loc[exp4]) print('!!!!!!!') rival5 = performanceOnNovel(summary.loc[exp5]) # Compare DATE performances: Between five experiments plt.figure() r1 = rival1['DATE-enhanced_bATE--rev'].rolling(window=14).mean() r2 = rival2['DATE-random--rev'].rolling(window=14).mean() r3 = rival3['DATE-badge--rev'].rolling(window=14).mean() r4 = rival4['DATE-bATE--rev'].rolling(window=14).mean() r5 = rival5['DATE--rev'].rolling(window=14).mean() plt.plot(r1.index, r1, label=summary.loc[exp1]['data']+'-'+summary.loc[exp1]['subsamplings']) plt.plot(r2.index, r2, label=summary.loc[exp2]['data']+'-'+summary.loc[exp2]['subsamplings']) plt.plot(r3.index, r3, label=summary.loc[exp3]['data']+'-'+summary.loc[exp3]['subsamplings']) plt.plot(r4.index, r4, label=summary.loc[exp4]['data']+'-'+summary.loc[exp4]['subsamplings']) plt.plot(r5.index, r5, label=summary.loc[exp5]['data']+'-'+summary.loc[exp5]['subsamplings']) plt.title('Compare performance for novel trade patterns') plt.legend(loc='lower right') plt.ylabel('rev') plt.xlabel('numWeeks') plt.show() plt.close() ```
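The `new_proportions` lists initialised in the weekly loops above are never actually filled in. As a sketch of what they were presumably meant to hold, the snippet below computes the weekly share of trades coming from importer IDs that have not been seen in any earlier week, illustrated on `dfm`; the same pattern applies to `dfn` and `dft`.

```python
# Sketch: weekly share of trades from previously unseen importer IDs (Country M)
start_day = datetime.date(2013, 1, 1)
end_day = start_day + datetime.timedelta(days=test_length)

seen_ids = set()
new_id_share = []
for week in range(208):
    weekly_trade = dfm[(dfm['sgd.date'] < end_day.strftime('%y-%m-%d')) &
                       (dfm['sgd.date'] >= start_day.strftime('%y-%m-%d'))]
    if len(weekly_trade) > 0:
        is_new = ~weekly_trade['importer.id'].isin(seen_ids)
        new_id_share.append(is_new.mean())
    else:
        new_id_share.append(np.nan)
    seen_ids.update(weekly_trade['importer.id'].values)
    start_day, end_day = end_day, end_day + datetime.timedelta(days=test_length)

plt.figure()
plt.plot(pd.Series(new_id_share).rolling(4).mean(), color='purple')
plt.title('Country M')
plt.xlabel('Week')
plt.ylabel('Share of trades from new importer IDs')
plt.show()
```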
# Huggingface Sagemaker-sdk - Spot instances example ### Binary Classification with `Trainer` and `imdb` dataset 1. [Introduction](#Introduction) 2. [Development Environment and Permissions](#Development-Environment-and-Permissions) 1. [Installation](#Installation) 2. [Development environment](#Development-environment) 3. [Permissions](#Permissions) 3. [Preprocessing](#Preprocessing) 1. [Tokenization](#Tokenization) 2. [Uploading data to sagemaker_session_bucket](#Uploading-data-to-sagemaker_session_bucket) 4. [Fine-tuning & starting Sagemaker Training Job](#Fine-tuning-\&-starting-Sagemaker-Training-Job) 1. [Creating an Estimator and start a training job](#Creating-an-Estimator-and-start-a-training-job) 2. [Estimator Parameters](#Estimator-Parameters) 3. [Download fine-tuned model from s3](#Download-fine-tuned-model-from-s3) 4. [Attach to old training job to an estimator ](#Attach-to-old-training-job-to-an-estimator) 5. [_Coming soon_: Push model to the Hugging Face hub](#Push-model-to-the-Hugging-Face-hub) # Introduction Welcome to our end-to-end binary Text-Classification example. In this demo, we will use the Hugging Face `transformers` and `datasets` libraries together with a custom Amazon sagemaker-sdk extension to fine-tune a pre-trained transformer on binary text classification. In particular, the pre-trained model will be fine-tuned using the `imdb` dataset. To get started, we need to set up the environment with a few prerequisite steps, for permissions, configurations, and so on. This demo will also show how you can use spot instances and continue training. ![image.png](attachment:image.png) _**NOTE: You can run this demo in Sagemaker Studio, your local machine or Sagemaker Notebook Instances**_ # Development Environment and Permissions ## Installation _*Note:* we only install the required libraries from Hugging Face and AWS. You also need PyTorch or Tensorflow, if you don't have it installed already_ ``` !pip install "sagemaker>=2.31.0" "transformers==4.4.2" "datasets[s3]==1.5.0" --upgrade ``` ## Development environment **upgrade ipywidgets for `datasets` library and restart kernel, only needed when preprocessing is done in the notebook** ``` %%capture import IPython !conda install -c conda-forge ipywidgets -y IPython.Application.instance().kernel.do_shutdown(True) # has to restart kernel so changes are used import sagemaker.huggingface ``` ## Permissions _If you are going to use Sagemaker in a local environment, you need access to an IAM Role with the required permissions for Sagemaker. You can find more about it [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html)._ ``` import sagemaker sess = sagemaker.Session() # sagemaker session bucket -> used for uploading data, models and logs # sagemaker will automatically create this bucket if it does not exist sagemaker_session_bucket=None if sagemaker_session_bucket is None and sess is not None: # set to default bucket if a bucket name is not given sagemaker_session_bucket = sess.default_bucket() role = sagemaker.get_execution_role() sess = sagemaker.Session(default_bucket=sagemaker_session_bucket) print(f"sagemaker role arn: {role}") print(f"sagemaker bucket: {sess.default_bucket()}") print(f"sagemaker session region: {sess.boto_region_name}") ``` # Preprocessing We are using the `datasets` library to download and preprocess the `imdb` dataset. After preprocessing, the dataset will be uploaded to our `sagemaker_session_bucket` to be used within our training job. 
The [imdb](http://ai.stanford.edu/~amaas/data/sentiment/) dataset consists of 25000 training and 25000 testing highly polar movie reviews. ## Tokenization ``` from datasets import load_dataset from transformers import AutoTokenizer # tokenizer used in preprocessing tokenizer_name = 'distilbert-base-uncased' # dataset used dataset_name = 'imdb' # s3 key prefix for the data s3_prefix = 'samples/datasets/imdb' # load dataset dataset = load_dataset(dataset_name) # download tokenizer tokenizer = AutoTokenizer.from_pretrained(tokenizer_name) # tokenizer helper function def tokenize(batch): return tokenizer(batch['text'], padding='max_length', truncation=True) # load dataset train_dataset, test_dataset = load_dataset('imdb', split=['train', 'test']) test_dataset = test_dataset.shuffle().select(range(10000)) # smaller the size for test dataset to 10k # sample for a smaller dataset for training #train_dataset = train_dataset.shuffle().select(range(2000)) # smaller the size for test dataset to 10k #test_dataset = test_dataset.shuffle().select(range(150)) # smaller the size for test dataset to 10k # tokenize dataset train_dataset = train_dataset.map(tokenize, batched=True, batch_size=len(train_dataset)) test_dataset = test_dataset.map(tokenize, batched=True, batch_size=len(test_dataset)) # set format for pytorch train_dataset.rename_column_("label", "labels") train_dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'labels']) test_dataset.rename_column_("label", "labels") test_dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'labels']) ``` ## Uploading data to `sagemaker_session_bucket` After we processed the `datasets` we are going to use the new `FileSystem` [integration](https://huggingface.co/docs/datasets/filesystems.html) to upload our dataset to S3. ``` import botocore from datasets.filesystems import S3FileSystem s3 = S3FileSystem() # save train_dataset to s3 training_input_path = f's3://{sess.default_bucket()}/{s3_prefix}/train' train_dataset.save_to_disk(training_input_path,fs=s3) # save test_dataset to s3 test_input_path = f's3://{sess.default_bucket()}/{s3_prefix}/test' test_dataset.save_to_disk(test_input_path,fs=s3) training_input_path = f's3://{sess.default_bucket()}/{s3_prefix}/train' test_input_path = f's3://{sess.default_bucket()}/{s3_prefix}/test' ``` # Fine-tuning & starting Sagemaker Training Job In order to create a sagemaker training job we need an `HuggingFace` Estimator. The Estimator handles end-to-end Amazon SageMaker training and deployment tasks. In a Estimator we define, which fine-tuning script should be used as `entry_point`, which `instance_type` should be used, which `hyperparameters` are passed in ..... ```python huggingface_estimator = HuggingFace(entry_point='train.py', source_dir='./scripts', base_job_name='huggingface-sdk-extension', instance_type='ml.p3.2xlarge', instance_count=1, transformers_version='4.4', pytorch_version='1.6', py_version='py36', role=role, hyperparameters = {'epochs': 1, 'train_batch_size': 32, 'model_name':'distilbert-base-uncased' }) ``` When we create a SageMaker training job, SageMaker takes care of starting and managing all the required ec2 instances for us with the `huggingface` container, uploads the provided fine-tuning script `train.py` and downloads the data from our `sagemaker_session_bucket` into the container at `/opt/ml/input/data`. Then, it starts the training job by running. 
```python
/opt/conda/bin/python train.py --epochs 1 --model_name distilbert-base-uncased --train_batch_size 32
```

The `hyperparameters` you define in the `HuggingFace` estimator are passed in as named arguments.

SageMaker provides useful properties about the training environment through various environment variables, including the following:

* `SM_MODEL_DIR`: A string that represents the path where the training job writes the model artifacts to. After training, artifacts in this directory are uploaded to S3 for model hosting.
* `SM_NUM_GPUS`: An integer representing the number of GPUs available to the host.
* `SM_CHANNEL_XXXX`: A string that represents the path to the directory that contains the input data for the specified channel. For example, if you specify two input channels in the HuggingFace estimator's fit call, named `train` and `test`, the environment variables `SM_CHANNEL_TRAIN` and `SM_CHANNEL_TEST` are set.

To run your training job locally you can define `instance_type='local'` or `instance_type='local_gpu'` for GPU usage. _Note: this does not work within SageMaker Studio._

```
!pygmentize ./scripts/train.py
```

## Creating an Estimator and start a training job

```
from sagemaker.huggingface import HuggingFace

# hyperparameters, which are passed into the training job
hyperparameters = {'epochs': 1,
                   'train_batch_size': 32,
                   'model_name': 'distilbert-base-uncased',
                   'output_dir': '/opt/ml/checkpoints'
                   }

# s3 uri where our checkpoints will be uploaded during training
job_name = "using-spot"
checkpoint_s3_uri = f's3://{sess.default_bucket()}/{job_name}/checkpoints'

huggingface_estimator = HuggingFace(entry_point='train.py',
                                    source_dir='./scripts',
                                    instance_type='ml.p3.2xlarge',
                                    instance_count=1,
                                    base_job_name=job_name,
                                    checkpoint_s3_uri=checkpoint_s3_uri,
                                    use_spot_instances=True,
                                    max_wait=3600,  # should be equal to or greater than max_run, in seconds
                                    max_run=1000,   # expected max run, in seconds
                                    role=role,
                                    transformers_version='4.4',
                                    pytorch_version='1.6',
                                    py_version='py36',
                                    hyperparameters=hyperparameters)

# starting the train job with our uploaded datasets as input
huggingface_estimator.fit({'train': training_input_path, 'test': test_input_path})

# Training seconds: 874
# Billable seconds: 262
# Managed Spot Training savings: 70.0%
```

## Estimator Parameters

```
# container image used for training job
print(f"container image used for training job: \n{huggingface_estimator.image_uri}\n")

# s3 uri where the trained model is located
print(f"s3 uri where the trained model is located: \n{huggingface_estimator.model_data}\n")

# latest training job name for this estimator
print(f"latest training job name for this estimator: \n{huggingface_estimator.latest_training_job.name}\n")

# access the logs of the training job
huggingface_estimator.sagemaker_session.logs_for_job(huggingface_estimator.latest_training_job.name)
```

## Attach an old training job to an estimator

In Sagemaker you can attach an old training job to an estimator to continue training, get results, etc.

```
from sagemaker.estimator import Estimator

# job which is going to be attached to the estimator
old_training_job_name = ''

# attach old training job
huggingface_estimator_loaded = Estimator.attach(old_training_job_name)

# get model output s3 from training job
huggingface_estimator_loaded.model_data
```
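Because spot capacity can be reclaimed at any time, the training script itself has to be able to resume from the checkpoints written to `output_dir` (which SageMaker syncs with `checkpoint_s3_uri`). The actual `./scripts/train.py` is not reproduced in this notebook, so the following is only a minimal sketch of the resume logic such a script might contain; the argument names mirror the hyperparameters above, but the rest is an assumption and not the repository's real script.

```python
# Hypothetical sketch of the resume-from-checkpoint logic a spot-friendly train.py might use.
# Assumes the transformers Trainer API; not the actual ./scripts/train.py from this example.
import argparse
import os

from transformers.trainer_utils import get_last_checkpoint

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    # hyperparameters defined in the HuggingFace estimator arrive as named arguments
    parser.add_argument("--epochs", type=int, default=1)
    parser.add_argument("--train_batch_size", type=int, default=32)
    parser.add_argument("--model_name", type=str, default="distilbert-base-uncased")
    parser.add_argument("--output_dir", type=str, default="/opt/ml/checkpoints")
    # directories provided by SageMaker through environment variables
    parser.add_argument("--model_dir", type=str, default=os.environ.get("SM_MODEL_DIR"))
    parser.add_argument("--training_dir", type=str, default=os.environ.get("SM_CHANNEL_TRAIN"))
    parser.add_argument("--test_dir", type=str, default=os.environ.get("SM_CHANNEL_TEST"))
    args, _ = parser.parse_known_args()

    # If the spot instance was interrupted, previous checkpoints were restored from
    # checkpoint_s3_uri into output_dir; pick up the most recent one, if any.
    last_checkpoint = None
    if args.output_dir and os.path.isdir(args.output_dir):
        last_checkpoint = get_last_checkpoint(args.output_dir)

    # ... build the datasets, the model and a Trainer with output_dir=args.output_dir here ...
    # trainer.train(resume_from_checkpoint=last_checkpoint)
    # trainer.save_model(args.model_dir)
```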
# Example: CanvasXpress nonlinearfit Chart No. 1

This example page demonstrates how to use the Python package to create a chart that matches the CanvasXpress online example located at:

https://www.canvasxpress.org/examples/nonlinearfit-1.html

This example is generated using the reproducible JSON obtained from the above page and the `canvasxpress.util.generator.generate_canvasxpress_code_from_json_file()` function.

Everything required for the chart to render is included in the code below. Simply run the code block.

```
from canvasxpress.canvas import CanvasXpress
from canvasxpress.js.collection import CXEvents
from canvasxpress.render.jupyter import CXNoteBook

cx = CanvasXpress(
    render_to="nonlinearfit1",
    data={
        "y": {
            "vars": ["S1", "S2", "S3", "S4", "S5", "S6", "S7", "S8", "S9", "S10", "S11", "S12"],
            "smps": ["Concentration", "V1"],
            "data": [
                [0.0009, 172],
                [0.0018, 177],
                [0.0037, 160],
                [0.0073, 166],
                [0.0146, 211],
                [0.0293, 248],
                [0.0586, 269],
                [0.117, 283],
                [0.234, 298],
                [0.469, 314],
                [0.938, 328],
                [1.88, 316]
            ]
        }
    },
    config={
        "decorations": {
            "nlfit": [
                {
                    "param": ["164", "313", 0.031, -1.5, 1.2e-06, 1.9],
                    "type": "cst",
                    "label": "Custom Fit"
                },
                {
                    "type": "reg",
                    "param": ["164", "313", 0.031, 1.5, 1.2e-06, 1.9],
                    "label": "Regular Fit"
                }
            ]
        },
        "graphType": "Scatter2D",
        "setMaxY": 350,
        "setMinY": 100,
        "showDecorations": True,
        "theme": "CanvasXpress",
        "xAxisTransform": "log10",
        "xAxisTransformTicks": False,
        "yAxisExact": True
    },
    width=613,
    height=613,
    events=CXEvents(),
    after_render=[],
    other_init_params={
        "version": 35,
        "events": False,
        "info": False,
        "afterRenderInit": False,
        "noValidate": True
    }
)

display = CXNoteBook(cx)
display.render(output_file="nonlinearfit_1.html")
```
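For reference, the chart code above was produced from the example's reproducible JSON. A rough sketch of that generation step is shown below; the exact signature of `generate_canvasxpress_code_from_json_file()` is an assumption here (a path to the downloaded JSON in, generated Python source out), so check the package documentation before relying on it.

```
# Hypothetical sketch: regenerate the example code from a reproducible JSON file.
# Assumes the generator accepts a JSON file path and returns the Python source as a string.
from canvasxpress.util.generator import generate_canvasxpress_code_from_json_file

# "nonlinearfit_1.json" is a placeholder name for the reproducible JSON saved from the example page.
generated_code = generate_canvasxpress_code_from_json_file("nonlinearfit_1.json")

# Write the generated snippet to a file so it can be reviewed or run as its own script.
with open("nonlinearfit_1_generated.py", "w") as handle:
    handle.write(generated_code)
```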
# GPs with boundary conditions

In the paper available at https://export.arxiv.org/pdf/2002.00818, the author claims that a GP can be constrained to match boundary conditions.

Consider a GP prior with covariance kernel

$$k_F(x,y) = \exp\left(-\frac{1}{2}(x-y)^2\right)$$

Try to match the boundary conditions:

$$f(0) = f'(0) = f(1) = f'(1) = 0$$

The posterior will be a GP with covariance equal to:

$$\exp\left(-\frac{1}{2}(x-y)^2\right) - \frac{\exp\left(-\frac{1}{2}(x^2+y^2)\right)}{e^{-2} + 3e^{-1} + 1} \cdot \left( (xy+1) + (xy-x-y+2)e^{x+y-1} + (-2xy + x+y-1)(e^{x+y-2}+e^{-1}) + (xy-y+1)e^{y-2} + (xy-x+1)e^{x-2} + (y-x-2)e^{y-1} + (x-y-2)e^{x-1}\right)$$

This notebook compares the constrained and unconstrained kernels.

```
import GPy
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

sample_size = 5
X = np.random.uniform(0, 1., (sample_size, 1))
Y = np.sin(X) + np.random.randn(sample_size, 1)*0.1
testX = np.linspace(0, 1, 100).reshape(-1, 1)

def plotSamples(testY, simY, simMse, ax):
    testY = testY.squeeze()
    simY = simY.squeeze()
    simMse = simMse.squeeze()
    ax.plot(testX.squeeze(), testY, lw=0.2, c='k')
    ax.plot(X, Y, 'ok', markersize=5)
    ax.fill_between(testX.squeeze(), simY - 3*simMse**0.5, simY + 3*simMse**0.5, alpha=0.1)
    ax.set_xlabel('Input')
    ax.set_ylabel('Output')
```

## Unconstrained case

```
kU = GPy.kern.RBF(1, variance=1, lengthscale=1.)
mU = GPy.models.GPRegression(X, Y, kU, noise_var=0.1)
priorTestY = mU.posterior_samples_f(testX, full_cov=True, size=10)
priorSimY, priorSimMse = mU.predict(testX)
```

Plot the kernel function

```
n = 101
xs = np.linspace(0, 1, n)[:, np.newaxis]
KU = np.array([kU.K(x[np.newaxis, :], xs)[0] for x in xs])
ph0 = plt.pcolormesh(xs.T, xs, KU)
plt.title('Unconstrained RBF')
plt.colorbar(ph0)
```

## Constrained case

```
def K(x, y=None):
    if y is None:
        y = x
    bb = (x*y+1) + (x*y-x-y+2)*np.exp(x+y-1) + (x+y-1-2*x*y)*(np.exp(x+y-2)+np.exp(-1)) + (x*y-y+1)*np.exp(y-2) + (x*y-x+1)*np.exp(x-2) + (y-x-2)*np.exp(y-1) + (x-y-2)*np.exp(x-1)
    k = np.exp(-0.5*(x-y)**2.0) - np.exp(-0.5*(x**2.0 + y**2.0)) / (np.exp(-2) - 3*np.exp(-1) + 1) * bb
    return k

KC = [[K(x, y)[0] for x in xs] for y in xs]
plt.pcolormesh(xs.T, xs, KC)
plt.title('Constrained RBF')
plt.colorbar()
```

## Train the unconstrained model

```
mU.optimize()
posteriorTestY = mU.posterior_samples_f(testX, full_cov=True, size=10)
postSimY, postSimMse = mU.predict(testX)

f, axs = plt.subplots(1, 2, sharey=True, figsize=(10, 5))
plotSamples(priorTestY, priorSimY, priorSimMse, axs[0])
plotSamples(posteriorTestY, postSimY, postSimMse, axs[1])
sns.despine()
```

# GPy examples

## Combine normal and derivative observations

```
def plot_gp_vs_real(m, x, yreal, size_inputs, title, fixed_input=1, xlim=[0, 11], ylim=[-1.5, 3]):
    fig, ax = plt.subplots()
    ax.set_title(title)
    plt.plot(x, yreal, "r", label='Real function')
    rows = slice(0, size_inputs[0]) if fixed_input == 0 else slice(size_inputs[0], size_inputs[0] + size_inputs[1])
    m.plot(fixed_inputs=[(1, fixed_input)], which_data_rows=rows, xlim=xlim, ylim=ylim, ax=ax)

f = lambda x: np.sin(x) + 0.1*(x-2.)**2 - 0.005*x**3
fd = lambda x: np.cos(x) + 0.2*(x-2.) - 0.015*x**2

N = 10       # Number of observations
Npred = 100  # Number of prediction points
sigma = 0.2       # Noise of observations
sigma_der = 1e-3  # Noise of derivative observations

x = np.array([np.linspace(1, 10, N)]).T
y = f(x) + np.array(sigma*np.random.normal(0, 1, (N, 1)))

# M = 10  # Number of derivative observations
# xd = np.array([np.linspace(2, 8, M)]).T
# yd = fd(xd) + np.array(sigma_der*np.random.normal(0, 1, (M, 1)))

# Specify derivatives at end-points
M = 2
xd = np.atleast_2d([0, 11]).T
yd = np.atleast_2d([0, 0]).T

xpred = np.array([np.linspace(0, 11, Npred)]).T
ypred_true = f(xpred)
ydpred_true = fd(xpred)

# Squared exponential kernel
se = GPy.kern.RBF(input_dim=1, lengthscale=1.5, variance=0.2)
# We need a separate kernel for the derivative observations, built from the base kernel:
se_der = GPy.kern.DiffKern(se, 0)

# Likelihoods for each output
gauss = GPy.likelihoods.Gaussian(variance=sigma**2)
gauss_der = GPy.likelihoods.Gaussian(variance=sigma_der**2)

# Create the model. Everything is given in lists; the order of the inputs indicates the order of the outputs.
# Here the regular observations come first and the derivative observations second, so the kernels and
# the likelihoods must follow the same order. Cross-covariances are automatically taken care of.
m = GPy.models.MultioutputGP(X_list=[x, xd], Y_list=[y, yd], kernel_list=[se, se_der], likelihood_list=[gauss, gauss_der])
m.optimize(messages=0, ipython_notebook=False)

# Plot the model; the syntax is the same as for multioutput models:
plot_gp_vs_real(m, xpred, ydpred_true, [x.shape[0], xd.shape[0]], title='Latent function derivatives', fixed_input=1, xlim=[0, 11], ylim=[-1.5, 3])
plot_gp_vs_real(m, xpred, ypred_true, [x.shape[0], xd.shape[0]], title='Latent function', fixed_input=0, xlim=[0, 11], ylim=[-1.5, 3])

# Making predictions for the values:
mu, var = m.predict_noiseless(Xnew=[xpred, np.empty((0, 1))])
```

## Fixed end-points using a Multitask GP with different likelihood functions

```
N = 10       # Number of observations
Npred = 100  # Number of prediction points
sigma = 0.25    # Noise of observations
sigma_0 = 1e-3  # Noise of zero observations
xlow = 0
xhigh = 10

x = np.array([np.linspace(xlow, xhigh, N)]).T
y = f(x) + np.array(sigma*np.random.normal(0, 1, (N, 1)))

M = 2
dx = 5
x0 = np.atleast_2d([xlow - dx, xhigh + dx]).T
y0 = np.atleast_2d([0, 0]).T

xpred = np.array([np.linspace(xlow - dx, xhigh + dx, Npred)]).T
ypred_true = f(xpred)

# Squared exponential kernel
se = GPy.kern.RBF(input_dim=1, lengthscale=1.5, variance=0.2)

# Likelihoods for each task
gauss = GPy.likelihoods.Gaussian(variance=sigma**2)
gauss_0 = GPy.likelihoods.Gaussian(variance=sigma_0**2)

# Create the model. Everything is given in lists; the order of the inputs indicates the order of the outputs.
# Here the regular observations come first and the fixed end-points second, so the kernels and the
# likelihoods must follow the same order. Cross-covariances are automatically taken care of.
m = GPy.models.MultioutputGP(X_list=[x, x0], Y_list=[y, y0], kernel_list=[se, se], likelihood_list=[gauss, gauss_0])
m.optimize(messages=0, ipython_notebook=False)

# Plot
ylims = [-1.5, 3]
fig, ax = plt.subplots(figsize=(8, 5))
ax.set_title('Latent function with fixed end-points')
ax.plot(xpred, ypred_true, 'k', label='Real function')
ypred_mean, ypred_var = m.predict([xpred])
ypred_std = np.sqrt(ypred_var)
ax.fill_between(xpred.squeeze(), (ypred_mean - 1.96*ypred_std).squeeze(), (ypred_mean + 1.96*ypred_std).squeeze(),
                color='r', alpha=0.1, label='Confidence')
ax.plot(xpred, ypred_mean, 'r', label='Mean')
ax.plot(x, y, 'kx', label='Data')
ax.set_ylim(ylims)
ax.plot(x0, y0, 'ro', label='Fixed end-points')
ax.legend()
sns.despine()
```

## Fixed end-points using MixedNoise likelihood

```
# Squared exponential kernel
se = GPy.kern.RBF(input_dim=1, lengthscale=1.5, variance=0.2)

# MixedNoise likelihood
gauss = GPy.likelihoods.Gaussian(variance=sigma**2)
gauss_0 = GPy.likelihoods.Gaussian(variance=sigma_0**2)
mixed = GPy.likelihoods.MixedNoise([gauss, gauss_0])

# Stack the regular observations and the fixed end-points into a single dataset, and use the output index
# in Y_metadata to tell the MixedNoise likelihood which noise model applies to which row.
xc = np.append(x, x0, axis=0)
yc = np.append(y, y0, axis=0)
ids = np.append(np.zeros((N, 1), dtype=int), np.ones((M, 1), dtype=int), axis=0)
Y_metadata = {'output_index': ids}

m = GPy.core.GP(xc, yc, se, likelihood=mixed, Y_metadata=Y_metadata)
m.optimize(messages=0, ipython_notebook=False)

# Plot
fig, ax = plt.subplots(figsize=(8, 5))
ax.set_title('Latent function with fixed end-points')
ax.plot(xpred, ypred_true, 'k', label='Real function')
#m.plot(fixed_inputs=[(1, 0)], which_data_rows=slice(0, x.shape[0]), xlim=[-dx, 10+dx], ylim=[-1.5, 3], ax=ax)
ypred_mean, ypred_var = m.predict(xpred, Y_metadata={'output_index': np.zeros_like(xpred, dtype=int)})
ypred_std = np.sqrt(ypred_var)
ax.fill_between(xpred.squeeze(), (ypred_mean - 1.96*ypred_std).squeeze(), (ypred_mean + 1.96*ypred_std).squeeze(),
                color='r', alpha=0.1, label='Confidence')
ax.plot(xpred, ypred_mean, 'r-', label='Mean')
ax.plot(x, y, 'kx', label='Data')
ax.set_ylim(ylims)
ax.plot(x0, y0, 'ro', label='Fixed end-points')
ax.legend()
sns.despine()

m
```
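Since the whole point of the constrained kernel is that the posterior satisfies $f(0) = f(1) = 0$, its covariance should vanish whenever either argument is a boundary point. A quick numerical check of that property, reusing the `K` function and the `xs` grid defined above, is a handy sanity test of the closed-form expression (note that the markdown formula and the code differ in the sign of the $3e^{-1}$ term in the denominator, and a check like this helps decide which transcription is right).

```
# Sanity check: the constrained kernel should be (numerically) zero when either input lies
# on the boundary x = 0 or x = 1. Reuses K() and xs from the cells above.
import numpy as np

for b in (0.0, 1.0):
    vals = np.array([K(np.array([b]), xi)[0] for xi in xs])
    print("max |K(%.0f, y)| over the grid: %.3e" % (b, np.abs(vals).max()))
```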
# Import Libraries and Dataset

```
# Importing libraries
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.neighbors import KNeighborsClassifier, LocalOutlierFactor, NeighborhoodComponentsAnalysis
from sklearn.decomposition import PCA

# warning library
import warnings
warnings.filterwarnings("ignore")

# Importing the data set
data = pd.read_csv("cancer.csv")
```

# Descriptive Statistics

```
# Preview data
data.head()

# Dataset dimensions - (rows, columns)
data.shape

# Feature data types
data.info()

# Statistical summary
data.describe().T

# Count of null values
data.isnull().sum()
```

## Observations:
1. There are a total of 569 records and 33 features in the dataset.
2. Each feature can be of integer, float or object datatype.
3. Apart from the empty `Unnamed: 32` column, there are no NaN values in the dataset.
4. In the outcome column, M represents malignant cancer and B represents benign cancer.

# Data Preprocessing

```
data.drop(["Unnamed: 32", "id"], inplace=True, axis=1)
data = data.rename(columns={"diagnosis": "target"})

# Class count plot
sns.countplot(data["target"])
print(data.target.value_counts())

# Encode the target feature as 0 and 1
data["target"] = [1 if i.strip() == "M" else 0 for i in data.target]
```

# Exploratory Data Analysis

```
# Correlation
corr_matrix = data.corr()
sns.clustermap(corr_matrix, annot=True, fmt=".2f")
plt.title("Correlation Between Features")
plt.show()

# Correlation with threshold 0.75
threshold = 0.75
filtre = np.abs(corr_matrix["target"]) > threshold
corr_features = corr_matrix.columns[filtre].tolist()
sns.clustermap(data[corr_features].corr(), annot=True, fmt=".2f")
plt.title("Correlation Between Features with Corr Threshold 0.75")
```

There are some correlated features.

```
# Pair plot
sns.pairplot(data[corr_features], diag_kind="kde", markers="+", hue="target")
plt.show()
```

There is skewness in some of the features.
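One way to back up that observation is to compute the sample skewness of the correlated features directly; a log-style transform is a common follow-up if values are strongly right-skewed. The snippet below is only a quick illustration using pandas' built-in `skew()` and is not part of the modelling pipeline that follows.

```
# Quantify the skewness noted above for the highly correlated features.
# DataFrame.skew() returns the sample skewness per column; values far from 0 indicate skew.
feature_skew = data[corr_features].drop(columns=["target"]).skew().sort_values(ascending=False)
print(feature_skew)

# A possible remedy for strong positive skew (illustration only, not applied below):
# data[col] = np.log1p(data[col])
```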
# Outlier Detection

```
y = data.target
x = data.drop(["target"], axis=1)
columns = x.columns.tolist()

clf = LocalOutlierFactor()
isOutlier = clf.fit_predict(x)
Xscore = clf.negative_outlier_factor_
outlier_score = pd.DataFrame(Xscore, columns=["score"])

# Threshold for outliers
threshold = -2.5
filt = outlier_score["score"] < threshold
outlier_index = outlier_score[filt].index.tolist()

# Drop outliers
x = x.drop(outlier_index)
y = y.drop(outlier_index).values
```

# Train Test Split

```
test_size = 0.3
X_train, X_test, Y_train, Y_test = train_test_split(x, y, test_size=test_size, random_state=42)
```

# Standardization

```
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

X_train_df = pd.DataFrame(X_train, columns=columns)
X_train_df["target"] = Y_train

# Box plot
data_melted = pd.melt(X_train_df, id_vars="target", var_name="features", value_name="value")
plt.figure(figsize=(12, 6))
sns.boxplot(x="features", y="value", hue="target", data=data_melted)
plt.xticks(rotation=90)
plt.show()
```

# Basic KNN Method

```
knn = KNeighborsClassifier(n_neighbors=2)
knn.fit(X_train, Y_train)
y_pred = knn.predict(X_test)
cm = confusion_matrix(Y_test, y_pred)
acc = accuracy_score(Y_test, y_pred)
print("CM: ", cm)
print("Basic KNN Acc: ", acc)
```

# Choose Best Parameters

```
def KNN_Best_Params(x_train, x_test, y_train, y_test):
    k_range = list(range(1, 31))
    weight_options = ["uniform", "distance"]
    p = [1, 2]
    print()
    param_grid = dict(n_neighbors=k_range, weights=weight_options, p=p)

    knn = KNeighborsClassifier()
    grid = GridSearchCV(knn, param_grid, cv=10, scoring="accuracy")
    grid.fit(x_train, y_train)

    print("Best training score: {} with parameters: {}".format(grid.best_score_, grid.best_params_))
    print()

    knn = KNeighborsClassifier(**grid.best_params_)
    knn.fit(x_train, y_train)

    y_pred_test = knn.predict(x_test)
    y_pred_train = knn.predict(x_train)

    cm_test = confusion_matrix(y_test, y_pred_test)
    cm_train = confusion_matrix(y_train, y_pred_train)

    acc_test = accuracy_score(y_test, y_pred_test)
    acc_train = accuracy_score(y_train, y_pred_train)
    print("Test Score: {}, Train Score: {}".format(acc_test, acc_train))
    print()
    print("CM test: ", cm_test)
    print("CM train: ", cm_train)

    return grid

grid = KNN_Best_Params(X_train, X_test, Y_train, Y_test)
```

# PCA

```
# Since PCA is unsupervised, the scaler is fit on all the data
scaler = StandardScaler()
x_scaled = scaler.fit_transform(x)

pca = PCA(n_components=2)
pca.fit(x_scaled)
X_reduced_pca = pca.transform(x_scaled)
pca_data = pd.DataFrame(X_reduced_pca, columns=["p1", "p2"])
pca_data["target"] = y
plt.subplots(figsize=(10, 10))
sns.scatterplot(x="p1", y="p2", hue="target", data=pca_data)
plt.title("PCA: p1 vs p2")

# Proportion of the variance explained by each component
print(pca.explained_variance_ratio_)
# Total proportion of the variance explained by the two components
print(sum(pca.explained_variance_ratio_))

# KNN with reduced dimensionality
X_train_pca, X_test_pca, Y_train_pca, Y_test_pca = train_test_split(X_reduced_pca, y, test_size=test_size, random_state=42)

grid_pca = KNN_Best_Params(X_train_pca, X_test_pca, Y_train_pca, Y_test_pca)
```

# NCA

```
nca = NeighborhoodComponentsAnalysis(n_components=2, random_state=42)
nca.fit(x_scaled, y)
X_reduced_nca = nca.transform(x_scaled)
nca_data = pd.DataFrame(X_reduced_nca, columns=["p1", "p2"])
nca_data["target"] = y
plt.subplots(figsize=(10, 10))
sns.scatterplot(x="p1", y="p2", hue="target", data=nca_data)
plt.title("NCA: p1 vs p2")

X_train_nca, X_test_nca, Y_train_nca, Y_test_nca = train_test_split(X_reduced_nca, y, test_size=test_size, random_state=42)

grid_nca = KNN_Best_Params(X_train_nca, X_test_nca, Y_train_nca, Y_test_nca)
```
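The unused `ListedColormap` import at the top hints at a decision-boundary plot for the 2-D projections. A minimal sketch of such a plot for the NCA-reduced data is given below; it assumes the `X_train_nca`/`Y_train_nca` split and the fitted `grid_nca` from the cells above, and is only one possible way to visualise the tuned classifier.

```
# Sketch: decision regions of the tuned KNN classifier in the 2-D NCA space.
# Assumes X_reduced_nca, X_train_nca, Y_train_nca and grid_nca from the cells above.
cmap_light = ListedColormap(["#FFAAAA", "#AAFFAA"])

# Build an evaluation grid over the range of the projected data.
x_min, x_max = X_reduced_nca[:, 0].min() - 1, X_reduced_nca[:, 0].max() + 1
y_min, y_max = X_reduced_nca[:, 1].min() - 1, X_reduced_nca[:, 1].max() + 1
xx, yy = np.meshgrid(np.linspace(x_min, x_max, 300), np.linspace(y_min, y_max, 300))

# Predict the class for every grid point with the refit best estimator and colour the regions.
Z = grid_nca.best_estimator_.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)

plt.figure(figsize=(10, 10))
plt.contourf(xx, yy, Z, cmap=cmap_light, alpha=0.6)
plt.scatter(X_train_nca[:, 0], X_train_nca[:, 1], c=Y_train_nca, edgecolor="k", s=20)
plt.title("KNN decision regions in the NCA-reduced space")
plt.show()
```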
# Importing the libraries

```
%matplotlib inline

import IPython.display as ipd
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_curve, auc
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from xgboost import XGBClassifier
```

# Loading the data

```
original_data = data = pd.read_csv('../data/heart.csv')
data.info()
data.head(10)
data.tail(10)
data.describe()
```

# Splitting the Data

```
y = original_data['target']
X = original_data.drop(columns=['target'])
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, test_size=.2, random_state=50)

data = pd.concat([X_train, y_train], axis=1)  # for EDA
data.head()
```

# EDA

```
data.pivot_table(index='sex', values='target').plot.bar()
plt.xticks(rotation='horizontal')
plt.title('Heart Disease frequency by Sex')
plt.show()

sick = data[data['target'] == 1]
not_sick = data[data['target'] == 0]

sick['age'].plot.hist(color='red', alpha=0.5, density=True)
not_sick['age'].plot.hist(color='blue', alpha=0.5, density=True)
plt.xlabel('Age')
plt.legend(['Heart Disease', 'No Heart Disease'])
plt.show()

cut_points = [18, 44.5, 54.5, 64.5, 100]
labels = ['18-44', '45-54', '55-64', '65+']
data['age categories'] = pd.cut(data['age'], cut_points, labels=labels)
data['age categories'].value_counts()

data.pivot_table(index='age categories', values='target').plot.bar()
plt.xticks(rotation='horizontal')
plt.title('Heart Disease frequency by Age')
plt.show()

data.pivot_table(index='cp', values='target').plot.bar()
plt.xticks(rotation='horizontal')
plt.title('Heart Disease frequency by Chest Pain')
plt.show()

sick['trestbps'].plot.hist(color='red', alpha=0.5, density=True)
not_sick['trestbps'].plot.hist(color='blue', alpha=0.5, density=True)
plt.xlabel('Resting blood pressure')
plt.legend(['Heart Disease', 'No Heart Disease'])
plt.show()

cut_points = [0, 120, 129.5, 139.5, 179.5, 220]
labels = ['<120', '120-129', '130-139', '140-179', '>180']
data['trestbps categories'] = pd.cut(data['trestbps'], cut_points, labels=labels)
data['trestbps categories'].value_counts()

data.pivot_table(index='trestbps categories', values='target').plot.bar()
plt.xticks(rotation='horizontal')
plt.title('Heart Disease frequency by Resting Blood Pressure')
plt.show()

sick['chol'].plot.hist(color='red', alpha=0.5, density=True)
not_sick['chol'].plot.hist(color='blue', alpha=0.5, density=True)
plt.xlabel('Cholesterol')
plt.legend(['Heart Disease', 'No Heart Disease'])
plt.show()

cut_points = [0, 99.5, 199.5, 299.5, 600]
labels = ['<100', '100-200', '200-300', '>300']
data['chol categories'] = pd.cut(data['chol'], cut_points, labels=labels)
data['chol categories'].value_counts()

data.pivot_table(index='chol categories', values='target').plot.bar()
plt.xticks(rotation='horizontal')
plt.title('Heart Disease frequency by Cholesterol')
plt.show()

data.pivot_table(index='fbs', values='target').plot.bar()
plt.xticks(rotation='horizontal')
plt.title('Heart Disease frequency by Fasting Blood Sugar')
plt.show()

data.pivot_table(index='restecg', values='target').plot.bar()
plt.xticks(rotation='horizontal')
plt.title('Heart Disease frequency by Rest ECG')
plt.show()

sick['thalach'].plot.hist(color='red', alpha=0.5, density=True)
not_sick['thalach'].plot.hist(color='blue', alpha=0.5, density=True)
plt.xlabel('Max Heart Rate')
plt.legend(['Heart Disease', 'No Heart Disease'])
plt.show()

data['thalach categories'] = pd.qcut(data['thalach'], 5)
data['thalach categories'] = data['thalach categories'].astype(str).apply(lambda x: '%s-%s' % (x.split(',')[0].strip('('), x.split(',')[1].strip('] ')))
data['thalach categories'].value_counts()

data.pivot_table(index='thalach categories', values='target').plot.bar()
plt.title('Heart Disease frequency by Max Heart Rate')
plt.show()

data.pivot_table(index='exang', values='target').plot.bar()
plt.xticks(rotation='horizontal')
plt.title('Heart Disease frequency by Exercise-induced Angina')
plt.show()

sick['oldpeak'].plot.hist(color='red', alpha=0.5, density=True)
not_sick['oldpeak'].plot.hist(color='blue', alpha=0.5, density=True)
plt.xlabel('ST Depression')
plt.legend(['Heart Disease', 'No Heart Disease'])
plt.show()

data['oldpeak categories'] = pd.qcut(data['oldpeak'], 3)
data['oldpeak categories'] = data['oldpeak categories'].astype(str).apply(lambda x: '%s-%s' % (x.split(',')[0].strip('('), x.split(',')[1].strip('] ')))
data['oldpeak categories'].value_counts()

data.pivot_table(index='oldpeak categories', values='target').plot.bar()
plt.title('Heart Disease frequency by ST Depression')

data.pivot_table(index='slope', values='target').plot.bar()
plt.xticks(rotation='horizontal')
plt.title('Heart Disease frequency by Slope of the peak exercise ST segment')
plt.show()

data.pivot_table(index='ca', values='target').plot.bar()
plt.xticks(rotation='horizontal')
plt.title('Heart Disease frequency by Number of major vessels (0-3) colored by fluoroscopy')
plt.show()

data.pivot_table(index='thal', values='target').plot.bar()
plt.xticks(rotation='horizontal')
plt.title('Heart Disease frequency by the Thalassemia disorder')
plt.show()
```

# Data preprocessing

```
original_data.head()

categorical_cols = ['sex', 'cp', 'fbs', 'restecg', 'exang', 'slope', 'ca', 'thal']

# One-hot encoding of all categorical variables
X_train = pd.get_dummies(X_train, columns=categorical_cols, drop_first=True)
X_train['cp any'] = (X_train['cp_1'] + X_train['cp_2'] + X_train['cp_3'])  # any chest pain = 1 (look at EDA)
X_train = X_train.drop(columns=['cp_1', 'cp_2', 'cp_3'])
cols = X_train.columns

X_test = pd.get_dummies(X_test, columns=categorical_cols, drop_first=True)
# Strip trailing '.' and '0' characters from dummy column names (e.g. 'cp_1.0' -> 'cp_1') so they match the training columns
X_test.columns = [column.rstrip('.0') for column in X_test.columns]
X_test['cp any'] = (X_test['cp_1'] + X_test['cp_2'] + X_test['cp_3'])
X_test = X_test.drop(columns=['cp_1', 'cp_2', 'cp_3'])

X_train.shape, X_test.shape
X_train.head()
X_test.head()

# Add columns of the training set that are missing in the test set.
missing_cols = list(set(X_train.columns) - set(X_test.columns))
for column in missing_cols:
    X_test[column] = 0

# Maintain the same ordering of columns in the train and test sets.
X_test = X_test[cols]

X_train.shape[1] == X_test.shape[1]  # number of features
X_train.columns, X_test.columns
```

# Model Selection

```
rf = RandomForestClassifier(n_estimators=120, max_depth=5, random_state=10)
scores = cross_val_score(rf, X_train, y_train, cv=5)
print('Mean cross-validation score: ', scores.mean())

lr = LogisticRegression(solver='lbfgs', max_iter=5000)
scores = cross_val_score(lr, X_train, y_train, cv=5)
print('Mean cross-validation score: ', scores.mean())

xgb = XGBClassifier()
scores = cross_val_score(xgb, X_train, y_train, cv=5)
print('Mean cross-validation score: ', scores.mean())

svc = SVC(C=1000, gamma='scale', probability=True)
scores = cross_val_score(svc, X_train, y_train, cv=5)
print('Mean cross-validation score: ', scores.mean())

model = lr  # Logistic Regression gave the best cross-validation accuracy.
model.fit(X_train, y_train)
pred = model.predict(X_test)
pred_prob = model.predict_proba(X_test)[:, 1]
ipd.display(pred[:10], pred_prob[:10])
```

# Evaluating the model

```
print('Confusion Matrix:')
cf = confusion_matrix(y_test, model.predict(X_test))
cf

# sklearn's confusion_matrix is laid out as [[tn, fp], [fn, tp]]
tn, fp, fn, tp = cf.flatten()

accuracy = (tp + tn) / (tp + tn + fp + fn)
print('Accuracy: ', accuracy)

precision = tp / (tp + fp)
print('Precision: ', precision)

recall = tp / (tp + fn)
print('Recall (Sensitivity): ', recall)

specificity = tn / (tn + fp)
print('Specificity: ', specificity)

f1 = (2*tp) / (2*tp + fp + fn)
print('F1 score: ', f1)

fpr, tpr, thresholds = roc_curve(y_test, pred_prob)
plt.plot(fpr, tpr)
plt.plot([0, 1], [0, 1], ls="--", c='.3')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.title('ROC curve for the heart disease classifier')
plt.xlabel('False Positive Rate (1 - Specificity)')
plt.ylabel('True Positive Rate (Sensitivity)')
plt.grid(True)

auc(fpr, tpr)
```
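As a cross-check on the hand-computed metrics above, scikit-learn can report precision, recall and F1 per class directly. This is a small optional addition rather than part of the original notebook.

```
# Cross-check the manually computed metrics with scikit-learn's built-in report.
from sklearn.metrics import classification_report, roc_auc_score

print(classification_report(y_test, pred, target_names=['No Heart Disease', 'Heart Disease']))
print('ROC AUC: ', roc_auc_score(y_test, pred_prob))
```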