# Dog Breed Identification
This example is based on a very popular [Udacity project](https://github.com/udacity/dog-project). The goal is to classify images of dogs according to their breed.
In this notebook, you will take the first steps towards developing an algorithm that could be used as part of a mobile or web app. At the end of this project, your code will accept any user-supplied image as input. If a dog is detected in the image, it will provide an estimate of the dog's breed.

In this real-world setting, you will need to piece together a series of models to perform different tasks.
### The Road Ahead
We break the notebook into separate steps. Feel free to use the links below to navigate the notebook.
* [Step 0](#step0): Install requirements & download datasets
* [Step 1](#step1): Import Datasets
* [Step 2](#step2): Detect Dogs
* [Step 3](#step3): Create a CNN to Classify Dog Breeds (from Scratch)
* [Step 4](#step4): Create a CNN (VGG16) to Classify Dog Breeds (using Transfer Learning)
* [Step 5](#step5): Create a CNN (ResNet-50) to Classify Dog Breeds (using Transfer Learning)
* [Step 6](#step6): Write your Algorithm
* [Step 7](#step7): Test Your Algorithm
---
<a id='step0'></a>
## Step 0: Install requirements & download datasets
### Download datasets
```
# Download and unzip the dog dataset
!wget https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/dogImages.zip
!unzip -qo dogImages.zip
!rm dogImages.zip
# Download the VGG-16 bottleneck features for the dog dataset
!wget https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/DogVGG16Data.npz -O bottleneck_features/DogVGG16Data.npz
# Download the ResNet50 features for the dog dataset
!wget https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/DogResnet50Data.npz -O bottleneck_features/DogResnet50Data.npz
```
### Below is the `imports` cell. This is where we import all the necessary libraries
```
from sklearn.datasets import load_files
from keras.utils import np_utils
import numpy as np
from glob import glob
import os
import random
import cv2
import matplotlib.pyplot as plt
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from tqdm import tqdm
from keras.applications.resnet50 import preprocess_input, decode_predictions
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D
from keras.layers import Dropout, Flatten, Dense
from keras.models import Sequential
from keras.callbacks import ModelCheckpoint
import extract_bottleneck_features as ebf
from keras import optimizers
```
### Install requirements
```
!pip3 install --user -r requirements/requirements.txt
```
### Pipeline Parameters
This is the `pipeline-parameters` cell. Use it to define the parameters you will use for hyperparameter tuning. These variables will be converted to KFP pipeline parameters, so make sure they are used as global variables throughout the notebook.
```
nodes_number = 256
learning_rate = 0.0001
```
<a id='step1'></a>
## Step 1: Import Datasets
### Import Dog Dataset
In the code cell below, we import a dataset of dog images. We populate a few variables through the use of the `load_files` function from the scikit-learn library:
- `train_files`, `valid_files`, `test_files` - numpy arrays containing file paths to images
- `train_targets`, `valid_targets`, `test_targets` - numpy arrays containing onehot-encoded classification labels
- `dog_names` - list of string-valued dog breed names for translating labels
```
# define function to load train, test, and validation datasets
def load_dataset(path):
data = load_files(path)
dog_files = np.array(data['filenames'])
dog_targets = np_utils.to_categorical(np.array(data['target']), 133)
return dog_files, dog_targets
# load train, test, and validation datasets
train_files, train_targets = load_dataset('dogImages/train')
valid_files, valid_targets = load_dataset('dogImages/valid')
test_files, test_targets = load_dataset('dogImages/test')
# load list of dog names
dog_names = [item[20:-1] for item in sorted(glob("dogImages/train/*/"))]
# print statistics about the dataset
print('There are %d total dog categories.' % len(dog_names))
print('There are %s total dog images.' % len(np.hstack([train_files, valid_files, test_files])))
print('There are %d training dog images.' % len(train_files))
print('There are %d validation dog images.' % len(valid_files))
print('There are %d test dog images.'% len(test_files))
dog_files_short = train_files[:100]
```
---
<a id='step2'></a>
## Step 2: Detect Dogs
In this section, we use a pre-trained [ResNet-50](http://ethereon.github.io/netscope/#/gist/db945b393d40bfa26006) model to detect dogs in images. Our first line of code downloads the ResNet-50 model, along with weights that have been trained on [ImageNet](http://www.image-net.org/), a very large, very popular dataset used for image classification and other vision tasks. ImageNet contains over 10 million URLs, each linking to an image containing an object from one of [1000 categories](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a). Given an image, this pre-trained ResNet-50 model returns a prediction (derived from the available categories in ImageNet) for the object that is contained in the image.
```
# define ResNet50 model
ResNet50_mod = ResNet50(weights='imagenet')
```
### Pre-process the Data
When using TensorFlow as backend, Keras CNNs require a 4D array (which we'll also refer to as a 4D tensor) as input, with shape
$$
(\text{nb_samples}, \text{rows}, \text{columns}, \text{channels}),
$$
where `nb_samples` corresponds to the total number of images (or samples), and `rows`, `columns`, and `channels` correspond to the number of rows, columns, and channels for each image, respectively.
The `path_to_tensor` function below takes a string-valued file path to a color image as input and returns a 4D tensor suitable for supplying to a Keras CNN. The function first loads the image and resizes it to a square image that is $224 \times 224$ pixels. Next, the image is converted to an array, which is then resized to a 4D tensor. In this case, since we are working with color images, each image has three channels. Likewise, since we are processing a single image (or sample), the returned tensor will always have shape
$$
(1, 224, 224, 3).
$$
The `paths_to_tensor` function takes a numpy array of string-valued image paths as input and returns a 4D tensor with shape
$$
(\text{nb_samples}, 224, 224, 3).
$$
Here, `nb_samples` is the number of samples, or number of images, in the supplied array of image paths. It is best to think of `nb_samples` as the number of 3D tensors (where each 3D tensor corresponds to a different image) in your dataset!
```
def path_to_tensor(img_path):
# loads RGB image as PIL.Image.Image type
img = image.load_img(img_path, target_size=(224, 224))
# convert PIL.Image.Image type to 3D tensor with shape (224, 224, 3)
x = image.img_to_array(img)
# convert 3D tensor to 4D tensor with shape (1, 224, 224, 3) and return 4D tensor
return np.expand_dims(x, axis=0)
def paths_to_tensor(img_paths):
list_of_tensors = [path_to_tensor(img_path) for img_path in tqdm(img_paths)]
return np.vstack(list_of_tensors)
```
### Making Predictions with ResNet-50
Getting the 4D tensor ready for ResNet-50, and for any other pre-trained model in Keras, requires some additional processing. First, the RGB image is converted to BGR by reordering the channels. All pre-trained models then apply an additional normalization step: the mean pixel (expressed in BGR as $[103.939, 116.779, 123.68]$ and calculated from all pixels in all images in ImageNet) must be subtracted from every pixel in each image. This is implemented in the imported function `preprocess_input`. If you're curious, you can check the code for `preprocess_input` [here](https://github.com/fchollet/keras/blob/master/keras/applications/imagenet_utils.py).
Now that we have a way to format our image for supplying to ResNet-50, we are ready to use the model to extract predictions. This is accomplished with the `predict` method, which returns an array whose $i$-th entry is the model's predicted probability that the image belongs to the $i$-th ImageNet category. This is implemented in the `ResNet50_predict_labels` function below.
By taking the argmax of the predicted probability vector, we obtain an integer corresponding to the model's predicted object class, which we can identify with an object category through the use of this [dictionary](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a).
```
def ResNet50_predict_labels(img_path):
# returns prediction vector for image located at img_path
img = preprocess_input(path_to_tensor(img_path))
return np.argmax(ResNet50_mod.predict(img))
```
### Write a Dog Detector
While looking at the [dictionary](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a), you will notice that the categories corresponding to dogs appear in an uninterrupted sequence and correspond to dictionary keys 151-268, inclusive, to include all categories from `'Chihuahua'` to `'Mexican hairless'`. Thus, in order to check to see if an image is predicted to contain a dog by the pre-trained ResNet-50 model, we need only check if the `ResNet50_predict_labels` function above returns a value between 151 and 268 (inclusive).
We use these ideas to complete the `dog_detector` function below, which returns `True` if a dog is detected in an image (and `False` if not).
```
### returns "True" if a dog is detected in the image stored at img_path
def dog_detector(img_path):
prediction = ResNet50_predict_labels(img_path)
return ((prediction <= 268) & (prediction >= 151))
```
### Assess the Dog Detector
We use the code cell below to test the performance of the `dog_detector` function.
- What percentage of the images in `dog_files_short` have a detected dog?
```
n_dog = np.sum([dog_detector(img) for img in dog_files_short])
dog_percentage = n_dog/len(dog_files_short)
print('{:.0%} of the files have a detected dog'.format(dog_percentage))
```
---
<a id='step3'></a>
## Step 3: Create a CNN to Classify Dog Breeds (from Scratch)
Now that we have a function for detecting dogs in images, we need a way to predict breed from images. In this step, you will create a CNN that classifies dog breeds. You must create your CNN _from scratch_ (so, you can't use transfer learning _yet_!), and you must attain a test accuracy of at least 1%. In later steps, you will have the opportunity to use transfer learning to create a CNN that attains greatly improved accuracy.
Be careful with adding too many trainable layers! More parameters means longer training, which means you are more likely to need a GPU to accelerate the training process. Thankfully, Keras provides a handy estimate of the time that each epoch is likely to take; you can extrapolate this estimate to figure out how long it will take for your algorithm to train.
We mention that the task of assigning breed to dogs from images is considered exceptionally challenging. To see why, consider that *even a human* would have great difficulty in distinguishing between a Brittany and a Welsh Springer Spaniel.
Brittany | Welsh Springer Spaniel
- | -
<img src="images/Brittany_02625.jpg" width="100"> | <img src="images/Welsh_springer_spaniel_08203.jpg" width="200">
It is not difficult to find other dog breed pairs with minimal inter-class variation (for instance, Curly-Coated Retrievers and American Water Spaniels).
Curly-Coated Retriever | American Water Spaniel
- | -
<img src="images/Curly-coated_retriever_03896.jpg" width="200"> | <img src="images/American_water_spaniel_00648.jpg" width="200">
Likewise, recall that labradors come in yellow, chocolate, and black. Your vision-based algorithm will have to conquer this high intra-class variation to determine how to classify all of these different shades as the same breed.
Yellow Labrador | Chocolate Labrador | Black Labrador
- | - | -
<img src="images/Labrador_retriever_06457.jpg" width="150"> | <img src="images/Labrador_retriever_06455.jpg" width="240"> | <img src="images/Labrador_retriever_06449.jpg" width="220">
We also mention that random chance presents an exceptionally low bar: setting aside the fact that the classes are slightly imbalanced, a random guess will provide a correct answer roughly 1 in 133 times, which corresponds to an accuracy of less than 1%.
Remember that the practice is far ahead of the theory in deep learning. Experiment with many different architectures, and trust your intuition. And, of course, have fun!
### Pre-process the Data
We rescale the images by dividing every pixel in every image by 255.
```
# pre-process the data for Keras
train_tensors = paths_to_tensor(train_files).astype('float32')/255
valid_tensors = paths_to_tensor(valid_files).astype('float32')/255
test_tensors = paths_to_tensor(test_files).astype('float32')/255
```
### Model Architecture
Create a CNN to classify dog breed. At the end of your code cell block, summarize the layers of your model by executing the line:
model.summary()
Here is a sample architecture of such a model:

```
# Define the model architecture
model = Sequential()
model.add(Conv2D(input_shape=train_tensors.shape[1:],filters=16,kernel_size=2, activation='relu'))
model.add(MaxPooling2D())
model.add(Conv2D(filters=32,kernel_size=2, activation='relu'))
model.add(MaxPooling2D())
model.add(Conv2D(filters=64,kernel_size=2, activation='relu'))
model.add(MaxPooling2D())
model.add(GlobalAveragePooling2D())
model.add(Dense(133,activation='softmax'))
model.summary()
```
### Compile the Model
```
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
```
### Train the Model
Train your model in the code cell below. Use model checkpointing to save the model that attains the best validation loss.
```
### specify the number of epochs that you would like to use to train the model.
# Train for 20 epochs only when using a GPU, otherwise it will take a lot of time
# epochs = 20
# Train for 1 epoch when using a CPU.
epochs = 1
os.makedirs('saved_models', exist_ok=True)
checkpointer = ModelCheckpoint(filepath='saved_models/weights.best.from_scratch.hdf5',
verbose=1, save_best_only=True)
model.fit(train_tensors, train_targets,
validation_data=(valid_tensors, valid_targets),
epochs=epochs, batch_size=32, callbacks=[checkpointer], verbose=1)
```
### Load the Model with the Best Validation Loss
```
model.load_weights('saved_models/weights.best.from_scratch.hdf5')
```
### Test the Model
Try out your model on the test dataset of dog images. Ensure that your test accuracy is greater than 1%.
```
# get index of predicted dog breed for each image in test set
dog_breed_predictions = [np.argmax(model.predict(np.expand_dims(tensor, axis=0))) for tensor in test_tensors]
# report test accuracy
test_accuracy = 100*np.sum(np.array(dog_breed_predictions)==np.argmax(test_targets, axis=1))/len(dog_breed_predictions)
print('Test accuracy: %.4f%%' % test_accuracy)
```
---
<a id='step4'></a>
## Step 4: Create a CNN (VGG16) to Classify Dog Breeds (using Transfer Learning)
To reduce training time without sacrificing accuracy, we show you how to train a CNN using Transfer Learning. In the following step, you will get a chance to use Transfer Learning to train your own CNN.
Transfer Learning fine-tunes a network that was pre-trained on some big dataset by attaching new classification layers. The idea behind it is that we want to keep all the good features learned in the lower layers of the network (because there is a high probability the new images will also have those features) and just learn a new classifier on top of them. This tends to work well, especially with small datasets that don't allow for a full training of the network from scratch (it's also much faster than a full training).
One way of doing Transfer Learning is by using bottlenecks. A bottleneck, also called embedding, is the internal representation of one of the input samples in the network, at a certain depth level. We can think of a bottleneck at level N as the output of the network stopped after N layers. Why is this useful? Because we can precompute the bottlenecks for all our samples using a pre-trained network and then simulate the training of only the last layers of the network without having to actually recompute all the (expensive) parts up to the bottleneck point.
Here we will use pre-computed bottlenecks, but if you want to see how you could generate them yourself, take a look [here](https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html).
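For reference, below is a minimal sketch of how such VGG-16 bottleneck features could be computed with Keras, assuming the `paths_to_tensor` helper from Step 2 and the `dogImages` folders downloaded in Step 0 (this is an illustration, not necessarily how the `.npz` files above were produced):
```
from keras.applications.vgg16 import VGG16, preprocess_input as vgg16_preprocess

# convolutional base only, with ImageNet weights and no classification head
vgg16_base = VGG16(weights='imagenet', include_top=False)

def compute_vgg16_bottlenecks(img_paths):
    # (nb_samples, 224, 224, 3) tensor, preprocessed the way VGG-16 expects
    tensors = vgg16_preprocess(paths_to_tensor(img_paths))
    # output of the last convolutional block, e.g. (nb_samples, 7, 7, 512)
    return vgg16_base.predict(tensors, batch_size=32)

# e.g. train_VGG16 = compute_vgg16_bottlenecks(train_files)
```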
### Obtain Bottleneck Features
```
bottleneck_features = np.load('bottleneck_features/DogVGG16Data.npz')
train_VGG16 = bottleneck_features['train']
valid_VGG16 = bottleneck_features['valid']
test_VGG16 = bottleneck_features['test']
```
### Model Architecture
The model uses the pre-trained VGG-16 architecture as a fixed feature extractor, where the last convolutional output of VGG-16 is fed as input to our model. We only add a global average pooling layer and a fully connected layer, where the latter contains one node for each dog category and is equipped with a softmax.
```
VGG16_model = Sequential()
VGG16_model.add(GlobalAveragePooling2D(input_shape=train_VGG16.shape[1:]))
VGG16_model.add(Dense(133, activation='softmax'))
VGG16_model.summary()
```
### Compile the Model
```
VGG16_model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
```
### Train the Model
```
os.makedirs('saved_models', exist_ok=True)
checkpointer = ModelCheckpoint(filepath='saved_models/weights.best.VGG16.hdf5',
verbose=1, save_best_only=True)
VGG16_model.fit(train_VGG16, train_targets,
validation_data=(valid_VGG16, valid_targets),
epochs=20, batch_size=32, callbacks=[checkpointer], verbose=1)
```
### Load the Model with the Best Validation Loss
```
VGG16_model.load_weights('saved_models/weights.best.VGG16.hdf5')
```
### Test the Model
Now, we can use the CNN to test how well it identifies breed within our test dataset of dog images. We print the test accuracy below.
```
# get index of predicted dog breed for each image in test set
VGG16_predictions = [np.argmax(VGG16_model.predict(np.expand_dims(feature, axis=0))) for feature in test_VGG16]
# report test accuracy
test_accuracy = 100*np.sum(np.array(VGG16_predictions)==np.argmax(test_targets, axis=1))/len(VGG16_predictions)
print('Test accuracy: %.4f%%' % test_accuracy)
```
### Predict Dog Breed with the Model
```
def VGG16_predict_breed(img_path):
# extract bottleneck features
bottleneck_feature = ebf.extract_VGG16(path_to_tensor(img_path))
# obtain predicted vector
predicted_vector = VGG16_model.predict(bottleneck_feature)
# return dog breed that is predicted by the model
return dog_names[np.argmax(predicted_vector)].split('.')[-1]
# Show first dog image
img_path = test_files[0]
img = cv2.imread(img_path)
cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.imshow(cv_rgb)
plt.show()
# Print groundtruth and predicted dog breed
gtruth = np.argmax(test_targets[0])
gtruth = dog_names[gtruth].split('.')[-1]
pred = VGG16_predict_breed(img_path)
print("Groundtruth dog breed: {}".format(gtruth))
print("Predicted dog breed: {}".format(pred))
```
---
<a id='step5'></a>
## Step 5: Create a CNN (ResNet-50) to Classify Dog Breeds (using Transfer Learning)
You will now use transfer learning to create a CNN that can identify dog breed from images. Your CNN must attain at least 60% accuracy on the test set.
In Step 4, we used transfer learning to create a CNN using VGG-16 bottleneck features. In this section, we will use the bottleneck features from a different pre-trained model.
### Obtain Bottleneck Features
```
bottleneck_features = np.load('bottleneck_features/DogResnet50Data.npz')
train_ResNet50 = bottleneck_features['train']
valid_ResNet50 = bottleneck_features['valid']
test_ResNet50 = bottleneck_features['test']
```
### Model Architecture
Create a CNN to classify dog breed. At the end of your code cell block, summarize the layers of your model.
```
ResNet50_model = Sequential()
ResNet50_model.add(Flatten(input_shape=train_ResNet50.shape[1:]))
# The layer below includes a hyperparameter (nodes_number)
ResNet50_model.add(Dense(int(nodes_number), activation='relu'))
ResNet50_model.add(Dense(133, activation='softmax'))
# Summarize the layers of the model
ResNet50_model.summary()
```
### Compile the Model
```
### Learning rate (learning_rate) is a hyperparameter in this example
opt = optimizers.Adam(float(learning_rate))
ResNet50_model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
```
### Train the Model
Train your model in the code cell below. Use model checkpointing to save the model that attains the best validation loss.
```
os.makedirs('saved_models', exist_ok=True)
checkpointer = ModelCheckpoint(filepath='saved_models/weights.best.ResNet50.hdf5',
verbose=1, save_best_only=True)
### Train the model.
ResNet50_model.fit(train_ResNet50, train_targets,
validation_data=(valid_ResNet50, valid_targets),
epochs=20, batch_size=32, callbacks=[checkpointer], verbose=1)
```
### Load the Model with the Best Validation Loss
```
### Load the model weights with the best validation
ResNet50_model.load_weights('saved_models/weights.best.ResNet50.hdf5')
```
### Test the Model
Try out your model on the test dataset of dog images. Ensure that your test accuracy is greater than 60%.
```
# get index of predicted dog breed for each image in test set
ResNet50_predictions = [np.argmax(ResNet50_model.predict(np.expand_dims(feature, axis=0))) for feature in test_ResNet50]
# report test accuracy
test_accuracy_resnet = 100*np.sum(np.array(ResNet50_predictions)==np.argmax(test_targets, axis=1))/len(ResNet50_predictions)
print('Test accuracy: %.4f%%' % test_accuracy_resnet)
```
### Predict Dog Breed with the Model
```
def predict_breed(img_path):
img = path_to_tensor(img_path)
bottleneck_feature = ebf.extract_Resnet50(img)
predicted = ResNet50_model.predict(bottleneck_feature)
idx = np.argmax(predicted)
return dog_names[idx].split('.')[-1]
# Show first dog image
img_path = test_files[0]
img = cv2.imread(img_path)
cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.imshow(cv_rgb)
plt.show()
# Print groundtruth and predicted dog breed
gtruth = np.argmax(test_targets[0])
gtruth = dog_names[gtruth].split('.')[-1]
pred = predict_breed(img_path)
print("Groundtruth dog breed: {}".format(gtruth))
print("Predicted dog breed: {}".format(pred))
```
---
<a id='step6'></a>
## Step 6: Write your Algorithm
Write an algorithm that accepts a file path to an image and first determines whether the image contains a dog or not. Then,
- if a __dog__ is detected in the image, return the predicted breed.
```
def return_breed(img_path):
pred = None
dog = False
if dog_detector(img_path):
dog = True
print('Dog detected')
else:
print('No dog detected')
if dog:
pred = predict_breed(img_path)
print('This photo looks like a(n) {}'.format(pred))
return pred
# Run for the second dog image
img_path = test_files[1]
img = cv2.imread(img_path)
cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.imshow(cv_rgb)
plt.show()
pred = return_breed(img_path)
```
---
<a id='step7'></a>
## Step 7: Test Your Algorithm
In this section, you will take your new algorithm for a spin! If you have a dog, does it predict your dog's breed accurately? If you have a cat, does it mistakenly think that your cat is a dog?
```
for img_path in sorted(glob("check_images/*")):
print(img_path)
img = cv2.imread(img_path)
cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.imshow(cv_rgb)
plt.show()
return_breed(img_path)
```
### Pipeline Metrics
This is the `pipeline-metrics` cell. Use it to define the pipeline metrics that KFP will produce for every pipeline run. Kale will associate each of these metrics with the steps that produced them. Also, you will have to choose one of these metrics as the Katib search objective metric.
```
print(test_accuracy_resnet)
```
# Visual English
### Eryk Wdowiak
This notebook attempts to illustrate the English text that we're using to develop a neural machine translator.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import cm
%matplotlib inline
import nltk
from nltk.tokenize import word_tokenize
from nltk.probability import FreqDist
from nltk.collocations import *
# import string
# import re
from wordcloud import WordCloud
import mxnet as mx
from mxnet import gluon
from mxnet import nd
import gluonnlp as nlp
from data import transform_data_word2vec, preprocess_dataset
from model import SG, CBOW
from utils import print_time
context = mx.cpu()
## I thought this function would do far more than just run NLTK tokenizer.
## We'll leave it in place. It keeps our options open.
def process_line(line):
tokens = word_tokenize(line)
return tokens
## read in the lemmatized data
df = pd.read_csv('dataset/train-mparamu_v2-lemmatized.en',header=None)
df.columns = ['en_text']
# df.head()
```
### frequencies
```
## flatten data to count words
proc_eng = list(map(process_line, df.en_text))
flat_eng = [item for sublist in proc_eng for item in sublist]
freq_eng = FreqDist(flat_eng)
freq_eng.most_common(20)
```
### counts
```
# create counts
eng_bar_words = [x[0] for x in freq_eng.most_common(25)]
eng_bar_counts = [x[1] for x in freq_eng.most_common(25)]
# put data into dictionary
eng_dict = dict(zip(eng_bar_words, eng_bar_counts))
# set the color of our bar graphs
color = cm.viridis_r(np.linspace(.4,.8, 30))
fig, axs = plt.subplots(figsize=(8,4))
axs.bar(eng_bar_words, eng_bar_counts , color=color)
axs.title.set_text('most common English lemmas')
for ax in fig.axes:
plt.sca(ax)
plt.xticks(rotation=45)
plt.tight_layout(pad=0)
plt.savefig('wb-en_lemmas.png')
plt.show()
# create cloud of English lemmas by frequency
wordcloud = WordCloud(colormap='Spectral').generate_from_frequencies(eng_dict)
plt.figure(figsize=(10,10), facecolor='k')
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
plt.tight_layout(pad=0)
plt.savefig('wb-en_lemma-cloud.png')
plt.show()
```
### make wordcloud from embeddings
```
## load datafile (so that we can retrieve vocabulary)
datafile = 'dataset/train-mparamu_v3-lemmatized.en.tsv'
## CBOW model
model = CBOW
parmfile = './logs/en-cbow-r4-e01.params'
fname_insert = 'cbow'
## skipgram model
# model = SG
# parmfile = './logs/en-skip-r2-e24.params'
# fname_insert = 'skip'
## both trained with hyperparameters:
output_dim = 300
batch_size = 128
## load the data
data = nlp.data.TSVDataset(datafile)
data, vocab, idx_to_counts = preprocess_dataset( data )
## load the model
embedding = model(token_to_idx=vocab.token_to_idx, output_dim=output_dim,
batch_size=batch_size, #num_negatives=num_negatives,
negatives_weights=mx.nd.array(idx_to_counts))
embedding.load_parameters(parmfile)
## get the word vectors
wvecs = embedding.embedding_out.weight.data()
## count words with at least "min_words" appearances
min_words = 10
num_over_min = len( np.array(idx_to_counts)[ np.array(idx_to_counts)>= min_words ] )
print('vocabulary length: ' + str(len(vocab)))
print('lemmas over ' + str(min_words) + ' times: ' + str(num_over_min))
## pairwise cosine similarity
def cos_sim(wordx, wordy):
xx = wvecs[vocab.token_to_idx[wordx],]
yy = wvecs[vocab.token_to_idx[wordy],]
return nd.dot(xx, yy) / (nd.norm(xx) * nd.norm(yy))
## full matrix of cosine similarity
def cos_mat( vecs ):
## dot product divided by the norms
xtx = nd.dot( vecs , vecs.T)
nmx = nd.sqrt( nd.diag(xtx) ).reshape((-1,1))
cnm = nd.dot( nmx , nmx.T )
return xtx / cnm
## create "WC Dict" ("word-to-cosine dictionary") for wordcloud
def mk_wcdict(word,k_words):
## where to start? first two tokens are: <BOS> <EOS>
sv_start = 2
## get cosine matrix
cosmat = cos_mat( wvecs[sv_start:-1,] )
## get the row of cosines
idx_to_lookup = vocab.token_to_idx[word] - sv_start
row_looked_up = cosmat[idx_to_lookup,]
## nearest neighbors by cosine similarity
knn_cosmat = row_looked_up.argsort()[::-1][1:k_words+1].astype(int).asnumpy()
## indexes of nearest neighbors in vocab list
knn_vocab_idx = list(knn_cosmat + sv_start)
## get the words and cosine measures
knn_vocab_words = [vocab.idx_to_token[idx] for idx in knn_vocab_idx]
knn_vocab_cosines = [cosmat[idx_to_lookup,idx].asnumpy()[0] for idx in knn_cosmat]
## return the dictionary for wordcloud
return dict(zip(knn_vocab_words,knn_vocab_cosines))
# create a cloud of 25 words for Don Chisciotti!
knn_wc_dict = mk_wcdict('chisciotti',25)
wordcloud = WordCloud(colormap='Spectral').generate_from_frequencies(knn_wc_dict)
plt.figure(figsize=(10,10), facecolor='k')
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
plt.tight_layout(pad=0)
fname = 'wc-en-' + fname_insert + '_chisciotti.png'
plt.savefig(fname)
plt.show()
# create a cloud of 25 words for Sanciu Panza!
knn_wc_dict = mk_wcdict('sanciu',25)
wordcloud = WordCloud(colormap='Spectral').generate_from_frequencies(knn_wc_dict)
plt.figure(figsize=(10,10), facecolor='k')
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
plt.tight_layout(pad=0)
fname = 'wc-en-' + fname_insert + '_sanciu.png'
plt.savefig(fname)
plt.show()
```
### bigrams and trigrams
```
bigram_measures = nltk.collocations.BigramAssocMeasures()
trigram_measures = nltk.collocations.TrigramAssocMeasures()
eng_bi_finder = BigramCollocationFinder.from_words(flat_eng)
# eng_bi_finder.apply_freq_filter(5)
eng_bi_scored = eng_bi_finder.score_ngrams(bigram_measures.raw_freq)
eng_bi_scored[:10]
eng_bi_pmi_finder = BigramCollocationFinder.from_words(flat_eng)
eng_bi_pmi_finder.apply_freq_filter(5)
eng_bi_pmi_scored = eng_bi_pmi_finder.score_ngrams(bigram_measures.pmi)
eng_bi_pmi_scored[0:10]
eng_tri_finder = TrigramCollocationFinder.from_words(flat_eng)
# eng_tri_finder.apply_freq_filter(5)
eng_tri_scored = eng_tri_finder.score_ngrams(trigram_measures.raw_freq)
eng_tri_scored[:10]
eng_tri_pmi_finder = TrigramCollocationFinder.from_words(flat_eng)
eng_tri_pmi_finder.apply_freq_filter(5)
eng_tri_pmi_scored = eng_tri_pmi_finder.score_ngrams(trigram_measures.pmi)
eng_tri_pmi_scored[0:10]
```

# 02 - RDD: RESILIENT DISTRIBUTED DATASETS
An immutable, distributed collection of elements that can be manipulated in parallel
A Spark program operates on RDDs:
Spark automatically distributes the data and parallelizes the operations
```
!pip install pyspark
# Create apache spark context
from pyspark import SparkContext
sc = SparkContext(master="local", appName="Mi app")
# Stop apache spark context
sc.stop()
```
## Creating RDDs
They can be created in two ways:
1. By parallelizing a collection in the driver program
2. By reading data from a file
```
# 1. Parallelizing a collection in the driver program
rdd1 = sc.parallelize([1,2,3])
print("rdd1: ", rdd1.glom().collect())
import numpy as np
rdd2 = sc.parallelize(np.array(range(100)))
print("rdd2: ", rdd2.glom().collect())
# 2. Reading data from a file
quijote = sc.textFile("data/quijote.txt")
print(quijote.take(1000))
```
## Partitions
Spark splits the RDD into a set of partitions
- The default number of partitions is a function of the cluster size or of the number of blocks in the file (e.g. HDFS blocks)
- A different value can be specified when the RDD is created
```
import numpy as np
rdd1 = sc.parallelize(np.array(range(100)))
print("rdd1: ", rdd2.glom().collect())
print(rdd1.getNumPartitions())
print("------------")
rdd2 = sc.parallelize(np.array(range(100)), 6)
print(rdd2.glom().collect())
print(rdd2.getNumPartitions())
```
## Transformations
Operations on RDDs that return a new RDD
- They are computed *lazily*
- They usually apply a function (anonymous or not) to each element of the source RDD
```
quijs = quijote.filter(lambda l: "Quijote" in l)
sanchs = quijote.filter(lambda l: "Sancho" in l)
quijssancs = quijs.intersection(sanchs)
quijssancs.cache()
```
### Element-wise transformations
They generate a new RDD from a given one
All the transformations:
- filter(func)
- map(func)
- flatMap(func)
- sample(withReplacement, fraction, seed=None)
- distinct()
- groupBy(func)
---
* `filter(func)` filters the elements of an RDD
```
# Get the non-negative values from a range of numbers
rdd = sc.parallelize(range(-5, 5)) # Range [-5, 5)
filtered_rdd = rdd.filter(lambda x: x >= 0) # Keep the non-negative values
assert filtered_rdd.collect() == [0, 1, 2, 3, 4]
print(filtered_rdd.collect())
print([0, 1, 2, 3, 4])
```
* `map(func)` applies a function to each element of an RDD
```
def add1(x):
    return x + 1

print("Original RDD:", filtered_rdd.collect())
squared_rdd = (filtered_rdd
               .map(add1)                # Add 1 to each element of the RDD
               .map(lambda x: (x, x*x))) # For each element, build a tuple (x, x*x)
print("Expected result:", [(1, 1), (2, 4), (3, 9), (4, 16), (5, 25)])
print("Obtained result:", squared_rdd.collect())
```
* `flatMap(func)` same as `map`, but it "flattens" the output
```
squaredflat_rdd = (filtered_rdd
                   .map(add1)
                   .flatMap(lambda x: (x, x*x))) # Flatten the output into a single list
print("Expected result:", [1, 1, 2, 4, 3, 9, 4, 16, 5, 25])
print("Obtained result:", squaredflat_rdd.collect())
```
* `sample(withReplacement, fraction, seed=None)` returns a sample of the RDD
* `withReplacement` - if True, each element can appear several times in the sample
* `fraction` - expected size of the sample as a fraction of the RDD's size
- **without replacement**: probability of selecting an element; its value must be in [0, 1]
- **with replacement**: expected number of times each element is chosen; its value must be >= 0
* `seed` - seed for the random number generator
```
srdd1 = squaredflat_rdd.sample(False, 0.5)
srdd2 = squaredflat_rdd.sample(True, 2)
srdd3 = squaredflat_rdd.sample(False, 0.8, 14)
print('s1={0}\ns2={1}\ns3={2}'.format(srdd1.collect(), srdd2.collect(), srdd3.collect()))
```
* `distinct()` returns a new RDD without duplicates
* The order of the output is not defined
```
distinct_rdd = squaredflat_rdd.distinct()
print("Original: ", squaredflat_rdd.collect())
print("Resultado: ", distinct_rdd.collect())
```
* `groupBy(func)` returns an RDD with the data grouped in key/value format, using a function to obtain the key
```
grouped_rdd = distinct_rdd.groupBy(lambda x: x%3)
print(grouped_rdd.collect())
print([(x,sorted(y)) for (x,y) in grouped_rdd.collect()])
```
---
### Transformations on two RDDs
Set-like operations on two RDDs
Available transformations:
* `rdda.union(rddb)`
* `rdda.intersection(rddb)`
* `rdda.subtract(rddb)`
* `rdda.cartesian(rddb)`
---
* `rdda.union(rddb)` returns an RDD with the data from both input RDDs
```
rdda = sc.parallelize(['a', 'b', 'c'])
rddb = sc.parallelize(['c', 'd', 'e'])
rddu = rdda.union(rddb)
print("Resultado Esperado:", ['a', 'b', 'c', 'c', 'd', 'e'])
print("Resultado obtenido", rddu.collect())
```
* `rdda.intersection(rddb)` returns an RDD with the elements common to both RDDs
```
rddi = rdda.intersection(rddb)
print("Resultado Esperado:", ['c'])
print("Resultado obtenido", rddi.collect())
```
* `rdda.subtract(rddb)` returns an RDD with the elements of the first RDD minus those of the second; `rdda.cartesian(rddb)` returns the Cartesian product of the two RDDs
```
rdds = rdda.subtract(rddb)
print("Resultado Esperado:", ['a', 'b'])
print("Resultado obtenido", rdds.collect())
rddc = rdda.cartesian(rddb)
print("Resultado Esperado:", [('a','c'),('a','d'),('a','e'),('b','c'),('b','d'),('b','e'),('c','c'), ('c','d'), ('c','e')])
print("Resultado obtenido", rddc.collect())
```
## Actions
They produce output data from the RDDs
* They return values to the driver or to the storage system
* They force the pending transformations to be executed, as the example below shows
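As a minimal illustration, reusing the `quijssancs` RDD cached above: no Spark job has run for it yet, and calling an action such as `count()` triggers the pending `filter` and `intersection` transformations.
```
# The cached intersection of lines mentioning "Quijote" and "Sancho" is only
# materialized now, when an action is invoked on it.
print(quijssancs.count())
```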
### Actions on simple RDDs
They obtain (simple or compound) data from an RDD
#### Main aggregation actions: `reduce` and `fold`
* `reduce(op)` combines the elements of an RDD in parallel, applying an operator
* The reduction operator must be a *commutative monoid* (an associative and commutative binary operator)
* The reduction is first performed at the partition level, and then the intermediate values are reduced
```
rdd = sc.parallelize(range(1,10), 8) # range [1, 10)
print(rdd.glom().collect())
# Reduction with a lambda function
p = rdd.reduce(lambda x,y: x*y) # r = 1*2*3*4*5*6*7*8*9 = 362880
print("1*2*3*4*5*6*7*8*9 = {0}".format(p))
# Reduction with a predefined operator
from operator import add
s = rdd.reduce(add) # s = 1+2+3+4+5+6+7+8+9 = 45
print("1+2+3+4+5+6+7+8+9 = {0}".format(s))
# Try a non-commutative operator
p = rdd.reduce(lambda x,y: x-y) # r = 1-2-3-4-5-6-7-8-9 = -43
print("1-2-3-4-5-6-7-8-9 = {0}".format(p))
# It does not work on empty RDDs
#sc.parallelize([]).reduce(add)
```
* `fold(zero, op)` general version of `reduce`:
* We must provide an initial value `zero` for the operator
* The initial value must be the identity value for the operator (e.g. 0 for addition, 1 for multiplication, or an empty list for list concatenation)
* It allows empty RDDs to be used
* The function `op` must be a commutative monoid to guarantee a consistent result
* This behavior differs from the `fold` operations of languages such as Scala
* The operator is applied at the partition level (using `zero` as the initial value), and finally across all partitions (using `zero` again)
* For non-commutative operators, the result could differ from that of a sequential `fold`
```
rdd = sc.parallelize([[1, 2, 3, 4], [-10, -9, -8, -7, -6, -5, -4], ['a', 'b', 'c']])
print(rdd.glom().collect())
f = rdd.fold([], lambda x,y: x+y)
print(f)
# A fold of an empty RDD is possible
sc.parallelize([]).fold(0, add)
```
#### Other aggregation actions: `aggregate`
* `aggregate(zero, seqOp, combOp)`: returns a collection by aggregating the elements of the RDD using two functions:
1. `seqOp` - aggregation at the partition level: an accumulator is created per partition (initialized to `zero`) and the values of the partition are aggregated into the accumulator
2. `combOp` - aggregation across partitions: the accumulators of all the partitions are aggregated
* Both aggregations use an initial value `zero` (as in the case of `fold`).
* General version of `reduce` and `fold`
* The first function (`seqOp`) can return a type U different from the type T of the RDD's elements
* `seqOp` aggregates data of type T and returns a type U
* `combOp` aggregates data of type U
* `zero` must be of type U
* It allows returning a type different from that of the input RDD's elements.
```
l = [1, 2, 3, 4, 5, 6, 7, 8]
rdd = sc.parallelize(l)
# acc is a tuple of three elements (List, Double, Int)
# In the first element of acc (a list) we append the squared elements of the RDD,
# in the second we accumulate the elements of the RDD using multiplication,
# and in the third we count the elements of the RDD
seqOp = (lambda acc, val: (acc[0]+[val*val],
acc[1]*val,
acc[2]+1))
# A tuple of type acc is generated for each partition
# This operation combines the three elements of the tuples
combOp = (lambda acc1, acc2: (acc1[0]+acc2[0],
acc1[1]*acc2[1],
acc1[2]+acc2[2]))
a = rdd.aggregate(([], 1., 0), seqOp, combOp)
print(a)
print("Resultado Esperado:", a[1])
print("Resultado obtenido", 8.*7.*6.*5.*4.*3.*2.*1.)
print("--------------")
print("Resultado Esperado:", a[2])
print("Resultado obtenido", len(l))
```
#### Actions for counting elements
- `count()` returns an integer with the exact number of elements in the RDD
- `countApprox(timeout, confidence=0.95)` approximate version of `count()` that returns a potentially incomplete result within a maximum time, even if not all tasks have finished. (Experimental).
- `timeout` is a long integer and indicates the time in milliseconds
- `confidence` probability of obtaining the real value. If `confidence` is 0.90, it means that if the method is run multiple times, the correct value is expected in 90% of them. Value in [0, 1]
- `countApproxDistinct(relativeSD=0.05)` returns an estimate of the number of distinct elements in the RDD. (Experimental).
- `relativeSD` – relative accuracy (smaller values imply a smaller error but require more memory; it must be greater than 0.000017).
```
rdd = sc.parallelize([i % 20 for i in range(10000)], 16)
#print(rdd.collect())
print("Número total de elementos: {0}".format(rdd.count()))
print("Número de elementos distintos: {0}".format(rdd.distinct().count()))
print("Número total de elementos (aprox.): {0}".format(rdd.countApprox(1, 0.4)))
print("Número de elementos distintos (approx.): {0}".format(rdd.countApproxDistinct(0.5)))
```
- `countByValue()` returns the number of occurrences of each element of the RDD as a key/value map (or dictionary)
- The keys are the elements of the RDD and each value is the number of occurrences of its associated key
```
rdd = sc.parallelize(list("abracadabra")).cache()
mimapa = rdd.countByValue()
print(type(mimapa))
print(mimapa.items())
```
#### Actions for retrieving values
These methods must be used with care: if the expected result is very large, it can exhaust the driver's memory
- `collect()` returns a list with all the elements of the RDD
```
lista = rdd.collect()
print(lista)
```
- `take(n)` returns the first `n` elements of the RDD
- `takeSample(withRep, n, [seed])` returns `n` random elements of the RDD
- `withRep`: if True, the same element can appear several times in the sample
- `seed`: seed for the random number generator
```
t = rdd.take(4)
print(t)
s = rdd.takeSample(False, 4)
print(s)
```
- `top(n)` returns a list with the first `n` elements of the RDD sorted in descending order
- `takeOrdered(n, [order])` returns a list with the first `n` elements of the RDD in ascending order (the opposite of `top`), or following the order given by the optional function
```
rdd = sc.parallelize([8, 4, 2, 9, 3, 1, 10, 5, 6, 7]).cache()
print("4 elementos más grandes: {0}".format(rdd.top(4)))
print("4 elementos más pequeños: {0}".format(rdd.takeOrdered(4)))
print("4 elementos más grandes: {0}".format(rdd.takeOrdered(4, lambda x: -x)))
```
# Synchronisation in Complex Networks
```
import numpy as np
import matplotlib.pylab as plt
import networkx as nx
from NetworkFunctions import *
from NetworkClasses import *
N = 100; # number of nodes
m = 2;
G = nx.barabasi_albert_graph(N,m,seed=None); # Barabasi-Albert graph
A = nx.to_numpy_matrix(G); # creates adjacency matrix
w = np.random.uniform(-2, 2, N); # defines natural frequencies
K = .5 # coupling constant
alpha = 1 # SL parameter
F = np.zeros(N)
for i in range(int(N/5)):
F[5*i] = 1
Omega = np.pi
# initial conditions
theta0 = np.random.uniform(0, 2*np.pi, N)
rho0 = np.random.uniform(0.1, 0.9, N) # so the system doesn't fall into the attractor
z0 = rho0*np.exp(1j*theta0)
z0[:5]
nx.draw(G, node_color='turquoise', edge_color='grey', with_labels=True)
plt.show()
```
## Stuart-Landau Model
The equations for a (forced) complex network of $N$ Stuart-Landau oscillators with natural frequencies $\omega_k$, limit cycle parameter $\alpha$, adjacency matrix $A$, coupling strength (or average coupling strength, for the case where $A$ is weighted) $\lambda$ and a forced term of type $ F_k e^{i \Omega t} $ that acts on a fraction $f = N_F/N$, where $N_F$ is the number of forced oscillators (nonzero $F$), can be written in the following forms:
### 1. Complex Form
$$ \dot{z}_k (z,t) = \{ \alpha^2 + i \omega - |z_k|^2 \} z_k + \lambda \sum_{j=1}^N A_{ij} (z_j - z_k) + F_k e^{i \Omega t} $$
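As a reference for the complex form above, here is a minimal sketch of its right-hand side written directly from the equation (an illustration, not the `StuartLandau` class from `NetworkClasses`; `A` is cast to a plain array because `nx.to_numpy_matrix` returns a matrix type):
```
def stuart_landau_rhs(z, t, w, A, lam, alpha, F, Omega):
    A = np.asarray(A)                        # adjacency matrix as a plain ndarray
    dz = z[None, :] - z[:, None]             # dz[k, j] = z_j - z_k
    coupling = lam * (A * dz).sum(axis=1)    # lambda * sum_j A_kj (z_j - z_k)
    return (alpha**2 + 1j * w - np.abs(z)**2) * z + coupling + F * np.exp(1j * Omega * t)
```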
### 2. Real Polar Form
Substituting $z_k = \rho_k e^{i \theta_k}$ in the above equation, we find:
$$ \dot{\rho}_i (\rho, \theta, t) = \rho_i (\alpha^2 - \rho_i^2) + \lambda \sum_{j=1}^N A_{ij} \left\{ \rho_j \cos{(\theta_j - \theta_i)} - \rho_i \right\} + F_i \cos{(\Omega t - \theta_i)} $$
$$ \dot{\theta}_i (\rho, \theta, t) = \omega_i + \lambda \sum_{j=1}^N A_{ij} \frac{\rho_j}{\rho_i} \sin{(\theta_j - \theta_i)} + F_i \sin{(\Omega t - \theta_i)} $$
The Jacobian is then:
$$ J = \left[ \begin{matrix} \frac{\partial \dot{\rho}}{\partial \rho} && \frac{\partial \dot{\rho}}{\partial \theta} \\
\frac{\partial \dot{\theta}}{\partial \rho} && \frac{\partial \dot{\theta}}{\partial \theta} \end{matrix} \right] $$
where, for a network with no self-edges ($A_{jj} = 0\ \forall j$):
$$ \frac{\partial \dot{\rho}_i}{\partial \rho_j} = (\alpha^2 - 3\rho_i^2 - \lambda k_i) \delta_{ij} + \lambda A_{ij} \cos{(\theta_j - \theta_i)} $$
$$ \frac{\partial \dot{\rho}_i}{\partial \theta_j} = - \lambda A_{ij} \rho_j \sin{(\theta_j - \theta_i)} - \delta_{ij} F_i \sin{(\Omega t - \theta_i)} $$
$$ \frac{\partial \dot{\theta}_i}{\partial \rho_j} = \frac{\lambda}{\rho_i} A_{ij} \sin{(\theta_j - \theta_i)} $$
$$ \frac{\partial \dot{\theta}_i}{\partial \theta_j} = \lambda A_{ij} \frac{\rho_j}{\rho_i} \cos{(\theta_j - \theta_i)} + \delta_{ij} F_i \cos{(\Omega t - \theta_i)}$$
### 3. Real Rectangular Form
Substituting $z_k = x_k + iy_k$ in the complex system holds:
$$ \dot{x}_i (x, y, t) = (\alpha^2 - x^2_i - y_i^2) x_i - \omega_i y_i + \lambda \sum_{j=1}^N A_{ij} (x_j - x_i) + F_i \cos{(\Omega t)} $$
$$ \dot{y}_i (x, y, t) = (\alpha^2 - x^2_i - y_i^2) y_i + \omega_i x_i + \lambda \sum_{j=1}^N A_{ij} (y_j - y_i) + F_i \sin{(\Omega t)} $$
The Jacobian is then defined by:
$$ J = \left[ \begin{matrix} \frac{\partial \dot{x}}{\partial x} && \frac{\partial \dot{x}}{\partial y} \\
\frac{\partial \dot{y}}{\partial x} && \frac{\partial \dot{y}}{\partial y} \end{matrix} \right] $$
where:
$$ \frac{\partial \dot{x}_i}{\partial x_j} = \delta_{ij} (\alpha^2 - y_i^2 - 3x_i^2 - \lambda k_i) + \lambda A_{ij} $$
$$ \frac{\partial \dot{x}_i}{\partial y_j} = - \delta_{ij} (2 x_i y_i + \omega_i) $$
$$ \frac{\partial \dot{y}_i}{\partial x_j} = - \delta_{ij} (2 x_i y_i - \omega_i) $$
$$ \frac{\partial \dot{y}_i}{\partial y_j} = \delta_{ij} (\alpha^2 - x_i^2 - 3y_i^2 - \lambda k_i) + \lambda A_{ij} $$
with $k_i = \sum_{j=1}^N A_{ij} (1 - \delta_{ij})$ being the node degree of the $i$th node (excluding self-edges)
## Kuramoto Model
The equations for a (forced) complex network of $N$ Kuramoto oscillators with natural frequencies $\omega_k$, adjacency matrix $A$, coupling strength (or average coupling strength, for the case where $A$ is weighted) $\lambda$ and a forced term of type $ F_i \cos{(\Omega t - \theta)} $ that acts on a fraction $f = N_F/N$, where $N_F$ is the number of forced oscillators (nonzero $F$), can be written as:
$$ \dot{\theta}_i = \omega_i + \lambda \sum_{j=1}^N A_{ij} \sin{(\theta_j - \theta_i)} + F_i \sin{(\Omega t - \theta_i)} $$
which gives the Jacobian:
$$ J_{ij} = \frac{\partial \dot{\theta}_i}{\partial \theta_j} = A_{ij} \cos{(\theta_j - \theta_i)} - \delta_{ij} F_i \cos{(\Omega t - \theta_i)} $$
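For comparison with the `KuramotoNetwork` class used below, a minimal sketch of this right-hand side, usable with e.g. `scipy.integrate.odeint` (again an illustration, not the `NetworkClasses` implementation):
```
def kuramoto_rhs(theta, t, w, A, lam, F, Omega):
    A = np.asarray(A)                             # adjacency matrix as a plain ndarray
    dtheta = theta[None, :] - theta[:, None]      # dtheta[i, j] = theta_j - theta_i
    coupling = lam * (A * np.sin(dtheta)).sum(axis=1)
    return w + coupling + F * np.sin(Omega * t - theta)

# e.g.: from scipy.integrate import odeint
#       theta_num = odeint(kuramoto_rhs, theta0, np.arange(0, 50, .2), args=(w, A, K, F, Omega)).T
```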
```
SL = StuartLandau(w, A, K, alpha)
SLforced = StuartLandau(w, A, K, alpha, F, Omega)
kuramoto = KuramotoNetwork(w, A, K)
Kforced = KuramotoNetwork(w, A, K, F, Omega)
%%time
t = np.arange(0,50,.2)
z, _ = SL.integrate(z0, t)
z_f, _ = SLforced.integrate(z0, t)
%%time
theta, _ = kuramoto.integrate(theta0, t)
theta_f, _ = Kforced.integrate(theta0, t)
osc=5
fig, (ax1, ax2) = plt.subplots(2, 1)
fig.suptitle('Time Evolution for an oscillator in the network')
ax1.set_title('Stuart-Landau')
ax1.set_ylabel('$Re(z)$')
ax1.set_xticks([])
ax2.set_title('Kuramoto')
ax2.set_xlabel('$t$')
ax2.set_ylabel(r'$\theta(t)$')
ax2.set_ylim([-1.2, 1.2])
ax1.plot(t, np.real(z[osc]), label='free', color='lightseagreen')
ax1.plot(t, np.real(z_f[osc]), label='forced', color='g')
ax1.plot(t, F[osc]*np.cos(Omega*t), label='force', color='pink', linewidth='.6')
ax1.legend()
ax2.plot(t, np.cos(theta[osc]), label='free', color='purple')
ax2.plot(t, np.cos(theta_f[osc]), label='forced', color='goldenrod')
ax2.plot(t, F[osc]*np.cos(Omega*t), label='force', color='cyan', linewidth='.3')
ax2.legend()
plt.show()
fig, ax = plt.subplots(1,1)
ax.set_title('Stuart-Landau Oscillator Trajectories')
ax.set_xlabel(r'$\Re(z)$')
ax.set_ylabel(r'$\Im(z)$')
for i in [7, 48, 22]:
ax.plot(np.real(z[i]), np.imag(z[i]), label=i)
ax.legend()
plt.show()
osc=5
fig, ax = plt.subplots(1, 1)
fig.suptitle('Stuart-Landau Time Evolution')
ax.set_xlabel('$t$')
ax.set_ylabel(r'$\Re{z(t)}$')
for osc in range(4):
ax.plot(t, np.real(z[osc]), label=osc+1)
ax.legend()
plt.show()
```
## Order Parameter
For a network of N oscillators with phase $\theta_i$, we can measure the system's synchronization with:
$$ \mathrm{z}(t) = r(t) e^{i \psi(t)} = \frac{1}{N} \sum_{j=1}^N e^{i \theta_j(t)} $$
The modulus $r$ is called the order parameter, whereas $\psi$ is the mean phase of the system. When the system is not synchronized, $r \approx 0$, whereas global synchronization is said to be achieved when $r \to 1$.
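A minimal sketch of how $r(t)$ and $\psi(t)$ can be computed for the whole time series, assuming `theta` from the Kuramoto integration above with shape `(N, len(t))`:
```
z_op = np.exp(1j * theta).mean(axis=0)   # complex order parameter at every time step
r_t = np.abs(z_op)                       # order parameter r(t)
psi_t = np.angle(z_op)                   # mean phase psi(t)
```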
```
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_xlim(-1.2, 1.2)
ax.set_ylim(-1.2, 1.2)
ax.set_aspect('equal')
ax.axis('off')
# Plots points corresponding to the oscillators' phase positions at time t
ax.scatter(np.cos(theta[:,200]), y = np.sin(theta[:,200]), marker = '*', color='crimson')
# Finds the order parameter at the time instant t
thetaT = np.transpose(theta)
order_par = sum(np.exp(thetaT[200]*1j))/N
r = np.absolute(order_par)
psi = np.angle(order_par)
# Plots horizontal and vertical diameters of the circle
ax.plot([-1, 1], [0, 0], linewidth = '.5', color = 'grey')
ax.plot([0, 0], [-1, 1], linewidth = '.5', color = 'grey')
#Plots unit circle
circle = plt.Circle((0,0), radius = 1.0, linewidth = '0.8', color = 'grey', fill = False)
ax.add_patch(circle)
#Plots order parameter line
ax.plot([0, r*np.cos(psi)], [0, r*np.sin(psi)], linewidth = '2.0', color = 'teal')
ax.scatter(r*np.cos(psi), r*np.sin(psi), marker='o', color='teal')
# Shows mean phase
s = np.arange(0,1,0.05)
if r>0.4:
ax.plot(0.25*np.cos(psi*s), 0.25*np.sin(psi*s), color='darkorange')
else:
ax.plot((2*r/3)*np.cos(psi*s), (2*r/3)*np.sin(psi*s), color='darkorange')
plt.show()
```
### Average in time
In practice, we calculate the mean value of $r$ and $\psi$ (as well as their standard deviations) over a time interval $[t_0, t_0 + \Delta t]$ corresponding to at least one full oscillation period of the system, so that the data is statistically relevant and time fluctuations are accounted for:
$$ \langle r \rangle = \frac{1}{\Delta t} \int_{t_0}^{t_0+\Delta t} r(t) \, dt $$
Since we already obtain the time evolution of the phase through numerical integration, the integral above is performed as a Riemann sum of the numerically obtained values. We also find it useful to compute the angular velocity $\dot{\psi} = \frac{d \psi}{dt}$ of the mean phase, as it gives further insight into the collective dynamical behavior of the system.
We may then calculate these parameters for a range of coupling constants $\lambda$ to see how the synchronization behavior is affected.
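Before using the `OrderParameter` helper below, here is a rough sketch of these time averages, reusing `r_t` and `psi_t` from the previous sketch and taking $t_0 = 40$ to discard the transient (an assumption for illustration):
```
dt = t[1] - t[0]
i0 = np.searchsorted(t, 40)                  # index of t0 = 40
r_mean = r_t[i0:].mean()                     # Riemann-sum average (1/Dt) * sum_t r(t) dt
r_std = r_t[i0:].std()
psidot = np.gradient(np.unwrap(psi_t), dt)   # angular velocity of the mean phase
psidot_mean = psidot[i0:].mean()
```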
```
%%time
sync_par = OrderParameter(SL, z0, 40, 50, 0.1, Kf=3)
K = sync_par['K']
r = sync_par['r']
r_std = sync_par['r_std']
psi = sync_par['psi']
psi_std = sync_par['psi_std']
psidot = sync_par['psidot']
psidot_std = sync_par['psidot_std']
fig, ax1 = plt.subplots(1, 1)
ax1.set_title("Order Parameter")
ax1.set_ylabel("r")
ax1.set_xlabel(r'$\lambda$')
ax1.set_ylim(0,1.2)
ax1.errorbar(K ,r, yerr=r_std, marker='^', color = 'darkred', fmt='o', elinewidth=.5, capsize=2)
ax1.plot([0, 3], [1, 1], linewidth = .8, color = 'grey')
plt.show()
fig, ax2 = plt.subplots(1,1)
ax2.set_title("Mean Phase")
ax2.set_ylabel(r'$\psi$')
ax2.set_xlabel(r'$\lambda$')
ax2.errorbar(K ,psi, yerr=psi_std, marker='x', color = 'seagreen', fmt='o', elinewidth=.5, capsize=2)
plt.show()
fig, ax3 = plt.subplots(1,1)
ax3.set_title("Mean Phase Velocity")
ax3.set_xlabel(r'$\lambda$')
ax3.set_ylabel(r'$\dot{\psi}$')
ax3.errorbar(K ,psidot, yerr=psidot_std, marker='d', color = 'royalblue', fmt='o', elinewidth=.5, capsize=2)
plt.show()
```
### Average in initial states
To ensure statistical relevance, we may also compute these parameters for a set of several different initial conditions and then take the average. This way, we can be confident that the main dynamical properties of the system indeed depend on the network itself and do not rely on any specific initial configuration. We define the standard deviation $\sigma^{(r)}_{z_0}$ of $r$ in the initial conditions as:
$$ \sigma^{(r)}_{z_0} = \langle \ \langle r + \sigma^{(r)}_t \rangle_t + \langle r \rangle_t \ \rangle_{z0} $$
where $\langle \rangle_t$ is the time average, $\sigma^{(r)}_t$ the standard deviation with respect to time (both for a single initial condition $z_0$), and $\langle \rangle_{z_0}$ the average over all initial states $z_0$. It is worth remarking that we maintain $0.1 < \rho_0 < 0.9$ in the Stuart-Landau case, as for larger values of $\rho$ the system may fall into one of its attractors, which is not the situation we wish to analyze.
```
%%time
sync_par_av = AverageOrderPar(SL, 10, 40, 50, 0, Kf=3.2, dK=0.1, dt=0.2)
K_av = sync_par_av['K']
r_av = sync_par_av['r']
r_std_av = sync_par_av['r_std']
psi_av = sync_par_av['psi']
psi_std_av = sync_par_av['psi_std']
psidot_av = sync_par_av['psidot']
psidot_std_av = sync_par_av['psidot_std']
fig, ax1 = plt.subplots(1, 1)
ax1.set_title("Order Parameter")
ax1.set_ylabel("r")
ax1.set_xlabel(r'$\lambda$')
ax1.set_ylim(0,1.2)
ax1.errorbar(K_av ,r_av, yerr=r_std_av, marker='^', color = 'darkred', fmt='o', elinewidth=.5, capsize=2)
ax1.plot([0, 3], [1, 1], linewidth = .8, color = 'grey')
plt.show()
fig, ax2 = plt.subplots(1,1)
ax2.set_title("Mean Phase")
ax2.set_ylabel(r'$\psi$')
ax2.set_xlabel(r'$\lambda$')
ax2.errorbar(K_av ,psi_av, yerr=psi_std_av, marker='x', color = 'seagreen', fmt='o', elinewidth=.5, capsize=2)
plt.show()
fig, ax3 = plt.subplots(1,1)
ax3.set_title("Mean Phase Velocity")
ax3.set_xlabel(r'$\lambda$')
ax3.set_ylabel(r'$\dot{\psi}$')
ax3.errorbar(K_av ,psidot_av, yerr=psidot_std_av, marker='d', color = 'royalblue', fmt='o', elinewidth=.5, capsize=2)
plt.show()
```
## Randomly distributed coupling
Here we intend to study how a random distribution of the coupling strength may affect the overall synchronization. To achieve that, we redefine the adjacency matrix so that each nonzero element has its value drawn from some probability distribution. We also normalize the elements of $A$ by the mean value of that distribution, so that the mean coupling is absorbed into our coupling parameter $\lambda$.
### Gamma distribution
A Gamma distribution of shape $k$ and scale $\theta$ (as used in *scipy.stats.gamma*) is defined by:
$$ f(x; k, \theta) = \frac{x^{k-1} e^{- \frac{x}{\theta}}}{\theta^k \Gamma(k)} $$
so that $\langle x \rangle = k \theta$ and $\sigma^2 = \langle x^2 \rangle - \langle x \rangle^2 = k \theta^2 $
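A rough sketch of the idea implemented by the `GammaCoupling` helper used below (a hypothetical illustration, assuming the unweighted symmetric `A` defined at the top of the notebook): draw a Gamma-distributed weight for every edge and normalize by the distribution mean $k\theta$.
```
from scipy.stats import gamma

k_shape, theta_scale = 1.0, 1.0                   # Gamma shape and scale (assumed values)
A_arr = np.asarray(A)
W = np.triu(gamma.rvs(k_shape, scale=theta_scale, size=A_arr.shape), 1)
W = W + W.T                                       # keep the edge weights symmetric
A_weighted = A_arr * W / (k_shape * theta_scale)  # normalize by the mean k*theta
```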
```
shape = 1
SLgamma, Kav, Kstd = GammaCoupling(SL, shape)
t = np.arange(0,50,.2)
z_gamma, t = SLgamma.integrate(z0,t)
fig, ax1 = plt.subplots(1,1)
fig.suptitle('Time Evolution')
ax1.set_ylabel('$Re(z)$')
ax1.set_xlabel('$t$')
ax1.set_ylim([-1.2, 1.2])
for osc in range(3):
ax1.plot(t, np.real(z_gamma[7*osc]))
plt.show()
%%time
sync_par_gamma = OrderParameter(SLgamma, z0, 40, 50, 0.1, Kf=3)
K_gamma = sync_par_gamma['K']
r_gamma = sync_par_gamma['r']
r_std_gamma = sync_par_gamma['r_std']
psi_gamma = sync_par_gamma['psi']
psi_std_gamma = sync_par_gamma['psi_std']
psidot_gamma = sync_par_gamma['psidot']
psidot_std_gamma = sync_par_gamma['psidot_std']
fig, ax1 = plt.subplots(1, 1)
ax1.set_title("Order Parameter")
ax1.set_ylabel("r")
ax1.set_xlabel(r'$\lambda$')
ax1.set_ylim(0,1.2)
ax1.errorbar(K_gamma, r_gamma, yerr=r_std_gamma, marker='^', color = 'darkred', fmt='o', elinewidth=.5, capsize=2)
ax1.plot([0, 3], [1, 1], linewidth = .8, color = 'grey')
plt.show()
fig, ax2 = plt.subplots(1,1)
ax2.set_title("Mean Phase")
ax2.set_ylabel(r'$\psi$')
ax2.set_xlabel(r'$\lambda$')
ax2.errorbar(K_gamma, psi_gamma, yerr=psi_std_gamma, marker='x', color = 'seagreen', fmt='o', elinewidth=.5, capsize=2)
plt.show()
fig, ax3 = plt.subplots(1,1)
ax3.set_title("Mean Phase Velocity")
ax3.set_xlabel(r'$\lambda$')
ax3.set_ylabel(r'$\dot{\psi}$')
ax3.errorbar(K_gamma, psidot_gamma, yerr=psidot_std_gamma, marker='d', color = 'royalblue', fmt='o', elinewidth=.5, capsize=2)
plt.show()
```
## ALS Implementation
- This notebook is an implementation of the ALS algorithm from "Collaborative Filtering for Implicit Feedback Datasets"
### Initialize parameters
- r_lambda: regularization parameter
- alpha: confidence level
- nf: dimension of the latent vector of each user and item
- the initialized values (40, 200, 40) are the best parameters from the paper
```
r_lambda = 40
nf = 200
alpha = 40
```
### Initialize original rating matrix data
- make sample (10 x 11) matrix
- 10 : num of users
- 11 : num of items
```
import numpy as np
# sample rating matrix
R = np.array([[0, 0, 0, 4, 4, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 0, 0, 1, 0, 4, 0],
[0, 3, 4, 0, 3, 0, 0, 2, 2, 0, 0],
[0, 5, 5, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 5, 0, 0, 5, 0],
[0, 0, 4, 0, 0, 0, 0, 0, 0, 0, 5],
[0, 0, 0, 0, 0, 4, 0, 0, 0, 0, 4],
[0, 0, 0, 0, 0, 0, 5, 0, 0, 5, 0],
[0, 0, 0, 3, 0, 0, 0, 0, 4, 5, 0]])
print(R.shape)
```
### Initialize user and item latent factor matrix
- nu: num of users (10)
- ni: num of items (11)
- nf: dimension of latent vector
```
nu = R.shape[0]
ni = R.shape[1]
# initialize X and Y with very small values
X = np.random.rand(nu, nf) * 0.01
Y = np.random.rand(ni, nf) * 0.01
print(X)
```
### Initialize Binary Rating Matrix P
- Convert original rating matrix R into P
- Pui = 1 if Rui > 0
- Pui = 0 if Rui = 0
```
P = np.copy(R)
P[P > 0] = 1
print(P)
```
### Initialize Confidence Matrix C
- Cui = 1 + alpha * Rui
- Cui means confidence level of certain rating data
```
C = 1 + alpha * R
print(C)
```
### Set up loss function
- C: confidence matrix
- P: binary rating matrix
- X: user latent matrix
- Y: item latent matrix
- r_lambda: regularization lambda
- xTy: predict matrix
- Total_loss = (confidence_level * predict loss) + regularization loss
```
def loss_function(C, P, xTy, X, Y, r_lambda):
predict_error = np.square(P - xTy)
confidence_error = np.sum(C * predict_error)
regularization = r_lambda * (np.sum(np.square(X)) + np.sum(np.square(Y)))
total_loss = confidence_error + regularization
return np.sum(predict_error), confidence_error, regularization, total_loss
```
### Optimization Function for user and item
- X[u] = (yT*Cu*Y + lambda*I)^-1 * (yT*Cu*P[u])
- Y[i] = (xT*Ci*X + lambda*I)^-1 * (xT*Ci*P[i])
- the two formulas are the same after swapping X with Y and u with i
```
def optimize_user(X, Y, C, P, nu, nf, r_lambda):
yT = np.transpose(Y)
for u in range(nu):
Cu = np.diag(C[u])
yT_Cu_y = np.matmul(np.matmul(yT, Cu), Y)
lI = np.dot(r_lambda, np.identity(nf))
yT_Cu_pu = np.matmul(np.matmul(yT, Cu), P[u])
X[u] = np.linalg.solve(yT_Cu_y + lI, yT_Cu_pu)
def optimize_item(X, Y, C, P, ni, nf, r_lambda):
xT = np.transpose(X)
for i in range(ni):
Ci = np.diag(C[:, i])
xT_Ci_x = np.matmul(np.matmul(xT, Ci), X)
lI = np.dot(r_lambda, np.identity(nf))
xT_Ci_pi = np.matmul(np.matmul(xT, Ci), P[:, i])
Y[i] = np.linalg.solve(xT_Ci_x + lI, xT_Ci_pi)
```
### Train
- the ALS algorithm usually repeats the training step 10–15 times
```
predict_errors = []
confidence_errors = []
regularization_list = []
total_losses = []
for i in range(15):
if i!=0:
optimize_user(X, Y, C, P, nu, nf, r_lambda)
optimize_item(X, Y, C, P, ni, nf, r_lambda)
predict = np.matmul(X, np.transpose(Y))
predict_error, confidence_error, regularization, total_loss = loss_function(C, P, predict, X, Y, r_lambda)
predict_errors.append(predict_error)
confidence_errors.append(confidence_error)
regularization_list.append(regularization)
total_losses.append(total_loss)
print('----------------step %d----------------' % i)
print("predict error: %f" % predict_error)
print("confidence error: %f" % confidence_error)
print("regularization: %f" % regularization)
print("total loss: %f" % total_loss)
predict = np.matmul(X, np.transpose(Y))
print('final predict')
print([predict])
from matplotlib import pyplot as plt
%matplotlib inline
fig = plt.figure()
fig.set_figheight(10)
fig.set_figwidth(10)
fig.subplots_adjust(wspace=0.3, hspace=0.3)
predict_error_line = fig.add_subplot(2, 2, 1)
confidence_error_line = fig.add_subplot(2, 2, 2)
regularization_error_line = fig.add_subplot(2, 2, 3)
total_loss_line = fig.add_subplot(2, 2, 4)
predict_error_line.set_title("Predict Error")
predict_error_line.plot(predict_errors)
confidence_error_line.set_title("Confidence Error")
confidence_error_line.plot(confidence_errors)
regularization_error_line.set_title("Regularization")
regularization_error_line.plot(regularization_list)
total_loss_line.set_title("Total Loss")
total_loss_line.plot(total_losses)
plt.show()
```
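Once training has converged, the factor matrices can be used to rank unseen items for each user. The helper below is a minimal illustration of my own (not part of the paper's pseudocode); it reuses the `X`, `Y` and `R` defined above and simply masks the items the user has already interacted with:
```python
def recommend(user, X, Y, R, n=3):
    """Return the indices of the top-n items for `user`, ranked by
    predicted preference, excluding items already rated in R."""
    scores = X[user] @ Y.T
    scores[R[user] > 0] = -np.inf   # never re-recommend observed items
    return np.argsort(scores)[::-1][:n]

print(recommend(0, X, Y, R))
```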
```
import kwant
import numpy as np
import matplotlib.pyplot as pyplot
import tinyarray
%matplotlib inline
import scipy
from tqdm.notebook import tqdm
```
$$H = v_f(k_y \sigma_x - k_x\sigma_y) + (m_0 - m_1(k_x^2 + k_y^2))\sigma_z\tau_z + M_z\sigma_z$$
$$H = v_f(k_x\sigma_x - k_y\sigma_y) + (m_0 - m_1(k_x^2 + k_y^2))\sigma_z$$
```
hamiltonian = """
vf*(k_y*kron(sigma_0,sigma_x) - k_x*kron(sigma_0,sigma_y))+ (m0-m1*(k_x**2+k_y**2))*kron(sigma_z,sigma_z) + Mz(x,y)*kron(sigma_0,sigma_z)
"""
a = 1
W =80
L =80
template = kwant.continuum.discretize(hamiltonian,grid = a)
lat = template.lattice
def shape(site):
(x,y) = site.pos
return (0 <= y <W and 0 <=x <L)
def lead_shape_sd(site):
(x,y) = site.pos
return (0 <= y<W)
def lead_shape_2(site):
(x,y) = site.pos
return (L/5 <= x<L*2/5)
def lead_shape_3(site):
(x,y) = site.pos
return (3*L/5 <= x<L*4/5)
def lead_shape_4(site):
(x,y) = site.pos
return (L/5 <= x<L*2/5)
def lead_shape_5(site):
(x,y) = site.pos
return (3*L/5 <= x<L*4/5)
syst = kwant.Builder()
syst.fill(template,shape,(0,0))
lead1s = kwant.Builder(kwant.TranslationalSymmetry([-a,0]))
lead1s.fill(template,lead_shape_sd,(0,0))
lead1d = lead1s.reversed()
lead2 = kwant.Builder(kwant.TranslationalSymmetry([0,a]))
lead2.fill(template,lead_shape_2,(L/5,0))
lead3 = kwant.Builder(kwant.TranslationalSymmetry([0,a]))
lead3.fill(template,lead_shape_3,(L*3/5,0))
lead4 = kwant.Builder(kwant.TranslationalSymmetry([0,-a]))
lead4.fill(template,lead_shape_4,(L/5,0))
lead5 = kwant.Builder(kwant.TranslationalSymmetry([0,-a]))
lead5.fill(template,lead_shape_5,(L*3/5,0))
syst.attach_lead(lead1s)
syst.attach_lead(lead1d)
syst.attach_lead(lead2)
syst.attach_lead(lead3)
syst.attach_lead(lead4)
syst.attach_lead(lead5)
fig,ax = pyplot.subplots()
kwant.plot(syst,ax = ax)
syst=syst.finalized()
def Mz(x,y):
return 0.1
#params = dict(r0=20, delta=10, J=1)
params = {'vf':1,'m1':1,'m0':-0.5,'Mz':Mz}
wf = kwant.wave_function(syst, energy=0, params=params)
params = {'vf':1,'m1':1,'m0':-0.5,'Mz':Mz}
kwant.plotter.bands(syst.leads[4],params = params, momenta = np.linspace(-0.3,0.3,201), show = False)
pyplot.grid()
pyplot.xlim(-.3, 0.3)
pyplot.ylim(-0.6,0.6)
pyplot.xlabel('momentum [1/A]')
pyplot.ylabel('energy [eV]')
pyplot.show()
nnls=scipy.optimize.nnls
energies = np.linspace(-1,1,100)
dataxx = []
dataxy = []
for energy in tqdm(energies):
smatrix = kwant.smatrix(syst,energy,params = params)
R = nnls(smatrix.conductance_matrix(),np.array((1,0,0,-1,0)))[0]
dataxy.append(R[1]-R[4])
dataxx.append(R[1]-R[2])
pyplot.figure()
pyplot.plot(energies,dataxx,energies,dataxy)
pyplot.show()
%%time
a = 1
r = 30
template = kwant.continuum.discretize(hamiltonian,grid = a)
lat = template.lattice
def circle(site):
x,y = site.pos
return (x**2 + y**2 <= r**2)
def rect(site):
x,y= site.pos
return (0 <= y <W and 0 <=x <L)
syst = kwant.Builder()
syst.fill(template,rect,(0,0))
syst.eradicate_dangling()
kwant.plot(syst)
syst_without_lead = syst.finalized()
where = lambda s : np.linalg.norm(s.pos)<1.1
s_factory = kwant.kpm.LocalVectors(syst_without_lead)
cond_xx = kwant.kpm.conductivity(syst_without_lead, alpha = 'x',beta = 'x',params=params)
s_factory = kwant.kpm.LocalVectors(syst_without_lead)
cond_xy = kwant.kpm.conductivity(syst_without_lead, alpha = 'x',beta = 'y',params=params)
energies = np.linspace(-2,2,200)
#energies = cond_xx.energies
cond_array_xx = np.array([cond_xx(e,temperature = 1E-6) for e in energies])
cond_array_xy = np.array([cond_xy(e,temperature = 1E-6) for e in energies])
cond_array_xx/=W*L
cond_array_xy/=W*L
# NOTE: the three lines below look like leftovers from a kwant tutorial template:
# `syst` is an unfinalized Builder without leads here, and (r0, delta, J) are not
# parameters of the Hamiltonian above, so kwant.wave_function cannot be evaluated.
# params = dict(r0=20, delta=10, J=1)
# wf = kwant.wave_function(syst, energy=-1, params=params)
# psi = wf(0)[0]
fig,ax = pyplot.subplots()
plt = ax.plot(energies,np.abs(cond_array_xx),energies,np.abs(cond_array_xy))
ax.set_xlim([-1,1])
fig
```
# Custom statespace models
The true power of the state space model is to allow the creation and estimation of custom models. This notebook shows various statespace models that subclass `sm.tsa.statespace.MLEModel`.
Remember the general state space model can be written in the following general way:
$$
\begin{aligned}
y_t & = Z_t \alpha_{t} + d_t + \varepsilon_t \\
\alpha_{t+1} & = T_t \alpha_{t} + c_t + R_t \eta_{t}
\end{aligned}
$$
You can check the details and the dimensions of the objects [in this link](https://www.statsmodels.org/stable/statespace.html#custom-state-space-models)
Most models won't include all of these elements. For example, the design matrix $Z_t$ might not depend on time ($\forall t \;Z_t = Z$), or the model won't have an observation intercept $d_t$.
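For reference, each matrix in these equations has a fixed name inside the statespace representation used by `MLEModel` (written to via `self.ssm[...]` or `self[...]` in the classes below). To the best of my knowledge the mapping is, using the conventional symbols $H_t$ and $Q_t$ for the two covariance matrices:
```python
# Observation equation:  y_t = Z_t alpha_t + d_t + eps_t,        eps_t ~ N(0, H_t)
#   Z_t -> self['design'],  d_t -> self['obs_intercept'],  H_t -> self['obs_cov']
# State equation: alpha_{t+1} = T_t alpha_t + c_t + R_t eta_t,   eta_t ~ N(0, Q_t)
#   T_t -> self['transition'],  c_t -> self['state_intercept'],
#   R_t -> self['selection'],   Q_t -> self['state_cov']
```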
We'll start with something relatively simple and then show how to extend it bit by bit to include more elements.
+ Model 1: time-varying coefficients. One observation equation with two state equations
+ Model 2: time-varying parameters with non identity transition matrix
+ Model 3: multiple observation and multiple state equations
+ Bonus: pymc3 for Bayesian estimation
```
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
from collections import OrderedDict
plt.rc("figure", figsize=(16,8))
plt.rc("font", size=15)
```
## Model 1: time-varying coefficients
$$
\begin{aligned}
y_t & = d + x_t \beta_{x,t} + w_t \beta_{w,t} + \varepsilon_t \hspace{4em} \varepsilon_t \sim N(0, \sigma_\varepsilon^2)\\
\begin{bmatrix} \beta_{x,t} \\ \beta_{w,t} \end{bmatrix} & = \begin{bmatrix} \beta_{x,t-1} \\ \beta_{w,t-1} \end{bmatrix} + \begin{bmatrix} \zeta_{x,t} \\ \zeta_{w,t} \end{bmatrix} \hspace{3.7em} \begin{bmatrix} \zeta_{x,t} \\ \zeta_{w,t} \end{bmatrix} \sim N \left ( \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} \sigma_{\beta, x}^2 & 0 \\ 0 & \sigma_{\beta, w}^2 \end{bmatrix} \right )
\end{aligned}
$$
The observed data is $y_t, x_t, w_t$. With $x_t, w_t$ being the exogenous variables. Notice that the design matrix is time-varying, so it will have three dimensions (`k_endog x k_states x nobs`)
The states are $\beta_{x,t}$ and $\beta_{w,t}$. The state equation tells us these states evolve with a random walk. Thus, in this case the transition matrix is a 2 by 2 identity matrix.
We'll first simulate the data, then construct a model and finally estimate it.
```
def gen_data_for_model1():
nobs = 1000
rs = np.random.RandomState(seed=93572)
d = 5
var_y = 5
var_coeff_x = 0.01
var_coeff_w = 0.5
x_t = rs.uniform(size=nobs)
w_t = rs.uniform(size=nobs)
eps = rs.normal(scale=var_y**0.5, size=nobs)
beta_x = np.cumsum(rs.normal(size=nobs, scale=var_coeff_x**0.5))
beta_w = np.cumsum(rs.normal(size=nobs, scale=var_coeff_w**0.5))
y_t = d + beta_x * x_t + beta_w * w_t + eps
return y_t, x_t, w_t, beta_x, beta_w
y_t, x_t, w_t, beta_x, beta_w = gen_data_for_model1()
_ = plt.plot(y_t)
class TVRegression(sm.tsa.statespace.MLEModel):
def __init__(self, y_t, x_t, w_t):
exog = np.c_[x_t, w_t] # shaped nobs x 2
super(TVRegression, self).__init__(
endog=y_t, exog=exog, k_states=2,
initialization='diffuse')
# Since the design matrix is time-varying, it must be
# shaped k_endog x k_states x nobs
# Notice that exog.T is shaped k_states x nobs, so we
# just need to add a new first axis with shape 1
self.ssm['design'] = exog.T[np.newaxis, :, :] # shaped 1 x 2 x nobs
self.ssm['selection'] = np.eye(self.k_states)
self.ssm['transition'] = np.eye(self.k_states)
#Which parameters need to be positive?
self.positive_parameters = slice(1, 4)
@property
def param_names(self):
return ['intercept', 'var.e', 'var.x.coeff', 'var.w.coeff']
@property
def start_params(self):
"""
Defines the starting values for the parameters
The linear regression gives us reasonable starting values for the constant
d and the variance of the epsilon error
"""
exog = sm.add_constant(self.exog)
res = sm.OLS(self.endog, exog).fit()
params = np.r_[res.params[0], res.scale, 0.001, 0.001]
return params
def transform_params(self, unconstrained):
"""
We constrain the last three parameters
('var.e', 'var.x.coeff', 'var.w.coeff') to be positive,
because they are variances
"""
constrained = unconstrained.copy()
constrained[self.positive_parameters] = constrained[self.positive_parameters]**2
return constrained
def untransform_params(self, constrained):
"""
Need to untransform all the parameters you transformed
in the `transform_params` function
"""
unconstrained = constrained.copy()
unconstrained[self.positive_parameters] = unconstrained[self.positive_parameters]**0.5
return unconstrained
def update(self, params, **kwargs):
params = super(TVRegression, self).update(params, **kwargs)
self['obs_intercept', 0, 0] = params[0]
self['obs_cov', 0, 0] = params[1]
self['state_cov'] = np.diag(params[2:4])
```
### And then estimate it with our custom model class
```
mod = TVRegression(y_t, x_t, w_t)
res = mod.fit()
print(res.summary())
```
The values that generated the data were:
+ intercept = 5
+ var.e = 5
+ var.x.coeff = 0.01
+ var.w.coeff = 0.5
As you can see, the estimation recovered the real parameters pretty well.
We can also recover the estimated evolution of the underlying coefficients (or states in Kalman filter talk)
```
fig, axes = plt.subplots(2, figsize=(16, 8))
ss = pd.DataFrame(res.smoothed_state.T, columns=['x', 'w'])
axes[0].plot(beta_x, label='True')
axes[0].plot(ss['x'], label='Smoothed estimate')
axes[0].set(title='Time-varying coefficient on x_t')
axes[0].legend()
axes[1].plot(beta_w, label='True')
axes[1].plot(ss['w'], label='Smoothed estimate')
axes[1].set(title='Time-varying coefficient on w_t')
axes[1].legend()
fig.tight_layout();
```
## Model 2: time-varying parameters with non identity transition matrix
This is a small extension from Model 1. Instead of having an identity transition matrix, we'll have one with two parameters ($\rho_1, \rho_2$) that we need to estimate.
$$
\begin{aligned}
y_t & = d + x_t \beta_{x,t} + w_t \beta_{w,t} + \varepsilon_t \hspace{4em} \varepsilon_t \sim N(0, \sigma_\varepsilon^2)\\
\begin{bmatrix} \beta_{x,t} \\ \beta_{w,t} \end{bmatrix} & = \begin{bmatrix} \rho_1 & 0 \\ 0 & \rho_2 \end{bmatrix} \begin{bmatrix} \beta_{x,t-1} \\ \beta_{w,t-1} \end{bmatrix} + \begin{bmatrix} \zeta_{x,t} \\ \zeta_{w,t} \end{bmatrix} \hspace{3.7em} \begin{bmatrix} \zeta_{x,t} \\ \zeta_{w,t} \end{bmatrix} \sim N \left ( \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} \sigma_{\beta, x}^2 & 0 \\ 0 & \sigma_{\beta, w}^2 \end{bmatrix} \right )
\end{aligned}
$$
What should we modify in our previous class to make things work?
+ Good news: not a lot!
+ Bad news: we need to be careful about a few things
### 1) Change the starting parameters function
We need to add names for the new parameters $\rho_1, \rho_2$ and we need to add corresponding starting values.
The `param_names` function goes from:
```python
def param_names(self):
return ['intercept', 'var.e', 'var.x.coeff', 'var.w.coeff']
```
to
```python
def param_names(self):
return ['intercept', 'var.e', 'var.x.coeff', 'var.w.coeff',
'rho1', 'rho2']
```
and we change the `start_params` function from
```python
def start_params(self):
exog = sm.add_constant(self.exog)
res = sm.OLS(self.endog, exog).fit()
params = np.r_[res.params[0], res.scale, 0.001, 0.001]
return params
```
to
```python
def start_params(self):
exog = sm.add_constant(self.exog)
res = sm.OLS(self.endog, exog).fit()
params = np.r_[res.params[0], res.scale, 0.001, 0.001, 0.8, 0.8]
return params
```
### 2) Change the `update` function
It goes from
```python
def update(self, params, **kwargs):
params = super(TVRegression, self).update(params, **kwargs)
self['obs_intercept', 0, 0] = params[0]
self['obs_cov', 0, 0] = params[1]
self['state_cov'] = np.diag(params[2:4])
```
to
```python
def update(self, params, **kwargs):
params = super(TVRegression, self).update(params, **kwargs)
self['obs_intercept', 0, 0] = params[0]
self['obs_cov', 0, 0] = params[1]
self['state_cov'] = np.diag(params[2:4])
self['transition', 0, 0] = params[4]
self['transition', 1, 1] = params[5]
```
### 3) (optional) Change `transform_params` and `untransform_params`
This is not required, but you might want to restrict $\rho_1, \rho_2$ to lie between -1 and 1.
In that case, we first import two utility functions from `statsmodels`.
```python
from statsmodels.tsa.statespace.tools import (
constrain_stationary_univariate, unconstrain_stationary_univariate)
```
`constrain_stationary_univariate` constrains the value to be within -1 and 1.
`unconstrain_stationary_univariate` provides the inverse function.
The transform and untransform parameters function would look like this
(remember that $\rho_1, \rho_2$ are in the 4 and 5th index):
```python
def transform_params(self, unconstrained):
constrained = unconstrained.copy()
constrained[self.positive_parameters] = constrained[self.positive_parameters]**2
constrained[4] = constrain_stationary_univariate(constrained[4:5])
constrained[5] = constrain_stationary_univariate(constrained[5:6])
return constrained
def untransform_params(self, constrained):
unconstrained = constrained.copy()
unconstrained[self.positive_parameters] = unconstrained[self.positive_parameters]**0.5
unconstrained[4] = unconstrain_stationary_univariate(constrained[4:5])
unconstrained[5] = unconstrain_stationary_univariate(constrained[5:6])
return unconstrained
```
I'll write the full class below (without the optional changes I have just discussed)
```
class TVRegressionExtended(sm.tsa.statespace.MLEModel):
def __init__(self, y_t, x_t, w_t):
exog = np.c_[x_t, w_t] # shaped nobs x 2
super(TVRegressionExtended, self).__init__(
endog=y_t, exog=exog, k_states=2,
initialization='diffuse')
# Since the design matrix is time-varying, it must be
# shaped k_endog x k_states x nobs
# Notice that exog.T is shaped k_states x nobs, so we
# just need to add a new first axis with shape 1
self.ssm['design'] = exog.T[np.newaxis, :, :] # shaped 1 x 2 x nobs
self.ssm['selection'] = np.eye(self.k_states)
self.ssm['transition'] = np.eye(self.k_states)
#Which parameters need to be positive?
self.positive_parameters = slice(1, 4)
@property
def param_names(self):
return ['intercept', 'var.e', 'var.x.coeff', 'var.w.coeff',
'rho1', 'rho2']
@property
def start_params(self):
"""
Defines the starting values for the parameters
The linear regression gives us reasonable starting values for the constant
d and the variance of the epsilon error
"""
exog = sm.add_constant(self.exog)
res = sm.OLS(self.endog, exog).fit()
params = np.r_[res.params[0], res.scale, 0.001, 0.001, 0.7, 0.8]
return params
def transform_params(self, unconstrained):
"""
We constrain the last three parameters
('var.e', 'var.x.coeff', 'var.w.coeff') to be positive,
because they are variances
"""
constrained = unconstrained.copy()
constrained[self.positive_parameters] = constrained[self.positive_parameters]**2
return constrained
def untransform_params(self, constrained):
"""
Need to untransform all the parameters you transformed
in the `transform_params` function
"""
unconstrained = constrained.copy()
unconstrained[self.positive_parameters] = unconstrained[self.positive_parameters]**0.5
return unconstrained
def update(self, params, **kwargs):
params = super(TVRegressionExtended, self).update(params, **kwargs)
self['obs_intercept', 0, 0] = params[0]
self['obs_cov', 0, 0] = params[1]
self['state_cov'] = np.diag(params[2:4])
self['transition', 0, 0] = params[4]
self['transition', 1, 1] = params[5]
```
To estimate, we'll use the same data as in model 1 and expect the $\rho_1, \rho_2$ to be near 1.
The results look pretty good!
Note that this estimation can be quite sensitive to the starting value of $\rho_1, \rho_2$. If you try lower values, you'll see it fails to converge.
```
mod = TVRegressionExtended(y_t, x_t, w_t)
res = mod.fit(maxiter=2000) #it doesn't converge with 50 iters
print(res.summary())
```
## Model 3: multiple observation and state equations
We'll keep the time-varying parameters, but this time we'll also have two observation equations.
### Observation equations
$\hat{i_t}, \hat{M_t}, \hat{s_t}$ are observed each period.
The model for the observation equation has two equations:
$$ \hat{i_t} = \alpha_1 * \hat{s_t} + \varepsilon_1 $$
$$ \hat{M_t} = \alpha_2 + \varepsilon_2 $$
Following the [general notation from state space models](https://www.statsmodels.org/stable/statespace.html), the endogenous part of the observation equation is $y_t = (\hat{i_t}, \hat{M_t})$ and we only have one exogenous variable $\hat{s_t}$
### State equations
$$ \alpha_{1, t+1} = \delta_1 \alpha_{1, t} + \delta_2 \alpha_{2, t} + W_1 $$
$$ \alpha_{2, t+1} = \delta_3 \alpha_{2, t} + W_2 $$
### Matrix notation for the state space model
$$
\begin{aligned}
\begin{bmatrix} \hat{i_t} \\ \hat{M_t} \end{bmatrix} &=
\begin{bmatrix} \hat{s_t} & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \alpha_{1, t} \\ \alpha_{2, t} \end{bmatrix} + \begin{bmatrix} \varepsilon_{1, t} \\ \varepsilon_{2, t} \end{bmatrix} \hspace{6.5em} \varepsilon_t \sim N \left ( \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} \sigma_{\varepsilon_1}^2 & 0 \\ 0 & \sigma_{\varepsilon_2}^2 \end{bmatrix} \right )
\\
\begin{bmatrix} \alpha_{1, t+1} \\ \alpha_{2, t+1} \end{bmatrix} & = \begin{bmatrix} \delta_1 & \delta_2 \\ 0 & \delta_3 \end{bmatrix} \begin{bmatrix} \alpha_{1, t} \\ \alpha_{2, t} \end{bmatrix} + \begin{bmatrix} W_1 \\ W_2 \end{bmatrix} \hspace{3.em} \begin{bmatrix} W_1 \\ W_2 \end{bmatrix} \sim N \left ( \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} \sigma_{W_1}^2 & 0 \\ 0 & \sigma_{W_2}^2 \end{bmatrix} \right )
\end{aligned}
$$
I'll simulate some data, talk about what we need to modify and finally estimate the model to see if we're recovering something reasonable.
```
true_values = {'var_e1': 0.01, 'var_e2': 0.01,
'var_w1': 0.01, 'var_w2': 0.01,
'delta1': 0.8, 'delta2': 0.5, 'delta3': 0.7}
def gen_data_for_model3():
#Starting values
alpha1_0 = 2.1
alpha2_0 = 1.1
t_max = 500
def gen_i(alpha1, s):
return alpha1*s + np.sqrt(true_values['var_e1'])*np.random.randn()
def gen_m_hat(alpha2):
return 1*alpha2 + np.sqrt(true_values['var_e2'])*np.random.randn()
def gen_alpha1(alpha1, alpha2):
w1 = np.sqrt(true_values['var_w1'])*np.random.randn()
return true_values['delta1'] * alpha1 + true_values['delta2'] * alpha2 + w1
def gen_alpha2(alpha2):
w2 = np.sqrt(true_values['var_w2'])*np.random.randn()
return true_values['delta3'] * alpha2 + w2
s_t = 0.3 + np.sqrt(1.4)*np.random.randn(t_max)
i_hat = np.empty(t_max)
m_hat = np.empty(t_max)
current_alpha1 = alpha1_0
current_alpha2 = alpha2_0
for t in range(t_max):
#Obs eqns
i_hat[t] = gen_i(current_alpha1, s_t[t])
m_hat[t] = gen_m_hat(current_alpha2)
#state eqns
new_alpha1 = gen_alpha1(current_alpha1, current_alpha2)
new_alpha2 = gen_alpha2(current_alpha2)
#Update states for next period
current_alpha1 = new_alpha1
current_alpha2 = new_alpha2
return i_hat, m_hat, s_t
i_hat, m_hat, s_t = gen_data_for_model3()
```
### What do we need to modify?
Once again, we don't need to change much, but we need to be careful about the dimensions.
#### 1) The `__init__` function changes from
```python
def __init__(self, y_t, x_t, w_t):
exog = np.c_[x_t, w_t]
super(TVRegressionExtended, self).__init__(
endog=y_t, exog=exog, k_states=2,
initialization='diffuse')
self.ssm['design'] = exog.T[np.newaxis, :, :] # shaped 1 x 2 x nobs
self.ssm['selection'] = np.eye(self.k_states)
self.ssm['transition'] = np.eye(self.k_states)
```
to
```python
def __init__(self, i_t: np.array, s_t: np.array, m_t: np.array):
exog = np.c_[s_t, np.repeat(1, len(s_t))] # exog.shape => (nobs, 2)
super(MultipleYsModel, self).__init__(
endog=np.c_[i_t, m_t], exog=exog, k_states=2,
initialization='diffuse')
self.ssm['design'] = np.zeros((self.k_endog, self.k_states, self.nobs))
self.ssm['design', 0, 0, :] = s_t
self.ssm['design', 1, 1, :] = 1
```
Note that we did not have to specify `k_endog` anywhere. The initialization does this for us after checking the dimensions of the `endog` matrix.
#### 2) The `update()` function
changes from
```python
def update(self, params, **kwargs):
params = super(TVRegressionExtended, self).update(params, **kwargs)
self['obs_intercept', 0, 0] = params[0]
self['obs_cov', 0, 0] = params[1]
self['state_cov'] = np.diag(params[2:4])
self['transition', 0, 0] = params[4]
self['transition', 1, 1] = params[5]
```
to
```python
def update(self, params, **kwargs):
params = super(MultipleYsModel, self).update(params, **kwargs)
#The following line is not needed (by default, this matrix is initialized by zeroes),
#But I leave it here so the dimensions are clearer
self['obs_intercept'] = np.repeat([np.array([0, 0])], self.nobs, axis=0).T
self['obs_cov', 0, 0] = params[0]
self['obs_cov', 1, 1] = params[1]
self['state_cov'] = np.diag(params[2:4])
#delta1, delta2, delta3
self['transition', 0, 0] = params[4]
self['transition', 0, 1] = params[5]
self['transition', 1, 1] = params[6]
```
The rest of the methods change in pretty obvious ways (need to add parameter names, make sure the indexes work, etc). The full code for the function is right below
```
starting_values = {'var_e1': 0.2, 'var_e2': 0.1,
'var_w1': 0.15, 'var_w2': 0.18,
'delta1': 0.7, 'delta2': 0.1, 'delta3': 0.85}
class MultipleYsModel(sm.tsa.statespace.MLEModel):
def __init__(self, i_t: np.array, s_t: np.array, m_t: np.array):
exog = np.c_[s_t, np.repeat(1, len(s_t))] # exog.shape => (nobs, 2)
super(MultipleYsModel, self).__init__(
endog=np.c_[i_t, m_t], exog=exog, k_states=2,
initialization='diffuse')
self.ssm['design'] = np.zeros((self.k_endog, self.k_states, self.nobs))
self.ssm['design', 0, 0, :] = s_t
self.ssm['design', 1, 1, :] = 1
#These have ok shape. Placeholders since I'm changing them
#in the update() function
self.ssm['selection'] = np.eye(self.k_states)
self.ssm['transition'] = np.eye(self.k_states)
#Dictionary of positions to names
self.position_dict = OrderedDict(var_e1=1, var_e2=2,
var_w1=3, var_w2=4,
delta1=5, delta2=6, delta3=7)
self.initial_values = starting_values
self.positive_parameters = slice(0, 4)
@property
def param_names(self):
return list(self.position_dict.keys())
@property
def start_params(self):
"""
Initial values
"""
#(optional) Use scale for var_e1 and var_e2 starting values
params = np.r_[self.initial_values['var_e1'],
self.initial_values['var_e2'],
self.initial_values['var_w1'],
self.initial_values['var_w2'],
self.initial_values['delta1'],
self.initial_values['delta2'],
self.initial_values['delta3']]
return params
def transform_params(self, unconstrained):
"""
If you need to restrict parameters
For example, variances should be > 0
Parameters maybe have to be within -1 and 1
"""
constrained = unconstrained.copy()
constrained[self.positive_parameters] = constrained[self.positive_parameters]**2
return constrained
def untransform_params(self, constrained):
"""
Need to reverse what you did in transform_params()
"""
unconstrained = constrained.copy()
unconstrained[self.positive_parameters] = unconstrained[self.positive_parameters]**0.5
return unconstrained
def update(self, params, **kwargs):
params = super(MultipleYsModel, self).update(params, **kwargs)
#The following line is not needed (by default, this matrix is initialized by zeroes),
#But I leave it here so the dimensions are clearer
self['obs_intercept'] = np.repeat([np.array([0, 0])], self.nobs, axis=0).T
self['obs_cov', 0, 0] = params[0]
self['obs_cov', 1, 1] = params[1]
self['state_cov'] = np.diag(params[2:4])
#delta1, delta2, delta3
self['transition', 0, 0] = params[4]
self['transition', 0, 1] = params[5]
self['transition', 1, 1] = params[6]
mod = MultipleYsModel(i_hat, s_t, m_hat)
res = mod.fit()
print(res.summary())
```
## Bonus: pymc3 for fast Bayesian estimation
In this section I'll show how you can take your custom state space model and easily plug it to `pymc3` and estimate it with Bayesian methods. In particular, this example will show you an estimation with a version of Hamiltonian Monte Carlo called the No-U-Turn Sampler (NUTS).
I'm basically copying the ideas contained [in this notebook](https://www.statsmodels.org/dev/examples/notebooks/generated/statespace_sarimax_pymc3.html), so make sure to check that for more details.
```
#Extra requirements
import theano
import theano.tensor as tt
import pymc3 as pm
```
We need to define some helper functions to connect theano to the likelihood function that is implied in our model
```
class Loglike(tt.Op):
itypes = [tt.dvector] # expects a vector of parameter values when called
otypes = [tt.dscalar] # outputs a single scalar value (the log likelihood)
def __init__(self, model):
self.model = model
self.score = Score(self.model)
def perform(self, node, inputs, outputs):
theta, = inputs # contains the vector of parameters
llf = self.model.loglike(theta)
outputs[0][0] = np.array(llf) # output the log-likelihood
def grad(self, inputs, g):
# the method that calculates the gradients - it actually returns the
# vector-Jacobian product - g[0] is a vector of parameter values
theta, = inputs # our parameters
out = [g[0] * self.score(theta)]
return out
class Score(tt.Op):
itypes = [tt.dvector]
otypes = [tt.dvector]
def __init__(self, model):
self.model = model
def perform(self, node, inputs, outputs):
theta, = inputs
outputs[0][0] = self.model.score(theta)
```
We'll simulate again the data we used for model 1.
We'll also `fit` it again and save the results to compare them to the Bayesian posterior we get.
```
y_t, x_t, w_t, beta_x, beta_w = gen_data_for_model1()
plt.plot(y_t)
mod = TVRegression(y_t, x_t, w_t)
res_mle = mod.fit(disp=False)
print(res_mle.summary())
```
### Bayesian estimation
We need to define a prior for each parameter and the number of draws and burn-in points
```
# Set sampling params
ndraws = 3000 # 3000 number of draws from the distribution
nburn = 600 # 600 number of "burn-in points" (which will be discarded)
# Construct an instance of the Theano wrapper defined above, which
# will allow PyMC3 to compute the likelihood and Jacobian in a way
# that it can make use of. Here we are using the same model instance
# created earlier for MLE analysis (we could also create a new model
# instance if we preferred)
loglike = Loglike(mod)
with pm.Model():
# Priors
intercept = pm.Uniform('intercept', 1, 10)
var_e = pm.InverseGamma('var.e', 2.3, 0.5)
var_x_coeff = pm.InverseGamma('var.x.coeff', 2.3, 0.1)
var_w_coeff = pm.InverseGamma('var.w.coeff', 2.3, 0.1)
# convert variables to tensor vectors
theta = tt.as_tensor_variable([intercept, var_e, var_x_coeff, var_w_coeff])
    # use a DensityDist (use a lambda function to "call" the Op)
pm.DensityDist('likelihood', lambda v: loglike(v), observed={'v': theta})
# Draw samples
trace = pm.sample(ndraws, tune=nburn, discard_tuned_samples=True, cores=4)
```
### How does the posterior distribution compare with the MLE estimation?
They clearly peak around the MLE estimate.
```
results_dict = {'intercept': res_mle.params[0], 'var.e': res_mle.params[1],
'var.x.coeff': res_mle.params[2], 'var.w.coeff': res_mle.params[3]}
plt.tight_layout()
_ = pm.traceplot(trace,
lines=[(k, {}, [v]) for k, v in dict(results_dict).items()],
combined=True,
figsize=(12, 12))
```
# Challenge 4: Convolutional Neural Networks
Create a Convolutional Neural Network (a deep learning architecture) to classify the gear data. The architecture or design should contain a mix of layers such as convolutional and pooling.
Train a model on the training dataset using the chosen architecture. You may have to iterate on the architecture. Make sure the best trained model is saved to disk.
```
import numpy as np
np.random.seed(42)
%matplotlib inline
from sklearn import metrics
import seaborn as sn
import pandas as pd
from sklearn.metrics import classification_report
import matplotlib.pyplot as plt
def report(y_true, y_pred):
print("Accuracy score: ", metrics.accuracy_score(y_true, y_pred, normalize=True))
print(classification_report(y_true, y_pred))
labels = encoder.inverse_transform(np.unique(y_true))
df_cm = pd.DataFrame(
metrics.confusion_matrix(y_true, y_pred),
index=labels,
columns=labels
)
plt.figure(figsize = (10,7))
sn.heatmap(df_cm, annot=True)
plt.show()
```
## 1. Creating dataset
```
from os import walk, listdir
from os.path import isfile, join
import cv2
from sklearn.utils import shuffle
from keras.utils import np_utils
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
def load_dataset(folder):
X = []
y = []
paths = []
for (dir_path, _, _) in walk(folder):
label = dir_path.replace(folder, '').replace('\\', '').replace('/', '')
files = [f for f in listdir(dir_path) if isfile(join(dir_path, f))]
for file in files:
img = cv2.imread(join(dir_path, file))
X.append(img)
y.append(label)
paths.append(join(dir_path, file))
return np.array(X), np.array(y), np.array(paths)
X, y, paths = load_dataset("data/gear_images_preprocessed")
# Encode class values as integers
encoder = LabelEncoder()
encoder.fit(y)
y_encoded = encoder.transform(y)
y_dummy = np_utils.to_categorical(y_encoded)
print(encoder.inverse_transform(np.argmax(y_dummy[2020])))
#print(*X[1,1])
#print(*y_dummy[2121])
print("Train dataset shape:", X.shape, y.shape)
X_train, X_test, y_train, y_test = train_test_split(X, y_dummy, test_size=0.2, random_state=42, stratify=y)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.2, random_state=42)
print("Train dataset shape:", X_train.shape, y_train.shape)
print("Test dataset shape:", X_test.shape, y_test.shape)
print("Val dataset shape:", X_val.shape, y_val.shape)
from keras.models import Sequential
from keras.layers.normalization import BatchNormalization
from keras.layers.convolutional import Conv2D, MaxPooling2D
from keras.layers.core import Activation, Flatten, Dropout, Dense
from keras import backend as K
from keras import optimizers
from keras import losses
class BasicNet:
@staticmethod
def build(width, height, depth, classes):
model = Sequential()
input_shape = (height, width, depth)
model.add(Conv2D(32, (2, 2), input_shape=input_shape))
model.add(Conv2D(64, (2, 2), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(classes, activation='softmax'))
return model
epochs = 20
batch_size = 64
image_dims = (128, 128, 3)
num_classes = len(np.unique(y))
model = BasicNet.build(
width=image_dims[1], height=image_dims[0],
depth=image_dims[2], classes=num_classes
)
model.summary()
model.compile(
loss=losses.categorical_crossentropy,
optimizer=optimizers.Adadelta(),
metrics=['categorical_accuracy'],
)
history = model.fit(
X_train / 255, y_train,
batch_size=batch_size,
validation_data=(X_val / 255, y_val),
epochs=epochs,
verbose=1
)
# Calculate score
y_pred = model.predict(X_test / 255)
y_pred_flatten = np.argmax(y_pred, axis=1)
y_test_flatten = np.argmax(y_test, axis=1)
report(y_test_flatten, y_pred_flatten)
#from keras.models import load_model
import pickle
# Saving model
model.save('model_ch4.h5')
# Saving labels
with open("model_labels_ch4.dat", "wb") as f:
pickle.dump(encoder, f, pickle.HIGHEST_PROTOCOL)
from keras.models import load_model
m = load_model('model_ch4.h5')
m.summary()
path = "data/gear_images_preprocessed/pulleys/10308568_zm.jpg"
img = cv2.imread(path)
# Scale the image the same way as during training and add a batch dimension
y_pred = m.predict(np.array([img]) / 255)
print(encoder.inverse_transform(np.argmax(y_pred)))
```
<a href="https://colab.research.google.com/github/tjwei/NCTU_DeepLearning/blob/master/tf2_tutorial/02_tf2_Basics.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!pip install -U tensorflow-gpu
import tensorflow as tf
tf.__version__
matrix1 = tf.constant([[3., 3.]])
matrix2 = tf.constant([[2.],[2.]])
matrix1, matrix2
product = tf.matmul(matrix1, matrix2)
product
matrix1 @ matrix2
product + 3
w = tf.Variable(shape=(1, 2), initial_value=[[2., 1.]])
w
y = w @ [[1], [2]]
y
with tf.GradientTape() as tape:
y = w@[[1], [2]]
loss = (y - 3)**2
gradients = tape.gradient(loss, [w])
gradients
```
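As a bridge between the gradient computed above and the full training loop below, here is a minimal, self-contained sketch of a few hand-rolled gradient-descent steps on the same toy objective (the learning rate of 0.1 is an arbitrary choice):
```python
w = tf.Variable([[2., 1.]])
for step in range(5):
    with tf.GradientTape() as tape:
        y = w @ [[1.], [2.]]
        loss = (y - 3.) ** 2
    grad = tape.gradient(loss, w)
    w.assign_sub(0.1 * grad)   # w <- w - lr * dloss/dw
    print(step, float(loss), w.numpy())
```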
## MNIST Again
```
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = (x_train-127.5)/127.5
x_test = (x_test-127.5)/127.5
train_ds = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_ds = train_ds.shuffle(10000).batch(32)
train_ds
test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test))
test_ds = test_ds.shuffle(10000).batch(32)
from tensorflow.keras.layers import Dense, Flatten, Reshape
from tensorflow.keras.models import Model
class MyModel(Model):
def __init__(self):
super(MyModel, self).__init__()
self.flatten = Flatten()
self.d1 = Dense(128, activation='relu')
self.d2 = Dense(10, activation='softmax')
def call(self, x):
x = self.flatten(x)
x = self.d1(x)
return self.d2(x)
# Create an instance of the model
model = MyModel()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')
test_loss = tf.keras.metrics.Mean(name='test_loss')
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')
#@tf.function
def train_step(images, labels):
with tf.GradientTape() as tape:
predictions = model(images)
loss = loss_object(labels, predictions)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
train_loss(loss)
train_accuracy(labels, predictions)
#@tf.function
def test_step(images, labels):
predictions = model(images)
t_loss = loss_object(labels, predictions)
test_loss(t_loss)
test_accuracy(labels, predictions)
EPOCHS = 5
for epoch in range(EPOCHS):
for images, labels in train_ds:
train_step(images, labels)
for test_images, test_labels in test_ds:
test_step(test_images, test_labels)
template = 'Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}'
print(template.format(epoch+1,
train_loss.result(),
train_accuracy.result()*100,
test_loss.result(),
test_accuracy.result()*100))
# Reset the metrics for the next epoch
train_loss.reset_states()
train_accuracy.reset_states()
test_loss.reset_states()
test_accuracy.reset_states()
```
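One detail worth noting: the `@tf.function` decorators above are commented out, so both steps run eagerly. Tracing them into TensorFlow graphs usually speeds training up noticeably; as a sketch, the same effect can be obtained without editing the functions:
```python
# Optional: wrap the eager functions defined above into compiled graph functions
fast_train_step = tf.function(train_step)
fast_test_step = tf.function(test_step)
# fast_train_step(images, labels) now runs as a traced graph instead of eagerly.
```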
# Determining the proton content with a quantum computer
Code at: https://github.com/qiboteam/qibo/tree/master/examples/qPDF.
In this tutorial we show how to use the `qPDF` model implemented in Qibo to create a set of Parton Distribution Functions (PDFs), parameterized by a variational quantum circuit. In the context of High Energy Physics, parton distribution functions estimate the momentum fraction of the proton carried by partons i.e. quarks, antiquarks and gluon. Here we simulate a quantum computer to encode within a circuit the data from these PDFs in such a way that, if we measure the output of the aforementioned quantum circuit, we obtain the corresponding PDFs values.
In order to accomplish our goal, we use a Variational Quantum Circuit (VQC):

### Circuit
We consider two different Ansätze. Both depend on tunable parameters and on a variable $x$ that also serves as the independent variable of the PDFs $f_i(x, Q)$, where $Q$ is fixed.
The first one is the _Weighted_ Ansatz. Its basic single-qubit gate is
$$
U_w (\alpha, x) = R_z(\alpha_3 \log(x) + \alpha_4) R_y(\alpha_1 x + \alpha_2).
$$
The second Ansatz is the _Fourier_ one, whose basic single-qubit gate is
$$
U_f(\alpha, x) = R_y(\alpha_4)R_z(\alpha_3)R_y(-\pi/2 \log x)R_y(\alpha_2)R_z(\alpha_1)R_y(\pi x)
$$
Both Ansätze have a layered structure with entangling gates among different qubits depicted in the following circuit

The Ansatz is constructed with one qubit per parton. We fit either a single flavour or all flavours $(s, \bar s, c, u, \bar u, d, \bar d, g)$ simultaneously, thus only circuits with one or eight qubits are available.
### Cost function
The cost function driving the optimization process of this circuit is defined through several pieces. First, we need a Hamiltonian to measure. We choose a different hamiltonian for every parton, namely
$$
Z_i = \bigotimes_{j=0}^{n} Z^{\delta_{ij}}.
$$
This family of hamiltonians allows for the definition of their expected values, depending on $\theta$ and $x$
$$
z_i (\theta, x) = \langle \psi(\theta, x) | Z_i | \psi(\theta, x) \rangle.
$$
The relation between the $z(\theta, x)$ quantities and PDFs is
$$
f_i (x, Q_0) = \frac{1 - z_i(\theta, x)}{1 + z_i (\theta, x)}.
$$
Using this definition, we can just use the usual Pearson's chi-squared quantity
$$
\chi^2 = \frac{1}{N}\sum_{i=1}^N \int_{x\in[0, 1]} dx \frac{\left( f_i (x, \theta) - \frac{1 - z(x, \theta)}{1 + z(x, \theta)}\right)^2}{\sigma^2}.
$$
This is the loss function for our minimization procedure.
## Code
First, we must decide the variables for our problem. The meaning of them are
- `ansatz`: Which one is chosen, *Weighted* or *Fourier*.
- `multi_output`: If *True*, all partons are fitted in the same circuit.
- `parton`: which parton is to be fit. Ignored if `multi_output = True`.
- `mode`: if *full*, data is fitted for $x \in [10^{-4}, 1]$, if *partial* only large $x$ is considered.
- `layers`: number of layers.
### Create a qPDF model
```
# import requirements
import numpy as np
from qibo.models.hep import qPDF
# our setup
ansatz = 'Weighted'
multi_output = True
parton = '8flavours' # or gluon
mode = 'full' # or partial
layers = 3
```
Extract reference data and auxiliary variables. This cell controls the import of the different sets of data.
```
# Read input data
def load_data_and_setup():
if multi_output:
data_file = f'data/{mode}/8flavours.dat'
nqubits = 8
else:
data_file = f'data/{mode}/{parton}.dat'
nqubits = 1
return np.loadtxt(data_file), nqubits
# load data
data, nqubits = load_data_and_setup()
# load qPDF model
mypdf = qPDF(ansatz, layers, nqubits, multi_output=multi_output)
```
Now we define the loss function
$$
\chi^2 = \frac{1}{N}\sum_{i=1}^N\sum_{j} \frac{\left( f_i (x_j, \theta) - \frac{1 - z(x_j, \theta)}{1 + z(x_j, \theta)}\right)^2}{\sigma^2}
$$
For multi-flavour fits, the mean over all flavours is taken.
```
# Define loss function
def loss(params):
"""Compute loss for a given set of parameters.
Args:
parameters (np.array): the list of parameters for the gates.
Returns:
The loss function.
"""
xtrain = data[:, 0]
if multi_output:
cf = 0
i = 1
for ypred in mypdf.predict(params, xtrain).transpose():
ytrain = data[:, i]
ysigma = data[:, i + 1]
cf += np.mean(np.square(ytrain - ypred) / ysigma ** 2)
i += 2
cf /= 8
else:
ytrain = data[:, 1]
ysigma = data[:, 2]
ypred = mypdf.predict(params, xtrain).flatten()
cf = np.mean(np.square(ytrain - ypred) / ysigma ** 2)
return cf
```
A standard optimization procedure is then used to look for the optimal configuration of the $\theta$ parameters. In this case, we rely on the optimizers exposed by Qibo, e.g. `scipy` methods or the evolutionary `cma` strategy used below.
```python
# Optimizing
from qibo.optimizers import optimize
np.random.seed(10)
params = np.random.rand(mypdf.nparams)
_, params, _ = optimize(loss, params, method='cma')
```
The optimization may be costly in some cases. In order to save time, we provide some precomputed results that will let you see the performance of this algorithm in several circumstances. Precomputed results include the ones detailed in the corresponding paper.
```
# For taking old results
import pickle
with open(f'results/{mode}/{parton}/{ansatz}_{nqubits}_q_{layers}_l_result.pkl', 'rb') as f:
results = pickle.load(f)
params = results['x']
```
Let us now take a look at the results! These graphs compare the reference data (black) with the optimized qPDF fit (coloured curve).
```
# Auxiliary plotting function
import matplotlib.pyplot as plt
def plot_PDF(params, chi2):
if multi_output:
fig, axs = plt.subplots(2, 4, figsize=(13, 9), sharex=True, sharey=True)
i = 1
partons = ['sbar', 'ubar', 'dbar', 'gluon', 'd', 'u', 's', 'c']
partons_name = [r'$\bar s$', r'$\bar u$', r'$\bar d$', r'$g$', r'$d$', r'$u$', r'$s$', r'$c$']
xtrain = data[:, 0]
for ax, yprediction in zip(axs.flatten(), mypdf.predict(params, xtrain).transpose()):
ytrain = data[:, i].copy()
ysigma = data[:, i + 1].copy()
if i == 7:
ax.set(title=partons_name[(i - 1) // 2] + ' / 3', xscale='log')
yprediction /= 3
ytrain /= 3
ysigma /= 3
elif i == 15:
ax.set(title=partons_name[(i - 1) // 2] + r' $\times$ 10', xscale='log')
yprediction *= 10
ytrain *= 10
ysigma *= 10
else:
ax.set(title=partons_name[(i - 1) // 2], xscale='log')
if (i - 1) // 2 % 4 == 0:
ax.set(ylabel='PDF')
if (i - 1) // 2 > 3:
ax.set(xlabel='x')
ax.plot(xtrain, ytrain, label='Classical PDF', color='black')
ax.fill_between(xtrain, ytrain + ysigma, ytrain - ysigma, alpha=0.3, color='black')
ax.plot(xtrain, yprediction.flatten(), label=r'qPDF model', color='orange', linewidth=2, zorder=10)
ax.set(ylim=[-0.05, 1])
i += 2
ax.grid(True)
fig.suptitle(f'$\chi^2 = $ {chi2:.4f}')
plt.legend()
else:
fig, ax = plt.subplots(figsize = (8, 6))
ax.set(title=f'$\chi^2 = $ {chi2:.2f}', xlabel='x', ylabel='PDF',
xscale='log')
xtrain = data[:, 0]
ytrain = data[:, 1]
ysigma = data[:, 2]
yprediction = mypdf.predict(params, xtrain).flatten()
ax.plot(xtrain, ytrain, label='Classical '+ parton + ' PDF', color='black')
ax.fill_between(xtrain, ytrain + ysigma, ytrain - ysigma, alpha=0.3, color='black')
ax.plot(xtrain, yprediction.flatten(), label=r'Quantum PDF model', zorder=10)
ax.legend()
```
## Plot results
```
plot_PDF(params, chi2=loss(params))
```
# Visualization
## Matplotlib
<div style="clear:both"></div>
<hr style="height:2px;">
<div style="float:right; width:250 px"><img src="https://matplotlib.org/_static/logo2.png" alt="NumPy Logo" style="height: 150px;"></div>
## Objectives
1. Create a basic line plot.
1. Add labels and grid lines to the plot.
1. Plot multiple series of data.
1. Plot imshow, contour, and filled contour plots.
*This notebook was modified from one developed by Unidata*
## Getting Help with Matplotlib
Here are some important resources for learning more about Matplotlib and getting help.
- [NCAR Hackathons Data Visualization in Python Guide](https://ncar-hackathons.github.io/visualization)
- [Matplotlib documentation](http://matplotlib.org)
- [Matplotlib `plot` documentation](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.plot)
- [Matplotlib GitHub Issue Tracker](https://github.com/matplotlib/matplotlib/issues)
- [Matplotlib questions on StackOverflow](https://stackoverflow.com/questions/tagged/matplotlib)
## Plotting with Matplotlib
Matplotlib is a python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms.
The first step is to set up our notebook environment so that matplotlib plots appear inline as images:
```
%matplotlib inline
```
Next we import the matplotlib library's `pyplot` interface; this interface is the simplest way to create new Matplotlib figures. To shorten this long name, we import it as `plt` to keep things short but clear.
```
import matplotlib.pyplot as plt
import numpy as np
```
Now we generate some data to use while experimenting with plotting:
```
times = np.array([ 93., 96., 99., 102., 105., 108., 111., 114., 117.,
120., 123., 126., 129., 132., 135., 138., 141., 144.,
147., 150., 153., 156., 159., 162.])
temps = np.array([310.7, 308.0, 296.4, 289.5, 288.5, 287.1, 301.1, 308.3,
311.5, 305.1, 295.6, 292.4, 290.4, 289.1, 299.4, 307.9,
316.6, 293.9, 291.2, 289.8, 287.1, 285.8, 303.3, 310.])
```
Now we come to two quick lines to create a plot. Matplotlib has two core objects: the `Figure` and the `Axes`. The `Axes` is an individual plot with an x-axis, a y-axis, labels, etc; it has all of the various plotting methods we use. A `Figure` holds one or more `Axes` on which we draw; think of the `Figure` as the level at which things are saved to files (e.g. PNG, SVG)

Below, the first line asks for a `Figure` 10 inches by 6 inches. We then ask for an `Axes` or subplot on the `Figure`. After that, we call `plot`, with `times` as the data along the x-axis (independent values) and `temps` as the data along the y-axis (the dependent values).
```
# Create a figure
fig = plt.figure(figsize=(10, 6))
# Ask, out of a 1x1 grid, the first axes.
ax = fig.add_subplot(1, 1, 1)
# Plot times as x-variable and temperatures as y-variable
ax.plot(times, temps)
```
From there, we can do things like ask the axis to add labels for x and y:
```
# Add some labels to the plot
ax.set_xlabel('Time')
ax.set_ylabel('Temperature')
# Prompt the notebook to re-display the figure after we modify it
fig
```
We can also add a title to the plot:
```
ax.set_title('GFS Temperature Forecast', fontdict={'size':16})
fig
```
Of course, we can do so much more...
```
# Set up more temperature data
temps_1000 = np.array([316.0, 316.3, 308.9, 304.0, 302.0, 300.8, 306.2, 309.8,
313.5, 313.3, 308.3, 304.9, 301.0, 299.2, 302.6, 309.0,
311.8, 304.7, 304.6, 301.8, 300.6, 299.9, 306.3, 311.3])
```
Here we call `plot` more than once to plot multiple series of temperature on the same plot; when plotting we pass `label` to `plot` to facilitate automatic legend creation. The legend itself is added with the `legend` call. We also add gridlines to the plot using the `grid()` call.
```
fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(1, 1, 1)
# Plot two series of data
# The label argument is used when generating a legend.
ax.plot(times, temps, label='Temperature (surface)')
ax.plot(times, temps_1000, label='Temperature (1000 mb)')
# Add labels and title
ax.set_xlabel('Time')
ax.set_ylabel('Temperature')
ax.set_title('Temperature Forecast')
# Add gridlines
ax.grid(True)
# Add a legend to the upper left corner of the plot
ax.legend(loc='upper left')
```
We're not restricted to the default look of the plots, but rather we can override style attributes, such as `linestyle` and `color`. `color` can accept a wide array of options for color, such as `red` or `blue` or HTML color codes.
```
fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(1, 1, 1)
# Specify how our lines should look
ax.plot(times, temps, color='red', label='Temperature (surface)')
ax.plot(times, temps_1000, color='red', linestyle='--',
label='Temperature (isobaric level)')
# Same as above
ax.set_xlabel('Time')
ax.set_ylabel('Temperature')
ax.set_title('Temperature Forecast')
ax.grid(True)
ax.legend(loc='upper left')
```
### Exercise
* Use `add_subplot` to create two different subplots on the figure
* Create one subplot for temperature, and one for dewpoint
* Set the title of each subplot as appropriate
* Use `ax.set_xlim` and `ax.set_ylim` to control the plot boundaries
* **BONUS:** Experiment with passing `sharex` and `sharey` to `add_subplot` to <a href="https://matplotlib.org/gallery/subplots_axes_and_figures/shared_axis_demo.html#sphx-glr-gallery-subplots-axes-and-figures-shared-axis-demo-py">share plot limits</a>
```
# Fake dewpoint data to plot
dewpoint = 0.9 * temps
dewpoint_1000 = 0.9 * temps_1000
# Create the figure
fig = plt.figure(figsize=(10, 6))
# YOUR CODE GOES HERE
```
#### Solution
```
# %load solutions/subplots.py
```
## Scatter Plots
Maybe it doesn't make sense to plot your data as a line plot, but with markers (a scatter plot). We can do this by setting the `linestyle` to none and specifying a marker type, size, color, etc.
```
fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(1, 1, 1)
# Specify no line with circle markers
ax.plot(temps, temps_1000, linestyle='None', marker='o', markersize=5)
ax.set_xlabel('Temperature (surface)')
ax.set_ylabel('Temperature (1000 hPa)')
ax.set_title('Temperature Cross Plot')
ax.grid(True)
```
You can also use the `scatter` method, which is slower but gives you more control, such as being able to color the points individually based upon a third variable.
```
fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(1, 1, 1)
# Specify no line with circle markers
ax.scatter(temps, temps_1000)
ax.set_xlabel('Temperature (surface)')
ax.set_ylabel('Temperature (1000 hPa)')
ax.set_title('Temperature Cross Plot')
ax.grid(True)
```
### Exercise
* Beginning with our code above, add the `c` keyword argument to the `scatter` call and color the points by the difference between the surface and 1000 hPa temperature.
* Add a 1:1 line to the plot (slope of 1, intercept of zero). Use a black dashed line.
* **BONUS:** Change the color map to be something more appropriate for this plot.
* **BONUS:** Try to add a colorbar to the plot (have a look at the matplotlib documentation for help).
```
fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(1, 1, 1)
# YOUR CODE GOES HERE
ax.set_xlabel('Temperature (surface)')
ax.set_ylabel('Temperature (1000 hPa)')
ax.set_title('Temperature Cross Plot')
ax.grid(True)
```
#### Solution
```
# %load solutions/color_scatter.py
```
## imshow/contour
- `imshow` displays the values in an array as colored pixels, similar to a heat map.
- `contour` creates contours around data.
- `contourf` creates filled contours around data.
First let's create some fake data to work with - let's use a bivariate normal distribution.
```
x = y = np.arange(-3.0, 3.0, 0.025)
X, Y = np.meshgrid(x, y)
Z1 = np.exp(-X**2 - Y**2)
Z2 = np.exp(-(X - 1)**2 - (Y - 1)**2)
Z = (Z1 - Z2) * 2
```
Let's start with a simple imshow plot.
```
fig, ax = plt.subplots()
im = ax.imshow(Z, interpolation='bilinear', cmap='RdYlGn',
origin='lower', extent=[-3, 3, -3, 3])
```
We can also create contours around the data.
```
fig, ax = plt.subplots()
ax.contour(X, Y, Z)
fig, ax = plt.subplots()
c = ax.contour(X, Y, Z, levels=np.arange(-2, 2, 0.25))
ax.clabel(c)
fig, ax = plt.subplots()
c = ax.contourf(X, Y, Z)
```
### Exercise
* Create a figure using imshow and contour that is a heatmap in the colormap of your choice. Overlay black contours with a 0.5 contour interval.
```
# YOUR CODE GOES HERE
```
#### Solution
```
# %load solutions/contourf_contour.py
```
## Resources
The goal of this tutorial is to provide an overview of the use of the Matplotlib library. It covers creating simple line plots, but it is by no means comprehensive. For more information, try looking at the:
- [Matplotlib Documentation](http://matplotlib.org)
- [Matplotlib `plot` documentation](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.plot)
<div class="alert alert-block alert-success">
<p>Previous: <a href="00_intro.ipynb">Introduction</a></p>
<p>Next: <a href="02_cartopy.ipynb">Cartopy</a></p>
</div>
# Load and Process models
This script will load the M models in the collection using cobrapy, and convert them to a normalized format. They will also be exported to the "mat" format used by the COBRA toolbox.
This requires [cobrapy](https://opencobra.github.io/cobrapy) version 0.4.0b1 or later.
```
import os
import warnings
import re
from itertools import chain
import sympy
import scipy
import scipy.io
import cobra
from read_excel import read_excel
```
## Read in Models
```
def open_exchanges(model, amount=10):
for reaction in model.reactions:
if len(reaction.metabolites) == 1:
# Ensure we are not creating any new sinks
if reaction.metabolites.values()[0] > 0:
reaction.upper_bound = max(reaction.upper_bound, amount)
else:
reaction.lower_bound = min(reaction.lower_bound, -amount)
def add_exchanges(model, extracellular_suffix="[e]", uptake_amount=10):
for metabolite in model.metabolites:
if str(metabolite).endswith(extracellular_suffix):
if len(metabolite.reactions) == 0:
print "no reactions for " + metabolite.id
continue
if min(len(i.metabolites) for i in metabolite.reactions) > 1:
EX_reaction = cobra.Reaction("EX_" + metabolite.id)
EX_reaction.add_metabolites({metabolite: 1})
m.add_reaction(EX_reaction)
EX_reaction.upper_bound = uptake_amount
EX_reaction.lower_bound = -uptake_amount
```
### SBML models
These models will be read in using [libSBML](http://sbml.org/Software/libSBML) through cobrapy. Some models will need their exchanges opened.
```
legacy_SBML = {"T_Maritima", "iNJ661m", "iSR432", "iTH366"}
open_boundaries = {"iRsp1095", "iWV1314", "iFF708", "iZM363"}
models = cobra.DictList()
for i in sorted(os.listdir("sbml")):
if not i.endswith(".xml"):
continue
model_id = i[:-4]
filepath = os.path.join("sbml", i)
with warnings.catch_warnings():
warnings.simplefilter("ignore")
m = cobra.io.read_legacy_sbml(filepath) if model_id in legacy_SBML \
else cobra.io.read_sbml_model(filepath)
m.id = m.description = model_id.replace(".", "_")
if m.id in open_boundaries:
open_exchanges(m)
models.append(m)
```
### Models available in COBRA Toolbox "mat" format
```
for i in sorted(os.listdir("mat")):
if not i.endswith(".mat"):
continue
m = cobra.io.load_matlab_model(os.path.join("mat", i))
m.id = i[:-4]
if m.id in open_boundaries:
open_exchanges(m)
models.append(m)
```
### Some models are only available as Microsoft Excel files
```
m = read_excel("xls/iJS747.xls",
verbose=False, rxn_sheet_header=7)
models.append(m)
m = read_excel("xls/iRM588.xls",
verbose=False, rxn_sheet_header=5)
models.append(m)
m = read_excel("xls/iSO783.xls", verbose=False, rxn_sheet_header=2)
models.append(m)
m = read_excel("xls/iCR744.xls", rxn_sheet_header=4, verbose=False)
models.append(m)
m = read_excel("xls/iNV213.xls", rxn_str_key="Reaction Formula", verbose=False)
# remove boundary metabolites
for met in list(m.metabolites):
if met.id.endswith("[b]"):
met.remove_from_model()
models.append(m)
m = read_excel("xls/iTL885.xls", verbose=False,
rxn_id_key="Rxn name", rxn_gpr_key="Gene-reaction association", met_sheet_name="ignore")
models.append(m)
m = read_excel("xls/iWZ663.xls", verbose=False,
rxn_id_key="auto", rxn_name_key="Reaction name", rxn_gpr_key="Local gene")
models.append(m)
m = read_excel("xls/iOR363.xls", verbose=False)
models.append(m)
m = read_excel("xls/iMA945.xls", verbose=False)
models.append(m)
m = read_excel("xls/iPP668.xls", verbose=False)
add_exchanges(m)
models.append(m)
m = read_excel("xls/iVM679.xls", verbose=False, met_sheet_name="ignore",
rxn_id_key="Name", rxn_name_key="Description", rxn_str_key="Reaction")
open_exchanges(m)
models.append(m)
m = read_excel("xls/iTY425.xls", rxn_sheet_header=1,
rxn_sheet_name="S8", rxn_id_key="Number", rxn_str_key="Reaction", verbose=False)
add_exchanges(m, "xt")
# Protein production reaction does not produce the "PROTEIN" metabolite
m.reactions.R511.add_metabolites({m.metabolites.PROTEIN: 1})
m.id = m.id + "_fixed"
models.append(m)
m = read_excel("xls/iSS724.xls", rxn_str_key="Reactions",
rxn_sheet_header=1, met_sheet_header=1, rxn_id_key="Name",
verbose=False)
add_exchanges(m, "xt")
models.append(m)
m = read_excel("xls/iCS400.xls", rxn_sheet_name="Complete Rxn List",
rxn_sheet_header=2, rxn_str_key="Reaction",
rxn_id_key="Name", verbose=False)
add_exchanges(m, "xt")
models.append(m)
m = read_excel("xls/iLL672.xls",
rxn_id_key="auto", met_sheet_name="Appendix 3 iLL672 metabolites",\
rxn_str_key="REACTION", rxn_gpr_key="skip", verbose=False,
rxn_sheet_name='Appendix 3 iLL672 reactions')
m.reactions[-1].objective_coefficient = 1
m.metabolites.BM.remove_from_model()
add_exchanges(m, "xt")
models.append(m)
plus_re = re.compile("(?<=\S)\+") # substitute H+ with H, etc.
m = read_excel("xls/iMH551.xls", rxn_sheet_name="GPR Annotation", rxn_sheet_header=4,
rxn_id_key="auto", rxn_str_key="REACTION", rxn_gpr_key="skip",
rxn_name_key="ENZYME", rxn_skip_rows=[625, 782, 787], verbose=False,
rxn_sheet_converters={"REACTION": lambda x: plus_re.sub("", x)})
for met in m.metabolites:
if met.id.endswith("(extracellular)"):
met.id = met.id[:-15] + "_e"
m.repair()
add_exchanges(m, "_e")
models.append(m)
m = read_excel("xls/iCS291.xls", rxn_sheet_name="Sheet1",
rxn_str_key="Reaction",
rxn_sheet_header=5, rxn_id_key="Name",
verbose=False)
add_exchanges(m, "xt")
# BIOMASS is just all model metabolites in the Demands list
m.add_reaction(cobra.Reaction("BIOMASS"))
# taken from Table 1 in publication
biomass_mets = {}
for i in {"ALA", "ARG", "ASN", "ASP", "CYS", "GLU", "GLN", "GLY",
"HIS", "ILE", "LEU", "LYS", "MET", "PHE", "PRO", "SER",
"THR", "TRP", "TYR", "VAL", "PTRC", "SPMD", "ATP", "GTP",
"CTP", "UTP", "DATP", "DGTP", "DCTP", "DTTP", "PS", "PE",
"PG", "PEPTIDO", "LPS", "OPP", "UDPP", "NAD", "NADP", "FAD",
"COA", "ACP", "PTH", "THIAMIN", "MTHF", "MK", "DMK"
}:
biomass_mets[m.metabolites.get_by_id(i)] = -1
dm = cobra.Reaction("DM_" + i)
m.add_reaction(dm)
dm.add_metabolites({m.metabolites.get_by_id(i): -1})
m.reactions.BIOMASS.add_metabolites(biomass_mets)
m.change_objective("BIOMASS")
add_exchanges(m, "xt")
models.append(m)
m = read_excel("xls/iYO844.xls", rxn_sheet_name="Reaction and locus", verbose=False, rxn_gpr_key="Locus name",
rxn_str_key=u'Equation (note [c] and [e] at the beginning refer to the compartment \n'
'the reaction takes place in, cytosolic and extracellular respectively)')
add_exchanges(m)
# create the biomass reaction from supplementary data table
# http://www.jbc.org/content/suppl/2007/06/29/M703759200.DC1/Biomass_composition.doc
r = cobra.Reaction("biomass")
r.objective_coefficient = 1.
m.add_reaction(r)
r.reaction = ("408.3 gly[c] + 266.9 ala-L[c] + 306.7 val-L[c] + 346.4 leu-L[c] + 269.9 ile-L[c] + "
"216.2 ser-L[c] + 186.3 thr-L[c] + 175.9 phe-L[c] + 110.8 tyr-L[c] + 54.3 trp-L[c] + "
"56.7 cys-L[c] + 113.3 met-L[c] + 323.1 lys-L[c] + 193.0 arg-L[c] + 81.7 his-L[c] + "
"148.0 asp-L[c] + 260.4 glu-L[c] + 148.0 asp-L[c] + 260.3 gln-L[c] + 160.6 pro-L[c] + "
"62.7 gtp[c] + 38.9 ctp[c] + 41.5 utp[c] + 23.0 datp[c] + 17.4 dgtp[c] + 17.4 dctp[c] + "
"22.9 dttp[c] + 0.085750 m12dg_BS[c] + 0.110292 d12dg_BS[c] + 0.065833 t12dg_BS[c] + "
"0.004642 cdlp_BS[c] + 0.175859 pgly_BS[c] + 0.022057 lysylpgly_BS[c] + 0.559509 psetha_BS[c] + "
"0.006837 lipo1-24_BS[c] + 0.006123 lipo2-24_BS[c] + 0.018162 lipo3-24_BS[c] + "
"0.014676 lipo4-24_BS[c] + 101.82 peptido_BS[c] + 3.62 gtca1-45_BS[c] + 2.35 gtca2-45_BS[c] + "
"1.82 gtca3-45_BS[c] + 3.11 tcam_BS[c] + 706.3 k[c] + 101.7 mg2[c] + 3.4 fe3[c] + 3.2 ca2[c] + "
"0.9 ppi[c] + 0.3 mql7[c] + 0.4 10fthf[c] + 16.2 nad[c] + 4.7 amp[c] + 2.6 adp[c] + 1.0 cmp[c] + "
"0.9 nadp[c] + 0.5 ctp[c] + 0.5 gmp[c] + 0.4 gtp[c] + 0.3 cdp[c] + 0.2 nadph[c] + 0.2 gdp[c] + "
"105053.5 atp[c] + 105000 h2o[c] --> 104985.6 pi[c] + 104997.4 adp[c] + 105000 h[c]")
# units are in mg for this reaction, so scale to grams
r *= 0.001
models.append(m)
models.sort()
```
## Determine Objective Reactions
Some of these models do not specify an objective (or biomass) reaction. These will be automatically detected if possible, or set from a manually curated list.
```
# regular expression to detect "biomass"
biomass_re = re.compile("biomass", re.IGNORECASE)
# manually identified objective reactions
curated_objectives = {"VvuMBEL943": "R806",
"iAI549": "BIO_CBDB1_DM_855",
"mus_musculus": "BIO028",
"iRsp1095": "RXN1391",
"iLC915": "r1133",
"PpaMBEL1254": "R01288",
"AbyMBEL891": "R761",
"iAbaylyiV4": "GROWTH_DASH_RXN",
"iOG654": "RM00001",
"iOR363": "OF14e_Retli",
"iRM588": "agg_GS13m",
"iJS747": "agg_GS13m_2",
"iTL885": "SS1240",
"iMH551": "R0227"}
for m in models:
if len(m.reactions.query(lambda x: x > 0, "objective_coefficient")):
continue
if m.id in curated_objectives:
m.change_objective(curated_objectives[m.id])
continue
# look for reactions with "biomass" in the id or name
possible_objectives = m.reactions.query(biomass_re)
if len(possible_objectives) == 0:
possible_objectives = m.reactions.query(biomass_re, "name")
# In some cases, a biomass "metabolite" is produced, whose production
# should be the objective function.
possible_biomass_metabolites = m.metabolites.query(biomass_re)
if len(possible_biomass_metabolites) == 0:
possible_biomass_metabolites = m.metabolites.query(biomass_re, "name")
if len(possible_biomass_metabolites) > 0:
biomass_met = possible_biomass_metabolites[0]
r = cobra.Reaction("added_biomass_sink")
r.objective_coefficient = 1
r.add_metabolites({biomass_met: -1})
m.add_reaction(r)
print ("autodetected biomass metabolite '%s' for model '%s'" %
(biomass_met.id, m.id))
elif len(possible_objectives) > 0:
print("autodetected objective reaction '%s' for model '%s'" %
(possible_objectives[0].id, m.id))
m.change_objective(possible_objectives[0])
else:
print("no objective found for " + m.id)
# Ensure the biomass objective flux is unconstrained
for m in models:
for reaction in m.reactions.query(lambda x: x > 0, "objective_coefficient"):
reaction.lower_bound = min(reaction.lower_bound, 0)
reaction.upper_bound = max(reaction.upper_bound, 1000)
```
## Fixes of various encoding bugs
### General
GSMN_TB does not use the convention of extracellular metabolites with exchange reactions. Although the model still solves with this formulation, it is normalized here; this process does not change the mathematical structure of the model.
```
h_c = models.GSMN_TB.metabolites.H_c
for r in models.GSMN_TB.reactions:
if len(r.metabolites) == 2 and h_c in r.metabolites:
met = [i for i in r.metabolites if i is not h_c][0]
EX_met = cobra.Metabolite(met.id[:-1] + "e")
r.add_metabolites({EX_met: -r.metabolites[met]})
if "EX_" + EX_met.id not in models.GSMN_TB.reactions:
exchange = cobra.Reaction("EX_" + EX_met.id)
exchange.add_metabolites({EX_met: -1})
exchange.lower_bound = -1000000.0
exchange.upper_bound = 1000000.0
models.GSMN_TB.add_reaction(exchange)
```
### Reaction and Metabolites
### id's
```
# reaction id's with spaces in them
models.iJS747.reactions.get_by_id("HDH [deleted 01/16/2007 12:02:30 PM]").id = "HDH_del"
models.iJS747.reactions.get_by_id("HIBD [deleted 03/21/2007 01:06:12 PM]").id = "HIBD_del"
models.iAC560.reactions.get_by_id("GLUDx [m]").id = "GLUDx[m]"
for r in models.iOR363.reactions:
if " " in r.id:
r.id = r.id.split()[0]
models.textbook.reactions.query("Biomass")[0].id = "Biomass_Ecoli_core"
```
Use the convention underscore + compartment, i.e. _c instead of [c], (c), etc.
```
SQBKT_re = re.compile("\[([a-z])\]$")
def fix_brackets(id_str, compiled_re):
result = compiled_re.findall(id_str)
if len(result) > 0:
return compiled_re.sub("_" + result[0], id_str)
else:
return id_str
for r in models.iRS1597.reactions:
r.id = fix_brackets(r.id, re.compile("_LSQBKT_([a-z])_RSQBKT_$"))
for m_id in ["iJS747", "iRM588", "iSO783", "iCR744", "iNV213", "iWZ663", "iOR363", "iMA945", "iPP668",
"iTL885", "iVM679", "iYO844", "iZM363"]:
for met in models.get_by_id(m_id).metabolites:
met.id = fix_brackets(met.id, SQBKT_re)
for met in models.S_coilicolor_fixed.metabolites:
if met.id.endswith("_None_"):
met.id = met.id[:-6]
# Some models only have intra and extracellular metabolites, but don't use _c and _e.
for m_id in ["iCS291", "iCS400", "iTY425_fixed", "iSS724"]:
for metabolite in models.get_by_id(m_id).metabolites:
if metabolite.id.endswith("xt"):
metabolite.id = metabolite.id[:-2] + "_e"
elif len(metabolite.id) < 2 or metabolite.id[-2] != "_":
metabolite.id = metabolite.id + "_c"
# Exchange reactions should include the id of the metabolite they exchange, using the same convention
for m_id in ["iAF1260", "iJO1366", "iAF692", "iJN746", "iRC1080", "textbook", "iNV213",
"iIT341", "iJN678", "iJR904", "iND750", "iNJ661", "iPS189_fixed", "iSB619",
"iZM363", "iMH551"]:
for r in models.get_by_id(m_id).reactions:
if len(r.metabolites) != 1:
continue
if r.id.startswith("EX_"):
r.id = "EX_" + list(r.metabolites.keys())[0].id
if r.id.startswith("DM_"):
r.id = "DM_" + list(r.metabolites.keys())[0].id
for m in models:
m.repair()
```
### Metabolite Formulas
```
for model in models:
for metabolite in model.metabolites:
if metabolite.formula is None:
metabolite.formula = ""
continue
if str(metabolite.formula).lower() == "none":
metabolite.formula = ""
continue
# some characters should not be in a formula
if "(" in metabolite.formula or \
")" in metabolite.formula or \
"." in metabolite.formula:
metabolite.formula = ""
```
### Metabolite Compartments
```
compartments = {
'c': 'Cytoplasm',
'e': 'Extracellular',
'p': 'Periplasm',
'm': 'Mitochondria',
'g': 'Golgi',
'n': "Nucleus",
'r': "Endoplasmic reticulum",
'x': "Peroxisome",
'v': "Vacuole",
"h": "Chloroplast",
"x": "Glyoxysome",
"s": "Eyespot",
"default": "No Compartment"}
for model in models:
for metabolite in model.metabolites:
if metabolite.compartment is None or len(metabolite.compartment.strip()) == 0 or metabolite.compartment == "[":
if len(metabolite.id) > 2 and metabolite.id[-2] == "_" and metabolite.id[-1].isalpha():
metabolite.compartment = metabolite.id[-1]
else:
metabolite.compartment = "default"
if metabolite.compartment not in model.compartments:
model.compartments[metabolite.compartment] = compartments.get(metabolite.compartment, metabolite.compartment)
```
### Metabolite and Reaction Names
Names which start with numbers don't need to be escaped with underscores.
```
for model in models:
for x in chain(model.metabolites, model.reactions):
if x.name is not None and x.name.startswith("_"):
x.name = x.name.lstrip("_")
if x.name is not None:
x.name = x.name.strip()
if x.name is None:
x.name = x.id
```
### MISC fixes
```
models.iMM1415.reactions.EX_lnlc_dup_e.remove_from_model()
models.iMM1415.reactions.EX_retpalm_e.remove_from_model(remove_orphans=True)
# these reaction names are reaction strings
for r in models.iCac802.reactions:
r.name = ""
```
## Fix Genes and GPR's
A lot of genes have characters which won't work in their names
```
# nonbreaking spaces
models.iCB925.reactions.FDXNRy.gene_reaction_rule = '( Cbei_0661 or Cbei_2182 )'
for r in models.iCB925.reactions:
if "\xa0" in r.gene_reaction_rule:
r.gene_reaction_rule = r.gene_reaction_rule.replace("\xc2", " ").replace("\xa0", " ")
for g in list(models.iCB925.genes):
if len(g.reactions) == 0:
models.iCB925.genes.remove(g)
```
Some GPR's are not valid boolean expressions.
```
multiple_ors = re.compile("(\s*or\s+){2,}")
multiple_ands = re.compile("(\s*and\s+){2,}")
for model_id in ["iRS1563", "iRS1597", "iMM1415"]:
model = models.get_by_id(model_id)
for reaction in model.reactions:
gpr = reaction.gene_reaction_rule
gpr = multiple_ors.sub(" or ", gpr)
gpr = multiple_ands.sub(" and ", gpr)
if "[" in gpr:
gpr = gpr.replace("[", "(").replace("]", ")")
if gpr.endswith(" or"):
gpr = gpr[:-3]
if gpr.count("(") != gpr.count(")"):
gpr = "" # mismatched parenthesis somewhere
reaction.gene_reaction_rule = gpr
for gene in list(model.genes):
if gene.id.startswith("[") or gene.id.endswith("]"):
if len(gene.reactions) == 0:
model.genes.remove(gene.id)
# Some models are missing spaces between the ands/ors in some of their GPR's
for m_id in ["iJN678", "iTL885"]:
for r in models.get_by_id(m_id).reactions:
r.gene_reaction_rule = r.gene_reaction_rule.replace("and", " and ").replace("or", " or ")
models.iCac802.reactions.R0095.gene_reaction_rule = \
models.iCac802.reactions.R0095.gene_reaction_rule.replace(" AND ", " and ")
# make sbml3 output deterministic by sorting genes
for m in models:
m.genes.sort()
```
## Ensure all ID's are SBML compliant
```
for m in models:
cobra.manipulation.escape_ID(m)
```
## Export Models
### SBML 3
Export the models using the fbc version 2 (draft RC6) extension to SBML level 3 version 1.
```
for model in models:
cobra.io.write_sbml_model(model, "sbml3/%s.xml" % model.id)
```
### mat
Save all the models into a single mat file. In addition to the usual fields in the "mat" struct, we will also include S_num and S_denom, which are the numerator and denominator of the stoichiometric coefficients encoded as rational numbers.
```
def convert_to_rational(value):
return sympy.Rational("%.15g" % value)
def construct_S_num_denom(model):
"""convert model to two S matrices
they encode the numerator and denominator of stoichiometric
coefficients encoded as rational numbers
"""
    # initialize to 0
dimensions = (len(model.metabolites), len(model.reactions))
S_num = scipy.sparse.lil_matrix(dimensions)
S_denom = scipy.sparse.lil_matrix(dimensions)
# populate with stoichiometry
for i, r in enumerate(model.reactions):
        for met, value in r._metabolites.items():
rational_value = convert_to_rational(value)
num, denom = (rational_value.p, rational_value.q)
S_num[model.metabolites.index(met), i] = num
S_denom[model.metabolites.index(met), i] = denom
return S_num, S_denom
all_model_dict = {}
for model in models:
model_dict = cobra.io.mat.create_mat_dict(model)
model_dict["S_num"], model_dict["S_denom"] = construct_S_num_denom(model)
all_model_dict[model.id] = model_dict
scipy.io.savemat("all_models.mat", all_model_dict, oned_as="column")
```
# Mask R-CNN - Train on Shapes Dataset
This notebook shows how to train Mask R-CNN on your own dataset. To keep things simple we use a synthetic dataset of shapes (squares, triangles, and circles) which enables fast training. You'd still need a GPU, though, because the network backbone is a Resnet101, which would be too slow to train on a CPU. On a GPU, you can start to get okay-ish results in a few minutes, and good results in less than an hour.
The code of the *Shapes* dataset is included below. It generates images on the fly, so it doesn't require downloading any data. And it can generate images of any size, so we pick a small image size to train faster.
```
import os
import sys
import random
import math
import re
import time
import numpy as np
import cv2
import matplotlib
import matplotlib.pyplot as plt
# Root directory of the project
ROOT_DIR = os.path.abspath("../../")
# Import Mask RCNN
sys.path.append(ROOT_DIR) # To find local version of the library
from mrcnn.config import Config
from mrcnn import utils
import mrcnn.model as modellib
from mrcnn import visualize
from mrcnn.model import log
%matplotlib inline
# Directory to save logs and trained model
MODEL_DIR = os.path.join(ROOT_DIR, "logs")
# Local path to trained weights file
COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5")
# Download COCO trained weights from Releases if needed
if not os.path.exists(COCO_MODEL_PATH):
utils.download_trained_weights(COCO_MODEL_PATH)
```
## Configurations
```
class ShapesConfig(Config):
"""Configuration for training on the toy shapes dataset.
Derives from the base Config class and overrides values specific
to the toy shapes dataset.
"""
# Give the configuration a recognizable name
NAME = "shapes"
# Train on 1 GPU and 8 images per GPU. We can put multiple images on each
# GPU because the images are small. Batch size is 8 (GPUs * images/GPU).
GPU_COUNT = 1
IMAGES_PER_GPU = 8
# Number of classes (including background)
NUM_CLASSES = 1 + 3 # background + 3 shapes
    # Use small images for faster training. Set the limits of the small side,
# the large side, and that determines the image shape.
IMAGE_MIN_DIM = 128
IMAGE_MAX_DIM = 128
# Use smaller anchors because our image and objects are small
RPN_ANCHOR_SCALES = (8, 16, 32, 64, 128) # anchor side in pixels
# Reduce training ROIs per image because the images are small and have
# few objects. Aim to allow ROI sampling to pick 33% positive ROIs.
TRAIN_ROIS_PER_IMAGE = 32
# Use a small epoch since the data is simple
STEPS_PER_EPOCH = 100
# use small validation steps since the epoch is small
VALIDATION_STEPS = 5
config = ShapesConfig()
config.display()
```
## Notebook Preferences
```
def get_ax(rows=1, cols=1, size=8):
"""Return a Matplotlib Axes array to be used in
all visualizations in the notebook. Provide a
central point to control graph sizes.
Change the default size attribute to control the size
of rendered images
"""
_, ax = plt.subplots(rows, cols, figsize=(size*cols, size*rows))
return ax
```
## Dataset
Create a synthetic dataset
Extend the Dataset class and add a method to load the shapes dataset, `load_shapes()`, and override the following methods:
* load_image()
* load_mask()
* image_reference()
```
class ShapesDataset(utils.Dataset):
"""Generates the shapes synthetic dataset. The dataset consists of simple
shapes (triangles, squares, circles) placed randomly on a blank surface.
The images are generated on the fly. No file access required.
"""
def load_shapes(self, count, height, width):
"""Generate the requested number of synthetic images.
count: number of images to generate.
height, width: the size of the generated images.
"""
# Add classes
self.add_class("shapes", 1, "square")
self.add_class("shapes", 2, "circle")
self.add_class("shapes", 3, "triangle")
# Add images
# Generate random specifications of images (i.e. color and
# list of shapes sizes and locations). This is more compact than
# actual images. Images are generated on the fly in load_image().
for i in range(count):
bg_color, shapes = self.random_image(height, width)
self.add_image("shapes", image_id=i, path=None,
width=width, height=height,
bg_color=bg_color, shapes=shapes)
def load_image(self, image_id):
"""Generate an image from the specs of the given image ID.
Typically this function loads the image from a file, but
in this case it generates the image on the fly from the
specs in image_info.
"""
info = self.image_info[image_id]
bg_color = np.array(info['bg_color']).reshape([1, 1, 3])
image = np.ones([info['height'], info['width'], 3], dtype=np.uint8)
image = image * bg_color.astype(np.uint8)
for shape, color, dims in info['shapes']:
image = self.draw_shape(image, shape, dims, color)
return image
def image_reference(self, image_id):
"""Return the shapes data of the image."""
info = self.image_info[image_id]
if info["source"] == "shapes":
return info["shapes"]
else:
super(self.__class__).image_reference(self, image_id)
def load_mask(self, image_id):
"""Generate instance masks for shapes of the given image ID.
"""
info = self.image_info[image_id]
shapes = info['shapes']
count = len(shapes)
mask = np.zeros([info['height'], info['width'], count], dtype=np.uint8)
for i, (shape, _, dims) in enumerate(info['shapes']):
mask[:, :, i:i+1] = self.draw_shape(mask[:, :, i:i+1].copy(),
shape, dims, 1)
# Handle occlusions
occlusion = np.logical_not(mask[:, :, -1]).astype(np.uint8)
for i in range(count-2, -1, -1):
mask[:, :, i] = mask[:, :, i] * occlusion
occlusion = np.logical_and(occlusion, np.logical_not(mask[:, :, i]))
# Map class names to class IDs.
class_ids = np.array([self.class_names.index(s[0]) for s in shapes])
return mask.astype(np.bool), class_ids.astype(np.int32)
def draw_shape(self, image, shape, dims, color):
"""Draws a shape from the given specs."""
# Get the center x, y and the size s
x, y, s = dims
if shape == 'square':
cv2.rectangle(image, (x-s, y-s), (x+s, y+s), color, -1)
elif shape == "circle":
cv2.circle(image, (x, y), s, color, -1)
elif shape == "triangle":
points = np.array([[(x, y-s),
(x-s/math.sin(math.radians(60)), y+s),
(x+s/math.sin(math.radians(60)), y+s),
]], dtype=np.int32)
cv2.fillPoly(image, points, color)
return image
def random_shape(self, height, width):
"""Generates specifications of a random shape that lies within
the given height and width boundaries.
        Returns a tuple of three values:
* The shape name (square, circle, ...)
* Shape color: a tuple of 3 values, RGB.
* Shape dimensions: A tuple of values that define the shape size
and location. Differs per shape type.
"""
# Shape
shape = random.choice(["square", "circle", "triangle"])
# Color
color = tuple([random.randint(0, 255) for _ in range(3)])
# Center x, y
buffer = 20
y = random.randint(buffer, height - buffer - 1)
x = random.randint(buffer, width - buffer - 1)
# Size
s = random.randint(buffer, height//4)
return shape, color, (x, y, s)
def random_image(self, height, width):
"""Creates random specifications of an image with multiple shapes.
Returns the background color of the image and a list of shape
specifications that can be used to draw the image.
"""
# Pick random background color
bg_color = np.array([random.randint(0, 255) for _ in range(3)])
# Generate a few random shapes and record their
# bounding boxes
shapes = []
boxes = []
N = random.randint(1, 4)
for _ in range(N):
shape, color, dims = self.random_shape(height, width)
shapes.append((shape, color, dims))
x, y, s = dims
boxes.append([y-s, x-s, y+s, x+s])
        # Apply non-max suppression with a 0.3 threshold to avoid
# shapes covering each other
keep_ixs = utils.non_max_suppression(np.array(boxes), np.arange(N), 0.3)
shapes = [s for i, s in enumerate(shapes) if i in keep_ixs]
return bg_color, shapes
# Training dataset
dataset_train = ShapesDataset()
dataset_train.load_shapes(500, config.IMAGE_SHAPE[0], config.IMAGE_SHAPE[1])
dataset_train.prepare()
# Validation dataset
dataset_val = ShapesDataset()
dataset_val.load_shapes(50, config.IMAGE_SHAPE[0], config.IMAGE_SHAPE[1])
dataset_val.prepare()
# Load and display random samples
image_ids = np.random.choice(dataset_train.image_ids, 4)
for image_id in image_ids:
image = dataset_train.load_image(image_id)
mask, class_ids = dataset_train.load_mask(image_id)
visualize.display_top_masks(image, mask, class_ids, dataset_train.class_names)
```
## Create Model
```
# Create model in training mode
model = modellib.MaskRCNN(mode="training", config=config,
model_dir=MODEL_DIR)
# Which weights to start with?
init_with = "coco" # imagenet, coco, or last
if init_with == "imagenet":
model.load_weights(model.get_imagenet_weights(), by_name=True)
elif init_with == "coco":
# Load weights trained on MS COCO, but skip layers that
# are different due to the different number of classes
# See README for instructions to download the COCO weights
model.load_weights(COCO_MODEL_PATH, by_name=True,
exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
"mrcnn_bbox", "mrcnn_mask"])
elif init_with == "last":
# Load the last model you trained and continue training
model.load_weights(model.find_last(), by_name=True)
```
## Training
Train in two stages:
1. Only the heads. Here we're freezing all the backbone layers and training only the randomly initialized layers (i.e. the ones that we didn't use pre-trained weights from MS COCO). To train only the head layers, pass `layers='heads'` to the `train()` function.
2. Fine-tune all layers. For this simple example it's not necessary, but we're including it to show the process. Simply pass `layers="all"` to train all layers.
```
# Train the head branches
# Passing layers="heads" freezes all layers except the head
# layers. You can also pass a regular expression to select
# which layers to train by name pattern.
model.train(dataset_train, dataset_val,
learning_rate=config.LEARNING_RATE,
epochs=1,
layers='heads')
# Fine tune all layers
# Passing layers="all" trains all layers. You can also
# pass a regular expression to select which layers to
# train by name pattern.
model.train(dataset_train, dataset_val,
learning_rate=config.LEARNING_RATE / 10,
epochs=2,
layers="all")
# Save weights
# Typically not needed because callbacks save after every epoch
# Uncomment to save manually
# model_path = os.path.join(MODEL_DIR, "mask_rcnn_shapes.h5")
# model.keras_model.save_weights(model_path)
```
## Detection
```
class InferenceConfig(ShapesConfig):
GPU_COUNT = 1
IMAGES_PER_GPU = 1
inference_config = InferenceConfig()
# Recreate the model in inference mode
model = modellib.MaskRCNN(mode="inference",
config=inference_config,
model_dir=MODEL_DIR)
# Get path to saved weights
# Either set a specific path or find last trained weights
# model_path = os.path.join(ROOT_DIR, ".h5 file name here")
model_path = model.find_last()
# Load trained weights
print("Loading weights from ", model_path)
model.load_weights(model_path, by_name=True)
# Test on a random image
image_id = random.choice(dataset_val.image_ids)
original_image, image_meta, gt_class_id, gt_bbox, gt_mask =\
modellib.load_image_gt(dataset_val, inference_config,
image_id, use_mini_mask=False)
log("original_image", original_image)
log("image_meta", image_meta)
log("gt_class_id", gt_class_id)
log("gt_bbox", gt_bbox)
log("gt_mask", gt_mask)
visualize.display_instances(original_image, gt_bbox, gt_mask, gt_class_id,
dataset_train.class_names, figsize=(8, 8))
results = model.detect([original_image], verbose=1)
r = results[0]
visualize.display_instances(original_image, r['rois'], r['masks'], r['class_ids'],
dataset_val.class_names, r['scores'], ax=get_ax())
```
## Evaluation
```
# Compute VOC-Style mAP @ IoU=0.5
# Running on 10 images. Increase for better accuracy.
image_ids = np.random.choice(dataset_val.image_ids, 10)
APs = []
for image_id in image_ids:
# Load image and ground truth data
image, image_meta, gt_class_id, gt_bbox, gt_mask =\
modellib.load_image_gt(dataset_val, inference_config,
image_id, use_mini_mask=False)
molded_images = np.expand_dims(modellib.mold_image(image, inference_config), 0)
# Run object detection
results = model.detect([image], verbose=0)
r = results[0]
# Compute AP
AP, precisions, recalls, overlaps =\
utils.compute_ap(gt_bbox, gt_class_id, gt_mask,
r["rois"], r["class_ids"], r["scores"], r['masks'])
APs.append(AP)
print("mAP: ", np.mean(APs))
```
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

# Neural style transfer on video
Using modified code from `pytorch`'s neural style [example](https://pytorch.org/tutorials/advanced/neural_style_tutorial.html), we show how to setup a pipeline for doing style transfer on video. The pipeline has following steps:
1. Split a video into images
2. Run neural style on each image using one of the provided models (from `pytorch` pretrained models for this example).
3. Stitch the image back into a video.
> **Tip**
If your system requires low-latency processing (to process a single document or small set of documents quickly), use [real-time scoring](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-consume-web-service) instead of batch prediction.
## Prerequisites
If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the configuration Notebook located at https://github.com/Azure/MachineLearningNotebooks first if you haven't. This sets you up with a working config file that has information on your workspace, subscription id, etc.
## Initialize Workspace
Initialize a workspace object from persisted configuration.
```
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
from azureml.core import Workspace, Experiment
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep = '\n')
from azureml.core.compute import AmlCompute, ComputeTarget
from azureml.core import Datastore, Dataset
from azureml.pipeline.core import Pipeline
from azureml.pipeline.steps import PythonScriptStep
from azureml.core.runconfig import CondaDependencies, RunConfiguration
from azureml.core.compute_target import ComputeTargetException
from azureml.data import OutputFileDatasetConfig
```
# Download models
```
import os
# create directory for model
model_dir = 'models'
if not os.path.isdir(model_dir):
os.mkdir(model_dir)
import urllib.request
def download_model(model_name):
# downloaded models from https://pytorch.org/tutorials/advanced/neural_style_tutorial.html are kept here
url = "https://pipelinedata.blob.core.windows.net/styletransfer/saved_models/" + model_name
local_path = os.path.join(model_dir, model_name)
urllib.request.urlretrieve(url, local_path)
```
# Register all Models
```
from azureml.core.model import Model
mosaic_model = None
candy_model = None
models = Model.list(workspace=ws, tags=['scenario'])
for m in models:
print("Name:", m.name,"\tVersion:", m.version, "\tDescription:", m.description, m.tags)
if m.name == 'mosaic' and mosaic_model is None:
mosaic_model = m
elif m.name == 'candy' and candy_model is None:
candy_model = m
if mosaic_model is None:
print('Mosaic model does not exist, registering it')
download_model('mosaic.pth')
mosaic_model = Model.register(model_path = os.path.join(model_dir, "mosaic.pth"),
model_name = "mosaic",
tags = {'type': "mosaic", 'scenario': "Style transfer using batch inference"},
description = "Style transfer - Mosaic",
workspace = ws)
else:
print('Reusing existing mosaic model')
if candy_model is None:
print('Candy model does not exist, registering it')
download_model('candy.pth')
candy_model = Model.register(model_path = os.path.join(model_dir, "candy.pth"),
model_name = "candy",
tags = {'type': "candy", 'scenario': "Style transfer using batch inference"},
description = "Style transfer - Candy",
workspace = ws)
else:
print('Reusing existing candy model')
```
# Create or use existing compute
```
# AmlCompute
cpu_cluster_name = "cpu-cluster"
try:
cpu_cluster = AmlCompute(ws, cpu_cluster_name)
print("found existing cluster.")
except ComputeTargetException:
print("creating new cluster")
provisioning_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_D2_v2",
max_nodes = 1)
# create the cluster
cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, provisioning_config)
cpu_cluster.wait_for_completion(show_output=True)
# AmlCompute
gpu_cluster_name = "gpu-cluster"
try:
gpu_cluster = AmlCompute(ws, gpu_cluster_name)
print("found existing cluster.")
except ComputeTargetException:
print("creating new cluster")
provisioning_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_NC6",
max_nodes = 3)
# create the cluster
gpu_cluster = ComputeTarget.create(ws, gpu_cluster_name, provisioning_config)
gpu_cluster.wait_for_completion(show_output=True)
```
# Python Scripts
We use an edited version of `neural_style_mpi.py` (original is [here](https://github.com/pytorch/examples/blob/master/fast_neural_style/neural_style/neural_style.py)). Scripts to split and stitch the video are thin wrappers to calls to `ffmpeg`.
We install `ffmpeg` through conda dependencies.
```
scripts_folder = "scripts"
process_video_script_file = "process_video.py"
# peek at contents
with open(os.path.join(scripts_folder, process_video_script_file)) as process_video_file:
print(process_video_file.read())
stitch_video_script_file = "stitch_video.py"
# peek at contents
with open(os.path.join(scripts_folder, stitch_video_script_file)) as stitch_video_file:
print(stitch_video_file.read())
```
The sample video **orangutan.mp4** is stored in a publicly shared datastore, which we register below. If you want to take a look at the original video, click here: https://pipelinedata.blob.core.windows.net/sample-videos/orangutan.mp4
```
# datastore for input video
account_name = "pipelinedata"
video_ds = Datastore.register_azure_blob_container(ws, "videos", "sample-videos",
account_name=account_name, overwrite=True)
# the default blob store attached to a workspace
default_datastore = ws.get_default_datastore()
```
# Sample video
```
video_name=os.getenv("STYLE_TRANSFER_VIDEO_NAME", "orangutan.mp4")
orangutan_video = Dataset.File.from_files((video_ds,video_name))
cd = CondaDependencies()
cd.add_channel("conda-forge")
cd.add_conda_package("ffmpeg==4.0.2")
# Runconfig
amlcompute_run_config = RunConfiguration(conda_dependencies=cd)
amlcompute_run_config.environment.docker.base_image = "pytorch/pytorch"
amlcompute_run_config.environment.spark.precache_packages = False
ffmpeg_audio = OutputFileDatasetConfig(name="ffmpeg_audio")
processed_images = OutputFileDatasetConfig(name="processed_images")
output_video = OutputFileDatasetConfig(name="output_video")
ffmpeg_images = OutputFileDatasetConfig(name="ffmpeg_images")
```
# Define tweakable parameters to pipeline
These parameters can be changed when the pipeline is published and rerun from a REST call.
As part of `ParallelRunStep`, the following two pipeline parameters will be created and can be used to override values:
- `node_count`
- `process_count_per_node`
```
from azureml.pipeline.core.graph import PipelineParameter
# create a parameter for style (one of "candy", "mosaic") to transfer the images to
style_param = PipelineParameter(name="style", default_value="mosaic")
# create a parameter for the number of nodes to use in step no. 2 (style transfer)
nodecount_param = PipelineParameter(name="nodecount", default_value=2)
split_video_step = PythonScriptStep(
name="split video",
script_name="process_video.py",
arguments=["--input_video", orangutan_video.as_mount(),
"--output_audio", ffmpeg_audio,
"--output_images", ffmpeg_images],
compute_target=cpu_cluster,
runconfig=amlcompute_run_config,
source_directory=scripts_folder
)
stitch_video_step = PythonScriptStep(
name="stitch",
script_name="stitch_video.py",
arguments=["--images_dir", processed_images.as_input(),
"--input_audio", ffmpeg_audio.as_input(),
"--output_dir", output_video],
compute_target=cpu_cluster,
runconfig=amlcompute_run_config,
source_directory=scripts_folder
)
```
# Create environment, parallel step run config and parallel run step
```
from azureml.core import Environment
from azureml.core.runconfig import DEFAULT_GPU_IMAGE
parallel_cd = CondaDependencies()
parallel_cd.add_channel("pytorch")
parallel_cd.add_conda_package("pytorch")
parallel_cd.add_conda_package("torchvision")
parallel_cd.add_conda_package("pillow<7") # needed for torchvision==0.4.0
parallel_cd.add_pip_package("azureml-core")
styleenvironment = Environment(name="styleenvironment")
styleenvironment.python.conda_dependencies=parallel_cd
styleenvironment.docker.base_image = DEFAULT_GPU_IMAGE
from azureml.pipeline.core import PipelineParameter
from azureml.pipeline.steps import ParallelRunConfig
parallel_run_config = ParallelRunConfig(
environment=styleenvironment,
entry_script='transform.py',
output_action='summary_only',
mini_batch_size="1",
error_threshold=1,
source_directory=scripts_folder,
compute_target=gpu_cluster,
node_count=nodecount_param,
process_count_per_node=2
)
from azureml.pipeline.steps import ParallelRunStep
from datetime import datetime
parallel_step_name = 'styletransfer-' + datetime.now().strftime('%Y%m%d%H%M')
distributed_style_transfer_step = ParallelRunStep(
name=parallel_step_name,
inputs=[ffmpeg_images], # Input file share/blob container/file dataset
output=processed_images, # Output file share/blob container
arguments=["--style", style_param],
parallel_run_config=parallel_run_config,
allow_reuse=False #[optional - default value True]
)
```
# Run the pipeline
```
pipeline = Pipeline(workspace=ws, steps=[stitch_video_step])
pipeline.validate()
# submit the pipeline and provide values for the PipelineParameters used in the pipeline
pipeline_run = Experiment(ws, 'styletransfer_parallel_mosaic').submit(pipeline)
```
# Monitor pipeline run
The pipeline run status could be checked in Azure Machine Learning portal (https://ml.azure.com). The link to the pipeline run could be retrieved by inspecting the `pipeline_run` object.
```
# This will output information of the pipeline run, including the link to the details page of portal.
pipeline_run
```
### Optional: View detailed logs (streaming)
```
# Wait the run for completion and show output log to console
pipeline_run.wait_for_completion(show_output=True)
```
# Download output video
Downloads the video into the `output_video` folder
```
def download_video(run, target_dir=None):
stitch_run = run.find_step_run(stitch_video_step.name)[0]
port_data = stitch_run.get_details()['outputDatasets'][0]['dataset']
port_data.download(target_dir)
pipeline_run.wait_for_completion()
download_video(pipeline_run, "output_video_mosaic")
```
# Publish pipeline
```
pipeline_name = "style-transfer-batch-inference"
print(pipeline_name)
published_pipeline = pipeline.publish(
name=pipeline_name,
description=pipeline_name)
print("Newly published pipeline id: {}".format(published_pipeline.id))
```
# Get published pipeline
This is another way to get the published pipeline.
```
from azureml.pipeline.core import PublishedPipeline
# You could retrieve all pipelines that are published, or
# just get the published pipeline object that you have the ID for.
# Get all published pipeline objects in the workspace
all_pub_pipelines = PublishedPipeline.list(ws)
# We will iterate through the list of published pipelines and
# use the last ID in the list for Schedule operations:
print("Published pipelines found in the workspace:")
for pub_pipeline in all_pub_pipelines:
print("Name:", pub_pipeline.name,"\tDescription:", pub_pipeline.description, "\tId:", pub_pipeline.id, "\tStatus:", pub_pipeline.status)
if(pub_pipeline.name == pipeline_name):
published_pipeline = pub_pipeline
print("Published pipeline id: {}".format(published_pipeline.id))
```
# Run pipeline through REST calls for other styles
# Get AAD token
```
from azureml.core.authentication import InteractiveLoginAuthentication
import requests
auth = InteractiveLoginAuthentication()
aad_token = auth.get_authentication_header()
```
# Get endpoint URL
```
rest_endpoint = published_pipeline.endpoint
print("Pipeline REST endpoing: {}".format(rest_endpoint))
```
# Send request and monitor
```
experiment_name = 'styletransfer_parallel_candy'
response = requests.post(rest_endpoint,
headers=aad_token,
json={"ExperimentName": experiment_name,
"ParameterAssignments": {"style": "candy", "NodeCount": 3}})
run_id = response.json()["Id"]
from azureml.pipeline.core.run import PipelineRun
published_pipeline_run_candy = PipelineRun(ws.experiments[experiment_name], run_id)
# Show detail information of run
published_pipeline_run_candy
```
# Download output from re-run
```
published_pipeline_run_candy.wait_for_completion()
download_video(published_pipeline_run_candy, target_dir="output_video_candy")
```
# Padding Oracle
- When a decrypted CBC ciphertext ends in an invalid pad, the web server returns a 403 error code (forbidden request). When the CBC padding is valid but the message is malformed, the web server returns a 404 error code (URL not found).
```
http://crypto-class.appspot.com/po?er="your ciphertext here"
```
- The first ciphertext block is a random IV; the decrypted plaintext is ASCII encoded.
- The ciphertext following the `"po?er="` is the hex-encoded AES-CBC encryption (with a random IV) of some secret data about Alice's session.
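As a quick refresher on the CBC identity the attack exploits (a toy sketch with plain XOR, separate from the oracle client below): decryption of a block is `m = D(c) XOR c_prev`, so XOR-ing a guess and a target pad byte into `c_prev` makes the padding valid exactly when the guess matches the real plaintext byte.
```
import os
# Toy illustration of the XOR identity the attack relies on.
# D stands in for the block-cipher output D(c); in the real attack it is
# unknown, but the identity below holds regardless of its value.
D = os.urandom(16)
c_prev = os.urandom(16)
m = bytes(a ^ b for a, b in zip(D, c_prev))   # what CBC decryption yields
guess = m[-1]                                  # suppose the guess is correct
pad = 0x01
# Tamper with the last byte of the previous block: c_prev ^ guess ^ pad
tampered = c_prev[:-1] + bytes([c_prev[-1] ^ guess ^ pad])
m_tampered = bytes(a ^ b for a, b in zip(D, tampered))
assert m_tampered[-1] == pad                   # the oracle would accept this padding
```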
```
import urllib3 as ul
BLOCKSIZE = 16
AZ = [i for i in range(ord('A'), ord('Z') + 1)]
space = [ord(' ')]
az = [i for i in range(ord('a'),ord('z') +1)]
paddings = [i for i in range(1, 17)]
misc1 = [i for i in range(17, 32)] + [i for i in range(33, 65)]
misc2 = [i for i in range(91, 97)] + [i for i in range(123, 128)]
ALL = paddings + space + az + AZ + misc1 + misc2
def xor(x, y, z):
assert len(x) == len(y) == len(z)
a = int.from_bytes(x, "big")
b = int.from_bytes(y, "big")
c = int.from_bytes(z, "big")
r = a ^ b ^ c
return r.to_bytes(len(x), "big")
# Target: "http://domain.com/po?er="
class PaddingOracle:
def __init__(self, target):
self.target = target
self.http = ul.PoolManager()
# ct: string representing hex encoded
# 4 * 16 * 2 == 128 characters in length
    # 4 blocks total: 1 block of IV, 3 blocks of ciphertext
def decrypt4blocks(self, ct, debug=True):
assert len(ct) == 128
assert self.status_query(ct) == 200
iv, c0, c1, c2 = ct[:32], ct[32:64], ct[64:96], ct[96:]
print("Decrypting...")
m0 = self.decrypt_block(c0, iv)
print(" > ", m0)
m1 = self.decrypt_block(c1, c0)
print(" > ", m1)
m2 = self.decrypt_block(c2, c1)
print(" > ", m2)
return m0 + m1 + m2
def decrypt_block(self, c, c0_hex):
m = bytearray(BLOCKSIZE)
c0 = bytes.fromhex(c0_hex)
for i in range(1, BLOCKSIZE + 1):
self.overwrite_and_send_byte(m, c, i, c0)
return m
# Overwrites one byte in message m for each iteration
def overwrite_and_send_byte(self, m, c, i, c0):
n = bytes([i for _ in range(BLOCKSIZE)])
CURRENT = BLOCKSIZE - i
for g in ALL:
m[CURRENT] = g
q = xor(n, m, c0).hex() + c
if self.is_valid(q) is True:
print(chr(g), end="_")
return
raise ValueError("Unable to find byte")
def is_valid(self, q):
r = self.http.request('GET', self.target + q, retries=False)
return r.status != 403
def status_query(self, q):
return self.http.request('GET', self.target + q, retries=False).status
TARGET = 'http://crypto-class.appspot.com/po?er='
CIPHERTEXT = "f20bdba6ff29eed7b046d1df9fb7000058b1ffb4210a580f748b4ac714c001bd4a61044426fb515dad3f21f18aa577c0bdf302936266926ff37dbf7035d5eeb4"
po = PaddingOracle(TARGET)
message = po.decrypt4blocks(CIPHERTEXT)
print(message)
ct1 = "4ca00ff4c898d61e1edbf1800618fb2828a226d160dad07883d04e008a7897ee2e4b7465d5290d0c0e6c6822236e1daafb94ffe0c5da05d9476be028ad7c1d81"
ct2 = "5b68629feb8606f9a6667670b75b38a5b4832d0f26e1ab7da33249de7d4afc48e713ac646ace36e872ad5fb8a512428a6e21364b0c374df45503473c5242a253"
pt1 = "Basic CBC mode encryption needs padding."
pt2 = "Our implementation uses rand. IV"
TARGET = "http://localhost:9000/po?er="
po = PaddingOracle(TARGET)
message1 = po.decrypt4blocks(ct1)
print(message1)
message2 = po.decrypt4blocks(ct2)
print(message2)
```
# Widget Events
## Special events
```
from __future__ import print_function
```
The `Button` is not used to represent a data type. Instead the button widget is used to handle mouse clicks. The `on_click` method of the `Button` can be used to register function to be called when the button is clicked. The doc string of the `on_click` can be seen below.
```
import ipywidgets as widgets
print(widgets.Button.on_click.__doc__)
```
### Example
Since button clicks are stateless, they are transmitted from the front-end to the back-end using custom messages. By using the `on_click` method, a button that prints a message when it has been clicked is shown below. To capture `print`s (or any other kind of output) and ensure it is displayed, be sure to send it to an `Output` widget (or put the information you want to display into an `HTML` widget).
```
from IPython.display import display
button = widgets.Button(description="Click Me!")
output = widgets.Output()
display(button, output)
def on_button_clicked(b):
with output:
print("Button clicked.")
button.on_click(on_button_clicked)
```
## Traitlet events
Widget properties are IPython traitlets and traitlets are eventful. To handle changes, the `observe` method of the widget can be used to register a callback. The doc string for `observe` can be seen below.
```
print(widgets.Widget.observe.__doc__)
```
### Signatures
Mentioned in the doc string, the callback registered must have the signature `handler(change)` where `change` is a dictionary holding the information about the change.
Using this method, an example of how to output an `IntSlider`'s value as it is changed can be seen below.
```
int_range = widgets.IntSlider()
output2 = widgets.Output()
display(int_range, output2)
def on_value_change(change):
with output2:
print(change['new'])
int_range.observe(on_value_change, names='value')
```
## Linking Widgets
Often, you may want to simply link widget attributes together. Synchronization of attributes can be done in a simpler way than by using bare traitlets events.
### Linking traitlets attributes in the kernel
The first method is to use the `link` and `dlink` functions from the `traitlets` module (these two functions are re-exported by the `ipywidgets` module for convenience). This only works if we are interacting with a live kernel.
```
caption = widgets.Label(value='The values of slider1 and slider2 are synchronized')
slider1, slider2 = widgets.IntSlider(description='Slider 1'),\
                   widgets.IntSlider(description='Slider 2')
l = widgets.link((slider1, 'value'), (slider2, 'value'))
display(caption, slider1, slider2)
caption = widgets.Label(value='Changes in source values are reflected in target1')
source, target1 = widgets.IntSlider(description='Source'),\
widgets.IntSlider(description='Target 1')
dl = widgets.dlink((source, 'value'), (target1, 'value'))
display(caption, source, target1)
```
The `widgets.link` and `widgets.dlink` functions return a `Link` object. The link can be broken by calling the `unlink` method.
```
l.unlink()
dl.unlink()
```
### Registering callbacks to trait changes in the kernel
Since attributes of widgets on the Python side are traitlets, you can register handlers to the change events whenever the model gets updates from the front-end.
The handler passed to observe will be called with one change argument. The change object holds at least a `type` key and a `name` key, corresponding respectively to the type of notification and the name of the attribute that triggered the notification.
Other keys may be passed depending on the value of `type`. In the case where type is `change`, we also have the following keys:
- `owner` : the HasTraits instance
- `old` : the old value of the modified trait attribute
- `new` : the new value of the modified trait attribute
- `name` : the name of the modified trait attribute.
```
caption = widgets.Label(value='The slider value is nonnegative')
slider = widgets.IntSlider(min=-5, max=5, value=1, description='Slider')
def handle_slider_change(change):
caption.value = 'The slider value is ' + (
'negative' if change.new < 0 else 'nonnegative'
)
slider.observe(handle_slider_change, names='value')
display(caption, slider)
```
### Linking widgets attributes from the client side
When synchronizing traitlets attributes, you may experience a lag because of the latency due to the roundtrip to the server side. You can also directly link widget attributes in the browser using the link widgets, in either a unidirectional or a bidirectional fashion.
Javascript links persist when embedding widgets in html web pages without a kernel.
```
caption = widgets.Label(value='The values of range1 and range2 are synchronized')
range1, range2 = widgets.IntSlider(description='Range 1'),\
widgets.IntSlider(description='Range 2')
l = widgets.jslink((range1, 'value'), (range2, 'value'))
display(caption, range1, range2)
caption = widgets.Label(value='Changes in source_range values are reflected in target_range1')
source_range, target_range1 = widgets.IntSlider(description='Source range'),\
widgets.IntSlider(description='Target range 1')
dl = widgets.jsdlink((source_range, 'value'), (target_range1, 'value'))
display(caption, source_range, target_range1)
```
Function `widgets.jslink` returns a `Link` widget. The link can be broken by calling the `unlink` method.
```
# l.unlink()
# dl.unlink()
```
### The difference between linking in the kernel and linking in the client
Linking in the kernel means linking via python. If two sliders are linked in the kernel, when one slider is changed the browser sends a message to the kernel (python in this case) updating the changed slider, the link widget in the kernel then propagates the change to the other slider object in the kernel, and then the other slider's kernel object sends a message to the browser to update the other slider's views in the browser. If the kernel is not running (as in a static web page), then the controls will not be linked.
Linking using jslink (i.e., on the browser side) means constructing the link in Javascript. When one slider is changed, Javascript running in the browser changes the value of the other slider in the browser, without needing to communicate with the kernel at all. If the sliders are attached to kernel objects, each slider will update its kernel-side object independently.
To see the difference between the two, go to the [static version of this page in the ipywidgets documentation](http://ipywidgets.readthedocs.io/en/latest/examples/Widget%20Events.html) and try out the sliders near the bottom. The ones linked in the kernel with `link` and `dlink` are no longer linked, but the ones linked in the browser with `jslink` and `jsdlink` are still linked.
## Continuous updates
Some widgets offer a choice with their `continuous_update` attribute between continually updating values or only updating values when a user submits the value (for example, by pressing Enter or navigating away from the control). In the next example, we see the "Delayed" controls only transmit their value after the user finishes dragging the slider or submitting the textbox. The "Continuous" controls continually transmit their values as they are changed. Try typing a two-digit number into each of the text boxes, or dragging each of the sliders, to see the difference.
```
a = widgets.IntSlider(description="Delayed", continuous_update=False)
b = widgets.IntText(description="Delayed", continuous_update=False)
c = widgets.IntSlider(description="Continuous", continuous_update=True)
d = widgets.IntText(description="Continuous", continuous_update=True)
widgets.link((a, 'value'), (b, 'value'))
widgets.link((a, 'value'), (c, 'value'))
widgets.link((a, 'value'), (d, 'value'))
widgets.VBox([a,b,c,d])
```
Sliders, `Text`, and `Textarea` controls default to `continuous_update=True`. `IntText` and other text boxes for entering integer or float numbers default to `continuous_update=False` (since often you'll want to type an entire number before submitting the value by pressing enter or navigating out of the box).
# Hi, Are you in Google Colab?
In Google Colab you can easily run Optimus. If you are not there yet, you may want to go here:
https://colab.research.google.com/github/ironmussa/Optimus/blob/master/examples/10_min_from_spark_to_pandas_with_optimus.ipynb
Install Optimus and all the dependencies.
```
import sys
if 'google.colab' in sys.modules:
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
!wget -q https://archive.apache.org/dist/spark/spark-2.4.1/spark-2.4.1-bin-hadoop2.7.tgz
!tar xf spark-2.4.1-bin-hadoop2.7.tgz
!pip install optimuspyspark
```
## Restart Runtime
Before you continue, please go to the 'Runtime' Menu above, and select 'Restart Runtime (Ctrl + M + .)'.
```
if 'google.colab' in sys.modules:
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["SPARK_HOME"] = "/content/spark-2.4.1-bin-hadoop2.7"
```
## You are done. Enjoy Optimus!
# Hacking Optimus!
To hack on Optimus, we recommend cloning the repo and changing ```repo_path``` relative to this notebook.
```
repo_path=".."
# This will reload the change you make to Optimus in real time
%load_ext autoreload
%autoreload 2
import sys
sys.path.append(repo_path)
```
## Install Optimus
from command line:
`pip install optimuspyspark`
from a notebook you can use:
`!pip install optimuspyspark`
## Import Optimus and start it
```
from optimus import Optimus
op = Optimus(master="local")
```
## Dataframe creation
Create a dataframe by passing a list of values for columns and rows. Unlike pandas, you need to specify the column names.
```
df = op.create.df(
[
"names",
"height(ft)",
"function",
"rank",
"weight(t)",
"japanese name",
"last position",
"attributes"
],
[
("Optim'us", 28.0, "Leader", 10, 4.3, ["Inochi", "Convoy"], "19.442735,-99.201111", [8.5344, 4300.0]),
("bumbl#ebéé ", 17.5, "Espionage", 7, 2.0, ["Bumble", "Goldback"], "10.642707,-71.612534", [5.334, 2000.0]),
("ironhide&", 26.0, "Security", 7, 4.0, ["Roadbuster"], "37.789563,-122.400356", [7.9248, 4000.0]),
("Jazz", 13.0, "First Lieutenant", 8, 1.8, ["Meister"], "33.670666,-117.841553", [3.9624, 1800.0]),
("Megatron", None, "None", None, 5.7, ["Megatron"], None, [None, 5700.0]),
("Metroplex_)^$", 300.0, "Battle Station", 8, None, ["Metroflex"], None, [91.44, None]),
]).h_repartition(1)
df.table()
```
Creating a dataframe by passing a list of tuples specifying the column data type. You can specify the data type as a string or as a Spark DataType. https://spark.apache.org/docs/2.3.1/api/java/org/apache/spark/sql/types/package-summary.html
Also you can use some Optimus predefined types:
* "str" = StringType()
* "int" = IntegerType()
* "float" = FloatType()
* "bool" = BoleanType()
```
df = op.create.df(
[
("names", "str"),
("height", "float"),
("function", "str"),
("rank", "int"),
],
[
("bumbl#ebéé ", 17.5, "Espionage", 7),
("Optim'us", 28.0, "Leader", 10),
("ironhide&", 26.0, "Security", 7),
("Jazz", 13.0, "First Lieutenant", 8),
("Megatron", None, "None", None),
])
df.table()
```
Creating a dataframe and specifying whether each column accepts null values
```
df = op.create.df(
[
("names", "str", True),
("height", "float", True),
("function", "str", True),
("rank", "int", True),
],
[
("bumbl#ebéé ", 17.5, "Espionage", 7),
("Optim'us", 28.0, "Leader", 10),
("ironhide&", 26.0, "Security", 7),
("Jazz", 13.0, "First Lieutenant", 8),
("Megatron", None, "None", None),
])
df.table()
```
Creating a dataframe from a pandas dataframe
```
import pandas as pd
data = [("bumbl#ebéé ", 17.5, "Espionage", 7),
("Optim'us", 28.0, "Leader", 10),
("ironhide&", 26.0, "Security", 7)]
labels = ["names", "height", "function", "rank"]
# Create pandas dataframe
pdf = pd.DataFrame.from_records(data, columns=labels)
df = op.create.df(pdf=pdf)
df.table()
```
## Viewing data
Here is how to view the first 10 rows of a dataframe.
```
df.table(10)
```
## About Spark
Spark and Optimus work differently than pandas or R. If you are not familiar with Spark, we recommend taking the time to take a look at the links below.
### Partitions
Partitions are the way Spark divides the data on your local computer or cluster to better optimize how it will be processed. They can greatly impact Spark performance.
Take 5 minutes to read this article:
https://www.dezyre.com/article/how-data-partitioning-in-spark-helps-achieve-more-parallelism/297
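For example, you can inspect and change how many partitions the dataframe created above is split into (a minimal sketch using the standard Spark API):
```
# Number of partitions the data is currently split into
print(df.rdd.getNumPartitions())
# repartition returns a new dataframe distributed across 8 partitions
df_8 = df.repartition(8)
print(df_8.rdd.getNumPartitions())
```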
### Lazy operations
Lazy evaluation in Spark means that the execution will not start until an action is triggered.
https://stackoverflow.com/questions/38027877/spark-transformation-why-its-lazy-and-what-is-the-advantage
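A minimal illustration with the dataframe above: a transformation such as `rows.select` only builds the execution plan, and nothing runs until an action (for example `count()` or `table()`) is triggered.
```
# Transformation: lazily filters rows, no Spark job runs yet
tall = df.rows.select(df["height"] > 15)
# Action: triggers the actual computation
print(tall.count())
```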
### Immutability
Immutability rules out a big set of potential problems due to updates from multiple threads at once. Immutable data is definitely safe to share across processes.
https://www.quora.com/Why-is-RDD-immutable-in-Spark
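In practice, every Optimus/Spark operation returns a new dataframe and leaves the original untouched (a small sketch):
```
# cols.upper returns a new dataframe; df itself is not modified
df_upper = df.cols.upper("function")
df_upper.table()
df.table()  # the original dataframe keeps its original values
```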
### Spark Architecture
https://jaceklaskowski.gitbooks.io/mastering-apache-spark/spark-architecture.html
## Columns and Rows
Optimus organizes operations around columns and rows. This is a little different from how pandas works, where all operations are centered on the DataFrame class. We think this approach better helps you access and transform data. For a deep dive into this design decision, please read:
https://towardsdatascience.com/announcing-optimus-v2-agile-data-science-workflows-made-easy-c127a12d9e13
Sort by column names
```
df.cols.sort().table()
```
Sort rows by their rank value
```
df.rows.sort("rank").table()
df.describe().table()
```
## Selection
Unlike pandas, Spark DataFrames don't support random row access, so methods like `loc` in pandas are not available.
Spark also doesn't maintain row indexes, so methods like `iloc` are not available.
Select and show a specific column
```
df.cols.select("names").table()
```
Select rows from a dataframe where a condition is met
```
df.rows.select(df["rank"] > 7).table()
```
Select rows that contain specific values
```
df.rows.is_in("rank", [7, 10]).table()
```
Create a unique id for every row.
```
df.rows.create_id().table()
```
Create new columns
```
df.cols.append("Affiliation", "Autobot").table()
```
## Missing Data
```
df.rows.drop_na("*", how='any').table()
```
Filling missing data.
```
df.cols.fill_na("*", "N//A").table()
```
To get the boolean mask indicating where values are NaN.
```
df.cols.is_na("*").table()
```
# Operations
## Stats
```
df.cols.mean("height")
df.cols.mean("*")
```
### Apply
```
def func(value, args):
return value + 1
df.cols.apply("height", func, "float").table()
```
### Histogramming
```
df.cols.count_uniques("*")
```
### String Methods
```
df \
.cols.lower("names") \
.cols.upper("function").table()
```
## Merge
### Concat
Optimus provides an intuitive way to concatenate dataframes by columns or rows.
```
df_new = op.create.df(
[
"class"
],
[
("Autobot"),
("Autobot"),
("Autobot"),
("Autobot"),
("Decepticons"),
]).h_repartition(1)
op.append([df, df_new], "columns").table()
df_new = op.create.df(
[
"names",
"height",
"function",
"rank",
],
[
("Grimlock", 22.9, "Dinobot Commander", 9),
]).h_repartition(1)
op.append([df, df_new], "rows").table()
# Operations like `join` and `group` are handled using Spark directly
df_melt = df.melt(id_vars=["names"], value_vars=["height", "function", "rank"])
df.table()
df_melt.pivot("names", "variable", "value").table()
```
## Plotting
```
df.plot.hist("height", 10)
df.plot.frequency("*", 10)
```
## Getting Data In/Out
```
df.cols.names()
df.to_json()
df.schema
df.table()
op.profiler.run(df, "height", infer=True)
df_csv = op.load.csv("https://raw.githubusercontent.com/ironmussa/Optimus/master/examples/data/foo.csv").limit(5)
df_csv.table()
df_json = op.load.json("https://raw.githubusercontent.com/ironmussa/Optimus/master/examples/data/foo.json").limit(5)
df_json.table()
df_csv.save.csv("test.csv")
df.table()
```
## Enrichment
```
df = op.load.json("https://raw.githubusercontent.com/ironmussa/Optimus/master/examples/data/foo.json")
df.table()
import requests
def func_request(params):
# You can use here whatever header or auth info you need to send.
# For more information see the requests library
url= "https://jsonplaceholder.typicode.com/todos/" + str(params["id"])
return requests.get(url)
def func_response(response):
    # Here you can parse the response
return response["title"]
e = op.enrich(host="localhost", port=27017, db_name="jazz")
e.flush()
df_result = e.run(df, func_request, func_response, calls= 60, period = 60, max_tries = 8)
df_result.table()
```
### Outlier Detection Using Autoencoders - First Version
### Using the whole data
#### Edgar Acuna
#### April 2021
```
import warnings
warnings.filterwarnings('ignore')
import tensorflow as tf
import keras
from keras.models import Model, load_model
from keras.layers import Input, Dense
from keras.callbacks import ModelCheckpoint, TensorBoard
from keras import regularizers
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.model_selection import train_test_split
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
url= "https://academic.uprm.edu/eacuna/diabetes.dat"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
data = pd.read_table(url, names=names)
yd=data['class']
Xd=data.iloc[:,0:8]
from sklearn.preprocessing import StandardScaler
cols_to_norm = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age']
scaler = MinMaxScaler()
data[cols_to_norm] = scaler.fit_transform(data[cols_to_norm])
data.shape
train_x = data.drop(['class'], axis=1) #drop the class column
train_x.info()
train_x = train_x.values #transform to ndarray
train_x
# No of Neurons in each Layer
nb_epoch = 20
batch_size = 50
input_dim = train_x.shape[1] #num of columns, 8
encoding_dim = 4
hidden_dim = int(encoding_dim / 2) # i.e. 2
learning_rate = 1e-7
input_layer = Input(shape=(input_dim, ))
encoder = Dense(encoding_dim, activation="tanh", activity_regularizer=regularizers.l1(learning_rate))(input_layer)
encoder = Dense(hidden_dim, activation="relu")(encoder)
decoder = Dense(hidden_dim, activation='tanh')(encoder)
decoder = Dense(input_dim, activation='relu')(decoder)
autoencoder = Model(inputs=input_layer, outputs=decoder)
autoencoder.summary()
import datetime
autoencoder.compile(optimizer='adam', loss='mse' )
t_ini = datetime.datetime.now()
history = autoencoder.fit(train_x, train_x,
epochs=nb_epoch,
batch_size=batch_size,
shuffle=True,
validation_split=0.1,
verbose=0
)
t_fin = datetime.datetime.now()
print('Time to run the model: {} Sec.'.format((t_fin -
t_ini).total_seconds()))
df_history = pd.DataFrame(history.history)
predictions = autoencoder.predict(train_x)
print(predictions)
train_x.shape
mse = np.mean(np.power(train_x- predictions, 2), axis=1)
df_error = pd.DataFrame({'reconstruction_error': mse, 'Label': yd}, index=yd.index)
df_error.describe()
dfOutliers = df_error.index[df_error.reconstruction_error > .15].tolist()
len(dfOutliers)
print(dfOutliers)
y=df_error['reconstruction_error'].tolist()
x = df_error.index.tolist()
thresh=0.15
plt.plot(x, y, 'ro')
plt.ylabel('reconstruction_error')
plt.xlabel('Index')
plt.title(' Threshold = ' +str(thresh))
plt.plot([0,2000],[thresh,thresh],"g--")
#cleaning the data from outliers
data3=data.drop(dfOutliers,axis=0)
```
### Outlier effect on the LDA Classifier
```
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
ldadis = LinearDiscriminantAnalysis().fit(Xd,yd)
scores = cross_val_score(ldadis, Xd, yd, cv=10)
print("Accuracy using LDA: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
y=data3['class']
X=data3.iloc[:,0:8]
# Perform linear discriminant analysis and compute the accuracy
ldadis = LinearDiscriminantAnalysis().fit(X,y)
scores = cross_val_score(ldadis, X, y, cv=10)
scores
print("Accuracy using LDA after outlier removal: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
```
#### Outlier effect on the KNN classifier
```
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
neigh = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(neigh, Xd, yd, cv=10)
scores
print("Accuracy using k=5 neighbors: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
y=data3['class']
X=data3.iloc[:,0:8]
y1=y.to_numpy()
X1=X.to_numpy()
neigh = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(neigh, X1, y1, cv=10)
scores
print("Accuracy using k=5 neighbors after outlier removal: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
```
# Lecture 2b: Introduction to Qiskit
**By Adam Fattal**
Welcome to the first practical lecture! In this lecture, we will introduce Qiskit, a package developed by IBM Quantum that allows you to simulate and run quantum circuits and much more. This lecture covers only the surface of Qiskit's functionality; for more, check out Qiskit's [documentation](https://qiskit.org/documentation/).
## Importing Qiskit
```
from qiskit import *
import numpy as np
```
## Part 1: Building Circuits
Let's try to build these quantum circuits:
<img src='assets/1.png'>
```
circ1 = QuantumCircuit(1,1)
circ1.h(0)
circ1.draw('mpl')
```
<img src='assets/2.png'>
```
circ2 = QuantumCircuit(2,2)
circ2.h(0)
circ2.z(1)
circ2.draw('mpl')
```
<img src='assets/3.png'>
```
circ3 = QuantumCircuit(2,2)
circ3.h(0)
circ3.cx(0,1)
circ3.measure([0,1],[0,1])
circ3.draw('mpl')
```
## Part 2: Using Quantum Circuits
### Statevectors
```
simulator = Aer.get_backend('statevector_simulator')
result = execute(circ2,backend = simulator).result()
psi = result.get_statevector()
psi
np.linalg.norm(psi)
```
### Getting the Unitary
```
simulator = Aer.get_backend('unitary_simulator')
result = execute(circ3,backend = simulator).result()
U = result.get_unitary()
U@np.array([1,0,0,0])
```
### Getting the Bloch Spheres
```
from qiskit.tools.visualization import plot_bloch_multivector
plot_bloch_multivector(psi)
```
### Simulating Results
```
from qiskit.tools.visualization import plot_histogram
backend = Aer.get_backend('qasm_simulator')
result = execute(circ3,backend, shots = 420).result()
output = result.get_counts()
plot_histogram(output)
```
### Full Example
```
qc = QuantumCircuit(4,4)
qc.h(0)
qc.rx(np.pi/3, 1)
qc.x(1)
qc.y(2)
qc.z(3)
qc.cnot(0,2)
qc.measure([i for i in range(3)], [i for i in range(3)])
qc.draw('mpl')
backend = Aer.get_backend('qasm_simulator')
result = execute(qc,backend, shots = 420).result()
output = result.get_counts()
plot_histogram(output)
```
## Part 3: Running circuits on a real quantum computer
```
#Defining a quantum circuit with 2 qubits and 2 classical bits
phiPlus = QuantumCircuit(2,2)
#Preparing a |Φ+> state
phiPlus.h(0)
phiPlus.cnot(0,1)
phiPlus.measure([0,1],[0,1])
#This is what you type to run on real IBMQ hardware
IBMQ.load_account() #This is how you load your account
provider = IBMQ.get_provider('ibm-q') #This is how you get the ibm-q provider
qcomp = provider.get_backend('ibmq_16_melbourne') #This is how you select the device you want to use
job = execute(phiPlus, backend=qcomp, shots=1024) #This is how you tell the device which circuit to run
from qiskit.tools.monitor import job_monitor
job_monitor(job) #Monitor the job
result = job.result() #Get Results
result
plot_histogram(result.get_counts(phiPlus))
```
## Part 4: Grover's Algorithm Demonstration
```
PI = np.pi
def groverCircuit(target):
target_list = [int(x) for x in str(target)] #Converts the target into a list (e.g '1001' => [1,0,0,1])
n = len(target_list) #Length of target list (i.e nbr of qubits)
counter = [i for i in range(n)] #List containing integers from 0 to num_qubits - 1
#Defining a CnP gate. Note that CnP(PI) = CNZ
def mcp(self, lam, control_qubits, target_qubit):
from qiskit.circuit.library import MCPhaseGate
num_ctrl_qubits = len(control_qubits)
return self.append(MCPhaseGate(lam, num_ctrl_qubits), control_qubits[:] + [target_qubit],
[])
#Sub-circuit 1: Hadamard on all qubits
def hadamards(target):
hadCirc = QuantumCircuit(n,n)
hadCirc.h(counter)
hadCirc.barrier()
return hadCirc
#Sub-circuit 2: Oracle
def oracle(target):
filtered = [counter[i] for i in range(n) if target_list[i]==0] #Filtering the counter list to only the indices where target==0
oracleCirc = QuantumCircuit(n,n)
if filtered != []:
oracleCirc.x(filtered) #In other words, if target only has 1s, do nothing
mcp(oracleCirc, np.pi, [i for i in range(n-1)],n-1)
if filtered != []:
oracleCirc.x(filtered) #Applying X gates to the qubits which represent 0
oracleCirc.barrier()
return oracleCirc
#Sub-circuit 3: Amplifier
def amplification(target):
ampCirc = QuantumCircuit(n,n)
ampCirc.h(counter)
ampCirc.x(counter)
mcp(ampCirc, np.pi, [i for i in range(n-1)],n-1)
ampCirc.x(counter)
ampCirc.h(counter)
ampCirc.barrier()
return ampCirc
    k = round(PI/4 * np.sqrt(2**n) - 0.5) # Ideal number of iterations: k = π/4 * √N - 1/2, where N = 2**n
circuit = hadamards(target)
for i in range(k): #Iterating the oracle and amplification
circuit+=oracle(target)
circuit+= amplification(target)
circuit.measure(counter, counter)
return circuit
from qiskit.tools.visualization import plot_histogram
circuit = groverCircuit('1001')
backend = Aer.get_backend('qasm_simulator')
result = execute(circuit,backend, shots = 420).result()
output = result.get_counts()
circuit.draw('mpl')
plot_histogram(output)
```
## Further Reading:
[1] <a href='https://www.youtube.com/watch?v=a1NZC5rqQD8&list=PLOFEBzvs-Vvp2xg9-POLJhQwtVktlYGbY'>Qiskit Tutorial by Qiskit</a>
[2] <a href='https://qiskit.org/documentation/tutorials/circuits/3_summary_of_quantum_operations.html'>Qiskit Summary of Operations </a>
[3] <a href='https://qiskit.org/textbook/preface.html'>Qiskit Textbook</a>
[4] <a href='https://www.youtube.com/watch?v=yprDIC-9D0k'>Getting Started with Qiskit Demo </a>
# Week 11 - Regression and Classification
In previous weeks we have looked at the steps needed in preparing different types of data for use by machine learning algorithms.
```
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
from sklearn import datasets
diabetes = datasets.load_diabetes()
# Description at http://www4.stat.ncsu.edu/~boos/var.select/diabetes.html
X = diabetes.data
y = diabetes.target
print(X.shape, y.shape)
from sklearn import linear_model
clf = linear_model.LinearRegression()
clf.fit(X, y)
plt.plot(y, clf.predict(X), 'k.')
plt.show()
```
All the different models in scikit-learn follow a consistent structure.
* The class is passed any parameters needed at initialization. In this case none are needed.
* The fit method takes the features and the target as the parameters X and y.
* The predict method takes an array of features and returns the predicted values
These are the basic components with additional methods added when needed. For example, classifiers also have
* A predict_proba method that gives the probability that a sample belongs to each of the classes.
* A predict_log_proba method that gives the log of the probability that a sample belongs to each of the classes.
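As a small illustration of this shared interface (a sketch, not part of the original lecture code), the same fit/predict pattern works for a classifier, which additionally exposes predict_proba:
```
from sklearn import datasets, linear_model

iris = datasets.load_iris()
clf = linear_model.LogisticRegression(max_iter=1000)
clf.fit(iris.data, iris.target)

print(clf.predict(iris.data[:3]))        # predicted class labels
print(clf.predict_proba(iris.data[:3]))  # probability of each class for each sample
```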
## Evaluating models
Before we consider whether we have a good model, or which model to choose, we must first decide on how we will evaluate our models.
### Metrics
As part of our evaluation having a single number with which to compare models can be very useful. Choosing a metric that is as close a representation of our goal as possible enables many models to be automatically compared. This can be important when choosing model parameters or comparing different types of algorithm.
Even if we have a metric we feel is reasonable it can be worthwhile considering in detail the predictions made by any model. Some questions to ask:
* Is the model sufficiently sensitive for our use case?
* Is the model sufficiently specific for our use case?
* Is there any systemic bias?
* Does the model perform equally well over the distribution of features?
* How does the model perform outside the range of the training data?
* Is the model overly dependent on one or two samples in the training dataset?
The metric we decide to use will depend on the type of problem we have (regression or classification) and what aspects of the prediction are most important to us. For example, a decision we might have to make is between:
* A model with intermediate errors for all samples
* A model with low errors for the majority of samples but with a small number of samples that have large errors.
For these two situations in a regression task we might choose mean_squared_error and mean_absolute_error, respectively.
There are lists for [regression metrics](http://scikit-learn.org/stable/modules/model_evaluation.html#regression-metrics) and [classification metrics](http://scikit-learn.org/stable/modules/model_evaluation.html#classification-metrics).
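A toy example of that trade-off (not from the original notebook): the two error patterns below have the same mean absolute error, but the pattern with a single large error is penalized far more heavily by the mean squared error.
```
import numpy as np
from sklearn import metrics

y_true = np.zeros(10)
y_spread = np.full(10, 2.0)               # every sample off by 2
y_outlier = np.array([20.0] + [0.0] * 9)  # one sample off by 20, the rest perfect

for name, y_pred in [('spread', y_spread), ('outlier', y_outlier)]:
    print(name,
          metrics.mean_absolute_error(y_true, y_pred),
          metrics.mean_squared_error(y_true, y_pred))
```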
We can apply the mean_squared_error metric to the linear regression model on the diabetes dataset:
```
diabetes = datasets.load_diabetes()
X = diabetes.data
y = diabetes.target
clf = linear_model.LinearRegression()
clf.fit(X, y)
plt.plot(y, clf.predict(X), 'k.')
plt.show()
from sklearn import metrics
metrics.mean_squared_error(y, clf.predict(X))
```
Although this single number might seem unimpressive, metrics are a key component for model evaluation. As a simple example, we can perform a permutation test to determine whether we might see this performance by chance.
```
diabetes = datasets.load_diabetes()
X = diabetes.data
y = diabetes.target
clf = linear_model.LinearRegression()
clf.fit(X, y)
error = metrics.mean_squared_error(y, clf.predict(X))
rounds = 1000
np.random.seed(0)
errors = []
for i in range(rounds):
y_shuffle = y.copy()
np.random.shuffle(y_shuffle)
clf_shuffle = linear_model.LinearRegression()
clf_shuffle.fit(X, y_shuffle)
errors.append(metrics.mean_squared_error(y_shuffle, clf_shuffle.predict(X)))
better_models_by_chance = len([i for i in errors if i <= error])
if better_models_by_chance > 0:
print('Probability of observing a mean_squared_error of {0} by chance is {1}'.format(error,
better_models_by_chance / rounds))
else:
print('Probability of observing a mean_squared_error of {0} by chance is <{1}'.format(error,
1 / rounds))
```
### Training, validation, and test datasets
When evaluating different models the approach taken above is not going to work. Particularly for models with high variance, that overfit the training data, we will get very good performance on the training data but perform no better than chance on new data.
```
from sklearn import tree
diabetes = datasets.load_diabetes()
X = diabetes.data
y = diabetes.target
clf = tree.DecisionTreeRegressor()
clf.fit(X, y)
plt.plot(y, clf.predict(X), 'k.')
plt.show()
metrics.mean_squared_error(y, clf.predict(X))
from sklearn import neighbors
diabetes = datasets.load_diabetes()
X = diabetes.data
y = diabetes.target
clf = neighbors.KNeighborsRegressor(n_neighbors=1)
clf.fit(X, y)
plt.plot(y, clf.predict(X), 'k.')
plt.show()
metrics.mean_squared_error(y, clf.predict(X))
```
Both these models appear to give perfect solutions but all they do is map our test samples back to the training samples and return the associated value.
To understand how our model truly performs we need to evaluate the performance on previously unseen samples. The general approach is to divide a dataset into training, validation and test datasets. Each model is trained on the training dataset. Multiple models can then be compared by evaluating the model against the validation dataset. There is still the potential of choosing a model that performs well on the validation dataset by chance so a final check is made against a test dataset.
This unfortunately means that part of our, often expensively gathered, data can't be used to train our model. Although it is important to leave out a test dataset an alternative approach can be used for the validation dataset. Rather than just building one model we can build multiple models, each time leaving out a different validation dataset. Our validation score is then the average across each of the models. This is known as cross-validation.
Scikit-learn provides classes to support cross-validation but a simple solution can also be implemented directly. Below we will separate out a test dataset to evaluate the nearest neighbor model.
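For reference, here is a minimal sketch of the built-in helper, assuming a scikit-learn version that provides sklearn.model_selection (older releases expose the same helper in sklearn.cross_validation) and reusing the diabetes X and y loaded above; the manual split used in this notebook follows below.
```
from sklearn import model_selection, neighbors

knn = neighbors.KNeighborsRegressor(n_neighbors=1)
scores = model_selection.cross_val_score(knn, X, y,
                                         scoring='neg_mean_squared_error', cv=5)
print(-scores)        # mean squared error on each of the 5 held-out folds
print(-scores.mean())
```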
```
from sklearn import neighbors
diabetes = datasets.load_diabetes()
X = diabetes.data
y = diabetes.target
np.random.seed(0)
split = np.random.random(y.shape) > 0.3
X_train = X[split]
y_train = y[split]
X_test = X[np.logical_not(split)]
y_test = y[np.logical_not(split)]
print(X_train.shape, X_test.shape)
clf = neighbors.KNeighborsRegressor(1)
clf.fit(X_train, y_train)
plt.plot(y_test, clf.predict(X_test), 'k.')
plt.show()
metrics.mean_squared_error(y_test, clf.predict(X_test))
diabetes = datasets.load_diabetes()
X = diabetes.data
y = diabetes.target
np.random.seed(0)
split = np.random.random(y.shape) > 0.3
X_train = X[split]
y_train = y[split]
X_test = X[np.logical_not(split)]
y_test = y[np.logical_not(split)]
print(X_train.shape, X_test.shape)
clf = linear_model.LinearRegression()
clf.fit(X_train, y_train)
plt.plot(y_test, clf.predict(X_test), 'k.')
plt.show()
metrics.mean_squared_error(y_test, clf.predict(X_test))
```
## Model types
Scikit-learn includes a variety of [different models](http://scikit-learn.org/stable/supervised_learning.html). The most commonly used algorithms probably include the following:
* Regression
* Support Vector Machines
* Nearest neighbors
* Decision trees
* Ensembles & boosting
### Regression
We have already seen several examples of regression. The basic form is:
$$f(X) = \beta_{0} + \sum_{j=1}^p X_j\beta_j$$
Each feature is multiplied by a coefficient and then the sum is returned. For classification, this value is then transformed to limit it to the range 0 to 1.
### Support Vector Machines
Support vector machines attempt to project samples into a higher dimensional space such that they can be divided by a hyperplane. A good explanation can be found in [this article](http://noble.gs.washington.edu/papers/noble_what.html).
### Nearest neighbors
Nearest neighbor methods identify a number of samples from the training set that are close to the new sample and then return the average or most common value depending on the task.
### Decision trees
Decision trees attempt to predict the value of a new sample by learning simple rules from the training samples.
### Ensembles & boosting
Ensembles are combinations of other models. Combining different models can improve performance by boosting generalizability. An average or most common value from the models is returned.
Boosting builds one model and then attempts to reduce the errors with the next model. At each stage the bias in the model is reduced. In this way many weak predictors can be combined into one much more powerful predictor.
I often begin with an ensemble or boosting approach as they typically give very good performance without needing to be carefully optimized. Many of the other algorithms are sensitive to their parameters.
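A minimal sketch of both approaches on the diabetes train/test split created above (illustrative only; parameters are essentially left at their defaults):
```
from sklearn import ensemble, metrics

for model in (ensemble.RandomForestRegressor(n_estimators=100, random_state=0),
              ensemble.GradientBoostingRegressor(random_state=0)):
    model.fit(X_train, y_train)
    print(type(model).__name__,
          metrics.mean_squared_error(y_test, model.predict(X_test)))
```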
## Parameter selection
Many of the models require several different parameters to be specified. Their performance is typically heavily influenced by these parameters and choosing the best values is vital in developing the best model.
Some models have alternative implementations that handle parameter selection in an efficient way.
```
from sklearn import datasets
diabetes = datasets.load_diabetes()
# Description at http://www4.stat.ncsu.edu/~boos/var.select/diabetes.html
X = diabetes.data
y = diabetes.target
print(X.shape, y.shape)
from sklearn import linear_model
clf = linear_model.LassoCV(cv=20)
clf.fit(X, y)
print('Alpha chosen was ', clf.alpha_)
plt.plot(y, clf.predict(X), 'k.')
```
There is an expanded example in [the documentation](http://scikit-learn.org/stable/auto_examples/linear_model/plot_lasso_model_selection.html#example-linear-model-plot-lasso-model-selection-py).
There are also general classes to handle parameter selection for situations when dedicated classes are not available. As we will often have parameters in preprocessing steps these general classes will be used much more often.
```
from sklearn import grid_search
from sklearn import neighbors
diabetes = datasets.load_diabetes()
X = diabetes.data
y = diabetes.target
np.random.seed(0)
split = np.random.random(y.shape) > 0.3
X_train = X[split]
y_train = y[split]
X_test = X[np.logical_not(split)]
y_test = y[np.logical_not(split)]
print(X_train.shape, X_test.shape)
knn = neighbors.KNeighborsRegressor()
parameters = {'n_neighbors':[1,2,3,4,5,6,7,8,9,10]}
clf = grid_search.GridSearchCV(knn, parameters)
clf.fit(X_train, y_train)
plt.plot(y_test, clf.predict(X_test), 'k.')
plt.show()
print(metrics.mean_squared_error(y_test, clf.predict(X_test)))
clf.get_params()
```
## Exercises
1. Load the handwritten digits dataset and choose an appropriate metric.
2. Divide the data into a training and test dataset.
3. Build a RandomForestClassifier on the training dataset, using cross-validation to evaluate performance.
4. Choose another classification algorithm and apply it to the digits dataset.
5. Use grid search to find the optimal parameters for the chosen algorithm.
6. Comparing the true values with the predictions from the best model, identify the numbers that are most commonly confused.
# Leverage
### Stupidity or genius?
Updated 2020-August-28.
* This notebook looks at what the last 92 years of daily S&P 500 data has to say about the now well-known daily-rebalanced (2x) leverage strategy.
* Automatic reinvestment of dividends is assumed.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
```
### Constants
```
# Number of trading days in a year
N_YEAR = 252
# 2x dividends as a fraction of S&P 500 dividends,
# assumed from the current ratio of SSO to SPY dividends
DIV2 = 0.18
# Explored leverage space (0% leverage to 100% leverage)
lev = np.linspace(0, 1, 41)
```
### Load Data
```
# S&P 500 daily - from Yahoo Finance
df = pd.read_csv('../data/^GSPC.csv', index_col=0, parse_dates=[0])
# S&P 500 annual average dividend - from Quandl
dfi = pd.read_csv('../data/MULTPL-SP500_DIV_YIELD_YEAR-annual.csv',
index_col=0, parse_dates=[0])
dividend_year = dict(zip(dfi.index.year, dfi.Value.to_numpy()))
df['DividendYear'] = df.index.map(lambda x: dividend_year[x.year]) / 100
df
```
### Create Daily Leverage
```
dl = (df.Close / df.Close.shift()).iloc[1:]
div = df.DividendYear.iloc[1:]
```
Each entry of `dl` is the end of day multiple of the previous trading day's closing price, such that 1.05 would indicate a 5% increase.
```
# How many trading days in a year, i.e., how long to rebalance?
# We will settle on the standard 252 trading days
dl.groupby(dl.index.year).count().value_counts()
# Long-term accuracy is good, as expected
assert np.round(np.product(dl), 5) == np.round(df.Close[-1] / df.Close[0], 5)
dl2 = 2*(dl-1) + 1
```
## All n-Year periods since 1927
We assume n = 10 and annual (252 trading days) rebalancing of leverage percentages.
#### Constants
```
num_years = 10
n_period = num_years * N_YEAR
len_chunk = n_period
len_sep = 1
n_split = N_YEAR
rebalance_idxs = np.arange(n_split, n_period, n_split)
```
#### Get the index architecture
```
assert dl.size == dl2.size
%%time
n_arrays = np.int(np.ceil((dl.size - len_chunk + 1) / len_sep))
rows = np.array((np.arange(n_arrays).reshape(n_arrays, -1) + np.tile(
np.zeros(len_chunk), n_arrays).reshape(n_arrays, -1)), dtype=np.intp)
columns = np.array(((len_sep*np.arange(0, n_arrays)).reshape(n_arrays, -1) + np.tile(
np.arange(0, len_chunk), n_arrays).reshape(n_arrays, -1)), dtype=np.intp)
n_arrays
```
#### Get the start dates
```
start_dates = dl.index[:n_arrays:len_sep]
```
#### Get the periods
```
def get_periods(array):
return np.tile(array, n_arrays).reshape(n_arrays, -1)[rows, columns]
%%time
dlm = get_periods(dl.to_numpy())
%%time
dlm2 = get_periods(dl2.to_numpy())
```
#### Combine with Dividend Data
```
%%time
divm = get_periods(div.to_numpy())
print(dlm.shape)
assert dlm.shape == dlm2.shape == divm.shape
assert dlm.shape[0] == n_arrays
divmsplit = np.array(np.hsplit(divm, rebalance_idxs)).T
divmsplit = np.average(divmsplit, axis=0)
divmsplit2 = divmsplit * DIV2
```
#### Get returns from each year
```
%%time
dlmsplit = np.array(np.hsplit(dlm, rebalance_idxs)).T
dlmsplit = np.product(dlmsplit, axis=0)
dlmsplit += divmsplit
%%time
dlmsplit2 = np.array(np.hsplit(dlm2, rebalance_idxs)).T
dlmsplit2 = np.product(dlmsplit2, axis=0)
dlmsplit2 += divmsplit2
```
#### Aggregate the results over the n-years with varying leverage rates
```
agg2 = (1-lev).reshape(-1, 1, 1)*dlmsplit + lev.reshape(-1, 1, 1)*dlmsplit2
results2 = np.product(agg2.T, axis=0)
print(results2.shape)
```
#### Get results relative to baseline (S&P 500)
```
relative2 = results2 / results2[:,0].reshape(n_arrays, -1)
```
#### Plot many leverage curves
```
%%time
plt.figure(figsize=(12, 8))
for i in range(0, n_arrays, 5):
plt.plot(lev, results2[i], alpha=0.005, color='#1f77b4')
plt.yscale('log')
plt.xticks(lev[::4], ['{:.0%}'.format(p) for p in lev[::4]])
plt.xlim(0, 1)
plt.xlabel('Percent Leveraged')
plt.title('Return on Investment for 20% of all {}-Year Periods from Jan 1928 to Aug 2020, with Annual\n\
Rebalancing of Leverage Rates, and Assumed Reinvestment of Dividends.'.format(num_years))
plt.tight_layout();
plt.savefig('plots/leverage-2x-10yr-many_lev_curves.png', dpi=300);
```
### Plotting leverage curves by percentile
```
quantiles = np.linspace(0, 1, 101, endpoint=True)
results2q = np.quantile(results2, quantiles, axis=0)
scheme = sns.color_palette('viridis', quantiles.size)
plt.figure(figsize=(12, 8))
for i, quant in enumerate(quantiles):
results2q[i]
color = scheme[i]
label = None
if quant == 0.5:
color = 'r'
label = 'Median'
plt.plot(lev, results2q[i],
color=color, label=label, linewidth=2)
plt.yscale('log')
plt.xticks(lev[::4], ['{:.0%}'.format(p) for p in lev[::4]])
plt.axhline(y=1, color='k')
plt.xlim(0, 0.8)
plt.ylim(.08, 20)
plt.xlabel('Percent Leveraged')
plt.grid(alpha=0.6, which='both')
plt.title('Return on Investment for all {}-Year Periods from Jan 1928 to Aug 2020, with Annual\n\
Rebalancing of Leverage Rates, and Assumed Reinvestment of Dividends.\n\
Each line represents a percentile (0%, 1%,..., 99%, 100%). Median is in Red.'.format(num_years))
plt.tight_layout();
plt.savefig('plots/leverage-2x-10yr-percentiles.png', dpi=300);
relative2q = np.quantile(relative2, quantiles, axis=0)
plt.figure(figsize=(12, 8))
for i, quant in enumerate(quantiles):
relative2q[i]
color = scheme[i]
label = None
if quant == 0.5:
color = 'r'
label = 'Median'
plt.plot(lev, relative2q[i],
color=color, label=label, linewidth=2)
plt.yscale('log')
plt.xticks(lev[::4], ['{:.0%}'.format(p) for p in lev[::4]])
plt.axhline(y=1, color='k')
plt.xlim(0, 0.8)
plt.ylim(.1, 5)
plt.xlabel('Percent Leveraged')
plt.grid(alpha=0.6, which='both')
plt.title('Relative Return on Investment for all {}-Year Periods from Jan 1928 to Aug 2020, with Annual\n\
Rebalancing of Leverage Rates, and Assumed Reinvestment of Dividends.\n\
Each line represents a percentile (0%, 1%,..., 99%, 100%). Median is in Red.'.format(num_years))
plt.tight_layout();
plt.savefig('plots/leverage-2x-10yr-relative-percentiles.png', dpi=300);
```
#### Limited quantiles
```
quantiles2 = np.array([0.05, 0.15, 0.25, 0.4, 0.6, 0.75, 0.85, 0.95])[::-1]
scheme2 = sns.color_palette('viridis', quantiles2.size)[::-1]
fig, ax = plt.subplots(4, 2, figsize=(10, 9))
for i, quant in enumerate(quantiles2):
cur_ax = ax.ravel()[i]
q_array = np.quantile(results2, quant, axis=0)
color = scheme2[i]
cur_ax.plot(lev, q_array,
color=color, label='{:.2%}'.format(quant), linewidth=2)
cur_ax.set_xticks(lev[::4])
cur_ax.set_xticklabels(['{:.0%}'.format(p) for p in lev[::4]])
cur_ax.set_xlim(0, 1)
cur_ax.grid(alpha=0.4)
cur_ax.set_xlabel('Percent Leveraged')
cur_ax.legend()
fig.suptitle('Return on Investment for all {}-Year Periods from Jan 1928 to Aug 2020, with Annual\n\
Rebalancing of Leverage Rates, and Assumed Reinvestment of Dividends.'.format(num_years))
plt.savefig('plots/leverage-2x-10yr-limited_percentiles.png', dpi=300);
fig, ax = plt.subplots(4, 2, figsize=(10, 9))
for i, quant in enumerate(quantiles2):
cur_ax = ax.ravel()[i]
q_array = np.quantile(relative2, quant, axis=0)
color = scheme2[i]
cur_ax.plot(lev, q_array,
color=color, label='{:.2%}'.format(quant), linewidth=2)
cur_ax.set_xticks(lev[::4])
cur_ax.set_xticklabels(['{:.0%}'.format(p) for p in lev[::4]])
cur_ax.set_xlim(0, 1)
cur_ax.grid(alpha=0.4)
cur_ax.set_xlabel('Percent Leveraged')
cur_ax.legend()
fig.suptitle('Relative Return on Investment for all {}-Year Periods from Jan 1928 to Aug 2020, with Annual\n\
Rebalancing of Leverage Rates, and Assumed Reinvestment of Dividends.'.format(num_years))
plt.savefig('plots/leverage-2x-10yr-relative-limited_percentiles.png', dpi=300);
plt.figure(figsize=(6.4, 4.8))
q = 0.5
q_array = np.quantile(results2, q, axis=0)
plt.plot(lev, q_array, color='r', linewidth=2)
plt.xticks(lev[::4], ['{:.0%}'.format(p) for p in lev[::4]])
plt.xlim(0, 1)
plt.xlabel('Percent Leveraged')
plt.grid(alpha=0.4)
plt.title('Median Return on Investment for all {}-Year Periods\n\
from Jan 1928 to Aug 2020, with Annual\n\
Rebalancing of Leverage Rates, \n\
and Assumed Reinvestment of Dividends.'.format(num_years));
plt.tight_layout();
plt.savefig('plots/leverage-2x-10yr-median.png', dpi=300);
plt.figure(figsize=(6.4, 4.8))
q = 0.5
q_array = np.quantile(relative2, q, axis=0)
plt.plot(lev, q_array, color='r', linewidth=2)
plt.xticks(lev[::4], ['{:.0%}'.format(p) for p in lev[::4]])
plt.xlim(0, 1)
plt.xlabel('Percent Leveraged')
plt.grid(alpha=0.4)
plt.title('Median Relative Return on Investment for all {}-Year Periods\n\
from Jan 1928 to Aug 2020, with Annual\n\
Rebalancing of Leverage Rates, \n\
and Assumed Reinvestment of Dividends.'.format(num_years));
plt.tight_layout();
plt.savefig('plots/leverage-2x-relative-10yr-median.png', dpi=300);
plt.figure(figsize=(6.4, 4.8))
plt.scatter(quantiles, lev[np.argmax(results2q, axis=1)])
plt.yticks(lev[::4], ['{:.0%}'.format(p) for p in lev[::4]])
plt.ylabel('Percent Leveraged')
plt.xlim(0.2, 0.55)
plt.xlabel('Percentile')
plt.grid(alpha=.4)
plt.ylim(0, 1)
plt.title('Optimal Leverage Rate as a Function of Percentile for all {}-Year Periods\n\
from Jan 1928 to Aug 2020, with Annual Rebalancing of Leverage Rates, \n\
and Assumed Reinvestment of Dividends.'.format(num_years))
plt.tight_layout();
plt.savefig('plots/leverage-2x-10yr-optimal_leverage.png', dpi=300);
```
### Compare histograms of 0% and 50% leveraged.
```
idx50 = 20
lev[idx50]
print(np.quantile(results2[:,0], 0.5), np.quantile(results2[:,idx50], 0.5))
plt.hist(results2[:,0], bins=40, alpha=0.2, density=True)
plt.hist(results2[:,idx50],
bins=40, alpha=0.2, density=True)
plt.xlim(0, 20);
```
### What were some of the craziest n-year returns, and when were they?
#### Maximum returns
```
maximums = np.unique(np.argmax(results2, axis=0))
# 0% leverage, 100% leverage
results2[maximums][:,0], results2[maximums][:,-1]
start_dates[maximums]
```
#### Minimum returns
```
minimums = np.unique(np.argmin(results2, axis=0))
# 0% leverage, 100% leverage
results2[minimums][:,0], results2[minimums][:,-1]
start_dates[minimums]
```
```
import keras
keras.__version__
```
# Training a binary classifier on IMDB reviews
Binary (two-class) classification is probably the most widely applied kind of machine-learning problem. It applies whenever the problem at hand has only two possible outcomes. In this example, we will classify movie reviews as "positive" or "negative" based on the text content of IMDB reviews.
## About the IMDB Dataset
The IMDB Dataset contains the text of 50,000 reviews from the Internet Movie Database. It is split into 25,000 training samples and 25,000 test samples, and each set contains 50% negative and 50% positive reviews.
We can load the prepared dataset directly through the Keras Datasets library. The data has already been preprocessed: each review is encoded, in word order, as a sequence of integers, where each integer represents a specific word in a dictionary, as shown below:
```
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
```
num_words=10000 means we keep only the 10,000 most frequently occurring words. In the labels, 0 indicates a negative review and 1 indicates a positive review.
```
max([max(sequence) for sequence in train_data])
```
We can also use the word-index dictionary to decode the data back into the review text.
```
# word_index is a dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# We reverse it, mapping integer indices to words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# We decode the review; note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
# show
decoded_review
```
## Preparing the data
We cannot feed the integer word indices directly into the network for training, so the data needs to be transformed. Since we only use the 10,000 most common words, each input can be converted into a 10,000-dimensional one-hot encoding; for example, [3, 18] becomes an all-zero array where only indices 3 and 18 are 1. We will train on tensors in this format; the conversion is as follows:
```
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
# Create an all-zero matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1. # set specific indices of results[i] to 1s
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
```
Also convert the labels to NumPy float arrays:
```
# Our vectorized labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
```
## Building the network
Here we use a three-layer network: two fully connected layers with 16 units each and the relu activation, followed by a single output unit (representing a positive or negative review) with a sigmoid activation for the final output. From bottom to top, the architecture looks like this:

```
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
```
Compile the network with the rmsprop optimizer and the binary_crossentropy loss function:
```
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
```
## Training the model
First, set aside 10,000 of the 25,000 training samples as validation data, so we can monitor how the accuracy changes during training:
```
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
```
Start training the model:
```
history = model.fit(partial_x_train,
partial_y_train,
epochs=100,
batch_size=512,
validation_data=(x_val, y_val))
```
The training process stores its metrics in history; analyzing this information afterwards helps us tune the parameters.
```
history_dict = history.history
history_dict.keys()
```
The call above shows what information the training history contains. Next, we plot this information as charts:
```
#@title
import matplotlib.pyplot as plt
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc_values = history_dict['accuracy']
val_acc_values = history_dict['val_accuracy']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
```
From the curves above we can see that, with the current network architecture, the best result is actually reached around the 3rd epoch; training beyond that point causes overfitting. In this case, setting the number of epochs to 3 or 4 is the way to obtain the best model.
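As a quick sketch (not part of the original notebook), we could therefore retrain a fresh network for only 4 epochs on the full training data and evaluate it on the test set:
```
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])

# Retrain on all 25,000 training reviews for 4 epochs, then evaluate on the test set
model.fit(x_train, y_train, epochs=4, batch_size=512)
results = model.evaluate(x_test, y_test)  # [test loss, test accuracy]
print(results)
```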
```
%matplotlib inline
```
# Topic extraction with Non-negative Matrix Factorization and Latent Dirichlet Allocation
This is an example of applying :class:`sklearn.decomposition.NMF` and
:class:`sklearn.decomposition.LatentDirichletAllocation` on a corpus
of documents and extracting additive models of the topic structure of the
corpus. The output is a list of topics, each represented as a list of
terms (weights are not shown).
Non-negative Matrix Factorization is applied with two different objective
functions: the Frobenius norm, and the generalized Kullback-Leibler divergence.
The latter is equivalent to Probabilistic Latent Semantic Indexing.
The default parameters (n_samples / n_features / n_components) should make
the example runnable in a couple of tens of seconds. You can try to
increase the dimensions of the problem, but be aware that the time
complexity is polynomial in NMF. In LDA, the time complexity is
proportional to (n_samples * iterations).
```
# Author: Olivier Grisel <olivier.grisel@ensta.org>
# Lars Buitinck
# Chyi-Kwei Yau <chyikwei.yau@gmail.com>
# License: BSD 3 clause
from __future__ import print_function
from time import time
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import NMF, LatentDirichletAllocation
from sklearn.datasets import fetch_20newsgroups
n_samples = 2000
n_features = 1000
n_components = 10
n_top_words = 20
def print_top_words(model, feature_names, n_top_words):
for topic_idx, topic in enumerate(model.components_):
message = "Topic #%d: " % topic_idx
message += " ".join([feature_names[i]
for i in topic.argsort()[:-n_top_words - 1:-1]])
print(message)
print()
# Load the 20 newsgroups dataset and vectorize it. We use a few heuristics
# to filter out useless terms early on: the posts are stripped of headers,
# footers and quoted replies, and common English words, words occurring in
# only one document or in at least 95% of the documents are removed.
print("Loading dataset...")
t0 = time()
dataset = fetch_20newsgroups(shuffle=True, random_state=1,
remove=('headers', 'footers', 'quotes'))
data_samples = dataset.data[:n_samples]
print("done in %0.3fs." % (time() - t0))
# Use tf-idf features for NMF.
print("Extracting tf-idf features for NMF...")
tfidf_vectorizer = TfidfVectorizer(max_df=0.95, min_df=2,
max_features=n_features,
stop_words='english')
t0 = time()
tfidf = tfidf_vectorizer.fit_transform(data_samples)
print("done in %0.3fs." % (time() - t0))
# Use tf (raw term count) features for LDA.
print("Extracting tf features for LDA...")
tf_vectorizer = CountVectorizer(max_df=0.95, min_df=2,
max_features=n_features,
stop_words='english')
t0 = time()
tf = tf_vectorizer.fit_transform(data_samples)
print("done in %0.3fs." % (time() - t0))
print()
# Fit the NMF model
print("Fitting the NMF model (Frobenius norm) with tf-idf features, "
"n_samples=%d and n_features=%d..."
% (n_samples, n_features))
t0 = time()
nmf = NMF(n_components=n_components, random_state=1,
alpha=.1, l1_ratio=.5).fit(tfidf)
print("done in %0.3fs." % (time() - t0))
print("\nTopics in NMF model (Frobenius norm):")
tfidf_feature_names = tfidf_vectorizer.get_feature_names()
print_top_words(nmf, tfidf_feature_names, n_top_words)
# Fit the NMF model
print("Fitting the NMF model (generalized Kullback-Leibler divergence) with "
"tf-idf features, n_samples=%d and n_features=%d..."
% (n_samples, n_features))
t0 = time()
nmf = NMF(n_components=n_components, random_state=1,
beta_loss='kullback-leibler', solver='mu', max_iter=1000, alpha=.1,
l1_ratio=.5).fit(tfidf)
print("done in %0.3fs." % (time() - t0))
print("\nTopics in NMF model (generalized Kullback-Leibler divergence):")
tfidf_feature_names = tfidf_vectorizer.get_feature_names()
print_top_words(nmf, tfidf_feature_names, n_top_words)
print("Fitting LDA models with tf features, "
"n_samples=%d and n_features=%d..."
% (n_samples, n_features))
lda = LatentDirichletAllocation(n_components=n_components, max_iter=5,
learning_method='online',
learning_offset=50.,
random_state=0)
t0 = time()
lda.fit(tf)
print("done in %0.3fs." % (time() - t0))
print("\nTopics in LDA model:")
tf_feature_names = tf_vectorizer.get_feature_names()
print_top_words(lda, tf_feature_names, n_top_words)
```
# Launching an MNIST Training Job with Model Parallelism Using SageMaker Distributed Model Parallel on Amazon SageMaker
SageMaker Distributed Model Parallel (SMP) is a model-parallelism library for training large deep learning models that were previously hard to train due to GPU memory limits. SageMaker Distributed Model Parallel automatically and efficiently partitions a model across multiple GPUs and instances and coordinates model training, so you can build larger models with more parameters and improve prediction accuracy.
In this notebook, you will configure SageMaker Distributed Model Parallel to train a model using the example PyTorch training script `utils/pt_mnist.py` and the [Amazon SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable/overview.html#train-a-model-with-the-sagemaker-python-sdk).
### Additional resources
If you are new to Amazon SageMaker, the following resources may help when training PyTorch models with SMP on SageMaker:
* To learn more about the SageMaker model parallelism library, see [Model Parallel Distributed Training with SageMaker Distributed](http://docs.aws.amazon.com/sagemaker/latest/dg/model-parallel.html).
* To learn more about using the SageMaker Python SDK with PyTorch, see [Using PyTorch with the SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable/frameworks/pytorch/using_pytorch.html).
* To learn more about launching a training job on Amazon SageMaker with your own training image, see [Use Your Own Training Algorithms](https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-training-algo.html).
## Initialize Amazon SageMaker
Run the following cell to initialize the notebook instance and get the SageMaker execution role used to run this notebook.
```
pip install sagemaker-experiments
pip install sagemaker --upgrade
%%time
import sagemaker
from sagemaker import get_execution_role
from sagemaker.pytorch import PyTorch
from smexperiments.experiment import Experiment
from smexperiments.trial import Trial
import boto3
from time import gmtime, strftime
role = get_execution_role() # provide a pre-existing role ARN as an alternative to creating a new role
print(f'SageMaker Execution Role:{role}')
session = boto3.session.Session()
```
## Prepare the training script
Run the following cell to see the example training script used in this demo. It is a PyTorch 1.6 training script that uses the MNIST dataset.
Notice that the script contains `SMP`-specific operations and decorators that configure model-parallel training. See the comments in the training script for more details on the SMP functions and types used in the script.
```
%%writefile utils/pt_mnist.py
# Future
from __future__ import print_function
# Standard Library
import os, time
import argparse
import math
import random
# Third Party
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.cuda.amp import autocast
from torch.optim.lr_scheduler import StepLR
from torchnet.dataset import SplitDataset
from torchvision import datasets, transforms
# First Party
import smdistributed.modelparallel.torch as smp
# SM Distributed: import scaler from smdistributed.modelparallel.torch.amp, instead of torch.cuda.amp
# Make cudnn deterministic in order to get the same losses across runs.
# The following two lines can be removed if they cause a performance impact.
# For more details, see:
# https://pytorch.org/docs/stable/notes/randomness.html#cudnn
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
def aws_s3_sync(source, destination):
"""aws s3 sync in quiet mode and time profile"""
import time, subprocess
cmd = ["aws", "s3", "sync", "--quiet", source, destination]
print(f"Syncing files from {source} to {destination}")
start_time = time.time()
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p.wait()
end_time = time.time()
print("Time Taken to Sync: ", (end_time-start_time))
return
def sync_local_checkpoints_to_s3(local_path="/opt/ml/checkpoints", s3_path=os.path.dirname(os.path.dirname(os.getenv('SM_MODULE_DIR', '')))+'/checkpoints'):
""" sample function to sync checkpoints from local path to s3 """
import boto3, botocore
#check if local path exists
if not os.path.exists(local_path):
raise RuntimeError("Provided local path {local_path} does not exist. Please check")
#check if s3 bucket exists
s3 = boto3.resource('s3')
if 's3://' not in s3_path:
raise ValueError("Provided s3 path {s3_path} is not valid. Please check")
s3_bucket = s3_path.replace('s3://','').split('/')[0]
print(f"S3 Bucket: {s3_bucket}")
try:
s3.meta.client.head_bucket(Bucket=s3_bucket)
except botocore.exceptions.ClientError as e:
error_code = e.response['Error']['Code']
if error_code == '404':
raise RuntimeError('S3 bucket does not exist. Please check')
aws_s3_sync(local_path, s3_path)
return
def sync_s3_checkpoints_to_local(local_path="/opt/ml/checkpoints", s3_path=os.path.dirname(os.path.dirname(os.getenv('SM_MODULE_DIR', '')))+'/checkpoints'):
""" sample function to sync checkpoints from s3 to local path """
import boto3, botocore
#creat if local path does not exists
if not os.path.exists(local_path):
print(f"Provided local path {local_path} does not exist. Creating...")
try:
os.makedirs(local_path)
except Exception as e:
raise RuntimeError(f"failed to create {local_path}")
#check if s3 bucket exists
s3 = boto3.resource('s3')
if 's3://' not in s3_path:
raise ValueError("Provided s3 path {s3_path} is not valid. Please check")
s3_bucket = s3_path.replace('s3://','').split('/')[0]
print(f"S3 Bucket: {s3_bucket}")
try:
s3.meta.client.head_bucket(Bucket=s3_bucket)
except botocore.exceptions.ClientError as e:
error_code = e.response['Error']['Code']
if error_code == '404':
raise RuntimeError('S3 bucket does not exist. Please check')
aws_s3_sync(s3_path, local_path)
return
class Net1(nn.Module):
def __init__(self):
super(Net1, self).__init__()
self.conv1 = nn.Conv2d(1, 32, 3, 1)
self.conv2 = nn.Conv2d(32, 64, 3, 1)
def forward(self, x):
x = self.conv1(x)
x = F.relu(x)
x = self.conv2(x)
x = F.relu(x)
x = F.max_pool2d(x, 2)
x = torch.flatten(x, 1)
return x
class Net2(nn.Module):
def __init__(self):
super(Net2, self).__init__()
self.fc1 = nn.Linear(9216, 128)
self.fc2 = nn.Linear(128, 10)
def forward(self, x):
x = self.fc1(x)
x = F.relu(x)
x = self.fc2(x)
output = F.log_softmax(x, 1)
return output
class GroupedNet(nn.Module):
def __init__(self):
super(GroupedNet, self).__init__()
self.net1 = Net1()
self.net2 = Net2()
def forward(self, x):
x = self.net1(x)
x = self.net2(x)
return x
# SM Distributed: Define smp.step. Return any tensors needed outside.
@smp.step
def train_step(model, scaler, data, target):
with autocast(1 > 0):
output = model(data)
loss = F.nll_loss(output, target, reduction="mean")
scaled_loss = loss
model.backward(scaled_loss)
return output, loss
def train(model, scaler, device, train_loader, optimizer, epoch):
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# SM Distributed: Move input tensors to the GPU ID used by the current process,
# based on the set_device call.
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
# Return value, loss_mb is a StepOutput object
_, loss_mb = train_step(model, scaler, data, target)
# SM Distributed: Average the loss across microbatches.
loss = loss_mb.reduce_mean()
optimizer.step()
if smp.rank() == 0 and batch_idx % 10 == 0:
print(
"Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}".format(
epoch,
batch_idx * len(data),
len(train_loader.dataset),
100.0 * batch_idx / len(train_loader),
loss.item(),
)
)
# SM Distributed: Define smp.step for evaluation.
@smp.step
def test_step(model, data, target):
output = model(data)
loss = F.nll_loss(output, target, reduction="sum").item() # sum up batch loss
pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability
correct = pred.eq(target.view_as(pred)).sum().item()
return loss, correct
def test(model, device, test_loader):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for batch_idx, (data, target) in enumerate(test_loader):
# SM Distributed: Moves input tensors to the GPU ID used by the current process
# based on the set_device call.
data, target = data.to(device), target.to(device)
# Since test_step returns scalars instead of tensors,
# test_step decorated with smp.step will return lists instead of StepOutput objects.
loss_batch, correct_batch = test_step(model, data, target)
test_loss += sum(loss_batch)
correct += sum(correct_batch)
test_loss /= len(test_loader.dataset)
if smp.mp_rank() == 0:
print(
"\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n".format(
test_loss,
correct,
len(test_loader.dataset),
100.0 * correct / len(test_loader.dataset),
)
)
return test_loss
def main():
if not torch.cuda.is_available():
raise ValueError("The script requires CUDA support, but CUDA not available")
use_ddp = True
use_horovod = False
# Fix seeds in order to get the same losses across runs
random.seed(1)
np.random.seed(1)
torch.manual_seed(1)
torch.cuda.manual_seed(1)
smp.init()
# SM Distributed: Set the device to the GPU ID used by the current process.
# Input tensors should be transferred to this device.
torch.cuda.set_device(smp.local_rank())
device = torch.device("cuda")
kwargs = {"batch_size": 64}
kwargs.update({"num_workers": 1, "pin_memory": True, "shuffle": False})
transform = transforms.Compose(
[transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]
)
# SM Distributed: Download only on a single process per instance.
# When this is not present, the file is corrupted by multiple processes trying
# to download and extract at the same time
if smp.local_rank() == 0:
dataset1 = datasets.MNIST("../data", train=True, download=True, transform=transform)
smp.barrier()
dataset1 = datasets.MNIST("../data", train=True, download=False, transform=transform)
if (use_ddp or use_horovod) and smp.dp_size() > 1:
partitions_dict = {f"{i}": 1 / smp.dp_size() for i in range(smp.dp_size())}
dataset1 = SplitDataset(dataset1, partitions=partitions_dict)
dataset1.select(f"{smp.dp_rank()}")
# Download and create dataloaders for train and test dataset
dataset2 = datasets.MNIST("../data", train=False, transform=transform)
train_loader = torch.utils.data.DataLoader(dataset1, **kwargs)
test_loader = torch.utils.data.DataLoader(dataset2, **kwargs)
model = GroupedNet()
# SMP handles the transfer of parameters to the right device
# and the user doesn't need to call 'model.to' explicitly.
# model.to(device)
optimizer = optim.Adadelta(model.parameters(), lr=4.0)
# SM Distributed: Use the DistributedModel container to provide the model
# to be partitioned across different ranks. For the rest of the script,
# the returned DistributedModel object should be used in place of
# the model provided for DistributedModel class instantiation.
model = smp.DistributedModel(model)
scaler = smp.amp.GradScaler()
optimizer = smp.DistributedOptimizer(optimizer)
scheduler = StepLR(optimizer, step_size=1, gamma=0.7)
for epoch in range(1, 2):
train(model, scaler, device, train_loader, optimizer, epoch)
test_loss = test(model, device, test_loader)
scheduler.step()
if smp.rank() == 0:
if os.path.exists('/opt/ml/local_checkpoints'):
print("-INFO- PATH DO EXIST")
else:
os.makedirs('/opt/ml/local_checkpoints')
print("-INFO- PATH DO NOT EXIST")
# Waiting the save checkpoint to be finished before run another allgather_object
smp.barrier()
if smp.dp_rank() == 0:
model_dict = model.local_state_dict()
opt_dict = optimizer.local_state_dict()
smp.save(
{"model_state_dict": model_dict, "optimizer_state_dict": opt_dict},
f"/opt/ml/local_checkpoints/pt_mnist_checkpoint.pt",
partial=True,
)
smp.barrier()
if smp.local_rank() == 0:
print("Start syncing")
base_s3_path = os.path.dirname(os.path.dirname(os.getenv('SM_MODULE_DIR', '')))
curr_host = os.getenv('SM_CURRENT_HOST')
full_s3_path = f'{base_s3_path}/checkpoints/{curr_host}/'
sync_local_checkpoints_to_s3(local_path='/opt/ml/local_checkpoints', s3_path=full_s3_path)
print("Finished syncing")
if __name__ == "__main__":
main()
```
## Define the SageMaker training job
Next, you use the SageMaker Estimator API to define a SageMaker training job. You use an [`Estimator`](https://sagemaker.readthedocs.io/en/stable/api/training/estimators.html) to define the number and type of EC2 instances Amazon SageMaker uses for training, as well as the size of the volume attached to those instances.
You can update the following:
* `processes_per_host`
* `entry_point`
* `instance_count`
* `instance_type`
* `base_job_name`
You can also provide and modify configuration parameters for the SageMaker Distributed Model Parallel library. These parameters are passed in through the `distribution` argument, as shown below.
### Update the type and number of EC2 instances to use
Specify `processes_per_host`. It should be a multiple of the number of partitions, which is 2 by default (e.g., 2, 4, ...).
The instance type and instance count you specify in `instance_type` and `instance_count` determine the number of GPUs Amazon SageMaker uses during training: `instance_type` determines the number of GPUs on a single instance, and that number is multiplied by `instance_count`.
You must specify values for `instance_type` and `instance_count` so that the total number of GPUs available for training equals `partitions` in the `config` passed to `smp.init` in the training script.
To check instance types, see [Amazon EC2 Instance Types](https://aws.amazon.com/sagemaker/pricing/).
### Uploading checkpoints during training or resuming from a previous training job
We also provide a custom way to upload checkpoints during training or resume from the checkpoints of a previous training job. See the `aws_s3_sync`, `sync_local_checkpoints_to_s3`, and `sync_s3_checkpoints_to_local` functions for details.
You can see them in the example script `pt_mnist.py`; in this example, we only use `sync_local_checkpoints_to_s3` to upload checkpoints during training.
After you have updated `entry_point`, `instance_count`, `instance_type` and `base_job_name`, run the following to create an estimator.
```
sagemaker_session = sagemaker.session.Session(boto_session=session)
mpioptions = "-verbose -x orte_base_help_aggregate=0 "
mpioptions += "--mca btl_vader_single_copy_mechanism none "
all_experiment_names = [exp.experiment_name for exp in Experiment.list()]
#choose an experiment name (only need to create it once)
experiment_name = "SM-MP-DEMO"
# Load the experiment if it exists, otherwise create
if experiment_name not in all_experiment_names:
customer_churn_experiment = Experiment.create(
experiment_name=experiment_name, sagemaker_boto_client=boto3.client("sagemaker")
)
else:
customer_churn_experiment = Experiment.load(
experiment_name=experiment_name, sagemaker_boto_client=boto3.client("sagemaker")
)
# Create a trial for the current run
trial = Trial.create(
trial_name="SMD-MP-demo-{}".format(strftime("%Y-%m-%d-%H-%M-%S", gmtime())),
experiment_name=customer_churn_experiment.experiment_name,
sagemaker_boto_client=boto3.client("sagemaker"),
)
smd_mp_estimator = PyTorch(
entry_point="pt_mnist.py", # Pick your train script
source_dir='utils',
role=role,
instance_type='ml.p3.16xlarge',
sagemaker_session=sagemaker_session,
framework_version='1.6.0',
py_version='py36',
instance_count=1,
distribution={
"smdistributed": {
"modelparallel": {
"enabled":True,
"parameters": {
"microbatches": 4,
"placement_strategy": "spread",
"pipeline": "interleaved",
"optimize": "speed",
"partitions": 2,
"ddp": True,
}
}
},
"mpi": {
"enabled": True,
"processes_per_host": 2, # Pick your processes_per_host
"custom_mpi_options": mpioptions
},
},
base_job_name="SMD-MP-demo",
)
```
Finally, use the estimator to launch the SageMaker training job.
```
%%time
smd_mp_estimator.fit(
experiment_config={
"ExperimentName": customer_churn_experiment.experiment_name,
"TrialName": trial.trial_name,
"TrialComponentDisplayName": "Training",
})
```
# Introduction to climlab and 1D grey radiation models
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import netCDF4 as nc
import climlab
```
# Validate climlab against analytical solution for 2-layer atmosphere
```
# Test in a 2-layer atmosphere
col = climlab.GreyRadiationModel(num_lev=2)
print(col)
col.subprocess
col.state
col.Ts
col.Ts[:] = 288.
col.Tatm[:] = np.array([275., 230.])
col.state
LW = col.subprocess['LW']
print (LW)
LW.absorptivity
LW.absorptivity = 0.58377
LW.absorptivity
col.diagnostics
col.compute_diagnostics()
col.diagnostics
col.diagnostics['OLR']
col.state
col.step_forward()
col.state
# integrate out to radiative equilibrium
col.integrate_years(2.)
col.diagnostics['ASR'] - col.diagnostics['OLR']
# Compare these temperatures against our analytical solutions for radiative equilibrium
col.state
```
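For reference, the 2-layer grey-gas model has a closed-form radiative equilibrium solution that we can compare against. The sketch below assumes both layers share the same absorptivity/emissivity and that the absorbed shortwave radiation is about 239.4 W/m2 (roughly the model default), so treat the numbers as approximate:
```
sigma = 5.67E-8   # Stefan-Boltzmann constant, W/m2/K4
eps = 0.58377     # layer absorptivity / emissivity used above
ASR = 239.4       # assumed absorbed shortwave radiation, W/m2

# Analytical radiative equilibrium of the 2-layer grey gas model:
#   sigma*T1**4 = ASR / (2 - eps)
#   sigma*T0**4 = (1 + eps) * ASR / (2 - eps)
#   sigma*Ts**4 = (2 + eps) * ASR / (2 - eps)
T1 = (ASR / (2 - eps) / sigma) ** 0.25
T0 = ((1 + eps) * ASR / (2 - eps) / sigma) ** 0.25
Ts = ((2 + eps) * ASR / (2 - eps) / sigma) ** 0.25
print(Ts, T0, T1)   # roughly 296 K, 262 K, 234 K
```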
# Get observed annual, global mean temperature profile
```
ncep_url = "http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis.derived/"
ncep_air = nc.Dataset( ncep_url + "pressure/air.mon.1981-2010.ltm.nc" )
level = ncep_air.variables['level'][:]
lat = ncep_air.variables['lat'][:]
zstar = np.log(level/1000)
Tzon = np.mean(ncep_air.variables['air'][:],axis=(0,3))
Tglobal = np.average( Tzon , weights=np.cos(np.deg2rad(lat)), axis=1) + climlab.constants.tempCtoK
fig = plt.figure( figsize=(8,6) )
ax = fig.add_subplot(111)
ax.plot( Tglobal , level )
ax.invert_yaxis()
ax.set_xlabel('Temperature (K)', fontsize=16)
ax.set_ylabel('Pressure (hPa)', fontsize=16 )
ax.set_title('Global, annual mean sounding from NCEP Reanalysis', fontsize = 24)
ax.grid()
```
# Create 30-layer model with observed temperatures
```
# initialize a grey radiation model with 30 levels
col = climlab.GreyRadiationModel()
print (col)
# interpolate to 30 evenly spaced pressure levels
lev = col.lev
Tinterp = np.flipud(np.interp(np.flipud(lev), np.flipud(level), np.flipud(Tglobal)))
Tinterp
# Initialize model with observed temperatures
col.Ts[:] = Tglobal[0]
col.Tatm[:] = Tinterp
def plot_sounding(collist):
color_cycle=['r', 'g', 'b', 'y']
# col is either a column model object or a list of column model objects
if isinstance(collist, climlab.Process):
# make a list with a single item
collist = [collist]
fig = plt.figure()
ax = fig.add_subplot(111)
for i, col in enumerate(collist):
ax.plot(col.Tatm, col.lev, color=color_cycle[i])
ax.plot(col.Ts, climlab.constants.ps, 'o', markersize=12, color=color_cycle[i])
ax.invert_yaxis()
ax.set_xlabel('Temperature (K)')
ax.set_ylabel('Pressure (hPa)')
ax.grid()
return ax
plot_sounding(col)
```
# Tune absorptivity to get observed OLR
```
col.compute_diagnostics()
col.diagnostics['OLR']
# Need to tune absorptivity to get OLR = 239
epsarray = np.linspace(0.01, 0.1, 100)
OLRarray = np.zeros_like(epsarray)
for i in range(epsarray.size):
col.subprocess['LW'].absorptivity = epsarray[i]
col.compute_diagnostics()
OLRarray[i] = col.diagnostics['OLR']
plt.plot(epsarray, OLRarray)
plt.grid()
def OLRanom(eps):
col.subprocess['LW'].absorptivity = eps
col.compute_diagnostics()
return col.diagnostics['OLR'] - 239.
OLRanom(0.02)
# Use numerical root-finding to get the equilibria
from scipy.optimize import brentq
# brentq is a root-finding function
# Need to give it a function and two end-points
# It will look for a zero of the function between those end-points
eps = brentq(OLRanom, 0.01, 0.1)
print (eps)
col.subprocess['LW'].absorptivity = eps
col.subprocess['LW'].absorptivity
col.compute_diagnostics()
col.diagnostics['OLR']
```
# Compute radiative forcing for a 2% increase in absorptivity
```
col2 = climlab.process_like(col)
print (col2)
col2.subprocess['LW'].absorptivity *= 1.02
col2.subprocess['LW'].absorptivity
col2.compute_diagnostics()
col2.diagnostics['OLR']
col2.Ts - col.Ts
col2.diagnostics['OLR'] - col.diagnostics['OLR']
RF = -(col2.diagnostics['OLR'] - col.diagnostics['OLR'])
print ('The radiative forcing is %f W/m2.' %RF)
```
# Radiative equilibrium in the 30-layer model
```
re = climlab.process_like(col)
re.integrate_years(2.)
# Check for energy balance
re.diagnostics['ASR'] - re.diagnostics['OLR']
plot_sounding([col, re])
```
# Radiative-Convective equilibrium in the 30-layer model
```
rce = climlab.RadiativeConvectiveModel(adj_lapse_rate=6.)
print (rce)
rce.subprocess['LW'].absorptivity = eps
rce.integrate_years(2.)
# Check for energy balance
rce.diagnostics['ASR'] - rce.diagnostics['OLR']
plot_sounding([col, rce])
```
# Greenhouse warming in RCE model
```
# Another 2% increase in absorptivity
rce2 = climlab.process_like(rce)
rce2.subprocess['LW'].absorptivity *= 1.02
rce2.compute_diagnostics()
RF = -(rce2.diagnostics['OLR'] - rce.diagnostics['OLR'])
print ('The radiative forcing is %f W/m2.' %RF)
# Timestep forward, and then check for energy balance
rce2.integrate_years(2.)
rce2.diagnostics['ASR'] - rce2.diagnostics['OLR']
plot_sounding([col, rce, rce2])
ECS = rce2.Ts - rce.Ts
print ('Equilibrium climate sensitivity is %f K.' %ECS)
# Calculate the net climate feedback
# This is the change in TOA flux per degree warming that was necessary to get back to equilibrium.
feedback = -RF/ECS
print ('The net feedback is %f W/m2/K' %feedback )
# could calculate a Planck feedback explicitly...
# What would the TOA flux change be if the warming were perfectly uniform?
rce3 = climlab.process_like(rce)
rce3.subprocess['LW'].absorptivity *= 1.02
rce3.Ts += ECS
rce3.Tatm += ECS
```
# Title Generation using Recurrent Neural Networks
I never know what I should title most things I have written. I hope that by using a corpus of titles, recurrent neural networks (RNNs) can write my titles for me.
I thought a fitting title to generate would be something within Machine Learning, so I used [Publish or Perish](https://harzing.com/resources/publish-or-perish) to fetch any title from Google Scholar associated with *Machine Learning*. It retrieved 950 titles, which you can view [here](https://gist.github.com/AngusTheMack/defadcbc503e2d625720661e9893ff0a).
If you want to use this to generate your own titles (or any text whatsoever), just change the `url` to download the data from, or the `save_location` to where your data is stored.
## Titles Generated
While playing around with the implementations below, I was able to generate some very cool-sounding titles:
* Function Classification Using Machine Learning Techniques
* Bayesian Approximation of Effective Machine Learning
* Data Classification With Machine Learning
* Computer Multi-agent Boltzmann Machine Learning
* Machine Learning Approaches for Visual Classification
* New Machine Learning for Astrophysics
* Neural Machine Learning for Medical Imaging
* Deep Similarity Learning Filters
## Implementations
I wanted to compare results between somewhat vanilla RNN implementations and a Long Short-Term Memory (LSTM) model. To that end I used a character-level RNN, a word-level RNN and an LSTM. This was done mainly to try and better understand the underlying concepts in RNNs, and what differentiates them from LSTMs.
I used [Andrej Karpathy's blog](https://karpathy.github.io/) post [The Unreasonable Effectiveness of Recurrent Neural Networks ](https://karpathy.github.io/2015/05/21/rnn-effectiveness/) as my starting point - and utilised his amazing [112 line char level RNN](https://gist.github.com/karpathy/d4dee566867f8291f086) implemented in vanilla python.
After that I used [Denny Britz's](https://github.com/dennybritz) [word level RNN](https://github.com/dennybritz/rnn-tutorial-rnnlm/blob/master/RNNLM.ipynb) from his series of [blog posts](http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-2-implementing-a-language-model-rnn-with-python-numpy-and-theano/) on the topic.
Finally, I used [Shivam Bansal's](https://www.kaggle.com/shivamb) [Beginners Guide to Text Generation using LSTMs](https://www.kaggle.com/shivamb/beginners-guide-to-text-generation-using-lstms/notebook) for the LSTM implementation.
```
import numpy as np
import matplotlib.pyplot as plt
import string
import urllib.request
import pickle
%matplotlib inline
def download_data(url, save_location):
"""
Download data to be used as corpus
"""
print('Beginning file download...')
urllib.request.urlretrieve(url,save_location)
print("Downloaded file, saving to:",save_location)
def load_data(save_location):
"""
Load data from Textfile
"""
file = open(save_location,"r")
data = file.read()
return data
def avg_char_per_title(data):
"""
Calculate the average number of chars in a title for the sequence length
"""
lines = data.split("\n")
line_lengths = np.zeros(len(lines))
for i,line in enumerate(lines):
line_lengths[i] = len(line)
return np.average(line_lengths)
def save_object(obj, filename):
"""
Save an object - used to save models
"""
with open(filename, 'wb') as output:
pickle.dump(obj, output, -1)
# Change the URL to whatever text you want to train with
url = "https://gist.githubusercontent.com/AngusTheMack/defadcbc503e2d625720661e9893ff0a/raw/bb978a5ef025ff104009ab8139da4a0b7367992f/Titles.txt"
# Save Location will be used to load the data in
save_location = "Titles.txt" # either the name of the file downloaded with the URL above, or the location of your own file to load in
# Downloads the data, and loads it in
download_data(url,save_location)
data = load_data(save_location)
# Print first 100 characters of the data
print(data[:100])
def clean_text(data):
"""
Removes non essential characters in corpus of text
"""
data = "".join(v for v in data if v not in string.punctuation).lower()
data = data.encode("utf8").decode("ascii",'ignore')
return data
# You don't need to clean, but it can make things simpler
cleaned = clean_text(data)
print(cleaned[:100])
def unique_chars(data):
"""
Get all unique Characters in the Dataset
"""
return list(set(data))
# Some info about the data
chars = unique_chars(cleaned)
data_size, input_size = len(cleaned), len(chars)
print("Data has %d characters, %d of them are unique" % (data_size, input_size))
def tokenize_chars(chars):
"""
Create dictionaries to make it easy to convert from tokens to chars
"""
char_to_idx = {ch:i for i,ch in enumerate(chars)}
idx_to_char = {i:ch for i,ch in enumerate(chars)}
return char_to_idx, idx_to_char
# Create dictionaries, and display example using 11 chars
char_to_idx, idx_to_char = tokenize_chars(chars)
first_title = cleaned[:11]
print("{0:<2}|{1:<2}".format('Character', 'Index'))
print("________________")
for i in range(len(first_title)):
char_index = char_to_idx[first_title[i]]
print("{0:<9}|{a:d}".format(idx_to_char[char_index], a=char_to_idx[first_title[i]]))
```
# Char Level RNN
Created by Andrej Karpathy, available here: [here](https://gist.github.com/karpathy/d4dee566867f8291f086)
```
"""
Minimal character-level Vanilla RNN model. Written by Andrej Karpathy (@karpathy)
BSD License
Ever so slightly modified to be used with the above code
"""
data = cleaned
chars = unique_chars(cleaned)
data_size, vocab_size = len(cleaned), len(chars)
# hyperparameters
hidden_size = 100 # size of hidden layer of neurons
seq_length = 25 # number of steps to unroll the RNN for
learning_rate = 1e-1
# model parameters
Wxh = np.random.randn(hidden_size, vocab_size)*0.01 # input to hidden
Whh = np.random.randn(hidden_size, hidden_size)*0.01 # hidden to hidden
Why = np.random.randn(vocab_size, hidden_size)*0.01 # hidden to output
bh = np.zeros((hidden_size, 1)) # hidden bias
by = np.zeros((vocab_size, 1)) # output bias
def lossFun(inputs, targets, hprev):
"""
inputs,targets are both list of integers.
hprev is Hx1 array of initial hidden state
returns the loss, gradients on model parameters, and last hidden state
"""
xs, hs, ys, ps = {}, {}, {}, {}
hs[-1] = np.copy(hprev)
loss = 0
# forward pass
for t in range(len(inputs)):
xs[t] = np.zeros((vocab_size,1)) # encode in 1-of-k representation
xs[t][inputs[t]] = 1
hs[t] = np.tanh(np.dot(Wxh, xs[t]) + np.dot(Whh, hs[t-1]) + bh) # hidden state
ys[t] = np.dot(Why, hs[t]) + by # unnormalized log probabilities for next chars
ps[t] = np.exp(ys[t]) / np.sum(np.exp(ys[t])) # probabilities for next chars
loss += -np.log(ps[t][targets[t],0]) # softmax (cross-entropy loss)
# backward pass: compute gradients going backwards
dWxh, dWhh, dWhy = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(Why)
dbh, dby = np.zeros_like(bh), np.zeros_like(by)
dhnext = np.zeros_like(hs[0])
for t in reversed(range(len(inputs))):
dy = np.copy(ps[t])
dy[targets[t]] -= 1 # backprop into y. see http://cs231n.github.io/neural-networks-case-study/#grad if confused here
dWhy += np.dot(dy, hs[t].T)
dby += dy
dh = np.dot(Why.T, dy) + dhnext # backprop into h
dhraw = (1 - hs[t] * hs[t]) * dh # backprop through tanh nonlinearity
dbh += dhraw
dWxh += np.dot(dhraw, xs[t].T)
dWhh += np.dot(dhraw, hs[t-1].T)
dhnext = np.dot(Whh.T, dhraw)
for dparam in [dWxh, dWhh, dWhy, dbh, dby]:
np.clip(dparam, -5, 5, out=dparam) # clip to mitigate exploding gradients
return loss, dWxh, dWhh, dWhy, dbh, dby, hs[len(inputs)-1]
def sample(h, seed_ix, n):
"""
sample a sequence of integers from the model
h is memory state, seed_ix is seed letter for first time step
"""
x = np.zeros((vocab_size, 1))
x[seed_ix] = 1
ixes = []
for t in range(n):
h = np.tanh(np.dot(Wxh, x) + np.dot(Whh, h) + bh)
y = np.dot(Why, h) + by
p = np.exp(y) / np.sum(np.exp(y))
ix = np.random.choice(range(vocab_size), p=p.ravel())
x = np.zeros((vocab_size, 1))
x[ix] = 1
ixes.append(ix)
return ixes
n, p = 0, 0
mWxh, mWhh, mWhy = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(Why)
mbh, mby = np.zeros_like(bh), np.zeros_like(by) # memory variables for Adagrad
smooth_loss = -np.log(1.0/vocab_size)*seq_length # loss at iteration 0
while True:
# prepare inputs (we're sweeping from left to right in steps seq_length long)
if p+seq_length+1 >= len(data) or n == 0:
hprev = np.zeros((hidden_size,1)) # reset RNN memory
p = 0 # go from start of data
inputs = [char_to_idx[ch] for ch in data[p:p+seq_length]]
targets = [char_to_idx[ch] for ch in data[p+1:p+seq_length+1]]
# sample from the model now and then
if n % 100 == 0:
sample_ix = sample(hprev, inputs[0], 200)
txt = ''.join(idx_to_char[ix] for ix in sample_ix)
print('----\n %s \n----' % (txt, ))
# forward seq_length characters through the net and fetch gradient
loss, dWxh, dWhh, dWhy, dbh, dby, hprev = lossFun(inputs, targets, hprev)
smooth_loss = smooth_loss * 0.999 + loss * 0.001
if n % 100 == 0: print('iter %d, loss: %f' % (n, smooth_loss)) # print progress
# perform parameter update with Adagrad
for param, dparam, mem in zip([Wxh, Whh, Why, bh, by],
[dWxh, dWhh, dWhy, dbh, dby],
[mWxh, mWhh, mWhy, mbh, mby]):
mem += dparam * dparam
param += -learning_rate * dparam / np.sqrt(mem + 1e-8) # adagrad update
p += seq_length # move data pointer
n += 1 # iteration counter
```
I stopped the above cell early, as it takes quite a while to generate meaningful text - and sometimes it doesn't seem to converge at all. Here is the output from an implementation I had running for a day or two that got down to about 16 for its loss.
```
Oxprensur Machine Learning Based Comparison Imagepredalyic Problem A Machine Learning Shidenticing With Stomement Machine
Genetional Translingl Data O
Tby Of Panadigunoous Of Machine Learning Approach
Machine Learning Approaches And Ancerxards
Applications
Ortamenopforcher Image Of And Comparison Hen
Bytesca For Dete
Semapt Recognition
Neural Ontropicaty Stvediction
Thance Resules Of Machinelearning Based And Machine Learning
Ma
Rward Algorithms
Thek Support Vector Machine Learning Toces
Survey
Subperai Scalistose Machine Learning
Classer Ald Optimization
Spatsimentar Scanisys
Twarites In Machine Learning For Algorithms
Realtime S Forildetion For Support Vector Machine Learning Techniques For The Laond Machine Learning For S
Syppbys
Mumporaty Researchon Using Of Temporing
Entruasian Designs Spevied Alghid Machine Learning
Clesit A Dizen Interaninergopers
Machine Learning
D
Operpne Mencal Work2Bated Athito Mativing Thootimic Optoraty For Machine Learning Methodent Methods In Dete Detection Of The Ancherch Of Contratecompu
Hacingar Proborion
Machine Learning In Metric Learning Transif Trassing An Learning
Machine Learning Audomement Machine Learning Of Machine Learning T
Ttymane Learning Coneftrand An Application For Mmfes On Undersec Auport Text Machine Learning A Machine Learning With Stalsaby Data Misuse Contronimic
Rsenticing Machineleseratigg
Machinelearning Of Vector
Machine Learning
Hungersing On Machine Learning And Activity
Approach To Trugbal Machine Learni
Rcemative Learning
Machine Learning And Compilianc User Introppshibution Of Brain Berial Distoneer Machine Learning
Discovery Descnessow Of Ant Seqmen
Oventicing Using Recognstimessing Practical Frainetation
Mesticabily For Parxam Experimaphitist Besk Coxican
Machine Learning Bos Automated Machine Le
Fxamentle Image Of Machine Learning Gave Trapean Schemass Of Machine Learning Of Methods Inty On Combinion Gane Technical Deabficimation Classaletrati
Esintiafforcemental Nerkase Deterabe Optimization Agversitoraling
A For Decision Techniques And Optimization For Usey In Machine Learning Corsed Machi
Onedential Machine Learning
Detection
Drepoutivelearning Machine Learning
Computtess Design Re6Aition To By Intempregressir Tomation
Suportiva Contere
Raph Incrotelaxics Ylame Tring Code
Anemoriomative Reperimity In Paraller
Munt Langouupmi Plediction Of Machine Learning
Predicting Prowibley Increman
Ecosting Machine Learning
Predict Learning And Smanced
Machine Learning
Data With Machine Learning Toateraby Ougcing Word Feature Ussifbees
Jachi Elar
Dations
Analysis Of Liagn Twictite Classification
Patferetistic Prospe Identificies Clamngenoun
Progmaris
Machine Learning For Anpreaching Methoduntac
Ocion Ad Applisition Reclasy Envinids
Quantsys A Otsum Mazining A Machine Learning
Machine Learning
Machine Learning
Extraction
Machine Learning Appro
Iches Using Machine Learning Pprssmase To Machine Learning Approach To Filteral Progrom Om Feremble Identifica Optiman Enviroptimization Of The Use In
```
As you can see, they are generally quite nonsensical. Still, the simple RNN does latch onto a few words that it has learned from character sequences alone, which is really cool! It has basically learned a tiny and very focused bit of the English language.
# Word Level RNN
The second implementation uses [Denny Britz's word level model](https://github.com/dennybritz/rnn-tutorial-rnnlm).
```
import csv
import itertools
import operator
import nltk
import sys
from datetime import datetime
# Chops the stream of titles into an array of titles based on new line characters
titles = cleaned.split("\n")
titles[0]
unknown_token = "UNKNOWN_TOKEN"
title_start_token = "SENTENCE_START"
title_end_token = "SENTENCE_END"
# Add the start and end token to the title
titles = ["%s %s %s" % (title_start_token, x, title_end_token) for x in titles]
# Ensure that nltk has the punkt package
nltk.download('punkt')
tokenized_titles = [nltk.word_tokenize(t) for t in titles]
word_freq = nltk.FreqDist(itertools.chain(*tokenized_titles))
print("Found %d unique words tokens." % len(word_freq.items()))
vocabulary_size=2000#len(word_freq.items())
vocab = word_freq.most_common(vocabulary_size-1)
index_to_word = [x[0] for x in vocab]
index_to_word.append(unknown_token)
word_to_index = dict([(w,i) for i,w in enumerate(index_to_word)])
print("Using vocabulary size %d." % vocabulary_size)
print("The least frequent word in our vocabulary is '%s' and appeared %d times." % (vocab[-1][0], vocab[-1][1]))
# Replace all words not in our vocabulary with the unknown token
for i, sent in enumerate(tokenized_titles):
tokenized_titles[i] = [w if w in word_to_index else unknown_token for w in sent]
print("\nExample sentence: '%s'" % titles[0])
print("\nExample sentence after Pre-processing: '%s'" % tokenized_titles[0])
# Create the training data
X_train = np.asarray([[word_to_index[w] for w in sent[:-1]] for sent in tokenized_titles])
y_train = np.asarray([[word_to_index[w] for w in sent[1:]] for sent in tokenized_titles])
# Print training data example
x_example, y_example = X_train[17], y_train[17]
print("x:\n%s\n%s" % (" ".join([index_to_word[x] for x in x_example]), x_example))
print("\ny:\n%s\n%s" % (" ".join([index_to_word[x] for x in y_example]), y_example))
def softmax(x):
xt = np.exp(x - np.max(x))
return xt / np.sum(xt)
class RNNNumpy:
def __init__(self, word_dim, hidden_dim=100, bptt_truncate=4):
# Assign instance variables
self.word_dim = word_dim
self.hidden_dim = hidden_dim
self.bptt_truncate = bptt_truncate
# Randomly initialize the network parameters
self.U = np.random.uniform(-np.sqrt(1./word_dim), np.sqrt(1./word_dim), (hidden_dim, word_dim))
self.V = np.random.uniform(-np.sqrt(1./hidden_dim), np.sqrt(1./hidden_dim), (word_dim, hidden_dim))
self.W = np.random.uniform(-np.sqrt(1./hidden_dim), np.sqrt(1./hidden_dim), (hidden_dim, hidden_dim))
def forward_propagation(self, x):
# The total number of time steps
T = len(x)
# During forward propagation we save all hidden states in s because need them later.
# We add one additional element for the initial hidden, which we set to 0
s = np.zeros((T + 1, self.hidden_dim))
s[-1] = np.zeros(self.hidden_dim)
# The outputs at each time step. Again, we save them for later.
o = np.zeros((T, self.word_dim))
# For each time step...
for t in np.arange(T):
        # Note that we are indexing U by x[t]. This is the same as multiplying U with a one-hot vector.
s[t] = np.tanh(self.U[:,x[t]] + self.W.dot(s[t-1]))
o[t] = softmax(self.V.dot(s[t]))
return [o, s]
RNNNumpy.forward_propagation = forward_propagation
def predict(self, x):
# Perform forward propagation and return index of the highest score
o, s = self.forward_propagation(x)
return np.argmax(o, axis=1)
RNNNumpy.predict = predict
np.random.seed(10)
model = RNNNumpy(vocabulary_size)
o, s = model.forward_propagation(X_train[10])
print(o.shape)
print(o)
predictions = model.predict(X_train[10])
print(predictions.shape)
print(predictions)
def calculate_total_loss(self, x, y):
L = 0
# For each sentence...
for i in np.arange(len(y)):
o, s = self.forward_propagation(x[i])
# We only care about our prediction of the "correct" words
correct_word_predictions = o[np.arange(len(y[i])), y[i]]
# Add to the loss based on how off we were
L += -1 * np.sum(np.log(correct_word_predictions))
return L
def calculate_loss(self, x, y):
# Divide the total loss by the number of training examples
    N = np.sum([len(y_i) for y_i in y])
return self.calculate_total_loss(x,y)/N
RNNNumpy.calculate_total_loss = calculate_total_loss
RNNNumpy.calculate_loss = calculate_loss
# Limit to 1000 examples to save time
print("Expected Loss for random predictions: %f" % np.log(vocabulary_size))
print("Actual loss: %f" % model.calculate_loss(X_train[:1000], y_train[:1000]))
def bptt(self, x, y):
T = len(y)
# Perform forward propagation
o, s = self.forward_propagation(x)
# We accumulate the gradients in these variables
dLdU = np.zeros(self.U.shape)
dLdV = np.zeros(self.V.shape)
dLdW = np.zeros(self.W.shape)
delta_o = o
delta_o[np.arange(len(y)), y] -= 1.
# For each output backwards...
for t in np.arange(T)[::-1]:
dLdV += np.outer(delta_o[t], s[t].T)
# Initial delta calculation
delta_t = self.V.T.dot(delta_o[t]) * (1 - (s[t] ** 2))
# Backpropagation through time (for at most self.bptt_truncate steps)
for bptt_step in np.arange(max(0, t-self.bptt_truncate), t+1)[::-1]:
# print "Backpropagation step t=%d bptt step=%d " % (t, bptt_step)
dLdW += np.outer(delta_t, s[bptt_step-1])
dLdU[:,x[bptt_step]] += delta_t
# Update delta for next step
delta_t = self.W.T.dot(delta_t) * (1 - s[bptt_step-1] ** 2)
return [dLdU, dLdV, dLdW]
RNNNumpy.bptt = bptt
def gradient_check(self, x, y, h=0.001, error_threshold=0.01):
    # Calculate the gradients using backpropagation. We want to check if these are correct.
bptt_gradients = model.bptt(x, y)
# List of all parameters we want to check.
model_parameters = ['U', 'V', 'W']
# Gradient check for each parameter
for pidx, pname in enumerate(model_parameters):
# Get the actual parameter value from the mode, e.g. model.W
parameter = operator.attrgetter(pname)(self)
print("Performing gradient check for parameter %s with size %d." % (pname, np.prod(parameter.shape)))
# Iterate over each element of the parameter matrix, e.g. (0,0), (0,1), ...
it = np.nditer(parameter, flags=['multi_index'], op_flags=['readwrite'])
while not it.finished:
ix = it.multi_index
# Save the original value so we can reset it later
original_value = parameter[ix]
# Estimate the gradient using (f(x+h) - f(x-h))/(2*h)
parameter[ix] = original_value + h
gradplus = model.calculate_total_loss([x],[y])
parameter[ix] = original_value - h
gradminus = model.calculate_total_loss([x],[y])
estimated_gradient = (gradplus - gradminus)/(2*h)
# Reset parameter to original value
parameter[ix] = original_value
# The gradient for this parameter calculated using backpropagation
backprop_gradient = bptt_gradients[pidx][ix]
# calculate The relative error: (|x - y|/(|x| + |y|))
relative_error = np.abs(backprop_gradient - estimated_gradient)/(np.abs(backprop_gradient) + np.abs(estimated_gradient))
            # If the error is too large, fail the gradient check
if relative_error > error_threshold:
print("Gradient Check ERROR: parameter=%s ix=%s" % (pname, ix))
print("+h Loss: %f" % gradplus)
print("-h Loss: %f" % gradminus)
print("Estimated_gradient: %f" % estimated_gradient)
print("Backpropagation gradient: %f" % backprop_gradient)
print("Relative Error: %f" % relative_error)
return
it.iternext()
print("Gradient check for parameter %s passed." % (pname))
RNNNumpy.gradient_check = gradient_check
# To avoid performing millions of expensive calculations we use a smaller vocabulary size for checking.
grad_check_vocab_size = 100
np.random.seed(10)
word_model = RNNNumpy(grad_check_vocab_size, 10, bptt_truncate=1000)
word_model.gradient_check([0,1,2,3], [1,2,3,4])
# Performs one step of SGD.
def numpy_sdg_step(self, x, y, learning_rate):
# Calculate the gradients
dLdU, dLdV, dLdW = self.bptt(x, y)
# Change parameters according to gradients and learning rate
self.U -= learning_rate * dLdU
self.V -= learning_rate * dLdV
self.W -= learning_rate * dLdW
RNNNumpy.sgd_step = numpy_sdg_step
# Outer SGD Loop
# - model: The RNN model instance
# - X_train: The training data set
# - y_train: The training data labels
# - learning_rate: Initial learning rate for SGD
# - nepoch: Number of times to iterate through the complete dataset
# - evaluate_loss_after: Evaluate the loss after this many epochs
def train_with_sgd(model, X_train, y_train, learning_rate=0.005, nepoch=100, evaluate_loss_after=5):
# We keep track of the losses so we can plot them later
losses = []
num_examples_seen = 0
for epoch in range(nepoch):
# Optionally evaluate the loss
if (epoch % evaluate_loss_after == 0):
loss = model.calculate_loss(X_train, y_train)
losses.append((num_examples_seen, loss))
time = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
print("%s: Loss after num_examples_seen=%d epoch=%d: %f" % (time, num_examples_seen, epoch, loss))
# Adjust the learning rate if loss increases
if (len(losses) > 1 and losses[-1][1] > losses[-2][1]):
learning_rate = learning_rate * 0.5
print("Setting learning rate to %f" % learning_rate)
sys.stdout.flush()
# For each training example...
for i in range(len(y_train)):
# One SGD step
model.sgd_step(X_train[i], y_train[i], learning_rate)
num_examples_seen += 1
np.random.seed(10)
word_model = RNNNumpy(vocabulary_size)
%timeit word_model.sgd_step(X_train[10], y_train[10], 0.005)
np.random.seed(10)
model = RNNNumpy(vocabulary_size)
losses = train_with_sgd(model, X_train[:1000], y_train[:1000], nepoch=100, evaluate_loss_after=1)
def generate_sentence(model):
# We start the sentence with the start token
    new_sentence = [word_to_index[title_start_token]]
    # Repeat until we get an end token
    while not new_sentence[-1] == word_to_index[title_end_token]:
next_word_probs = model.forward_propagation(new_sentence)
#print(next_word_probs[0][-1])
#print(max(next_word_probs[0][-1]))
sampled_word = word_to_index[unknown_token]
# We don't want to sample unknown words
while sampled_word == word_to_index[unknown_token]:
samples = np.random.multinomial(1, next_word_probs[0][-1])
sampled_word = np.argmax(samples)
new_sentence.append(sampled_word)
sentence_str = [index_to_word[x] for x in new_sentence[1:-1]]
return sentence_str
num_sentences = 15
senten_min_length = 5
for i in range(num_sentences):
sent = []
# We want long sentences, not sentences with one or two words
while len(sent) < senten_min_length:
sent = generate_sentence(model)
print(" ".join(sent).title())
```
# LSTM
This is the [Beginners guide to text generation with LSTM](https://www.kaggle.com/shivamb/beginners-guide-to-text-generation-using-lstms) implementation
```
import tensorflow as tf
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Embedding, LSTM, Dense, Dropout
from keras.preprocessing.text import Tokenizer
from keras.callbacks import EarlyStopping
from keras.models import Sequential
import keras.utils as ku
from tensorflow import set_random_seed
from numpy.random import seed
set_random_seed(2)
seed(1)
import os
import warnings
warnings.filterwarnings("ignore")
warnings.simplefilter(action='ignore', category=FutureWarning)
corpus = cleaned.split("\n")
print(corpus[:10])
tokenizer = Tokenizer()
def get_sequence_of_tokens(corpus):
## tokenization
tokenizer.fit_on_texts(corpus)
total_words = len(tokenizer.word_index) + 1
## convert data to sequence of tokens
input_sequences = []
for line in corpus:
token_list = tokenizer.texts_to_sequences([line])[0]
for i in range(1, len(token_list)):
n_gram_sequence = token_list[:i+1]
input_sequences.append(n_gram_sequence)
return input_sequences, total_words
inp_sequences, total_words = get_sequence_of_tokens(corpus)
print(total_words)
inp_sequences[:10]
def generate_padded_sequences(input_sequences):
max_sequence_len = max([len(x) for x in input_sequences])
input_sequences = np.array(pad_sequences(input_sequences, maxlen=max_sequence_len, padding='pre'))
predictors, label = input_sequences[:,:-1],input_sequences[:,-1]
label = ku.to_categorical(label, num_classes=total_words)
return predictors, label, max_sequence_len
predictors, label, max_sequence_len = generate_padded_sequences(inp_sequences)
print(max_sequence_len)
def create_model(max_sequence_len, total_words):
input_len = max_sequence_len - 1
model = Sequential()
# Add Input Embedding Layer
model.add(Embedding(total_words, 10, input_length=input_len))
# Add Hidden Layer 1 - LSTM Layer
model.add(LSTM(100))
model.add(Dropout(0.1))
# Add Output Layer
model.add(Dense(total_words, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
return model
lstm_model = create_model(max_sequence_len, total_words)
lstm_model.summary()
lstm_model.fit(predictors, label, epochs=100, verbose=5)
def generate_text(seed_text, next_words, model, max_sequence_len):
for _ in range(next_words):
token_list = tokenizer.texts_to_sequences([seed_text])[0]
token_list = pad_sequences([token_list], maxlen=max_sequence_len-1, padding='pre')
predicted = model.predict_classes(token_list, verbose=0)
output_word = ""
for word,index in tokenizer.word_index.items():
if index == predicted:
output_word = word
break
seed_text += " "+output_word
return seed_text.title()
print (generate_text("", 5, lstm_model, max_sequence_len))
print (generate_text("euclidean", 4, lstm_model, max_sequence_len))
print (generate_text("generative", 5, lstm_model, max_sequence_len))
print (generate_text("ground breaking", 5, lstm_model, max_sequence_len))
print (generate_text("new", 4, lstm_model, max_sequence_len))
print (generate_text("understanding", 5, lstm_model, max_sequence_len))
print (generate_text("long short term memory", 6, lstm_model, max_sequence_len))
print (generate_text("LSTM", 6, lstm_model, max_sequence_len))
print (generate_text("a", 5, lstm_model, max_sequence_len))
print (generate_text("anomaly", 5, lstm_model, max_sequence_len))
print (generate_text("data", 7, lstm_model, max_sequence_len))
print (generate_text("designing", 7, lstm_model, max_sequence_len))
print (generate_text("reinforcement", 7, lstm_model, max_sequence_len))
```
# Results
To analyse each method, I counted the number of titles that made sense from start to finish and the number of titles that contained a sub-string that made sense. I named these two metrics **Coherent Titles** and **Coherent Sub-strings**.
I then generated 15 titles with each model and calculated the following results (a small sketch of the computation follows the table):
|Model|Coherent Titles|Coherent Sub-strings|
|------|-----|-----|
|Char RNN|6.67%|6.67%|
|Word RNN|40%|53%|
|LSTM|60%|100%|
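As a rough illustration (not part of the original analysis), the sketch below shows how such percentages can be computed from manual yes/no judgements; the flag values are hypothetical and were chosen only so that the counts reproduce the Word RNN row:
```
# Hypothetical manual judgements over 15 generated titles (1 = coherent, 0 = not).
# The particular ordering is made up; only the counts matter here.
coherent_title_flags = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]      # 6/15
coherent_substring_flags = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0]  # 8/15

def percent(flags):
    return 100 * sum(flags) / len(flags)

print("Coherent Titles: %.0f%%" % percent(coherent_title_flags))           # ~40%
print("Coherent Sub-strings: %.0f%%" % percent(coherent_substring_flags))  # ~53%
```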
It's apparent that the LSTM outperforms the RNNs, but that was to be expected. I think the word level RNN is actually quite good, and the char level one can definitely be improved upon. Also, the dataset is quite small; with a larger corpus I think the results would likely improve.
However, a more formalised method for comparing the models is definitely necessary for further research.
# Going Forward
I think a possible method of comparing the different models would be to use a language model that can indicate whether a sentence makes sense to some degree. That could then be applied to the generated titles in order to derive more meaningful and reproducible results. I was advised by my lecturer that a possible way of doing this is to use something like [Google Ngram](https://books.google.com/ngrams/info), and check whether a title or a substring of a title has been previously used to a certain degree. If it has, then it likely makes some sense.
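A minimal sketch of that idea is below. It uses the training titles themselves (the `cleaned` corpus loaded earlier) as a stand-in for a real n-gram database such as Google Ngram, purely for illustration:
```
def ngrams(text, n=3):
    """Return the set of word n-grams in a piece of text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

# Build a reference set of trigrams from the training corpus.
reference_trigrams = set()
for title in cleaned.split("\n"):
    reference_trigrams |= ngrams(title)

def fraction_known(generated_title, n=3):
    """Fraction of the generated title's trigrams that appear in the reference corpus."""
    grams = ngrams(generated_title, n)
    if not grams:
        return 0.0
    return len(grams & reference_trigrams) / len(grams)

print(fraction_known("machine learning approaches for visual classification"))
```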
The parameters for the different implementations can definitely also be experimented with in order to better understand the impact on the final titles.
I was also advised that an interesting area of research would be to generate a title for your paper (or writings) based on the abstract (or some subsection of your writings). This would very likely lead to titles that are more related to the actual content.
This was a very fun and interesting experience, and was inspired by the following:
* [Harry Potter and the Portrait of what Looked like a Large Pile of Ash by Botnik ](https://botnik.org/content/harry-potter.html)
* [King James Programming](https://kingjamesprogramming.tumblr.com/)
* [Alice in Elsinore](https://www.eblong.com/zarf/markov/alice_in_elsinore.txt) from [Fun with Markov Chains](https://www.eblong.com/zarf/markov/)
* [Stack Exchange Simulator](https://se-simulator.lw1.at/)
* [Pun Generation with Surprise](https://github.com/hhexiy/pungen)
# DeepDreaming with TensorFlow
>[Loading and displaying the model graph](#loading)
>[Naive feature visualization](#naive)
>[Multiscale image generation](#multiscale)
>[Laplacian Pyramid Gradient Normalization](#laplacian)
>[Playing with feature visualizations](#playing)
>[DeepDream](#deepdream)
This notebook demonstrates a number of Convolutional Neural Network image generation techniques implemented with TensorFlow for fun and science:
- visualize individual feature channels and their combinations to explore the space of patterns learned by the neural network (see [GoogLeNet](http://storage.googleapis.com/deepdream/visualz/tensorflow_inception/index.html) and [VGG16](http://storage.googleapis.com/deepdream/visualz/vgg16/index.html) galleries)
- embed TensorBoard graph visualizations into Jupyter notebooks
- produce high-resolution images with tiled computation ([example](http://storage.googleapis.com/deepdream/pilatus_flowers.jpg))
- use Laplacian Pyramid Gradient Normalization to produce smooth and colorful visuals at low cost
- generate DeepDream-like images with TensorFlow (DogSlugs included)
The network under examination is the [GoogLeNet architecture](http://arxiv.org/abs/1409.4842), trained to classify images into one of 1000 categories of the [ImageNet](http://image-net.org/) dataset. It consists of a set of layers that apply a sequence of transformations to the input image. The parameters of these transformations were determined during the training process by a variant of the gradient descent algorithm. The internal image representations may seem obscure, but it is possible to visualize and interpret them. In this notebook we are going to present a few tricks that allow us to make these visualizations both efficient to generate and even beautiful. Impatient readers can start by exploring the full galleries of images generated by the method described here for the [GoogLeNet](http://storage.googleapis.com/deepdream/visualz/tensorflow_inception/index.html) and [VGG16](http://storage.googleapis.com/deepdream/visualz/vgg16/index.html) architectures.
```
# boilerplate code
from __future__ import print_function
import os
from io import BytesIO
import numpy as np
from functools import partial
import PIL.Image
from IPython.display import clear_output, Image, display, HTML
import tensorflow as tf
```
<a id='loading'></a>
## Loading and displaying the model graph
The pretrained network can be downloaded [here](https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip). Unpack the `tensorflow_inception_graph.pb` file from the archive and set its path to `model_fn` variable. Alternatively you can uncomment and run the following cell to download the network:
```
!wget -nc https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip && unzip -n inception5h.zip
model_fn = 'tensorflow_inception_graph.pb'
# creating TensorFlow session and loading the model
graph = tf.Graph()
sess = tf.InteractiveSession(graph=graph)
with tf.gfile.FastGFile(model_fn, 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
t_input = tf.placeholder(np.float32, name='input') # define the input tensor
imagenet_mean = 117.0
t_preprocessed = tf.expand_dims(t_input-imagenet_mean, 0)
tf.import_graph_def(graph_def, {'input':t_preprocessed})
```
To take a glimpse into the kinds of patterns that the network learned to recognize, we will try to generate images that maximize the sum of activations of particular channel of a particular convolutional layer of the neural network. The network we explore contains many convolutional layers, each of which outputs tens to hundreds of feature channels, so we have plenty of patterns to explore.
```
layers = [op.name for op in graph.get_operations() if op.type=='Conv2D' and 'import/' in op.name]
feature_nums = [int(graph.get_tensor_by_name(name+':0').get_shape()[-1]) for name in layers]
print('Number of layers', len(layers))
print('Total number of feature channels:', sum(feature_nums))
# Helper functions for TF Graph visualization
def strip_consts(graph_def, max_const_size=32):
"""Strip large constant values from graph_def."""
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = tf.compat.as_bytes("<stripped %d bytes>"%size)
return strip_def
def rename_nodes(graph_def, rename_func):
res_def = tf.GraphDef()
for n0 in graph_def.node:
n = res_def.node.add()
n.MergeFrom(n0)
n.name = rename_func(n.name)
for i, s in enumerate(n.input):
n.input[i] = rename_func(s) if s[0]!='^' else '^'+rename_func(s[1:])
return res_def
def show_graph(graph_def, max_const_size=32):
"""Visualize TensorFlow graph."""
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
code = """
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
""".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
iframe = """
<iframe seamless style="width:800px;height:620px;border:0" srcdoc="{}"></iframe>
""".format(code.replace('"', '"'))
display(HTML(iframe))
# Visualizing the network graph. Be sure to expand the "mixed" nodes to see their
# internal structure. We are going to visualize "Conv2D" nodes.
tmp_def = rename_nodes(graph_def, lambda s:"/".join(s.split('_',1)))
show_graph(tmp_def)
```
<a id='naive'></a>
## Naive feature visualization
Let's start with a naive way of visualizing these. Image-space gradient ascent!
```
# Picking some internal layer. Note that we use outputs before applying the ReLU nonlinearity
# to have non-zero gradients for features with negative initial activations.
layer = 'mixed4d_3x3_bottleneck_pre_relu'
channel = 139 # picking some feature channel to visualize
# start with a gray image with a little noise
img_noise = np.random.uniform(size=(224,224,3)) + 100.0
def showarray(a, fmt='jpeg'):
a = np.uint8(np.clip(a, 0, 1)*255)
f = BytesIO()
PIL.Image.fromarray(a).save(f, fmt)
display(Image(data=f.getvalue()))
def visstd(a, s=0.1):
'''Normalize the image range for visualization'''
return (a-a.mean())/max(a.std(), 1e-4)*s + 0.5
def T(layer):
'''Helper for getting layer output tensor'''
return graph.get_tensor_by_name("import/%s:0"%layer)
def render_naive(t_obj, img0=img_noise, iter_n=20, step=1.0):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
img = img0.copy()
for i in range(iter_n):
g, score = sess.run([t_grad, t_score], {t_input:img})
# normalizing the gradient, so the same step size should work
g /= g.std()+1e-8 # for different layers and networks
img += g*step
print(score, end = ' ')
clear_output()
showarray(visstd(img))
render_naive(T(layer)[:,:,:,channel])
```
<a id="multiscale"></a>
## Multiscale image generation
Looks like the network wants to show us something interesting! Let's help it. We are going to apply gradient ascent on multiple scales. Details formed on smaller scale will be upscaled and augmented with additional details on the next scale.
With multiscale image generation it may be tempting to set the number of octaves to some high value to produce wallpaper-sized images. Storing network activations and backprop values will quickly run out of GPU memory in this case. There is a simple trick to avoid this: split the image into smaller tiles and compute each tile gradient independently. Applying random shifts to the image before every iteration helps avoid tile seams and improves the overall image quality.
```
def tffunc(*argtypes):
'''Helper that transforms TF-graph generating function into a regular one.
See "resize" function below.
'''
placeholders = list(map(tf.placeholder, argtypes))
def wrap(f):
out = f(*placeholders)
def wrapper(*args, **kw):
return out.eval(dict(zip(placeholders, args)), session=kw.get('session'))
return wrapper
return wrap
# Helper function that uses TF to resize an image
def resize(img, size):
img = tf.expand_dims(img, 0)
return tf.image.resize_bilinear(img, size)[0,:,:,:]
resize = tffunc(np.float32, np.int32)(resize)
def calc_grad_tiled(img, t_grad, tile_size=512):
'''Compute the value of tensor t_grad over the image in a tiled way.
Random shifts are applied to the image to blur tile boundaries over
multiple iterations.'''
sz = tile_size
h, w = img.shape[:2]
sx, sy = np.random.randint(sz, size=2)
img_shift = np.roll(np.roll(img, sx, 1), sy, 0)
grad = np.zeros_like(img)
for y in range(0, max(h-sz//2, sz),sz):
for x in range(0, max(w-sz//2, sz),sz):
sub = img_shift[y:y+sz,x:x+sz]
g = sess.run(t_grad, {t_input:sub})
grad[y:y+sz,x:x+sz] = g
return np.roll(np.roll(grad, -sx, 1), -sy, 0)
def render_multiscale(t_obj, img0=img_noise, iter_n=10, step=1.0, octave_n=3, octave_scale=1.4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
img = img0.copy()
for octave in range(octave_n):
if octave>0:
hw = np.float32(img.shape[:2])*octave_scale
img = resize(img, np.int32(hw))
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
# normalizing the gradient, so the same step size should work
g /= g.std()+1e-8 # for different layers and networks
img += g*step
print('.', end = ' ')
clear_output()
showarray(visstd(img))
render_multiscale(T(layer)[:,:,:,channel])
```
<a id="laplacian"></a>
## Laplacian Pyramid Gradient Normalization
This looks better, but the resulting images mostly contain high frequencies. Can we improve it? One way is to add a smoothness prior into the optimization objective. This will effectively blur the image a little every iteration, suppressing the higher frequencies, so that the lower frequencies can catch up. This will require more iterations to produce a nice image. Why don't we just boost lower frequencies of the gradient instead? One way to achieve this is through the [Laplacian pyramid](https://en.wikipedia.org/wiki/Pyramid_%28image_processing%29#Laplacian_pyramid) decomposition. We call the resulting technique _Laplacian Pyramid Gradient Normalization_.
```
k = np.float32([1,4,6,4,1])
k = np.outer(k, k)
k5x5 = k[:,:,None,None]/k.sum()*np.eye(3, dtype=np.float32)
def lap_split(img):
'''Split the image into lo and hi frequency components'''
with tf.name_scope('split'):
lo = tf.nn.conv2d(img, k5x5, [1,2,2,1], 'SAME')
lo2 = tf.nn.conv2d_transpose(lo, k5x5*4, tf.shape(img), [1,2,2,1])
hi = img-lo2
return lo, hi
def lap_split_n(img, n):
'''Build Laplacian pyramid with n splits'''
levels = []
for i in range(n):
img, hi = lap_split(img)
levels.append(hi)
levels.append(img)
return levels[::-1]
def lap_merge(levels):
'''Merge Laplacian pyramid'''
img = levels[0]
for hi in levels[1:]:
with tf.name_scope('merge'):
img = tf.nn.conv2d_transpose(img, k5x5*4, tf.shape(hi), [1,2,2,1]) + hi
return img
def normalize_std(img, eps=1e-10):
'''Normalize image by making its standard deviation = 1.0'''
with tf.name_scope('normalize'):
std = tf.sqrt(tf.reduce_mean(tf.square(img)))
return img/tf.maximum(std, eps)
def lap_normalize(img, scale_n=4):
'''Perform the Laplacian pyramid normalization.'''
img = tf.expand_dims(img,0)
tlevels = lap_split_n(img, scale_n)
tlevels = list(map(normalize_std, tlevels))
out = lap_merge(tlevels)
return out[0,:,:,:]
# Showing the lap_normalize graph with TensorBoard
lap_graph = tf.Graph()
with lap_graph.as_default():
lap_in = tf.placeholder(np.float32, name='lap_in')
lap_out = lap_normalize(lap_in)
show_graph(lap_graph)
def render_lapnorm(t_obj, img0=img_noise, visfunc=visstd,
iter_n=10, step=1.0, octave_n=3, octave_scale=1.4, lap_n=4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
# build the laplacian normalization graph
lap_norm_func = tffunc(np.float32)(partial(lap_normalize, scale_n=lap_n))
img = img0.copy()
for octave in range(octave_n):
if octave>0:
hw = np.float32(img.shape[:2])*octave_scale
img = resize(img, np.int32(hw))
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
g = lap_norm_func(g)
img += g*step
print('.', end = ' ')
clear_output()
showarray(visfunc(img))
render_lapnorm(T(layer)[:,:,:,channel])
```
<a id="playing"></a>
## Playing with feature visualizations
We got a nice smooth image using only 10 iterations per octave. When running on a GPU, this takes just a few seconds. Let's try to visualize another channel from the same layer. The network can generate a wide diversity of patterns.
```
render_lapnorm(T(layer)[:,:,:,65])
```
Lower layers produce features of lower complexity.
```
render_lapnorm(T('mixed3b_1x1_pre_relu')[:,:,:,101])
```
There are many interesting things one may try. For example, optimizing a linear combination of features often gives a "mixture" pattern.
```
render_lapnorm(T(layer)[:,:,:,65]+T(layer)[:,:,:,139], octave_n=4)
```
<a id="deepdream"></a>
## DeepDream
Now let's reproduce the [DeepDream algorithm](https://github.com/google/deepdream/blob/master/dream.ipynb) with TensorFlow.
```
def render_deepdream(t_obj, img0=img_noise,
iter_n=10, step=1.5, octave_n=4, octave_scale=1.4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
# split the image into a number of octaves
img = img0
octaves = []
for i in range(octave_n-1):
hw = img.shape[:2]
lo = resize(img, np.int32(np.float32(hw)/octave_scale))
hi = img-resize(lo, hw)
img = lo
octaves.append(hi)
# generate details octave by octave
for octave in range(octave_n):
if octave>0:
hi = octaves[-octave]
img = resize(img, hi.shape[:2])+hi
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
img += g*(step / (np.abs(g).mean()+1e-7))
print('.',end = ' ')
clear_output()
showarray(img/255.0)
```
Let's load some image and populate it with DogSlugs (in case you've missed them).
```
img0 = PIL.Image.open('pilatus800.jpg')
img0 = np.float32(img0)
showarray(img0/255.0)
render_deepdream(tf.square(T('mixed4c')), img0)
```
Note that results can differ from the [Caffe](https://github.com/BVLC/caffe)'s implementation, as we are using an independently trained network. Still, the network seems to like dogs and animal-like features due to the nature of the ImageNet dataset.
Using an arbitrary optimization objective still works:
```
render_deepdream(T(layer)[:,:,:,139], img0)
```
Don't hesitate to use higher resolution inputs (also increase the number of octaves)! Here is an [example](http://storage.googleapis.com/deepdream/pilatus_flowers.jpg) of running the flower dream over the bigger image.
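As a hedged sketch of that idea (the upscaling factor and octave count below are just illustrative choices, not values taken from the original example):
```
# Upscale the input and add octaves; calc_grad_tiled keeps GPU memory usage bounded.
big_img = np.float32(PIL.Image.open('pilatus800.jpg').resize((1600, 1200), PIL.Image.LANCZOS))
render_deepdream(tf.square(T('mixed4c')), big_img, octave_n=6)
```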
We hope that the visualization tricks described here may be helpful for analyzing representations learned by neural networks or find their use in various artistic applications.
# Tutorial
## [How to do Novelty Detection in Keras with Generative Adversarial Network](https://www.dlology.com/blog/how-to-do-novelty-detection-in-keras-with-generative-adversarial-network-part-2/) | DLology
This notebook is for the test phase of novelty detection. To train the model, run this first:
```bash
python models.py
```
It is recommended to understand how the model works in general before continuing the implementation.
→ [How to do Novelty Detection in Keras with Generative Adversarial Network (Part 1)](https://www.dlology.com/blog/how-to-do-novelty-detection-in-keras-with-generative-adversarial-network/)
```
from utils import *
from kh_tools import *
import models
import imp
imp.reload(models)
from models import ALOCC_Model
from keras.datasets import mnist
from keras.losses import binary_crossentropy
from keras import backend as K
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
self = ALOCC_Model(dataset_name='mnist', input_height=28, input_width=28)
```
## Choose a stopping criterion
The training procedure is stopped when R successfully maps noisy images to clean images carrying the concept of the target class, i.e., when R can reconstruct its input with minimal error. In the following case, we pick epoch 3.
```
# This image was generated at the end of the models.py training procedure to help pick an ending epoch to load.
from IPython.display import Image
Image(filename='plot_g_recon_losses.png')
# Load the epoch #3 saved weights.
self.adversarial_model.load_weights('./checkpoint/ALOCC_Model_3.h5')
(X_train, y_train), (_, _) = mnist.load_data()
X_train = X_train / 255
```
## Test the reconstruction loss and Discriminator output
The `abnormal` image has a **`larger` reconstruction loss** and **`smaller` discriminator output value**.
```
def test_reconstruction(label, data_index = 11):
specific_idx = np.where(y_train == label)[0]
if data_index >= len(X_train):
data_index = 0
data = X_train[specific_idx].reshape(-1, 28, 28, 1)[data_index:data_index+1]
model_predicts = self.adversarial_model.predict(data)
fig= plt.figure(figsize=(8, 8))
columns = 1
rows = 2
fig.add_subplot(rows, columns, 1)
input_image = data.reshape((28, 28))
reconstructed_image = model_predicts[0].reshape((28, 28))
plt.title('Input')
plt.imshow(input_image, label='Input')
fig.add_subplot(rows, columns, 2)
plt.title('Reconstruction')
plt.imshow(reconstructed_image, label='Reconstructed')
plt.show()
# Compute the mean binary_crossentropy loss of reconstructed image.
y_true = K.variable(reconstructed_image)
y_pred = K.variable(input_image)
error = K.eval(binary_crossentropy(y_true, y_pred)).mean()
print('Reconstruction loss:', error)
print('Discriminator Output:', model_predicts[1][0][0])
```
### Normal case
The network was trained with label == 1.
```
test_reconstruction(1)
```
## Abnormal cases
The network was not trained on these labels, so the Generator/R network finds it hard to reconstruct the input images, which is reflected in higher reconstruction loss values.
The Discriminator also outputs lower values compared to the normal case.
```
test_reconstruction(3)
test_reconstruction(5)
test_reconstruction(7)
```
# Convert OpenSN data to name,host,type,x,y,z,t,lum
Data downloaded from The Open Supernova Catalog https://sne.space on Aug. 20, 2019
```
import pandas as pd
import numpy as np
from astropy import units
from astropy.coordinates import SkyCoord, Distance
from astropy.cosmology import WMAP9
import datetime
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv('OpenSNCat.csv')
#select the ones that have all the data we need
#In the end, I want z, but sonce there are multiple z values for some sources,
# I think I will just use the luminosity distance and convert below
df = df.loc[(df['R.A.'].notnull()) & \
(df['Dec.'].notnull()) & \
(df['dL (Mpc)'].notnull()) & \
(df['Disc. Date'].notnull()) & \
(df['Mmax'].notnull())]
df
```
I will have to iterate through the rows, since some coords have multiple entries, and some dates are bad
```
x = []
y = []
z = []
t = []
log10lum = []
name = []
host = []
tpe = []
#for datetime
fmt = '%Y/%m/%d'
N = 1e10
for index, row in df.iterrows():
bad = False
#there are still some dates that cause errors (e.g., 185/12/07/)
date = str(row['Disc. Date'])
pos = date.find(',')
fmt0 = fmt
if (pos != -1):
date = row['Disc. Date'][0:pos]
pos1 = date.find('/')
pos2 = date.rfind('/')
if (pos1 == -1):
fmt0 = '%Y'
if (pos1 != -1 and pos2 == pos1):
fmt0 = '%Y/%m/'
if (fmt0 == fmt):
val1 = int(date[0:pos1])
if (val1 <= 12):
fmt0 = '%m/%d/%Y'
if (val1 > 12 and val1 < 1800):
bad = True
if (not bad):
dt = datetime.datetime.strptime(date, fmt0)
t.append(dt.year + dt.month/12. + dt.day/365.24)
ra = row['R.A.']
pos = str(ra).find(',')
if (pos != -1):
ra = row['R.A.'][0:pos]
dec = row['Dec.']
pos = str(dec).find(',')
if (pos != -1):
dec = row['Dec.'][0:pos]
d = row['dL (Mpc)']*units.Mpc
#convert to comoving distance
cosmoz = Distance(d).z
c1 = SkyCoord(ra, dec, unit=(units.hourangle, units.deg), distance=WMAP9.comoving_distance(cosmoz)).galactic.cartesian
x.append(c1.x.to(units.Mpc).value)
y.append(c1.y.to(units.Mpc).value)
z.append(c1.z.to(units.Mpc).value)
log10lum.append(0.4*(4.74 - row['Mmax']))
name.append(row['Name'])
host.append(row['Host Name'])
tpe.append(row['Type'])
if (index > N):
break
print(min(t), max(t))
f, (ax1, ax2) = plt.subplots(1,2, figsize=(10, 5))
_ = ax1.hist(t,bins=100)
_ = ax2.hist(log10lum,bins=100)
```
### Write this to a new csv file
```
print(len(name), len(host), len(tpe), len(x), len(y), len(z), len(t))
data = {'name':np.array(name),
'host':np.array(host),
'type':np.array(tpe),
'x':np.array(x),
'y':np.array(y),
'z':np.array(z),
't':np.array(t),
'log10lum':np.array(log10lum)}
pd.DataFrame(data).to_csv('OpenSNCatConverted.csv', index=False)
def unique(list1):
    # initialize a null list
unique_list = []
# traverse for all elements
for x in list1:
# check if exists in unique_list or not
if x not in unique_list:
unique_list.append(x)
# print list
for x in unique_list:
print(x)
unique(tpe)
```
```
%matplotlib notebook
# test imports
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sklearn
print(f"The version of numpy is: {np.__version__}")
print(f"The version of pandas is: {pd.__version__}")
print(f"The version of scikit-learn is: {sklearn.__version__}")
```
You should see the versions of the libaries installed in your environment. If you are using the local virtual environment set up by `pipenv`, you should see the following:
```
The version of numpy is: 1.19.5
The version of pandas is: 1.1.5
The version of scikit-learn is: 0.22.2.post1
```
If you are running this notebook on Google Colab, your versions might be different.
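If you want to match the versions listed above exactly (for example on Colab), you could pin them when installing. The command below simply mirrors those versions and is optional:
```
!pip install numpy==1.19.5 pandas==1.1.5 scikit-learn==0.22.2.post1
```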
## Variables
```
number_1 = 1
number_2 = 2.0
print(number_1)
print(type(number_1))
print(number_2)
print(type(number_2))
string_1 = "hello"
string_2 = "hello world"
print(string_1)
print(type(string_1))
print(string_2)
print(type(string_2))
list_1 = [1, 2, 3]
print(list_1)
print(len(list_1))
list_2 = ["hello", "world", "1", 1]
print(list_2)
print(list_2[2])
dict_1 = {
"class_number": "MECE2020",
"class_capacity": 150,
}
print(type(dict_1))
print(dict_1["class_capacity"])
```
## Operators
```
number_1 + number_2
# this will fail
list_1 / number_1
number_1 >= number_2
len(list_2[0]) == len(list_2[1])
(len(list_2[0]) == len(list_2[1])) and False
```
## Control Structures
```
weather_today = "raining"
if weather_today == "raining":
    print("Bring an umbrella!")
elif weather_today == "sunny":
    print("Enjoy the sun!")
else:
    print("What is the weather today?")
for i in range(10):
print("The number is:", i)
i = 0
while i < 10:
print("The number is:", i)
i += 1
```
## List comprehension
```
list_3 = []
for i in range(10):
list_3.append(i**2)
list_3
list_4 = [i**2 for i in range(10)]
print(list_4)
list_5 = [1.25, -9.45, 10.22, 3.78, -5.92, 1.16]
list_6 = [x if x > 0 else 0 for x in list_5]
list_6
```
## Function
```
def add(number_1: float, number_2: float) -> float:
"""Add two numbers."""
return number_1 + number_2
add(1, 2)
def square_root(x: float) -> float:
"""Calcuate the square root of the input using Newton's method.
Args:
x (float): The input number, must be greater or equal to zero.
Returns:
        (float): Square root of the input.
Raises:
ValueError: If the input number is negative.
"""
if x < 0:
raise ValueError("The input number can not be negative.")
def get_next_guess(current_guess: float) -> float:
"""Get next guess using Newton's method."""
return 0.5 * (current_guess + x / current_guess)
    epsilon = 1e-5
current_guess = x
next_guess = get_next_guess(current_guess)
    while abs(current_guess - next_guess) > epsilon:
current_guess = next_guess
next_guess = get_next_guess(current_guess)
return next_guess
square_root(3)
```
## Class
```
class Person:
"""A simple class."""
def __init__(self, name: str):
self.name = name
def say(self, words: str):
"""Say something."""
print(f"{self.name} says: {words}")
    def pat(self, person: "Person"):
"""Pat another person."""
print(f"{self.name} pats {person.name}'s shoulder.")
person_1 = Person("John Doe")
person_1.say("Hello!")
person_2 = Person("Jane Doe")
person_2.say("Hello too!")
person_1.pat(person_2)
```
## Using `pandas`
```
data = pd.read_csv("https://raw.githubusercontent.com/changyaochen/MECE4520/master/lectures/lecture_1/iris.csv")
data.head()
data.shape
# some simple data aggregation
data["Species"].value_counts()
data.groupby("Species")["SepalLengthCm"].mean()
# Some simple visualization
plt.scatter(x=data["SepalLengthCm"], y=data["SepalWidthCm"])
plt.xlabel("SepalLengthCm")
plt.ylabel("SepalWidthCm")
plt.tight_layout()
plt.show()
```
<a href="https://colab.research.google.com/github/gyyang/neurogym/blob/master/examples/demo.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Exploring NeuroGym tasks
NeuroGym is a comprehensive toolkit that allows training any network model on many established neuroscience tasks using Reinforcement Learning techniques. It includes working memory tasks, value-based decision tasks and context-dependent perceptual categorization tasks.
In this notebook we first show how to install the relevant toolbox.
We then show how to access the available tasks and their relevant information.
Finally we train an LSTM network on the Random Dots Motion task using the A2C algorithm [Mnih et al. 2016](https://arxiv.org/abs/1602.01783) implemented in the [stable-baselines](https://github.com/hill-a/stable-baselines) toolbox, and plot the results.
You can easily change the code to train a network on any other available task or using a different algorithm (e.g. ACER, PPO2).
### Installation on google colab
```
%tensorflow_version 1.x
# Install gym
! pip install gym
# Install neurogym
! git clone https://github.com/gyyang/neurogym.git
%cd neurogym/
! pip install -e .
# Install stable-baselines
! pip install --upgrade stable-baselines
```
### Explore tasks
```
import warnings
import gym
import neurogym as ngym
from neurogym.utils import info, plotting
warnings.filterwarnings('ignore')
info.all_tasks()
```
### Visualize a single task
```
task = 'PerceptualDecisionMaking-v0'
env = gym.make(task);
print(env)
plotting.plot_env(env, num_steps=300, def_act=0, ob_traces=['Fixation cue', 'Stim1', 'Stim2'], fig_kwargs={'figsize': (12, 12)});
```
### Explore wrappers
```
info.all_wrappers()
info.info_wrapper('TrialHistory-v0', show_code=True);
```
### Train a network
```
import warnings
import numpy as np
from neurogym.wrappers import trial_hist, monitor
from stable_baselines.common.policies import LstmPolicy
from stable_baselines.common.vec_env import DummyVecEnv
from stable_baselines import A2C # ACER, PPO2
warnings.filterwarnings('default')
# task parameters
task = 'PerceptualDecisionMaking-v0'
timing = {'fixation': ('constant', 300),
'stimulus': ('constant', 700),
'decision': ('constant', 300)}
kwargs = {'dt': 100, 'timing': timing, 'stim_scale': 2}
# wrapper parameters
n_ch = 2
p = 0.8
num_blocks = 2
block_1 = np.array([[p, 1-p], [1-p, p]]) # repeating block
block_2 = np.array([[1-p, p], [p, 1-p]]) # alternating block
probs = np.empty((num_blocks, n_ch, n_ch))
probs[0, :, :] = block_1
probs[1, :, :] = block_2
block_dur = 50
# build task
env = gym.make(task, **kwargs)
# Apply the wrapper
env = trial_hist.TrialHistory(env, probs=probs, block_dur=block_dur)
env = monitor.Monitor(env, folder='content/tests/', sv_per=10000, verbose=1, sv_fig=True, num_stps_sv_fig=100)
# the env is now wrapped automatically when passing it to the constructor
env = DummyVecEnv([lambda: env])
model = A2C(LstmPolicy, env, verbose=1, policy_kwargs={'feature_extraction':"mlp"})
model.learn(total_timesteps=500000, log_interval=100000)
env.close()
```
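As mentioned above, switching to a different algorithm only requires changing the model constructor. Below is a hedged sketch using PPO2 instead of A2C; since the environment above was closed, it is rebuilt first, and `nminibatches` is set to 1 because with a recurrent policy it must divide the number of parallel environments.
```
from stable_baselines import PPO2

# Rebuild the wrapped environment (the one above was closed)
env = gym.make(task, **kwargs)
env = trial_hist.TrialHistory(env, probs=probs, block_dur=block_dur)
env = DummyVecEnv([lambda: env])

# Same call pattern as A2C; only the algorithm class changes
model = PPO2(LstmPolicy, env, verbose=1, nminibatches=1,
             policy_kwargs={'feature_extraction': "mlp"})
model.learn(total_timesteps=500000, log_interval=100000)
env.close()
```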
### Visualize results
```
import numpy as np
import matplotlib.pyplot as plt
# Create task
env = gym.make(task, **kwargs)
# Apply the wrapper
env = trial_hist.TrialHistory(env, probs=probs, block_dur=block_dur)
env = DummyVecEnv([lambda: env])
plotting.plot_env(env, num_steps=50, def_act=0, ob_traces=['Fixation cue', 'Stim1', 'Stim2'], fig_kwargs={'figsize': (12, 12)}, model=model);
```
# MeshCat Animations
MeshCat.jl also provides an animation interface, built on top of the [three.js animation system](https://threejs.org/docs/#manual/introduction/Animation-system). While it is possible to construct animation clips and tracks manually, just as you would in Three.js, it's generally easier to use the MeshCat `Animation` type.
Let's show off building a simple animation. We first have to create our scene:
```
import meshcat
from meshcat.geometry import Box
vis = meshcat.Visualizer()
## To open the visualizer in a new browser tab, do:
# vis.open()
## To open the visualizer inside this jupyter notebook, do:
# vis.jupyter_cell()
vis["box1"].set_object(Box([0.1, 0.2, 0.3]))
```
### Building an Animation
We construct an animation by first creating a blank `Animation()` object. We can then use the `at_frame` method to set properties or transforms of the animation at specific frames of the animation. Three.js will automatically interpolate between whatever values we provide.
For example, let's animate moving the box from [0, 0, 0] to [0, 1, 0]:
```
from meshcat.animation import Animation
import meshcat.transformations as tf
anim = Animation()
with anim.at_frame(vis, 0) as frame:
# `frame` behaves like a Visualizer, in that we can
# call `set_transform` and `set_property` on it, but
# it just stores information inside the animation
# rather than changing the current visualization
frame["box1"].set_transform(tf.translation_matrix([0, 0, 0]))
with anim.at_frame(vis, 30) as frame:
frame["box1"].set_transform(tf.translation_matrix([0, 1, 0]))
# `set_animation` actually sends the animation to the
# viewer. By default, the viewer will play the animation
# right away. To avoid that, you can also pass `play=false`.
vis.set_animation(anim)
```
You should see the box slide 1 meter to the right in the viewer. If you missed the animation, you can run it again from the viewer. Click "Open Controls", find the "Animations" section, and click "play".
### Animating the Camera
The camera is just another object in the MeshCat scene. To set its transform, we just need to index into the visualizer with the right path (note the leading `/`):
```
vis["/Cameras/default"].set_transform(tf.translation_matrix([0, 0, 1]))
```
To animate the camera, we just have to apply that same kind of `set_transform` call to individual frames in an animation:
```
anim = Animation()
with anim.at_frame(vis, 0) as frame:
frame["/Cameras/default"].set_transform(tf.translation_matrix([0, 0, 0]))
with anim.at_frame(vis, 30) as frame:
frame["/Cameras/default"].set_transform(tf.translation_matrix([0, 0, 1]))
# we can repeat the animation playback with the
# repetitions argument:
vis.set_animation(anim, repetitions=2)
```
We can also animate object properties. For example, let's animate the camera's `zoom` property to smoothly zoom out and then back in. Note that to do this, we have to access a deeper path in the visualizer to get to the actual camera object. For more information, see: https://github.com/rdeits/meshcat#camera-control
```
anim = Animation()
camera_path = "/Cameras/default/rotated/<object>"
with anim.at_frame(vis, 0) as frame:
frame[camera_path].set_property("zoom", "number", 1)
with anim.at_frame(vis, 30) as frame:
frame[camera_path].set_property("zoom", "number", 0.5)
with anim.at_frame(vis, 60) as frame:
frame[camera_path].set_property("zoom", "number", 1)
# While we're animating the camera zoom, we can also animate any other
# properties we want. Let's simultaneously translate the box during
# the same animation:
with anim.at_frame(vis, 0) as frame:
frame["box1"].set_transform(tf.translation_matrix([0, -1, 0]))
with anim.at_frame(vis, 60) as frame:
frame["box1"].set_transform(tf.translation_matrix([0, 1, 0]))
vis.set_animation(anim)
```
### Recording an Animation
To record an animation at a smooth, fixed frame rate, click on "Open Controls" in the viewer, and then go to "Animations" -> "default" -> "Recording" -> "record". This will play the entire animation, recording every frame and then let you download the resulting frames to your computer.
To record activity in the MeshCat window that isn't a MeshCat animation, we suggest using a screen-capture tool like Quicktime for macOS or RecordMyDesktop for Linux.
### Converting the Animation into a Video
Currently, meshcat can only save an animation as a `.tar` file consisting of a list of `.png` images, one for each frame. To convert that into a video, you will need to install the `ffmpeg` program, and then you can run:
```
from meshcat.animation import convert_frames_to_video
convert_frames_to_video("/home/rdeits/Downloads/meshcat_1528401494656.tar", overwrite=True)
```
# Variational Multi-modal Recurrent Graph AutoEncoder
In this tutorial, we will go through how to run a Variational Multi-modal Recurrent Graph AutoEncoder (VMR-GAE) model for origin-destination (OD) matrix completion. In particular, we will demonstrate how to train the model and evaluate the completion results.
## Part I: Training
In this part, we will show how to train a VMR-GAE model for OD matrix completion on the NYC taxi dataset. In particular, we adopt several training techniques from previous works, including data normalization and Kullback-Leibler (KL) loss delay.
Visit `paddlespatial/networks/vmrgae/train.py` for more details.
```
import argparse
import os
import numpy as np
import paddle
import pgl
from model import VmrGAE
import utils as utils
from utils import MinMaxScaler
```
The VmrGAE class is built upon PaddlePaddle, which is a deep learning framework.
```
def prep_env(flag='train'):
# type: (str) -> dict
"""
Desc:
Prepare the environment
Args:
flag: specify the environment, 'train' or 'evaluate'
Returns:
A dict indicating the environment variables
"""
parser = \
argparse.ArgumentParser(description='{} [VMR-GAE] on the task of OD Matrix Completion'
.format("Training" if flag == "train" else "Evaluating"))
parser.add_argument('--num_nodes', type=int, default=263, help='The number of nodes in the graph')
parser.add_argument('--timelen', type=int, default=3, help='The length of input sequence')
parser.add_argument('--hidden_dim', type=int, default=32, help='The dimensionality of the hidden state')
parser.add_argument('--rnn_layer', type=int, default=2, help='The number of RNN layers')
parser.add_argument('--delay', type=int, default=0, help='delay to apply kld_loss')
parser.add_argument('--clip_max_value', type=int, default=1, help='clip the max value')
parser.add_argument('--align', type=bool, default=True,
help='Whether or not align the distributions of two modals')
parser.add_argument('--x_feature', type=bool, default=False,
help='X is a feature matrix (if True) or an identity matrix (otherwise)')
parser.add_argument('--data_path', type=str, default='./data/NYC-taxi', help='Data path')
parser.add_argument('--checkpoints', type=str, default='./nyc/checkpoints', help='Checkpoints path')
parser.add_argument('--device', type=str, default='cpu', help='cpu or gpu')
if flag == "train":
parser.add_argument('--iter_num', type=int, default=10, help='The number of iterations')
        parser.add_argument('--learning_rate', type=float, default=0.0001, help='The learning rate')
parser.add_argument('--result_path', type=str, default='./nyc/results', help='result path')
else:
parser.add_argument('--sample_time', type=int, default=10, help='The sample time for point estimation')
args = parser.parse_known_args()[0]
if flag == "train":
if not os.path.exists(args.checkpoints):
os.makedirs(args.checkpoints)
if not os.path.exists(args.result_path):
os.makedirs(args.result_path)
else:
if not os.path.exists(args.checkpoints):
print('Checkpoint does not exist.')
exit()
primary_flow = np.load('%s/train_data.npy' % args.data_path, allow_pickle=True)
supp_flow = np.load('%s/green_data.npy' % args.data_path, allow_pickle=True)
train_data = np.load('%s/train_data.npy' % args.data_path, allow_pickle=True)[-1]
val_data = np.load('%s/val_data.npy' % args.data_path, allow_pickle=True)
test_data = np.load('%s/test_data.npy' % args.data_path, allow_pickle=True)
# scaling data
ground_truths = []
for i in range(len(primary_flow)):
primary_flow[i][0] = np.array(primary_flow[i][0]).astype("int")
primary_flow[i][1] = np.array(primary_flow[i][1]).astype("float32")
ground_truths.append(utils.index_to_adj_np(primary_flow[i][0], primary_flow[i][1], args.num_nodes))
ground_truths = np.stack(ground_truths, axis=0)
if args.clip_max_value == 1:
max_value = 50
else:
print(np.concatenate(primary_flow[:, 1]).max())
max_value = np.concatenate(primary_flow[:, 1]).max()
primary_scale = MinMaxScaler(0, max_value)
for i in range(args.timelen):
primary_flow[i][1] = primary_scale.transform(primary_flow[i][1])
for i in range(len(supp_flow)):
supp_flow[i][0] = np.array(supp_flow[i][0]).astype("int")
supp_flow[i][1] = np.array(supp_flow[i][1]).astype("float32")
supp_scale = MinMaxScaler(0, np.concatenate(supp_flow[:, 1]).max())
for i in range(args.timelen):
supp_flow[i][1] = supp_scale.transform(supp_flow[i][1])
# load into paddle
mask = np.zeros((args.num_nodes, args.num_nodes))
for i in range(args.timelen):
mask[np.where(ground_truths[i] > (2 / max_value))] = 1.0
target_graph = []
for i in range(len(primary_flow)):
target_graph.append(pgl.Graph(edges=primary_flow[i][0],
num_nodes=args.num_nodes,
edge_feat={'efeat': paddle.to_tensor(primary_flow[i][1])}))
supp_graph = []
for i in range(len(primary_flow)):
supp_graph.append(pgl.Graph(edges=supp_flow[i][0],
num_nodes=args.num_nodes,
edge_feat={'efeat': paddle.to_tensor(supp_flow[i][1])}))
mask = paddle.to_tensor(mask)
xs = paddle.to_tensor([np.eye(args.num_nodes) for i in range(args.timelen)])
x = paddle.to_tensor([np.eye(args.num_nodes) for i in range(args.timelen)])
ground_truths = paddle.to_tensor(ground_truths, dtype='float32')
res = {
"args": args,
"primary_flow": primary_flow, "primary_scale": primary_scale, "target_graph": target_graph, "x": x,
"mask": mask,
# "supp_flow": supp_flow, "supp_scale": supp_scale,
"supp_graph": supp_graph, "xs": xs,
"ground_truths": ground_truths,
"train_data": train_data, "val_data": val_data, "test_data": test_data
}
return res
```
### Environment Preparation
Here we use `argparse` to define the settings, including model parameters, file paths, and training arguments. We then load the data from the given path and normalize it. Since we use `PaddlePaddle` as the backend, we also convert the data into `paddle` tensors. Note that the iteration number is set to 10 only for demonstration; real training needs a much larger number (e.g., 10e5).
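The `MinMaxScaler` used above is imported from the local `utils` module (`paddlespatial/networks/vmrgae/utils.py`). A minimal sketch that is consistent with how it is called here (constructed from a minimum and a maximum value, with `transform` and `inverse_transform` methods) might look like the following; it is only illustrative and not the actual implementation.
```
class MinMaxScalerSketch(object):
    """Illustrative min-max scaler; the real class lives in utils.py."""
    def __init__(self, min_value, max_value):
        self.min_value = min_value
        self.max_value = max_value

    def transform(self, data):
        # scale the data into [0, 1]
        return (data - self.min_value) / (self.max_value - self.min_value)

    def inverse_transform(self, data):
        # map scaled values back to the original range
        return data * (self.max_value - self.min_value) + self.min_value
```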
```
if __name__ == '__main__':
env = prep_env()
if env['args'].device=='gpu':
paddle.set_device('gpu')
```
### Load the model and settings
The class is defined in the `paddlespatial/networks/vmrgae/model.py`.
Check it for more details.
```
model = VmrGAE(x_dim=env["x"].shape[-1], d_dim=env["xs"].shape[-1], h_dim=env["args"].hidden_dim,
num_nodes=env["args"].num_nodes, n_layers=env["args"].rnn_layer,
eps=1e-10, same_structure=True)
```
Before training, read the checkpoints if available
```
if not os.path.isfile('%s/model.pdparams' % env["args"].checkpoints):
print("Start new train (model).")
min_loss = np.Inf
epoch = 0
else:
print("Found the model file. continue to train ... ")
model.set_state_dict(paddle.load('%s/model.pdparams' % env["args"].checkpoints))
min_loss = paddle.load('%s/minloss.pdtensor' % env["args"].checkpoints)
epoch = np.load('%s/logged_epoch.npy' % env["args"].checkpoints)
optimizer = paddle.optimizer.Adam(learning_rate=env["args"].learning_rate, parameters=model.parameters())
if os.path.isfile('%s/opt_state.pdopt' % env["args"].checkpoints):
opt_state = paddle.load('%s/opt_state.pdopt' % env["args"].checkpoints)
optimizer.set_state_dict(opt_state)
patience = np.Inf
best_val_mape = np.Inf
max_iter = 0
```
### Start train
We now initialize the Adam optimizer and start the training procedure. The learning rate is set to 0.0001. An early-stopping mechanism can be activated here if needed. We then train the model for 10 epochs for demonstration purposes. In each epoch, we obtain the losses, the critical intermediate variables, and the completed OD matrix. Whenever the loss decreases, we save a checkpoint.
```
for k in range(epoch, env["args"].iter_num):
kld_loss_tvge, kld_loss_avde, pis_loss, all_h, all_enc_mean, all_prior_mean, all_enc_d_mean, all_dec_t, \
all_z_in, all_z_out \
= model(env["x"], env["xs"], env["target_graph"], env["supp_graph"], env["mask"],
env["primary_scale"], env["ground_truths"])
pred = env["primary_scale"].inverse_transform(all_dec_t[-1].numpy())
val_MAE, val_RMSE, val_MAPE = utils.validate(pred, env["val_data"][0],
env["val_data"][1], flag='val')
test_MAE, test_RMSE, test_MAPE = utils.validate(pred, env["test_data"][0],
env["test_data"][1], flag='test')
if val_MAPE < best_val_mape:
best_val_mape = val_MAPE
max_iter = 0
else:
max_iter += 1
if max_iter >= patience:
print('Early Stop!')
break
if k >= env["args"].delay:
loss = kld_loss_tvge + kld_loss_avde + pis_loss
else:
loss = pis_loss
loss.backward()
optimizer.step()
optimizer.clear_grad()
if k % 10 == 0:
print('epoch: ', k)
print('loss =', loss.mean().item())
print('kld_loss_tvge =', kld_loss_tvge.mean().item())
print('kld_loss_avde =', kld_loss_avde.mean().item())
print('pis_loss =', pis_loss.mean().item())
print('val', "MAE:", val_MAE, 'RMSE:', val_RMSE, 'MAPE:', val_MAPE)
print('test', "MAE:", test_MAE, 'RMSE:', test_RMSE, 'MAPE:', test_MAPE)
if (loss.mean() < min_loss).item() | (k == env["args"].delay):
print('epoch: %d, Loss goes down, save the model. pis_loss = %f' % (k, pis_loss.mean().item()))
print('val', "MAE:", val_MAE, 'RMSE:', val_RMSE, 'MAPE:', val_MAPE)
print('test', "MAE:", test_MAE, 'RMSE:', test_RMSE, 'MAPE:', test_MAPE)
min_loss = loss.mean().item()
paddle.save(all_enc_mean, '%s/all_enc_mean.pdtensor' % env["args"].result_path)
paddle.save(all_prior_mean, '%s/all_prior_mean.pdtensor' % env["args"].result_path)
paddle.save(all_enc_d_mean, '%s/all_enc_d_mean.pdtensor' % env["args"].result_path)
paddle.save(all_dec_t, '%s/all_dec_t.pdtensor' % env["args"].result_path)
paddle.save(all_z_in, '%s/all_z_in.pdtensor' % env["args"].result_path)
paddle.save(all_z_out, '%s/all_z_out.pdtensor' % env["args"].result_path)
paddle.save(model.state_dict(), '%s/model.pdparams' % env["args"].checkpoints)
paddle.save(loss.mean(), '%s/minloss.pdtensor' % env["args"].checkpoints)
paddle.save(optimizer.state_dict(), '%s/opt_state.pdopt' % env["args"].checkpoints)
np.save('%s/logged_epoch.npy' % env["args"].checkpoints, k)
```
The above covers the training steps; adjust them as needed.
## Part II: Result Evaluation
Below we will introduce how to use the trained model for OD matrix completion and evaluate the results.
Visit `paddlespatial/networks/vmrgae/eval.py` for more details.
```
from train import prep_env
if __name__ == '__main__':
env = prep_env(flag='eval')
if env['args'].device=='gpu':
paddle.set_device('gpu')
# load VMR-GAE and run
model = VmrGAE(x_dim=env["x"].shape[-1], d_dim=env["xs"].shape[-1], h_dim=env["args"].hidden_dim,
num_nodes=env["args"].num_nodes, n_layers=env["args"].rnn_layer,
eps=1e-10, same_structure=True)
if not os.path.isfile('%s/model.pdparams' % env["args"].checkpoints):
print('Checkpoint does not exist.')
exit()
else:
model.set_state_dict(paddle.load('%s/model.pdparams' % env["args"].checkpoints))
min_loss = paddle.load('%s/minloss.pdtensor' % env["args"].checkpoints)
epoch = np.load('%s/logged_epoch.npy' % env["args"].checkpoints)
```
Here we reuse the preparation function from the training process, with the `eval` flag, to hold the model configuration. Then we load the model and the available checkpoint.
### Start Evaluation
We run the trained model `sample_time` times and report the mean values as the completion results, along with the standard deviations.
```
pred = []
for i in range(env["args"].sample_time):
_, _, _, _, _, _, _, all_dec_t, _, _ \
= model(env["x"], env["xs"], env["target_graph"], env["supp_graph"], env["mask"],
env["primary_scale"], env["ground_truths"])
pred.append(env["primary_scale"].inverse_transform(all_dec_t[-1].numpy()))
pred = np.stack(pred, axis=0)
pe, std = pred.mean(axis=0), pred.std(axis=0)
pe[np.where(pe < 0.5)] = 0
print(pe)
```
# HW3: Variational Autoencoders
```
import torch
import torch.optim as optim
import torch.nn as nn
from torch.distributions import Normal
from itertools import chain
from torchlib.generative_model.autoencoder.vae import VAE
from torchlib.dataset.utils import create_data_loader
from torchlib.utils.distributions import IndependentNormal
from sklearn.model_selection import train_test_split
from torchlib.common import FloatTensor, move_tensor_to_gpu
import matplotlib.pyplot as plt
%load_ext autoreload
%autoreload 2
%qtconsole
```
## VAEs in 2D
### Part A
```
import numpy as np
def sample_data_1():
count = 100000
rand = np.random.RandomState(0)
return [[1.0, 2.0]] + rand.randn(count, 2) * [[5.0, 1.0]]
def sample_data_2():
count = 100000
rand = np.random.RandomState(0)
return [[1.0, 2.0]] + (rand.randn(count, 2) * [[5.0, 1.0]]).dot(
[[np.sqrt(2) / 2, np.sqrt(2) / 2], [-np.sqrt(2) / 2, np.sqrt(2) / 2]])
data_1 = sample_data_1().astype(np.float32)
data_1_loader = create_data_loader((data_1,), batch_size=1024)
data_2 = sample_data_2().astype(np.float32)
data_2_loader = create_data_loader((data_2,), batch_size=1024)
# visualize data distribution
plt.scatter(data_1[:, 0], data_1[:, 1])
# visualize data distribution
plt.scatter(data_2[:, 0], data_2[:, 1])
```
### Define prior
```
prior = Normal(loc=torch.zeros(2).type(FloatTensor), scale=torch.ones(2).type(FloatTensor))
```
### Define encoder
```
class Encoder(nn.Module):
def __init__(self, code_size=2, nn_size=32):
super(Encoder, self).__init__()
self.model = nn.Sequential(
nn.Linear(2, nn_size),
nn.BatchNorm1d(nn_size),
nn.ReLU(),
nn.Linear(nn_size, nn_size),
nn.BatchNorm1d(nn_size),
nn.ReLU(),
nn.Linear(nn_size, nn_size),
nn.BatchNorm1d(nn_size),
nn.ReLU(),
)
self.mu = nn.Linear(nn_size, code_size)
self.logvar = nn.Linear(nn_size, code_size)
def forward(self, x):
x = self.model(x)
mu = self.mu(x)
logvar = self.logvar(x)
return Normal(mu, torch.exp(logvar))
```
### Define decoder
```
class Decoder_1(nn.Module):
def __init__(self, code_size=2, nn_size=32):
super(Decoder_1, self).__init__()
self.model = nn.Sequential(
nn.Linear(code_size, nn_size),
nn.BatchNorm1d(nn_size),
nn.ReLU(),
nn.Linear(nn_size, nn_size),
nn.BatchNorm1d(nn_size),
nn.ReLU(),
nn.Linear(nn_size, nn_size),
nn.BatchNorm1d(nn_size),
nn.ReLU(),
)
self.mu = nn.Linear(nn_size, code_size)
self.logvar = nn.Linear(nn_size, code_size)
def forward(self, x):
x = self.model(x)
mu = self.mu(x)
logvar = self.logvar(x)
return IndependentNormal(mu, torch.exp(logvar))
class Decoder_2(nn.Module):
def __init__(self, code_size=2, nn_size=32):
super(Decoder_2, self).__init__()
self.model = nn.Sequential(
nn.Linear(code_size, nn_size),
nn.BatchNorm1d(nn_size),
nn.ReLU(),
nn.Linear(nn_size, nn_size),
nn.BatchNorm1d(nn_size),
nn.ReLU(),
nn.Linear(nn_size, nn_size),
nn.BatchNorm1d(nn_size),
nn.ReLU(),
)
self.mu = nn.Linear(nn_size, code_size)
self.logvar = nn.Linear(nn_size, 1)
def forward(self, x):
x = self.model(x)
mu = self.mu(x)
logvar = self.logvar(x)
return IndependentNormal(mu, torch.exp(logvar))
```
### Fit on dataset 1 using diag normal decoder
```
encoder = Encoder()
decoder = Decoder_1()
optimizer = optim.Adam(chain(encoder.parameters(),
decoder.parameters()), lr=1e-3)
model = VAE(encoder, decoder, prior, optimizer)
model.train(num_epoch=20, train_data_loader=data_1_loader, verbose=False)
full_path_samples = model.sample(1000, full_path=True).cpu().numpy()
plt.figure()
plt.scatter(full_path_samples[:, 0], full_path_samples[:, 1])
no_decoder_noise_samples = model.sample(1000, full_path=False).cpu().numpy()
plt.figure()
plt.scatter(no_decoder_noise_samples[:, 0], no_decoder_noise_samples[:, 1])
```
### Fit on dataset 2 using diag normal decoder
```
encoder = Encoder()
decoder = Decoder_1()
optimizer = optim.Adam(chain(encoder.parameters(),
decoder.parameters()), lr=1e-3)
model = VAE(encoder, decoder, prior, optimizer)
model.train(num_epoch=20, train_data_loader=data_2_loader, verbose=False)
full_path_samples = model.sample(1000, full_path=True).cpu().numpy()
plt.figure()
plt.scatter(full_path_samples[:, 0], full_path_samples[:, 1])
no_decoder_noise_samples = model.sample(1000, full_path=False).cpu().numpy()
plt.figure()
plt.scatter(no_decoder_noise_samples[:, 0], no_decoder_noise_samples[:, 1])
```
### Fit on dataset 1 using single sigma decoder
```
encoder = Encoder()
decoder = Decoder_2()
optimizer = optim.Adam(chain(encoder.parameters(),
decoder.parameters()), lr=1e-3)
model = VAE(encoder, decoder, prior, optimizer)
model.train(num_epoch=20, train_data_loader=data_1_loader, verbose=False)
full_path_samples = model.sample(1000, full_path=True).cpu().numpy()
plt.figure()
plt.scatter(full_path_samples[:, 0], full_path_samples[:, 1])
no_decoder_noise_samples = model.sample(1000, full_path=False).cpu().numpy()
plt.figure()
plt.scatter(no_decoder_noise_samples[:, 0], no_decoder_noise_samples[:, 1])
```
### Fit on dataset 2 using single sigma decoder
```
encoder = Encoder()
decoder = Decoder_2()
optimizer = optim.Adam(chain(encoder.parameters(),
decoder.parameters()), lr=1e-3)
model = VAE(encoder, decoder, prior, optimizer)
model.train(num_epoch=20, train_data_loader=data_2_loader, verbose=False)
full_path_samples = model.sample(1000, full_path=True).cpu().numpy()
plt.figure()
plt.scatter(full_path_samples[:, 0], full_path_samples[:, 1])
no_decoder_noise_samples = model.sample(1000, full_path=False).cpu().numpy()
plt.figure()
plt.scatter(no_decoder_noise_samples[:, 0], no_decoder_noise_samples[:, 1])
```
### Part B
```
def sample_data_3():
count = 100000
rand = np.random.RandomState(0)
a = [[-1.5, 2.5]] + rand.randn(count // 3, 2) * 0.2
b = [[1.5, 2.5]] + rand.randn(count // 3, 2) * 0.2
c = np.c_[2 * np.cos(np.linspace(0, np.pi, count // 3)),
-np.sin(np.linspace(0, np.pi, count // 3))]
c += rand.randn(*c.shape) * 0.2
data_x = np.concatenate([a, b, c], axis=0)
data_y = np.array([0] * len(a) + [1] * len(b) + [2] * len(c))
perm = rand.permutation(len(data_x))
return data_x[perm], data_y[perm]
data_3, data_3_label = sample_data_3()
data_3 = data_3.astype(np.float32)
data_3_train, data_3_test, data_3_train_label, data_3_test_label = train_test_split(
data_3, data_3_label, test_size=0.2)
data_3_train_loader = create_data_loader((data_3_train, data_3_train_label), batch_size=1024)
data_3_test_loader = create_data_loader((data_3_test, data_3_test_label), batch_size=1024,
shuffle=False, drop_last=False)
plt.scatter(data_3[:, 0], data_3[:, 1], c=data_3_label)
encoder = Encoder(nn_size=512)
decoder = Decoder_1(nn_size=512)
optimizer = optim.Adam(chain(encoder.parameters(),
decoder.parameters()), lr=1e-3)
model = VAE(encoder, decoder, prior, optimizer)
model.train(num_epoch=100, train_data_loader=data_3_train_loader, verbose=False)
full_path_samples = model.sample(10000, full_path=True).cpu().numpy()
plt.figure()
plt.scatter(full_path_samples[:, 0], full_path_samples[:, 1])
no_decoder_noise_samples = model.sample(10000, full_path=False).cpu().numpy()
plt.figure()
plt.scatter(no_decoder_noise_samples[:, 0], no_decoder_noise_samples[:, 1])
```
### Visualize the latent space of the test data
```
with torch.no_grad():
latent = []
for data in data_3_test_loader:
data = move_tensor_to_gpu(data[0])
latent.append(model.encode_reparm(data))
latent = torch.cat(latent, dim=0).cpu().numpy()
plt.scatter(latent[:, 0], latent[:, 1], c=data_3_test_label)
```
This notebook can be executed in a notebook server hosted on KubeFlow.
You can find instructions on how to deploy a KubeFlow cluster and how to access the KubeFlow UI and the hosted notebooks here: https://www.kubeflow.org/docs/pipelines/pipelines-quickstart/
Please install the KubeFlow Pipelines SDK using the following command:
```
!pip3 install 'https://storage.googleapis.com/ml-pipeline/release/0.1.9/kfp.tar.gz'
```
# Energy Price Forecasting Pipeline
This notebook generates a KubeFlow pipeline that runs the solution end to end.
For more information on KubeFlow pipelines and how to run them in GCP please visit https://github.com/kubeflow/pipelines
```
import kfp
from kfp import compiler
import kfp.dsl as dsl
import kfp.gcp as gcp
import kfp.notebook
#Please modify the following values to match your GCP bucket, project, and docker image name.
OUTPUT_DIR = 'gs://pipelinestest/out'
PROJECT_NAME = 'energy-forecasting'
EF_IMAGE='gcr.io/%s/energy:dev' % PROJECT_NAME
```
### Create base image
This image takes the `tensorflow/tensorflow:1.10.0-py3` as a starting point and installs python libraries and applications that are required by some components in the pipeline.
```
%%docker {EF_IMAGE} {OUTPUT_DIR}
FROM tensorflow/tensorflow:1.10.0-py3
RUN apt-get update
RUN apt-get install -y git
RUN pip3 install --upgrade google-api-python-client
RUN pip3 install --upgrade pyarrow
RUN pip3 install --upgrade google-cloud-bigquery
RUN pip3 install --upgrade google-cloud-storage
RUN pip3 install --upgrade gitpython
```
### Create Components
Each cell defines the logic of different components that will be used in the pipeline and produces a `.yaml` file for it.
```
def copy_table(
dataset: str) -> str:
"""Retrieves raw data from competition website.
Retrieves raw data from the competition site and saves it in BigQuery.
Args:
dataset: String specifying the dataset in BigQuery to save the data in.
Returns:
        String specifying if the component finished successfully.
"""
from google.cloud import bigquery
import requests
import pandas as pd
from io import StringIO
from io import BytesIO
import zipfile
bq_client = bigquery.Client()
price_data = pd.read_csv(
StringIO(requests.get(
'http://complatt.smartwatt.net/assets/files/historicalRealData/RealMarketPriceDataPT.csv').text),
sep=';'
)
price_data.columns = ['date_utc', 'price']
bq_client.load_table_from_dataframe(
price_data,
bq_client.dataset(dataset).table(
'MarketPricePT')).result()
weather_zip = zipfile.ZipFile(
BytesIO(requests.get(
'http://complatt.smartwatt.net/assets/files/weatherHistoricalData/WeatherHistoricalData.zip').content))
weather_data = pd.read_csv(
weather_zip.open(
'WeatherHistoricalData/historical_weather.csv'))
bq_client.load_table_from_dataframe(
weather_data,
bq_client.dataset(dataset).table(
'historical_weather')).result()
return('success')
compiler.build_python_component(
component_func = copy_table,
staging_gcs_path = OUTPUT_DIR,
base_image=EF_IMAGE,
target_component_file='copy-table.component.yaml',
target_image = 'gcr.io/' + PROJECT_NAME + '/component-copy-table:latest')
def export_table(
inp: str,
table: str,
file: str) -> str:
"""Exports table to csv.
Exports BigQuery table into CSV file.
Args:
inp: String containing the output from previous component.
table: String specifying the origin BigQuery table.
file: String specifying the path and name for the csv file.
Returns:
        String specifying if the component finished successfully.
"""
from google.cloud import bigquery
bq_client = bigquery.Client()
bq_client.extract_table(
table,
file).result()
return('success')
compiler.build_python_component(
component_func = export_table,
staging_gcs_path = OUTPUT_DIR,
base_image=EF_IMAGE,
target_component_file='export-table.component.yaml',
target_image = 'gcr.io/' + PROJECT_NAME + '/component-export-table:latest')
def run_git_python_script(
inp: str,
code_repo: str,
code_folder: str,
script: str,
script_args: str) -> str:
"""Runs Python script from git repository.
Args:
inp: String containing the output from previous component.
code_repo: String specifying the url to the git repository.
code_folder: String specifying the folder for the script.
script: String specifying the name of the script.
script_args: String specifying the arguments for the script.
Returns:
        String specifying if the component finished successfully.
"""
import os
import git
git.Git('').clone(code_repo)
os.chdir(code_folder)
output = os.system(' '.join([
'python -m',
script,
script_args]))
if output == 0:
return('success')
raise Exception('Script failed. The exit status was: {}'.format(output))
compiler.build_python_component(
component_func = run_git_python_script,
staging_gcs_path = OUTPUT_DIR,
base_image=EF_IMAGE,
target_component_file='run-git-python-script.component.yaml',
target_image = 'gcr.io/' + PROJECT_NAME + '/component-run-git-python-script:latest')
def train_git_cmle_model(
tr_inp: str,
va_inp: str,
code_repo: str,
code_folder: str,
project: str,
bucket: str,
package_folder: str,
cmle_folder: str,
scale_tier: str,
python_module: str,
region: str,
runtime_version: str,
cmle_args: str) -> str:
"""Executes CMLE training job.
Retrieves python file from git repo and launches training job in CMLE.
Args:
tr_inp: String containing the source for the training data.
va_inp: String containing the source for the validation data.
code_repo: String specifying the url to the git repository.
code_folder: String specifying the folder for the job code.
project: String specifying the GCP project where job will run.
bucket: String specifying the GCS bucket where to save the job's outputs.
package_folder: String specifying the python package to run for the job.
cmle_folder: String specifying the folder in GCS where to save outputs.
scale_tier: String specifying compute resources to use for training job.
python_module: String specifying the python module to run for the job.
region: String specifying the GCP region in which to run the job.
runtime_version: String specifying the CMLE version to use for the job.
        cmle_args: String specifying the arguments for the CMLE job.
Returns:
String containing output from running the training job in CMLE.
"""
import os
import git
import tarfile
import datetime
from google.cloud import storage
from googleapiclient import discovery
jobId = 'train' + datetime.datetime.today().strftime('%Y%m%d%H%M%S')
git.Git('').clone(code_repo)
with tarfile.open('code.tar.gz', 'w:gz') as tar:
tar.add(
code_folder,
arcname=os.path.basename(code_folder))
gcs_client = storage.Client()
gcs_bucket = gcs_client.get_bucket(bucket)
blob = gcs_bucket.blob(package_folder + jobId + '.tar.gz')
blob.upload_from_filename('code.tar.gz')
training_inputs = {
'scaleTier': scale_tier,
'pythonModule': python_module,
'args': cmle_args.split(' '),
'region': region,
'packageUris': [
'gs://'+ bucket + '/' + package_folder + jobId + '.tar.gz'],
'jobDir': 'gs://'+ bucket + '/' + cmle_folder + jobId,
'runtimeVersion': runtime_version}
job_spec = {
'jobId': jobId,
'trainingInput': training_inputs}
cloudml = discovery.build('ml', 'v1')
project_id = 'projects/{}'.format(project)
request = cloudml.projects().jobs().create(
body=job_spec,
parent=project_id)
return(str(request.execute()))
compiler.build_python_component(
component_func = train_git_cmle_model,
staging_gcs_path = OUTPUT_DIR,
base_image=EF_IMAGE,
target_component_file='train-git-cmle-model.component.yaml',
target_image = 'gcr.io/' + PROJECT_NAME + '/component-train-git-cmle-model:latest')
```
### Create pipeline
The following code loads all components needed for the pipeline, specifies the dependencies between them, defines the pipeline's arguments and defaults, and saves the pipeline into a `.tar.gz` file that can be uploaded to KubeFlow Pipelines.
```
@dsl.pipeline(
name='Energy Price Forecasting',
description='Energy Price Forecasting')
def basic_bq_pipeline(
project = dsl.PipelineParam(
'project',
value='energy-forecasting'),
dataset = dsl.PipelineParam(
'dataset',
value='Energy'),
bucket = dsl.PipelineParam(
'bucket',
value='energyforecast'),
code_repo = dsl.PipelineParam(
'code-repo',
value='https://github.com/GoogleCloudPlatform/professional-services.git'),
code_folder = dsl.PipelineParam(
'code-folder',
value='professional-services/examples/cloudml-energy-price-forecasting'),
data_prep_script = dsl.PipelineParam(
'data-prep-script',
value='data_preparation.data_prep'),
data_prep_args = dsl.PipelineParam(
'data-prep-args',
value=' '.join([
'--dataset=Energy',
'--train_table=MLDataTrain',
'--valid_table=MLDataValid',
'--test_table=MLDataTest',
'--prepare_data_file=data_preparation/prepare_data.sql',
'--weather_mean_std_file=data_preparation/weather_mean_std.sql',
'--train_from_date="2015-01-05 00:00:00"',
'--train_to_date="2015-10-04 23:01:00"',
'--valid_from_date="2015-10-05 00:00:00"',
'--valid_to_date="2015-10-11 23:01:00"',
'--test_from_date="2015-10-12 00:00:00"',
'--test_to_date="2015-10-18 23:01:00"',
'--price_scaling=0.01',
'--mean_path=gs://energyforecast/data/pickle/mean.pkl',
'--std_path=gs://energyforecast/data/pickle/std.pkl'])),
package_folder = dsl.PipelineParam(
'package-folder',
value='package/'),
cmle_folder = dsl.PipelineParam(
'cmle-folder',
value='cmle/'),
cmle_args = dsl.PipelineParam(
'cmle-args',
value=' '.join([
'--training_path', 'gs://energyforecast/data/csv/MLDataTrain.csv',
'--validation_path', 'gs://energyforecast/data/csv/MLDataValid.csv',
'--mean_path', 'gs://energyforecast/data/pickle/mean.pkl',
'--std_path', 'gs://energyforecast/data/pickle/std.pkl',
'--dropout' , '0.2',
'--hour_embedding', '20',
'--day_embedding', '10',
'--first_layer_size', '100',
'--number_layers', '3',
'--layer_reduction_fraction', '0.5',
'--learning_rate', '0.01',
'--batch_size', '64',
'--eval_batch_size', '168',
'--max_steps', '5000'])),
scale_tier = dsl.PipelineParam(
'scale-tier',
value='BASIC'),
python_module = dsl.PipelineParam(
'python-module',
value='trainer.task'),
region = dsl.PipelineParam(
'region',
value='us-central1'),
runtime_version = dsl.PipelineParam(
'runtime-version',
value='1.10'),
train_table = dsl.PipelineParam(
'train-table',
value='Energy.MLDataTrain'),
valid_table = dsl.PipelineParam(
'valid-table',
value='Energy.MLDataValid'),
test_table = dsl.PipelineParam(
'test-table',
value='Energy.MLDataTest'),
train_file = dsl.PipelineParam(
'train-file',
value='gs://energyforecast/data/csv/MLDataTrain.csv'),
valid_file = dsl.PipelineParam(
'valid-file',
value='gs://energyforecast/data/csv/MLDataValid.csv'),
test_file = dsl.PipelineParam(
'test-file',
value='gs://energyforecast/data/csv/MLDataTest.csv')):
CopTableOp = kfp.components.load_component('copy-table.component.yaml')
ExpTableOp = kfp.components.load_component('export-table.component.yaml')
DataPrepOp = kfp.components.load_component('run-git-python-script.component.yaml')
TrainModelOp = kfp.components.load_component('train-git-cmle-model.component.yaml')
ct_op = CopTableOp(
dataset).apply(gcp.use_gcp_secret('user-gcp-sa'))
dp_op = DataPrepOp(
ct_op.output,
code_repo,
code_folder,
data_prep_script,
data_prep_args).apply(gcp.use_gcp_secret('user-gcp-sa'))
tr_et_op = ExpTableOp(
dp_op.output,
train_table,
train_file).apply(gcp.use_gcp_secret('user-gcp-sa'))
va_et_op = ExpTableOp(
dp_op.output,
valid_table,
valid_file).apply(gcp.use_gcp_secret('user-gcp-sa'))
te_et_op = ExpTableOp(
dp_op.output,
test_table,
test_file).apply(gcp.use_gcp_secret('user-gcp-sa'))
tm_op = TrainModelOp(
tr_et_op.output,
va_et_op.output,
code_repo,
code_folder,
project,
bucket,
package_folder,
cmle_folder,
scale_tier,
python_module,
region,
runtime_version,
cmle_args).apply(gcp.use_gcp_secret('user-gcp-sa'))
compiler.Compiler().compile(basic_bq_pipeline, 'energy-forecasting.tar.gz')
```
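Once the `.tar.gz` package has been compiled, one way to launch it without leaving this notebook is through the KFP client. The snippet below is a hedged sketch based on the 0.1.x SDK; the experiment and run names are placeholders.
```
# Hypothetical follow-up: upload and run the compiled pipeline package.
client = kfp.Client()  # assumes the in-cluster KubeFlow Pipelines endpoint
experiment = client.create_experiment('energy-forecasting')
run = client.run_pipeline(
    experiment.id,
    'energy-forecasting-run',
    'energy-forecasting.tar.gz')
```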
## Convolutional Neural Networks
---
In this notebook, we train an MLP to classify images from the MNIST database.
### 1. Load MNIST Database
```
from keras.datasets import mnist
# use Keras to import pre-shuffled MNIST database
(X_train, y_train), (X_test, y_test) = mnist.load_data()
print("The MNIST database has a training set of %d examples." % len(X_train))
print("The MNIST database has a test set of %d examples." % len(X_test))
```
### 2. Visualize the First Six Training Images
```
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib.cm as cm
import numpy as np
# plot first six training images
fig = plt.figure(figsize=(20,20))
for i in range(6):
ax = fig.add_subplot(1, 6, i+1, xticks=[], yticks=[])
ax.imshow(X_train[i], cmap='gray')
ax.set_title(str(y_train[i]))
```
### 3. View an Image in More Detail
```
def visualize_input(img, ax):
ax.imshow(img, cmap='gray')
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
ax.annotate(str(round(img[x][y],2)), xy=(y,x),
horizontalalignment='center',
verticalalignment='center',
color='white' if img[x][y]<thresh else 'black')
fig = plt.figure(figsize = (12,12))
ax = fig.add_subplot(111)
visualize_input(X_train[0], ax)
```
### 4. Rescale the Images by Dividing Every Pixel in Every Image by 255
```
# rescale [0,255] --> [0,1]
X_train = X_train.astype('float32')/255
X_test = X_test.astype('float32')/255
```
### 5. Encode Categorical Integer Labels Using a One-Hot Scheme
```
from keras.utils import np_utils
# print first ten (integer-valued) training labels
print('Integer-valued labels:')
print(y_train[:10])
# one-hot encode the labels
y_train = np_utils.to_categorical(y_train, 10)
y_test = np_utils.to_categorical(y_test, 10)
# print first ten (one-hot) training labels
print('One-hot labels:')
print(y_train[:10])
```
### 6. Define the Model Architecture
```
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
# define the model
model = Sequential()
model.add(Flatten(input_shape=X_train.shape[1:]))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='softmax'))
# summarize the model
model.summary()
```
### 7. Compile the Model
```
# compile the model
model.compile(loss='categorical_crossentropy', optimizer='rmsprop',
metrics=['accuracy'])
```
### 8. Calculate the Classification Accuracy on the Test Set (Before Training)
```
# evaluate test accuracy
score = model.evaluate(X_test, y_test, verbose=0)
accuracy = 100*score[1]
# print test accuracy
print('Test accuracy: %.4f%%' % accuracy)
```
### 9. Train the Model
```
from keras.callbacks import ModelCheckpoint
# train the model
checkpointer = ModelCheckpoint(filepath='mnist.model.best.hdf5',
verbose=1, save_best_only=True)
hist = model.fit(X_train, y_train, batch_size=128, epochs=10,
validation_split=0.2, callbacks=[checkpointer],
verbose=1, shuffle=True)
```
### 10. Load the Model with the Best Classification Accuracy on the Validation Set
```
# load the weights that yielded the best validation accuracy
model.load_weights('mnist.model.best.hdf5')
```
### 11. Calculate the Classification Accuracy on the Test Set
```
# evaluate test accuracy
score = model.evaluate(X_test, y_test, verbose=0)
accuracy = 100*score[1]
# print test accuracy
print('Test accuracy: %.4f%%' % accuracy)
```
```
%matplotlib inline
import glob
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels.api as sm
from statsmodels.tsa.seasonal import seasonal_decompose
from statsmodels.graphics.tsaplots import plot_pacf, plot_acf
sns.set_style('darkgrid')
df = pd.read_csv('../data/raw/arquivo_geral.csv', sep=';', parse_dates=['data'])
df.info()
new_cases = df.groupby(['data']).agg({
'casosNovos': 'sum'
})
```
## Exploratory analysis
In this section we create some visualizations of the time series and test a few possible transformations of the data.
```
def plot_ts(series):
fig, ax = plt.subplots(3, 2, figsize=(15, 17));
_df = pd.DataFrame(index=series.index)
_df['casosNovos'] = series
_df['x'] = series.index
sns.lineplot(x='x', y='casosNovos', data=_df, ci=90, err_style='band', ax=ax[0, 0]);
ax[0, 0].set_title('time series')
plot_acf(series.dropna(), ax=ax[0, 1], lags=int(len(series)/2))
ax[0, 1].set_title('acf')
plot_pacf(series.dropna(), ax=ax[1, 1], lags=int(len(series)/2))
ax[1, 1].set_title('pacf')
series.plot.hist(bins=20, ax=ax[1, 0]);
ax[1, 0].set_title('distribution')
series.rolling('15D').std().plot(ax=ax[2, 0]);
ax[2, 0].set_title('rolling std')
series.rolling('15D').mean().plot(ax=ax[2, 1]);
ax[2, 1].set_title('rolling mean');
def plot_decomposition(series):
fig, (ax1,ax2,ax3) = plt.subplots(3,1, figsize=(17,8), sharex=True)
    res = seasonal_decompose(series, model='additive', freq=15)
res.trend.plot(ax=ax1)
res.resid.plot(ax=ax2)
res.seasonal.plot(ax=ax3)
ax1.set_title('Trend');
ax2.set_title('Residual');
ax3.set_title('Seasonality');
return res
# raw series
ts1 = new_cases.copy()
# first difference
ts2 = new_cases.diff()
# ignore the data before the first 100 cases
first_100 = np.where(ts1.cumsum() >= 100)[0][0]
ts3 = ts1[first_100:]
# first difference of the series after the first 100 cases
ts4 = ts3.diff()
```
### Raw time series (no transformation)
```
plot_ts(ts1)
ts1_res = plot_decomposition(ts1);
```
## First difference of the time series
```
plot_ts(ts2)
ts2_res = plot_decomposition(ts2);
```
## Removing the initial part of the series with no cases
#### After the first 100 cases
```
plot_ts(ts3)
ts3_res = plot_decomposition(ts3);
```
## First difference of the series from the 100th case onward
```
plot_ts(ts4)
ts4_res = plot_decomposition(ts4);
```
## Modeling
#### Stationarity check
```
series = {
'raw': ts1,
'diff1': ts2,
'after_100': ts3,
'after_100_diff1': ts4
}
for name, s in series.items():
res = sm.tsa.adfuller(s.dropna(), regression='ct')
print('%22s | %s' % (name, 'non-stationary' if res[0] > res[4]['5%'] else 'stationary'))
ts = series['after_100'].dropna()
ts.index
```
#### Train/test split
```
tr_start,tr_end = '2020-03-15', '2020-04-14'
te_start,te_end = '2020-04-13', '2020-04-20'
pred_end = '2020-04-27'
tra = ts[tr_start:tr_end].dropna()
tes = ts[te_start:te_end].dropna()
arima = sm.tsa.statespace.SARIMAX(tra,order=(0,1,2), freq='D', seasonal_order=(0,1,[],15), enforce_stationarity=False, enforce_invertibility=False,).fit() # Marco, rmse + aic
# arima = sm.tsa.statespace.SARIMAX(tra, order=(0,1,0), freq='D', seasonal_order=(0, 1,0, 17), enforce_stationarity=False, enforce_invertibility=False).fit() # vittor, test RMSE
arima.summary()
#SARIMAX(0, 1, 0)x(0, 1, 0, 17)
from sklearn.metrics import mean_squared_error
pred_train = arima.predict(tr_start,tr_end)[1:]
print('IN TRAIN: ARIMA model MSE:{}'.format(mean_squared_error(tra[1:], pred_train)))
pred_test = arima.predict(te_start,te_end)[1:]
print('IN TEST: ARIMA model MSE:{}'.format(mean_squared_error(tes[1:], pred_test)))
pred = arima.predict(te_start, pred_end)[1:]
_, ax = plt.subplots(figsize=(12, 8), dpi=100)
pred_train.name = 'Predicted on train'
pred.name = 'Predicted out of train'
ts.columns = ['New cases']
ts.shift(1).plot(ax=ax, color='k', marker='o')
pred_train.plot(ax=ax, marker='o', color=sns.xkcd_rgb["windows blue"]);
pred.plot(ax=ax, ls='--', marker='o', color=sns.xkcd_rgb["amber"])
plt.legend();
_, ax = plt.subplots(figsize=(12, 8), dpi=100)
pred_train.name = 'Predicted on train'
pred.name = 'Predicted out of train'
total_cases = ts.copy()
total_cases.columns = ['Total cases']
total_cases.shift(1).cumsum().plot(ax=ax, color='k', marker='o')
cum_test = pred.copy()
cum_test.loc[pd.Timestamp(tr_end)] = pred_train.cumsum().values[-1]
cum_test.sort_index(inplace=True)
ts.shift(1).cumsum().values[-1]
pred_train.cumsum().plot(ax=ax, marker='o', color=sns.xkcd_rgb["windows blue"]);
cum_test.cumsum().plot(ax=ax, ls='--', marker='o', color=sns.xkcd_rgb["amber"])
plt.legend();
pred
pred = arima.predict(te_start,te_end)
pred.plot( marker='o', color=sns.xkcd_rgb["amber"])
ts['New cases'].loc[pd.Timestamp(te_start): pd.Timestamp(te_end)].plot( marker='o', color=sns.xkcd_rgb["windows blue"]);
resid = (ts['New cases'].loc[pd.Timestamp(te_start): pd.Timestamp(te_end)] - pred)
resid.plot.hist(bins=4);
sm.qqplot(resid, line ='45')
```
## Note
The residuals do not follow an approximately normal distribution, but this can be explained by the small number of samples in the *test set*.
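To complement the Q-Q plot, a formal normality test can be applied to the residuals; with so few test points its power is limited, so the p-value should be read with care. For example:
```
from scipy import stats

# Shapiro-Wilk test: the null hypothesis is that the residuals are normally distributed
stat, p_value = stats.shapiro(resid)
print('Shapiro-Wilk statistic = %.3f, p-value = %.3f' % (stat, p_value))
```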
<!--BOOK_INFORMATION-->
<img align="left" style="padding-right:10px;" src="figures/PDSH-cover-small.png">
*This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).*
*The text is released under the [CC-BY-NC-ND license](https://creativecommons.org/licenses/by-nc-nd/3.0/us/legalcode), and code is released under the [MIT license](https://opensource.org/licenses/MIT). If you find this content useful, please consider supporting the work by [buying the book](http://shop.oreilly.com/product/0636920034919.do)!*
<!--NAVIGATION-->
< [Density and Contour Plots](04.04-Density-and-Contour-Plots.ipynb) | [Contents](Index.ipynb) | [Customizing Plot Legends](04.06-Customizing-Legends.ipynb) >
<a href="https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/04.05-Histograms-and-Binnings.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a>
# Histograms, Binnings, and Density
A simple histogram can be a great first step in understanding a dataset.
Earlier, we saw a preview of Matplotlib's histogram function (see [Comparisons, Masks, and Boolean Logic](02.06-Boolean-Arrays-and-Masks.ipynb)), which creates a basic histogram in one line, once the normal boiler-plate imports are done:
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-white')
data = np.random.randn(1000)
plt.hist(data);
```
The ``hist()`` function has many options to tune both the calculation and the display;
here's an example of a more customized histogram:
```
plt.hist(data, bins=30, normed=True, alpha=0.5,
histtype='stepfilled', color='steelblue',
edgecolor='none');
```
The ``plt.hist`` docstring has more information on other customization options available.
I find this combination of ``histtype='stepfilled'`` along with some transparency ``alpha`` to be very useful when comparing histograms of several distributions:
```
x1 = np.random.normal(0, 0.8, 1000)
x2 = np.random.normal(-2, 1, 1000)
x3 = np.random.normal(3, 2, 1000)
kwargs = dict(histtype='stepfilled', alpha=0.3, normed=True, bins=40)
plt.hist(x1, **kwargs)
plt.hist(x2, **kwargs)
plt.hist(x3, **kwargs);
```
If you would like to simply compute the histogram (that is, count the number of points in a given bin) and not display it, the ``np.histogram()`` function is available:
```
counts, bin_edges = np.histogram(data, bins=5)
print(counts)
```
## Two-Dimensional Histograms and Binnings
Just as we create histograms in one dimension by dividing the number-line into bins, we can also create histograms in two-dimensions by dividing points among two-dimensional bins.
We'll take a brief look at several ways to do this here.
We'll start by defining some data—an ``x`` and ``y`` array drawn from a multivariate Gaussian distribution:
```
mean = [0, 0]
cov = [[1, 1], [1, 2]]
x, y = np.random.multivariate_normal(mean, cov, 10000).T
```
### ``plt.hist2d``: Two-dimensional histogram
One straightforward way to plot a two-dimensional histogram is to use Matplotlib's ``plt.hist2d`` function:
```
plt.hist2d(x, y, bins=30, cmap='Blues')
cb = plt.colorbar()
cb.set_label('counts in bin')
```
Just as with ``plt.hist``, ``plt.hist2d`` has a number of extra options to fine-tune the plot and the binning, which are nicely outlined in the function docstring.
Further, just as ``plt.hist`` has a counterpart in ``np.histogram``, ``plt.hist2d`` has a counterpart in ``np.histogram2d``, which can be used as follows:
```
counts, xedges, yedges = np.histogram2d(x, y, bins=30)
```
For the generalization of this histogram binning in dimensions higher than two, see the ``np.histogramdd`` function.
### ``plt.hexbin``: Hexagonal binnings
The two-dimensional histogram creates a tesselation of squares across the axes.
Another natural shape for such a tesselation is the regular hexagon.
For this purpose, Matplotlib provides the ``plt.hexbin`` routine, which represents a two-dimensional dataset binned within a grid of hexagons:
```
plt.hexbin(x, y, gridsize=30, cmap='Blues')
cb = plt.colorbar(label='count in bin')
```
``plt.hexbin`` has a number of interesting options, including the ability to specify weights for each point, and to change the output in each bin to any NumPy aggregate (mean of weights, standard deviation of weights, etc.).
### Kernel density estimation
Another common method of evaluating densities in multiple dimensions is *kernel density estimation* (KDE).
This will be discussed more fully in [In-Depth: Kernel Density Estimation](05.13-Kernel-Density-Estimation.ipynb), but for now we'll simply mention that KDE can be thought of as a way to "smear out" the points in space and add up the result to obtain a smooth function.
One extremely quick and simple KDE implementation exists in the ``scipy.stats`` package.
Here is a quick example of using the KDE on this data:
```
from scipy.stats import gaussian_kde
# fit an array of size [Ndim, Nsamples]
data = np.vstack([x, y])
kde = gaussian_kde(data)
# evaluate on a regular grid
xgrid = np.linspace(-3.5, 3.5, 40)
ygrid = np.linspace(-6, 6, 40)
Xgrid, Ygrid = np.meshgrid(xgrid, ygrid)
Z = kde.evaluate(np.vstack([Xgrid.ravel(), Ygrid.ravel()]))
# Plot the result as an image
plt.imshow(Z.reshape(Xgrid.shape),
origin='lower', aspect='auto',
extent=[-3.5, 3.5, -6, 6],
cmap='Blues')
cb = plt.colorbar()
cb.set_label("density")
```
KDE has a smoothing length that effectively slides the knob between detail and smoothness (one example of the ubiquitous bias–variance trade-off).
The literature on choosing an appropriate smoothing length is vast: ``gaussian_kde`` uses a rule-of-thumb to attempt to find a nearly optimal smoothing length for the input data.
Other KDE implementations are available within the SciPy ecosystem, each with its own strengths and weaknesses; see, for example, ``sklearn.neighbors.KernelDensity`` and ``statsmodels.nonparametric.kernel_density.KDEMultivariate``.
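For instance, a rough sketch of the same grid evaluation using Scikit-Learn's ``KernelDensity`` (with a hand-picked bandwidth rather than any rule of thumb):
```
# Sketch: evaluate a KDE on the same grid with scikit-learn
from sklearn.neighbors import KernelDensity
kde_skl = KernelDensity(kernel='gaussian', bandwidth=0.5)
kde_skl.fit(data.T)  # scikit-learn expects shape (Nsamples, Ndim)
log_dens = kde_skl.score_samples(np.vstack([Xgrid.ravel(), Ygrid.ravel()]).T)
Z_skl = np.exp(log_dens).reshape(Xgrid.shape)
```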
For visualizations based on KDE, using Matplotlib tends to be overly verbose.
The Seaborn library, discussed in [Visualization With Seaborn](04.14-Visualization-With-Seaborn.ipynb), provides a much more terse API for creating KDE-based visualizations.
<!--NAVIGATION-->
< [Density and Contour Plots](04.04-Density-and-Contour-Plots.ipynb) | [Contents](Index.ipynb) | [Customizing Plot Legends](04.06-Customizing-Legends.ipynb) >
<a href="https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/04.05-Histograms-and-Binnings.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a>
## Logistic Regression in plain Python
In logistic regression, we are trying to model the outcome of a **binary variable** given a **linear combination of input features**. For example, we could try to predict the outcome of an election (win/lose) using information about how much money a candidate spent campaigning, how much time she/he spent campaigning, etc.
### Model
Logistic regression works as follows.
**Given:**
- dataset $\{(\boldsymbol{x}^{(1)}, y^{(1)}), ..., (\boldsymbol{x}^{(m)}, y^{(m)})\}$
- with $\boldsymbol{x}^{(i)}$ being a $d-$dimensional vector $\boldsymbol{x}^{(i)} = (x^{(i)}_1, ..., x^{(i)}_d)$
- $y^{(i)}$ being a binary target variable, $y^{(i)} \in \{0,1\}$
The logistic regression model can be interpreted as a very **simple neural network:**
- it has a real-valued weight vector $\boldsymbol{w}= (w^{(1)}, ..., w^{(d)})$
- it has a real-valued bias $b$
- it uses a sigmoid function as its activation function

### Training
Unlike [linear regression](linear_regression.ipynb), logistic regression has no closed-form solution. But the cost function is convex, so we can train the model using gradient descent. In fact, **gradient descent** (or any other optimization algorithm) is guaranteed to find the global minimum (provided the learning rate is small enough and enough training iterations are used).
Training a logistic regression model has different steps. In the beginning (step 0) the parameters are initialized. The other steps are repeated for a specified number of training iterations or until convergence of the parameters.
* * *
**Step 0:** Initialize the weight vector and bias with zeros (or small random values).
* * *
**Step 1:** Compute a linear combination of the input features and weights. This can be done in one step for all training examples, using vectorization and broadcasting:
$\boldsymbol{a} = \boldsymbol{X} \cdot \boldsymbol{w} + b $
where $\boldsymbol{X}$ is a matrix of shape $(n_{samples}, n_{features})$ that holds all training examples, and $\cdot$ denotes the dot product.
* * *
**Step 2:** Apply the sigmoid activation function, which returns values between 0 and 1:
$\boldsymbol{\hat{y}} = \sigma(\boldsymbol{a}) = \frac{1}{1 + \exp(-\boldsymbol{a})}$
* * *
**Step 3:** Compute the cost over the whole training set. We want to model the probability of the target values being 0 or 1. So during training we want to adapt our parameters such that our model outputs high values for examples with a positive label (true label being 1) and small values for examples with a negative label (true label being 0). This is reflected in the cost function:
$J(\boldsymbol{w},b) = - \frac{1}{m} \sum_{i=1}^m \Big[ y^{(i)} \log(\hat{y}^{(i)}) + (1 - y^{(i)}) \log(1 - \hat{y}^{(i)}) \Big]$
* * *
**Step 4:** Compute the gradient of the cost function with respect to the weight vector and bias. A detailed explanation of this derivation can be found [here](https://stats.stackexchange.com/questions/278771/how-is-the-cost-function-from-logistic-regression-derivated).
The general formula is given by:
$ \frac{\partial J}{\partial w_j} = \frac{1}{m}\sum_{i=1}^m\left[\hat{y}^{(i)}-y^{(i)}\right]\,x_j^{(i)}$
For the bias term, the corresponding input $x_j^{(i)}$ is simply 1, so its gradient is the mean of the errors.
* * *
**Step 5:** Update the weights and bias:
$\boldsymbol{w} = \boldsymbol{w} - \eta \, \nabla_w J$
$b = b - \eta \, \nabla_b J$
where $\eta$ is the learning rate.
```
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_blobs
import matplotlib.pyplot as plt
np.random.seed(123)
%matplotlib inline
```
## Dataset
```
# We will perform logistic regression using a simple toy dataset of two classes
X, y_true = make_blobs(n_samples= 1000, centers=2)
fig = plt.figure(figsize=(8,6))
plt.scatter(X[:,0], X[:,1], c=y_true)
plt.title("Dataset")
plt.xlabel("First feature")
plt.ylabel("Second feature")
plt.show()
# Reshape targets to get column vector with shape (n_samples, 1)
y_true = y_true[:, np.newaxis]
# Split the data into a training and test set
X_train, X_test, y_train, y_test = train_test_split(X, y_true)
print(f'Shape X_train: {X_train.shape}')
print(f'Shape y_train: {y_train.shape}')
print(f'Shape X_test: {X_test.shape}')
print(f'Shape y_test: {y_test.shape}')
```
## Logistic regression class
```
class LogisticRegression:
def __init__(self):
pass
def sigmoid(self, a):
return 1 / (1 + np.exp(-a))
def train(self, X, y_true, n_iters, learning_rate):
"""
Trains the logistic regression model on given data X and targets y
"""
# Step 0: Initialize the parameters
n_samples, n_features = X.shape
self.weights = np.zeros((n_features, 1))
self.bias = 0
costs = []
for i in range(n_iters):
# Step 1 and 2: Compute a linear combination of the input features and weights,
# apply the sigmoid activation function
y_predict = self.sigmoid(np.dot(X, self.weights) + self.bias)
# Step 3: Compute the cost over the whole training set.
cost = (- 1 / n_samples) * np.sum(y_true * np.log(y_predict) + (1 - y_true) * (np.log(1 - y_predict)))
# Step 4: Compute the gradients
dw = (1 / n_samples) * np.dot(X.T, (y_predict - y_true))
db = (1 / n_samples) * np.sum(y_predict - y_true)
# Step 5: Update the parameters
self.weights = self.weights - learning_rate * dw
self.bias = self.bias - learning_rate * db
costs.append(cost)
if i % 100 == 0:
print(f"Cost after iteration {i}: {cost}")
return self.weights, self.bias, costs
def predict(self, X):
"""
Predicts binary labels for a set of examples X.
"""
y_predict = self.sigmoid(np.dot(X, self.weights) + self.bias)
y_predict_labels = [1 if elem > 0.5 else 0 for elem in y_predict]
return np.array(y_predict_labels)[:, np.newaxis]
```
## Initializing and training the model
```
regressor = LogisticRegression()
w_trained, b_trained, costs = regressor.train(X_train, y_train, n_iters=600, learning_rate=0.009)
fig = plt.figure(figsize=(8,6))
plt.plot(np.arange(600), costs)
plt.title("Development of cost over training")
plt.xlabel("Number of iterations")
plt.ylabel("Cost")
plt.show()
```
## Testing the model
```
y_p_train = regressor.predict(X_train)
y_p_test = regressor.predict(X_test)
print(f"train accuracy: {100 - np.mean(np.abs(y_p_train - y_train)) * 100}%")
print(f"test accuracy: {100 - np.mean(np.abs(y_p_test - y_test))}%")
```
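Since the model is linear in the inputs, we can also visualize the learned decision boundary, i.e. the line where $\boldsymbol{w}^T \boldsymbol{x} + b = 0$. The following is a small optional sketch using the trained parameters from above:
```
# Sketch: plot the training data and the learned decision boundary w1*x1 + w2*x2 + b = 0
fig = plt.figure(figsize=(8,6))
plt.scatter(X_train[:,0], X_train[:,1], c=y_train.ravel())
x1_vals = np.linspace(X_train[:,0].min(), X_train[:,0].max(), 100)
w1, w2 = regressor.weights.ravel()
x2_vals = -(w1 * x1_vals + regressor.bias) / w2  # solve w1*x1 + w2*x2 + b = 0 for x2
plt.plot(x1_vals, x2_vals, "r-", label="decision boundary")
plt.xlabel("First feature")
plt.ylabel("Second feature")
plt.legend()
plt.show()
```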
# Introduction to JumpStart - Image Classification
---
Welcome to Amazon [SageMaker JumpStart](https://docs.aws.amazon.com/sagemaker/latest/dg/studio-jumpstart.html)! You can use JumpStart to solve many Machine Learning tasks through one-click in SageMaker Studio, or through [SageMaker JumpStart API](https://sagemaker.readthedocs.io/en/stable/overview.html#use-prebuilt-models-with-sagemaker-jumpstart).
In this demo notebook, we demonstrate how to use the JumpStart API for Image Classification. Image Classification refers to classifying an image to one of the class labels of the training dataset. We demonstrate two use cases of Image Classification models:
* How to use a model pre-trained on ImageNet dataset to classify an image. [ImageNetLabels](https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt).
* How to fine-tune a pre-trained model to a custom dataset, and then run inference on the fine-tuned model.
Note: This notebook was tested on ml.t3.medium instance in Amazon SageMaker Studio with Python 3 (Data Science) kernel and in Amazon SageMaker Notebook instance with conda_python3 kernel.
---
1. [Set Up](#1.-Set-Up)
2. [Select a pre-trained model](#2.-Select-a-pre-trained-model)
3. [Run inference on the pre-trained model](#3.-Run-inference-on-the-pre-trained-model)
* [Retrieve JumpStart Artifacts & Deploy an Endpoint](#3.1.-Retrieve-JumpStart-Artifacts-&-Deploy-an-Endpoint)
* [Download example images for inference](#3.2.-Download-example-images-for-inference)
* [Query endpoint and parse response](#3.3.-Query-endpoint-and-parse-response)
* [Clean up the endpoint](#3.4.-Clean-up-the-endpoint)
4. [Fine-tune the pre-trained model on a custom dataset](#4.-Fine-tune-the-pre-trained-model-on-a-custom-dataset)
* [Retrieve JumpStart Training artifacts](#4.1.-Retrieve-JumpStart-Training-artifacts)
* [Set Training parameters](#4.2.-Set-Training-parameters)
* [Start Training](#4.3.-Start-Training)
* [Deploy & run Inference on the fine-tuned model](#4.4.-Deploy-&-run-Inference-on-the-fine-tuned-model)
## 1. Set Up
***
Before executing the notebook, there are some initial steps required for setup. This notebook requires the latest versions of sagemaker and ipywidgets.
***
```
!pip install sagemaker ipywidgets --upgrade --quiet
```
---
To train and host on Amazon SageMaker, we need to set up and authenticate the use of AWS services. Here, we use the execution role associated with the current notebook instance as the AWS account role with SageMaker access. It has the necessary permissions, including access to your data in S3.
---
```
import sagemaker, boto3, json
from sagemaker import get_execution_role
aws_role = get_execution_role()
aws_region = boto3.Session().region_name
sess = sagemaker.Session()
```
## 2. Select a pre-trained model
***
You can continue with the default model, or can choose a different model from the dropdown generated upon running the next cell. A complete list of JumpStart models can also be accessed at [JumpStart Models](https://sagemaker.readthedocs.io/en/stable/doc_utils/jumpstart.html#).
***
```
model_id, model_version, = (
"pytorch-ic-mobilenet-v2",
"*",
)
```
***
[Optional] Select a different JumpStart model. Here, we download jumpstart model_manifest file from the jumpstart s3 bucket, filter-out all the Image Classification models and select a model for inference.
***
```
import IPython
from ipywidgets import Dropdown
# download JumpStart model_manifest file.
boto3.client("s3").download_file(
f"jumpstart-cache-prod-{aws_region}", "models_manifest.json", "models_manifest.json"
)
with open("models_manifest.json", "rb") as json_file:
model_list = json.load(json_file)
# filter-out all the Image Classification models from the manifest list.
ic_models_all_versions, ic_models = [
model["model_id"] for model in model_list if "-ic-" in model["model_id"]
], []
[ic_models.append(model) for model in ic_models_all_versions if model not in ic_models]
# display the model-ids in a dropdown, for user to select a model.
dropdown = Dropdown(
options=ic_models,
value=model_id,
description="JumpStart Image Classification Models:",
style={"description_width": "initial"},
layout={"width": "max-content"},
)
display(IPython.display.Markdown("## Select a JumpStart pre-trained model from the dropdown below"))
display(dropdown)
```
## 3. Run inference on the pre-trained model
***
Using JumpStart, we can perform inference on the pre-trained model, even without fine-tuning it first on a custom dataset. For this example, that means on an input image, predicting the class label from one of the 1000 classes of the ImageNet dataset.
[ImageNetLabels](https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt).
***
### 3.1. Retrieve JumpStart Artifacts & Deploy an Endpoint
***
We retrieve the deploy_image_uri, deploy_source_uri, and base_model_uri for the pre-trained model. To host the pre-trained base-model, we create an instance of [`sagemaker.model.Model`](https://sagemaker.readthedocs.io/en/stable/api/inference/model.html) and deploy it.
***
```
from sagemaker import image_uris, model_uris, script_uris
from sagemaker.model import Model
from sagemaker.predictor import Predictor
from sagemaker.utils import name_from_base
# model_version="*" fetches the latest version of the model.
infer_model_id, infer_model_version = dropdown.value, "*"
endpoint_name = name_from_base(f"jumpstart-example-{infer_model_id}")
inference_instance_type = "ml.m5.xlarge"
# Retrieve the inference docker container uri.
deploy_image_uri = image_uris.retrieve(
region=None,
framework=None,
image_scope="inference",
model_id=infer_model_id,
model_version=infer_model_version,
instance_type=inference_instance_type,
)
# Retrieve the inference script uri.
deploy_source_uri = script_uris.retrieve(
model_id=infer_model_id, model_version=infer_model_version, script_scope="inference"
)
# Retrieve the base model uri.
base_model_uri = model_uris.retrieve(
model_id=infer_model_id, model_version=infer_model_version, model_scope="inference"
)
# Create the SageMaker model instance. Note that we need to pass Predictor class when we deploy model through Model class,
# for being able to run inference through the sagemaker API.
model = Model(
image_uri=deploy_image_uri,
source_dir=deploy_source_uri,
model_data=base_model_uri,
entry_point="inference.py",
role=aws_role,
predictor_cls=Predictor,
name=endpoint_name,
)
# deploy the Model.
base_model_predictor = model.deploy(
initial_instance_count=1,
instance_type=inference_instance_type,
endpoint_name=endpoint_name,
)
```
### 3.2. Download example images for inference
***
We download example images from the JumpStart S3 bucket.
***
```
s3_bucket = f"jumpstart-cache-prod-{aws_region}"
key_prefix = "inference-notebook-assets"
def download_from_s3(images):
for filename, image_key in images.items():
boto3.client("s3").download_file(s3_bucket, f"{key_prefix}/{image_key}", filename)
images = {"img1.jpg": "cat.jpg", "img2.jpg": "dog.jpg"}
download_from_s3(images)
```
### 3.3. Query endpoint and parse response
***
Input to the endpoint is a single image in binary format. Response from the endpoint is a dictionary containing the top-1 predicted class label, and a list of class probabilities.
***
```
from IPython.core.display import HTML
def predict_top_k_labels(probabilities, labels, k):
topk_prediction_ids = sorted(
range(len(probabilities)), key=lambda index: probabilities[index], reverse=True
)[:k]
topk_class_labels = ", ".join([labels[id] for id in topk_prediction_ids])
return topk_class_labels
for image_filename in images.keys():
with open(image_filename, "rb") as file:
img = file.read()
query_response = base_model_predictor.predict(
img, {"ContentType": "application/x-image", "Accept": "application/json;verbose"}
)
model_predictions = json.loads(query_response)
labels, probabilities = model_predictions["labels"], model_predictions["probabilities"]
top5_class_labels = predict_top_k_labels(probabilities, labels, 5)
display(
HTML(
f'<img src={image_filename} alt={image_filename} align="left" style="width: 250px;"/>'
f"<figcaption>Top-5 predictions: {top5_class_labels} </figcaption>"
)
)
```
### 3.4. Clean up the endpoint
```
# Delete the SageMaker endpoint and the attached resources
base_model_predictor.delete_model()
base_model_predictor.delete_endpoint()
```
## 4. Fine-tune the pre-trained model on a custom dataset
***
Previously, we saw how to run inference on a pre-trained model. Next, we discuss how a model can be finetuned to a custom dataset with any number of classes.
The model available for fine-tuning attaches a classification layer to the corresponding feature extractor model available on TensorFlow/PyTorch hub, and initializes the layer parameters to random values. The output dimension of the classification layer
is determined based on the number of classes in the input data. The fine-tuning step fine-tunes the model parameters. The objective is to minimize classification error on the input data. The model returned by fine-tuning can be further deployed for inference. Below are the instructions for how the training data should be formatted for input to the model.
- **Input:** A directory with as many sub-directories as the number of classes.
- Each sub-directory should have images belonging to that class in .jpg format.
- **Output:** A trained model that can be deployed for inference.
- A label mapping file is saved along with the trained model file on the s3 bucket.
The input directory should look like below if
the training data contains images from two classes: roses and dandelion. The s3 path should look like
`s3://bucket_name/input_directory/`. Note the trailing `/` is required. The folder names ('roses', 'dandelion') and the .jpg filenames
can be anything. The label mapping file that is saved along with the trained model on the s3 bucket maps the
folder names 'roses' and 'dandelion' to the indices in the list of class probabilities the model outputs.
The mapping follows alphabetical ordering of the folder names. In the example below, index 0 in the model output list
would correspond to 'dandelion' and index 1 would correspond to 'roses'.
    input_directory
        |--roses
            |--abc.jpg
            |--def.jpg
        |--dandelion
            |--ghi.jpg
            |--jkl.jpg
We provide the tf_flowers dataset as a default dataset for fine-tuning the model.
tf_flowers comprises images of five types of flowers.
The dataset has been downloaded from [TensorFlow](https://www.tensorflow.org/datasets/catalog/tf_flowers).
[Apache 2.0 License](https://jumpstart-cache-prod-us-west-2.s3-us-west-2.amazonaws.com/licenses/Apache-License/LICENSE-2.0.txt).
Citation:
<sub><sup>
@ONLINE {tfflowers,
author = "The TensorFlow Team",
title = "Flowers",
month = "jan",
year = "2019",
url = "http://download.tensorflow.org/example_images/flower_photos.tgz" }
</sup></sub> source: [TensorFlow Hub](model_url).
***
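***
If you would like to fine-tune on your own images instead of the default tf_flowers dataset, one possible way to stage a local directory laid out as above on S3 is the SageMaker session helper shown in the sketch below. The local path and key prefix are placeholders for illustration only and are not used elsewhere in this notebook.
***
```
# Sketch (optional): upload a local class-per-folder directory to S3 for fine-tuning.
# "input_directory" and the key prefix below are placeholder names.
custom_dataset_s3_path = sess.upload_data(
    path="input_directory",                        # local folder with one sub-folder per class
    bucket=sess.default_bucket(),                  # or any S3 bucket you can write to
    key_prefix="jumpstart-example-ic-custom-data",
)
print(custom_dataset_s3_path)  # e.g. s3://<bucket>/jumpstart-example-ic-custom-data
```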
### 4.1. Retrieve JumpStart Training artifacts
***
Here, for the selected model, we retrieve the training docker container, the training algorithm source, the pre-trained base model, and a python dictionary of the training hyper-parameters that the algorithm accepts with their default values. Note that model_version="*" fetches the latest version of the model. Also, we do need to specify the training_instance_type to fetch train_image_uri.
***
```
from sagemaker import image_uris, model_uris, script_uris, hyperparameters
model_id, model_version = dropdown.value, "*"
training_instance_type = "ml.g4dn.xlarge"
# Retrieve the docker image
train_image_uri = image_uris.retrieve(
region=None,
framework=None,
model_id=model_id,
model_version=model_version,
image_scope="training",
instance_type=training_instance_type,
)
# Retrieve the training script
train_source_uri = script_uris.retrieve(
model_id=model_id, model_version=model_version, script_scope="training"
)
# Retrieve the pre-trained model tarball to further fine-tune
train_model_uri = model_uris.retrieve(
model_id=model_id, model_version=model_version, model_scope="training"
)
```
### 4.2. Set Training parameters
***
Now that we are done with all the setup that is needed, we are ready to fine-tune our Image Classification model. To begin, let us create a [``sagemaker.estimator.Estimator``](https://sagemaker.readthedocs.io/en/stable/api/training/estimators.html) object. This estimator will launch the training job.
There are two kinds of parameters that need to be set for training.
The first set comprises the parameters for the training job: (i) Training data path: the S3 folder in which the input data is stored; (ii) Output path: the S3 folder in which the training output is stored; (iii) Training instance type: the type of machine on which to run the training. Typically, we use GPU instances for this training. We defined the training instance type above to fetch the correct train_image_uri.
The second set of parameters consists of the algorithm-specific training hyper-parameters.
***
```
# Sample training data is available in this bucket
training_data_bucket = f"jumpstart-cache-prod-{aws_region}"
training_data_prefix = "training-datasets/tf_flowers/"
training_dataset_s3_path = f"s3://{training_data_bucket}/{training_data_prefix}"
output_bucket = sess.default_bucket()
output_prefix = "jumpstart-example-ic-training"
s3_output_location = f"s3://{output_bucket}/{output_prefix}/output"
```
***
For algorithm specific hyper-parameters, we start by fetching python dictionary of the training hyper-parameters that the algorithm accepts with their default values. This can then be overridden to custom values.
***
```
from sagemaker import hyperparameters
# Retrieve the default hyper-parameters for fine-tuning the model
hyperparameters = hyperparameters.retrieve_default(model_id=model_id, model_version=model_version)
# [Optional] Override default hyperparameters with custom values
hyperparameters["epochs"] = "5"
print(hyperparameters)
```
### 4.3. Start Training
***
We start by creating the estimator object with all the required assets and then launch the training job.
***
```
from sagemaker.estimator import Estimator
from sagemaker.utils import name_from_base
training_job_name = name_from_base(f"jumpstart-example-{model_id}-transfer-learning")
# Create SageMaker Estimator instance
ic_estimator = Estimator(
role=aws_role,
image_uri=train_image_uri,
source_dir=train_source_uri,
model_uri=train_model_uri,
entry_point="transfer_learning.py",
instance_count=1,
instance_type=training_instance_type,
max_run=360000,
hyperparameters=hyperparameters,
output_path=s3_output_location,
)
# Launch a SageMaker Training job by passing s3 path of the training data
ic_estimator.fit({"training": training_dataset_s3_path}, logs=True)
```
### 4.4. Deploy & run Inference on the fine-tuned model
***
A trained model does nothing on its own. We now want to use the model to perform inference. For this example, that means predicting the class label of an image. We follow the same steps as in [3. Run inference on the pre-trained model](#3.-Run-inference-on-the-pre-trained-model). We start by retrieving the jumpstart artifacts for deploying an endpoint. However, instead of base_predictor, we deploy the `ic_estimator` that we fine-tuned.
***
```
inference_instance_type = "ml.m5.xlarge"
# Retrieve the inference docker container uri
deploy_image_uri = image_uris.retrieve(
region=None,
framework=None,
image_scope="inference",
model_id=model_id,
model_version=model_version,
instance_type=inference_instance_type,
)
# Retrieve the inference script uri
deploy_source_uri = script_uris.retrieve(
model_id=model_id, model_version=model_version, script_scope="inference"
)
endpoint_name = name_from_base(f"jumpstart-example-FT-{model_id}-")
# Use the estimator from the previous step to deploy to a SageMaker endpoint
finetuned_predictor = ic_estimator.deploy(
initial_instance_count=1,
instance_type=inference_instance_type,
entry_point="inference.py",
image_uri=deploy_image_uri,
source_dir=deploy_source_uri,
endpoint_name=endpoint_name,
)
```
---
Next, we download example images of a rose and a sunflower from the S3 bucket for inference.
---
```
s3_bucket = f"jumpstart-cache-prod-{aws_region}"
key_prefix = "training-datasets/tf_flowers"
def download_from_s3(images):
for filename, image_key in images.items():
boto3.client("s3").download_file(s3_bucket, f"{key_prefix}/{image_key}", filename)
flower_images = {
"img1.jpg": "roses/10503217854_e66a804309.jpg",
"img2.jpg": "sunflowers/1008566138_6927679c8a.jpg",
}
download_from_s3(flower_images)
```
---
Next, we query the finetuned model, parse the response and display the predictions.
---
```
from IPython.core.display import HTML
for image_filename in flower_images.keys():
with open(image_filename, "rb") as file:
img = file.read()
query_response = finetuned_predictor.predict(
img, {"ContentType": "application/x-image", "Accept": "application/json;verbose"}
)
model_predictions = json.loads(query_response)
predicted_label = model_predictions["predicted_label"]
display(
HTML(
f'<img src={image_filename} alt={image_filename} align="left" style="width: 250px;"/>'
f"<figcaption>Predicted Label: {predicted_label}</figcaption>"
)
)
```
---
Next, we clean up the deployed endpoint.
---
```
# Delete the SageMaker endpoint and the attached resources
finetuned_predictor.delete_model()
finetuned_predictor.delete_endpoint()
```
# Why You Should Hedge Beta and Sector Exposures (Part I)
by Jonathan Larkin and Maxwell Margenot
Part of the Quantopian Lecture Series:
* [www.quantopian.com/lectures](https://www.quantopian.com/lectures)
* [github.com/quantopian/research_public](https://github.com/quantopian/research_public)
---
Whenever we have a trading strategy of any sort, we need to be considering the impact of systematic risk. There needs to be some risk involved in a strategy in order for there to be a return above the risk-free rate, but systematic risk poisons the well, so to speak. By its nature, systematic risk provides a commonality between the many securities in the market that cannot be diversified away. As such, we need to construct a hedge to get rid of it.
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.covariance import LedoitWolf
import seaborn as sns
import statsmodels.api as sm
```
# The Fundamental Law of Asset Management
The primary driver of the value of any strategy is whether or not it provides a compelling risk-adjusted return, i.e., the Sharpe Ratio. As expressed in [The Foundation of Algo Success](https://blog.quantopian.com/the-foundation-of-algo-success/) and "The Fundamental Law of Active Management", by Richard Grinold, Sharpe Ratio can be decomposed into two components, skill and breadth, as:
$$IR = IC \sqrt{BR}$$
Technically, this is the definition of the Information Ratio (IR), but for our purposes it is equivalent to the Sharpe Ratio. The IR is the ratio of the excess return of a portfolio over its benchmark per unit active risk, i.e., the excess return of a long-only portfolio less its benchmark per unit tracking error. In the time of Grinold’s publication, however, long/short investing was a rarity. Today, in the world of hedge funds and long/short investing, there is no benchmark. We seek absolute returns so, in this case, the IR is equivalent to the Sharpe ratio.
In this equation, skill is measured by IC (Information Coefficient), calculated with [Alphalens](https://github.com/quantopian/alphalens). The IC is essentially the Spearman rank correlation, used to correlate your prediction and its realization. Breadth is measured as the number of **independent** bets in the period. The takeaway from this "law" is that, with any strategy, we need to:
1. Bet well (high IC),
2. Bet often (high number of bets), *and*
3. **Make independent bets**
If the bets are completely independent, then breadth is the total number of bets we have made for every individual asset, the number of assets times the number of periods. If the bets are not independent then the **effective breadth** can be much much less than the number of assets. Let's see precisely what beta exposure and sector exposure do to **effective breadth**.
<div class="alert alert-warning">
<b>TL;DR:</b> Beta exposure and sector exposure lead to a significant increase in correlation among bets. Portfolios with beta and sector bets have very low effective breadth. In order to have high Sharpe then, these portfolios must have very high IC. It is easier to increase effective breadth by hedging beta and sector exposure than it is to increase your IC.
</div>
# Forecasts and Bet Correlation
We define a bet as the forecast of the *residual* of a security return. This forecast can be implicit -- i.e., we buy a stock and thus implicity we forecast that the stock will go up. What though do we mean by *residual*? Without any fancy math, this simply means the return **less a hedge**. Let's work through three examples. We use the Ledoit-Wolf covariance estimator to assess our covariance in all cases. For more information on why we use Ledoit-Wolf instead of typical sample covariance, check out [Estimating Covariance Matrices](https://www.quantopian.com/lectures/estimating-covariance-matrices).
### Example 1: No Hedge!
If we go long on a set of securities, but do not hold any short positions, there is no hedge! So the *residual* is the stock return itself.
$$r_{resid,i} = r_i$$
Let's see what the correlation of our bets are in this case.
```
tickers = ['WFC', 'JPM', 'USB', 'XOM', 'BHI', 'SLB'] # The securities we want to go long on
historical_prices = get_pricing(tickers, start_date='2015-01-01',end_date='2017-02-22') # Obtain prices
rets = historical_prices['close_price'].pct_change().fillna(0) # Calculate returns
lw_cov = LedoitWolf().fit(rets).covariance_ # Calculate Ledoit-Wolf estimator
def extract_corr_from_cov(cov_matrix):
# Linear algebra result:
# https://math.stackexchange.com/questions/186959/correlation-matrix-from-covariance-matrix
d = np.linalg.inv(np.diag(np.sqrt(np.diag(cov_matrix))))
corr = d.dot(cov_matrix).dot(d)
return corr
fig, (ax1, ax2) = plt.subplots(ncols=2)
fig.tight_layout()
corr = extract_corr_from_cov(lw_cov)
# Plot prices
left = historical_prices['close_price'].plot(ax=ax1)
# Plot covariance as a heat map
right = sns.heatmap(corr, ax=ax2, fmt='d', vmin=-1, vmax=1, xticklabels=tickers, yticklabels=tickers)
average_corr = np.mean(corr[np.triu_indices_from(corr, k=1)])
print('Average pairwise correlation: %.4f' % average_corr)
```
The result here is that we have six bets and they are all very highly correlated.
### Example 2: Beta Hedge
In this case, we will assume that each bet is hedged against the market (SPY). In this case, the residual is calculated as:
$$ r_{resid,i} = r_i - \beta_i r_m $$
where $\beta_i$ is the beta to the market of security $i$ calculated with the [CAPM](https://www.quantopian.com/lectures/the-capital-asset-pricing-model-and-arbitrage-pricing-theory), $r_i$ is the return of security $i$, and $r_m$ is the return of the market.
```
tickers = ['WFC', 'JPM', 'USB', 'SPY', 'XOM', 'BHI', 'SLB' ] # The securities we want to go long on plus SPY
historical_prices = get_pricing(tickers, start_date='2015-01-01',end_date='2017-02-22') # Obtain prices
rets = historical_prices['close_price'].pct_change().fillna(0) # Calculate returns
market = rets[symbols(['SPY'])]
stock_rets = rets.drop(symbols(['SPY']), axis=1)
residuals = stock_rets.copy()*0
for stock in stock_rets.columns:
model = sm.OLS(stock_rets[stock], market.values)
results = model.fit()
residuals[stock] = results.resid
lw_cov = LedoitWolf().fit(residuals).covariance_ # Calculate Ledoit-Wolf Estimator
fig, (ax1, ax2) = plt.subplots(ncols=2)
fig.tight_layout()
corr = extract_corr_from_cov(lw_cov)
left = (1+residuals).cumprod().plot(ax=ax1)
stock_tickers = ['WFC', 'JPM', 'USB', 'XOM', 'BHI', 'SLB']  # residuals exclude SPY, so label only the stocks
right = sns.heatmap(corr, ax=ax2, fmt='d', vmin=-1, vmax=1, xticklabels=stock_tickers, yticklabels=stock_tickers)
average_corr = np.mean(corr[np.triu_indices_from(corr, k=1)])
print('Average pairwise correlation: %.4f' % average_corr)
```
The beta hedge has brought down the average correlation significantly. Theoretically, this should improve our breadth. However, it is obvious that we are left with two highly correlated clusters. Let's see what happens when we hedge the sector risk.
### Example 3: Sector Hedge
The sector return and the market return are themselves highly correlated. As such, you cannot do a multivariate regression due to multicollinearity, a classic [violation of regression assumptions](https://www.quantopian.com/lectures/violations-of-regression-models). To hedge against both the market and a given security's sector, you first estimate the market beta residuals and then calculate the sector beta on *those* residuals.
$$
r_{resid,i} = r_i - \beta_i r_m \\
r_{resid_{SECTOR},i}= r_{resid,i} - \beta_{SECTOR,i}\,r_{resid,SECTOR}
$$
Here, $r_{resid, i}$ is the residual between the security return and a market beta hedge, $r_{resid,SECTOR}$ is the market-hedged return of the relevant sector ETF, and $r_{resid_{SECTOR}, i}$ is the residual between *that* residual and a hedge of that residual against the sector.
```
tickers = ['WFC', 'JPM', 'USB', 'XLF', 'SPY', 'XOM', 'BHI', 'SLB', 'XLE']
historical_prices = get_pricing(tickers, start_date='2015-01-01',end_date='2017-02-22')
rets = historical_prices['close_price'].pct_change().fillna(0)
# Get market hedge ticker
mkt = symbols(['SPY'])
# Get sector hedge tickers
sector_1_hedge = symbols(['XLF'])
sector_2_hedge = symbols(['XLE'])
# Identify securities for each sector
sector_1_stocks = symbols(['WFC', 'JPM', 'USB'])
sector_2_stocks = symbols(['XOM', 'BHI', 'SLB'])
market_rets = rets[mkt]
sector_1_rets = rets[sector_1_hedge]
sector_2_rets = rets[sector_2_hedge]
stock_rets = rets.drop(symbols(['XLF', 'SPY', 'XLE']), axis=1)
residuals_market = stock_rets.copy()*0
residuals = stock_rets.copy()*0
# Calculate market beta of sector 1 benchmark
model = sm.OLS(sector_1_rets.values, market.values)
results = model.fit()
sector_1_excess = results.resid
# Calculate market beta of sector 2 benchmark
model = sm.OLS(sector_2_rets.values, market.values)
results = model.fit()
sector_2_excess = results.resid
for stock in sector_1_stocks:
# Calculate market betas for sector 1 stocks
model = sm.OLS(stock_rets[stock], market.values)
results = model.fit()
# Calculate residual of security + market hedge
residuals_market[stock] = results.resid
# Calculate sector beta for previous residuals
model = sm.OLS(residuals_market[stock], sector_1_excess)
results = model.fit()
# Get final residual
residuals[stock] = results.resid
for stock in sector_2_stocks:
# Calculate market betas for sector 2 stocks
model = sm.OLS(stock_rets[stock], market.values)
results = model.fit()
# Calculate residual of security + market hedge
residuals_market[stock] = results.resid
# Calculate sector beta for previous residuals
model = sm.OLS(residuals_market[stock], sector_2_excess)
results = model.fit()
# Get final residual
residuals[stock] = results.resid
# Get covariance of residuals
lw_cov = LedoitWolf().fit(residuals).covariance_
fig, (ax1, ax2) = plt.subplots(ncols=2)
fig.tight_layout()
corr = extract_corr_from_cov(lw_cov)
left = (1+residuals).cumprod().plot(ax=ax1)
stock_tickers = ['WFC', 'JPM', 'USB', 'XOM', 'BHI', 'SLB']  # residuals exclude the hedge instruments, so label only the stocks
right = sns.heatmap(corr, ax=ax2, fmt='d', vmin=-1, vmax=1, xticklabels=stock_tickers, yticklabels=stock_tickers)
average_corr = np.mean(corr[np.triu_indices_from(corr, k=1)])
print('Average pairwise correlation: %.4f' % average_corr)
```
There we go! The sector hedge brought down the correlation between our bets to close to zero.
## Calculating Effective Breadth
This section is based on "How to calculate breadth: An evolution of the fundamental law of active portfolio management", by David Buckle; Vol. 4, 6, 393-405, 2003, _Journal of Asset Management_. Buckle derives the "semi-generalised fundamental law of active management" under several weak assumptions. The key result of this paper (for us) is a closed-form calculation of effective breadth as a function of the correlation between bets. Buckle shows that breadth, $BR$, can be modeled as
$$BR = \frac{N}{1 + \rho(N -1)}$$
where N is the number of stocks in the portfolio and $\rho$ is the assumed single correlation of the expected variation around the forecast.
```
def buckle_BR_const(N, rho):
return N/(1 + rho*(N - 1))
corr = np.linspace(start=0, stop=1.0, num=500)
plt.plot(corr, buckle_BR_const(6, corr))
plt.title('Effective Breadth as a function of Forecast Correlation (6 Stocks)')
plt.ylabel('Effective Breadth (Number of Bets)')
plt.xlabel('Forecast Correlation');
```
Here we see that in the case of the long-only portfolio, where the average correlation is 0.56, we are *effectively making only approximately 2 bets*. When we hedge beta, with a resulting average correlation of 0.22, things get a little better, *three effective bets*. When we add the sector hedge, we get close to zero correlation, and in this case the number of bets equals the number of assets, 6.
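As a quick sanity check of those numbers under the single-correlation assumption above:
```
# Sanity check: effective breadth for the average correlations found in the three examples
for rho in [0.56, 0.22, 0.0]:
    print('rho = %.2f -> effective breadth = %.2f' % (rho, buckle_BR_const(6, rho)))
```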
**More independent bets with the same IC leads to higher Sharpe ratio.**
## Using this in Practice
Trading costs money due to market impact and commissions. As such, the post hoc implementation of a hedge is almost always suboptimal. In that case, you are trading purely to hedge risk. It is preferable to think about your sector and market exposure *throughout the model development process*. Sector and market risk is naturally hedged in a pairs-style strategy; in a cross-sectional strategy, consider de-meaning the alpha vector by the sector average; with an event-driven strategy, consider adding additional alphas so you can find offsetting bets in the same sector. As a last resort, hedge with a well chosen sector ETF.
*This presentation is for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by Quantopian, Inc. ("Quantopian"). Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. In preparing the information contained herein, Quantopian, Inc. has not taken into account the investment needs, objectives, and financial circumstances of any particular investor. Any views expressed and data illustrated herein were prepared based upon information, believed to be reliable, available to Quantopian, Inc. at the time of publication. Quantopian makes no guarantees as to their accuracy or completeness. All information is subject to change and may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances.*
## Basic usage of matplotlib
Matplotlib is the module of choice whenever you want to make a nice plot.
```
# the following two lines are required inside a python script to be run on binder. They are not needed inside the notebook.
import matplotlib
matplotlib.use('Agg')
import numpy as np
import matplotlib.pyplot as plt # this is the only line required to make plots with matplotlib
# this line allows you to see the plot inside the notebook
%matplotlib inline
# This is a simple example to make a scatter plot
plt.figure() # Create an empty figure
x = [3,4,5]
y = [-1,2,2]
plt.scatter(x,y)
# This is another plot where we save the figure at the end as png file.
x = np.linspace(0,5,10)
y = x**2
plt.figure() # creates an empty figure
plt.scatter(x,y) # scatter plot
plt.xlabel("x")
plt.ylabel("y")
plt.title("Simple scatter plot")
plt.savefig("thefig.png") # Saves the figure
plt.figure()
x = np.linspace(-5.0,5.0,100)
y = 1./(1.+x**2)
plt.plot(x,y) # This time a line connect the points
plt.title(r"$\lambda$") # you can use LaTeX; a raw string avoids escape-sequence warnings
# After initializing plt.figure() you can make multiple plots
plt.figure()
plt.plot(range(10),2*np.arange(10))
plt.plot(range(10),3*np.arange(10))
plt.scatter(range(10),3*np.arange(10))
# Now let's make some simple figures
# In this case: circles
theta=np.linspace(0, 2*np.pi,50) # 50 points between 0 and 2pi
plt.figure()
plt.axis("equal")# this gives the same apparent size to the x and y axis
plt.axis("off") # this removes the axis
for i in range(5):
# Choose the center of the circle
x = 2*np.random.random()-1 # random value for the center in x
y = 2*np.random.random()-1 # random value for the center in y
r = np.random.random() # random value for the radius
plt.plot(r * np.cos(theta) + x, r * np.sin(theta) + y)
```
## Subplots
You can produce multiple plots inside the same figure.
```
plt.figure(figsize=(10,5))
a = np.linspace(0,2*np.pi,100)
plt.subplot(2,2,1)
plt.plot(a,np.sin(a))
plt.title("sin")
plt.subplot(2,2,2)
plt.plot(a,np.cos(a))
plt.title(u"cos")
plt.subplot(2,2,3)
plt.plot(a,np.tan(a))
plt.ylim(-2,2)
plt.title(u"tan")
plt.subplot(2,2,4)
plt.plot(a,np.cos(a)**2)
plt.title(r"$\cos^2$")
plt.subplots_adjust(hspace=.5) # adjusts the horizontal space between the subplots
# this an almost empty figure, only plotting text, so that you can see what is the ordering when you call plt.subplot(5,5,k)
plt.figure(figsize=(10,10))
k = 1
for n in range(1,6):
for d in range(1,6):
plt.subplot(5,5,k)
plt.plot()
plt.text(0,0,"k="+str(k))
plt.axis("off") # esto elimina los ejes
plt.axis("equal")
k = k + 1
# Drawing some roses
# http://en.wikipedia.org/wiki/Rose_%28mathematics%29
plt.figure(figsize=(10,10))
k = 1
for n in range(1,6):
for d in range(1,6):
plt.subplot(5,5,k)
kah = n/d
a = np.linspace(0,2*d*np.pi,200)
r = np.sin(kah * a)
x = r * np.cos(a)
y = r * np.sin(a)
plt.plot(x,y)
plt.axis("off")
plt.axis("equal")
k = k + 1
plt.show()
```
## Legends
```
# Adding legends to different lines in the same plot
plt.figure()
x = np.linspace(0,2*np.pi,20)
plt.plot(x,np.sin(x),"or-",label='sin(x)')
plt.plot(x,np.cos(x),"ob-",label='cos(x)')
plt.legend()
plt.axis([0,2*np.pi,-1,1])
```
### Exercise 4.1
Make a plot of the Lissajous curve corresponding to $a=5$, $b=4$.
See https://en.wikipedia.org/wiki/Lissajous_curve
### Exercise 4.2
**The solution to this exercise must use ufuncs only, i.e. `for`, `while`, and `if` statements cannot be used.**
Write a function to generate `N` points homogeneously distributed over a circle of radius `1` centered
on the origin of the coordinate system. The function must take as input `N` and return two arrays
`x` and `y` with the Cartesian coordinates of the points. Use the function and generate `1000` points to
make a plot and confirm that indeed the points are homogeneously distributed.
Write a similar function to distribute `N` points over the surface of the 3D sphere of radius `1`.
Read the documentation https://pythonprogramming.net/matplotlib-3d-scatterplot-tutorial/ to see how to
make a 3D plot.
```
import re
import docx2txt
import networkx as nx
import matplotlib.pyplot as plt
%matplotlib inline
```
## Extract programming language from Knowledge Graph
```
file_name_1 = 'Mathew Elliot.docx'
file_name_2 = 'John Guy.docx'
file_name_3 = 'Max Payne.docx'
def extract_programming_languages(file_name):
# read in word file
result = docx2txt.process(file_name)
programming_languages_pattern = re.search(r'Programming Languages:[A-Za-z,\s0-9]*\.',result)
programming_languages_line = programming_languages_pattern.group(0)
languages = re.sub("Programming Languages: ","", programming_languages_line)
languages = re.sub("\.","",languages)
languages_clean = languages.split(', ')
print(languages_clean)
return languages_clean
name_1 = file_name_1.split('.')[0]
name_2 = file_name_2.split('.')[0]
name_3 = file_name_3.split('.')[0]
languages_mathew = extract_programming_languages(file_name_1)
languages_john = extract_programming_languages(file_name_2)
languages_max = extract_programming_languages(file_name_3)
```
## Create and Visualize a Knowledge Graph
```
names = [name_1,name_2,name_3]
def draw_graph(e_dict):
# create a directed-graph from a dataframe
G=nx.from_dict_of_lists(e_dict,create_using=nx.MultiDiGraph())
plt.figure(figsize=(12,12))
pos = nx.spring_layout(G)
nx.draw(G, with_labels=True, node_color='skyblue', edge_cmap=plt.cm.Blues, pos = pos, node_size = 4500, font_size = 18)
plt.show()
```
### Knowledge Graph - Single Candidate
```
edge_dict = {}
edge_dict[names[0]] = languages_mathew
draw_graph(edge_dict)
edge_dict = {}
edge_dict[names[1]] = languages_john
draw_graph(edge_dict)
edge_dict = {}
edge_dict[names[2]] = languages_max
draw_graph(edge_dict)
```
### Knowledge Graph for Multiple Candidates
```
edge_dict = {}
edge_dict[names[0]] = languages_mathew
edge_dict[names[1]] = languages_john
edge_dict[names[2]] = languages_max
G=nx.from_dict_of_lists(edge_dict,
create_using=nx.MultiDiGraph())
plt.figure(figsize=(12,12))
pos = nx.circular_layout(G)  # arrange the nodes evenly on a circle
nx.draw(G, with_labels=True, node_color='skyblue', node_size=4500, edge_cmap=plt.cm.Blues, pos = pos, font_size=18)
plt.show()
```
## Traversing a Knowledge Graph
```
def get_max_degree_node(list_of_nodes_to_eliminate, G):
max_degree=0
all_remaining_nodes = [x for x in G.nodes() if x not in list_of_nodes_to_eliminate]
max_node=all_remaining_nodes[0]
for node in all_remaining_nodes:
degree = G.degree(node)
if degree>max_degree:
max_degree = degree
max_node = node
return max_degree, max_node
max_skill_degree, max_skill_node = get_max_degree_node(names, G)
print(max_skill_node)
print(max_skill_degree)
skill_list = languages_mathew+languages_john+languages_max
max_languages_degree, max_languages_node = get_max_degree_node(skill_list,G)
print(max_languages_node)
print(max_languages_degree)
```
## Installs & Imports
```
# Select Tensorflow 2.x version in Colab
%tensorflow_version 2.x
# Import TensorFlow and tf.keras
import tensorflow as tf
keras = tf.keras
# Import helper libraries
import numpy as np
import matplotlib.pyplot as plt
# Print TensorFlow version
version = tf.__version__
print(version)
```
## Data
### 1. Load the MNIST dataset
```
# Load mnist from keras datasets
mnist = keras.datasets.mnist
# Get the training and test data
(x_train, y_train), (x_test, y_test) = mnist.load_data()
```
### 2. Explore the MNIST dataset
```
# Inspect the training and test dataset shape
print("x_train shape:", x_train.shape, "y_train shape:", y_train.shape)
print("x_test shape:", x_test.shape, "y_test shape", y_test.shape)
# Take a look at one of the training images
index = 0
```
```
# First let's look at the label of the image
print(y_train[index])
plt.imshow(x_train[index], cmap="gray")
print(x_train[index])
# Check datatype of x_train
x_train.dtype
```
### 3. Data preprocessing
```
# Convert data to float32 and normalize the input data
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255
x_train.dtype
x_train[index]
# Reshape input data from (28, 28) to (28, 28, 1)
x_train = x_train.reshape(x_train.shape[0], 28, 28, 1)
x_test = x_test.reshape(x_test.shape[0], 28, 28, 1)
# Take a look at the dataset shapes after reshaping
print("x_train shape:", x_train.shape, "y_train shape:", y_train.shape)
print("x_test shape:", x_test.shape, "y_test shape", y_test.shape)
# One-hot encode the labels
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)
y_train[index]
```
## Model Training
### Define the model architecture
There are 3 ways to define a model with tf.Keras:
1. Sequential API
2. Functional API
3. Model subclassing
We will create a simple Convolutional Neural Network with the Sequential model API.

```
def create_model():
# Define the model architecture
model = keras.models.Sequential([
# Must define the input shape in the first layer of the neural network
keras.layers.Conv2D(filters=32, kernel_size=3, padding='same', activation='relu', input_shape=(28,28,1)),
keras.layers.MaxPooling2D(pool_size=2),
keras.layers.Conv2D(filters=64, kernel_size=3, padding='same', activation='relu'),
keras.layers.MaxPooling2D(pool_size=2),
keras.layers.Flatten(),
keras.layers.Dense(10, activation='softmax')
])
# Compile the model
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adam(),
metrics=['accuracy'])
return model
model = create_model()
# Take a look at the model summary
model.summary()
%%time
model.fit(x_train,
y_train,
batch_size=64,
epochs=3)
```
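For comparison, the same architecture could also be written with the Functional API mentioned above. This is just a sketch and is not used in the rest of this notebook:
```
# Sketch: the same CNN expressed with the Keras Functional API
def create_model_functional():
    inputs = keras.Input(shape=(28, 28, 1))
    x = keras.layers.Conv2D(32, kernel_size=3, padding='same', activation='relu')(inputs)
    x = keras.layers.MaxPooling2D(pool_size=2)(x)
    x = keras.layers.Conv2D(64, kernel_size=3, padding='same', activation='relu')(x)
    x = keras.layers.MaxPooling2D(pool_size=2)(x)
    x = keras.layers.Flatten()(x)
    outputs = keras.layers.Dense(10, activation='softmax')(x)
    model = keras.Model(inputs=inputs, outputs=outputs)
    model.compile(loss=keras.losses.categorical_crossentropy,
                  optimizer=keras.optimizers.Adam(),
                  metrics=['accuracy'])
    return model
```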
### Model evaluation
```
test_loss, test_accuracy = model.evaluate(x_test, y_test)
print("Test Loss", test_loss)
print('Test Accuracy:', test_accuracy)
predictions = model.predict(x_test)
index = 99
np.argmax(predictions[index])
plt.imshow(np.squeeze(x_test[index]))
```
## _*H2 ground state energy computation using Iterative QPE*_
This notebook demonstrates computing and graphing the ground state energy of the Hydrogen (H2) molecule over a range of inter-atomic distances using `IQPE` (Iterative Quantum Phase Estimation) algorithm. It is compared to the ground-truth energies as computed by the `ExactEigensolver`.
This notebook has been written to use the PYSCF chemistry driver. See the PYSCF chemistry driver readme if you need to install the external PySCF library that this driver requires.
First we define the `compute_energy` method, which contains the H2 molecule definition as well as the computation of its ground energy given the desired `distance` and `algorithm` (`i` is just a helper index for parallel computation to speed things up).
```
import pylab
import time
import numpy as np
import multiprocessing as mp
from qiskit import BasicAer
from qiskit.aqua import QuantumInstance, AquaError
from qiskit.aqua.algorithms.single_sample import IQPE
from qiskit.aqua.algorithms.classical import ExactEigensolver
from qiskit.chemistry import FermionicOperator
from qiskit.chemistry.aqua_extensions.components.initial_states import HartreeFock
from qiskit.chemistry.drivers import PySCFDriver, UnitsType
def compute_energy(i, distance, algorithm):
try:
driver = PySCFDriver(
atom='H .0 .0 .0; H .0 .0 {}'.format(distance),
unit=UnitsType.ANGSTROM,
charge=0,
spin=0,
basis='sto3g'
)
except:
raise AquaError('PYSCF driver does not appear to be installed')
molecule = driver.run()
qubit_mapping = 'parity'
fer_op = FermionicOperator(h1=molecule.one_body_integrals, h2=molecule.two_body_integrals)
qubit_op = fer_op.mapping(map_type=qubit_mapping, threshold=1e-10).two_qubit_reduced_operator(2)
if algorithm.lower() == 'exacteigensolver':
exact_eigensolver = ExactEigensolver(qubit_op, k=1)
result = exact_eigensolver.run()
reference_energy = result['energy']
elif algorithm.lower() == 'iqpe':
num_particles = molecule.num_alpha + molecule.num_beta
two_qubit_reduction = True
num_orbitals = qubit_op.num_qubits + (2 if two_qubit_reduction else 0)
num_time_slices = 3000
num_iterations = 12
state_in = HartreeFock(qubit_op.num_qubits, num_orbitals,
num_particles, qubit_mapping, two_qubit_reduction)
iqpe = IQPE(qubit_op, state_in, num_time_slices, num_iterations,
expansion_mode='trotter', expansion_order=1,
shallow_circuit_concat=True)
backend = BasicAer.get_backend('statevector_simulator')
quantum_instance = QuantumInstance(backend)
result = iqpe.run(quantum_instance)
else:
raise AquaError('Unrecognized algorithm.')
return i, distance, result['energy'] + molecule.nuclear_repulsion_energy, molecule.hf_energy
```
Next we set up the experiment to compute H2 ground energies for a range of inter-atomic distances, in parallel.
```
import concurrent.futures
import multiprocessing as mp
algorithms = ['iqpe', 'exacteigensolver']
start = 0.5 # Start distance
by = 0.5 # How much to increase distance by
steps = 20 # Number of steps to increase by
energies = np.empty([len(algorithms), steps+1])
hf_energies = np.empty(steps+1)
distances = np.empty(steps+1)
start_time = time.time()
max_workers = max(4, mp.cpu_count())
with concurrent.futures.ProcessPoolExecutor(max_workers=max_workers) as executor:
    # Keep track of which algorithm each future belongs to, so results are
    # stored in the right row even though futures complete out of order.
    future_to_algorithm = {}
    for j in range(len(algorithms)):
        algorithm = algorithms[j]
        for i in range(steps+1):
            d = start + i*by/steps
            future = executor.submit(
                compute_energy,
                i,
                d,
                algorithm
            )
            future_to_algorithm[future] = j
    for future in concurrent.futures.as_completed(future_to_algorithm):
        j = future_to_algorithm[future]
        i, d, energy, hf_energy = future.result()
        energies[j][i] = energy
        hf_energies[i] = hf_energy
        distances[i] = d
print(' --- complete')
print('Distances: ', distances)
print('Energies:', energies)
print('Hartree-Fock energies:', hf_energies)
print("--- %s seconds ---" % (time.time() - start_time))
```
Finally we plot the results:
```
pylab.plot(distances, hf_energies, label='Hartree-Fock')
for j in range(len(algorithms)):
pylab.plot(distances, energies[j], label=algorithms[j])
pylab.xlabel('Interatomic distance')
pylab.ylabel('Energy')
pylab.title('H2 Ground State Energy')
pylab.legend(loc='upper right')
pylab.show()
pylab.plot(distances, np.subtract(hf_energies, energies[1]), label='Hartree-Fock')
pylab.plot(distances, np.subtract(energies[0], energies[1]), label='IQPE')
pylab.xlabel('Interatomic distance')
pylab.ylabel('Energy')
pylab.title('Energy difference from ExactEigensolver')
pylab.legend(loc='upper right')
pylab.show()
```
# Generate volcanic ERF time series
Theme Song: Mt. Pinatubo<br>
Artist: The Low Frequency In Stereo<br>
Album: Futuro<br>
Released: 2009
```
from netCDF4 import Dataset, num2date
import numpy as np
import matplotlib.pyplot as pl
import pandas as pd
from ar6.utils import check_and_download
import scipy.stats
%matplotlib inline
pl.rcParams['figure.figsize'] = (16, 9)
pl.rcParams['font.size'] = 18
pl.rcParams['font.family'] = 'Arial'
def linear(delta_saod, scaling):
erf = scaling*delta_saod
return erf
```
Downloading the volcanic data from Toohey and Sigl and GloSSAC requires registration, so there is no quick way to automate this step.
Download data from:
https://cera-www.dkrz.de/WDCC/ui/cerasearch/entry?acronym=eVolv2k_v2
Put this in '../data_input_large/eVolv2k_v3_EVA_AOD_-500_1900_1.nc'
Download data from:
https://asdc.larc.nasa.gov/project/GloSSAC/GloSSAC_2.0
Put this in '../data_input_large/GloSSAC_V2.0.nc'
```
# -500 to 1900 ERF
nc = Dataset('../data_input_large/eVolv2k_v3_EVA_AOD_-500_1900_1.nc')
aod550_mt = nc.variables['aod550'][:]
lat_mt = nc.variables['lat'][:]
time_mt = nc.variables['time'][:]
nc.close()
time_mt[-51*12]
nc = Dataset('../data_input_large/GloSSAC_V2.0.nc')
data_glossac = nc.variables['Glossac_Aerosol_Optical_Depth'][:]
lat_glossac = nc.variables['lat'][:]
trp_hgt_glossac = nc.variables['trp_hgt'][:] # lat, month
alt_glossac = nc.variables['alt'][:]
nc.close()
time_mt[-51*12]
lat_glossac
alt_glossac
data_glossac[0,0,:]
lat_mt_bnds = np.concatenate([[90], 0.5*(lat_mt[1:]+lat_mt[:-1]), [-90]])
weights = -np.squeeze(np.diff(np.sin(np.radians(lat_mt_bnds))))
lat_glossac_bnds = np.concatenate(([-90], 0.5*(lat_glossac[1:]+lat_glossac[:-1]), [90]))
weights_glossac = np.diff(np.sin(np.radians(lat_glossac_bnds)))
aod_mt = np.zeros((len(time_mt)))
for i in range(len(time_mt)):
aod_mt[i] = np.average(aod550_mt[i,:],weights=weights)
angstrom = (550/525)**(-2.33)
aod_glossac = np.zeros(480)
for i in range(480):
aod_glossac[i] = np.average(data_glossac[i,:,2],weights=weights_glossac)*angstrom
check_and_download(
'../data_input_large/CMIP_1850_2014_extinction_550nm_strat_only_v3.nc',
'ftp://iacftp.ethz.ch/pub_read/luo/CMIP6/CMIP_1850_2014_extinction_550nm_strat_only_v3.nc'
)
nc = Dataset('../data_input_large/CMIP_1850_2014_extinction_550nm_strat_only_v3.nc')
ext = nc.variables['ext550'][:].transpose((2,1,0)) # time, height, lat
lev = nc.variables['altitude'][:]
lat = nc.variables['latitude'][:]
time = nc.variables['month'][:]
print(nc.variables['month'])
nc.close()
lat_bnds = np.concatenate(([-90], 0.5*(lat[1:]+lat[:-1]), [90]))
weights = np.diff(np.sin(np.radians(lat_bnds)))
tax = np.zeros(165*12)
aod_cmip6 = np.zeros(165*12)
for i in range(0,1970,12):
gl_mn_OD = np.average(np.sum(np.mean(ext[i:i+12,...],axis=0) * 0.5 ,axis=0),weights=weights) # 0.5 is thickness in km
for i in range(1980):
aod_cmip6[i] = np.average(np.sum(ext[i,...] * 0.5,axis=0),weights=weights)
aod = np.concatenate((aod_mt[:-51*12],aod_cmip6[:129*12],aod_glossac))
len(aod)
aod[28200:28812] = (1-np.linspace(0,1,612))*aod_mt[-51*12:]+np.linspace(0,1,612)*aod_cmip6[:612]
aod[29748:29868] = (1-np.linspace(0,1,120))*aod_cmip6[1548:1668] + np.linspace(0,1,120)*aod_glossac[:120]
# repeat last year
aod = np.append(aod, aod[-12:])
pl.plot(np.arange(1845+1/24,1901+1/24,1/12), aod_mt[28140:], label='Toohey & Sigl -500 to 1900 incl. background')
pl.plot(np.arange(1850+1/24,1905+1/24,1/12), aod_cmip6[:660], label='CMIP6')
pl.plot(np.arange(1845+1/24,1905+1/24,1/12), aod[28140:28860], label='blended')
pl.legend()
pl.plot(np.arange(1979+1/24,2019+1/24,1/12), aod_glossac, label='Glossac')
pl.plot(np.arange(1975+1/24,2015+1/24,1/12), aod_cmip6[-480:], label='CMIP6')
pl.plot(np.arange(1975+1/24,2020+1/24,1/12), aod[-540:], label='blended')
pl.legend()
pl.plot(aod[-528:])
pl.plot(np.arange(1845+1/24,2020,1/12), aod[28140:])
volc_erf_minus20 = np.zeros((2520))
aod_2500yr_clim = np.zeros(12)
for i in range(12):
aod_2500yr_clim[i] = np.mean(aod[i:(2250*12):12]) # change of approach: pre-industrial defined as pre-1750
for i in range(2520):
volc_erf_minus20[i] = np.mean(linear(aod[i*12:i*12+12] - aod_2500yr_clim, -20))
print (np.mean(volc_erf_minus20))
aod[i*12:i*12+12]
i
pl.plot(volc_erf_minus20)
pl.plot(aod)
years = np.arange(-500,2020, dtype=int)
df = pd.DataFrame(data=volc_erf_minus20, index=years, columns=['volcanic_erf'])
df.index.name = 'year'
df.to_csv('../data_output/volcanic_erf.csv')
months = np.arange(-500+1/24,2020,1/12)
df = pd.DataFrame(data=aod, index=months, columns=['stratospheric_AOD'])
df.index.name = 'year'
df.to_csv('../data_output/volcanic_sAOD_monthly_-50001-201912.csv')
aod_annual_mean = np.zeros(2520)
for i in range(2520):
aod_annual_mean[i] = np.mean(aod[i*12:i*12+12])
scipy.stats.linregress(aod_annual_mean, volc_erf_minus20)
```
# Optional Coding Exercise
## -- Implementing a "CART" Decision Tree From Scratch
```
%load_ext watermark
%watermark -d -u -a 'Sebastian Raschka' -v -p numpy,scipy,matplotlib
import numpy as np
```
<br>
<br>
<br>
<br>
<br>
<br>
## 1) Implementing a "CART" Decision Tree from Scratch
In this exercise, you are going to learn how to implement the CART decision tree algorithm we discussed in class. This decision tree algorithm will construct a binary decision tree based on maximizing Information Gain using the Gini Impurity measure on continuous features.
Implementing machine learning algorithms from scratch is a very important skill, and this homework will provide exercises that will help you to develop this skill. Even if you are interested in the more theoretical aspects of machine learning, being comfortable with implementing and trying out algorithms is vital for doing research, since even the more theoretical papers in machine learning are usually accompanied by experiments or simulations to a) verify results and b) compare algorithms with the state of the art.
Since many students are not expert Python programmers (yet), I will provide partial solutions to the homework tasks such that you have a framework or guide to implement the solutions. Areas that you need to fill in will be marked with comments (e.g., `# YOUR CODE`). For these partial solutions, I first implemented the functions myself, and then I replaced the parts you need to fill in with these comments. However, note that you can, of course, use more or fewer lines of code than I did. In other words, all that matters is that the function you write can create the same outputs as the ones I provide. How many lines of code you need to implement that function, and how efficient it is, does not matter here. The expected outputs for the respective functions will be provided for most functions so that you can double-check your solutions.
### 1.1) Splitting a node (4 pts)
First, we are going to implement a function that splits a dataset along a feature axis into sub-datasets. For this, we assume that the feature values are continuous (we are expecting NumPy float arrays). If the input is a NumPy integer array, we could convert it into a float array via
float_array = integer_array.astype(np.float64)
To provide an intuitive example of how the splitting function should work, suppose you are given the following NumPy array containing ten feature values (one for each of ten training examples):
np.array([0.0, 1.0, 4.0, 1.0, 0.0, 3.0, 1.0, 0.0, 1.0, 2.0])
The function you are going to implement should return a dictionary, where the dictionary keys store the information about which data points go to the left child node and which data points go to the right child node after applying a threshold for splitting.
For example, if we were to use a `split` function on the array shown above with a threshold $t=2.5$, the split function should return the following dictionary:
{
 'left': array([0, 1, 3, 4, 6, 7, 8, 9]),  # smaller than or equal to the threshold
 'right': array([2, 5]),                   # larger than the threshold
 'threshold': 2.5                          # threshold for splitting, e.g., 2.5 means <= 2.5
}
Note that we also store a "threshold" key here to keep track of what value we used for the split -- we will need this later.
Now it's your turn to implement the split function.
```
# SOLUTION
def split(array, t):
"""
Function that splits a feature based on a threshold.
Parameters
----------
array : NumPy array, type float, shape=(num_examples,)
A NumPy array containing feature values (float values).
t : float
A threshold parameter for dividing the examples into
a left and a right child node.
Returns
--------
d : dictionary of the split
A dictionary that has 3 keys, 'left', 'right', 'threshold'.
The 'threshold' simply references the threshold t we provided
as function argument. The 'left' child node is an integer array
containing the indices of the examples corresponding to feature
values with value <= t. The 'right' child node is an integer array
        that stores the indices of the examples for which the feature value > t.
"""
index = np.arange(array.shape[0])
mask = array <= t
left = index[mask]
right = index[~mask]
d = {'left': left, 'right': right, 'threshold': t}
return d
# DO NOT EDIT OR DELETE THIS CELL
ary = np.array([0.0, 1.0, 4.0, 1.0, 0.0, 3.0, 1.0, 0.0, 1.0, 2.0])
print(split(ary, t=2.5))
print(split(ary, t=1.5))
print(split(ary, t=-0.5))
print(split(ary, t=1.0))
```
### 1.2) Implement a function to compute the Gini Impurity (6 pts)
After implementing the splitting function, the next step is to implement a criterion function so that we can compare splits on different features. I.e., we use this criterion function to decide which feature is the best feature to split for growing the decision tree at each node. As discussed in class, our splitting criterion will be Information Gain. However, before we implement an Information Gain function, we need to implement a function that computes the Gini Impurity at each node, which we need to compute Information Gain. For your reference, we defined Gini Impurity as follows:
$$G(p) = 1 - \sum_i (p_i)^2$$
where you can think of $p_i$ as the proportion of examples with class label $i$ at a given node.
```
# SOLUTION
def gini(array):
"""
Function that computes the Gini Impurity.
Parameters
-----------
array : NumPy array, type int, shape=(num_examples,)
A NumPy array containing integers representing class
labels.
Returns
----------
Gini impurity (float value).
Example
----------
>>> gini(np.array([0, 0, 1, 1]))
0.5
"""
frequencies = [(array==c).sum()/array.shape[0] for c in np.unique(array)]
res = [p**2 for p in frequencies]
return 1. - sum(res)
```
TIP: To check your solution, try out the `gini` function on some example arrays. Note that the Gini impurity is maximum (0.5) if the classes are uniformly distributed; it is minimum if the array contains labels from only one single class.
```
# DO NOT EDIT OR DELETE THIS CELL
print(round(gini(np.array([0, 1, 0, 1, 1, 0])), 4))
print(round(gini(np.array([1, 2])), 4))
print(round(gini(np.array([1, 1])), 4))
print(round(gini(np.array([1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])), 4))
print(round(gini(np.array([0, 0, 0])), 4))
print(round(gini(np.array([1, 1, 1, 0, 1, 4, 4, 2, 1])), 4))
```
### 1.3) Implement Information Gain (6 pts)
Now that you have a working solution for the `gini` function, the next step is to compute the Information Gain. For your reference, information gain is computed as
$$GAIN(\mathcal{D}, x_j) = H(\mathcal{D}) - \sum_{v \in Values(x_j)} \frac{|\mathcal{D}_v|}{|\mathcal{D}|} H(\mathcal{D}_v).$$
Here, $H(\cdot)$ denotes the impurity measure (in our case, the Gini impurity you just implemented), and the $\mathcal{D}_v$ are the subsets of examples assigned to the child nodes by the split.
```
# SOLUTION
def information_gain(x_array, y_array, split_dict):
"""
Function to compute information gain.
Parameters
-----------
x_array : NumPy array, shape=(num_examples)
NumPy array containing the continuous feature
values of a given feature variable x.
y_array : NumPy array, shape=(num_examples)
        NumPy array containing the integer class labels
        corresponding to the training examples.
split_dict : dictionary
A dictionary created by the `split` function, which
contains the indices for the left and right child node.
Returns
---------
Information gain for the given split in `split_dict`.
"""
parent_gini = gini(y_array)
for child in ('left', 'right'):
# freq := |D_v| / |D|
freq = split_dict[child].shape[0] / float(x_array.shape[0])
child_gini = gini(y_array[split_dict[child]])
parent_gini -= freq*child_gini
return parent_gini
```
I added the following code cell for your convenience to double-check your solution. If your results don't match the results shown below, there is a bug in your implementation of the `information_gain` function.
```
# DO NOT EDIT OR DELETE THIS CELL
x_ary = np.array([0.0, 1.0, 4.0, 1.0, 0.0, 3.0, 1.0, 0.0, 1.0, 2.0])
y_ary = np.array([0, 1, 1, 0, 0, 0, 1, 1, 0, 0])
split_dict_1 = split(ary, t=2.5)
print(information_gain(x_array=x_ary,
y_array=y_ary,
split_dict=split_dict_1))
split_dict_2 = split(ary, t=1.5)
print(information_gain(x_array=x_ary,
y_array=y_ary,
split_dict=split_dict_2))
split_dict_3 = split(ary, t=-1.5)
print(information_gain(x_array=x_ary,
y_array=y_ary,
split_dict=split_dict_3))
```
### 1.4) Creating different splitting thresholds (4 pts)
Now, we should have almost all the main components that we need for implementing the CART decision tree algorithm: a `split` function, a `gini` function, and an `information_gain` function based on the `gini` function. However, since we are working with continuous feature variables, we need to find a good threshold $t$ on the number line of each feature, which we can use with our function `split`.
For simplicity, we are going to implement a function that creates different thresholds based on the values found in a given feature variable. More precisely, we are going to implement a function `get_thresholds` that returns the lowest and highest feature value in a feature value array, plus the midpoint between each adjacent pair of feature values (assuming the feature values are sorted).
For example, if a feature array consists of the values
[0.1, 1.2, 2.4, 2.5, 2.7, 3.3, 3.7]
the returned thresholds should be
[0.1, (0.1+1.2)/2, (1.2+2.4)/2, (2.4+2.5)/2, (2.5+2.7)/2, (2.7+3.3)/2, (3.3+3.7)/2, 3.7]
```
# SOLUTION
def get_thresholds(array):
"""
Get thresholds from a feature array.
Parameters
-----------
array : NumPy array, type float, shape=(num_examples,)
Array with feature values.
Returns
-----------
NumPy float array containing thresholds.
"""
sorted_ary = np.sort(array)
output_array = np.zeros(array.shape[0]+1, dtype=np.float64)
output_array[0] = sorted_ary[0]
output_array[-1] = sorted_ary[-1]
for i in range(array.shape[0]-1):
output_array[i+1] = (sorted_ary[i] + sorted_ary[i+1])/2
return output_array
# DO NOT EDIT OR DELETE THIS CELL
a = np.array([0.1, 1.2, 2.4, 2.5, 2.7, 3.3, 3.7])
print(get_thresholds(a))
b = np.array([3.7, 2.4, 1.2, 2.5, 3.3, 2.7, 0.1])
print(get_thresholds(b))
```
### 1.5) Selecting the best splitting threshold (4 pts)
In the previous section, we implemented a function `get_thresholds` to create different splitting thresholds for a given feature. In this section, we are now implementing a function that selects the best threshold (the threshold that results in the largest information gain) from the array returned by `get_thresholds` by combining the
- `get_thresholds`
- `split`
- `information_gain`
functions.
```
# SOLUTION
def get_best_threshold(x_array, y_array):
"""
Function to obtain the best threshold
based on maximizing information gain.
Parameters
-----------
x_array : NumPy array, type float, shape=(num_examples,)
Feature array containing the feature values of a feature
for each training example.
y_array : NumPy array, type int, shape=(num_examples,)
NumPy array containing the class labels for each
training example.
Returns
-----------
A float representing the best threshold to split the given
feature variable on.
"""
all_thresholds = get_thresholds(x_array)
info_gains = np.zeros(all_thresholds.shape[0])
for idx, t in enumerate(all_thresholds):
split_dict_t = split(x_array, t=t)
ig = information_gain(x_array=x_array,
y_array=y_array,
split_dict=split_dict_t)
info_gains[idx] = ig
best_idx = np.argmax(info_gains)
best_threshold = all_thresholds[best_idx]
return best_threshold
# DO NOT EDIT OR DELETE THIS CELL
x_ary = np.array([0.0, 1.0, 4.0, 1.0, 0.0, 3.0, 1.0, 0.0, 1.0, 2.0])
y_ary = np.array([0, 1, 1, 0, 0, 0, 1, 1, 0, 0])
print(get_best_threshold(x_array=x_ary,
y_array=y_ary))
x_ary = np.array([0.0, 3.0, 1.0, 0.0, 1.0, 2.0, 0.0, 1.0, 4.0, 1.0,])
y_ary = np.array([0, 0, 1, 1, 0, 0, 0, 1, 1, 0])
print(get_best_threshold(x_array=x_ary,
y_array=y_ary))
```
### 1.6) Decision Tree Splitting (4 pts)
The next task is to combine all the previously developed functions to recursively split a dataset on its different features to construct a decision tree that separates the examples from different classes well. We will call this function `make_tree`.
For simplicity, the decision tree returned by the `make_tree` function will be represented by a Python dictionary. To illustrate this, consider the following dataset consisting of 6 training examples (class labels are 0 or 1) and 2 feature variables $X_0$ and $X_1$:
```
Inputs:
[[0. 0.]
[0. 1.]
[1. 0.]
[1. 1.]
[2. 0.]
[2. 1.]]
Labels:
[0 1 0 1 1 1]
```
Based on this dataset with 6 training examples and two features, the resulting decision tree in form of the Python dictionary should look like as follows:
You should return a dictionary with the following form:
```
{'X_1 <= 0.000000': {'X_0 <= 1.500000': array([0, 0]),
'X_0 > 1.500000': array([1])
},
'X_1 > 0.000000': array([1, 1, 1])
}
```
Let me further illustrate what the different parts of the dictionary mean. Here, the `'X_1'` in `'X_1 <= 0'` refers to the second feature (the column with index 1 of the NumPy array; remember that Python starts indexing at 0, in contrast to R).
- 'X_1 <= 0': For training examples stored in this node where the 2nd feature is less than or equal to 0.
- 'X_1 > 0': For training examples stored in this node where the 2nd feature is larger than 0.
The "array" is a NumPy array that stores the class labels of the training examples at that node. In the case of `'X_1 <= 0'` we actually store actually a sub-dictionary, because this node can be split further into 2 child nodes with `'X_0 <= 1.500000'` and `'X_0 > 1.500000'`.
```
# SOLUTION
def make_tree(X, y):
"""
A recursive function for building a binary decision tree.
Parameters
----------
X : NumPy array, type float, shape=(num_examples, num_features)
A design matrix representing the feature values.
y : NumPy array, type int, shape=(num_examples,)
NumPy array containing the class labels corresponding to the training examples.
Returns
----------
Dictionary representation of the decision tree.
"""
# ZHONGJIE:
# This is also ok:
# if y.shape[0] == 1 or y.shape[0] == 0:
# return y
# Return array if node is empty or pure (all labels are the same or 1 example in leaf node)
if y.shape[0] == 1 or y.shape[0] == 0 or (np.unique(y)).shape[0] == 1:
return y
# Select the best threshold for each feature
thresholds = np.array([get_best_threshold(x_array=feature, y_array=y) for feature in X.T])
# Compute information gain for each feature based on the best threshold for each feature
gains = np.zeros(X.shape[1])
split_dicts = []
for idx, (feature, threshold) in enumerate(zip(X.T, thresholds)):
split_dict = split(feature, threshold)
split_dicts.append(split_dict)
ig = information_gain(feature, y, split_dict)
gains[idx] = ig
# Early stopping if there is no information gain
if (gains <= 1e-05).all():
return y
# Else, get best feature
best_feature_idx = np.argmax(gains)
results = {}
subset_dict = split_dicts[best_feature_idx]
for node in ('left', 'right'):
child_y_subset = y[subset_dict[node]]
child_X_subset = X[subset_dict[node]]
if node == 'left':
results["X_%d <= %f" % (best_feature_idx, subset_dict['threshold'])] = \
make_tree(child_X_subset, child_y_subset)
else:
results["X_%d > %f" % (best_feature_idx, subset_dict['threshold'])] = \
make_tree(child_X_subset, child_y_subset)
return results
```
I added the following code cell for your convenience to double-check your solution. If your results don't match the results shown below, there is a bug in your implementation of the `make_tree` function.
```
# DO NOT EDIT OR DELETE THIS CELL
x1 = np.array([0., 0., 1., 1., 2., 2.])
x2 = np.array([0., 1., 0., 1., 0., 1.])
X = np.array( [x1, x2]).T
y = np.array( [0, 1, 0, 1, 1, 1])
print('Inputs:\n', X)
print('\nLabels:\n', y)
print('\nDecision tree:\n', make_tree(X, y))
```
### 1.7) Building a Decision Tree API (4 pts)
The final step of this part of the homework is now to write an API around our decision tree code so that we can use it for making predictions. Here, we will use the common convention, established by scikit-learn, to implement the decision tree as a Python class with
- a `fit` method that learns the decision tree model from a training set via the `make_tree` function we already implemented;
- a `predict` method to predict the class labels of training examples or any unseen data points.
For making predictions, since not all leaf nodes are guaranteed to be single training examples, we will use a majority voting function to predict the class label as discussed in class. I already implemented a `_traverse` method, which will recursively traverse a decision tree dictionary that is produced by the `make_tree` function.
Note that for simplicity, the `predict` method will only be able to accept one data point at a time (instead of a collection of data points). Hence `x` is a vector of size $\mathbb{R}^m$, where $m$ is the number of features. I use capital letters `X` to denote a matrix of size $\mathbb{R}^{n\times m}$, where $n$ is the number of training examples.
```
# SOLUTION
class CARTDecisionTreeClassifer(object):
def __init__(self):
pass
def fit(self, X, y):
self.splits_ = make_tree(X, y)
def _majority_vote(self, label_array):
return np.argmax(np.bincount(label_array))
def _traverse(self, x, d):
if isinstance(d, np.ndarray):
return d
for key in d:
if '<=' in key:
name, value = key.split(' <= ')
feature_idx = int(name.split('_')[-1])
value = float(value)
if x[feature_idx] <= value:
return self._traverse(x, d[key])
else:
name, value = key.split(' > ')
feature_idx = int(name.split('_')[-1])
value = float(value)
if x[feature_idx] > value:
return self._traverse(x, d[key])
def predict(self, x):
label_array = self._traverse(x, self.splits_)
return self._majority_vote(label_array)
```
I added the following code cell for your convenience to double-check your solution. If your results don't match the results shown below, there is a bug in your implementation of the `CARTDecisionTreeClassifer` class (or in one of the functions it builds on).
```
# DO NOT EDIT OR DELETE THIS CELL
tree = CARTDecisionTreeClassifer()
tree.fit(X, y)
print(tree.predict(np.array([0., 0.])))
print(tree.predict(np.array([0., 1.])))
print(tree.predict(np.array([1., 0.])))
print(tree.predict(np.array([1., 0.])))
print(tree.predict(np.array([1., 1.])))
print(tree.predict(np.array([2., 0.])))
print(tree.predict(np.array([2., 1.])))
```
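As a small convenience (not part of the graded exercise), predictions for an entire feature matrix can be collected with a comprehension, since `predict` only handles one data point at a time. A minimal sketch, reusing the `tree`, `X`, and `np` objects defined above:
```
# Sketch: batch predictions by looping over the rows of X.
predictions = np.array([tree.predict(x) for x in X])
print(predictions)  # for the toy dataset above, this should reproduce the training labels
```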
# Testing
In the Digital Humanities Lab, we're going to be ensuring that our code is thoroughly documented and tested. This is important because we are collaborating with others and we will also be sharing our code publicly. Once you get used to writing documentation, then tests, then code, you may find that writing the code comes more easily because you have already thought through what a function (for example) does and what the possible edge cases are.
Not being careful and thorough with testing can cause significant problems. Some historical examples of failure due to not testing correctly or not testing thoroughly include:
* [Mars Probe Lost Due to Simple Math Error](http://articles.latimes.com/1999/oct/01/news/mn-17288)
* [Why carmakers always insisted on male crash dummies](https://www.boston.com/cars/news-and-reviews/2012/08/22/why-carmakers-always-insisted-on-male-crash-test-dummies)
* [Boeing 787 Dreamliners contain a potentially catastrophic software bug](https://arstechnica.com/information-technology/2015/05/boeing-787-dreamliners-contain-a-potentially-catastrophic-software-bug/)
While the lab will not be developing probes, cars, or airplanes, it is still important to test code to ensure that it is useful to other developers and end users. We recommend writing the test prior to writing the code.
## doctest
Python comes prepackaged with a test framework module called [doctest](https://docs.python.org/3.7/library/doctest.html). This module searches for pieces of text within docstrings that look like interactive Python sessions, then executes those sessions in order to confirm that the code runs exactly as expected.
The doctests also double as documentation for our code. We'll go through an example of using doctest with a function we create called `count_vowels()`.
We start by naming the function and writing a doctest in triple quotes.
```
def count_vowels(word):
"""
Given a single word, return the number of vowels in that single word.
>>> count_vowels('paris')
2
"""
```
So far, we have written a sentence on what the function does, and a test that if the word `paris` is provided, the function will return `2` as there are two vowels in that word. This provides a line of documentation and an example of the function with expected output for humans.
We can also add documentation for computers to read, telling it that the computer should expect the parameter of `word` to be of type string, and that the function should return an integer.
```
def count_vowels(word):
"""
Given a single word, return the number of vowels in that single word.
>>> count_vowels('paris')
2
:param word: str
:return: int
"""
```
With this completed, we need to write the function.
We can run doctest by importing the module with `import doctest` and end our Python program with:
```python
doctest.testmod()
```
```
import doctest
def count_vowels(word):
"""
Given a single word, return the number of vowels in that single word.
>>> count_vowels('paris')
2
:param word: str
:return: int
"""
total = 0
for letter in word:
if letter in 'aeiou':
total += 1
return total
doctest.testmod()
count_vowels('paris')
```
So far our test works, and our function runs as expected. But, what happens if we use a word with an upper-case vowel?
```
def count_vowels(word):
"""
Given a single word, return the number of vowels in that single word.
>>> count_vowels('paris')
2
>>> count_vowels('Oslo')
2
:param word: str
:return: int
"""
total = 0
for letter in word:
if letter in 'aeiou':
total += 1
return total
doctest.testmod()
```
When we run the code above, the test fails because the upper-case `O` is not counted. Let's amend that.
```
def count_vowels(word):
"""
Given a single word, return the number of vowels in that single word.
>>> count_vowels('paris')
2
>>> count_vowels('Oslo')
2
:param word: str
:return: int
"""
total = 0
for letter in word.lower():
if letter in 'aeiou':
total += 1
return total
doctest.testmod()
count_vowels('Oslo')
```
With doctest, you should always have an estimate ready so that you can verify what your program returns. For a novel with 316,059 words like *Middlemarch*, how many vowels would you expect to count?
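A rough back-of-the-envelope estimate might look like the following sketch; the average word length and vowel fraction are assumptions for illustration, not values measured from the novel:
```
# Rough sanity-check estimate for a 316,059-word novel.
word_count = 316059
avg_letters_per_word = 5   # assumption
vowel_fraction = 0.38      # assumption: roughly 38% of English letters are vowels
estimate = int(word_count * avg_letters_per_word * vowel_fraction)
print(f"Expected vowels: roughly {estimate:,}")  # on the order of 600,000
```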
From here, you can work to improve the tests, and through this testing improve the code so that it can accommodate edge cases and the full range of possibilities. Start with the following:
* Write a test for a type that is not a string (e.g. an integer)
* Write a test for words that have the letter `y`, which is sometimes considered a vowel in English.
* Write a test to handle `word` being a sentence — do you want a sentence to be passed to `word`?
* Write a test to deal with accented vowels, like the `ï` in `naïve` or the two `é`s in `résumé` (a sketch of one possible approach to this case follows the list).
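For the accented-vowel exercise, one possible (but by no means the only) approach is to normalize the word and strip combining marks before counting, so that `ï` counts as `i` and `é` as `e`. A minimal sketch:
```
import doctest
import unicodedata


def count_vowels(word):
    """
    Given a single word, return the number of vowels in that single word.
    Accented vowels are counted as vowels.

    >>> count_vowels('paris')
    2
    >>> count_vowels('Oslo')
    2
    >>> count_vowels('naïve')
    3
    >>> count_vowels('résumé')
    3

    :param word: str
    :return: int
    """
    total = 0
    # Decompose accented characters (NFD) and drop the combining marks,
    # so 'ï' is counted as 'i' and 'é' as 'e'.
    normalized = ''.join(c for c in unicodedata.normalize('NFD', word)
                         if not unicodedata.combining(c))
    for letter in normalized.lower():
        if letter in 'aeiou':
            total += 1
    return total


doctest.testmod()
```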
## Resources
* [Python 3.7 documentation for the doctest module](https://docs.python.org/3.7/library/doctest.html)
* [doctest — Testing through Documentation](https://pymotw.com/3/doctest/)
* [doctest Introduction](http://pythontesting.net/framework/doctest/doctest-introduction/)
# **1D Map Conjugacy for the Kuramoto-Sivashinsky PDE**
```
import numpy as np
from utils import Kuramoto
from scipy.integrate import odeint
import matplotlib.pyplot as plt
# Set plotting parameters
parameters = {'axes.labelsize': 16,
'axes.titlesize': 18,
'legend.fontsize': 13,
'xtick.labelsize': 16,
'ytick.labelsize': 16,
'figure.figsize': (12, 8),
'figure.titlesize': 18,
'font.serif': 'Computer Modern Roman',
}
plt.rcParams.update(parameters)
plt.rc('text', usetex=True)
```
## **Generate Measurement Data**
```
# Continuous-time simulation data
# Initializations
dt = .005
nu = 0.0298
modes = 14
t_span = np.arange(0, 5000, dt)
x0 = 0.1*np.random.rand(modes)
# Solution data
xsol = []
xsol = odeint(Kuramoto, x0, t_span,args = (nu,modes,))
# Plot Kuramoto-Sivashinsky Solution (x_10 vs. x_1)
plt.plot(xsol[1000:10000,0],xsol[1000:10000,9],'b')
plt.title("The Kuramoto-Sivashinsky Attractor")
plt.xlabel("$x_1(t)$")
plt.ylabel("$x_{10}(t)$")
# Create section data
Psec = []
temp = [0]*len(xsol[:,1])
count = 0
for m in range(len(temp)-1):
if xsol[m,0] <= 0 and xsol[m+1,0] >= 0: # section condition: x_1 = 0
temp[count] = xsol[m,1:]
count = count + 1
Psec.append(np.array(temp[1:count]))
xn, xnp1 = Psec[0][:-1], Psec[0][1:]
#Scale data
max_xn = xn.max()
min_xn = xn.min()
slope = 1/(max_xn - min_xn)
yint = -slope*min_xn
xn = slope*xn + yint
xnp1 = slope*xnp1 + yint
# Build input data matrix of forward iterates
forward_iters = 50
xnforward = []
xnp1 = xnp1[:-forward_iters]
for j in range(forward_iters):
xnforward.append(xn[j:-forward_iters+j])
# Plot Kuramoto-Sivashinsky Section Data
plt.plot(Psec[0][:-1,1],Psec[0][1:,1],'k.')
plt.title("Intersection of the Kuramoto-Sivashinsky Attractor with the Poincare Section")
plt.xlabel("$x_{2,n}$")
plt.ylabel("$x_{2,n+1}$")
```
## **Network Training**
```
import tensorflow as tf
from architecture_1D import Conjugacy
width = 200
size_x = 13 #number of x variables
degree = 2 #degree of latent mapping
activation = 'selu'
steps = 1
numblks_in = 4
numblks_out = 4
c1 = 3.5 # initialized mapping coefficients
c2 = -3.5
c3 = 0.0
c4 = 0.0
c5 = 0.0
stretchloss = 1
learning_rate = 0.00001
conjugacy = Conjugacy(width, size_x, activation, degree, steps, numblks_in, numblks_out, c1, c2, c3, c4, c5, stretchloss)
callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience = 10) # patience is set intentionally low to speed up examples
optimizer = tf.keras.optimizers.Adam(lr=learning_rate)
conjugacy.compile(optimizer=optimizer, loss = 'mse')
conjugacy.fit(xnforward, xnp1, callbacks = [callback], epochs = 1000)
```
## **Network Output**
```
# Print Discovered Mapping
print('Discovered Conjugate Mapping:')
print('')
print('g(y) =',conjugacy.c1.numpy(),'*y +',conjugacy.c2.numpy(),'*y^2')
# Network Summary
print('')
conjugacy.summary()
```
## Training excitatory-inhibitory recurrent network
Here we will train recurrent neural network with excitatory and inhibitory neurons on a simple perceptual decision making task.
[](https://colab.research.google.com/github/gyyang/nn-brain/blob/master/EI_RNN.ipynb)
## Install dependencies
```
# # If on Google Colab, uncomment to install neurogym to use cognitive tasks
# ! git clone https://github.com/gyyang/neurogym.git
# %cd neurogym/
# ! pip install -e .
```
## Defining a perceptual decision making task
```
# We will import the task from the neurogym library
import neurogym as ngym
# Environment
task = 'PerceptualDecisionMaking-v0'
timing = {
'fixation': ('choice', (50, 100, 200, 400)),
'stimulus': ('choice', (100, 200, 400, 800)),
}
kwargs = {'dt': 20, 'timing': timing}
seq_len = 100
# Make supervised dataset
dataset = ngym.Dataset(task, env_kwargs=kwargs, batch_size=16,
seq_len=seq_len)
# A sample environment from dataset
env = dataset.env
# Visualize the environment with 2 sample trials
_ = ngym.utils.plot_env(env, num_trials=2)
# Network input and output size
input_size = env.observation_space.shape[0]
output_size = env.action_space.n
```
## Define E-I recurrent network
Here we define an E-I recurrent network; in particular, no self-connections are allowed.
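For reference, the update implemented in the `recurrence` method below can be written (reading it directly off the code, with $\alpha = \Delta t / \tau$) as

$$\mathbf{s}_{t+1} = (1-\alpha)\,\mathbf{s}_t + \alpha\left(W_{\mathrm{in}}\mathbf{u}_t + W_{\mathrm{rec}}\,\mathbf{r}_t + \mathbf{b}\right) + \sqrt{2\alpha}\,\sigma_{\mathrm{rec}}\,\boldsymbol{\xi}_t, \qquad \mathbf{r}_t = \mathrm{ReLU}(\mathbf{s}_t),$$

where $W_{\mathrm{rec}}$ is constrained so that columns from excitatory units are non-negative, columns from inhibitory units are non-positive, and the diagonal (self-connections) is zero.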
```
# Define networks
import torch
import torch.nn as nn
from torch.nn import init
from torch.nn import functional as F
import math
class PosWLinear(nn.Module):
r"""Applies a linear transformation to the incoming data: :math:`y = xA^T + b`
Same as nn.Linear, except that weight matrix is constrained to be non-negative
"""
__constants__ = ['bias', 'in_features', 'out_features']
def __init__(self, in_features, out_features, bias=True):
super(PosWLinear, self).__init__()
self.in_features = in_features
self.out_features = out_features
self.weight = torch.nn.Parameter(torch.Tensor(out_features, in_features))
if bias:
self.bias = torch.nn.Parameter(torch.Tensor(out_features))
else:
self.register_parameter('bias', None)
self.reset_parameters()
def reset_parameters(self):
init.kaiming_uniform_(self.weight, a=math.sqrt(5))
if self.bias is not None:
fan_in, _ = init._calculate_fan_in_and_fan_out(self.weight)
bound = 1 / math.sqrt(fan_in)
init.uniform_(self.bias, -bound, bound)
def forward(self, input):
# weight is non-negative
return F.linear(input, torch.abs(self.weight), self.bias)
class EIRecLinear(nn.Module):
r"""Recurrent E-I Linear transformation.
Args:
hidden_size: int, layer size
e_prop: float between 0 and 1, proportion of excitatory units
"""
__constants__ = ['bias', 'hidden_size', 'e_prop']
def __init__(self, hidden_size, e_prop, bias=True):
super().__init__()
self.hidden_size = hidden_size
self.e_prop = e_prop
self.e_size = int(e_prop * hidden_size)
self.i_size = hidden_size - self.e_size
self.weight = nn.Parameter(torch.Tensor(hidden_size, hidden_size))
mask = np.tile([1]*self.e_size+[-1]*self.i_size, (hidden_size, 1))
np.fill_diagonal(mask, 0)
self.mask = torch.tensor(mask, dtype=torch.float32)
if bias:
self.bias = nn.Parameter(torch.Tensor(hidden_size))
else:
self.register_parameter('bias', None)
self.reset_parameters()
def reset_parameters(self):
init.kaiming_uniform_(self.weight, a=math.sqrt(5))
# Scale E weight by E-I ratio
self.weight.data[:, :self.e_size] /= (self.e_size/self.i_size)
if self.bias is not None:
fan_in, _ = init._calculate_fan_in_and_fan_out(self.weight)
bound = 1 / math.sqrt(fan_in)
init.uniform_(self.bias, -bound, bound)
def effective_weight(self):
return torch.abs(self.weight) * self.mask
def forward(self, input):
# weight is non-negative
return F.linear(input, self.effective_weight(), self.bias)
class EIRNN(nn.Module):
"""E-I RNN.
Reference:
Song, H.F., Yang, G.R. and Wang, X.J., 2016.
Training excitatory-inhibitory recurrent neural networks
for cognitive tasks: a simple and flexible framework.
PLoS computational biology, 12(2).
    Args:
        input_size: Number of input neurons
        hidden_size: Number of hidden neurons
        e_prop: float between 0 and 1, proportion of excitatory neurons
    Inputs:
        input: (seq_len, batch, input_size)
        hidden: (batch, hidden_size)
"""
def __init__(self, input_size, hidden_size, dt=None,
e_prop=0.8, sigma_rec=0, **kwargs):
super().__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.e_size = int(hidden_size * e_prop)
self.i_size = hidden_size - self.e_size
self.num_layers = 1
self.tau = 100
if dt is None:
alpha = 1
else:
alpha = dt / self.tau
self.alpha = alpha
self.oneminusalpha = 1 - alpha
# Recurrent noise
self._sigma_rec = np.sqrt(2*alpha) * sigma_rec
# self.input2h = PosWLinear(input_size, hidden_size)
self.input2h = nn.Linear(input_size, hidden_size)
        self.h2h = EIRecLinear(hidden_size, e_prop=e_prop)
def init_hidden(self, input):
batch_size = input.shape[1]
return (torch.zeros(batch_size, self.hidden_size).to(input.device),
torch.zeros(batch_size, self.hidden_size).to(input.device))
def recurrence(self, input, hidden):
"""Recurrence helper."""
state, output = hidden
total_input = self.input2h(input) + self.h2h(output)
state = state * self.oneminusalpha + total_input * self.alpha
state += self._sigma_rec * torch.randn_like(state)
output = torch.relu(state)
return state, output
def forward(self, input, hidden=None):
"""Propogate input through the network."""
if hidden is None:
hidden = self.init_hidden(input)
output = []
steps = range(input.size(0))
for i in steps:
hidden = self.recurrence(input[i], hidden)
output.append(hidden[1])
output = torch.stack(output, dim=0)
return output, hidden
class Net(nn.Module):
"""Recurrent network model.
Args:
input_size: int, input size
hidden_size: int, hidden size
output_size: int, output size
rnn: str, type of RNN, lstm, rnn, ctrnn, or eirnn
"""
def __init__(self, input_size, hidden_size, output_size, **kwargs):
super().__init__()
# Excitatory-inhibitory RNN
self.rnn = EIRNN(input_size, hidden_size, **kwargs)
# self.fc = PosWLinear(self.rnn.e_size, output_size)
self.fc = nn.Linear(self.rnn.e_size, output_size)
def forward(self, x):
rnn_activity, _ = self.rnn(x)
rnn_e = rnn_activity[:, :, :self.rnn.e_size]
out = self.fc(rnn_e)
return out, rnn_activity
```
## Train the network on the decision making task
```
import torch.optim as optim
import numpy as np
# Instantiate the network and print information
hidden_size = 50
net = Net(input_size=input_size, hidden_size=hidden_size,
output_size=output_size, dt=env.dt, sigma_rec=0.15)
print(net)
# Use Adam optimizer
optimizer = optim.Adam(net.parameters(), lr=0.001)
criterion = nn.CrossEntropyLoss()
running_loss = 0
running_acc = 0
print_step = 200
for i in range(5000):
inputs, labels = dataset()
inputs = torch.from_numpy(inputs).type(torch.float)
labels = torch.from_numpy(labels.flatten()).type(torch.long)
# in your training loop:
optimizer.zero_grad() # zero the gradient buffers
output, activity = net(inputs)
output = output.view(-1, output_size)
loss = criterion(output, labels)
loss.backward()
optimizer.step() # Does the update
running_loss += loss.item()
if i % print_step == (print_step - 1):
running_loss /= print_step
print('Step {}, Loss {:0.4f}'.format(i+1, running_loss))
running_loss = 0
```
## Run the network post-training and record neural activity
```
env.reset(no_step=True)
env.timing.update({'fixation': ('constant', 500),
'stimulus': ('constant', 500)})
perf = 0
num_trial = 500
activity_dict = {}
trial_infos = {}
stim_activity = [[], []] # response for ground-truth 0 and 1
for i in range(num_trial):
env.new_trial()
ob, gt = env.ob, env.gt
inputs = torch.from_numpy(ob[:, np.newaxis, :]).type(torch.float)
action_pred, rnn_activity = net(inputs)
# Compute performance
action_pred = action_pred.detach().numpy()
choice = np.argmax(action_pred[-1, 0, :])
correct = choice == gt[-1]
# Log trial info
trial_info = env.trial
trial_info.update({'correct': correct, 'choice': choice})
trial_infos[i] = trial_info
# Log stimulus period activity
rnn_activity = rnn_activity[:, 0, :].detach().numpy()
activity_dict[i] = rnn_activity
# Compute stimulus selectivity for all units
# Compute each neuron's response in trials where ground_truth=0 and 1 respectively
rnn_activity = rnn_activity[env.start_ind['stimulus']: env.end_ind['stimulus']]
stim_activity[env.trial['ground_truth']].append(rnn_activity)
print('Average performance', np.mean([val['correct'] for val in trial_infos.values()]))
```
### Plot neural activity from sample trials
```
import matplotlib.pyplot as plt
e_size = net.rnn.e_size
trial = 2
plt.figure()
_ = plt.plot(activity_dict[trial][:, :e_size], color='blue', label='Excitatory')
_ = plt.plot(activity_dict[trial][:, e_size:], color='red', label='Inhibitory')
plt.xlabel('Time step')
plt.ylabel('Activity')
```
### Compute stimulus selectivity for sorting neurons
Here for each neuron we compute its stimulus period selectivity $d'$
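Concretely, the quantity computed in the cell below is

$$d' = \frac{\mu_{0} - \mu_{1}}{\sqrt{\left(\sigma_{0}^{2} + \sigma_{1}^{2}\right)/2}},$$

where $\mu_{i}$ and $\sigma_{i}$ are the mean and standard deviation of a unit's stimulus-period activity across trials with ground truth $i$ (a small constant is added in the code for numerical stability).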
```
mean_activity = []
std_activity = []
for ground_truth in [0, 1]:
activity = np.concatenate(stim_activity[ground_truth], axis=0)
mean_activity.append(np.mean(activity, axis=0))
std_activity.append(np.std(activity, axis=0))
# Compute d'
selectivity = (mean_activity[0] - mean_activity[1])
selectivity /= np.sqrt((std_activity[0]**2+std_activity[1]**2+1e-7)/2)
# Sort index for selectivity, separately for E and I
ind_sort = np.concatenate((np.argsort(selectivity[:e_size]),
np.argsort(selectivity[e_size:])+e_size))
```
### Plot network connectivity sorted by stimulus selectivity
```
# Plot distribution of stimulus selectivity
plt.figure()
plt.hist(selectivity)
plt.xlabel('Selectivity')
plt.ylabel('Number of neurons')
W = net.rnn.h2h.effective_weight().detach().numpy()
# Sort by selectivity
W = W[:, ind_sort][ind_sort, :]
wlim = np.max(np.abs(W))
plt.figure()
plt.imshow(W, cmap='bwr_r', vmin=-wlim, vmax=wlim)
plt.colorbar()
plt.xlabel('From neurons')
plt.ylabel('To neurons')
plt.title('Network connectivity')
```
# Supplementary Materials
Code for making publication quality figures as it appears in the paper.
```
from mpl_toolkits.axes_grid1 import make_axes_locatable
plot_e = 8
plot_i = int(plot_e / 4)
plot_total = (plot_e + plot_i) * 2
# Sort index for selectivity, separately for E and I
ind_sort = np.concatenate((
np.argsort(selectivity[:e_size])[:plot_e],
np.argsort(selectivity[:e_size])[-plot_e:],
np.argsort(selectivity[e_size:])[:plot_i]+e_size,
np.argsort(selectivity[e_size:])[-plot_i:]+e_size))
# Plot distribution of stimulus selectivity
plt.figure()
plt.hist(selectivity)
plt.xlabel('Selectivity')
plt.ylabel('Number of neurons')
W = net.rnn.h2h.effective_weight().detach().numpy()
# Sort by selectivity
W = W[:, ind_sort][ind_sort, :]
wlim = np.percentile(np.abs(W), 99)
# wlim = np.max(np.abs(W))
wlim = int(wlim*100)/100
n_neuron = len(ind_sort)
fig = plt.figure(figsize=(3, 3))
ax = fig.add_axes([0.1, 0.1, 0.7, 0.7])
im = ax.imshow(W, cmap='RdBu', vmin=-wlim, vmax=wlim,
extent=(-0.5, n_neuron-0.5, -0.5, n_neuron-0.5),
interpolation='nearest'
)
# ax.plot([plot_e-0.5] * 2, [plot_total-0.5, plot_total+0.5], 'black', lw=0.5)
xticks = np.array([0, plot_e*2, plot_total]) - 0.5
yticks = plot_total - 1 - xticks
plt.xticks(xticks, ['', '', ''])
plt.yticks(yticks, ['', '', ''])
plt.xlabel('From neurons')
plt.ylabel('To neurons')
# plt.title('Network connectivity')
for loc in ['left', 'right', 'top', 'bottom']:
# ax.spines[loc].set_color('gray')
ax.spines[loc].set_visible(False)
divider = make_axes_locatable(ax)
cax = fig.add_axes([0.82, 0.1, 0.02, 0.7])
cb = plt.colorbar(im, cax=cax, ticks=[-wlim, 0, wlim])
cb.set_label('Connection weight', labelpad=-1)
cb.outline.set_linewidth(0)
# cb.set_ticklabels(['-0.8', '', '0.8'])
from pathlib import Path
fname = Path('figures/connectivity')
fig.savefig(fname.with_suffix('.pdf'), transparent=True)
fig.savefig(fname.with_suffix('.png'), dpi=300)
```
# Mask R-CNN - Train on Shapes Dataset
This notebook shows how to train Mask R-CNN on your own dataset. To keep things simple we use a synthetic dataset of shapes (squares, triangles, and circles) which enables fast training. You'd still need a GPU, though, because the network backbone is a Resnet101, which would be too slow to train on a CPU. On a GPU, you can start to get okay-ish results in a few minutes, and good results in less than an hour.
The code of the *Shapes* dataset is included below. It generates images on the fly, so it doesn't require downloading any data. And it can generate images of any size, so we pick a small image size to train faster.
```
import os
import sys
import random
import math
import re
import time
import numpy as np
import cv2
import matplotlib
import matplotlib.pyplot as plt
# Root directory of the project
ROOT_DIR = os.path.abspath("../../")
# Import Mask RCNN
sys.path.append(ROOT_DIR) # To find local version of the library
from mrcnn.config import Config
from mrcnn import utils
import mrcnn.model as modellib
from mrcnn import visualize
from mrcnn.model import log
%matplotlib inline
# Directory to save logs and trained model
MODEL_DIR = os.path.join(ROOT_DIR, "logs")
# Local path to trained weights file
COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5")
# Download COCO trained weights from Releases if needed
if not os.path.exists(COCO_MODEL_PATH):
utils.download_trained_weights(COCO_MODEL_PATH)
```
## Configurations
```
class ShapesConfig(Config):
"""Configuration for training on the toy shapes dataset.
Derives from the base Config class and overrides values specific
to the toy shapes dataset.
"""
# Give the configuration a recognizable name
NAME = "shapes"
# Train on 1 GPU and 8 images per GPU. We can put multiple images on each
# GPU because the images are small. Batch size is 8 (GPUs * images/GPU).
GPU_COUNT = 1
IMAGES_PER_GPU = 8
# Number of classes (including background)
NUM_CLASSES = 1 + 3 # background + 3 shapes
# Use small images for faster training. Set the limits of the small side
    # and the large side, and that determines the image shape.
IMAGE_MIN_DIM = 128
IMAGE_MAX_DIM = 128
# Use smaller anchors because our image and objects are small
RPN_ANCHOR_SCALES = (8, 16, 32, 64, 128) # anchor side in pixels
# Reduce training ROIs per image because the images are small and have
# few objects. Aim to allow ROI sampling to pick 33% positive ROIs.
TRAIN_ROIS_PER_IMAGE = 32
# Use a small epoch since the data is simple
STEPS_PER_EPOCH = 100
# use small validation steps since the epoch is small
VALIDATION_STEPS = 5
config = ShapesConfig()
config.display()
```
## Notebook Preferences
```
def get_ax(rows=1, cols=1, size=8):
"""Return a Matplotlib Axes array to be used in
all visualizations in the notebook. Provide a
central point to control graph sizes.
Change the default size attribute to control the size
of rendered images
"""
_, ax = plt.subplots(rows, cols, figsize=(size*cols, size*rows))
return ax
```
## Dataset
Create a synthetic dataset
Extend the Dataset class and add a method to load the shapes dataset, `load_shapes()`, and override the following methods:
* load_image()
* load_mask()
* image_reference()
```
class ShapesDataset(utils.Dataset):
"""Generates the shapes synthetic dataset. The dataset consists of simple
shapes (triangles, squares, circles) placed randomly on a blank surface.
The images are generated on the fly. No file access required.
"""
def load_shapes(self, count, height, width):
"""Generate the requested number of synthetic images.
count: number of images to generate.
height, width: the size of the generated images.
"""
# Add classes
self.add_class("shapes", 1, "square")
self.add_class("shapes", 2, "circle")
self.add_class("shapes", 3, "triangle")
# Add images
# Generate random specifications of images (i.e. color and
# list of shapes sizes and locations). This is more compact than
# actual images. Images are generated on the fly in load_image().
for i in range(count):
bg_color, shapes = self.random_image(height, width)
self.add_image("shapes", image_id=i, path=None,
width=width, height=height,
bg_color=bg_color, shapes=shapes)
def load_image(self, image_id):
"""Generate an image from the specs of the given image ID.
Typically this function loads the image from a file, but
in this case it generates the image on the fly from the
specs in image_info.
"""
info = self.image_info[image_id]
bg_color = np.array(info['bg_color']).reshape([1, 1, 3])
image = np.ones([info['height'], info['width'], 3], dtype=np.uint8)
image = image * bg_color.astype(np.uint8)
for shape, color, dims in info['shapes']:
image = self.draw_shape(image, shape, dims, color)
return image
def image_reference(self, image_id):
"""Return the shapes data of the image."""
info = self.image_info[image_id]
if info["source"] == "shapes":
return info["shapes"]
else:
super(self.__class__).image_reference(self, image_id)
def load_mask(self, image_id):
"""Generate instance masks for shapes of the given image ID.
"""
info = self.image_info[image_id]
shapes = info['shapes']
count = len(shapes)
mask = np.zeros([info['height'], info['width'], count], dtype=np.uint8)
for i, (shape, _, dims) in enumerate(info['shapes']):
mask[:, :, i:i+1] = self.draw_shape(mask[:, :, i:i+1].copy(),
shape, dims, 1)
# Handle occlusions
occlusion = np.logical_not(mask[:, :, -1]).astype(np.uint8)
for i in range(count-2, -1, -1):
mask[:, :, i] = mask[:, :, i] * occlusion
occlusion = np.logical_and(occlusion, np.logical_not(mask[:, :, i]))
# Map class names to class IDs.
class_ids = np.array([self.class_names.index(s[0]) for s in shapes])
return mask.astype(np.bool), class_ids.astype(np.int32)
def draw_shape(self, image, shape, dims, color):
"""Draws a shape from the given specs."""
# Get the center x, y and the size s
x, y, s = dims
if shape == 'square':
cv2.rectangle(image, (x-s, y-s), (x+s, y+s), color, -1)
elif shape == "circle":
cv2.circle(image, (x, y), s, color, -1)
elif shape == "triangle":
points = np.array([[(x, y-s),
(x-s/math.sin(math.radians(60)), y+s),
(x+s/math.sin(math.radians(60)), y+s),
]], dtype=np.int32)
cv2.fillPoly(image, points, color)
return image
def random_shape(self, height, width):
"""Generates specifications of a random shape that lies within
the given height and width boundaries.
        Returns a tuple of three values:
* The shape name (square, circle, ...)
* Shape color: a tuple of 3 values, RGB.
* Shape dimensions: A tuple of values that define the shape size
and location. Differs per shape type.
"""
# Shape
shape = random.choice(["square", "circle", "triangle"])
# Color
color = tuple([random.randint(0, 255) for _ in range(3)])
# Center x, y
buffer = 20
y = random.randint(buffer, height - buffer - 1)
x = random.randint(buffer, width - buffer - 1)
# Size
s = random.randint(buffer, height//4)
return shape, color, (x, y, s)
def random_image(self, height, width):
"""Creates random specifications of an image with multiple shapes.
Returns the background color of the image and a list of shape
specifications that can be used to draw the image.
"""
# Pick random background color
bg_color = np.array([random.randint(0, 255) for _ in range(3)])
# Generate a few random shapes and record their
# bounding boxes
shapes = []
boxes = []
N = random.randint(1, 4)
for _ in range(N):
shape, color, dims = self.random_shape(height, width)
shapes.append((shape, color, dims))
x, y, s = dims
boxes.append([y-s, x-s, y+s, x+s])
        # Apply non-max suppression with a 0.3 threshold to avoid
# shapes covering each other
keep_ixs = utils.non_max_suppression(np.array(boxes), np.arange(N), 0.3)
shapes = [s for i, s in enumerate(shapes) if i in keep_ixs]
return bg_color, shapes
# Training dataset
dataset_train = ShapesDataset()
dataset_train.load_shapes(500, config.IMAGE_SHAPE[0], config.IMAGE_SHAPE[1])
dataset_train.prepare()
# Validation dataset
dataset_val = ShapesDataset()
dataset_val.load_shapes(50, config.IMAGE_SHAPE[0], config.IMAGE_SHAPE[1])
dataset_val.prepare()
# Load and display random samples
image_ids = np.random.choice(dataset_train.image_ids, 4)
for image_id in image_ids:
image = dataset_train.load_image(image_id)
mask, class_ids = dataset_train.load_mask(image_id)
visualize.display_top_masks(image, mask, class_ids, dataset_train.class_names)
```
## Create Model
```
# Create model in training mode
model = modellib.MaskRCNN(mode="training", config=config,
model_dir=MODEL_DIR)
# Which weights to start with?
init_with = "coco" # imagenet, coco, or last
if init_with == "imagenet":
model.load_weights(model.get_imagenet_weights(), by_name=True)
elif init_with == "coco":
# Load weights trained on MS COCO, but skip layers that
# are different due to the different number of classes
# See README for instructions to download the COCO weights
model.load_weights(COCO_MODEL_PATH, by_name=True,
exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
"mrcnn_bbox", "mrcnn_mask"])
elif init_with == "last":
# Load the last model you trained and continue training
model.load_weights(model.find_last(), by_name=True)
```
## Training
Train in two stages:
1. Only the heads. Here we're freezing all the backbone layers and training only the randomly initialized layers (i.e. the ones that we didn't use pre-trained weights from MS COCO). To train only the head layers, pass `layers='heads'` to the `train()` function.
2. Fine-tune all layers. For this simple example it's not necessary, but we're including it to show the process. Simply pass `layers="all"` to train all layers.
```
# Train the head branches
# Passing layers="heads" freezes all layers except the head
# layers. You can also pass a regular expression to select
# which layers to train by name pattern.
model.train(dataset_train, dataset_val,
learning_rate=config.LEARNING_RATE,
epochs=1,
layers='heads')
# Fine tune all layers
# Passing layers="all" trains all layers. You can also
# pass a regular expression to select which layers to
# train by name pattern.
model.train(dataset_train, dataset_val,
learning_rate=config.LEARNING_RATE / 10,
epochs=2,
layers="all")
# Save weights
# Typically not needed because callbacks save after every epoch
# Uncomment to save manually
# model_path = os.path.join(MODEL_DIR, "mask_rcnn_shapes.h5")
# model.keras_model.save_weights(model_path)
```
## Detection
```
class InferenceConfig(ShapesConfig):
GPU_COUNT = 1
IMAGES_PER_GPU = 1
inference_config = InferenceConfig()
# Recreate the model in inference mode
model = modellib.MaskRCNN(mode="inference",
config=inference_config,
model_dir=MODEL_DIR)
# Get path to saved weights
# Either set a specific path or find last trained weights
# model_path = os.path.join(ROOT_DIR, ".h5 file name here")
model_path = model.find_last()
# Load trained weights
print("Loading weights from ", model_path)
model.load_weights(model_path, by_name=True)
# Test on a random image
image_id = random.choice(dataset_val.image_ids)
original_image, image_meta, gt_class_id, gt_bbox, gt_mask =\
modellib.load_image_gt(dataset_val, inference_config,
image_id, use_mini_mask=False)
log("original_image", original_image)
log("image_meta", image_meta)
log("gt_class_id", gt_class_id)
log("gt_bbox", gt_bbox)
log("gt_mask", gt_mask)
visualize.display_instances(original_image, gt_bbox, gt_mask, gt_class_id,
dataset_train.class_names, figsize=(8, 8))
results = model.detect([original_image], verbose=1)
r = results[0]
visualize.display_instances(original_image, r['rois'], r['masks'], r['class_ids'],
dataset_val.class_names, r['scores'], ax=get_ax())
```
## Evaluation
```
# Compute VOC-Style mAP @ IoU=0.5
# Running on 10 images. Increase for better accuracy.
image_ids = np.random.choice(dataset_val.image_ids, 10)
APs = []
for image_id in image_ids:
# Load image and ground truth data
image, image_meta, gt_class_id, gt_bbox, gt_mask =\
modellib.load_image_gt(dataset_val, inference_config,
image_id, use_mini_mask=False)
molded_images = np.expand_dims(modellib.mold_image(image, inference_config), 0)
# Run object detection
results = model.detect([image], verbose=0)
r = results[0]
# Compute AP
AP, precisions, recalls, overlaps =\
utils.compute_ap(gt_bbox, gt_class_id, gt_mask,
r["rois"], r["class_ids"], r["scores"], r['masks'])
APs.append(AP)
print("mAP: ", np.mean(APs))
```
# SciPy - Library of scientific algorithms for Python
Original by J.R. Johansson (robert@riken.jp) http://dml.riken.jp/~rob/
Modified by Clayton Miller (miller.clayton@arch.ethz.ch)
The other notebooks in this lecture series are indexed at [http://jrjohansson.github.com](http://jrjohansson.github.com).
# Introduction
The SciPy framework builds on top of the low-level NumPy framework for multidimensional arrays, and provides a large number of higher-level scientific algorithms. Some of the topics that SciPy covers are:
* Special functions ([scipy.special](http://docs.scipy.org/doc/scipy/reference/special.html))
* Integration ([scipy.integrate](http://docs.scipy.org/doc/scipy/reference/integrate.html))
* Optimization ([scipy.optimize](http://docs.scipy.org/doc/scipy/reference/optimize.html))
* Interpolation ([scipy.interpolate](http://docs.scipy.org/doc/scipy/reference/interpolate.html))
* Fourier Transforms ([scipy.fftpack](http://docs.scipy.org/doc/scipy/reference/fftpack.html))
* Signal Processing ([scipy.signal](http://docs.scipy.org/doc/scipy/reference/signal.html))
* Linear Algebra ([scipy.linalg](http://docs.scipy.org/doc/scipy/reference/linalg.html))
* Sparse Eigenvalue Problems ([scipy.sparse](http://docs.scipy.org/doc/scipy/reference/sparse.html))
* Statistics ([scipy.stats](http://docs.scipy.org/doc/scipy/reference/stats.html))
* Multi-dimensional image processing ([scipy.ndimage](http://docs.scipy.org/doc/scipy/reference/ndimage.html))
* File IO ([scipy.io](http://docs.scipy.org/doc/scipy/reference/io.html))
Each of these submodules provides a number of functions and classes that can be used to solve problems in their respective topics.
In this lecture we will look at how to use some of these subpackages.
To access the SciPy package in a Python program, we start by importing everything from the `scipy` module.
```
%pylab inline
from IPython.display import Image
from scipy import *
```
If we only need to use part of the SciPy framework we can selectively include only those modules we are interested in. For example, to include the linear algebra package under the name `la`, we can do:
```
import scipy.linalg as la
```
## Ordinary differential equations (ODEs)
SciPy provides two different ways to solve ODEs: an API based on the function `odeint`, and an object-oriented API based on the class `ode`. Usually `odeint` is easier to get started with, but the `ode` class offers a finer level of control.
Here we will use the `odeint` functions. For more information about the class `ode`, try `help(ode)`. It does pretty much the same thing as `odeint`, but in an object-oriented fashion.
To use `odeint`, first import it from the `scipy.integrate` module:
```
from scipy.integrate import odeint, ode
```
A system of ODEs is usually formulated in standard form before it is attacked numerically. The standard form is:
$y' = f(y, t)$
where
$y = [y_1(t), y_2(t), ..., y_n(t)]$
and $f$ is some function that gives the derivatives of the function $y_i(t)$. To solve an ODE we need to know the function $f$ and an initial condition, $y(0)$.
Note that higher-order ODEs can always be written in this form by introducing new variables for the intermediate derivatives. For example, the second-order equation $y'' = -y$ becomes the first-order system $y_1' = y_2$, $y_2' = -y_1$ with $y_1 = y$ and $y_2 = y'$; we will use exactly this trick for the damped oscillator below.
Once we have defined the Python function `f` and array `y_0` (that is $f$ and $y(0)$ in the mathematical formulation), we can use the `odeint` function as:
y_t = odeint(f, y_0, t)
where `t` is an array with time coordinates for which to solve the ODE problem. `y_t` is an array with one row for each point in time in `t`, where each column corresponds to a solution `y_i(t)` at that point in time.
We will see how we can implement `f` and `y_0` in Python code in the examples below.
## Example: double pendulum
Let's consider a physical example: The double compound pendulum, described in some detail here: http://en.wikipedia.org/wiki/Double_pendulum
```
Image(url='http://upload.wikimedia.org/wikipedia/commons/c/c9/Double-compound-pendulum-dimensioned.svg')
```
The equations of motion of the pendulum are given on the wiki page:
${\dot \theta_1} = \frac{6}{m\ell^2} \frac{ 2 p_{\theta_1} - 3 \cos(\theta_1-\theta_2) p_{\theta_2}}{16 - 9 \cos^2(\theta_1-\theta_2)}$
${\dot \theta_2} = \frac{6}{m\ell^2} \frac{ 8 p_{\theta_2} - 3 \cos(\theta_1-\theta_2) p_{\theta_1}}{16 - 9 \cos^2(\theta_1-\theta_2)}.$
${\dot p_{\theta_1}} = -\frac{1}{2} m \ell^2 \left [ {\dot \theta_1} {\dot \theta_2} \sin (\theta_1-\theta_2) + 3 \frac{g}{\ell} \sin \theta_1 \right ]$
${\dot p_{\theta_2}} = -\frac{1}{2} m \ell^2 \left [ -{\dot \theta_1} {\dot \theta_2} \sin (\theta_1-\theta_2) + \frac{g}{\ell} \sin \theta_2 \right]$
To make the Python code simpler to follow, let's introduce new variable names and the vector notation: $x = [\theta_1, \theta_2, p_{\theta_1}, p_{\theta_2}]$
${\dot x_1} = \frac{6}{m\ell^2} \frac{ 2 x_3 - 3 \cos(x_1-x_2) x_4}{16 - 9 \cos^2(x_1-x_2)}$
${\dot x_2} = \frac{6}{m\ell^2} \frac{ 8 x_4 - 3 \cos(x_1-x_2) x_3}{16 - 9 \cos^2(x_1-x_2)}$
${\dot x_3} = -\frac{1}{2} m \ell^2 \left [ {\dot x_1} {\dot x_2} \sin (x_1-x_2) + 3 \frac{g}{\ell} \sin x_1 \right ]$
${\dot x_4} = -\frac{1}{2} m \ell^2 \left [ -{\dot x_1} {\dot x_2} \sin (x_1-x_2) + \frac{g}{\ell} \sin x_2 \right]$
```
g = 9.82
L = 0.5
m = 0.1
def dx(x, t):
"""
    The right-hand side of the pendulum ODE
"""
x1, x2, x3, x4 = x[0], x[1], x[2], x[3]
dx1 = 6.0/(m*L**2) * (2 * x3 - 3 * cos(x1-x2) * x4)/(16 - 9 * cos(x1-x2)**2)
dx2 = 6.0/(m*L**2) * (8 * x4 - 3 * cos(x1-x2) * x3)/(16 - 9 * cos(x1-x2)**2)
dx3 = -0.5 * m * L**2 * ( dx1 * dx2 * sin(x1-x2) + 3 * (g/L) * sin(x1))
dx4 = -0.5 * m * L**2 * (-dx1 * dx2 * sin(x1-x2) + (g/L) * sin(x2))
return [dx1, dx2, dx3, dx4]
# choose an initial state
x0 = [pi/4, pi/2, 0, 0]
# time coordinate to solve the ODE for: from 0 to 10 seconds
t = linspace(0, 10, 250)
# solve the ODE problem
x = odeint(dx, x0, t)
x
```
## Simple visualization of the pendulum motion.
```
# plot the angles as a function of time
fig, axes = subplots(1,2, figsize=(12,4))
axes[0].plot(t, x[:, 0], 'r', label="theta1")
axes[0].plot(t, x[:, 1], 'b', label="theta2")
x1 = + L * sin(x[:, 0])
y1 = - L * cos(x[:, 0])
x2 = x1 + L * sin(x[:, 1])
y2 = y1 - L * cos(x[:, 1])
axes[1].plot(x1, y1, 'r', label="pendulum1")
axes[1].plot(x2, y2, 'b', label="pendulum2")
axes[1].set_ylim([-1, 0])
axes[1].set_xlim([1, -1]);
```
## Example: Damped harmonic oscillator
ODE problems are important in computational physics, so we will look at one more example: the damped harmonic oscillation. This problem is well described on the wiki page: http://en.wikipedia.org/wiki/Damping
The equation of motion for the damped oscillator is:
$\displaystyle \frac{\mathrm{d}^2x}{\mathrm{d}t^2} + 2\zeta\omega_0\frac{\mathrm{d}x}{\mathrm{d}t} + \omega^2_0 x = 0$
where $x$ is the position of the oscillator, $\omega_0$ is the frequency, and $\zeta$ is the damping ratio. To write this second-order ODE on standard form we introduce $p = \frac{\mathrm{d}x}{\mathrm{d}t}$:
$\displaystyle \frac{\mathrm{d}p}{\mathrm{d}t} = - 2\zeta\omega_0 p - \omega^2_0 x$
$\displaystyle \frac{\mathrm{d}x}{\mathrm{d}t} = p$
In the implementation of this example we will add extra arguments to the RHS function for the ODE, rather than using global variables as we did in the previous example. As a consequence of the extra arguments to the RHS, we need to pass a keyword argument `args` to the `odeint` function:
```
def dy(y, t, zeta, w0):
"""
The right-hand side of the damped oscillator ODE
"""
x, p = y[0], y[1]
dx = p
dp = -2 * zeta * w0 * p - w0**2 * x
return [dx, dp]
# initial state:
y0 = [1.0, 0.0]
# time coordinates to solve the ODE for
t = linspace(0, 10, 1000)
w0 = 2*pi*1.0
# solve the ODE problem for three different values of the damping ratio
y1 = odeint(dy, y0, t, args=(0.0, w0)) # undamped
y2 = odeint(dy, y0, t, args=(0.2, w0)) # under damped
y3 = odeint(dy, y0, t, args=(1.0, w0)) # critical damping
y4 = odeint(dy, y0, t, args=(5.0, w0)) # over damped
fig, ax = subplots()
ax.plot(t, y1[:,0], 'k', label="undamped", linewidth=0.25)
ax.plot(t, y2[:,0], 'r', label="under damped")
ax.plot(t, y3[:,0], 'b', label=r"critical damping")
ax.plot(t, y4[:,0], 'g', label="over damped")
ax.legend();
```
## Fourier transform
Fourier transforms are one of the universal tools in computational physics, which appear over and over again in different contexts. SciPy provides functions for accessing the classic [FFTPACK](http://www.netlib.org/fftpack/) library from NetLib, which is an efficient and well tested FFT library written in FORTRAN. The SciPy API has a few additional convenience functions, but overall the API is closely related to the original FORTRAN library.
To use the `fftpack` module in a python program, include it using:
```
from scipy.fftpack import *
```
To demonstrate how to do a fast Fourier transform with SciPy, let's look at the FFT of the solution to the damped oscillator from the previous section:
```
N = len(t)
dt = t[1]-t[0]
# calculate the fast fourier transform
# y2 is the solution to the under-damped oscillator from the previous section
F = fft(y2[:,0])
# calculate the frequencies for the components in F
w = fftfreq(N, dt)
fig, ax = subplots(figsize=(9,3))
ax.plot(w, abs(F));
```
Since the signal is real, the spectrum is symmetric. We therefore only need to plot the part that corresponds to the positive frequencies. To extract that part of `w` and `F` we can use some of the indexing tricks for NumPy arrays that we saw in Lecture 2:
```
indices = where(w > 0) # select only indices for elements that correspond to positive frequencies
w_pos = w[indices]
F_pos = F[indices]
fig, ax = subplots(figsize=(9,3))
ax.plot(w_pos, abs(F_pos))
ax.set_xlim(0, 5);
```
As expected, we now see a peak in the spectrum that is centered around 1, which is the frequency we used in the damped oscillator example.
## Optimization
Optimization (finding minima or maxima of a function) is a large field in mathematics, and optimization of complicated functions or in many variables can be rather involved. Here we will only look at a few very simple cases. For a more detailed introduction to optimization with SciPy see: http://scipy-lectures.github.com/advanced/mathematical_optimization/index.html
To use the optimization module in scipy first include the `optimize` module:
```
from scipy import optimize
```
### Finding a minimum
Let's first look at how to find the minimum of a simple function of a single variable:
```
def f(x):
return 4*x**3 + (x-2)**2 + x**4
fig, ax = subplots()
x = linspace(-5, 3, 100)
ax.plot(x, f(x));
```
We can use the `fmin_bfgs` function to find the minimum of a function:
```
x_min = optimize.fmin_bfgs(f, -2)
x_min
optimize.fmin_bfgs(f, 0.5)
```
We can also use the `brent` or `fminbound` functions. They have slightly different syntax and use different algorithms.
```
optimize.brent(f)
optimize.fminbound(f, -4, 2)
```
## Interpolation
Interpolation is simple and convenient in SciPy: the `interp1d` function, when given arrays describing X and Y data, returns an object that behaves like a function. It can be called for an arbitrary value of x (in the range covered by X) and returns the corresponding interpolated y value:
```
from scipy.interpolate import *
def f(x):
return sin(x)
n = arange(0, 10)
x = linspace(0, 9, 100)
y_meas = f(n) + 0.1 * randn(len(n)) # simulate measurement with noise
y_real = f(x)
linear_interpolation = interp1d(n, y_meas)
y_interp1 = linear_interpolation(x)
cubic_interpolation = interp1d(n, y_meas, kind='cubic')
y_interp2 = cubic_interpolation(x)
fig, ax = subplots(figsize=(10,4))
ax.plot(n, y_meas, 'bs', label='noisy data')
ax.plot(x, y_real, 'k', lw=2, label='true function')
ax.plot(x, y_interp1, 'r', label='linear interp')
ax.plot(x, y_interp2, 'g', label='cubic interp')
ax.legend(loc=3);
```
## Statistics
The `scipy.stats` module contains a large number of statistical distributions, statistical functions and tests. For a complete documentation of its features, see http://docs.scipy.org/doc/scipy/reference/stats.html.
There is also a very powerful python package for statistical modelling called statsmodels. See http://statsmodels.sourceforge.net for more details.
```
from scipy import stats
# create a (discrete) random variable with a Poisson distribution
X = stats.poisson(3.5) # photon distribution for a coherent state with n=3.5 photons
n = arange(0,15)
fig, axes = subplots(3,1, sharex=True)
# plot the probability mass function (PMF)
axes[0].step(n, X.pmf(n))
# plot the cumulative distribution function (CDF)
axes[1].step(n, X.cdf(n))
# plot histogram of 1000 random realizations of the stochastic variable X
axes[2].hist(X.rvs(size=1000));
# create a (continuous) random variable with a normal distribution
Y = stats.norm()
x = linspace(-5,5,100)
fig, axes = subplots(3,1, sharex=True)
# plot the probability distribution function (PDF)
axes[0].plot(x, Y.pdf(x))
# plot the cumulative distribution function (CDF)
axes[1].plot(x, Y.cdf(x));
# plot histogram of 1000 random realizations of the stochastic variable Y
axes[2].hist(Y.rvs(size=1000), bins=50);
```
Statistics:
```
X.mean(), X.std(), X.var() # Poisson distribution
Y.mean(), Y.std(), Y.var() # normal distribution
```
### Statistical tests
Test if two sets of (independent) random data come from the same distribution:
```
t_statistic, p_value = stats.ttest_ind(X.rvs(size=1000), X.rvs(size=1000))
print "t-statistic =", t_statistic
print "p-value =", p_value
```
Since the p-value is very large, we cannot reject the null hypothesis that the two sets of random data have the *same* mean.
To test if the mean of a single sample of data has mean 0.1 (the true mean is 0.0):
```
stats.ttest_1samp(Y.rvs(size=1000), 0.1)
```
The low p-value means that we can reject the hypothesis that the mean of Y is 0.1.
```
Y.mean()
stats.ttest_1samp(Y.rvs(size=1000), Y.mean())
```
## Further reading
* http://www.scipy.org - The official web page for the SciPy project.
* http://docs.scipy.org/doc/scipy/reference/tutorial/index.html - A tutorial on how to get started using SciPy.
* https://github.com/scipy/scipy/ - The SciPy source code.
3.11 Model Selection, Underfitting, and Overfitting
In the experiments based on the Fashion-MNIST dataset in the previous sections, we evaluated machine learning models on the training dataset and the test dataset. If you changed the model structure or the hyperparameters in those experiments, you may have noticed that a model that is more accurate on the training dataset is not necessarily more accurate on the test dataset. Why is that?
3.11.1 Training Error and Generalization Error
Before explaining this phenomenon, we need to distinguish between training error and generalization error. Informally, the former is the error the model exhibits on the training dataset, while the latter is the expected error of the model on an arbitrary test sample, and it is commonly approximated by the error on the test dataset. Both can be computed with the loss functions introduced earlier, such as the squared loss used in linear regression and the cross-entropy loss used in softmax regression.
Let us use the college entrance exam as an intuitive illustration of these two concepts. The training error can be thought of as the error rate on past exam questions (the training questions), while the generalization error can be approximated by the error rate when actually taking the exam (the test questions). Suppose both the training questions and the test questions are randomly sampled from an unknown, enormous question bank that follows the same syllabus. If a primary-school student who has not learned the secondary-school material answers the questions, the error rates on the test questions and the training questions may be very close. But for a final-year student who has drilled the training questions over and over, even an error rate of 0 on the training questions does not guarantee the same result on the real exam.
In machine learning, we usually assume that every sample in the training dataset (the training questions) and the test dataset (the test questions) is generated independently from the same probability distribution. Under this i.i.d. assumption, for any given machine learning model (with its parameters), the expectation of its training error equals its generalization error. For example, if we set the model parameters to random values (the primary-school student), the training error and the generalization error will be very close. But as we saw in the previous sections, the model parameters are learned by training the model on the training dataset, and they are chosen to minimize the training error (the exam-drilling student). Therefore, the expected training error is less than or equal to the generalization error. In other words, model parameters learned from the training dataset generally make the model perform at least as well on the training dataset as on the test dataset. Since the generalization error cannot be estimated from the training error, blindly reducing the training error does not mean the generalization error will necessarily decrease.
A machine learning model should aim to reduce the generalization error.
3.11.2 Model Selection
In machine learning, we usually need to evaluate the performance of several candidate models and select among them. This process is called model selection. The candidate models can be models of the same kind with different hyperparameters. Taking the multilayer perceptron as an example, we can choose the number of hidden layers, as well as the number of hidden units and the activation function in each hidden layer. Obtaining an effective model usually takes some effort in model selection. Below we describe the validation data set, which is frequently used in model selection.
3.11.2.1 Validation Data Set
Strictly speaking, the test set may only be used once, after all hyperparameters and model parameters have been chosen. The test data must not be used for model selection, such as hyperparameter tuning. Since the generalization error cannot be estimated from the training error, we should not rely only on the training data for model selection either. For this reason, we can reserve a portion of data outside of the training and test datasets for model selection. This portion is called the validation data set, or validation set for short. For example, we can randomly take a small part of a given training set as the validation set and use the remainder as the actual training set.
In practice, however, because data is hard to obtain, the test data is rarely discarded after being used just once, so the boundary between the validation set and the test set can be blurry. Strictly speaking, unless stated otherwise, the test sets used in the experiments in this book should be regarded as validation sets, and the reported test results (such as test accuracy) should be regarded as validation results (such as validation accuracy).
3.11.2.2 K-Fold Cross-Validation
Since the validation set takes no part in model training, reserving a large amount of validation data is too wasteful when training data is scarce. One way to improve on this is K-fold cross-validation. In K-fold cross-validation, we split the original training dataset into K non-overlapping sub-datasets and then run K rounds of model training and validation. In each round, one sub-dataset is used to validate the model and the other K−1 sub-datasets are used to train it; the validation sub-dataset is different in every round. Finally, we average the K training errors and the K validation errors separately.
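The splitting logic described above is easy to write down directly. Below is a minimal sketch (not from the original text), assuming numpy arrays; `train_and_eval` is a hypothetical callback that trains a model on the training folds and returns its validation error.
```
import numpy as np

def k_fold_cross_validation(X, y, k, train_and_eval):
    # split the shuffled sample indices into K non-overlapping folds
    folds = np.array_split(np.random.permutation(len(X)), k)
    valid_errors = []
    for i in range(k):
        valid_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        # train on the other K-1 folds, validate on fold i
        valid_errors.append(train_and_eval(X[train_idx], y[train_idx],
                                           X[valid_idx], y[valid_idx]))
    return np.mean(valid_errors)
```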
3.11.3 Underfitting and Overfitting
Next, let us look at two problems that frequently arise in model training: one where the model cannot achieve a low training error, which we call underfitting, and one where the model's training error is much lower than its error on the test dataset, which we call overfitting. In practice we want to guard against both at the same time. Although many factors can cause these two problems, here we focus on two of them: model complexity and training dataset size.
3.11.3.1 Model Complexity
To explain model complexity, we take polynomial function fitting as an example. Given a training dataset consisting of a scalar feature x and a corresponding scalar label y, the goal of polynomial fitting is to find a polynomial function of order K
$$\hat{y} = b + \sum_{k=1}^K x^k w_k$$
to approximate y. Here the $w_k$ are the model's weight parameters and $b$ is the bias parameter. As in linear regression, polynomial fitting uses the squared loss function. In particular, first-order polynomial fitting is also called linear function fitting.
Because a higher-order polynomial has more parameters and a larger space of candidate functions, it has higher complexity than a lower-order polynomial, and can therefore more easily reach a lower training error on the same training dataset. Given a training dataset, the relationship between model complexity and error typically looks like Figure 3.4: if the model's complexity is too low, underfitting occurs easily; if the complexity is too high, overfitting occurs easily. One way to address underfitting and overfitting is to choose a model whose complexity matches the dataset.
3.11.3.2 Training Dataset Size
Another important factor that influences underfitting and overfitting is the size of the training dataset. In general, if the training dataset contains too few samples, especially fewer than the number of model parameters (counted element-wise), overfitting is more likely. Moreover, the generalization error does not increase as the number of training samples grows, so within the limits of available computing resources we usually prefer a larger training dataset, especially when the model is complex, for example a deep learning model with many layers.
3.11.4 Polynomial Fitting Experiment
To understand how model complexity and training dataset size affect underfitting and overfitting, let us run an experiment with polynomial function fitting. First, import the packages and modules needed for the experiment.
```
import tensorflow as tf
import numpy as np
from tensorflow.keras.layers import Conv2D,BatchNormalization,Activation
import sys
import matplotlib.pyplot as plt
import d2lzh as d2l
%matplotlib inline
```
3.11.4.1 Generating the Dataset
We will generate an artificial dataset. For both the training and test datasets, given a sample feature x, we use the following third-order polynomial to generate its label: $y = 1.2x - 3.4x^2 + 5.6x^3 + 5 + \epsilon$, where the noise term $\epsilon$ follows a normal distribution with mean 0 and standard deviation 0.01. The number of samples in both the training and test datasets is set to 100.
```
n_train, n_test, true_w, true_b = 100, 100, [1.2, -3.4, 5.6], 5
features = tf.random.normal(shape=(n_train + n_test, 1))
poly_features = tf.concat([features, tf.pow(features, 2), tf.pow(features, 3)],1)
print(poly_features.shape)
labels = (true_w[0] * poly_features[:, 0] + true_w[1] * poly_features[:, 1]+ true_w[2] * poly_features[:, 2] + true_b)
print(tf.shape(labels))
# labels += tf.random.normal(labels.shape,0,0.1)
print(tf.shape(labels))
```
Let us look at the first two samples of the generated dataset.
```
features[:2], poly_features[:2], labels[:2]
```
3.11.4.2 Defining, Training, and Testing the Model
We first define the plotting function semilogy, in which the y-axis uses a logarithmic scale.
```
def semilogy(x_vals, y_vals, x_label, y_label, x2_vals=None, y2_vals=None,
legend=None, figsize=(3.5, 2.5)):
d2l.set_figsize(figsize)
d2l.plt.xlabel(x_label)
d2l.plt.ylabel(y_label)
d2l.plt.semilogy(x_vals, y_vals)
if x2_vals and y2_vals:
d2l.plt.semilogy(x2_vals, y2_vals, linestyle=':')
d2l.plt.legend(legend)
```
As in linear regression, polynomial fitting uses the squared loss function. Because we will try models of different complexity to fit the generated dataset, we put the model definition inside the fit_and_plot function. The training and testing steps for polynomial fitting are similar to the corresponding steps of softmax regression described in Section 3.6 (Implementation of Softmax Regression from Scratch).
```
num_epochs=100
def fit_and_plot(train_features, test_features, train_labels, test_labels):
net = tf.keras.Sequential([tf.keras.layers.Dense(1)])
batch_size = min(10, train_labels.shape[0])
# batch_size = tf.cast(batch_size, 'int64')
train_iter = tf.data.Dataset.from_tensor_slices((train_features, train_labels)).batch(10)
optimizer = tf.keras.optimizers.Adam()
train_ls, test_ls, loss_history = [], [], []
for _ in range(num_epochs):
for X, y in train_iter:
with tf.GradientTape() as tape:
logits = net(X, training=True)
l = tf.keras.losses.mse(logits, y)
print(l)
            # compute the gradients of the batch loss w.r.t. the trainable variables
grads = tape.gradient(l, net.trainable_variables)
# 反向传播优化
optimizer.apply_gradients(zip(grads, net.trainable_variables))
train_ls.append(tf.keras.losses.mse(net(train_features), train_labels).numpy().mean())
test_ls.append(tf.keras.losses.mse(net(test_features),test_labels).numpy().mean())
print('final epoch: train loss', train_ls[-1], 'test loss', test_ls[-1])
semilogy(range(1, num_epochs + 1), train_ls, 'epochs', 'loss',
range(1, num_epochs + 1), test_ls, ['train', 'test'])
print('weight:', net.get_weights()[0],
'\nbias:', net.get_weights()[1])
```
3.11.4.3 Third-Order Polynomial Fitting (Normal)
We first fit with a third-order polynomial of the same order as the data-generating function. The experiment shows that both the training error of this model and its error on the test dataset are low. The learned parameters are also close to the true values: w1=1.2, w2=−3.4, w3=5.6, b=5.
```
fit_and_plot(poly_features[:n_train, :], poly_features[n_train:, :],
labels[:n_train], labels[n_train:])
```
3.11.4.4 Linear Function Fitting (Underfitting)
Now let us try linear function fitting. Clearly, after declining in the early epochs, the training error becomes hard to reduce further, and it is still high after the final epoch. A linear model easily underfits on a dataset generated by a nonlinear model (such as a third-order polynomial).
```
fit_and_plot(features[:n_train, :], features[n_train:, :], labels[:n_train],
             labels[n_train:])
```
# Ensemble NMS - Detectron2 [Inference]
### Hi kagglers, this is the `Ensemble NMW - Detectron2 [Inference]` notebook.
* [Sartorius Segmentation - Detectron2 [training]](https://www.kaggle.com/ammarnassanalhajali/sartorius-segmentation-detectron2-training)
* [Sartorius Segmentation - Detectron2 [Inference]](https://www.kaggle.com/ammarnassanalhajali/sartorius-segmentation-detectron2-inference)
* [K-fold CrossValidation COCO Dataset Generator](https://www.kaggle.com/ammarnassanalhajali/k-fold-crossvalidation-coco-dataset-generator)
### If this kernel is useful, <font color='red'>please upvote!</font>
## Other notebooks in this competition
- [Sartorius Segmentation - Keras U-Net[Training]](https://www.kaggle.com/ammarnassanalhajali/sartorius-segmentation-keras-u-net-training)
- [Sartorius Segmentation - Keras U-Net[Inference]](https://www.kaggle.com/ammarnassanalhajali/sartorius-segmentation-keras-u-net-inference/edit)
# Intro
Ensembling multiple weaker-performing models can help you get the results that you want.
## Install and import libraries
```
!pip install ../input/detectron-05/whls/pycocotools-2.0.2/dist/pycocotools-2.0.2.tar --no-index --find-links ../input/detectron-05/whls
!pip install ../input/detectron-05/whls/fvcore-0.1.5.post20211019/fvcore-0.1.5.post20211019 --no-index --find-links ../input/detectron-05/whls
!pip install ../input/detectron-05/whls/antlr4-python3-runtime-4.8/antlr4-python3-runtime-4.8 --no-index --find-links ../input/detectron-05/whls
!pip install ../input/detectron-05/whls/detectron2-0.5/detectron2 --no-index --find-links ../input/detectron-05/whls
!pip install ../input/ensemble-boxes-104/ensemble_boxes-1.0.4/ -f ./ --no-index
import os
import cv2
import json
import time
import numpy as np
import pandas as pd
import torch
import detectron2
from tqdm.auto import tqdm
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.data.datasets import register_coco_instances
from detectron2.evaluation import inference_on_dataset
from detectron2.evaluation.evaluator import DatasetEvaluator
from detectron2.data import DatasetCatalog, build_detection_test_loader
import pycocotools.mask as mask_util
from PIL import Image
import matplotlib.pyplot as plt
from fastcore.all import *
from ensemble_boxes import *
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
if torch.cuda.is_available():
DEVICE = torch.device('cuda')
print('GPU is available')
else:
DEVICE = torch.device('cpu')
print('CPU is used')
print('detectron ver:', detectron2.__version__)
```
## My Models
```
best_model=(
{'file': 'R50-306.pth','config_name':'COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml', 'LB score': 0.306,'ths':[.18, .38, .58]},
{'file': '50_FPN_3x_F3_R82_300.pth','config_name':'COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml', 'LB score': 0.300,'ths':[.18, .38, .58]},
{'file': '32x8d_FPN_3x_F3_R57_295.pth','config_name':'COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x.yaml', 'LB score': 0.295,'ths':[.18, .38, .58]},
{'file': '50_FPN_3x_F5_ATTT32_300.pth','config_name':'COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml', 'LB score': 0.300,'ths':[.19, .39, .57]}
)
#config_name = "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"
mdl_path = "../input/dtectron2-models-5fold"
DATA_PATH = "../input/sartorius-cell-instance-segmentation"
MODELS = []
BEST_MODELS =[]
THSS = []
ID_TEST = 0
SUBM_PATH = f'{DATA_PATH}/test'
SINGLE_MODE = False
NMS = True
MIN_PIXELS = [75, 150, 75]
IOU_TH = 0.3
for b_m in best_model:
model_name=b_m["file"]
model_ths=b_m["ths"]
config_name=b_m["config_name"]
BEST_MODELS.append(model_name)
THSS.append(model_ths)
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(config_name))
cfg.INPUT.MASK_FORMAT = 'bitmask'
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 3
cfg.MODEL.WEIGHTS = f'{mdl_path}/{model_name}'
cfg.TEST.DETECTIONS_PER_IMAGE = 1000
MODELS.append(DefaultPredictor(cfg))
print(f'all loaded:\nthresholds: {THSS}\nmodels: {BEST_MODELS}')
MODELS
```
## Utils
```
def rle_decode(mask_rle, shape=(520, 704)):
'''
mask_rle: run-length as string formated (start length)
shape: (height,width) of array to return
Returns numpy array, 1 - mask, 0 - background
'''
s = mask_rle.split()
starts, lengths = [np.asarray(x, dtype=int)
for x in (s[0:][::2], s[1:][::2])]
starts -= 1
ends = starts + lengths
img = np.zeros(shape[0] * shape[1], dtype=np.uint8)
for lo, hi in zip(starts, ends):
img[lo : hi] = 1
return img.reshape(shape) # Needed to align to RLE direction
def rle_encode(img):
'''
img: numpy array, 1 - mask, 0 - background
Returns run length as string formated
'''
pixels = img.flatten()
pixels = np.concatenate([[0], pixels, [0]])
runs = np.where(pixels[1:] != pixels[:-1])[0] + 1
runs[1::2] -= runs[::2]
return ' '.join(str(x) for x in runs)
def pred_masks(file_name, path, model, ths, min_pixels):
img = cv2.imread(f'{path}/{file_name}')
output = model(img)
pred_classes = output['instances'].pred_classes.cpu().numpy().tolist()
pred_class = max(set(pred_classes), key=pred_classes.count)
take = output['instances'].scores >= ths[pred_class]
pred_masks = output['instances'].pred_masks[take]
pred_masks = pred_masks.cpu().numpy()
result = []
used = np.zeros(img.shape[:2], dtype=int)
for i, mask in enumerate(pred_masks):
mask = mask * (1 - used)
if mask.sum() >= min_pixels[pred_class]:
used += mask
result.append(rle_encode(mask))
return result
def ensemble_preds(file_name, path, models, ths):
img = cv2.imread(f'{path}/{file_name}')
classes = []
scores = []
bboxes = []
masks = []
for i, model in enumerate(models):
output = model(img)
pred_classes = output['instances'].pred_classes.cpu().numpy().tolist()
pred_class = max(set(pred_classes), key=pred_classes.count)
take = output['instances'].scores >= ths[i][pred_class]
classes.extend(output['instances'].pred_classes[take].cpu().numpy().tolist())
scores.extend(output['instances'].scores[take].cpu().numpy().tolist())
bboxes.extend(output['instances'].pred_boxes[take].tensor.cpu().numpy().tolist())
masks.extend(output['instances'].pred_masks[take].cpu().numpy())
    assert len(classes) == len(masks), 'ensemble length mismatch'
#scores, classes, bboxes, masks = zip(*sorted(zip(scores, classes, bboxes, masks),reverse=True))
return classes, scores, bboxes, masks
def nms_predictions(classes, scores, bboxes, masks,
iou_th=.5, shape=(520, 704)):
he, wd = shape[0], shape[1]
boxes_list = [[[x[0] / wd, x[1] / he, x[2] / wd, x[3] / he] for x in bboxes]]
scores_list = [[x for x in scores]]
classes_list = [[x for x in classes]]
nms_bboxes, nms_scores, nms_classes = non_maximum_weighted(
boxes_list,
scores_list,
classes_list,
weights=None,
        iou_thr=iou_th, skip_box_thr=0.0001
)
nms_masks = []
for s in nms_scores:
nms_masks.append(masks[scores.index(s)])
nms_scores, nms_classes, nms_masks = zip(*sorted(zip(nms_scores, nms_classes, nms_masks), reverse=True))
return nms_classes, nms_scores, nms_masks
def ensemble_pred_masks(masks, classes, min_pixels, shape=(520, 704)):
result = []
#pred_class = max(set(classes), key=classes.count)
pred_class = int(max(set(classes), key=classes.count).item())
used = np.zeros(shape, dtype=int)
for i, mask in enumerate(masks):
mask = mask * (1 - used)
if mask.sum() >= min_pixels[pred_class]:
used += mask
result.append(rle_encode(mask))
return result
```
## Demo inference
```
test_names = os.listdir(SUBM_PATH)
print('test images:', len(test_names))
encoded_masks_single = pred_masks(
test_names[ID_TEST],
path=SUBM_PATH,
model=MODELS[0],
ths=THSS[0],
min_pixels=MIN_PIXELS
)
classes, scores, bboxes, masks = ensemble_preds(
file_name=test_names[ID_TEST] ,
path=SUBM_PATH,
models=MODELS,
ths=THSS
)
if NMS:
classes, scores, masks = nms_predictions(
classes,
scores,
bboxes,
masks, iou_th=IOU_TH
)
encoded_masks = ensemble_pred_masks(masks, classes, min_pixels=MIN_PIXELS)
_, axs = plt.subplots(2, 2, figsize=(14, 8))
axs[0][0].imshow(cv2.imread(f'{SUBM_PATH}/{test_names[ID_TEST]}'))
axs[0][0].axis('off')
axs[0][0].set_title(test_names[ID_TEST])
for en_mask in encoded_masks_single:
dec_mask = rle_decode(en_mask)
axs[0][1].imshow(np.ma.masked_where(dec_mask == 0, dec_mask))
axs[0][1].axis('off')
axs[0][1].set_title('single model')
axs[1][0].imshow(cv2.imread(f'{SUBM_PATH}/{test_names[ID_TEST]}'))
axs[1][0].axis('off')
axs[1][0].set_title(test_names[ID_TEST])
for en_mask in encoded_masks:
dec_mask = rle_decode(en_mask)
axs[1][1].imshow(np.ma.masked_where(dec_mask == 0, dec_mask))
axs[1][1].axis('off')
axs[1][1].set_title('ensemble models')
plt.show()
```
## Inference
```
subm_ids, subm_masks = [], []
for test_name in tqdm(test_names):
if SINGLE_MODE:
encoded_masks = pred_masks(
test_name,
path=SUBM_PATH,
model=MODELS[0],
ths=THSS[0],
min_pixels=MIN_PIXELS
)
else:
classes, scores, bboxes, masks = ensemble_preds(
file_name=test_name,
path=SUBM_PATH,
models=MODELS,
ths=THSS
)
if NMS:
classes, scores, masks = nms_predictions(
classes,
scores,
bboxes,
masks,
iou_th=IOU_TH
)
encoded_masks = ensemble_pred_masks(
masks,
classes,
min_pixels=MIN_PIXELS
)
for enc_mask in encoded_masks:
subm_ids.append(test_name[:test_name.find('.')])
subm_masks.append(enc_mask)
pd.DataFrame({
'id': subm_ids,
'predicted': subm_masks
}).to_csv('submission.csv', index=False)
pd.read_csv('submission.csv').head()
```
# References
1. https://www.kaggle.com/vgarshin/detectron2-inference-with-ensemble-and-nms
# GPU-accelerated interactive visualization of single cells with RAPIDS, Scanpy and Plotly Dash
Copyright (c) 2020, NVIDIA CORPORATION.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
In this notebook, we cluster cells based on a single-cell RNA-seq count matrix, and produce an interactive visualization of the clustered cells that allows for further analysis of the data in a browser window.
For demonstration purposes, we use a dataset of ~70,000 human lung cells from Travaglini et al. 2020 (https://www.biorxiv.org/content/10.1101/742320v2) and label cells using the ACE2, TMPRSS2, and EPCAM marker genes.
## Import requirements
```
import scanpy as sc
import anndata
import sys
import time
import os, wget
import cudf
import cupy as cp
from cuml.decomposition import PCA
from cuml.manifold import TSNE
from cuml.cluster import KMeans
from cuml.preprocessing import StandardScaler
import rapids_scanpy_funcs
import warnings
warnings.filterwarnings('ignore', 'Expected ')
```
We use the RAPIDS memory manager on the GPU to control how memory is allocated.
```
import rmm
rmm.reinitialize(
managed_memory=True, # Allows oversubscription
pool_allocator=False, # default is False
devices=0, # GPU device IDs to register. By default registers only GPU 0.
)
cp.cuda.set_allocator(rmm.rmm_cupy_allocator)
```
## Input data
In the cell below, we provide the path to the `.h5ad` file containing the count matrix to analyze. Please see the README for instructions on how to download the dataset we use here.
We recommend saving count matrices in the sparse .h5ad format as it is much faster to load than a dense CSV file. To run this notebook using your own dataset, please see the README for instructions to convert your own count matrix into this format. Then, replace the path in the cell below with the path to your generated `.h5ad` file.
```
input_file = "../data/krasnow_hlca_10x.sparse.h5ad"
if not os.path.exists(input_file):
print('Downloading import file...')
os.makedirs('../data', exist_ok=True)
wget.download('https://rapids-single-cell-examples.s3.us-east-2.amazonaws.com/krasnow_hlca_10x.sparse.h5ad',
input_file)
```
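If your own data starts out as a dense CSV count matrix, a minimal conversion sketch (an assumption, not the README's exact recipe; the file names are hypothetical) could look like this:
```
import scanpy as sc
import scipy.sparse

adata = sc.read_csv("my_counts.csv")          # rows = cells, columns = genes
adata.X = scipy.sparse.csr_matrix(adata.X)    # store the matrix sparsely
adata.write("../data/my_counts.sparse.h5ad")  # write in .h5ad format
```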
## Set parameters
```
# marker genes
RIBO_GENE_PREFIX = "RPS" # Prefix for ribosomal genes to regress out
markers = ["ACE2", "TMPRSS2", "EPCAM"] # Marker genes for visualization
# filtering cells
min_genes_per_cell = 200 # Filter out cells with fewer genes than this expressed
max_genes_per_cell = 6000 # Filter out cells with more genes than this expressed
# filtering genes
min_cells_per_gene = 1 # Filter out genes expressed in fewer cells than this
n_top_genes = 5000 # Number of highly variable genes to retain
# PCA
n_components = 50 # Number of principal components to compute
# KNN
n_neighbors = 15 # Number of nearest neighbors for KNN graph
knn_n_pcs = 50 # Number of principal components to use for finding nearest neighbors
# UMAP
umap_min_dist = 0.3
umap_spread = 1.0
```
## Load and Prepare Data
We load the sparse count matrix from an `h5ad` file using Scanpy. The sparse count matrix will then be placed on the GPU.
```
%%time
adata = sc.read(input_file)
adata.shape
%%time
genes = cudf.Series(adata.var_names)
barcodes = cudf.Series(adata.obs_names)
sparse_gpu_array = cp.sparse.csr_matrix(adata.X)
```
## Preprocessing
### Filter
We filter the count matrix to remove cells with an extreme number of genes expressed.
```
%%time
sparse_gpu_array, barcodes = rapids_scanpy_funcs.filter_cells(sparse_gpu_array, min_genes=min_genes_per_cell, max_genes=max_genes_per_cell, barcodes=barcodes)
```
Some genes will now have zero expression in all cells. We filter out such genes.
```
%%time
sparse_gpu_array, genes = rapids_scanpy_funcs.filter_genes(sparse_gpu_array, genes, min_cells=1)
```
The size of our count matrix is now reduced.
```
sparse_gpu_array.shape
```
### Normalize
We normalize the count matrix so that the total counts in each cell sum to 1e4.
```
%%time
sparse_gpu_array = rapids_scanpy_funcs.normalize_total(sparse_gpu_array, target_sum=1e4)
```
Next, we log transform the count matrix.
```
%%time
sparse_gpu_array = sparse_gpu_array.log1p()
```
### Select Most Variable Genes
We first save the 'raw' expression values of the ACE2 and TMPRSS2 genes to use for labeling cells afterward. We will also store the expression of an epithelial marker gene (EPCAM).
```
%%time
tmp_norm = sparse_gpu_array.tocsc()
marker_genes_raw = {
("%s_raw" % marker): tmp_norm[:, genes[genes == marker].index[0]].todense().ravel()
for marker in markers
}
del tmp_norm
```
We identify the top 5000 variable genes using the `cellranger` method.
```
%%time
hvg = rapids_scanpy_funcs.highly_variable_genes(sparse_gpu_array, genes, n_top_genes=5000)
```
We filter the count matrix to retain only the 5000 most variable genes.
```
%%time
sparse_gpu_array = sparse_gpu_array[:, hvg]
genes = genes[hvg].reset_index(drop=True)
sparse_gpu_array.shape
```
### Regress out confounding factors (number of counts, ribosomal gene expression)
We can now perform regression on the count matrix to correct for confounding factors - for example purposes, we use the number of counts and the expression of ribosomal genes. Many workflows use the expression of mitochondrial genes (named starting with `MT-`).
We now calculate the total counts and the percentage of ribosomal counts for each cell.
```
%%time
ribo_genes = genes.str.startswith(RIBO_GENE_PREFIX)
n_counts = sparse_gpu_array.sum(axis=1)
percent_ribo = (sparse_gpu_array[:,ribo_genes].sum(axis=1) / n_counts).ravel()
n_counts = cp.array(n_counts).ravel()
percent_ribo = cp.array(percent_ribo).ravel()
```
And perform regression:
```
%%time
sparse_gpu_array = rapids_scanpy_funcs.regress_out(sparse_gpu_array, n_counts, percent_ribo)
```
### Scale
Finally, we scale the count matrix to obtain a z-score and apply a cutoff value of 10 standard deviations, obtaining the preprocessed count matrix.
```
%%time
sparse_gpu_array = StandardScaler().fit_transform(sparse_gpu_array).clip(a_max=10)
```
## Cluster & Visualize
We store the preprocessed count matrix as an AnnData object, which is currently in host memory.
We also add the barcodes of the filtered cells, and the expression levels of the marker genes, to the annData object.
```
%%time
adata = anndata.AnnData(sparse_gpu_array.get())
adata.var_names = genes.to_pandas()
adata.obs_names = barcodes.to_pandas()
for name, data in marker_genes_raw.items():
adata.obs[name] = data.get()
```
### Reduce
We use PCA to reduce the dimensionality of the matrix to its top 50 principal components.
```
%%time
adata.obsm["X_pca"] = PCA(n_components=n_components, output_type="numpy").fit_transform(adata.X)
```
### UMAP + clustering
We visualize the cells using the UMAP algorithm in Rapids. Before UMAP, we need to construct a k-nearest neighbors graph in which each cell is connected to its nearest neighbors. This can be done conveniently using rapids functionality already integrated into Scanpy.
Note that Scanpy uses an approximation to the nearest neighbors on the CPU while the GPU version performs an exact search. While both methods are known to yield useful results, some differences in the resulting visualization and clusters can be observed.
```
%%time
sc.pp.neighbors(adata, n_neighbors=n_neighbors, n_pcs=knn_n_pcs, method='rapids')
```
The UMAP function from Rapids is also integrated into Scanpy.
```
%%time
sc.tl.umap(adata, min_dist=umap_min_dist, spread=umap_spread, method='rapids')
```
Finally, we use the Leiden algorithm for graph-based clustering.
```
%%time
adata.obs['leiden'] = rapids_scanpy_funcs.leiden(adata, resolution=0.1)
```
We plot the cells using the UMAP visualization, with the Leiden clusters as labels.
```
sc.pl.umap(adata, color=["leiden"])
```
## Defining re-clustering function for interactive visualization
As we have shown above, the speed of RAPIDS allows us to run steps like dimension reduction, clustering and visualization in seconds or even less. In the sections below, we create an interactive visualization that takes advantage of this speed by allowing users to cluster and analyze selected groups of cells at the click of a button.
First, we create a function named `re_cluster`. This function can be called on selected groups of cells. According to the function defined below, PCA, KNN, UMAP and Leiden clustering will be re-computed on the selected cells. You can customize this function for your desired analysis.
```
def re_cluster(adata):
#### Function to repeat clustering and visualization on subsets of cells
#### Runs PCA, KNN, UMAP and Leiden clustering on selected cells.
adata.obsm["X_pca"] = PCA(n_components=n_components, output_type="numpy").fit_transform(adata.X)
sc.pp.neighbors(adata, n_neighbors=n_neighbors, n_pcs=knn_n_pcs, method='rapids')
sc.tl.umap(adata, min_dist=umap_min_dist, spread=umap_spread, method='rapids')
adata.obs['leiden'] = rapids_scanpy_funcs.leiden(adata)
return adata
```
## Creating an interactive visualization with Plotly Dash
<img src="https://github.com/clara-parabricks/rapids-single-cell-examples/blob/master/images/dashboard.png?raw=true" alt="Interactive Dashboard" width="400"/>
Below, we create the interactive visualization using the `adata` object and the re-clustering function defined above. To learn more about how this visualization is built, see `visualize.py`.
When you run the cell below, it returns a link. Click on this link to access the interactive visualization within your browser.
Once opened, click the `Directions` button for instructions.
```
import visualize
v = visualize.Visualization(adata, markers, re_cluster_callback=re_cluster)
v.start('0.0.0.0')
selected_cells = v.new_df
```
Within the dashboard, you can select cells using a variety of methods. You can then cluster, visualize and analyze the selected cells using the tools provided. Click on the `Directions` button for details.
To export the selected cells and the results of your analysis back to the notebook, click the `Export to Dataframe` button. This exports the results of your analysis back to this notebook, and closes the interactive dashboard.
See the next section for instructions on how to use the exported data.
## Exporting a selection of cells from the dashboard
If you exported a selection cells from the interactive visualization, your selection will be available here as a data frame named `selected_cells`. The `labels` column of this dataframe contains the newly generated cluster labels assigned to these selected cells.
```
print(selected_cells.shape)
selected_cells.head()
```
You can link the selected cells to the original `adata` object using the cell barcodes.
```
adata_selected_cells = adata[selected_cells.barcode.to_array(),:]
adata_selected_cells
```
Guilherme Andrade, Gabriel Ramos, Daniel Madeira, Rafael Sachetto, Renato Ferreira, Leonardo Rocha, G-DBSCAN: A GPU Accelerated Algorithm for Density-based Clustering, Procedia Computer Science, Volume 18, 2013, Pages 369-378, ISSN 1877-0509, http://dx.doi.org/10.1016/j.procs.2013.05.200.
(http://www.sciencedirect.com/science/article/pii/S1877050913003438)
Abstract: With the advent of Web 2.0, we see a new and differentiated scenario: there is more data than that can be effectively analyzed. Organizing this data has become one of the biggest problems in Computer Science. Many algorithms have been proposed for this purpose, highlighting those related to the Data Mining area, specifically the clustering algorithms. However, these algorithms are still a computational challenge because of the volume of data that needs to be processed. We found in the literature some proposals to make these algorithms feasible, and, recently, those related to parallelization on graphics processing units (GPUs) have presented good results. In this work we present the G-DBSCAN, a GPU parallel version of one of the most widely used clustering algorithms, the DBSCAN. Although there are other parallel versions of this algorithm, our technique distinguishes itself by the simplicity with which the data are indexed, using graphs, allowing various parallelization opportunities to be explored. In our evaluation we show that the G-DBSCAN using GPU, can be over 100x faster than its sequential version using CPU.
Keywords: Clustering; Dbscan; Parallel computing; GPU
```
import numpy as np
eps = 0.3
minpts = 10
from sklearn.datasets import make_blobs
centers = [[1, 1], [-1, -1], [1, -1]]
d, labels_true = make_blobs(n_samples=250, centers=centers, cluster_std=0.3,
random_state=0)
from sklearn.metrics.pairwise import euclidean_distances
core = np.zeros(d.shape[0])       # core[i] = 1 if point i has at least minpts neighbors within eps
Va = np.zeros( (d.shape[0], 2) )  # Va[i] = [degree of point i, start offset of its adjacency list in Ea]
for i in range(d.shape[0]):
num_negh = 0
for j in range(d.shape[0]):
dist = euclidean_distances(d[i].reshape(1, -1), d[j].reshape(1,-1))[0]
if dist < eps:
num_negh += 1
Va[i][0] = num_negh
if num_negh >= minpts:
core[i] = 1
Va[:,0]
Va[:,1] = np.cumsum(Va[:,0])-Va[:,0]  # exclusive prefix sum of the degrees gives each point's offset into Ea
Ea = np.zeros( int(Va[:,1][-1]) + int(Va[:,0][-1]) )  # flattened adjacency (edge) list of the eps-neighborhood graph
for i in range(d.shape[0]):
ni = 0
for j in range(d.shape[0]):
dist = euclidean_distances(d[i].reshape(1, -1), d[j].reshape(1,-1))[0]
if dist < eps:
#print(i, j, Va[i], int(Va[i][1])+ni)
Ea[int(Va[i][1])+ni] = j
ni += 1
Ea
def BreadthFirstSearchKernel(j, Fa, Xa):
tid = j
if Fa[tid]:
#print("tid", tid, Fa[tid])
Fa[tid] = 0
Xa[tid] = 1
#print("rng from", int(Va[j][1]), "count", int(Va[j][0]))
for k in range(int(Va[j][1]), int(Va[j][1])+int(Va[j][0])):
nid = int(Ea[k])
if not Xa[nid]:
#print(k, nid, not Xa[nid])
Fa[nid] = 1
def BreadthFirstSearch(i, cluster, visited, labels):
Xa = np.zeros(d.shape[0])
Fa = np.zeros(d.shape[0])
Fa[i] = 1
while np.count_nonzero(Fa) > 0:
#print("Count nonzero", np.count_nonzero(Fa))
for j in range(d.shape[0]):
BreadthFirstSearchKernel(j, Fa, Xa)
print("Count nonzero", np.count_nonzero(Fa))
for j in range(d.shape[0]):
if Xa[j]:
visited[j] = 1
labels[j] = cluster
print("Cluster assign", j, cluster)
def IdentifyCluster():
cluster = 0
labels = np.full( d.shape[0], -1 )
visited = np.zeros(d.shape[0])
for i in range(d.shape[0]):
if visited[i]:
continue
if not core[i]:
continue
print("Core ", i)
visited[i] = 1
labels[i] = cluster
BreadthFirstSearch(i, cluster, visited, labels)
cluster += 1
return (cluster, labels)
(cluster, labels) = IdentifyCluster()
import pandas as pd
df = pd.DataFrame.from_records( list(
map( lambda i: ( d[i][0], d[i][1], labels[i]), range(d.shape[0])) ), columns=['x', 'y', 'class'] )
%matplotlib inline
import seaborn as sns
sns.pairplot(df, x_vars=['x'], y_vars=['y'], hue="class", size=7)
```
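As a quick sanity check (not part of the original code), the resulting labels can be compared against scikit-learn's reference DBSCAN implementation with the same `eps` and `min_samples` settings:
```
from sklearn.cluster import DBSCAN
from sklearn.metrics import adjusted_rand_score

db = DBSCAN(eps=eps, min_samples=minpts).fit(d)
# a score close to 1 means the two clusterings agree (noise/border handling may differ slightly)
print(adjusted_rand_score(labels, db.labels_))
```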
# Building a Custom Model (Siamese)
> fastai proposes the DataBlock API as the way to define data. We look at what each argument means and how it is applied in the actual official Siamese tutorial.
- author: "Chansung Park"
- toc: true
- image: images/datablock/siamese-model.png
- comments: true
- categories: [model, siamese, fastai]
- permalink: /model-siamese/
- badges: false
- search_exclude: true
```
#hide
!pip install fastai
!pip install nbdev
#hide
from fastai.vision.all import *
import nbdev
#hide
path = untar_data(URLs.PETS)
files = get_image_files(path/"images")
def category_extraction_func(filename):
    return re.match(r'^(.*)_\d+.jpg$', filename.name).groups()[0] # extract and return the group captured by the parentheses
categories = list(set(files.map(category_extraction_func)))
splits = RandomSplitter()(files)
splits_files = [files[splits[i]] for i in range(2)]
splits_sets = mapped(set, splits_files)
splbl2files = [{c: [f for f in s if category_extraction_func(f) == c] for c in categories} for s in splits_sets]
def get_split(filename):
for i, s in enumerate(splits_sets):
if filename in s: return i
raise ValueError(f'File {f} is not presented in any split.')
def draw_other(filename):
given_category = category_extraction_func(filename)
split = get_split(filename)
is_same = random.random() < 0.5
if not is_same:
other_category = random.choice(L(category for category in categories if category != given_category))
else:
other_category = given_category
return random.choice(splbl2files[split][other_category]), is_same
def get_tuples(filenames):
return [[filename, *draw_other(filename)] for filename in filenames]
class ImageTuple(fastuple):
@classmethod
def create(cls, filenames):
return cls(tuple(PILImage.create(f) for f in filenames))
def show(self, ctx=None, **kwargs):
t1,t2 = self
if not isinstance(t1, Tensor) or \
not isinstance(t2, Tensor) or \
t1.shape != t2.shape:
return ctx
line = t1.new_zeros(t1.shape[0], t1.shape[1], 10)
return show_image(torch.cat([t1,line,t2], dim=2), ctx=ctx, **kwargs)
def ImageTupleBlock():
return TransformBlock(type_tfms=ImageTuple.create,
batch_tfms=IntToFloatTensor)
def get_x(t): return t[:2]
def get_y(t): return t[2]
def splitter(items):
def get_split_files(i):
return [j for j,(f1,f2,same) in enumerate(items) if get_split(f1)==i]
return get_split_files(0),get_split_files(1)
siamese = DataBlock(
    get_items=get_tuples, # function that gathers all the data items
    get_x=get_x, # function that selects the inputs from each loaded item
    get_y=get_y, # function that selects the target from each loaded item
    blocks=(ImageTupleBlock, CategoryBlock), # given as a tuple; more than two blocks are possible
    item_tfms=Resize(224), # item-level transforms
    batch_tfms=[Normalize.from_stats(*imagenet_stats)], # batch-level transforms
    splitter=splitter # function that splits the data into training/validation sets
)
#hide
siamese.summary(files)
```
This post covers how to build a model that can consume the **DataBlock** created in the previous post, "[How to build a DataBlock (Siamese)](https://fast-ai-kr.github.io/tutorials/datablock-siamese/)". In principle it is enough to subclass PyTorch's **nn.Module** to build the model, but along the way we will look at a few convenient functions that **fastai** provides.
## SiameseModel Overview
First, let's recall what a Siamese model does. It takes two images as input and decides whether they belong to the same class, predicting **True** if they do and **False** otherwise.
The code below shows a simple module called **SiameseModel**. It is implemented by subclassing fastai's **Module** instead of PyTorch's **nn.Module**. The only difference between **nn.Module** and **Module** is whether you need to call the parent **super().__init__()** inside the **__init__** method; if you do call **super().__init__()**, using **nn.Module** works just as well.
```
class SiameseModel(Module):
def __init__(self, encoder, head):
self.encoder,self.head = encoder,head
def forward(self, x1, x2):
filters = torch.cat([self.encoder(x1), self.encoder(x2)], dim=1)
return self.head(filters)
```
You can check this in the docstring of **Module** shown below.
```
nbdev.show_doc(Module)
```
Let's look at how **SiameseModel** is implemented. The **__init__** constructor takes two arguments (**encoder**, **head**). These correspond to the two parts of a typical CNN model: the **convolutional layers** responsible for feature extraction and the **fully connected layers** responsible for classification.
The reason for taking these two parts is **transfer learning**. Transfer learning usually reuses the weights of a pretrained model's **convolutional layers** as they are, inheriting the ability to extract a wide variety of features learned from a huge number of images. This part corresponds to the **encoder**.
However, the problem the pretrained model was built to solve is different from the problem we want to solve now: the number of categories to classify differs, and so do the categories themselves. So we design the final **head** to fit our own problem and then attach it to the **encoder**.
**fastai** provides **create_body**, a convenient function for extracting the **encoder** part from a pretrained model, along with **create_head**, which builds a **head** with a commonly used structure. In other words, we extract the **encoder** with **create_body** and then combine it with the part created by **create_head**.

With this in mind, let's look at the **SiameseModel** implementation once more.
```
class SiameseModel(Module):
def __init__(self, encoder, head):
self.encoder,self.head = encoder,head
def forward(self, x1, x2):
filters = torch.cat([self.encoder(x1), self.encoder(x2)], dim=1)
return self.head(filters)
```
The **__init__** constructor simply receives the **encoder** and **head** and stores them in instance attributes. The **forward** method takes two arguments, **x1** and **x2**, which are the two different images to be compared. Each image is passed through the **encoder**, and the results are concatenated with **torch.cat**, giving twice the size of a single **encoder** output. Although there are two outputs, the same **encoder** is used for both, so the weights are shared.
The concatenated result is then simply fed into the **head**, and that is everything **SiameseModel** does.
## encoder (body) and head
What we need to do now is create the **encoder** and the **head**. We could build an **encoder** ourselves and train it from scratch, but excellent models pretrained on ImageNet already exist, for example **ResNet**, **xResNet**, and **EfficientNet**.
**fastai** ships with **ResNet**, **xResNet**, **VGG**, **AlexNet**, **DenseNet**, and **SqueezeNet** out of the box. Among these, **ResNet** actually reuses the model provided by PyTorch as-is, but the models exposed through **fastai** come with additional metadata. Let's first see what this metadata means.
### Model metadata
The following shows the metadata for **resnet34**. You can look up each model's metadata through the **model_meta** dictionary provided by **fastai**.
```
model_meta[resnet34]
```
As you can see, the metadata holds three keys (**cut**, **split**, **stats**). Each key means the following.
- **cut**
  - The index at which the fully connected layers begin in the CNN structure. Conceptually, accessing something like **resnet34[-2]** would give you only the convolutional layers, with the fully connected layers excluded. This information is very helpful for transfer learning, where you remove the fully connected layers from the pretrained model and add new ones suited to your own problem.
---
- **split**
  - **split** separates the parameters into groups that receive discriminative learning rates, including the part that should be **frozen** during transfer learning. The training system provided by **fastai** is designed to apply different learning rates to different layer groups, and the **split** entry carries the information about those groups.
---
- **stats**
  - Stores the statistics (**mean**, **standard deviation**) of the data the pretrained model was trained on. Knowing the distribution of the values in the data a model was trained on, and transforming new data to match that distribution, is a strategy that can squeeze a bit more out of transfer learning. This information is automatically inserted into **batch_tfms** when the **DataBlock** is constructed.
### Creating the encoder
Since we have the model's metadata, we can use it to cut out the **encoder** part (otherwise we would have to inspect the model structure ourselves and decide where to cut). In particular, when the **cut** information is available, the **create_body** function provided by **fastai** makes the cut easy.
Let's first look at the signature of the **create_body** function.
```
nbdev.show_doc(create_body)
```
Here **arch** is the model, e.g. **resnet18** or **resnet50**. The argument to pay attention to is **cut**; this is where you pass the **cut** value from the model's metadata.
Next, from the result of running **create_body** on **resnet34**, we take the last element. As the name suggests, the model consists of 34 layers, so this avoids a rather long output while confirming that the resulting body ends with convolutional layers.
```
encoder = create_body(resnet34, cut=model_meta[resnet34]['cut'])
encoder[-1]
```
### Creating the head
```
head = create_head(512*4, 2, ps=0.5)
nbdev.show_doc(create_head)
head
```
## Creating the SiameseModel and the Learner
```
model = SiameseModel(encoder, head)
def siamese_splitter(model):
return [params(model.encoder), params(model.head)]
def loss_func(out, targ):
return CrossEntropyLossFlat()(out, targ.long())
```
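With these pieces in place, the remaining step implied by the section title is to put everything into a **Learner**. Below is a minimal sketch (the `dataloaders` line assumes the **siamese** DataBlock defined earlier in this post):
```
dls = siamese.dataloaders(files)
learn = Learner(dls, model,
                loss_func=loss_func,
                splitter=siamese_splitter,
                metrics=accuracy)
learn.freeze()               # train only the head at first
learn.fit_one_cycle(4, 3e-3) # then optionally unfreeze and fine-tune further
```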
In this notebook I am going to discuss:
1. deep learning
2. forward propagation
3. gradient descent
4. backward propagation
5. a basic deep learning model with Keras
### Deep Learning :
----
Deep learning is a family of machine learning algorithms in which artificial neural networks solve a particular problem. These networks are built from *perceptrons*, simple units loosely modeled on neurons in the human brain. Training such a network is essentially an iterative process of **trial and error**.
### Forward Propagation :
----
The basic idea of forward propagation is that we start from input nodes with associated weights, then compute the values of the hidden nodes, and finally the output node.
Here is an example :
Let's say we have two input nodes with values (2, 3), hidden-node weights (1, 1) and (-1, 1), and output weights (2, -1). We calculate the hidden nodes and then the output by
$(2*1)+(3*1) = 5$
$(2*-1)+(3*1) = 1$
$(5*2)+(1*-1) = 9$ // output
* forward propagation follows a multiply-and-add process
* it is essentially a dot product
* forward propagation handles one data point at a time
* the output is the prediction for that data point

```
import numpy as np
input_data = np.array([2,3])
#store the weights of the nodes as dictionary
weights = {
'node_0': np.array([1,1]),
'node_1': np.array([-1,1]),
'output': np.array([2,-1])
}
node_0_value = (input_data * weights['node_0']).sum()
node_1_value = (input_data * weights['node_1']).sum()
hidden_layer_value = np.array([node_0_value, node_1_value])
output = (hidden_layer_value * weights['output']).sum()
print(hidden_layer_value)
print(output)
```
### Backward Propagation :
----
Most of the time the output of forward propagation is not close to the true value; backward propagation is used to minimize this error. Backward propagation updates the weights with respect to the error.
* start at random weights.
* use forward propagation to make a prediction.
* use backward propagation to calculate the slope of the loss function w.r.t. each weight.
* multiply that slope by the learning rate and subtract the result from the current weights.
* keep repeating that cycle until we get to a flat part.
> **Gradient Descent:** Gradient descent is an optimization algorithm used to minimize a loss function by iteratively moving in the direction of steepest descent, as defined by the negative of the gradient. In machine learning, we use gradient descent to update the parameters of our model.
```
import numpy as np
import matplotlib.pyplot as plt
#return slope of loss function
def get_slope(input_data, target, weights):
error = get_error(input_data, target, weights)
slope = 2 * input_data * error
return slope
#return mean square error
def get_mse(input_data, target, weights):
error = get_error(input_data, target, weights)
mse = np.mean(error **2)
return mse
#return the prediction error
def get_error(input_data, target, weights):
preds = (input_data * weights).sum()
error = preds - target
return error
# The data point you will make a prediction for
input_data = np.array([4,2,3])
#target
target_actual = 0
# Sample weights
weights = np.array([10,1,2])
learning_rate = 0.01
mse_hist = []
#print(get_slope(input_data, target_actual, weights))
for i in range(20):
slope = get_slope(input_data, target_actual, weights)
weights = weights - learning_rate * slope
#print('iteration {0} weights : {1}'.format(i, weights))
mse = get_mse(input_data, target_actual, weights)
mse_hist.append(mse)
plt.plot(mse_hist)
plt.xlabel('Iterations')
plt.ylabel('Mean Squared Error')
plt.show()
```
### Keras Architecture :
---
Keras is a deep learning library. It runs on top of TensorFlow.
Here is the basic keras architecture :
* specify architecture
* compile
* fit
* save model
* reload model
* predict
* evaluate
```
import keras
from keras.layers import Dense
from keras.models import Sequential
from keras.models import load_model
from keras.callbacks import EarlyStopping
import numpy as np
import pandas as pd
#import data
df = pd.read_csv('hourly_wages.csv')
target_df = df['wage_per_hour']
feature_df = df.drop(columns = ['wage_per_hour'])
#convert to numpy matrices
predictors = feature_df.values
target = target_df.values
#get the number of columns
n_cols = predictors.shape[1]
#print(len(predictors))
#specify model
model = Sequential()
#specify layer
#1st layer
model.add(Dense(50, activation='relu', input_shape=(n_cols,)))
#2nd layer
model.add(Dense(32, activation='relu'))
#output layer
model.add(Dense(1))
#compile
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])
#stop training when the model stops improving
early_stopping_monitor = EarlyStopping(patience=2)
#fit
model.fit(predictors, target, validation_split=0.3, epochs=20, callbacks=[early_stopping_monitor])
#save
model.save('hourly_wages.h5')
model.summary()
#reload
my_model = load_model('hourly_wages.h5')
# predict
```
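The `# predict` placeholder at the end of the cell above is left empty; as a minimal sketch (not part of the original notebook), predictions from the reloaded model could be obtained like this:
```
# predict hourly wages for the first few rows of the same feature matrix
predictions = my_model.predict(predictors[:5])
print(predictions)
```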
# Transformer
What is a Transformer?
A Transformer is a type of neural network architecture developed by Vaswani et al. in 2017.
Without going into too much detail, this model architecture consists of a multi-head self-attention mechanism combined with an encoder-decoder structure. It can achieve SOTA results that outperform various other models leveraging recurrent (RNN) or convolutional neural networks (CNN) both in terms of evaluation score (BLEU score) and training time.
The Transformer model structure has largely replaced other NLP model implementations such as RNNs.
The GPT model only uses the decoder of the Transformer structure (unidirectional), while **BERT** is based on the Transformer encoder (bidirectional).
Many Transformer-based NLP models were specifically created for transfer learning. Transfer learning describes an approach where a model is first pre-trained on large unlabeled text corpora using self-supervised learning.
While GPT used a standard language modeling objective which predicts the next word in a sentence, BERT was trained on Masked Language Modeling (MLM) and Next Sentence Prediction (NSP). The RoBERTa model replicated the BERT model architecture but changed the pre-training using more data, training for longer, and removing the NSP objective.
The model checkpoints of the pre-trained models serve as the starting point for fine-tuning. A labeled dataset for a specific downstream task is used as training data. There are several different fine-tuning approaches, including the following:
* Training the entire model on the labeled data.
* Training only higher layers and freezing the lower layers.
* Freezing the entire model and training one or more additional layers added on top.
No matter the approach, a task-specific output layer usually needs to be attached to the model.
Source: [How to use transformer-based NLP models](https://towardsdatascience.com/how-to-use-transformer-based-nlp-models-a42adbc292e5)
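As a rough illustration of the "train only higher layers and freeze the lower layers" approach, the sketch below freezes the embeddings and the lower encoder layers of a BERT-style model so that only the upper layers and the task head are updated. The parameter-name prefixes are assumptions based on a typical Hugging Face BERT implementation and would need to be checked against the actual model you use.
```
def freeze_lower_layers(model, n_frozen_layers=8):
    # freeze embeddings and the first n encoder layers; leave the rest and the head trainable
    frozen_prefixes = ["bert.embeddings"] + [
        f"bert.encoder.layer.{i}." for i in range(n_frozen_layers)
    ]
    for name, param in model.named_parameters():
        if any(name.startswith(prefix) for prefix in frozen_prefixes):
            param.requires_grad = False
```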
## Multilabel Classification with BERT
```
#!pip install simpletransformers
#!pip install gin-config
!pip install tensorflow-addons
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sklearn
import torch
import wandb
from itertools import cycle
from simpletransformers.classification import MultiLabelClassificationModel
from sklearn.metrics import accuracy_score, auc, classification_report, confusion_matrix, ConfusionMatrixDisplay, roc_curve, roc_auc_score
from sklearn.model_selection import train_test_split
# load data
df = pd.read_csv('../data/df_cleaned.csv')
# Remove new lines from comments
df['comment_text'] = df.comment_text.apply(lambda x: x.replace('\n', ' '))
# category list for plots
categories = ['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate']
# prepare dataframe for train test split. MultilabelClassificator needs a text column and a labels column,
# which provides all categories as a list
new_df = pd.DataFrame()
new_df['id'] = df['id']
new_df['text'] = df['comment_text']
new_df['labels'] = df.iloc[:, 2:8].values.tolist()
def split(df):
train_df, eval_df = train_test_split(df, test_size=0.2, random_state=0)
return train_df, eval_df
# Create trand and eval df for the model training and evaluation
train_df, eval_df = split(new_df)
# Model args
args = {
'logging_steps': 10,
'overwrite_output_dir':True,
'train_batch_size':2,
'gradient_accumulation_steps':16,
'learning_rate': 3e-5,
'num_train_epochs': 4,
'max_seq_length': 128,
'wandb_project': 'toxic-comment-classification',
"wandb_kwargs":
{"name": "bert-lr3e-5"},
}
# load pretrained model for the multilabel classification task
model = MultiLabelClassificationModel('bert', 'bert-base-uncased', num_labels=6, args=args)
# train the model with the train data
model.train_model(train_df = train_df)
# save model
torch.save(model, 'saved_models/bert_lr3e-5')
# load model
model = torch.load('saved_models/bert_lr3e-5')
# evaluate model on eval_df
result, model_outputs, wrong_predictions = model.eval_model(eval_df=eval_df, roc_auc_score=sklearn.metrics.roc_auc_score)
# make predictions
preds, outputs = model.predict(eval_df.text)
# define y_true for roc_auc plot and classification report
y_true = np.array(eval_df['labels'].values.tolist())
def evaluate_roc(probs, y_true, category, color):
"""
- Print AUC and accuracy on the test set
- Plot ROC
    @params probs (np.array): an array of predicted probabilities with shape (len(y_true),)
@params y_true (np.array): an array of the true values with shape (len(y_true),)
"""
preds = probs
fpr, tpr, threshold = roc_curve(y_true, preds)
roc_auc = auc(fpr, tpr)
roc_aucs.append(roc_auc)
print(f'AUC: {roc_auc:.4f}')
# Get accuracy over the test set
y_pred = np.where(preds >= 0.3, 1, 0)
accuracy = accuracy_score(y_true, y_pred)
print(f'Accuracy: {accuracy*100:.2f}%')
# Plot ROC AUC
plt.title('Receiver Operating Characteristic')
plt.plot(fpr, tpr, color=color, label="{0} (area = {1:0.5f})".format(category, roc_auc),)
plt.plot(fpr, tpr, color=color)
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],'k--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.savefig('plots/roc_auc_curve.png')
# evaluate roc auc score and plot curves per category
roc_aucs = []  # collected by evaluate_roc for each category
colors = cycle(["aqua", "darkorange", "cornflowerblue"])
for i, color in zip(range(6), colors):
print('-----------')
print(categories[i])
print('-----------')
evaluate_roc(outputs[:, i].ravel(), y_true[:, i].ravel(), categories[i], color)
# Plot confusion matrix per category
y_test = np.array(eval_df['labels'].to_list())
preds = np.array(preds)
f, axes = plt.subplots(2, 3, figsize=(25, 15))
axes = axes.ravel()
for i in range(6):
disp = ConfusionMatrixDisplay(confusion_matrix(y_test[:, i],
preds[:, i]),
display_labels=[f'non {categories[i]}', categories[i]])
disp.plot(ax=axes[i], values_format='.4g')
disp.ax_.set_title(f'toxicity label:\n {categories[i]}', fontsize=20)
if i<3:
disp.ax_.set_xlabel('')
if i%3!=0:
disp.ax_.set_ylabel('')
disp.im_.colorbar.remove()
plt.subplots_adjust(wspace=0.8, hspace=0.01)
f.colorbar(disp.im_, ax=axes)
plt.show()
# Print classification report
print(f"Classification Report : \n\n{classification_report(y_test, preds)}")
# Create submission_file
test_df = pd.read_csv('data/test.csv')
comments = test_df.comment_text.apply(lambda x: x.replace('\n', ' ')).tolist()
preds, outputs = model.predict(comments)
submission = pd.DataFrame(outputs, columns=categories)
submission['id'] = test_df['id']
submission = submission[categories]
# write to csv and upload at Kaggle to get ROC AUC Scores for Kaggles testdata
submission.to_csv('/content/drive/MyDrive/data/submission_roberta_tuning_lr2e5.csv', index=False)
```
# Self Supervised Learning with Fastai
> Implementation of popular SOTA self-supervised learning algorithms as Fastai Callbacks.

[](https://pypi.org/project/self-supervised/#description)
[](https://zenodo.org/badge/latestdoi/295835009)
## Install
`pip install self-supervised`
## Documentation
Please read the documentation [here](https://keremturgutlu.github.io/self_supervised).
To go back to github repo please click [here](https://github.com/keremturgutlu/self_supervised/tree/master/).
## Algorithms
Please read the papers or blog posts before getting started with an algorithm, you may also check out documentation page of each algorithm to get a better understanding.
Here is the list of implemented **self_supervised.vision** algorithms:
- [SimCLR v1](https://arxiv.org/pdf/2002.05709.pdf) & [SimCLR v2](https://arxiv.org/pdf/2006.10029.pdf)
- [MoCo v1](https://arxiv.org/pdf/1911.05722.pdf) & [MoCo v2](https://arxiv.org/pdf/2003.04297.pdf)
- [BYOL](https://arxiv.org/pdf/2006.07733.pdf)
- [SwAV](https://arxiv.org/pdf/2006.09882.pdf)
- [Barlow Twins](https://arxiv.org/pdf/2103.03230.pdf)
- [DINO](https://arxiv.org/pdf/2104.14294.pdf)
Here is the list of implemented **self_supervised.multimodal** algorithms:
- [CLIP](https://arxiv.org/pdf/2103.00020.pdf)
- CLIP-MoCo (No paper, own idea)
For vision algorithms all models from [timm](https://github.com/rwightman/pytorch-image-models) and [fastai](https://github.com/fastai/fastai) can be used as encoders.
For multimodal training currently CLIP supports ViT-B/32 and ViT-L/14, following best architectures from the paper.
## Simple Usage
### Vision
#### SimCLR
```python
from self_supervised.vision.simclr import *
dls = get_dls(resize, bs)
# encoder = create_encoder("xresnet34", n_in=3, pretrained=False) # a fastai encoder
encoder = create_encoder("tf_efficientnet_b4_ns", n_in=3, pretrained=False) # a timm encoder
model = create_simclr_model(encoder, hidden_size=2048, projection_size=128)
aug_pipelines = get_simclr_aug_pipelines(size=size)
learn = Learner(dls,model,cbs=[SimCLR(aug_pipelines, temp=0.07)])
learn.fit_flat_cos(100, 1e-2)
```
#### MoCo
```python
from self_supervised.vision.moco import *
dls = get_dls(resize, bs)
# encoder = create_encoder("xresnet34", n_in=3, pretrained=False) # a fastai encoder
encoder = create_encoder("tf_efficientnet_b4_ns", n_in=3, pretrained=False) # a timm encoder
model = create_moco_model(encoder, hidden_size=2048, projection_size=128)
aug_pipelines = get_moco_aug_pipelines(size=size)
learn = Learner(dls, model,cbs=[MOCO(aug_pipelines=aug_pipelines, K=128)])
learn.fit_flat_cos(100, 1e-2)
```
#### BYOL
```python
from self_supervised.vision.byol import *
dls = get_dls(resize, bs)
# encoder = create_encoder("xresnet34", n_in=3, pretrained=False) # a fastai encoder
encoder = create_encoder("tf_efficientnet_b4_ns", n_in=3, pretrained=False) # a timm encoder
model = create_byol_model(encoder, hidden_size=2048, projection_size=128)
aug_pipelines = get_byol_aug_pipelines(size=size)
learn = Learner(dls, model,cbs=[BYOL(aug_pipelines=aug_pipelines)])
learn.fit_flat_cos(100, 1e-2)
```
#### SWAV
```python
from self_supervised.vision.swav import *
dls = get_dls(resize, bs)
# encoder = create_encoder("xresnet34", n_in=3, pretrained=False) # a fastai encoder
encoder = create_encoder("tf_efficientnet_b4_ns", n_in=3, pretrained=False) # a timm encoder
model = create_swav_model(encoder, hidden_size=2048, projection_size=128)
aug_pipelines = get_swav_aug_pipelines(num_crops=[2,6],
crop_sizes=[128,96],
min_scales=[0.25,0.05],
max_scales=[1.0,0.3])
learn = Learner(dls, model, cbs=[SWAV(aug_pipelines=aug_pipelines, crop_assgn_ids=[0,1], K=bs*2**6, queue_start_pct=0.5)])
learn.fit_flat_cos(100, 1e-2)
```
#### Barlow Twins
```python
from self_supervised.vision.simclr import *
dls = get_dls(resize, bs)
# encoder = create_encoder("xresnet34", n_in=3, pretrained=False) # a fastai encoder
encoder = create_encoder("tf_efficientnet_b4_ns", n_in=3, pretrained=False) # a timm encoder
model = create_barlow_twins_model(encoder, hidden_size=2048, projection_size=128)
aug_pipelines = get_barlow_twins_aug_pipelines(size=size)
learn = Learner(dls,model,cbs=[BarlowTwins(aug_pipelines, lmb=5e-3)])
learn.fit_flat_cos(100, 1e-2)
```
#### DINO
```python
from self_supervised.models.vision_transformer import *
from self_supervised.vision.dino import *
dls = get_dls(resize, bs)
deits16 = MultiCropWrapper(deit_small(patch_size=16, drop_path_rate=0.1))
dino_head = DINOHead(deits16.encoder.embed_dim, 2**16, norm_last_layer=True)
student_model = nn.Sequential(deits16,dino_head)
deits16 = MultiCropWrapper(deit_small(patch_size=16))
dino_head = DINOHead(deits16.encoder.embed_dim, 2**16, norm_last_layer=True)
teacher_model = nn.Sequential(deits16,dino_head)
dino_model = DINOModel(student_model, teacher_model)
aug_pipelines = get_dino_aug_pipelines(num_crops=[2,6],
crop_sizes=[128,96],
min_scales=[0.25,0.05],
max_scales=[1.0,0.3])
learn = Learner(dls, dino_model, cbs=[DINO(aug_pipelines=aug_pipelines)])
learn.fit_flat_cos(100, 1e-2)
```
### Multimodal
#### CLIP
```python
from self_supervised.multimodal.clip import *
dls = get_dls(...)
clip_tokenizer = ClipTokenizer()
vitb32_config_dict = vitb32_config(224, clip_tokenizer.context_length, clip_tokenizer.vocab_size)
clip_model = CLIP(**vitb32_config_dict, checkpoint=False, checkpoint_nchunks=0)
learn = Learner(dls, clip_model, loss_func=noop, cbs=[CLIPTrainer()])
learn.fit_flat_cos(100, 1e-2)
```
#### CLIP-MoCo
```python
from self_supervised.multimodal.clip_moco import *
dls = get_dls(...)
clip_tokenizer = ClipTokenizer()
vitb32_config_dict = vitb32_config(224, clip_tokenizer.context_length, clip_tokenizer.vocab_size)
clip_model = CLIPMOCO(K=4096,m=0.999, **vitb32_config_dict, checkpoint=False, checkpoint_nchunks=0)
learn = Learner(dls, clip_model, loss_func=noop, cbs=[CLIPMOCOTrainer()])
learn.fit_flat_cos(100, 1e-2)
```
## ImageWang Benchmarks
All of the algorithms implemented in this library have been evaluated in [ImageWang Leaderboard](https://github.com/fastai/imagenette#image%E7%BD%91-leaderboard).
Overall, the ranking of the algorithms in most of the benchmarks is `SwAV > MoCo > BYOL > SimCLR`. For details you may inspect the history of the [ImageWang Leaderboard](https://github.com/fastai/imagenette#image%E7%BD%91-leaderboard) through GitHub.
`BarlowTwins` is still under testing on ImageWang.
It should be noted that during these experiments no hyperparameter selection/tuning was made beyond using `learn.lr_find()` or making
sanity checks over data augmentations by visualizing batches. So, there is still room for improvement and the overall rankings of the algorithms may change based on your setup. Yet, the overall rankings are on par with the papers.
## Contributing
Contributions and/or requests for new self-supervised algorithms are welcome. This repo will try to keep itself up-to-date with recent SOTA self-supervised algorithms.
Before raising a PR please create a new branch with name `<self-supervised-algorithm>`. You may refer to previous notebooks before implementing your Callback.
Please refer to sections `Developers Guide, Abbreviations Guide, and Style Guide` from https://docs.fast.ai/dev-setup and note that same rules apply for this library.
```
#importing important libraries
#libraries for reading dataset
import numpy as np
import pandas as pd
#libraries for data visualisation
import matplotlib.pyplot as plt
import seaborn as sns
#libraries for model building and understanding
import sklearn
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression
#library to deal with warning
import warnings
warnings.filterwarnings('ignore')
#to display all the columns in the dataset
pd.set_option('display.max_columns', 500)
```
## Reading Data
```
cp = pd.read_csv('CarPrice_Assignment.csv')
```
### Understanding the data
```
cp.head()
cp.shape
cp.info()
cp.describe()
# symboling is categorical data, so we convert it to string type
cp['symboling'] = cp['symboling'].apply(str)
```
#### There are 11 categorical columns and the remaining ones are numerical
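A quick way to verify this split (a small sketch, assuming `cp` has been loaded and `symboling` converted as above):
```
# Count categorical (object) columns vs numerical columns
cat_cols = cp.select_dtypes(include='object').columns
num_cols = cp.select_dtypes(exclude='object').columns
print(len(cat_cols), 'categorical columns:', list(cat_cols))
print(len(num_cols), 'numerical columns')
```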
```
# as per the business requirements we only need the name of the car company and not the car model
# so we keep only the company name (carCompany) and drop CarName
cp['CarName'] = cp['CarName'].str.lower()
cp['carCompany'] = cp['CarName'].str.split(' ').str[0]
cp = cp.drop('CarName',axis = 1)
cp.head()
```
## Data visualization and understanding using EDA
### Price is a dependent variable
#### Visualising numerical data
```
#Finding correlation
cor = cp.corr()
cor
# visulaising correlation using heatmap
plt.subplots(figsize=(20, 20))
plt.title('Correlation between each data variable')
sns.heatmap(cor, xticklabels=cor.columns.values,
yticklabels=cor.columns.values,annot= True,linecolor="black",linewidths=2, cmap="viridis")
plt.show()
```
`citympg` and `highwaympg` have a negative correlation with price
`carlength`, `carwidth`, `curbweight`, `enginesize` and `horsepower` have a positive correlation with price
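To back these observations with numbers, the correlations with `price` can be ranked directly from the `cor` matrix computed above (a small illustrative sketch, not part of the original analysis):
```
# Rank all numerical features by their correlation with price
cor['price'].drop('price').sort_values(ascending=False)
```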
```
#scatter plot for numerical data with yaxis fixed as price
plt.figure(figsize=[18,12])
plt.subplot(3,3,1)
plt.scatter(cp.wheelbase, cp.price)
plt.title('wheelbase vs price')
plt.subplot(3,3,2)
plt.scatter(cp.peakrpm, cp.price)
plt.title('peakrpm vs price')
plt.subplot(3,3,3)
plt.scatter(cp.carheight, cp.price)
plt.title('carheight vs price')
plt.subplot(3,3,4)
plt.scatter(cp.compressionratio, cp.price)
plt.title('compressionratio vs price')
plt.subplot(3,3,5)
plt.scatter(cp.stroke, cp.price)
plt.title('Stroke vs price')
plt.subplot(3,3,6)
plt.scatter(cp.boreratio, cp.price)
plt.title('boreratio vs price')
plt.subplot(3,3,7)
plt.scatter(cp.enginesize, cp.price)
plt.title('enginesize vs price')
plt.subplot(3,3,8)
plt.scatter(cp.horsepower, cp.price)
plt.title('horsepower vs price')
plt.subplot(3,3,9)
plt.scatter(cp.curbweight, cp.price)
plt.title('curbweight vs price')
plt.show()
plt.figure(figsize=[15,8])
plt.subplot(2,2,1)
plt.scatter(cp.carlength, cp.price)
plt.title('carlength vs price')
plt.subplot(2,2,2)
plt.scatter(cp.carwidth, cp.price)
plt.title('carwidth vs price')
plt.subplot(2,2,3)
plt.scatter(cp.citympg, cp.price)
plt.title('citympg vs price')
plt.subplot(2,2,4)
plt.scatter(cp.highwaympg, cp.price)
plt.title('highwaympg vs price')
plt.show()
print(np.corrcoef(cp['carlength'], cp['carwidth'])[0, 1])
print(np.corrcoef(cp['citympg'], cp['highwaympg'])[0, 1])
```
### Deriving new feature `mielage`
```
# citympg and highwaympg both have a similar negative correlation with price, so we derive a new metric (mielage)
# from these two features, which represents the relation of both features with price as one
cp['mielage'] = (0.70*cp['highwaympg'])+(0.30*cp['citympg'])
cp['mielage'].corr(cp['price'])
```
#### Visualising categorical data
```
plt.figure(figsize=(20, 15))
plt.subplot(3,3,1)
sns.countplot(cp.fueltype)
plt.subplot(3,3,2)
sns.countplot(cp.aspiration)
plt.subplot(3,3,3)
sns.countplot(cp.doornumber)
plt.subplot(3,3,4)
sns.countplot(cp.drivewheel)
plt.subplot(3,3,5)
sns.countplot(cp.carbody)
plt.subplot(3,3,6)
sns.countplot(cp.enginelocation)
plt.subplot(3,3,7)
sns.countplot(cp.enginetype)
plt.subplot(3,3,8)
sns.countplot(cp.cylindernumber)
plt.subplot(3,3,9)
sns.countplot(cp.symboling)
plt.show()
```
`ohc` is the most preferred engine type
most cars have `four cylinders`
`sedan` and `hatchback` are the most common car bodies
most cars prefer the `gas` fuel type
```
plt.figure(figsize=(30,25))
plt.subplot(2,1,1)
sns.countplot(cp.fuelsystem)
plt.subplot(2,1,2)
sns.countplot(cp.carCompany)
plt.show()
```
`mpfi` and `2bbl` are the most common fuel systems
`Toyota` is the most favoured car company
##### We can observe that numerous car company names are misspelled
```
# replacing misspelled car company names with the correct spellings
cp['carCompany'] = cp['carCompany'].str.replace('vok','volk')
cp['carCompany'] = cp['carCompany'].str.replace('ou','o')
cp['carCompany'] = cp['carCompany'].str.replace('cshce','sche')
cp['carCompany'] = cp['carCompany'].str.replace('vw','volkswagen')
cp['carCompany'] = cp['carCompany'].str.replace('xd','zd')
# visualising categorical data vs price
plt.figure(figsize = (25,15))
plt.subplot(3,3,1)
sns.boxplot(x = 'fueltype',y='price', data = cp)
plt.subplot(3,3,2)
sns.boxplot(x = 'symboling',y='price', data = cp)
plt.subplot(3,3,3)
sns.boxplot(x = 'aspiration',y='price', data = cp)
plt.subplot(3,3,4)
sns.boxplot(x = 'doornumber',y='price', data = cp)
plt.subplot(3,3,5)
sns.boxplot(x = 'carbody',y='price', data = cp)
plt.subplot(3,3,6)
sns.boxplot(x = 'drivewheel',y='price', data = cp)
plt.subplot(3,3,7)
sns.boxplot(x = 'enginelocation',y='price', data = cp)
plt.subplot(3,3,8)
sns.boxplot(x = 'enginetype',y='price', data = cp)
plt.subplot(3,3,9)
sns.boxplot(x = 'cylindernumber',y='price', data = cp)
plt.show()
```
`ohcv` is the most expensive of the engine types
`doornumber` doesn't have much impact on the price
`hardtop` and `convertible` are the most expensive among the car bodies
cars that are `rwd` have a higher price
```
plt.figure(figsize=(30,25))
plt.subplot(2,1,1)
sns.boxplot(x = 'fuelsystem',y='price', data = cp)
plt.subplot(2,1,2)
sns.boxplot(x = 'carCompany',y='price', data = cp)
plt.show()
```
`buick`, `jaguar`, `porsche` and `bmw` are the most expensive car companies
`mpfi` and `idi` have the highest price range.
### Encoding categorical data
```
# defining a function for binary encoding of features with only 2 categories
def number_map(x):
return x.map({'gas':1,'diesel':0,'std':0,'turbo':1,'two':0,'four':1,'front':0,'rear':1})
cp[['aspiration']] =cp[['aspiration']].apply(number_map)
cp[['doornumber']] =cp[['doornumber']].apply(number_map)
cp[['fueltype']] =cp[['fueltype']].apply(number_map)
cp[['enginelocation']] =cp[['enginelocation']].apply(number_map)
# creating dummies for categorical data by defining a function
def dummies(x,df):
sp = pd.get_dummies(df[x], drop_first = True)
df = pd.concat([df, sp], axis = 1)
return df
# applying the dummies function to features
cp = dummies('symboling',cp)
cp = dummies('carbody',cp)
cp = dummies('drivewheel',cp)
cp = dummies('enginetype',cp)
cp = dummies('cylindernumber',cp)
cp = dummies('fuelsystem',cp)
cp = dummies('carCompany',cp)
# dropping the original columns after creating dummies
# and dropping columns that are not required for the analysis
cp = cp.drop(['carCompany','car_ID','symboling','carbody','drivewheel','enginetype','cylindernumber','fuelsystem'],axis = 1)
cp.head()
cp.info()
```
### Splitting data into test and train datasets
```
# Splitting data into test and train sets
df_train,df_test = train_test_split(cp ,train_size = 0.7, random_state = 100)
print(df_train.shape)
print(df_test.shape)
```
#### scaling the train dataset
```
scaler = MinMaxScaler()
#applying MinMax scaling to all the numerical data
num_var = ['boreratio','stroke','mielage','carlength','carwidth','wheelbase','curbweight','carheight','enginesize','price','peakrpm','horsepower','compressionratio']
df_train[num_var] = scaler.fit_transform(df_train[num_var])
df_train.head()
cp.columns
```
#### Dividing into X and y for model building
```
y_train = df_train.pop('price')
X_train = df_train
```
### Building model
#### We use RFE (recursive feature elimination) for building our model because we have lots of features;
#### it is not feasible to build a model by adding them one by one
## RFE
```
# Running RFE with the output number of the variable equal to 14
lm = LinearRegression()
lm.fit(X_train, y_train)
rfe = RFE(lm, n_features_to_select=14) # running RFE (keyword argument required by newer scikit-learn versions)
rfe = rfe.fit(X_train, y_train)
#ranking of all the features
list(zip(X_train.columns,rfe.support_,rfe.ranking_))
#14 selected features
col = X_train.columns[rfe.support_]
col
#rejected features
X_train.columns[~rfe.support_]
```
### Building model using statsmodel, for the detailed statistics
```
# Creating X_train dataframe with the RFE selected variables
X_train_rfe = X_train[col]
#Adding the constant
X_train_lr = sm.add_constant(X_train_rfe)
lr_1 = sm.OLS(y_train,X_train_lr).fit() # Running the linear model
print(lr_1.summary())
# defining the function for calculating VIF
def get_vif(X):
vif = pd.DataFrame()
vif['Features'] = X.columns
vif['VIF'] = [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]
vif['VIF'] = round(vif['VIF'], 2)
vif = vif.sort_values(by = "VIF", ascending = False)
return (vif)
get_vif(X_train_lr)
```
##### threshold for p-value = 0.030
```
# feature rotor has the highest VIF so we drop this feature as it is highly multicollinear
X_train_new = X_train_rfe.drop(['rotor'], axis=1)
```
Rebuilding the model without `rotor`
```
X_train_lm = sm.add_constant(X_train_new)
lr_2 = sm.OLS(y_train,X_train_lm).fit() # Running the linear model
print(lr_2.summary())
get_vif(X_train_lm)
#feature enginesize has the highest vif value so we drop this feature
X_train_new = X_train_new.drop(['enginesize'], axis=1)
```
Rebuilding the model without `enginesize`
```
X_train_lm = sm.add_constant(X_train_new)
lr_3 = sm.OLS(y_train,X_train_lm).fit() # Running the linear model
print(lr_3.summary())
get_vif(X_train_lm)
#feature two has very high p-value thus it is highly insignificant
X_train_new = X_train_new.drop(['two'], axis=1)
```
Rebuilding the model without `two`
```
X_train_lm = sm.add_constant(X_train_new)
lr_4 = sm.OLS(y_train,X_train_lm).fit() # Running the linear model
print(lr_4.summary())
#feature stroke has very high p-value thus it is highly insignificant
X_train_new = X_train_new.drop(['stroke'], axis=1)
```
Rebuilding the model without `stroke`
```
X_train_lm = sm.add_constant(X_train_new)
lr_5 = sm.OLS(y_train,X_train_lm).fit() # Running the linear model
print(lr_5.summary())
#feature five has very high p-value thus it is highly insignificant
X_train_new = X_train_new.drop(['five'], axis=1)
```
Rebuilding the model without `five`
```
X_train_lm = sm.add_constant(X_train_new)
lr_6 = sm.OLS(y_train,X_train_lm).fit() # Running the linear model
print(lr_6.summary())
get_vif(X_train_lm)
#feature boreratio has very high p-value thus it is highly insignificant
X_train_new = X_train_new.drop(['boreratio'], axis=1)
```
Rebuilding the model without `boreratio`
```
X_train_lm = sm.add_constant(X_train_new)
lr_7 = sm.OLS(y_train,X_train_lm).fit() # Running the linear model
print(lr_7.summary())
get_vif(X_train_lm)
#feature three has very high p-value thus it is highly insignificant
X_train_new = X_train_new.drop(['three'], axis=1)
```
Rebuilding the model without `three`
```
X_train_lm = sm.add_constant(X_train_new)
lr_8 = sm.OLS(y_train,X_train_lm).fit() # Running the linear model
print(lr_8.summary())
# Calculate the VIFs for the new model
get_vif(X_train_lm)
# feature curbweight has a high VIF, thus it is multicollinear
X_train_new = X_train_new.drop(['curbweight'], axis=1)
```
Rebuilding the model without `curbweight`
```
X_train_lm = sm.add_constant(X_train_new)
lr_9 = sm.OLS(y_train,X_train_lm).fit() # Running the linear model
print(lr_9.summary())
# Calculate the VIFs for the new model
get_vif(X_train_lm)
#feature porsche has high p-value thus it is insignificant
X_train_new = X_train_new.drop(['porsche'], axis=1)
```
Rebuilding the model without `porsche`
```
X_train_lm = sm.add_constant(X_train_new)
lr_10 = sm.OLS(y_train,X_train_lm).fit() # Running the linear model
print(lr_10.summary())
get_vif(X_train_lm)
#checking model after removing twelve as it has very few datapoints
X_train_new = X_train_new.drop(['twelve'], axis=1)
```
Rebuilding the model without `twelve`
```
X_train_lm = sm.add_constant(X_train_new)
lr_11 = sm.OLS(y_train,X_train_lm).fit() # Running the linear model
print(lr_11.summary())
# Calculate the VIFs for the new model
get_vif(X_train_lm)
```
## Residual Analysis of the train data
to check if the error terms are also normally distributed
```
y_train_price = lr_11.predict(X_train_lm)
# Plot the histogram of the error terms
fig = plt.figure()
sns.distplot((y_train - y_train_price), bins = 20)
fig.suptitle('Error Terms', fontsize = 20) # Plot heading
plt.xlabel('Errors', fontsize = 18) # X-label
plt.show()
```
#### The above graph shows that our assumption that the error terms are normally distributed is satisfied
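As an additional, optional normality check, a Q-Q plot of the same residuals can be drawn with statsmodels; this is a sketch added for illustration, using the objects from the cell above:
```
# Q-Q plot of the training residuals against a fitted normal distribution
residuals = y_train - y_train_price
sm.qqplot(residuals, line='45', fit=True)
plt.title('Q-Q plot of the training residuals')
plt.show()
```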
## Making Predictions
#### Scaling the test sets
```
#applying MinMax scaling to all the numerical data
num_vars = ['boreratio','stroke','mielage','carlength','carwidth','wheelbase','curbweight','carheight','enginesize','price','peakrpm','horsepower','compressionratio']
df_test[num_vars] = scaler.transform(df_test[num_vars])
```
#### Dividing into X_test and y_test
```
y_test = df_test.pop('price')
X_test = df_test
# checking model to make predictions.
# Creating X_test_new dataframe by dropping variables from X_test
X_test_new = X_test[X_train_new.columns]
# Adding a constant variable
X_test_new = sm.add_constant(X_test_new)
# Making predictions
y_pred = lr_11.predict(X_test_new)
```
## Model Evaluation
```
# Plotting y_test and y_pred to understand the spread.
fig = plt.figure()
plt.scatter(y_test,y_pred)
fig.suptitle('y_test vs y_pred', fontsize=20) # Plot heading
plt.xlabel('y_test', fontsize=18) # X-label
plt.ylabel('y_pred', fontsize=16) # Y-label
#calculationg the final r2 score for test data
from sklearn.metrics import r2_score
r2_score(y_test, y_pred)
```
The test data has an `r2 score` of `0.80`, i.e. the model explains `80%` of the variance.
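The adjusted R-squared, which penalizes for the number of predictors, can also be checked; this is a small sketch using `y_test`, `y_pred` and `X_test_new` from the cells above and is not part of the original analysis:
```
# Adjusted R^2 on the test set
n = y_test.shape[0]              # number of test observations
k = X_test_new.shape[1] - 1      # number of predictors (excluding the constant)
r2 = r2_score(y_test, y_pred)
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)
print('R2: %.3f, adjusted R2: %.3f' % (r2, adj_r2))
```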
```
# The final features for building model are -
# Features coefficients p-value vif
# enginelocation 0.5619 0.000 1.04
# carwidth 0.7678 0.000 1.45
# four -0.1170 0.000 1.59
# bmw 0.2762 0.000 1.09
```
1. The R-squared and Adjusted R-squared are 0.836 and 0.831, i.e. 83% of the variance can be explained.
2. The F-statistic and Prob(F-statistic) are 175.7 and 4.17e-53 (approx. 0.0), i.e. the 83% of variance explained is not by chance.
3. The p-values for all the coefficients are less than the significance level of 0.03, i.e. all the predictors are significant.
### Model lr_11 satisfies all the requirements thus its the best fit model
The equation of our best fitted line is:
$ price = 0.561 \times enginelocation + 0.767 \times carwidth + 0.272 \times carCompany bmw - 0.117 \times cylindernumber four $
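The coefficients in this equation can be read directly off the fitted statsmodels results object; a quick sketch to reproduce them (assuming `lr_11` from the cells above):
```
# Print the fitted intercept and coefficients of the final model
print(lr_11.params.round(3))
```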
<img src="https://rhyme.com/assets/img/logo-dark.png" align="center"> <h2 align="center">Logistic Regression: A Sentiment Analysis Case Study</h2>
### Introduction
___
- IMDB movie reviews dataset
- http://ai.stanford.edu/~amaas/data/sentiment
- Contains 25000 positive and 25000 negative reviews
<img src="https://i.imgur.com/lQNnqgi.png" align="center">
- Contains at most 30 reviews per movie
- At least 7 stars out of 10 $\rightarrow$ positive (label = 1)
- At most 4 stars out of 10 $\rightarrow$ negative (label = 0)
- 50/50 train/test split
- Evaluation metric: accuracy
<b>Features: bag of 1-grams with TF-IDF values</b>:
- Extremely sparse feature matrix - close to 97% are zeros
<b>Model: Logistic regression</b>
- $p(y = 1|x) = \sigma(w^{T}x)$
- Linear classification model
- Can handle sparse data
- Fast to train
- Weights can be interpreted
<img src="https://i.imgur.com/VieM41f.png" align="center" width=500 height=500>
### Task 1: Loading the dataset
---
```
import pandas as pd
df = pd.read_csv('./movie_data.csv')
df.head(5)
```
<h2 align="center">Bag of words / Bag of N-grams model</h2>
### Task 2: Transforming documents into feature vectors
___
Note: If you are starting the notebook from this task, you can run cells from all previous tasks in the kernel by going to the top menu and Kernel > Restart and Run All
___
Below, we will call the fit_transform method on CountVectorizer. This will construct the vocabulary of the bag-of-words model and transform the following three sentences into sparse feature vectors:
1. The sun is shining
2. The weather is sweet
3. The sun is shining, the weather is sweet, and one and one is two
```
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer()
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining, the weather is sweet, and one and one is two'])
bag = count.fit_transform(docs)
print(count.vocabulary_)
print(bag.toarray())
```
Raw term frequencies: *tf (t,d)*—the number of times a term t occurs in a document *d*
### Task 3: Word relevancy using term frequency-inverse document frequency
___
Note: If you are starting the notebook from this task, you can run cells from all previous tasks in the kernel by going to the top menu and Kernel > Restart and Run All
___
$$\text{tf-idf}(t,d)=\text{tf (t,d)}\times \text{idf}(t,d)$$
$$\text{idf}(t,d) = \text{log}\frac{n_d}{1+\text{df}(d, t)},$$
where $n_d$ is the total number of documents, and df(d, t) is the number of documents d that contain the term t.
```
np.set_printoptions(precision=2)
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(use_idf=True, norm='l2', smooth_idf=True)
print(tfidf.fit_transform(count.fit_transform(docs)).toarray())
```
The equations for the idf and tf-idf that are implemented in scikit-learn are:
$$\text{idf} (t,d) = log\frac{1 + n_d}{1 + \text{df}(d, t)}$$
The tf-idf equation that is implemented in scikit-learn is as follows:
$$\text{tf-idf}(t,d) = \text{tf}(t,d) \times (\text{idf}(t,d)+1)$$
$$v_{\text{norm}} = \frac{v}{||v||_2} = \frac{v}{\sqrt{v_{1}^{2} + v_{2}^{2} + \dots + v_{n}^{2}}} = \frac{v}{\big (\sum_{i=1}^{n} v_{i}^{2}\big)^\frac{1}{2}}$$
### Example:
$$\text{idf}("is", d3) = \log \frac{1+3}{1+3} = 0$$
Now in order to calculate the tf-idf, we simply need to add 1 to the inverse document frequency and multiply it by the term frequency:
$$\text{tf-idf}("is",d3)= 3 \times (0+1) = 3$$
### Task 4: Calculate tf-idf of the term *is*:
___
Note: If you are starting the notebook from this task, you can run cells from all previous tasks in the kernel by going to the top menu and Kernel > Restart and Run All
___
```
tf_is = 3
n_docs = 3
idf_is = np.log((n_docs+1) / (3+1))
tfidf_is = tf_is * (idf_is + 1)
print('tf-idf of term "is" = %.2f' % tfidf_is)
```
$$\text{tf-idf}_{norm} = \frac{[3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0, 1.69, 1.29]}{\sqrt{3.39^2 + 3.0^2 + 3.39^2 + 1.29^2 + 1.29^2 + 1.29^2 + 2.0^2 + 1.69^2 + 1.29^2}}$$

$$= [0.5, 0.45, 0.5, 0.19, 0.19, 0.19, 0.3, 0.25, 0.19]$$

$$\Rightarrow \text{tf-idf}_{norm}("is", d3) = 0.45$$
```
tfidf = TfidfTransformer(use_idf=True, norm=None, smooth_idf=True)
raw_tfidf = tfidf.fit_transform(count.fit_transform(docs)).toarray()[-1]
raw_tfidf
l2_tfidf = raw_tfidf / np.sqrt(np.sum(raw_tfidf**2))
l2_tfidf
```
### Task 5: Data Preparation
___
Note: If you are starting the notebook from this task, you can run cells from all previous tasks in the kernel by going to the top menu and Kernel > Restart and Run All
___
```
df.loc[0, 'review'][-50:]
import re
def preprocessor(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text)
text = re.sub('[\W]+', ' ', text.lower()) +\
' '.join(emoticons).replace('-', '')
return text
preprocessor(df.loc[0, 'review'][-50:])
preprocessor("</a>This :) is :( a test :-)!")
df['review'] = df['review'].apply(preprocessor)
```
### Task 6: Tokenization of documents
___
Note: If you are starting the notebook from this task, you can run cells from all previous tasks in the kernel by going to the top menu and Kernel > Restart and Run All
___
```
from nltk.stem.porter import PorterStemmer
porter = PorterStemmer()
def tokenizer(text):
return text.split()
def tokenizer_porter(text):
return [porter.stem(word) for word in text.split()]
tokenizer('runners like running and thus they run')
tokenizer_porter('runners like running and thus they run')
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop = stopwords.words('english')
[w for w in tokenizer_porter('a runner likes running and runs a lot')[-10:]
if w not in stop]
```
### Task 7: Document classification via a logistic regression model
___
Note: If you are starting the notebook from this task, you can run cells from all previous tasks in the kernel by going to the top menu and Kernel > Restart and Run All
___
```
'''param_grid = [{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'clf__penalty': ['l2'],
'clf__C': [1.0, 10.0, 100.0]},
{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'vect__use_idf':[False],
'vect__norm':[None],
'clf__penalty': ['l2'],
'clf__C': [1.0, 10.0, 100.0]},
]'''
```
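The full grid search above is left commented out because it is expensive to run. A minimal sketch of the pipeline it assumes (a `TfidfVectorizer` step named `vect` and a `LogisticRegression` step named `clf`) is shown below; the 50/50 split, the label column name `sentiment` and the reduced grid are illustrative assumptions, not part of the original notebook:
```
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

X = df['review'].values
y = df['sentiment'].values   # assumed label column: 1 = positive, 0 = negative
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=1)

# Pipeline with the step names ('vect', 'clf') that param_grid above refers to
lr_tfidf = Pipeline([
    ('vect', TfidfVectorizer(strip_accents=None, lowercase=False, preprocessor=None)),
    ('clf', LogisticRegression(random_state=0, solver='liblinear'))])

# A deliberately tiny grid to keep the example cheap; swap in param_grid above for the full search
gs_lr_tfidf = GridSearchCV(lr_tfidf, [{'clf__C': [1.0, 10.0]}],
                           scoring='accuracy', cv=5, verbose=1, n_jobs=-1)
gs_lr_tfidf.fit(X_train, y_train)
print('Best parameters:', gs_lr_tfidf.best_params_)
print('CV accuracy: %.3f' % gs_lr_tfidf.best_score_)
```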
### Task 8: Load saved model from disk
___
Note: If you are starting the notebook from this task, you can run cells from all previous tasks in the kernel by going to the top menu and Kernel > Restart and Run All
___
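No serialized model ships with this notebook. Assuming the fitted estimator from Task 7 was pickled to disk, loading it back could look like the sketch below; the file name `saved_model.pkl` is a placeholder:
```
import pickle

# Save the best estimator found in Task 7 (run once after fitting)
# with open('saved_model.pkl', 'wb') as f:
#     pickle.dump(gs_lr_tfidf.best_estimator_, f)

# Load the saved model back from disk
with open('saved_model.pkl', 'rb') as f:
    saved_clf = pickle.load(f)
```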
### Task 9: Model accuracy
___
Note: If you are starting the notebook from this task, you can run cells from all previous tasks in the kernel by going to the top menu and Kernel > Restart and Run All
___
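A sketch of the final accuracy check, assuming `saved_clf` from Task 8 and the held-out `X_test`, `y_test` split from Task 7:
```
# Accuracy of the loaded model on the held-out test reviews
print('Test accuracy: %.3f' % saved_clf.score(X_test, y_test))
```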
## Python Closures and Generators
## Closures - binding variables from outer function in the inner function
## Technically - the function gets stored with its environment (bound variables)
### You can also think of it as preserving certain state
```
# remember this function?
def add_factory(x):
def add(y):
return y + x
return add # upon return free variable x gets bound in the add function
add5 = add_factory(5)
# 5 is bound inside add5 now,
add5(10)
type(add5.__closure__)
[x for x in add5.__closure__]
len(add5.__closure__)
# int(add5.__closure__[0])  # raises TypeError: a cell object cannot be converted to int; use .cell_contents instead
dir(add5.__closure__[0])
add5.__closure__[0].cell_contents
## Voila!! We get what we expected to get!
## Remember __closure__ is a tuple so we do not get to mutate this!
# So how about storing more values?
def add2_fact(x, y):
return lambda z: z+x+y
a10n20 = add2_fact(10,20)
a10n20(40)
len(a10n20.__closure__)
[x.cell_contents for x in a10n20.__closure__]
# One last closure example:
def outer(x):
a = 20
def inner(y):
print(f'x:{x}')
print(f'a:{a}')
print(f'y:{y}')
        ## x += 15 # We can't rebind the argument x that was bound from the outer function (without nonlocal)
        ## a += 15 # We can't rebind the variable a that was bound in the outer function (without nonlocal)
return a+x+y
return inner
axy = outer(10)
axy(5)
axy(5)
[x.cell_contents for x in axy.__closure__]
```
## What if we want rebind(assign new values) to variables coming from outer scope?
### In languages like Javascript you can do it, so Python should be able to, right?
### Solution: Python3 nonlocal modifier inside inner function
```
# https://docs.python.org/3/reference/simple_stmts.html#the-nonlocal-statement
# 7.13. The nonlocal statement
# The nonlocal statement causes the listed identifiers to refer to previously bound variables in the nearest enclosing scope excluding globals.
# This is important because the default behavior for binding is to search the local namespace first. The statement allows encapsulated code to rebind variables outside of the local scope besides the global (module) scope.
# Names listed in a nonlocal statement, unlike those listed in a global statement, must refer to pre-existing bindings in an enclosing scope (the scope in which a new binding should be created cannot be determined unambiguously).
# Names listed in a nonlocal statement must not collide with pre-existing bindings in the local scope.
def makeCounter():
count = 5
def f():
nonlocal count
count +=1
def h():
nonlocal count
count +=2
return count
return h()
return f
a = makeCounter()
a()
dir(a)
a()
print(a(),a(),a())
[a() for x in range(10)]
dir(a)
def makeAdjCounter(x):
count = 0
def f():
nonlocal count # without nonlocal we could reference count but couldn't modify it!
count += x
return count
return f
b = makeAdjCounter(2)
c = makeAdjCounter(3)
print(b(),b(),b(), c(), c(), c())
print(c(),c(),c(),c())
[x.cell_contents for x in c.__closure__]
# Result count is hidden from us, but by calling function we can modify its value.
## Another older way was to create some structure (list, class, dictionary) inside the outer function whose members could be modified by the inner function
def makeAdjList():
holder=[1,0,0,3] # old method not recommended anymore!
def f():
holder[0] +=1
print(holder)
return holder[0]
return f
d = makeAdjList()
print(d(),d(),d())
```
### The most Pythonic answer is to use generators for persisting some sort of iterable state
## What the heck is a Generator ?
### A Python generator is a function which returns a generator iterator (just an object we can iterate over) by calling yield
* KEY POINT: generator functions use **yield** instead of **return**
* in Python 3 we use next(generatorName) to obtain next value
```
def makeNextGen(current):
while True: ##This means our generator will never run out of values...
current += 1
yield current
numGen = makeNextGen(30)
mybyte = b'\x31\x32\x13\x07'
print(mybyte.decode('ascii'))
len(mybyte)
int.from_bytes(mybyte, byteorder='little')
int.from_bytes(mybyte, byteorder='big')
type(mybyte)
len(mybyte)
print(mybyte)
type(makeNextGen)
dir(makeNextGen)
type(range)
for i in range(20):
print(i)
for i in range(15):
print(next(numGen)) # This is for Python 3.x , in Python 2.x it was numGen.next()
## DO not do this!!
#for el in numGen:
# print(el)
## We can do even better and make an adjustable increment
def makeNextGenInc(current, inc):
while True:
current += inc
yield current
numGen = makeNextGenInc(20,5)
next(numGen)
def smallYield():
yield 1
yield 2
yield 99
yield 5
smallGen = smallYield()
next(smallGen)
# list(numGen)  # careful: numGen is an infinite generator, list() on it would never terminate
list(smallGen)
numGen10 = makeNextGenInc(200, 10)
[next(numGen10) for x in range(15)]
## Now the above is the Pythonic approach to the problem!
### Then there is the generator expression
## The whole point is to have lazy evaluation (i.e. no need to have everything in memory at once)
gen = (i+10 for i in range(10))
for g in gen:
print(g)
list(gen)
## list(i+10 for i in range(10)) == [i+10 for i in range(10)]
type(gen)
list(gen)
gen = (i+10 for i in range(10))
for g in gen:
print(g)
for g in gen:
print(g)
# You see what is going on?!
gen_exp = (x ** 2 for x in range(10) if x % 2 == 0)
type(gen_exp)
for x in gen_exp:
print(x)
glist = list(gen)
glist
gen = (i+10 for i in range(10))
[next(gen) for x in range(5)]
yes_expr = ('yes' for _ in range(10))
def my_yes_gen():
for _ in range(10):
yield('yes')
#infinite generator
def my_yes_gen():
while True:
yield('yes')
myg = my_yes_gen()
# list(myg)  # careful: this version of my_yes_gen is infinite, so list() would never terminate
list(yes_expr)
## Challenge how to make an infinite generator with a generator expression?
import itertools
genX = (i*5 for i in itertools.count(start=0, step=1))
[next(genX) for x in range(10)]
[next(genX) for x in range(35)]
import random
gendice = (random.randrange(1,7) for _ in itertools.count(start=0, step=1))
[next(gendice) for x in range(20)]
genY = (i*10+random.randrange(10) for i in itertools.count(start=0, step=1))
[next(genY) for x in range(10)]
## Be very careful with infinite generators, calling list on infinite generator not recommended!
## Of course we should generally have a pretty good idea of maximum number of generations needed
```
### Difference between Python's Generators and Iterators
* iterator is the more general concept; it covers all generators
From the official docs: Python’s generators provide a convenient way to implement the iterator protocol. If a container object’s `__iter__()` method is implemented as a generator, it will automatically return an iterator object (technically, a generator object) supplying the `__iter__()` and `next()` methods.
## A Generator is an Iterator
### Specifically, generator is a subtype of iterator.
Conceptually:
Iterators are about various ways to loop over data, generators generate the data on the fly
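A quick check of this relationship using the standard abstract base classes (a small illustrative sketch):
```
from collections.abc import Iterator, Generator

gen = (x * x for x in range(3))   # a generator expression -> generator object
lst_iter = iter([1, 2, 3])        # a plain list iterator

print(isinstance(gen, Iterator))       # True  - every generator is an iterator
print(isinstance(gen, Generator))      # True
print(isinstance(lst_iter, Iterator))  # True
print(isinstance(lst_iter, Generator)) # False - not every iterator is a generator
```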
```
import itertools
dir(itertools)
help(itertools.product)
list(itertools.product(range(10),list('ABCDE')))
```
## Homework
### Write a generator to yield cubes (forever!)
### Write a generator to yield Fibonacci numbers(first 1000)
* Generator Functions ok to use here
```
def fib():
a, b = 0, 1
while True:
a, b = b, a+b
yield b
def fib1000():
a, b = 0, 1
for x in range(1000):
a, b = b, a+b
yield b
f1k = fib1000()
[next(f1k) for _ in range(10)]
f = fib()
[next(f) for _ in range(10)]
def cubes(current):
while True:
#print(current**3)
current+=1
cube = current**3
yield cube
g3 = cubes(1)
next(g3)
cubesforever = (x**3 for x in itertools.count(start=0, step=1))
c30 = [next(cubesforever) for _ in range(30)]
c30
# Hint use yield
## Extra Credit! write generator expression for first 500 cubes that are made from even numbers
g500 = (x**3 for x in range(1,501) if x % 2 == 0)
a10 = [next(g500) for x in range(10)]
a10
a = list(g500)
a[:10]
# next(g500)             # raises StopIteration: list(g500) above already exhausted the generator
# f10 = list(g500[:10])  # raises TypeError: generators are not subscriptable; use itertools.islice(g500, 10)
```
## Import modules
```
import tensorflow as tf
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
from tensorflow.keras import layers
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import os
import json
from tqdm import tqdm
```
## Visualization helper function
```
def plot_graphs(history, string):
plt.plot(history.history[string])
plt.plot(history.history['val_'+string], '')
plt.xlabel("Epochs")
plt.ylabel(string)
plt.legend([string, 'val_'+string])
plt.show()
```
## Define training data paths
```
DATA_IN_PATH = './data_in/'
DATA_OUT_PATH = './data_out/'
INPUT_TRAIN_DATA = 'nsmc_train_input.npy'
LABEL_TRAIN_DATA = 'nsmc_train_label.npy'
DATA_CONFIGS = 'data_configs.json'
```
## Fix the random seed
```
SEED_NUM = 1234
tf.random.set_seed(SEED_NUM)
```
## Load the files
```
train_input = np.load(open(DATA_IN_PATH + INPUT_TRAIN_DATA, 'rb'))
train_label = np.load(open(DATA_IN_PATH + LABEL_TRAIN_DATA, 'rb'))
prepro_configs = json.load(open(DATA_IN_PATH + DATA_CONFIGS, 'r'))
```
## Define model hyperparameters
```
model_name = 'cnn_classifier_kr'
BATCH_SIZE = 512
NUM_EPOCHS = 10
VALID_SPLIT = 0.1
MAX_LEN = train_input.shape[1]
kargs = {'model_name': model_name,
'vocab_size': prepro_configs['vocab_size'],
'embedding_size': 128,
'num_filters': 100,
'dropout_rate': 0.5,
'hidden_dimension': 250,
'output_dimension':1}
```
## Define and compile the model
```
class CNNClassifier(tf.keras.Model):
def __init__(self, **kargs):
super(CNNClassifier, self).__init__(name=kargs['model_name'])
self.embedding = layers.Embedding(input_dim=kargs['vocab_size'],
output_dim=kargs['embedding_size'])
self.conv_list = [layers.Conv1D(filters=kargs['num_filters'],
kernel_size=kernel_size,
padding='valid',
activation=tf.keras.activations.relu,
kernel_constraint=tf.keras.constraints.MaxNorm(max_value=3.))
for kernel_size in [3,4,5]]
self.pooling = layers.GlobalMaxPooling1D()
self.dropout = layers.Dropout(kargs['dropout_rate'])
self.fc1 = layers.Dense(units=kargs['hidden_dimension'],
activation=tf.keras.activations.relu,
kernel_constraint=tf.keras.constraints.MaxNorm(max_value=3.))
self.fc2 = layers.Dense(units=kargs['output_dimension'],
activation=tf.keras.activations.sigmoid,
kernel_constraint=tf.keras.constraints.MaxNorm(max_value=3.))
def call(self, x):
x = self.embedding(x)
x = self.dropout(x)
x = tf.concat([self.pooling(conv(x)) for conv in self.conv_list], axis=-1)
x = self.fc1(x)
x = self.fc2(x)
return x
model = CNNClassifier(**kargs)
model.compile(optimizer=tf.keras.optimizers.Adam(),
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.BinaryAccuracy(name='accuracy')])
```
## Declare callbacks
```
# add early stopping to prevent overfitting
earlystop_callback = EarlyStopping(monitor='val_accuracy', min_delta=0.0001,patience=2)
# min_delta: the threshold that triggers the termination (acc should at least improve 0.0001)
# patience: number of epochs with no improvement before training stops (patience = 1 stops after 1 epoch without improvement)
checkpoint_path = DATA_OUT_PATH + model_name + '/weights.h5'
checkpoint_dir = os.path.dirname(checkpoint_path)
# Create the checkpoint directory if it does not exist
if os.path.exists(checkpoint_dir):
print("{} -- Folder already exists \n".format(checkpoint_dir))
else:
os.makedirs(checkpoint_dir, exist_ok=True)
print("{} -- Folder create complete \n".format(checkpoint_dir))
cp_callback = ModelCheckpoint(
checkpoint_path, monitor='val_accuracy', verbose=1, save_best_only=True, save_weights_only=True)
```
## Train the model
```
history = model.fit(train_input, train_label, batch_size=BATCH_SIZE, epochs=NUM_EPOCHS,
validation_split=VALID_SPLIT, callbacks=[earlystop_callback, cp_callback])
```
## Plot the results
```
plot_graphs(history, 'loss')
plot_graphs(history, 'accuracy')
```
## Evaluate the results
```
DATA_OUT_PATH = './data_out/'
INPUT_TEST_DATA = 'nsmc_test_input.npy'
LABEL_TEST_DATA = 'nsmc_test_label.npy'
SAVE_FILE_NM = 'weights.h5' # name of the saved best model weights file
test_input = np.load(open(DATA_IN_PATH + INPUT_TEST_DATA, 'rb'))
test_input = pad_sequences(test_input, maxlen=test_input.shape[1])
test_label_data = np.load(open(DATA_IN_PATH + LABEL_TEST_DATA, 'rb'))
model.load_weights(os.path.join(DATA_OUT_PATH, model_name, SAVE_FILE_NM))
model.evaluate(test_input, test_label_data)
```
```
##====================================================
## このセルを最初に実行せよ---Run this cell initially.
##====================================================
import sys
if 'google.colab' in sys.modules:
!wget -P ./text https://www.eidos.ic.i.u-tokyo.ac.jp/~sato/assignments/project2/text/test_data.csv
!wget -P ./text https://www.eidos.ic.i.u-tokyo.ac.jp/~sato/assignments/project2/text/wiki_dataset.csv
```
# ミニプロジェクト(発展課題) / Miniproject (Advanced exercises)
## Project2. 自分のアイディアで手法を改良しよう(発展課題)
基礎課題では、ウィキペディアの6カテゴリの記事からなるデータセット $D$ を学習データとして、カテゴリが未知の記事6本を分類しました。
これら6本では正しく分類できたと思いますが、いろいろな記事で試してみると、正しく分類できない記事もあることがわかります。
そこで発展課題では、基礎課題で実装した手法をベースライン(基準)とし、それよりも高い精度で分類できるよう、手法を改良してください。
**皆さん自身で考えたアイディアを実装**し、**ベースラインの手法と皆さんの提案手法とで精度を比較評価した結果を報告**してください。
適宜、図や表を使って構いません。
また、Markdownセルを利用し、**なぜ提案手法がうまくいくのか(あるいはうまく行くと考えたのか)を分かりやすく説明し、分類に失敗した記事がある場合は、失敗した理由を議論**して下さい。
なお、精度 (Accuracy) は、未知の記事の総数を$N$、正しく分類できたものの数を$TP$とすると、以下の式で評価するものとします。
$$\mbox{Accuracy} = \frac{TP}{N}$$
### ライセンス
本教材で使用するウィキペディアのコンテンツは Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA) および GNU Free Documentation License (GFDL) の下にライセンスされています。
本データも同じくこれらのライセンスを継承します。
詳しくは[こちら](https://ja.wikipedia.org/wiki/Wikipedia:%E3%83%87%E3%83%BC%E3%82%BF%E3%83%99%E3%83%BC%E3%82%B9%E3%83%80%E3%82%A6%E3%83%B3%E3%83%AD%E3%83%BC%E3%83%89)を参照してください。

## Project2. Let's improve the baseline method with your own idea (Advanced exercises)
In the basic exercises, you impremented a code to categorize the six uncategorized articles, which were extracted from the Wikipedia data, using the dataset $D$ as the training data.
Although your code gives correct category labels to the given articles in the basic exercises, you may notice that it is not always successful but some articles are mis-categorized when you apply the method to various articles.
In this advanced exercises, consider the method implemented in the basic exercises as a baseline method and improve it so that it achieves higher accuracy than the original one.
**Please report the results of implementing your own ideas and comparing and evaluating the accuracy of the baseline method with your proposed method.**.
You can use diagrams and tables to illustrate, as appropriate.
Also, using the Markdown cell, **explain clearly why the proposed method works (or you thought it would work), and if there are articles that failed to be categorized, discuss the reasons for the failure.**
The accuracy is evaluated by the following equation;
$$\mbox{Accuracy} = \frac{TP}{N},$$
where $N$ is the total number of the uncategorized articles and $TP$ is the number of articles that are categorized correctly.
### Licenses
The Wikipedia contents used in this learning material are licensed
under Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA)
and GNU Free Documentation License (GFDL).
This dataset inherits these licenses as well.
For details, refer to [this site](https://ja.wikipedia.org/wiki/Wikipedia:%E3%83%87%E3%83%BC%E3%82%BF%E3%83%99%E3%83%BC%E3%82%B9%E3%83%80%E3%82%A6%E3%83%B3%E3%83%AD%E3%83%BC%E3%83%89).

## 準備:ウィキペディアデータセットの読み込み
以下のコードを使って、データセット $D$ を辞書 `Dw` に読み込んでください。
これは基礎課題のものと同じです。
`Dw` のキーがカテゴリ `cate` のとき、`Dw[cate]` は、カテゴリ `cate` のすべての記事の重要語リストを連結して得られるリストを与えます。 ここで、カテゴリとは、冒頭で述べた6つのカテゴリ (`animal`, `art`, `economy`, `law`, `plant`, `politics`) のいずれかです。
## Preparation: Reading the Wikipedia dataset
Execute the following code, and load the dataset $D$ into the dictionary `Dw`.
This code is excerpted from the basic exercises.
If a key of `Dw` is a category `cate`, `Dw[cate]` gives
the list obtained by concatenating the lists of important words of all the articles in the category `cate`.
Here, a category is one of the six categories:
`animal`, `art`, `economy`, `law`, `plant` and `politics`.
```
### Execute the following code to load the Wikipedia dataset:
Dw = {}
with open('text/wiki_dataset.csv', 'r', encoding='utf-8') as fi:
fi.readline()
for line in fi:
tmp = line.replace('\\n', '\n').split('\t')
if tmp[0] not in Dw:
Dw[tmp[0]] = []
Dw[tmp[0]].extend(tmp[2].split(' '))
```
各カテゴリごとに最初の10個の重要語を表示してみましょう。
Let's print the first ten important words of each category.
```
### Given code:
for cate in Dw:
print('Category:', cate)
print('Important words:', Dw[cate][:10])
```
## 準備:未知の記事集合およびその正解のカテゴリラベルの読み込み
CSVファイル `text/test_data.csv` には分類対象であるカテゴリが未知の記事60本が納められています。
ただし、本文はあらかじめデータセット`D`と同様に重要語リストに変換されています。
以下のコードを用いて、これらの記事を、各記事のタイトルをキー、その本文の重要語リストを値とする辞書 `Aw2` に読み込んでください。
また同時に、各記事のタイトルをキー、その正解のカテゴリ名を値とする辞書`Aw2_ans`に読み込んでください。
正解率は、推定したカテゴリラベルを正解のカテゴリラベルと比較することで評価します。
よって当然ながら、この正解のカテゴリラベルを、ラベルの推定のために使ってはいけません。
## Preparation: Loading a set of the uncategorized aticles and their correct category labels
The CSV file `text/test_data.csv` contains 60 target articles whose category is unknown.
The bodies are converted to a list of important words in the same way as the data set `D` in advance.
Execute the following code to load these uncategorized articles into the dictionary `Aw2` with the title of each article as a key and the list of important words in its body as the corresponding value.
At the same time, create the dictionary `Aw2_ans` with the title of each article as a key and the correct category label as the value.
The accuracy is evaluated by comparing the estimated category label with the correct category label.
Needless to say, the correct category labels must not be used to estimate the category labels.
```
### Given code:
Aw2 = {}
Aw2_ans = {}
with open('text/test_data.csv', 'r', encoding='utf-8') as fi:
fi.readline()
for line in fi:
tmp = line.replace('\\n', '').split('\t')
Aw2[tmp[1]] = tmp[2].split(' ')
Aw2_ans[tmp[1]] = tmp[0]
```
以下のコードで`Aw2`および`Aw2_ans`を1つ書き出してみましょう。
The contents of `Aw2` and `Aw2_ans` are printed as follows.
```
### Given code:
for title in Aw2:
print('title:', title)
print('Correct answer label:', Aw2_ans[title])
print('Important words:', Aw2[title])
break
```
## 皆さんのコードおよび解説
以下で皆さんのコードやその解説、結果の評価および議論を行ってください。
- この'project2.ipynb'は自動採点されません.答案検査システムもありません。教員やTAが一つずつ見て採点します。
- 解説や議論はMarkdownセルに、コードはCodeセルに記入してください。
- 提出されたipynbファイルは教員のPCで実行したうえで評価します。実行に必要な追加パッケージがあれば指定するなどして、実行できるファイルを提出してください。
- Codeセル、Markdownセルは必要に応じて増やして構いません
## Describe your code and explain it
Describe your code, explanation and discussion below.
- This notebook 'project2.ipynb' will not be automatically graded at all. No automatic checking for it is provided. The faculty members and TAs will read and execute this notebook and give a grade manually.
- Fill the explanation and discussion of your method in Markdown cells. The code should be written in Code cells.
- The submitted notebook will be executed on the faculty member's PC before grading. Please submit an executable file, specifying all additional packages required for execution if any.
- You can add Code cells and Markdown cells as needed.
# 既存手法の精度検証 / Accuracy evaluation of the baseline method
提案手法の性能評価のため、既存手法 (基礎課題で実装した手法)を用いて、 `./text/test_data.csv` をカテゴリ分類する。
To evaluate the performance of the proposed method, the baseline method (the one implemented in the basic exercises) is used here to categorize the articles in `./text/test_data.csv`.
```
def compute_word_frequency(dw):
import itertools
from collections import Counter
important_word_list = list(itertools.chain.from_iterable(dw.values()))
return dict(Counter(important_word_list))
W = compute_word_frequency(Dw)
def extract_frequent_words(word_frequency, coverage):
n_freq = 0
answer = []
for word, freq in sorted(word_frequency.items(), key=lambda x: -x[1]):
if 1.0 * n_freq / sum(word_frequency.values()) < coverage:
n_freq += freq
answer.append(word)
else:
break
return answer
F = extract_frequent_words(W, 0.5)
def words2vec(words, frequent_words):
counter = dict([[fw, 0] for fw in frequent_words])
for word in words:
if counter.get(word) is not None:
counter[word] += 1
return [counter[fw] for fw in frequent_words]
Av2 = {}
for title, words in Aw2.items():
Av2[title] = words2vec(words, F)
Dv = {}
for cate, words in Dw.items():
Dv[cate] = words2vec(words, F)
import numpy as np
def guess_category(dv, v):
def cos_sim(x, y):
return np.sum(x * y) / (np.sqrt(np.sum(x * x)) * np.sqrt(np.sum(y * y)))
cos_sim_dict = dict([[cate, cos_sim(np.array(x), np.array(v))]
for cate, x in dv.items()])
return max(cos_sim_dict, key=cos_sim_dict.get)
results = []
for title in Av2.keys():
pred = guess_category(Dv, Av2[title])
results.append(pred == Aw2_ans[title])
accuracy = 1.0 * sum(results) / len(results)
print("Accuracy: {:.2%}".format(accuracy))
print("TP={}, N={}".format(sum(results), len(results)))
```
# 提案手法の概要 / Outline of your proposed method
...
# 着想に至った経緯 / Background to the idea
...
# 処理の流れ / Processing flow
1. First step...
1. Second step...
1. Third step...
```
# 提案手法のコード / The code of your proposed method
# 注意: 適宜、コメント行として解説を書き込み、わかりやすいコードとなるように務めてください。
# Note: Write commentaries as comment lines where appropriate and try to make the code easy to understand.
...
```
# 評価 / Evaluation
...
```
# 提案手法の評価に関するコード / The code for evaluation of your method
...
```
# 議論と結論 / Discussion and conclusion
...
```
# cell used to import important library of the notebook
import numpy as np
import sys
from scipy import sparse
from scipy.spatial.distance import pdist, squareform
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import pandas as pd
import networkx as nx
from sklearn.preprocessing import StandardScaler
from utils import * # contains all helper functions used in the project
import scipy as sci
from sklearn.cluster import KMeans
import sklearn.metrics as sm
```
# I. Load, clean, study and prepare the data for graph creation
## I.1 Data cleaning & preparation
**Preparing IRS data**
```
#load the data
df_migrations = pd.read_csv("NTDS_Data/countyinflow1516.csv" )
# create the combined fips county number of destination
df_migrations['statefips_str'] = df_migrations['y2_statefips'].apply(lambda x : str(x).zfill(2))
df_migrations['countyfips_str'] = df_migrations['y2_countyfips'].apply(lambda x : str(x).zfill(3))
df_migrations['combined_fips-destination'] = df_migrations['statefips_str'].apply(lambda x: x.lstrip('0')) + df_migrations['countyfips_str']
# create the combined fips county number of source
df_migrations['statefips_str1'] = df_migrations['y1_statefips'].apply(lambda x : str(x).zfill(2))
df_migrations['countyfips_str1'] = df_migrations['y1_countyfips'].apply(lambda x : str(x).zfill(3))
df_migrations['combined_fips-source'] = df_migrations['statefips_str1'].apply(lambda x: x.lstrip('0')) + df_migrations['countyfips_str1']
# Cleaning the data to keep only the source and destination counties, and add the unemployment rate as a new column
df_migrations = df_migrations[df_migrations['y1_statefips']<=56]
df_migrations["Unemployment rate"] = df_migrations["n1"]/(df_migrations["n2"] +df_migrations["n1"] )
# drop useless information
df_migrations = df_migrations.drop(columns=["y1_countyname","y2_statefips", "y2_countyfips", "y1_statefips", "y1_countyfips", "y1_state", "statefips_str", "countyfips_str","statefips_str1", "countyfips_str1"])
# remove rows where the data is undefined (n1 == -1)
df_migrations = df_migrations[df_migrations['n1'] != -1]
# convert combined fips to int64
df_migrations['combined_fips-destination'] = df_migrations['combined_fips-destination'].astype('int64')
df_migrations['combined_fips-source'] = df_migrations['combined_fips-source'].astype('int64')
#extracting the combined fips destination and combined fips source for graph in form of numpy arrays
df_graph= df_migrations.drop(columns=["n1","n2","agi","Unemployment rate"])
# extracting all the combinations that have happened in the US between county
dest_source = df_graph.to_numpy()
# reset index starting from 0 (because rows were dropped)
df_migrations = df_migrations.reset_index()
df_migrations = df_migrations.drop(columns=['index'])
```
**From the IRS dataset, create the adjacency matrices**
In these adjacency matrices, the nodes are the counties and the edges are:
- `A_total[i, j]` := total number of people who migrated from county i to county j
- `A_returns[i, j]` := proportion of the people who migrated from county i to county j and paid taxes (filed returns)
- `A_exempt[i, j]` := proportion of the people who migrated from county i to county j and did not pay taxes (exemptions)
```
nodes_index = np.unique(dest_source)
num_nodes = nodes_index.shape[0]
A_total = np.zeros((num_nodes, num_nodes))
A_returns = np.zeros((num_nodes, num_nodes))
A_exemptions = np.zeros((num_nodes, num_nodes))
count = 0
for dest, source in dest_source :
i = np.where(nodes_index == dest)
j = np.where(nodes_index == source)
total = df_migrations["n1"][count] + df_migrations["n2"][count]
A_total[j[0], i[0]] = df_migrations["n1"][count] + df_migrations["n2"][count]
A_returns[j[0], i[0]] = df_migrations["n1"][count]/total
A_exemptions[j[0], i[0]] = df_migrations["n2"][count]/total
count += 1
```
**Preparing the presidential result by county dataset**
The main idea in this cell is to prepare the presidential results by county dataset. Each county is given a label: $+1$ if the county has a Republican majority and $-1$ if it has a Democrat majority.
```
df_presidential_result =pd.read_csv("NTDS_Data/2016_US_County_Level_Presidential_Results.csv" )
df_presidential_result = df_presidential_result.drop(columns=["Unnamed: 0","votes_dem", "votes_gop", "total_votes", "diff", "per_point_diff", "state_abbr", "county_name"])
#Sorting according to the fips code to be consistent with the migration data by IRS
df_presidential_result = df_presidential_result.sort_values(by=['combined_fips'])
#Adding a new column of the winners with -1 corresponding to democrat and 1 to republican
df_presidential_result["Winner"] = np.where((df_presidential_result['per_dem'] > df_presidential_result['per_gop']), -1, 1)
df_presidential_result = df_presidential_result.drop(columns=["per_dem","per_gop"])
# Reindex some FIPS codes due to differences between the FIPS numbering of the two datasets
test = nodes_index - df_presidential_result["combined_fips"].values
df_presidential_result["combined_fips"] = df_presidential_result["combined_fips"] + test
```
## I.2 Study the datasets at hand
First we study the proportion of people paying taxes and not paying taxes for each migration flow. A histogram of these migrations is plotted.
As one can see, on average, $35$% of the people in a migration flow are paying taxes (and conversely $65$% are exempt from paying taxes).
At most, $50$% of the people in a migration flow pay taxes. Hence, it is interesting to note that most people who migrate are exempt from paying taxes.
In subsequent parts of this notebook, we will try to see if we can use these proportions to predict whether a county votes Republican or Democrat.
```
# <returns, exempt>
node_pct = np.zeros((df_migrations.shape[0], 2))
for i in range (0, df_migrations.shape[0]) :
total = df_migrations['n1'][i] + df_migrations['n2'][i]
node_pct[i, 0] = df_migrations['n1'][i] / total
node_pct[i, 1] = df_migrations['n2'][i] / total
df_node_pct = pd.DataFrame(node_pct, columns=["pct_return", "pct_exempt"])
plt.hist(df_node_pct["pct_return"].values, density=False, bins=30)  # 'normed' was removed in recent matplotlib
plt.title('Distribution of the proportion of migrations where people are paying taxes')
plt.ylabel('Number of migrations');
plt.xlabel('Pct. of people paying tax and migrating');
plt.show()

plt.hist(df_node_pct["pct_exempt"].values, density=False, bins=30)
plt.title('Distribution of the proportion of migrations where people are not paying taxes')
plt.ylabel('Number of migrations');
plt.xlabel('Pct. of people not paying tax and migrating');
plt.show()
```
One also wants to consider the proportion of Republican and Democrat counties in the US. Before doing the actual computation, a bit of historical background on the US electoral system is required.
Historically, most of the states in the US are Republican. Hence, if one draws a simple geographic map of the US, most states would be colored red (the color of the Republicans). However, if one then scales the size of each state with its number of inhabitants, the proportion of blue and red on the map becomes more or less equal, with coastal states (states on the Atlantic or Pacific coast) in blue and the inner states in red (Republican).
Our computations confirm these historical proportions: more than $84$% of the counties are Republican.
```
num_republicans = df_presidential_result[df_presidential_result['Winner'] == 1].shape[0]
num_democrats = df_presidential_result[df_presidential_result['Winner'] == -1].shape[0]
pct_republican = df_presidential_result[df_presidential_result['Winner'] == 1].shape[0] / df_presidential_result.shape[0]
pct_democrat = df_presidential_result[df_presidential_result['Winner'] == -1].shape[0] / df_presidential_result.shape[0]
print("Pct. of counties Republican : ", pct_republican, " // Pct. of counties Democrat : ", pct_democrat)
```
# II. Creation of simple graph following structure of migration & first attempt to predict county type
## II.1 Creation of simple graph
The first graphs studied in this notebook are simple to understand as they follow the structure of a migration: if there is a migration between counties i and j, then an edge is set between these two counties.
Before moving on, it is interesting to note that in this section we are creating graphs that are supposed to show a correlation between a type of migration and a voting pattern in a county.
By "type of migration", we mean the proportion of people paying taxes versus not paying taxes in a specific migration flow. For example, we say that a migration flow has a high proportion of people paying taxes if more than $40$% of the people in the flow are paying taxes. The idea is to correlate this type of migration with a specific voting pattern in the destination county.
To achieve this task we will be creating 2 types of graphs:
- `graph_nonRGB_returns`: in these graphs there is an edge between two counties if (1) there is an actual migration between county i and j and (2) the migration flow has a proportion of people paying taxes greater than a **specified threshold**.
- `graph_nonRGB_exempt`: same type of graph as before, but now we consider the proportion of exempted people in a migration flow.
In the subsequent cells, we code mainly two methods: one for creating `graph_nonRGB_returns` graphs and one for creating `graph_nonRGB_exempt` graphs.
**Note :** we refer to the graphs created in this section as "nonRGB" because in a later section we will use RGB (similarity) graphs. One can read this notation as a raw graph built on migration without any kind of similarity extrapolation.
```
def create_adjency_nonRGB_returns(threshold_returns, plot_adj_returns=False) :
"""
    Create the adjacency matrix for a graph where there is an edge between two counties if the migration flow
    between the two counties has a proportion of people paying taxes greater than threshold_returns.
"""
adjacency_nonRGB_returns = A_returns.copy()
adjacency_nonRGB_returns[adjacency_nonRGB_returns >= threshold_returns] = 1
adjacency_nonRGB_returns[adjacency_nonRGB_returns < threshold_returns] = 0
if plot_adj_returns :
plt.spy(adjacency_nonRGB_returns)
plt.show()
return adjacency_nonRGB_returns
def create_graph_nonRGB_returns(threshold_returns, plot_adj_returns=False) :
"""
    Create a graph where there is an edge between two counties if the migration flow
    between the two counties has a proportion of people paying taxes greater than threshold_returns.
    The argument plot_adj_returns is a boolean used if one wants to plot the adjacency matrix of the graph.
"""
graph_nonRGB_returns = nx.from_numpy_array(create_adjency_nonRGB_returns(threshold_returns, plot_adj_returns))
nodes = np.zeros((nodes_index.shape[0], 2))
for fips, result in df_presidential_result.values :
i = np.where(nodes_index == fips)
index = i[0][0]
nodes[index, 0] = index
nodes[index, 1] = result
node = pd.DataFrame(nodes, columns=["id", "result"])
node_props = node.to_dict()
for key in node_props:
nx.set_node_attributes(graph_nonRGB_returns, node_props[key], key)
nx.write_gexf(graph_nonRGB_returns, 'graph_nonRGB_returns_35.gexf')
return graph_nonRGB_returns
def create_graph_nonRGB_returns_features(threshold_returns, plot_adj_returns=False):
graph_nonRGB_returns = nx.from_numpy_array(create_adjency_nonRGB_returns(threshold_returns, plot_adj_returns))
nodes = np.zeros((nodes_index.shape[0], 4))
for fips, result in df_presidential_result.values :
i = np.where(nodes_index == fips)
index = i[0][0]
nodes[index, 0] = index
nodes[index, 1] = result
for j in range (0, df_migrations.shape[0]):
fips = df_migrations['combined_fips-destination'][j]
i = np.where(nodes_index == fips)
index = i[0][0]
nodes[index, 2] = df_migrations['agi'][j]
nodes[index, 3] = df_migrations['Unemployment rate'][j]
node = pd.DataFrame(nodes, columns=["id", "result", "agi", "unemployment_rate"])
node_props = node.to_dict()
for key in node_props:
nx.set_node_attributes(graph_nonRGB_returns, node_props[key], key)
nx.write_gexf(graph_nonRGB_returns, 'graph_nonRGB_returns_35.gexf')
return graph_nonRGB_returns, node
# construct the adjacency matrix and graph for flows based on the proportion of exempt migrants
def create_adjency_nonRGB_exempt(threshold_exempt, plot_adj_exempt=False) :
    """
    Create the adjacency matrix for a graph where there is an edge between two counties if the migration flow
    between the two counties has a proportion of people not paying taxes greater than threshold_exempt.
    """
    adjacency_nonRGB_exempt = A_exemptions.copy()
    adjacency_nonRGB_exempt[adjacency_nonRGB_exempt >= threshold_exempt] = 1
    adjacency_nonRGB_exempt[adjacency_nonRGB_exempt < threshold_exempt] = 0
    if plot_adj_exempt :
        plt.spy(adjacency_nonRGB_exempt)
        plt.show()
    return adjacency_nonRGB_exempt
def create_graph_nonRGB_exempt(threshold_exempt, plot_adj_exempt = False) :
"""
    Create a graph where there is an edge between two counties if the migration flow
    between the two counties has a proportion of people not paying taxes greater than threshold_exempt.
    The argument plot_adj_exempt is a boolean used if one wants to plot the adjacency matrix of the graph.
"""
graph_nonRGB_exempt = nx.from_numpy_array(create_adjency_nonRGB_exempt(threshold_exempt, plot_adj_exempt))
nodes = np.zeros((nodes_index.shape[0], 2))
for fips, result in df_presidential_result.values :
i = np.where(nodes_index == fips)
index = i[0][0]
nodes[index, 0] = index
nodes[index, 1] = result
node = pd.DataFrame(nodes, columns=["id", "result"])
node_props = node.to_dict()
for key in node_props:
nx.set_node_attributes(graph_nonRGB_exempt, node_props[key], key)
nx.write_gexf(graph_nonRGB_exempt, 'graph_nonRGB_exempt.gexf')
return graph_nonRGB_exempt
def create_graph_nonRGB_exempt_features(threshold_exempt, plot_adj_exempt = False) :
graph_nonRGB_exempt = nx.from_numpy_array(create_adjency_nonRGB_exempt(threshold_exempt, plot_adj_exempt))
nodes = np.zeros((nodes_index.shape[0], 4))
for fips, result in df_presidential_result.values :
i = np.where(nodes_index == fips)
index = i[0][0]
nodes[index, 0] = index
nodes[index, 1] = result
for j in range (0, df_migrations.shape[0]):
fips = df_migrations['combined_fips-destination'][j]
i = np.where(nodes_index == fips)
index = i[0][0]
nodes[index, 2] = df_migrations['agi'][j]
nodes[index, 3] = df_migrations['Unemployment rate'][j]
node = pd.DataFrame(nodes, columns=["id", "result", "agi", "unemployment_rate"])
node_props = node.to_dict()
for key in node_props:
nx.set_node_attributes(graph_nonRGB_exempt, node_props[key], key)
nx.write_gexf(graph_nonRGB_exempt, 'graph_nonRGB_exempt.gexf')
return graph_nonRGB_exempt, node
```
## II.2 First attempt at predicting election results
With the graphs built in the previous section, we want to see whether there is a relation between the structure of the graph and the voting pattern of a county.
### II.2.1 First observations using Gephi
The first hypotheses that could be stated are the following:
1. **Hypothesis 1**: a migration flow with more than 35% of people paying taxes will have a Republican county as its destination. One could think that people paying taxes would like to move to Republican counties where taxes such as the property tax are lower.
2. **Hypothesis 2**: a migration flow with more than 70% of people not paying taxes will have a Democratic county as its destination. One could think that people with the lowest income would move to counties where charity is more developed (we are not considering help from the state, which we take to be the same whatever the state).
To validate or reject these two hypotheses, we build two graphs. The first one considers only the migration flows between counties where more than $35$% of the migrants are paying taxes. The second graph considers only the migration flows between counties where more than $70$% of the migrants are not paying taxes.
If hypothesis 1 is correct, then when observing the first graph in $Gephi$, most of the connections will point toward Republican counties. On the other hand, if hypothesis 2 is correct, then most migrations in the second graph will have a Democratic county as destination.
```
create_graph_nonRGB_exempt(0.7)
create_graph_nonRGB_returns(0.35)
```
Result of the observations in $Gephi$:
- *Observation on the exemption graph*: the exemption graph (i.e. the graph with edges between nodes where the migration is characterised by more than $70$% of migrants not paying taxes) does not have the expected structure. Edges go from Democratic to Republican counties and from Republican to Democratic counties in roughly equal proportions, so hypothesis 2 cannot be validated.
- *Observation on the returns graph*: the returns graph (i.e. the graph with edges between nodes where the migration is characterised by more than $35$% of migrants paying taxes) does not have the expected structure either. Most of the migration is concentrated between Democratic nodes: migration characterised by a high rate of people paying taxes appears to be concentrated between Democratic counties.
As a conclusion, both hypotheses 1 and 2 are rejected. However, the returns graph suggests that by studying the degree of a node one could be able to tell whether it is a Democratic or a Republican county.
### II.2.2 Prediction based on degree of county (i.e node of the graph)
The aforementioned observation tells us that by studying the degree of a node we might be able to predict the label (i.e. Republican or Democrat) of that node. We will now verify this assumption.
The driving idea behind our first prediction algorithm is quite simple: we believe that we can split the nodes into two categories, the first being nodes with high degree and the second being nodes with low degree. These two categories will then be mapped to Democrat and Republican, respectively.
However, the problem remains of finding the correct threshold to construct our graph (remember that our graphs are constructed using a threshold on the proportion of migrants paying or not paying taxes) and the degree that defines the limit between the two aforementioned categories. This limit is from now on referred to as the "cut".
Finding the best possible tuple of hyper-parameters is a cross-validation problem. Hence, the subsequent cell implements a cross-validation to find the best possible cut and threshold for this problem and computes the accuracy obtained when predicting labels in this way.
```
def get_degree_attribute (G) :
degree_attr = [(G.degree(n), G.nodes[n]['result']) for n in G.nodes()]
return np.array(degree_attr)
def get_degree_party (degree_attr) :
democrats = []
republicans = []
for tuple_ in degree_attr :
if tuple_[1] == -1 :
democrats.append(tuple_[0])
else :
republicans.append(tuple_[0])
return democrats, republicans
def compute_accuracy(d, r, best_cut) :
pct_dem_predicted_correctly = d[d > best_cut].shape[0]/d.shape[0]
pct_rep_predicted_correctly = r[r > best_cut].shape[0]/r.shape[0]
accuracy = (num_democrats*pct_dem_predicted_correctly + num_republicans*(1 - pct_rep_predicted_correctly))/(num_democrats + num_republicans)
return accuracy
def cross_validation_returns (threshold_range_min, threshold_range_max, step=0.01, print_best=False) :
thresholds = np.arange(start=threshold_range_min, stop=threshold_range_max, step=step)
max_global = 0
best_cut = 0
best_threshold = 0
for threshold in thresholds :
graph_nonRGB_returns = create_graph_nonRGB_returns(threshold)
degree_attr = get_degree_attribute(graph_nonRGB_returns)
d, r = get_degree_party(degree_attr)
d = np.array(d)
r = np.array(r)
d_qt025 = np.quantile(d, 0.25)
d_qt075 = np.quantile(d, 0.75)
cuts = np.arange(d_qt025, d_qt075, 1)
max_local = 0
cut_local = 0
for cut in cuts :
temp = np.abs(d[d > cut].shape[0]/d.shape[0] - r[r > cut].shape[0]/r.shape[0])
if temp > max_local :
max_local = temp
cut_local = cut
if max_local > max_global :
max_global = max_local
best_threshold = threshold
best_cut = cut_local
if print_best :
graph_nonRGB_returns = create_graph_nonRGB_returns(best_threshold)
degree_attr = get_degree_attribute(graph_nonRGB_returns)
d, r = get_degree_party(degree_attr)
d = np.array(d)
r = np.array(r)
print(d[d > best_cut].shape[0]/d.shape[0])
print(r[r > best_cut].shape[0]/r.shape[0])
plt.hist(d, density=True, bins= 100)
plt.show()
plt.hist(r, density=True, bins= 100)
plt.show()
accuracy = compute_accuracy(d, r, best_cut)
return best_cut, best_threshold, accuracy
best_cut_brute, best_threshold_brute, accuracy_brute = cross_validation_returns(0.3, 0.6, print_best=True)
print("The best cut is : ", best_cut_brute, "and the best threshold is : ", best_threshold_brute)
print("W/ overall accuracy : ", accuracy_brute)
```
The graphs above show that by constructing a graph from migrations characterised by more than $38$% of people paying taxes, we can split the nodes of the graph into two categories: Republicans being the nodes with a degree less than 6 and Democrats being the nodes with a degree greater than 6. By doing so, we correctly classify half of the Democrats and $92$% of the Republicans, giving an overall accuracy of $85$%. This is not great, as one could simply declare all counties Republican and get an overall accuracy of about $81$%.
**Note :** we refer to this method as cross-validation, but we are not splitting the data into a validation set and a proper training set, so talking about cross-validation here might be an overstatement. However, the term still captures the idea that we are trying to find the best possible tuple (cut, threshold) for this prediction.
### II.2.3 Prediction based on degree neighboring nodes of county
The previous technique, predicting the label of a node from its absolute degree, proved to perform poorly, and the reason was that half of the Democratic nodes were wrongly predicted. Hence, we try a new technique: predicting the label of a node based on the average degree of its neighbors.
The problem with the previous prediction algorithm was that, to obtain a clear cut between the two categories, we had to use a high threshold. A high threshold meant that most of the Republican nodes were edge-free, but also that a high proportion of Democratic nodes were edge-free and hence wrongly classified. To solve this problem, we study the neighboring nodes and reduce the threshold. Even though more Republican nodes will then have connections, we believe that by averaging the degrees of all their neighbors we will still get a lower average degree than for Democratic nodes.
Also, because this method seemed promising, we developed it for the two graphs: returns and exemptions.
**Study of neighbors on the returns graph**
```
def compute_mean (neigh_degree) :
if neigh_degree.shape[0] == 0 :
return 0
else :
return neigh_degree.mean()
def mean_degree_neighbors (G) :
degree_attr = get_degree_attribute(G)
mean_degree_neigh = []
dicts = [G.neighbors(n) for n in G.nodes]
for a_dict in dicts :
neigh_degree = []
for key in a_dict:
neigh_degree.append(degree_attr[key][0])
mean_degree_neigh.append(compute_mean(np.array(neigh_degree)))
return np.concatenate((np.array(mean_degree_neigh).reshape(degree_attr.shape[0], 1), degree_attr[:, 1].reshape(degree_attr.shape[0], 1)), axis=1)
def cross_validation_neigh_returns (threshold_range_min, threshold_range_max, step=0.01, print_best=False) :
thresholds = np.arange(start=threshold_range_min, stop=threshold_range_max, step=step)
max_global = 0
best_cut = 0
best_threshold = 0
for threshold in thresholds :
graph_nonRGB_returns = create_graph_nonRGB_returns(threshold)
degree_attr = mean_degree_neighbors(graph_nonRGB_returns)
d, r = get_degree_party(degree_attr)
d = np.array(d)
r = np.array(r)
d_qt025 = np.quantile(d, 0.25)
d_qt075 = np.quantile(d, 0.75)
cuts = np.arange(d_qt025, d_qt075, 1)
max_local = 0
cut_local = 0
for cut in cuts :
temp = np.abs(d[d > cut].shape[0]/d.shape[0] - np.log(r[r > cut].shape[0]/r.shape[0]))
if temp > max_local :
max_local = temp
cut_local = cut
if max_local > max_global :
max_global = max_local
best_threshold = threshold
best_cut = cut_local
if print_best :
graph_nonRGB_returns = create_graph_nonRGB_returns(best_threshold)
degree_attr = mean_degree_neighbors(graph_nonRGB_returns)
d, r = get_degree_party(degree_attr)
d = np.array(d)
r = np.array(r)
print(d[d > best_cut].shape[0]/d.shape[0])
print(r[r > best_cut].shape[0]/r.shape[0])
        plt.hist(d, density=True, bins=100)
        plt.show()
        plt.hist(r, density=True, bins=100)
plt.show()
accuracy = compute_accuracy(d, r, best_cut)
return best_cut, best_threshold, accuracy
best_cut_return, best_threshold_return, accuracy_returns = cross_validation_neigh_returns(0.3, 0.6, print_best=True)
print("Best cut is : ", best_cut_return, " // best threshold is : ", best_threshold_return)
print("W/ overall accuracy : ", accuracy_returns)
def cross_validation_neigh_exempt (threshold_range_min, threshold_range_max, step=0.01, print_best=False) :
thresholds = np.arange(start=threshold_range_min, stop=threshold_range_max, step=step)
max_global = 0
best_cut = 0
best_threshold = 0
for threshold in thresholds :
graph_nonRGB_exempt = create_graph_nonRGB_exempt(threshold)
degree_attr = mean_degree_neighbors(graph_nonRGB_exempt)
d, r = get_degree_party(degree_attr)
d = np.array(d)
r = np.array(r)
d_qt025 = np.quantile(d, 0.25)
d_qt075 = np.quantile(d, 0.75)
cuts = np.arange(d_qt025, d_qt075, 1)
max_local = 0
cut_local = 0
for cut in cuts :
temp = np.abs(d[d > cut].shape[0]/d.shape[0] - np.log(r[r > cut].shape[0]/r.shape[0]))
if temp > max_local :
max_local = temp
cut_local = cut
if max_local > max_global :
max_global = max_local
best_threshold = threshold
best_cut = cut_local
if print_best :
graph_nonRGB_exempt = create_graph_nonRGB_exempt(best_threshold)
degree_attr = mean_degree_neighbors(graph_nonRGB_exempt)
d, r = get_degree_party(degree_attr)
d = np.array(d)
r = np.array(r)
print(d[d > best_cut].shape[0]/d.shape[0])
print(r[r > best_cut].shape[0]/r.shape[0])
        plt.hist(d, density=True, bins=100)
        plt.show()
        plt.hist(r, density=True, bins=100)
plt.show()
accuracy = compute_accuracy(d, r, best_cut)
return best_cut, best_threshold, accuracy
best_cut_exempt, best_threshold_exempt, accuracy_exempt = cross_validation_neigh_exempt(0.55, 0.8, print_best=True)
print("Best cut is : ", best_cut_exempt, " // best threshold is : ", best_threshold_exempt)
print("W/ overall accuracy : ", accuracy_exempt)
```
A first try gave us an overall accuracy of $78$%, which is clearly a poor result -- simply labelling all the counties as Republican would give a better accuracy.
Hence, a bit of inspiration was taken from machine learning: when one is faced with heavy-tailed targets (i.e. most of the dataset is driven toward one specific value), a common trick is to use a penalizing function such as the logarithm.
This is what we introduce here: when computing the absolute difference between the fractions of nodes above the cut, we penalize the Republican nodes with the log function. By doing so, we force the "loss function" (based on the degree of the nodes) to keep the fraction of Republican nodes above the cut small (because they are the ones that contribute most to the final error). With this change we reach an overall accuracy of $87$%.
**Note :** the actual value of the new loss function is:
$$
loss = |pctOfDemocratAboveCut - log(pctOfRepublicanAboveCut)|
$$
Even with these changes, we were only able to improve the result by a mere $2$%, which would be fine if there were not such a heavy tail toward Republican counties. Hence, in subsequent cells we try more sophisticated methods based on the graph Laplacian and GCNs in order to reach a higher accuracy.
### II.2.4 Graph observation
Observation of the characteristics of the nonRGB returns and nonRGB exemption graphs, such as the type of network, the clustering coefficient and the sparsity.
```
arr = df_graph.to_numpy()
possible_nodes = np.unique(arr)
A_migr = np.zeros((len(possible_nodes), len(possible_nodes)))
for dest, source in arr :
    i = np.where(possible_nodes == dest)
    j = np.where(possible_nodes == source)
    A_migr[j[0], i[0]] = 1
    A_migr[i[0], j[0]] = 1
G_migr = nx.from_numpy_matrix(A_migr)
G_exempt = create_graph_nonRGB_exempt(0.7)
G_returns = create_graph_nonRGB_returns(0.35)
```
Degree distribution of the graphs
```
fig, axes = plt.subplots(1, 3, figsize=(20, 6))
axes[0].set_title('Degree distribution of Non RGB exemption graph')
exemption_degrees = [degree for node, degree in G_exempt.degree()]
axes[0].hist(exemption_degrees);
axes[1].set_title('Degree distribution of Non RGB returns graph')
returns_degrees = [degree for node, degree in G_returns.degree()]
axes[1].hist(returns_degrees);
axes[2].set_title('Degree distribution of the migration graph')
migr_degrees = [degree for node, degree in G_migr.degree()]
axes[2].hist(migr_degrees);
```
We can clearly observe that the returns graph has nodes of higher degree compared to the exemption graph.
The exemption graph has fewer edges because there is an edge between two nodes only when more than 70% of the migration flow is not paying taxes. This is a stricter requirement than the one used for edges in the returns graph, which makes the exemption graph sparser. The degree distributions also show that most of the counties have a degree below 50 and only a few counties have a degree above that. The migration graph, as expected, has higher-degree nodes, since it has an edge between every pair of counties with any migration between them.
As the returns graph contains higher-degree nodes, it is expected to have a higher average clustering coefficient and a larger giant component.
The degree distributions of both graphs approximately follow a power law, and hence the graphs are scale-free.
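One quick, informal way to back up the scale-free claim (not part of the original notebook) is to plot the degree distributions on log-log axes; a roughly straight decay is consistent with a power law. The sketch below assumes the `G_returns` and `G_exempt` graphs built above.
```
from collections import Counter

def loglog_degree_plot(G, label):
    """Scatter the empirical degree distribution of G on log-log axes (degree-0 nodes are skipped)."""
    degree_counts = Counter(d for _, d in G.degree() if d > 0)
    degrees, counts = zip(*sorted(degree_counts.items()))
    plt.loglog(degrees, counts, 'o', alpha=0.6, label=label)

plt.figure(figsize=(8, 5))
loglog_degree_plot(G_returns, 'returns graph')
loglog_degree_plot(G_exempt, 'exemption graph')
plt.xlabel('Degree (log scale)')
plt.ylabel('Number of nodes (log scale)')
plt.title('Degree distributions on log-log axes')
plt.legend()
plt.show()
```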
Evaluating basic properties of the exemption graph
```
print('Number of nodes: {}, Number of edges: {}'. format(G_exempt.number_of_nodes(), G_exempt.number_of_edges()))
print('Number of connected components: {}'. format(nx.number_connected_components(G_exempt)))
exempt_connected_components = (G_exempt.subgraph(c) for c in nx.connected_components(G_exempt))
giant_exempt = max(exempt_connected_components, key = len)
print('The giant component of the exemption graph has {} nodes and {} edges.'.format(giant_exempt.number_of_nodes(), giant_exempt.size()))
# Calculating the clustering coefficient
print('The average clustering coefficient of the exemption graph is {}'.format(nx.average_clustering(G_exempt)))
```
By looking at the clustering coefficient, we can assume that the graph is not random and has a structure within it.
Simulating the graph using an Erdos-Rényi network
```
# It should have the same number of nodes
n1 = len(G_exempt.nodes())
# Edges of the exemption graph
m1 = G_exempt.size()
# The p parameter is adjusted to have the same number of edges
p1 = 2*m1 / (n1 * (n1-1))
G_exempt_er = nx.erdos_renyi_graph(n1, p1)
print('The Erdos-Rényi network that simulates the exemption graph has {} edges.'.format(G_exempt_er.size()))
```
The Erdos-Rényi network that simulates the exemption graph has 4089 edges.
```
# Calculating the clustering coefficient
nx.average_clustering(G_exempt_er)
```
The Erdos-Rényi graph has a very low clustering coefficient compared to the original graph, since the ER network is completely random while the original graph is not.
Simulating the graph using a Barabási-Albert (BA) network
```
q1 = 2
G_exempt_ba = nx.barabasi_albert_graph(n1, q1)
print('The Barabási-Albert network that simulates the exemption graph has {} edges.'.format(G_exempt_ba.size()))
# Calculating the clustering coefficient
nx.average_clustering(G_exempt_ba)
```
The BA network has its average clustering coefficient close to that of the original graph, but it has a significantly higher number of edges than the original graph.
```
fig, axes = plt.subplots(1, 3, figsize=(20, 6))
axes[0].set_title('Degree distribution of the Simulated BA network')
exemption_degrees = [degree for node, degree in G_exempt_ba.degree()]
axes[0].hist(exemption_degrees);
axes[1].set_title('Degree distribution of the Simulated ER network')
exempt_er_degrees = [degree for node, degree in G_exempt_er.degree()]
axes[1].hist(exempt_er_degrees);
axes[2].set_title('Degree distribution of the original exemption graph')
exempt_degrees = [degree for node, degree in G_exempt.degree()]
axes[2].hist(exempt_degrees);
```
The degree distribution and the average clustering of the simulated BA network is very close to that of the original graph.
Performing similar analysis for the returns graph
```
print('Number of nodes: {}, Number of edges: {}'. format(G_returns.number_of_nodes(), G_returns.number_of_edges()))
print('Number of connected components: {}'. format(nx.number_connected_components(G_returns)))
returns_connected_components = (G_returns.subgraph(c) for c in nx.connected_components(G_returns))
giant_returns = max(returns_connected_components, key = len)
print('The giant component of the returns graph has {} nodes and {} edges.'.format(giant_returns.number_of_nodes(), giant_returns.size()))
# Calculating the clustering coefficient
print('The average clustering coefficient of the returns graph is {}'.format(nx.average_clustering(G_returns)))
# It should have the same number of nodes
n2 = len(G_returns.nodes())
# Edges of the returns graph
m2 = G_returns.size()
# The p parameter is adjusted to have the same number of edges
p2 = 2*m2 / (n2 * (n2-1))
G_returns_er = nx.erdos_renyi_graph(n2, p2)
print('The Erdos-Rényi network that simulates the returns graph has {} edges.'.format(G_returns_er.size()))
# Calculating the clustering coefficient
nx.average_clustering(G_returns_er)
q2 = 6
G_returns_ba = nx.barabasi_albert_graph(n2, q2)
print('The Barabási-Albert network that simulates the returns graph has {} edges.'.format(G_returns_ba.size()))
# Calculating the clustering coefficient
nx.average_clustering(G_returns_ba)
fig, axes = plt.subplots(1, 3, figsize=(20, 6))
axes[0].set_title('Degree distribution of the Simulated BA network')
returnsba_degrees = [degree for node, degree in G_returns_ba.degree()]
axes[0].hist(returnsba_degrees);
axes[1].set_title('Degree distribution of the Simulated ER network')
returnser_degrees = [degree for node, degree in G_returns_er.degree()]
axes[1].hist(returnser_degrees);
axes[2].set_title('Degree distribution of the original returns graph')
returns_degrees = [degree for node, degree in G_returns.degree()]
axes[2].hist(returns_degrees);
```
Similar observations hold for the returns graph. The BA network simulates the returns graph more accurately than the ER graph: the degree distribution of the simulated network is similar to that of the original returns graph, but its average clustering coefficient is too low. This could be because the returns graph has more structure than the random BA network.
## II.3 Second attempt at predicting election results - GCN and Graph signal processing
**Note :** all helper functions used in this part are defined in the utils file.
As mentioned above, we now try more sophisticated methods, namely GCNs and graph signal processing based on the Laplacian and the graph Fourier transform, applied to the returns and exemption graphs.
In both methods, 20% of the target labels (either +1 or -1) are randomly masked to zero, which constitutes the signal on which filtering will be performed. These masked nodes serve as the test set on which performance is evaluated, while the remaining target labels are used for training.
**Graph signal processing method:**
The idea is to use Fourier analysis: we apply the graph Fourier transform to the masked signal, filter it with an ideal low-pass filter (and, separately, a heat kernel) to smooth it, and convert it back to the vertex domain. The filtered values at the masked nodes contribute to the prediction: the final prediction of a masked node is obtained by averaging the filtered value of the node itself and those of its neighbours, then thresholding back to +1/-1 entries and comparing with the ground-truth labels using the F1 score.
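The helper functions `compute_laplacian`, `spectral_decomposition`, `ideal_graph_filter` and `pred_iteration` live in the utils file and are not reproduced here. The function below is only a minimal sketch of what the ideal low-pass filtering step amounts to, under that assumption; it is not the project's actual implementation.
```
import numpy as np

def sketch_lowpass_filter(signal, adjacency, cutoff=0.1):
    """Illustrative ideal low-pass graph filter: keep only low graph frequencies."""
    degrees = adjacency.sum(axis=1).astype(float)
    laplacian = np.diag(degrees) - adjacency
    d_inv_sqrt = np.zeros_like(degrees)
    nz = degrees > 0
    d_inv_sqrt[nz] = degrees[nz] ** -0.5
    laplacian = d_inv_sqrt[:, None] * laplacian * d_inv_sqrt[None, :]  # symmetric normalization
    lamb, U = np.linalg.eigh(laplacian)   # graph Fourier basis (Laplacian eigenvectors)
    x_hat = U.T @ signal                  # forward graph Fourier transform
    x_hat[lamb >= cutoff] = 0             # ideal low-pass: zero out high frequencies
    return U @ x_hat                      # back to the vertex domain
```
The filtered values are then averaged with the neighbours' values and thresholded back to +1/-1, as described above.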
**GCN method:** the GCN method requires a train set and a test set as well. Unlike the Fourier method, where the masked labels are directly set to zero in the original target vector, the GCN requires a different label format: a train mask and a test mask, both with the same length as the original target and with values 0 or 1. If the i-th value of the train mask is 0, then the i-th value of the test mask must be 1, and vice versa; this means that the i-th node is masked and will be used for testing, not for training. In this way, the labels are separated into two disjoint sets. These masks are then applied to the original target to form the train labels and the test labels, which are ready to be fed to the GCN.
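As a small, illustrative sketch of the mask format described above (an assumption about the shape of the masks, not the utils implementation):
```
import numpy as np

def make_masks(num_nodes, test_fraction=0.2, seed=0):
    """Build complementary 0/1 train and test masks over the node labels."""
    rng = np.random.default_rng(seed)
    test_idx = rng.choice(num_nodes, size=int(num_nodes * test_fraction), replace=False)
    test_mask = np.zeros(num_nodes, dtype=int)
    test_mask[test_idx] = 1
    train_mask = 1 - test_mask   # every node is in exactly one of the two sets
    return train_mask, test_mask

# e.g. y_train = y * train_mask and y_test = y * test_mask hide the held-out labels
```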
```
# creation of the graph // seperation of adjency matrix & label/features for later use
_, features1 = create_graph_nonRGB_returns_features(0.38)
adjacency_nonRGB_returns = create_adjency_nonRGB_returns(0.38, plot_adj_returns=False)
(features1)
```
### II.3.1 Graph signal processing and GCN on the returns graph
With this graph, we use the Fourier method to predict the outcome of the election for the masked counties of the returns graph.
```
# prepare A_migration and target label
A_migration = adjacency_nonRGB_returns.copy()
# prepare the target label
y_presidential_result = features1["result"].copy()
# compute lamb and U
laplacian_migration = compute_laplacian(A_migration, normalize=True)
lamb_migration, U_migration = spectral_decomposition(laplacian_migration)
# prepare low pass filter
ideal_lp_migration = np.ones((A_migration.shape[0],))
ideal_lp_migration[lamb_migration >= 0.1] = 0 # to tune
#heat kernel filter
ideal_ht_migration=np.exp(-0.1 * lamb_migration) #0.1 can be tuned
# apply filter
x_lp_migration = ideal_graph_filter(y_presidential_result.copy(),ideal_lp_migration,U_migration)
x_ht_migration = ideal_graph_filter(y_presidential_result.copy(), ideal_ht_migration, U_migration)
```
In addition to the low-pass filter used previously, a heat kernel is also tested in an attempt to improve accuracy.
```
iters = 20
n = int(len(y_presidential_result)*0.2)
accuracy_mean_fourier, accuracy_var_fourier = pred_iteration(A_migration,iters, y_presidential_result, n, x_lp_migration)
accuracy_mean_ht, accuracy_var_ht = pred_iteration(A_migration,iters, y_presidential_result, n, x_ht_migration)
```
Using low-pass filtering and a heat kernel allows us to correctly predict $87$% of the election results, a result similar to the one found in part II.2.3, which we already noted was not fully satisfying. So we move on to the GCN method.
**GCN method**
```
# determine features to use in GCN
X_migration = features1.drop(columns=['id', 'result']).values
# evaluation GCN performance
accuracy_mean_GCN, accuracy_var_GCN = apply_gcn(iters,X_migration,y_presidential_result,A_migration,laplacian_migration,lamb_migration,U_migration)
```
### II.3.2 Graph signal processing and GCN on the exemption graph
We conduct the same study as in part II.3.1, but on the exemption graph (i.e. the graph where each flow is characterised by at least $56$% of migrants not paying taxes).
```
# creation of the graph // seperation of adjency matrix & label/features for later use
_, features2 = create_graph_nonRGB_exempt_features(0.56)
adjacency_nonRGB_exempt = create_adjency_nonRGB_exempt(0.56, plot_adj_exempt = False )
```
**Fourier method**
```
# prepare A_migration and target label
A_migration2 = adjacency_nonRGB_exempt.copy()
# prepare the target label
y_presidential_result2 = features2["result"].copy()
# compute lamb and U
laplacian_migration2 = compute_laplacian(A_migration2, normalize=True)
lamb_migration2, U_migration2 = spectral_decomposition(laplacian_migration2)
# low pass filter
ideal_lp_migration2 = np.ones((A_migration2.shape[0],))
ideal_lp_migration2[lamb_migration2 >= 0.1] = 0 # to tune
#heat kernel filter
ideal_ht_migration2=np.exp(-0.1 * lamb_migration2)
# apply filter
x_lp_migration2 = ideal_graph_filter(y_presidential_result2.copy(),ideal_lp_migration2,U_migration2)
x_ht_migration2 = ideal_graph_filter(y_presidential_result2.copy(),ideal_ht_migration2,U_migration2)
iters = 20
n = int(len(y_presidential_result2)*0.2)
accuracy_mean_fourier2, accuracy_var_fourier2 = pred_iteration(A_migration2,iters, y_presidential_result2, n, x_lp_migration2)
accuracy_mean_ht2, accuracy_var_ht2 = pred_iteration(A_migration2,iters, y_presidential_result2, n, x_ht_migration2)
```
With the exemption graph we achieve an accuracy of $92$%, a result that starts to be rather conclusive.
**GCN method**
```
# determine features to use in GCN
X_migration2 = features2.drop(columns=['id', 'result']).values
# evaluation GCN performance
accuracy_mean_GCN2, accuracy_var_GCN2 = apply_gcn(iters,X_migration2,y_presidential_result2,A_migration2,laplacian_migration2,lamb_migration2,U_migration2)
```
# III. Study of a similarity graph for prediction
The results of the previous section were good, but still far from great, so we now move on to constructing another type of graph: a similarity graph using an RGB kernel.
Using such a graph is interesting in the sense that the IRS data allows us to add another dimension to the graph: the origin of the migrants, i.e. whether they are US citizens or foreigners. This allows us to capture a polarizing aspect of migration: the immigration of foreigners.
To construct the similarity graph we re-prepare the IRS dataset (we now consider another part of the IRS dataset -- the one that allows us to separate foreigners from US citizens).
## III.1. Clean and prepare the data
```
# load the data
df_migrations = pd.read_csv("./NTDS_Data/countyinflow1516.csv" )
# keep only the summary rows for each county
df_migrations = df_migrations[df_migrations['y1_countyname'].str.contains("County Total Migration")]
# create the combined fips county number
df_migrations['statefips_str'] = df_migrations['y2_statefips'].apply(lambda x : str(x).zfill(2))
df_migrations['countyfips_str'] = df_migrations['y2_countyfips'].apply(lambda x : str(x).zfill(3))
df_migrations['combined_fips'] = df_migrations['statefips_str'].apply(lambda x: x.lstrip('0')) + df_migrations['countyfips_str']
# drop useless information
df_migrations = df_migrations.drop(columns=["y2_statefips", "y2_countyfips", "y1_statefips", "y1_countyfips", "y1_state", "statefips_str", "countyfips_str"])
# separate the migrations into three dataframes (total, US, foreign)
df_migration_total = df_migrations[df_migrations['y1_countyname'].str.contains("County Total Migration-US and Foreign")]
df_migrations['y1_countyname'] = df_migrations['y1_countyname'].apply(lambda x : x if x.find("County Total Migration-US and Foreign") == -1 else "County Total Migration Both")
df_migration_us = df_migrations[df_migrations['y1_countyname'].str.contains("County Total Migration-US")]
df_migration_for = df_migrations[df_migrations['y1_countyname'].str.contains("County Total Migration-Foreign")]
# drop the name of the column
df_migration_total = df_migration_total.drop(columns=["y1_countyname"])
df_migration_us = df_migration_us.drop(columns=["y1_countyname"])
df_migration_for = df_migration_for.drop(columns=["y1_countyname"])
# remove rows where the data is undefined (encoded as -1)
df_migration_total = df_migration_total[df_migration_total['n1'] != -1]
df_migration_us = df_migration_us[df_migration_us['n1'] != -1]
df_migration_for = df_migration_for[df_migration_for['n1'] != -1]
# convert combined fips to int64
df_migration_total['combined_fips'] = df_migration_total['combined_fips'].astype('int64')
df_migration_us['combined_fips'] = df_migration_us['combined_fips'].astype('int64')
df_migration_for['combined_fips'] = df_migration_for['combined_fips'].astype('int64')
df_presidential_result = pd.read_csv("./NTDS_Data/2016_US_County_Level_Presidential_Results.csv" )
df_presidential_result = df_presidential_result.drop(columns=["Unnamed: 0","votes_dem", "votes_gop", "total_votes", "diff", "per_point_diff", "state_abbr", "county_name"])
# merge the two dataset and drop useless column, add a new column winner
df_merged_total = pd.merge(df_migration_total, df_presidential_result, on="combined_fips", how='inner')
df_merged_us = pd.merge(df_migration_us, df_presidential_result, on="combined_fips", how='inner')
df_merged_for = pd.merge(df_migration_for, df_presidential_result, on="combined_fips", how='inner')
df_merged_total['difference'] = df_merged_total['per_dem'] - df_merged_total['per_gop']
df_merged_us['difference'] = df_merged_us['per_dem'] - df_merged_us['per_gop']
df_merged_for['difference'] = df_merged_for['per_dem'] - df_merged_for['per_gop']
df_merged_total['winner'] = df_merged_total['difference'].apply(lambda x : -1 if x > 0 else 1)
df_merged_us['winner'] = df_merged_us['difference'].apply(lambda x : -1 if x > 0 else 1)
df_merged_for['winner'] = df_merged_for['difference'].apply(lambda x : -1 if x > 0 else 1)
df_merged_total = df_merged_total.drop(columns=['difference'])
df_merged_us = df_merged_us.drop(columns=['difference'])
df_merged_for = df_merged_for.drop(columns=['difference'])
```
## III.2 Creation of the similarity graph
We will create 3 similarity graphs (a sketch of the similarity-graph construction is given right after this list):
- `total graph`: a graph that encapsulates the total inflow of migrants into a county (US citizens or not)
- `US graph`: a graph that only encapsulates the migration of US citizens
- `For graph`: a graph that only encapsulates the migration of foreigners
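The function `epsilon_similarity_graph` comes from the utils file and is not shown in this notebook. Below is a minimal sketch of what such a function typically does (a Gaussian kernel on pairwise feature distances, sparsified at `epsilon`); the signature mirrors the calls in the next cell but is an assumption, not the project's code.
```
import numpy as np
from scipy.spatial.distance import pdist, squareform

def sketch_epsilon_similarity_graph(X, sigma, epsilon):
    """Weighted adjacency from a Gaussian kernel on pairwise distances, thresholded at epsilon."""
    features = np.asarray(X, dtype=float)
    distances = squareform(pdist(features, metric='euclidean'))
    weights = np.exp(-distances ** 2 / (2 * sigma ** 2))   # Gaussian (RBF) kernel
    np.fill_diagonal(weights, 0)                           # no self-loops
    weights[weights < epsilon] = 0                         # keep only sufficiently similar counties
    return weights
```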
```
# compute the adjacency matrix for the total graph
X_total = df_merged_total.drop(columns=['combined_fips', 'per_dem', 'per_gop', 'winner'])
nodes_total = df_merged_total.drop(columns=['n1', 'n2', 'agi', 'per_dem', 'per_gop']).values
X_total['agi'] = (X_total['agi'] - X_total['agi'].mean()) / X_total['agi'].std()
X_total['prop_ret/exempt'] = X_total['n1'] / X_total['n2']
X_total = X_total.drop(columns=['n1', 'n2'])
adjacency_RGB_total = epsilon_similarity_graph(X_total, sigma=0.5284353963018223*0.1, epsilon=0.2)
# compute the adjacency matrix for the foreigners graph
X_for = df_merged_for.drop(columns=['combined_fips', 'per_dem', 'per_gop', 'winner'])
nodes_for = df_merged_for.drop(columns=['n1', 'n2', 'agi', 'per_dem', 'per_gop']).values
X_for['agi'] = (X_for['agi'] - X_for['agi'].mean()) / X_for['agi'].std()
X_for['prop_ret/exempt'] = X_for['n1'] / X_for['n2']
X_for = X_for.drop(columns=['n1', 'n2'])
adjacency_RGB_for = epsilon_similarity_graph(X_for, sigma=0.6675252605174871*0.1, epsilon=0.5)
# compute the adjacency matrix for the US graph
X_us = df_merged_us.drop(columns=['combined_fips', 'per_dem', 'per_gop', 'winner'])
nodes_us = df_merged_us.drop(columns=['n1', 'n2', 'agi', 'per_dem', 'per_gop']).values
X_us['agi'] = (X_us['agi'] - X_us['agi'].mean()) / X_us['agi'].std()
X_us['prop_ret/exempt'] = X_us['n1'] / X_us['n2']
X_us = X_us.drop(columns=['n1', 'n2'])
adjacency_RGB_us = epsilon_similarity_graph(X_us, sigma=0.5310405705207334*0.1, epsilon=0.5)
```
## III.3 Graph signal processing on the similarity graphs
The approach is similar to the one in the previous section; only the graph changes, as three graphs are built according to the origin of the migrants. The features used are the normalized AGI and the ratio between people who are paying taxes and those who are not.
**Laplacian for total**
```
# prepare A(adjacency matrix)
A = adjacency_RGB_total.copy()
# prepare the target label
y = df_merged_total["winner"].copy()
# prepare features
X_total = X_total.values
# compute corresponding lamb and U
laplacian = compute_laplacian(A, normalize=True)
lamb, U = spectral_decomposition(laplacian)
# prepare filter
#low pass filter
n_nodes = A.shape[0]
ideal_lp = np.ones((n_nodes,))
ideal_lp[lamb >= 0.1] = 0 # to tune
#heat kernel filter
ideal_ht=np.exp(-0.1 * lamb)
# apply filter
x_lp = ideal_graph_filter(y.copy(),ideal_lp,U)
x_ht= ideal_graph_filter(y.copy(),ideal_ht,U)
# detemine the number of iteration
iters = 20
# determine the percentage of masks
n = int(len(y)*0.2)
# accuracy of the low pass filter
print("Fourier method:")
accuracy_mean, accuracy_var = pred_iteration(A,iters, y, n, x_lp)
#accuracy of the heat kernel
print("With heat kernel:")
accuracy_mean_ht, accuracy_var_ht = pred_iteration(A,iters, y, n, x_ht)
```
**Laplacian for foreigner**
```
# prepare A_for(adjacency matrix)
A_for = adjacency_RGB_for.copy()
# prepare the target label
y_for = df_merged_for["winner"].copy()
# prepare features
X_for = X_for.values
# compute corresponding lamb and U
laplacian_for = compute_laplacian(A_for, normalize=True)
lamb_for, U_for = spectral_decomposition(laplacian_for)
# prepare filter
ideal_lp_for = np.ones((A_for.shape[0],))
ideal_lp_for[lamb_for >= 0.1] = 0 # to tune
#heat kernel
ideal_ht_for=np.exp(-0.1 * lamb_for)
# apply filter
x_lp_for = ideal_graph_filter(y_for.copy(),ideal_lp_for,U_for)
x_ht_for = ideal_graph_filter(y_for.copy(),ideal_ht_for,U_for)
# detemine the number of iteration
iters_for = 20
# determine the percentage of masks
n_for = int(len(y_for)*0.2)
# apply low-pass method
print("Fourier method:")
accuracy_mean_for, accuracy_var_for = pred_iteration(A_for,iters_for, y_for, n_for, x_lp_for)
#heat kernel
print("With heat kernel:")
accuracy_mean_for_ht, accuracy_var_for_ht = pred_iteration(A_for, iters_for, y_for, n_for, x_ht_for)
```
**Laplacian for US**
```
# prepare A_us(adjacency matrix)
A_us = adjacency_RGB_us.copy()
# prepare the target label
y_us = df_merged_us["winner"].copy()
# prepare features
X_us = X_us.values
# compute corresponding lamb and U
laplacian_us = compute_laplacian(A_us, normalize=True)
lamb_us, U_us = spectral_decomposition(laplacian_us)
# prepare filter
ideal_lp_us = np.ones((A_us.shape[0],))
ideal_lp_us[lamb_us >= 0.1] = 0 # to tune
#heat kernel
ideal_ht_us=np.exp(-0.1 * lamb_us)
# apply filter(lowpass+heat kernel)
x_lp_us = ideal_graph_filter(y_us.copy(),ideal_lp_us,U_us)
x_ht_us = ideal_graph_filter(y_us.copy(),ideal_ht_us,U_us)
# detemine the number of iteration
iters_us = 20
# determine the percentage of masks
n_us = int(len(y_us)*0.2)
# apply Fourier method
print("Fourier method:")
accuracy_mean_us, accuracy_var_us = pred_iteration(A_us,iters_us, y_us, n_us, x_lp_us)
#accuracy of the heat kernel method
print("With heat kernel:")
accuracy_mean_us_ht, accuracy_var_us_ht = pred_iteration(A_us,iters_us, y_us, n_us, x_ht_us)
```
## III.4 GCN method on similarity graphs
We are now trying to implement GCN methods on the three similarity graphs.
```
import time
import networkx as nx
from sklearn.linear_model import LogisticRegression
import torch
import torch.nn as nn
import torch.nn.functional as F
import dgl.function as fn
from dgl import DGLGraph
from dgl.data.citation_graph import load_cora
np.random.seed(0)
torch.manual_seed(1)
```
**GCN for foreigner**
```
mean_for,var_for = apply_gcn(iters_for,X_for,y_for,A_for,laplacian_for,lamb_for,U_for)
```
**GCN for total**
```
mean_total,var_total = apply_gcn(iters,X_total,y,A,laplacian,lamb,U)
```
**GCN for US citizen**
```
mean_us,var_us = apply_gcn(iters_us,X_us,y_us,A_us,laplacian_us,lamb_us,U_us)
```
At this stage we define a function that plots a signal on a graph using both a Laplacian embedding and the NetworkX force-directed layout. The spring layout is used for the force-directed layout: each node tries to get as far away from the others as it can, while being held back by the edges, which act like springs with a spring constant related to their corresponding weight.
```
graph_tot=nx.from_numpy_matrix(A)
coords_tot = nx.spring_layout(graph_tot) # Force-directed layout.
graph_us = nx.from_numpy_matrix(A_us)
coords_us = nx.spring_layout(graph_us) # Force-directed layout.
graph_for = nx.from_numpy_matrix(A_for)
coords_for = nx.spring_layout(graph_for) # Force-directed layout.
def embedding(adj,U):
D_norm = np.diag(np.clip(np.sum(adj, 1), 1, None)**(-1/2))
network_emb = D_norm @ U[:,[1,3]]
emb_x = network_emb[:,0]
emb_y = network_emb[:,1]
return emb_x,emb_y
def coplot_network_signal(signal,emb_x,emb_y,graph,coords, title='Signal = ...'):
'''
Plots a signal on a graph using both a Laplacian embedding and the NetworkX force-directed layout.
Args:
signal: The signal of each node to plot on the graph
title: Plot title
'''
fig, ax = plt.subplots(1, 2, figsize=(16,7))
vmax = max(-np.nanmin(signal), np.nanmax(signal))
vmin = -vmax
im = ax[0].scatter(emb_x, emb_y, c=signal, cmap='bwr', s=70, edgecolors='black',
vmin=vmin, vmax=vmax)
ax[0].set_title('Laplacian Embedding')
ax[0].set_xlabel('Generalized eigenvector embedding $U_1$')
ax[0].set_ylabel('Generalized eigenvector embedding $U_3$')
nx.draw_networkx_nodes(graph, coords, node_size=60, node_color=signal, cmap='bwr',
edgecolors='black', ax=ax[1], vmin=vmin, vmax=vmax)
nx.draw_networkx_edges(graph, coords, alpha=0.2, ax=ax[1])
ax[1].set_title('NetworkX Force-directed layout')
fig.suptitle(title, fontsize=16)
fig.subplots_adjust(right=0.9)
cbar_ax = fig.add_axes([0.925, 0.15, 0.025, 0.7])
fig.colorbar(im, cax=cbar_ax)
#plt of the us immigration
emb_x_us, emb_y_us=embedding(A_us, U_us)
coplot_network_signal(y_us,emb_x_us, emb_y_us,graph_us,coords_us, title='Signal = truth labels')
#plot of the foreign immigration
emb_x_for, emb_y_for=embedding(A_for, U_for)
coplot_network_signal(y_for,emb_x_for, emb_y_for,graph_for,coords_for, title='Signal = truth labels')
#plt of total immigration
emb_x, emb_y=embedding(A, U)
coplot_network_signal(y,emb_x, emb_y,graph_tot,coords_tot, title='Signal = truth labels')
```
# Mixture of Gaussians
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
Mixture of Gaussians (usually fitted with the Expectation-Maximization, EM, algorithm) is a clustering method. The idea of this model is simple: a given dataset is modeled as being generated from a linear combination (a mixture) of several multivariate Gaussians.
## What are Gaussians?
A Gaussian is a function of the form:
\begin{equation*}
f(x)=a e^{-\frac{(x-b)^2}{2c^2}}
\end{equation*}
where
- $a\in \mathbb{R}$ is the height of the curve's peak
- $b \in \mathbb{R}$ is the position of center of the peak,
- $c \in \mathbb{R}$ is the [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation "The standard deviation σ is a measure that is used to quantify the amount of variation or dispersion of a set of data values") which controls the width of the bell
The function is mathematically convenient and is often used to describe a dataset that follows the normal [distribution](https://en.wikipedia.org/wiki/Frequency_distribution "A distribution is a listing of outcomes of an experiment and the probability associated with each outcome."). The shape of its plot is called a bell curve.
A univariate Gaussian probability density function is fully determined by two parameters ($\mu$ and $\sigma$):
\begin{equation*}
f(x)=\frac{1}{\sigma \sqrt{2 \pi}} e^{-\frac{(x-\mu)^2}{2 \sigma^2}}
\end{equation*}
where
- $\mu$ is the mean of all data points. This specifies the center of the curve
- $\sigma$ is the standard deviation. This describes the "spread" of the data
Here are some plots of the univariate Gaussian distribution for various values of $\mu$ and $\sigma$:
```
X = np.linspace(-6, 12, 100)
def gaussian(X, mu, sigma):
a = 1 / (sigma * np.sqrt(2 * np.pi))
return a * np.exp(-np.power(X - mu, 2.) / (2 * sigma * sigma))
fig, ax = plt.subplots(figsize=(10, 5))
ax.plot(X, gaussian(X, mu=0, sigma=1), label=r'$\mu = 0, \sigma = 1$')
ax.plot(X, gaussian(X, mu=5, sigma=2), label=r'$\mu = 5, \sigma = 2$')
ax.plot(X, gaussian(X, mu=5, sigma=5), label=r'$\mu = 5, \sigma = 5$')
plt.legend()
```
The Gaussian distribution for a vector $x$ with $d$ dimensions (multivariate Gaussian) is defined as follows:
\begin{equation*}
f(x \mid \mu, \Sigma) = \frac{1}{ \sqrt{(2 \pi)^d |\Sigma|} } exp\left( -\frac{1}{2} (x-\mu)^T \Sigma^{-1} (x-\mu) \right)
\end{equation*}
where
- $d$ -- number of dimensions in the vector $x$
- $\mu$ -- the mean
- $\Sigma$ -- the covariance matrix
We can also plot 2D Gaussian distribution:

Source: Wikimedia
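As a quick numerical check of the multivariate formula above (a small sketch using SciPy, not part of the original notebook):
```
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.5],
                  [0.5, 1.0]])
x = np.array([0.5, -0.2])

# Density via SciPy and via the formula above -- the two values should agree
rv = multivariate_normal(mean=mu, cov=Sigma)
d = len(x)
diff = x - mu
manual = np.exp(-0.5 * diff @ np.linalg.inv(Sigma) @ diff) / np.sqrt((2 * np.pi) ** d * np.linalg.det(Sigma))
print(rv.pdf(x), manual)
```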
## Variance-Covariance Matrix
Before we look at Gaussian Mixture Model, let us first try to understand what the variance-covariance matrix is.
Covariance is a measure of how changes in one variable are associated with changes in a second variable; it tells us how two variables behave as a pair. In other words, covariance is a measure of the linear relationship between two variables. We are mainly interested in the sign of a covariance value:
- A positive value indicates a direct or increase linear relationship
- A negative value indicates a decreasing relationship
- Zero (or around zero) indicates that there is probably not a linear relationship between the two variables
We are not interested in the magnitude itself, since covariance does not tell us anything about the strength of the relationship. To measure the strength of the relationship, we need the correlation.
Variance and covariance are often displayed together in a **variance-covariance** matrix aka a covariance matrix. The diagonal of covariance matrix provides the variance of each individual variable, whereas the off-diagonal entries provide the covariance between each pair of variables.
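A tiny illustration (not from the original notebook) showing that `np.cov` places variances on the diagonal and covariances off the diagonal:
```
import numpy as np

rng = np.random.default_rng(42)
height = rng.normal(170, 10, size=500)
weight = 0.5 * height + rng.normal(0, 5, size=500)   # positively related to height
shoe = rng.normal(42, 2, size=500)                   # unrelated to the other two

cov_matrix = np.cov(np.vstack([height, weight, shoe]))
print(cov_matrix)
# Diagonal entries are the variances of height, weight and shoe size;
# the (height, weight) off-diagonal entry is clearly positive,
# while the entries involving shoe size are close to zero.
```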
## Gaussian Mixture Model (GMM)
In a GMM, we assume that each cluster $C_i$ is characterised by a multivariate normal distribution. We can then define a density function $f_i(x)$ that tells us how likely it is that a data point $x$ (a vector with $d$ elements) was generated by the cluster $C_i$:
\begin{equation*}
f_i(x) = f(x \mid \mu_i, \Sigma_i) = \frac{1}{ \sqrt{(2 \pi)^d |\Sigma_i|} } exp\left( -\frac{1}{2} (x-\mu_i)^T \Sigma_i^{-1} (x-\mu_i) \right)
\end{equation*}
where
- $d$ -- number of dimensions
- $\mu_i$ -- the mean of the cluster $C_i$
- $\Sigma_i$ -- the covariance matrix for the cluster $C_i$
Before we can define the function, we need to learn the unknown parameters $\mu_i$ and $\Sigma_i$.
**Our problem is as follows:**
Given a dataset $X=\{x_1, x_2, \cdots, x_N\}$ drawn from an unknown distribution (assumed to be a mixture of Gaussians), estimate the parameters $\theta$ of the GMM that fits the data.
To find the parameters $\theta$, we maximise the likelihood $p(X \mid \theta)$ of the data with respect to the model parameters.
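A minimal sketch of fitting such a model with scikit-learn, which runs the EM algorithm under the hood; the synthetic data and hyper-parameters here are illustrative assumptions:
```
import numpy as np
from sklearn.mixture import GaussianMixture

# Two synthetic 2D clusters
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(loc=[0, 0], scale=1.0, size=(200, 2)),
    rng.normal(loc=[5, 5], scale=1.5, size=(200, 2)),
])

gmm = GaussianMixture(n_components=2, covariance_type='full', random_state=0)
gmm.fit(X)

print("Estimated means:\n", gmm.means_)               # estimates of each mu_i
print("Estimated covariances:\n", gmm.covariances_)   # estimates of each Sigma_i
labels = gmm.predict(X)                               # hard cluster assignment per point
responsibilities = gmm.predict_proba(X[:5])           # soft membership probabilities
```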
# Tutorial 5:
## Random Forest Regression
Random Forest Regression is an ensemble learning method that combines multiple decision tree regressors. The method uses a multitude of decision trees to train and predict values, and it reduces over-fitting compared to a single decision tree model.
For a deeper understanding of Random Forest Regression, use the following resources:
- ***Random Forests***
- ***Understanding Random Forests***
In this section, we will learn some of the key concepts in Random Forest. Inside, the practice segment you will also learn to develop a Random Forest model for Regression Problems.

---------------------------------------------
#### The following concepts are key to understand the Random Forest Algorithm,
- **Bagging**
- **Ensemble Learning**
#### Bagging
Bagging (bootstrap aggregating) is a way to combine weak learners into a strong learner. With bagging, multiple decision tree learners are trained on bootstrap-sampled subsets of the data and their results are aggregated according to the nature of the problem.
For classification, a majority vote over the learners is used, whereas for regression the results are averaged across the decision trees. A minimal sketch of this is shown below.
Random Forests are also used for feature selection: because every tree repeatedly chooses the most informative feature to split on, it is easy for an ensemble model such as Random Forest to rank the most relevant features.
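A tiny, hedged illustration of the bagging idea for regression (bootstrap-sampled trees whose predictions are averaged); the data is synthetic and unrelated to the beer dataset used later:
```
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X_demo = rng.uniform(0, 10, size=(300, 1))
y_demo = np.sin(X_demo).ravel() + rng.normal(0, 0.3, size=300)

n_trees, preds = 25, []
for _ in range(n_trees):
    # Bootstrap sample: draw rows with replacement
    idx = rng.integers(0, len(X_demo), size=len(X_demo))
    tree = DecisionTreeRegressor(max_depth=5).fit(X_demo[idx], y_demo[idx])
    preds.append(tree.predict(X_demo))

# For regression, the ensemble prediction is the average over all trees
bagged_prediction = np.mean(preds, axis=0)
```
Random Forest adds one more ingredient on top of this: at each split, only a random subset of the features is considered.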
#### Ensemble Learning
Ensembling, or ensemble learning, is a technique that combines multiple models to obtain a more robust model. It underlies algorithms such as Random Forest, Gradient Boosting and XGBoost.
## In this practice session, we will learn to code Random Forest Regression.
### We will perform the following steps to build a simple regression model on a beer reviews dataset.
- **Data Preprocessing**
- Importing the libraries.
- Dealing with the categorical variable.
- Classifying dependent and independent variables.
- Splitting the data into a training set and test set.
- Feature scaling.
- **Random Forest Regression**
- Create a Random Forest Regressor.
- Feed the training data to the regression model.
- Predicting the overall review score for the test set.
- Using the RMLSE score as the evaluation metric.
```
import ipywidgets as widgets
from IPython.display import display
style = {'description_width': 'initial'}
#1 Importing essential libraries
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
#2 Importing the dataset
file_name = 'DataSets/beer_data.csv'
dataset = pd.read_csv(file_name)
#Displaying the dataset
dataset.head(8)
# Dealing with Categorical variables
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
#Making sure the type of the review_profilename column is str
dataset["review_profilename"] = dataset["review_profilename"].astype(str)
dataset["review_profilename"] = le.fit_transform(dataset["review_profilename"])
dataset.head()
print(f"Dataset has {dataset.shape[0]} rows and {dataset.shape[1]} columns.")
# separate dependent and independent variables
X = dataset[[col for col in dataset.columns if col not in ('review_overall',)]].values  # independent variables
y = dataset['review_overall'].values #dependent variable
print("\nIdependent Variables :\n\n", X[:5])
print("\nDependent Variable (Score):\n\n", y[:5])
```
## Create Train and Test Sets
```
#4 Creating training set and testing set
from sklearn.model_selection import train_test_split
test_size = widgets.FloatSlider(min=0.01, max=0.6, value=0.2, description="Test Size :", tooltips=['Usually 20-30%'])
display(test_size)
#Divide the dataset into Train and Test sets
X_train, X_test, y_train, y_test = train_test_split(X ,y, test_size=test_size.value, random_state = 0)
print("Training Set :\n----------------\n")
print("X = \n", X_train[:5])
print("y = \n", y_train[:5])
print("\n\nTest Set :\n----------------\n")
print("X = \n",X_test[:5])
print("y = \n", y_test[:5])
print(f"Shape of Training set is {X_train.shape}")
print(f"Shape of Testing set is {X_test.shape}")
```
### Apply Random Forest Regression
```
# import random forest library
from sklearn.ensemble import RandomForestRegressor
# configure params for the model.
max_feat_wig = widgets.ToggleButtons(options=['log2', 'sqrt', 'auto'],
description='Number of features for the best split :',
disabled=False,
style=style)
display(max_feat_wig)
max_depth_wig = widgets.Dropdown(options=[10, 20, 30, 50],
description='The maximum depth of the Tree. :',
style=style)
display(max_depth_wig)
min_split_wig = widgets.Dropdown(options=[100, 200, 300, 500],
description='Minimum Number of Splits. :',
style=style)
display(min_split_wig)
njobs_wig = widgets.Dropdown(options=[('One', 1), ('Two', 2), ('Three', 3), ('All Cores', -1)],
description="Number of CPU Cores :", style=style)
display(njobs_wig)
```
### Predict and Evaluate the Model
```
# Train the Regressor with training set
regressor = RandomForestRegressor(max_features=max_feat_wig.value,
max_depth=max_depth_wig.value,
min_samples_split=min_split_wig.value,
n_jobs=njobs_wig.value)
#fit the linear model
regressor.fit(X_train, y_train)
#7 predict the outcome of test sets
y_Pred = regressor.predict(X_test)
print("\nPredictions = ", y_Pred)
# Calculating a score based on the Root Mean Squared Log Error (RMSLE)
def rmsle(y_test, y_pred):
    error = np.square(np.log10(y_pred + 1) - np.log10(y_test + 1)).mean() ** 0.5
    score = 1 - error
    return score
# Printing the score
print("\n----------------------------\nRMSLE Score = ", rmsle(y_test, y_Pred))
#9 Comparing actual and predicted review scores for the test set
print("\nActual vs Predicted Scores \n------------------------------\n")
error_df = pd.DataFrame({"Actual" : y_test,
"Predicted" : y_Pred,
"Abs. Error" : np.abs(y_test - y_Pred)})
error_df
```
## Feature Importance
```
feat_names = [col for col in dataset.columns if col != 'review_overall']
pd.Series(regressor.feature_importances_, \
index=feat_names).sort_values(ascending=True).plot(kind='barh', figsize=(16,9));
plt.title('Feature Importance Random Forest Regressor');
```
## Actual vs. Predicted
```
#Plotting Actual observation vs Predictions
plt.figure(figsize=(16, 9));
plt.scatter(y_test, y_Pred, s = 70)
plt.xlabel('Actual');
plt.ylabel('Predicted');
plt.grid();
plt.show();
```
```
# import plaidml.keras
# plaidml.keras.install_backend()
# import os
# os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"
# Importing useful libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import Dense, LSTM, Dropout, GRU, Bidirectional, Conv1D, Flatten, MaxPooling1D
from keras.optimizers import SGD
import math
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from keras import optimizers
import time
```
### Data Processing
```
df = pd.read_csv('../data/num_data.csv')
dataset = df
dataset.shape
# Useful functions
def plot_predictions(test, predicted):
plt.figure(figsize=(30, 15));
plt.plot(test, color='red', alpha=0.5, label='Actual PM2.5 Concentration',)
    plt.plot(predicted, color='blue', alpha=0.5, label='Predicted PM2.5 Concentration')
plt.title('PM2.5 Concentration Prediction')
plt.xlabel('Time')
plt.ylabel('PM2.5 Concentration')
plt.legend()
plt.show()
def return_rmse(test,predicted):
rmse = math.sqrt(mean_squared_error(test, predicted))
return rmse
data_size = dataset.shape[0]
train_size=int(data_size * 0.6)
test_size = 100
valid_size = data_size - train_size - test_size
test_next_day = [12, 24, 48]
training_set = dataset[:train_size].iloc[:,4:16].values
valid_set = dataset[train_size:train_size+valid_size].iloc[:,4:16].values
test_set = dataset[data_size-test_size:].iloc[:,4:16].values
y = dataset.iloc[:,4].values
y = y.reshape(-1,1)
y.shape
# Scaling the dataset (fit the scaler on the training set only, then reuse it)
sc = MinMaxScaler(feature_range=(0,1))
training_set_scaled = sc.fit_transform(training_set)
valid_set_scaled = sc.transform(valid_set)
test_set_scaled = sc.transform(test_set)
sc_y = MinMaxScaler(feature_range=(0,1))
y_scaled = sc_y.fit_transform(y)
# split a multivariate sequence into samples
def split_sequences(sequences, n_steps_in, n_steps_out):
X_, y_ = list(), list()
for i in range(len(sequences)):
# find the end of this pattern
end_ix = i + n_steps_in
out_end_ix = end_ix + n_steps_out-1
# check if we are beyond the dataset
if out_end_ix > len(sequences):
break
# gather input and output parts of the pattern
seq_x, seq_y = sequences[i:end_ix, :], sequences[end_ix-1:out_end_ix, 0]
X_.append(seq_x)
y_.append(seq_y)
return np.array(X_), np.array(y_)
n_steps_in = 24 * 7
n_steps_out = 24 * 7
X_train, y_train = split_sequences(training_set_scaled, n_steps_in, n_steps_out)
X_valid, y_valid = split_sequences(valid_set_scaled, n_steps_in, n_steps_out)
X_test, y_test = split_sequences(test_set_scaled, n_steps_in, n_steps_out)
GRU_3 = Sequential()
LSTM_3 = Sequential()
GRU_4 = Sequential()
LSTM_4 = Sequential()
GRU_3.add(GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh'))
GRU_3.add(GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh'))
GRU_3.add(GRU(units=50, activation='tanh'))
GRU_3.add(Dense(units=n_steps_out))
GRU_4.add(GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh'))
GRU_4.add(GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh'))
GRU_4.add(GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh'))
GRU_4.add(GRU(units=50, activation='tanh'))
GRU_4.add(Dense(units=n_steps_out))
LSTM_3.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh'))
LSTM_3.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh'))
LSTM_3.add(LSTM(units=50, activation='tanh'))
LSTM_3.add(Dense(units=n_steps_out))
LSTM_4.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh'))
LSTM_4.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh'))
LSTM_4.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh'))
LSTM_4.add(LSTM(units=50, activation='tanh'))
LSTM_4.add(Dense(units=n_steps_out))
# Compiling the RNNs
adam = optimizers.Adam(lr=0.01)
GRU_3.compile(optimizer=adam,loss='mean_squared_error')
GRU_4.compile(optimizer=adam,loss='mean_squared_error')
LSTM_3.compile(optimizer=adam,loss='mean_squared_error')
LSTM_4.compile(optimizer=adam,loss='mean_squared_error')
LSTM_GRU_LSTM = Sequential()
GRU_LSTM_GRU = Sequential()
LSTM_LSTM_GRU_GRU = Sequential()
GRU_GRU_LSTM_LSTM = Sequential()
LSTM_GRU_LSTM_GRU = Sequential()
GRU_LSTM_GRU_LSTM = Sequential()
LSTM_GRU_LSTM.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh'))
LSTM_GRU_LSTM.add(GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh'))
LSTM_GRU_LSTM.add(LSTM(units=50, activation='tanh'))
LSTM_GRU_LSTM.add(Dense(units=n_steps_out))
GRU_LSTM_GRU.add(GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh'))
GRU_LSTM_GRU.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh'))
GRU_LSTM_GRU.add(GRU(units=50, activation='tanh'))
GRU_LSTM_GRU.add(Dense(units=n_steps_out))
LSTM_LSTM_GRU_GRU.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh'))
LSTM_LSTM_GRU_GRU.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh'))
LSTM_LSTM_GRU_GRU.add(GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh'))
LSTM_LSTM_GRU_GRU.add(GRU(units=50, activation='tanh'))
LSTM_LSTM_GRU_GRU.add(Dense(units=n_steps_out))
GRU_GRU_LSTM_LSTM.add(GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh'))
GRU_GRU_LSTM_LSTM.add(GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh'))
GRU_GRU_LSTM_LSTM.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh'))
GRU_GRU_LSTM_LSTM.add(LSTM(units=50, activation='tanh'))
GRU_GRU_LSTM_LSTM.add(Dense(units=n_steps_out))
LSTM_GRU_LSTM_GRU.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh'))
LSTM_GRU_LSTM_GRU.add(GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh'))
LSTM_GRU_LSTM_GRU.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh'))
LSTM_GRU_LSTM_GRU.add(GRU(units=50, activation='tanh'))
LSTM_GRU_LSTM_GRU.add(Dense(units=n_steps_out))
GRU_LSTM_GRU_LSTM.add(GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh'))
GRU_LSTM_GRU_LSTM.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh'))
GRU_LSTM_GRU_LSTM.add(GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh'))
GRU_LSTM_GRU_LSTM.add(LSTM(units=50, activation='tanh'))
GRU_LSTM_GRU_LSTM.add(Dense(units=n_steps_out))
# Compiling the RNNs
adam = optimizers.Adam(lr=0.01)
LSTM_GRU_LSTM.compile(optimizer=adam,loss='mean_squared_error')
GRU_LSTM_GRU.compile(optimizer=adam,loss='mean_squared_error')
LSTM_LSTM_GRU_GRU.compile(optimizer=adam,loss='mean_squared_error')
GRU_GRU_LSTM_LSTM.compile(optimizer=adam,loss='mean_squared_error')
LSTM_GRU_LSTM_GRU.compile(optimizer=adam,loss='mean_squared_error')
GRU_LSTM_GRU_LSTM.compile(optimizer=adam,loss='mean_squared_error')
RnnModelDict = {'LSTM_3': LSTM_3, 'GRU_3': GRU_3, 'LSTM_4': LSTM_4, 'GRU_4': GRU_4,
'LSTM_GRU_LSTM': LSTM_GRU_LSTM, 'GRU_LSTM_GRU': GRU_LSTM_GRU,
'LSTM_LSTM_GRU_GRU': LSTM_LSTM_GRU_GRU, 'GRU_GRU_LSTM_LSTM': GRU_GRU_LSTM_LSTM}
X_test_24 = X_test[:24]
y_test_24 = y_test[:24]
rmse_df = pd.DataFrame()
for model in RnnModelDict:
regressor = RnnModelDict[model]
print('training start for', model)
start = time.process_time()
regressor.fit(X_train,y_train,epochs=50,batch_size=32)
train_time = round(time.process_time() - start, 2)
print('results for training set')
y_train_pred = regressor.predict(X_train)
# plot_predictions(y_train,y_train_pred)
train_rmse = return_rmse(y_train,y_train_pred)
print('results for valid set')
y_valid_pred = regressor.predict(X_valid)
# plot_predictions(y_valid,y_valid_pred)
valid_rmse = return_rmse(y_valid,y_valid_pred)
# print('results for test set - 24 hours')
# y_test_pred24 = regressor.predict(X_test_24)
# plot_predictions(y_test_24,y_test_pred24)
# test24_rmse = return_rmse(y_test_24,y_test_pred24)
one_df = pd.DataFrame([[model, train_rmse, valid_rmse, train_time]],
columns=['Model', 'train_rmse', 'valid_rmse', 'train_time'])
rmse_df = pd.concat([rmse_df, one_df])
# save the rmse results
rmse_df.to_csv('../deep_rnn_1week.csv')
```
# NLP and Sentiment Classification using a Simple Neural Network
```
import nltk
from nltk.stem import PorterStemmer
from nltk.corpus import stopwords
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('wordnet')
```
### Stemming with `PorterStemmer`
```
paragraph = """I have three visions for India. In 3000 years of our history, people from all over
the world have come and invaded us, captured our lands, conquered our minds.
From Alexander onwards, the Greeks, the Turks, the Moguls, the Portuguese, the British,
the French, the Dutch, all of them came and looted us, took over what was ours.
Yet we have not done this to any other nation. We have not conquered anyone.
We have not grabbed their land, their culture,
their history and tried to enforce our way of life on them.
Why? Because we respect the freedom of others.That is why my
first vision is that of freedom. I believe that India got its first vision of
this in 1857, when we started the War of Independence. It is this freedom that
we must protect and nurture and build on. If we are not free, no one will respect us.
My second vision for India’s development. For fifty years we have been a developing nation.
It is time we see ourselves as a developed nation. We are among the top 5 nations of the world
in terms of GDP. We have a 10 percent growth rate in most areas. Our poverty levels are falling.
Our achievements are being globally recognised today. Yet we lack the self-confidence to
see ourselves as a developed nation, self-reliant and self-assured. Isn’t this incorrect?
I have a third vision. India must stand up to the world. Because I believe that unless India
stands up to the world, no one will respect us. Only strength respects strength. We must be
strong not only as a military power but also as an economic power. Both must go hand-in-hand.
My good fortune was to have worked with three great minds. Dr. Vikram Sarabhai of the Dept. of
space, Professor Satish Dhawan, who succeeded him and Dr. Brahm Prakash, father of nuclear material.
I was lucky to have worked with all three of them closely and consider this the great opportunity of my life.
I see four milestones in my career"""
sentences = nltk.sent_tokenize(paragraph)
stemmer = PorterStemmer()
# Stemming
for i in range(len(sentences)):
words = nltk.word_tokenize(sentences[i])
words = [stemmer.stem(word) for word in words if word not in set(stopwords.words('english'))]
sentences[i] = ' '.join(words)
sentences
```
### Lemmatization with `WordNetLemmatizer`
```
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
sentences = nltk.sent_tokenize(paragraph)
lemmatizer = WordNetLemmatizer()
# Lemmatization
for i in range(len(sentences)):
words = nltk.word_tokenize(sentences[i])
words = [lemmatizer.lemmatize(word) for word in words if word not in set(stopwords.words('english'))]
sentences[i] = ' '.join(words)
sentences
```
### Building the TF-IDF model
```
# Cleaning the texts
import re
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from nltk.stem import WordNetLemmatizer
ps = PorterStemmer()
wordnet=WordNetLemmatizer() #here i have used Lemmatizer
sentences = nltk.sent_tokenize(paragraph)
corpus = []
for i in range(len(sentences)):
review = re.sub('[^a-zA-Z]', ' ', sentences[i])
review = review.lower()
review = review.split()
review = [wordnet.lemmatize(word) for word in review if not word in set(stopwords.words('english'))]
review = ' '.join(review)
corpus.append(review)
corpus
# Creating the TF-IDF model
from sklearn.feature_extraction.text import TfidfVectorizer
cv = TfidfVectorizer()
X = cv.fit_transform(corpus).toarray()
X
X.shape
import numpy as np
# Generate random binary labels (one per sentence) for demonstration
p = np.random.rand(31, ).reshape(31, 1)
for i in range(len(p)):
    if p[i] > 0.5:
        p[i] = 1
    else:
        p[i] = 0
# Convert the float array to int
u = p.astype(int)
u
from sklearn.decomposition import PCA
pca=PCA(n_components=2)
pca.fit(X)
x=pca.transform(X)
x
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test= train_test_split(x,u,test_size=0.1)
x_train.shape
y_train.shape
y_test[3]
from sklearn.linear_model import LogisticRegression
reg= LogisticRegression()
reg.fit(x_train,y_train)
reg.predict(x_test)
y_test
# From here we are going to start a mini project on sentiment classification.
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import confusion_matrix, classification_report, mean_squared_error, mean_absolute_error, r2_score
from keras.models import Sequential
from keras.layers import Dense, Dropout, BatchNormalization, Activation
from keras.optimizers import Adam
from keras.callbacks import EarlyStopping
from keras.utils.np_utils import to_categorical
from sklearn.preprocessing import StandardScaler, LabelEncoder, OneHotEncoder, MinMaxScaler
from sklearn.model_selection import train_test_split, cross_val_score, StratifiedKFold, KFold
import keras.backend as K
from keras.wrappers.scikit_learn import KerasClassifier
import pandas as pd
df=pd.read_csv('train.csv')
df
def missing_value_of_data(data):
total=data.isnull().sum().sort_values(ascending=False)
percentage=round(total/data.shape[0]*100,2)
return pd.concat([total,percentage],axis=1,keys=['Total','Percentage'])
f=missing_value_of_data(df)
f
df=df.dropna()
x=df.iloc[:,2].values
x
# Creating the TF-IDF model
from sklearn.feature_extraction.text import TfidfVectorizer
cv = TfidfVectorizer()
X = cv.fit_transform(x).toarray()
#print(X)
X.shape
X
Y=df.iloc[:,3]
Y
cat=df['sentiment']
w=pd.get_dummies(cat)
w
t=w.iloc[:,[0,1,2]].values
t
from sklearn.decomposition import PCA
pca=PCA(n_components=8)
pca.fit(X)
x=pca.transform(X)
np.array(x[1])
t.shape
from numpy import asarray
from numpy import save
data=x
save('data.npy',data)
save('result.npy',t)
model = Sequential()
model.add(Dense(256, input_shape=(8,), activation='relu'))
model.add(Dense(128, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(16, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(3, activation='softmax'))
model.compile('adam', 'categorical_crossentropy', metrics=['accuracy'])
model.summary()
history = model.fit(x, t, verbose=1, epochs=50)
tf=pd.read_csv('test.csv')
missing = missing_value_of_data(tf)  # check missing values in the test set
print(missing)
tf=tf.dropna()
tf
x_test=tf.iloc[:,1].values
x_test
# Reuse the TF-IDF vectorizer fitted on the training data so that the
# test features share the training vocabulary
X_test = cv.transform(x_test).toarray()
X_test
# Reuse the PCA projection fitted on the training features
x_test = pca.transform(X_test)
x_test.shape
y_pred=model.predict(x_test)
np.argmax(y_pred[2])
Y_test=tf.iloc[:,2]
cate=tf['sentiment']
y_test=pd.get_dummies(cate)
y_test =y_test.values
print(np.argmax(y_pred[4]))
print(np.argmax(y_test[4]))
g=model.predict_classes(x_test)
g.shape
h=np.argmax(y_test,axis=1)
h.shape
confusion_matrix(g,h)
```
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
ax11 = plt.subplot(2, 2, 1)
ax12 = plt.subplot(2, 2, 2)
ax21 = plt.subplot(2, 2, 3)
ax22 = plt.subplot(2, 2, 4)
ax11.set_title("ax11")
ax12.set_title("ax12")
ax21.set_title("ax21")
ax22.set_title("ax22")
plt.tight_layout()
plt.savefig("images/subplots2.png")
plt.show()
fig, axes = plt.subplots(2, 2)
ax11, ax12, ax21, ax22 = axes.ravel()
ax11.set_title("ax11")
ax12.set_title("ax12")
ax21.set_title("ax21")
ax22.set_title("ax22")
plt.figure()
ax11 = plt.subplot(2, 2, 1)
ax12 = plt.subplot(2, 2, 2)
ax2 = plt.subplot(2, 1, 2)
ax11.set_title("ax11")
ax12.set_title("ax12")
ax2.set_title("ax2")
plt.tight_layout()
plt.savefig("images/complex_subplots.png")
plt.show()
sin = np.sin(np.linspace(-4, 4, 100))
plt.figure()
plt.subplot(2, 2, 1)
plt.plot(sin)
plt.subplot(2, 2, 2)
plt.plot(sin, c='r')
plt.subplot(2, 2, 3)
plt.subplot(2, 2, 4)
fig, axes = plt.subplots(2, 2)
axes[0, 0].plot(sin)
axes[0, 1].plot(sin, c='r')
plt.savefig("images/subplots_sin.png", bbox_inches="tight", dpi=300)
fig, ax = plt.subplots(2, 4, figsize=(10, 5))
ax[0, 0].plot(sin)
ax[0, 1].plot(range(100), sin) # same as above
ax[0, 2].plot(np.linspace(-4, 4, 100), sin)
ax[0, 3].plot(sin[::10], 'o')
ax[1, 0].plot(sin, c='r')
ax[1, 1].plot(sin, '--')
ax[1, 2].plot(sin, lw=3)
ax[1, 3].plot(sin[::10], '--o')
plt.tight_layout() # makes stuff fit - usually works
plt.savefig("images/plot.png", bbox_inches="tight", dpi=300)
x = np.random.uniform(size=50)
y = x + np.random.normal(0, .1, size=50)
sizes = np.abs(np.random.normal(scale=20, size=50))
fig, ax = plt.subplots(1, 4, figsize=(10, 3),
subplot_kw={'xticks': (), 'yticks': ()})
ax[0].plot(x, y, 'o')
ax[0].set_title("plot")
ax[1].scatter(x, y)
ax[1].set_title("scatter")
ax[2].scatter(x, y, c=x-y, cmap='bwr', edgecolor='k')
ax[2].set_title("scatter w/ color")
ax[3].scatter(x, y, c=x-y, s=sizes, cmap='bwr', edgecolor='k')
ax[3].set_title("scatter w/ size")
plt.tight_layout()
plt.savefig("images/matplotlib_scatter.png", bbox_inches="tight", dpi=300)
import pandas as pd
df, = pd.read_html("""<table><tbody><tr><th> </th><th> </th><th>Movie</th><th>Distributor</th><th>Gross</th><th>Change</th><th>Thtrs.</th><th>Per Thtr.</th><th>Total Gross</th><th>Days</th></tr>
<tr>
<td class="data">1</td>
<td class="data">(1)</td>
<td><b><a href="/movie/Hidden-Figures#tab=box-office">Hidden Figures</a></b></td>
<td><a href="/market/distributor/20th-Century-Fox">20th Century Fox</a></td>
<td class="data">$33,605,651</td>
<td class="data chart_up">+7%</td>
<td class="data">3,286</td>
<td class="data chart_grey">$10,227</td>
<td class="data"> $67,988,751</td>
<td class="data">26</td>
</tr>
<tr bgcolor="#ffeeff">
<td class="data chart_up"><b>2</b></td>
<td class="data">(5)</td>
<td><b><a href="/movie/La-La-Land#tab=box-office">La La Land</a></b></td>
<td><a href="/market/distributor/Lionsgate">Lionsgate</a></td>
<td class="data">$21,748,928</td>
<td class="data chart_up">+21%</td>
<td class="data">1,848</td>
<td class="data chart_grey">$11,769</td>
<td class="data"> $81,330,497</td>
<td class="data">42</td>
</tr>
<tr>
<td class="data">3</td>
<td class="data">(3)</td>
<td><b><a href="/movie/Sing-(2016)#tab=box-office">Sing</a></b></td>
<td><a href="/market/distributor/Universal">Universal</a></td>
<td class="data">$21,109,675</td>
<td class="data chart_down">-17%</td>
<td class="data">3,431</td>
<td class="data chart_grey">$6,153</td>
<td class="data"> $240,325,195</td>
<td class="data">30</td>
</tr>
<tr bgcolor="#ffeeff">
<td class="data chart_down">4</td>
<td class="data">(2)</td>
<td><b><a href="/movie/Rogue-One-A-Star-Wars-Story#tab=box-office">Rogue One: A Star Wars Story</a></b></td>
<td><a href="/market/distributor/Walt-Disney">Walt Disney</a></td>
<td class="data">$20,073,829</td>
<td class="data chart_down">-33%</td>
<td class="data">3,162</td>
<td class="data chart_grey">$6,348</td>
<td class="data"> $505,165,563</td>
<td class="data">35</td>
</tr>
<tr>
<td class="data chart_up"><b>5</b></td>
<td class="data">(28)</td>
<td><b><a href="/movie/Patriots-Day#tab=box-office">Patriots Day</a></b></td>
<td><a href="/market/distributor/Lionsgate">Lionsgate</a></td>
<td class="data">$16,715,863</td>
<td class="data chart_up">+10,435%</td>
<td class="data">3,120</td>
<td class="data chart_grey">$5,358</td>
<td class="data"> $17,639,945</td>
<td class="data">30</td>
</tr>
<tr bgcolor="#ffeeff">
<td class="data">6</td>
<td class="data"><b>new</b></td>
<td><b><a href="/movie/Bye-Bye-Man-The#tab=box-office">The Bye Bye Man</a></b></td>
<td><a href="/market/distributor/STX-Entertainment">STX Entertainment</a></td>
<td class="data">$16,559,630</td>
<td class="data"> </td>
<td class="data">2,220</td>
<td class="data chart_grey">$7,459</td>
<td class="data"> $16,559,630</td>
<td class="data">7</td>
</tr>
<tr>
<td class="data">7</td>
<td class="data"><b>new</b></td>
<td><b><a href="/movie/Monster-Trucks#tab=box-office">Monster Trucks</a></b></td>
<td><a href="/market/distributor/Paramount-Pictures">Paramount Pictures</a></td>
<td class="data">$15,611,554</td>
<td class="data"> </td>
<td class="data">3,119</td>
<td class="data chart_grey">$5,005</td>
<td class="data"> $15,611,554</td>
<td class="data">7</td>
</tr>
<tr bgcolor="#ffeeff">
<td class="data">8</td>
<td class="data"><b>new</b></td>
<td><b><a href="/movie/Sleepless-(2016)#tab=box-office">Sleepless</a></b></td>
<td><a href="/market/distributor/Open-Road">Open Road</a></td>
<td class="data">$11,486,904</td>
<td class="data"> </td>
<td class="data">1,803</td>
<td class="data chart_grey">$6,371</td>
<td class="data"> $11,486,904</td>
<td class="data">7</td>
</tr>
<tr>
<td class="data chart_down">9</td>
<td class="data">(4)</td>
<td><b><a href="/movie/Underworld-Blood-Wars#tab=box-office">Underworld: Blood Wars</a></b></td>
<td><a href="/market/distributor/Sony-Pictures">Sony Pictures</a></td>
<td class="data">$8,794,841</td>
<td class="data chart_down">-51%</td>
<td class="data">3,070</td>
<td class="data chart_grey">$2,865</td>
<td class="data"> $26,910,959</td>
<td class="data">14</td>
</tr>
<tr bgcolor="#ffeeff">
<td class="data chart_down">10</td>
<td class="data">(6)</td>
<td><b><a href="/movie/Passengers-(2016)#tab=box-office">Passengers</a></b></td>
<td><a href="/market/distributor/Sony-Pictures">Sony Pictures</a></td>
<td class="data">$7,853,457</td>
<td class="data chart_down">-36%</td>
<td class="data">2,447</td>
<td class="data chart_grey">$3,209</td>
<td class="data"> $92,233,188</td>
<td class="data">30</td>
</tr>
<tr>
<td class="data chart_up"><b>11</b></td>
<td class="data">(38)</td>
<td><b><a href="/movie/Live-by-Night#tab=box-office">Live by Night</a></b></td>
<td><a href="/market/distributor/Warner-Bros">Warner Bros.</a></td>
<td class="data">$7,481,705</td>
<td class="data chart_up">+16,845%</td>
<td class="data">2,822</td>
<td class="data chart_grey">$2,651</td>
<td class="data"> $7,667,349</td>
<td class="data">26</td>
</tr>
<tr bgcolor="#ffeeff">
<td class="data chart_down">12</td>
<td class="data">(8)</td>
<td><b><a href="/movie/Moana#tab=box-office">Moana</a></b></td>
<td><a href="/market/distributor/Walt-Disney">Walt Disney</a></td>
<td class="data">$6,968,577</td>
<td class="data chart_down">-16%</td>
<td class="data">1,847</td>
<td class="data chart_grey">$3,773</td>
<td class="data"> $234,274,702</td>
<td class="data">58</td>
</tr>
<tr>
<td class="data chart_down">13</td>
<td class="data">(7)</td>
<td><b><a href="/movie/Why-Him#tab=box-office">Why Him?</a></b></td>
<td><a href="/market/distributor/20th-Century-Fox">20th Century Fox</a></td>
<td class="data">$5,032,411</td>
<td class="data chart_down">-49%</td>
<td class="data">1,977</td>
<td class="data chart_grey">$2,545</td>
<td class="data"> $56,865,458</td>
<td class="data">28</td>
</tr>
<tr bgcolor="#ffeeff">
<td class="data chart_down">14</td>
<td class="data">(9)</td>
<td><b><a href="/movie/Fences#tab=box-office">Fences</a></b></td>
<td><a href="/market/distributor/Paramount-Pictures">Paramount Pictures</a></td>
<td class="data">$4,367,322</td>
<td class="data chart_down">-39%</td>
<td class="data">1,342</td>
<td class="data chart_grey">$3,254</td>
<td class="data"> $47,499,684</td>
<td class="data">35</td>
</tr>
<tr>
<td class="data chart_down">15</td>
<td class="data">(12)</td>
<td><b><a href="/movie/Lion-(Australia)#tab=box-office">Lion</a></b></td>
<td><a href="/market/distributor/Weinstein-Co">Weinstein Co.</a></td>
<td class="data">$3,539,926</td>
<td class="data chart_up">+9%</td>
<td class="data">575</td>
<td class="data chart_grey">$6,156</td>
<td class="data"> $14,582,530</td>
<td class="data">56</td>
</tr>
<tr bgcolor="#ffeeff">
<td class="data chart_up"><b>16</b></td>
<td class="data">(20)</td>
<td><b><a href="/movie/Silence-(2016)#tab=box-office">Silence</a></b></td>
<td><a href="/market/distributor/Paramount-Pictures">Paramount Pictures</a></td>
<td class="data">$2,926,937</td>
<td class="data chart_up">+319%</td>
<td class="data">747</td>
<td class="data chart_grey">$3,918</td>
<td class="data"> $4,008,701</td>
<td class="data">28</td>
</tr>
<tr>
<td class="data chart_down">17</td>
<td class="data">(11)</td>
<td><b><a href="/movie/Manchester-by-the-Sea#tab=box-office">Manchester-by-the Sea</a></b></td>
<td><a href="/market/distributor/Roadside-Attractions">Roadside Attractions</a></td>
<td class="data">$2,786,718</td>
<td class="data chart_down">-27%</td>
<td class="data">726</td>
<td class="data chart_grey">$3,838</td>
<td class="data"> $37,948,496</td>
<td class="data">63</td>
</tr>
<tr bgcolor="#ffeeff">
<td class="data chart_down">18</td>
<td class="data">(10)</td>
<td><b><a href="/movie/Assassins-Creed#tab=box-office">Assassin’s Creed</a></b></td>
<td><a href="/market/distributor/20th-Century-Fox">20th Century Fox</a></td>
<td class="data">$1,979,315</td>
<td class="data chart_down">-66%</td>
<td class="data">968</td>
<td class="data chart_grey">$2,045</td>
<td class="data"> $53,482,956</td>
<td class="data">30</td>
</tr>
<tr>
<td class="data chart_up"><b>19</b></td>
<td class="data">(21)</td>
<td><b><a href="/movie/Moonlight-(2015)#tab=box-office">Moonlight</a></b></td>
<td><a href="/market/distributor/A24">A24</a></td>
<td class="data">$1,693,623</td>
<td class="data chart_up">+185%</td>
<td class="data">582</td>
<td class="data chart_grey">$2,910</td>
<td class="data"> $15,192,382</td>
<td class="data">91</td>
</tr>
<tr bgcolor="#ffeeff">
<td class="data chart_down">20</td>
<td class="data">(14)</td>
<td><b><a href="/movie/Fantastic-Beasts-and-Where-to-Find-Them#tab=box-office">Fantastic Beasts and Where …</a></b></td>
<td><a href="/market/distributor/Warner-Bros">Warner Bros.</a></td>
<td class="data">$1,406,667</td>
<td class="data chart_down">-46%</td>
<td class="data">502</td>
<td class="data chart_grey">$2,802</td>
<td class="data"> $231,277,992</td>
<td class="data">63</td>
</tr>
<tr>
<td class="data chart_down">21</td>
<td class="data">(16)</td>
<td><b><a href="/movie/Jackie-(2016)#tab=box-office">Jackie</a></b></td>
<td><a href="/market/distributor/Fox-Searchlight">Fox Searchlight</a></td>
<td class="data">$1,149,751</td>
<td class="data chart_down">-26%</td>
<td class="data">353</td>
<td class="data chart_grey">$3,257</td>
<td class="data"> $10,902,840</td>
<td class="data">49</td>
</tr>
<tr bgcolor="#ffeeff">
<td class="data chart_down">22</td>
<td class="data">(13)</td>
<td><b><a href="/movie/Monster-Calls-A#tab=box-office">A Monster Calls</a></b></td>
<td><a href="/market/distributor/Focus-Features">Focus Features</a></td>
<td class="data">$887,171</td>
<td class="data chart_down">-68%</td>
<td class="data">1,513</td>
<td class="data chart_grey">$586</td>
<td class="data"> $3,710,799</td>
<td class="data">28</td>
</tr>
<tr>
<td class="data chart_down">23</td>
<td class="data">(17)</td>
<td><b><a href="/movie/Arrival-(2016)#tab=box-office">Arrival</a></b></td>
<td><a href="/market/distributor/Paramount-Pictures">Paramount Pictures</a></td>
<td class="data">$829,052</td>
<td class="data chart_down">-34%</td>
<td class="data">247</td>
<td class="data chart_grey">$3,356</td>
<td class="data"> $95,349,632</td>
<td class="data">70</td>
</tr>
<tr bgcolor="#ffeeff">
<td class="data chart_down">24</td>
<td class="data">(22)</td>
<td><b><a href="/movie/Trolls#tab=box-office">Trolls</a></b></td>
<td><a href="/market/distributor/20th-Century-Fox">20th Century Fox</a></td>
<td class="data">$639,148</td>
<td class="data chart_up">+15%</td>
<td class="data">262</td>
<td class="data chart_grey">$2,439</td>
<td class="data"> $152,041,839</td>
<td class="data">77</td>
</tr>
<tr>
<td class="data chart_down">25</td>
<td class="data">(19)</td>
<td><b><a href="/movie/Dangal-(India)#tab=box-office">Dangal</a></b></td>
<td><a href="/market/distributor/UTV-Communications">UTV Communications</a></td>
<td class="data">$537,498</td>
<td class="data chart_down">-52%</td>
<td class="data">95</td>
<td class="data chart_grey">$5,658</td>
<td class="data"> $12,008,183</td>
<td class="data">30</td>
</tr>
<tr bgcolor="#ffeeff">
<td class="data">26</td>
<td class="data">(26)</td>
<td><b><a href="/movie/20th-Century-Women#tab=box-office">20th Century Women</a></b></td>
<td><a href="/market/distributor/A24">A24</a></td>
<td class="data">$482,993</td>
<td class="data chart_up">+153%</td>
<td class="data">29</td>
<td class="data chart_grey">$16,655</td>
<td class="data"> $926,641</td>
<td class="data">26</td>
</tr>
<tr>
<td class="data">27</td>
<td class="data"><b>new</b></td>
<td><b><a href="/movie/Ok-Jaanu-(India)#tab=box-office">Ok Jaanu</a></b></td>
<td><a href="/market/distributor/FIP">FIP</a></td>
<td class="data">$312,090</td>
<td class="data"> </td>
<td class="data">121</td>
<td class="data chart_grey">$2,579</td>
<td class="data"> $312,090</td>
<td class="data">7</td>
</tr>
<tr bgcolor="#ffeeff">
<td class="data chart_down">28</td>
<td class="data">(23)</td>
<td><b><a href="/movie/Doctor-Strange-(2016)#tab=box-office">Doctor Strange</a></b></td>
<td><a href="/market/distributor/Walt-Disney">Walt Disney</a></td>
<td class="data">$309,972</td>
<td class="data chart_down">-30%</td>
<td class="data">162</td>
<td class="data chart_grey">$1,913</td>
<td class="data"> $231,345,380</td>
<td class="data">77</td>
</tr>
<tr>
<td class="data chart_down">29</td>
<td class="data">(15)</td>
<td><b><a href="/movie/Collateral-Beauty#tab=box-office">Collateral Beauty</a></b></td>
<td><a href="/market/distributor/Warner-Bros">Warner Bros.</a></td>
<td class="data">$305,013</td>
<td class="data chart_down">-83%</td>
<td class="data">254</td>
<td class="data chart_grey">$1,201</td>
<td class="data"> $30,621,252</td>
<td class="data">35</td>
</tr>
<tr bgcolor="#ffeeff">
<td class="data chart_down">30</td>
<td class="data">(24)</td>
<td><b><a href="/movie/Hacksaw-Ridge#tab=box-office">Hacksaw Ridge</a></b></td>
<td><a href="/market/distributor/Lionsgate">Lionsgate</a></td>
<td class="data">$208,955</td>
<td class="data chart_down">-34%</td>
<td class="data">172</td>
<td class="data chart_grey">$1,215</td>
<td class="data"> $65,411,438</td>
<td class="data">77</td>
</tr>
<tr>
<td class="data chart_down">31</td>
<td class="data">(18)</td>
<td><b><a href="/movie/Office-Christmas-Party#tab=box-office">Office Christmas Party</a></b></td>
<td><a href="/market/distributor/Paramount-Pictures">Paramount Pictures</a></td>
<td class="data">$165,146</td>
<td class="data chart_down">-86%</td>
<td class="data">141</td>
<td class="data chart_grey">$1,171</td>
<td class="data"> $54,648,213</td>
<td class="data">42</td>
</tr>
<tr bgcolor="#ffeeff">
<td class="data chart_down">32</td>
<td class="data">(30)</td>
<td><b><a href="/movie/Allied#tab=box-office">Allied</a></b></td>
<td><a href="/market/distributor/Paramount-Pictures">Paramount Pictures</a></td>
<td class="data">$161,201</td>
<td class="data chart_up">+20%</td>
<td class="data">174</td>
<td class="data chart_grey">$926</td>
<td class="data"> $40,015,450</td>
<td class="data">58</td>
</tr>
<tr>
<td class="data chart_down">33</td>
<td class="data">(29)</td>
<td><b><a href="/movie/Nocturnal-Animals#tab=box-office">Nocturnal Animals</a></b></td>
<td><a href="/market/distributor/Focus-Features">Focus Features</a></td>
<td class="data">$112,841</td>
<td class="data chart_down">-25%</td>
<td class="data">54</td>
<td class="data chart_grey">$2,090</td>
<td class="data"> $10,604,004</td>
<td class="data">63</td>
</tr>
<tr bgcolor="#ffeeff">
<td class="data chart_up"><b>34</b></td>
<td class="data">(36)</td>
<td><b><a href="/movie/Neruda-(Chile)#tab=box-office">Neruda</a></b></td>
<td><a href="/market/distributor/Orchard-The">The Orchard</a></td>
<td class="data">$69,515</td>
<td class="data chart_up">+25%</td>
<td class="data">15</td>
<td class="data chart_grey">$4,634</td>
<td class="data"> $296,307</td>
<td class="data">35</td>
</tr>
<tr>
<td class="data chart_down">35</td>
<td class="data">(34)</td>
<td><b><a href="/movie/Miss-Peregrines-Home-for-Peculiar-Children#tab=box-office">Miss Peregrine’s Home for…</a></b></td>
<td><a href="/market/distributor/20th-Century-Fox">20th Century Fox</a></td>
<td class="data">$68,755</td>
<td class="data chart_down">-4%</td>
<td class="data">84</td>
<td class="data chart_grey">$819</td>
<td class="data"> $87,170,123</td>
<td class="data">112</td>
</tr>
<tr bgcolor="#ffeeff">
<td class="data chart_down">36</td>
<td class="data">(33)</td>
<td><b><a href="/movie/Loving-(2016)#tab=box-office">Loving</a></b></td>
<td><a href="/market/distributor/Focus-Features">Focus Features</a></td>
<td class="data">$56,241</td>
<td class="data chart_down">-28%</td>
<td class="data">41</td>
<td class="data chart_grey">$1,372</td>
<td class="data"> $7,679,676</td>
<td class="data">77</td>
</tr>
<tr>
<td class="data chart_down">37</td>
<td class="data">(27)</td>
<td><b><a href="/movie/Railroad-Tigers#tab=box-office">Railroad Tigers</a></b></td>
<td><a href="/market/distributor/Well-Go-USA">Well Go USA</a></td>
<td class="data">$39,136</td>
<td class="data chart_down">-76%</td>
<td class="data">13</td>
<td class="data chart_grey">$3,010</td>
<td class="data"> $205,655</td>
<td class="data">14</td>
</tr>
<tr bgcolor="#ffeeff">
<td class="data chart_up"><b>38</b></td>
<td class="data">(40)</td>
<td><b><a href="/movie/avenir-L-(france)#tab=box-office">Things to Come</a></b></td>
<td><a href="/market/distributor/IFC-Films">IFC Films</a></td>
<td class="data">$30,237</td>
<td class="data chart_down">-10%</td>
<td class="data">20</td>
<td class="data chart_grey">$1,512</td>
<td class="data"> $326,869</td>
<td class="data">49</td>
</tr>
<tr>
<td class="data">39</td>
<td class="data">(39)</td>
<td><b><a href="/movie/A-Ga-ssi-(S-Korea)#tab=box-office">The Handmaiden</a></b></td>
<td><a href="/market/distributor/Magnolia-Pictures">Magnolia Pictures</a></td>
<td class="data">$29,808</td>
<td class="data chart_down">-18%</td>
<td class="data">16</td>
<td class="data chart_grey">$1,863</td>
<td class="data"> $1,961,089</td>
<td class="data">91</td>
</tr>
<tr bgcolor="#ffeeff">
<td class="data">40</td>
<td class="data"><b>new</b></td>
<td><b><a href="/movie/Enas-Allos-Kosmos-(Greece)#tab=box-office">Worlds Apart</a></b></td>
<td><a href="/market/distributor/Cinema-Libre">Cinema Libre</a></td>
<td class="data">$25,007</td>
<td class="data"> </td>
<td class="data">1</td>
<td class="data chart_grey">$25,007</td>
<td class="data"> $25,007</td>
<td class="data">7</td>
</tr>
<tr>
<td class="data">41</td>
<td class="data">(41)</td>
<td><b><a href="/movie/Sully#tab=box-office">Sully</a></b></td>
<td><a href="/market/distributor/Warner-Bros">Warner Bros.</a></td>
<td class="data">$19,427</td>
<td class="data chart_down">-18%</td>
<td class="data">35</td>
<td class="data chart_grey">$555</td>
<td class="data"> $125,059,249</td>
<td class="data">133</td>
</tr>
<tr bgcolor="#ffeeff">
<td class="data">42</td>
<td class="data"><b>new</b></td>
<td><b><a href="/movie/Jeder-stirbt-fur-sich-allein-(Germany)#tab=box-office">Alone in Berlin</a></b></td>
<td><a href="/market/distributor/IFC-Films">IFC Films</a></td>
<td class="data">$14,502</td>
<td class="data"> </td>
<td class="data">2</td>
<td class="data chart_grey">$7,251</td>
<td class="data"> $14,502</td>
<td class="data">7</td>
</tr>
<tr>
<td class="data">43</td>
<td class="data"><b>new</b></td>
<td><b><a href="/movie/Vince-Giordano-Theres-a-Future-in-the-Past#tab=box-office">Vince Giordano: There’s a…</a></b></td>
<td><a href="/market/distributor/First-Run-Features">First Run Features</a></td>
<td class="data">$10,625</td>
<td class="data"> </td>
<td class="data">1</td>
<td class="data chart_grey">$10,625</td>
<td class="data"> $10,625</td>
<td class="data">7</td>
</tr>
<tr bgcolor="#ffeeff">
<td class="data chart_up"><b>44</b></td>
<td class="data">(46)</td>
<td><b><a href="/movie/tout-nouveau-testament-Le-(Belgium)#tab=box-office">The Brand New Testament</a></b></td>
<td><a href="/market/distributor/Music-Box-Films">Music Box Films</a></td>
<td class="data">$8,835</td>
<td class="data chart_down">-31%</td>
<td class="data">13</td>
<td class="data chart_grey">$680</td>
<td class="data"> $103,977</td>
<td class="data">42</td>
</tr>
<tr>
<td class="data chart_down">45</td>
<td class="data">(42)</td>
<td><b><a href="/movie/Bad-Santa-2#tab=box-office">Bad Santa 2</a></b></td>
<td><a href="/market/distributor/Broad-Green-Pictures">Broad Green Pictures</a></td>
<td class="data">$5,777</td>
<td class="data chart_down">-74%</td>
<td class="data">19</td>
<td class="data chart_grey">$304</td>
<td class="data"> $17,781,710</td>
<td class="data">58</td>
</tr>
<tr bgcolor="#ffeeff">
<td class="data chart_down">46</td>
<td class="data">(43)</td>
<td><b><a href="/movie/man-som-heter-Ove-En#tab=box-office">A Man Called Ove</a></b></td>
<td><a href="/market/distributor/Music-Box-Films">Music Box Films</a></td>
<td class="data">$5,635</td>
<td class="data chart_down">-69%</td>
<td class="data">7</td>
<td class="data chart_grey">$805</td>
<td class="data"> $3,375,381</td>
<td class="data">112</td>
</tr>
<tr>
<td class="data chart_down">47</td>
<td class="data">(45)</td>
<td><b><a href="/movie/Trilogie-Marseillaise-La-(France)#tab=box-office">The Marseille Trilogy</a></b></td>
<td><a href="/market/distributor/Janus-Films">Janus Films</a></td>
<td class="data">$4,173</td>
<td class="data chart_down">-71%</td>
<td class="data">1</td>
<td class="data chart_grey">$4,173</td>
<td class="data"> $21,513</td>
<td class="data">21</td>
</tr>
<tr bgcolor="#ffeeff">
<td class="data">48</td>
<td class="data">(48)</td>
<td><b><a href="/movie/Saisons-Les-(France)#tab=box-office">Seasons</a></b></td>
<td><a href="/market/distributor/Music-Box-Films">Music Box Films</a></td>
<td class="data">$3,763</td>
<td class="data chart_down">-60%</td>
<td class="data">4</td>
<td class="data chart_grey">$941</td>
<td class="data"> $126,431</td>
<td class="data">70</td>
</tr>
<tr>
<td class="data chart_down">49</td>
<td class="data">(44)</td>
<td><b><a href="/movie/Tanpopo#tab=box-office">Tampopo</a></b></td>
<td><a href="/market/distributor/Janus-Films">Janus Films</a></td>
<td class="data">$2,716</td>
<td class="data chart_down">-85%</td>
<td class="data">1</td>
<td class="data chart_grey">$2,716</td>
<td class="data"> $203,791</td>
<td class="data">91</td>
</tr>
<tr bgcolor="#ffeeff">
<td class="data chart_up"><b>50</b></td>
<td class="data">(55)</td>
<td><b><a href="/movie/Bad-Kids-The-(2016)#tab=box-office">The Bad Kids</a></b></td>
<td><a href="/market/distributor/FilmRise">FilmRise</a></td>
<td class="data">$1,863</td>
<td class="data chart_up">+34%</td>
<td class="data">6</td>
<td class="data chart_grey">$311</td>
<td class="data"> $6,226</td>
<td class="data">35</td>
</tr>
<tr>
<td class="data">51</td>
<td class="data">(51)</td>
<td><b><a href="/movie/Harry-Benson-Shoot-First#tab=box-office">Harry Benson: Shoot First</a></b></td>
<td><a href="/market/distributor/Magnolia-Pictures">Magnolia Pictures</a></td>
<td class="data">$1,344</td>
<td class="data chart_down">-37%</td>
<td class="data">5</td>
<td class="data chart_grey">$269</td>
<td class="data"> $17,184</td>
<td class="data">42</td>
</tr>
<tr bgcolor="#ffeeff">
<td class="data chart_up"><b>52</b></td>
<td class="data">(53)</td>
<td><b><a href="/movie/Ardennen-D-(Belgium)#tab=box-office">The Ardennes</a></b></td>
<td><a href="/market/distributor/Film-Movement">Film Movement</a></td>
<td class="data">$976</td>
<td class="data chart_down">-32%</td>
<td class="data">2</td>
<td class="data chart_grey">$488</td>
<td class="data"> $2,415</td>
<td class="data">14</td>
</tr>
<tr>
<td class="data chart_down">53</td>
<td class="data">(50)</td>
<td><b><a href="/movie/Busanhaeng-(south-korea)#tab=box-office">Train to Busan</a></b></td>
<td><a href="/market/distributor/Well-Go-USA">Well Go USA</a></td>
<td class="data">$799</td>
<td class="data chart_down">-66%</td>
<td class="data">2</td>
<td class="data chart_grey">$400</td>
<td class="data"> $2,128,963</td>
<td class="data">182</td>
</tr>
</tbody></table>""", header=0)
df.Gross = df.Gross.str.replace("[$,]", "", regex=True).astype("int")
df.head()
gross = df.Gross.values[20:]
movie = df.Movie.values[20:]
plt.figure()
plt.bar(range(len(gross)), gross)
plt.xticks(range(len(gross)), movie, rotation=90)
plt.tight_layout()
plt.savefig("images/matplotlib_bar", bbox_inches="tight", dpi=300)
plt.figure()
plt.barh(range(len(gross)), gross)
plt.yticks(range(len(gross)), movie, fontsize=8)
ax = plt.gca()
ax.set_frame_on(False)
ax.tick_params(length=0)
plt.tight_layout()
plt.savefig("images/matplotlib_barh", bbox_inches="tight", dpi=300)
data1 = np.random.laplace(loc=-2, size=100)
data2 = np.random.laplace(loc=5, size=100)
data3 = np.random.laplace(scale=6, size=200)
data4 = np.random.laplace(loc=-15, scale=.1, size=10)
data = np.hstack([data1, data2, data3, data4, [50]])
fig, ax = plt.subplots(1, 3, figsize=(20, 3))
ax[0].hist(data)
ax[1].hist(data, bins=100)
ax[2].hist(data, bins="auto")
plt.savefig("images/matplotlib_histogram.png", bbox_inches="tight", dpi=300)
from matplotlib.cbook import get_sample_data
f = get_sample_data("axes_grid/bivariate_normal.npy", asfileobj=False)
np.set_printoptions(suppress=True, precision=2)
arr = np.load(f)
fig, ax = plt.subplots(2, 2)
im1 = ax[0, 0].imshow(arr)
ax[0, 1].imshow(arr, interpolation='bilinear')
im3 = ax[1, 0].imshow(arr, cmap='gray')
im4 = ax[1, 1].imshow(arr, cmap='bwr', vmin=-1.5, vmax=1.5)
plt.colorbar(im1, ax=ax[0, 0])
plt.colorbar(im3, ax=ax[1, 0])
plt.colorbar(im4, ax=ax[1, 1])
plt.savefig("images/matplotlib_heatmap.png", bbox_inches="tight", dpi=300)
x1, y1 = 1 / np.random.uniform(-1000, 100, size=(2, 10000))
x2, y2 = np.dot(np.random.uniform(size=(2, 2)), np.random.normal(size=(2, 1000)))
x = np.hstack([x1, x2])
y = np.hstack([y1, y2])
plt.figure()
plt.xlim(-1, 1)
plt.ylim(-1, 1)
plt.scatter(x, y)
fig, ax = plt.subplots(1, 3, figsize=(10, 4),
subplot_kw={'xlim': (-1, 1),
'ylim': (-1, 1)})
ax[0].scatter(x, y)
ax[1].scatter(x, y, alpha=.1)
ax[2].scatter(x, y, alpha=.01)
plt.savefig("images/matplotlib_overplotting.png", bbox_inches="tight", dpi=300)
plt.figure()
plt.hexbin(x, y, bins='log', extent=(-1, 1, -1, 1))
plt.colorbar()
plt.axis("off")
plt.savefig("images/matplotlib_hexgrid.png", bbox_inches="tight", dpi=300)
```
# Twinx
```
df = pd.DataFrame({'Math PhDs awareded (US)': {'2000': 1050,
'2001': 1010,
'2002': 919,
'2003': 993,
'2004': 1076,
'2005': 1205,
'2006': 1325,
'2007': 1393,
'2008': 1399,
'2009': 1554},
'Total revenue by arcades (US)': {'2000': 1196000000,
'2001': 1176000000,
'2002': 1269000000,
'2003': 1240000000,
'2004': 1307000000,
'2005': 1435000000,
'2006': 1601000000,
'2007': 1654000000,
'2008': 1803000000,
'2009': 1734000000}})
# could also do df.plot()
phds = df['Math PhDs awareded (US)']
revenue = df['Total revenue by arcades (US)']
years = df.index
plt.figure()
ax = plt.gca()
ax.plot(years, phds, label="math PhDs awarded")
ax.plot(years, revenue, c='r', label="revenue by arcades")
ax.set_ylabel("Math PhDs awarded")
ax.set_ylabel("revenue by arcades")
ax.legend()
plt.savefig("images/matplotlib_twinx1.png", bbox_inches="tight", dpi=300)
plt.figure()
ax1 = plt.gca()
line1, = ax1.plot(years, phds)
ax2 = ax1.twinx()
line2, = ax2.plot(years, revenue, c='r')
ax1.set_ylabel("Math PhDs awarded")
ax2.set_ylabel("revenue by arcades")
ax2.legend((line1, line2), ("math PhDs awarded", "revenue by arcades"))
plt.savefig("images/matplotlib_twinx2.png", bbox_inches="tight", dpi=300)
# DONT!
# This import registers the 3D projection, but is otherwise unused.
from mpl_toolkits.mplot3d import Axes3D # noqa: F401 unused import
import matplotlib.pyplot as plt
import numpy as np
# Fixing random state for reproducibility
np.random.seed(19680801)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
x, y = np.random.rand(2, 100) * 4
hist, xedges, yedges = np.histogram2d(x, y, bins=4, range=[[0, 4], [0, 4]])
# Construct arrays for the anchor positions of the 16 bars.
xpos, ypos = np.meshgrid(xedges[:-1] + 0.25, yedges[:-1] + 0.25, indexing="ij")
xpos = xpos.ravel()
ypos = ypos.ravel()
zpos = 0
# Construct arrays with the dimensions for the 16 bars.
dx = dy = 0.5 * np.ones_like(zpos)
dz = hist.ravel()
ax.bar3d(xpos, ypos, zpos, dx, dy, dz, color='b', zsort='average')
plt.savefig("images/3dhist.png", dpi=300)
import numpy as np
import matplotlib.pyplot as plt
# This import registers the 3D projection, but is otherwise unused.
from mpl_toolkits.mplot3d import Axes3D # noqa: F401 unused import
def lorenz(x, y, z, s=10, r=28, b=2.667):
'''
Given:
x, y, z: a point of interest in three dimensional space
s, r, b: parameters defining the lorenz attractor
Returns:
x_dot, y_dot, z_dot: values of the lorenz attractor's partial
derivatives at the point x, y, z
'''
x_dot = s*(y - x)
y_dot = r*x - y - x*z
z_dot = x*y - b*z
return x_dot, y_dot, z_dot
dt = 0.01
num_steps = 10000
# Need one more for the initial values
xs = np.empty((num_steps + 1,))
ys = np.empty((num_steps + 1,))
zs = np.empty((num_steps + 1,))
# Set initial values
xs[0], ys[0], zs[0] = (0., 1., 1.05)
# Step through "time", calculating the partial derivatives at the current point
# and using them to estimate the next point
for i in range(num_steps):
x_dot, y_dot, z_dot = lorenz(xs[i], ys[i], zs[i])
xs[i + 1] = xs[i] + (x_dot * dt)
ys[i + 1] = ys[i] + (y_dot * dt)
zs[i + 1] = zs[i] + (z_dot * dt)
# Plot
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot(xs, ys, zs, lw=0.5)
ax.set_xlabel("X Axis")
ax.set_ylabel("Y Axis")
ax.set_zlabel("Z Axis")
ax.set_title("Lorenz Attractor")
plt.savefig("images/lorenz.png", dpi=300)
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
iris = load_iris()
X, y = iris.data, iris.target
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(X[:, 1], X[:, 2], X[:, 3], c=y)
plt.savefig("images/3dscatter.png", dpi=300)
```
# Learning to pivot, part 3
## Independence $\neq$ non-significant
This example demonstrates that statistical independence of the classifier's predictions from the nuisance parameter does not imply that the classifier does not use the nuisance parameter.
Main paper: https://arxiv.org/abs/1611.01046
```
try:
import mlhep2019
except ModuleNotFoundError:
import subprocess as sp
result = sp.run(
['pip', 'install', 'git+https://github.com/yandexdataschool/mlhep2019.git'],
stdout=sp.PIPE, stderr=sp.PIPE
)
if result.returncode != 0:
print(result.stdout.decode('utf-8'))
print(result.stderr.decode('utf-8'))
import mlhep2019
%matplotlib inline
import matplotlib.pyplot as plt
from tqdm import tqdm_notebook as tqdm
from IPython import display
import numpy as np
import torch
import torch.utils.data
from mlhep2019.pivot import *
for i in range(torch.cuda.device_count()):
print(torch.cuda.get_device_name(i))
if torch.cuda.is_available():
device = torch.device("cuda:0")
else:
device = "cpu"
import warnings
warnings.warn('Using CPU!')
```
## Toy data
```
def get_data(size = 1024):
labels = np.random.binomial(1, 0.5, size=(size, )).astype('float32')
xs = np.random.uniform(0.1, 0.9, size=(size, ))
xs = xs + 0.1 * np.sign(xs - 0.5)
ys = np.where(
labels > 0.5,
xs + np.random.uniform(-1, 1, size=(size, )) * (xs - 0.5) ** 2 ,
1 - xs + np.random.uniform(-1, 1, size=(size, )) * (xs - 0.5) ** 2,
)
data = np.stack([xs, ys], axis=1).astype('float32')
return data, labels, xs.astype('float32')
data_train, labels_train, nuisance_train = get_data(size=1024)
data_test, labels_test, nuisance_test = get_data(size=128 * 1024)
plt.scatter(data_train[labels_train < 0.5, 0], data_train[labels_train < 0.5, 1], label='class 0')
plt.scatter(data_train[labels_train > 0.5, 0], data_train[labels_train > 0.5, 1], label='class 1')
plt.xlabel('$x_1$', fontsize=14)
plt.ylabel('$x_2$', fontsize=14)
plt.title('Toy data')
plt.legend()
plt.show()
```
## Utility functions
```
xs, ys, grid = make_grid(data_train)
X_train, y_train, z_train = [
torch.from_numpy(tensor).to(device)
for tensor in (data_train, labels_train, nuisance_train)
]
X_test, y_test, z_test = [
torch.from_numpy(tensor).to(device)
for tensor in (data_test, labels_test, nuisance_test)
]
G = torch.from_numpy(grid).to(device)
dataset_test = torch.utils.data.TensorDataset(X_test, y_test, z_test)
dataloader_test = torch.utils.data.DataLoader(dataset_test, batch_size=1024, shuffle=False)
dataset_grid = torch.utils.data.TensorDataset(G)
dataloader_grid = torch.utils.data.DataLoader(dataset_grid, batch_size=1024, shuffle=False)
def get_predictions(model, loader):
with torch.no_grad():
return np.concatenate([
torch.sigmoid(model(batch[0])).to('cpu').detach().numpy()
for batch in loader
], axis=0)
test_predictions = lambda model: get_predictions(model, dataloader_test)
grid_predictions = lambda model: get_predictions(model, dataloader_grid)
```
## Unmodified classification
Here we define a simple classifier:
```
Input(2 units) -> DenseLayer(64 units) -> DenseLayer(32 units) -> DenseLayer(1 unit)
```
**Note:** we don't use any activation function on the output layer; instead we use the `BCEWithLogitsLoss` loss, which combines the sigmoid with the cross-entropy and is numerically more stable.
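As a quick, standalone sanity check (not part of the training code below), the logits-based loss agrees numerically with applying a sigmoid followed by `BCELoss`, while avoiding the explicit sigmoid:
```
# Standalone sketch: both losses give (nearly) the same value, but the
# logits-based version is computed in a numerically safer way.
import torch

logits = torch.tensor([8.0, -8.0, 0.5])
targets = torch.tensor([1.0, 0.0, 1.0])

with_logits = torch.nn.BCEWithLogitsLoss()(logits, targets)
plain = torch.nn.BCELoss()(torch.sigmoid(logits), targets)
print(with_logits.item(), plain.item())  # the two values agree up to floating-point error
```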
```
class Classifier(torch.nn.Module):
def __init__(self, activation=torch.nn.Softplus()):
super(Classifier, self).__init__()
self.layer1 = torch.nn.Linear(2, 64)
self.layer2 = torch.nn.Linear(64, 32)
self.head = torch.nn.Linear(32, 1)
self.activation = activation
def forward(self, X):
result = X
result = self.activation(self.layer1(result))
result = self.activation(self.layer2(result))
return torch.flatten(
self.head(result)
)
classifier = Classifier().to(device)
loss_fn_classification = torch.nn.BCEWithLogitsLoss()
num_epoches = 128
num_batches = data_train.shape[0] // 32
losses = np.zeros(shape=(num_epoches, num_batches))
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
for i in tqdm(range(num_epoches)):
for j in range(num_batches):
optimizer.zero_grad()
indx = torch.randint(0, data_train.shape[0], size=(32, ))
X_batch, y_batch = X_train[indx], y_train[indx]
predictions = classifier(X_batch)
loss = loss_fn_classification(predictions, y_batch)
losses[i, j] = loss.item()
loss.backward()
optimizer.step()
plot_losses(classifier=losses)
```
## Let's pivot
To make the classifier's predictions independent of the nuisance parameters, an adversary is introduced.
The idea is similar to the main principle of GANs: seek the solution that maximizes the minimum of the adversary loss.
If the classifier uses information about the nuisance parameters to make predictions, then its predictions depend on the nuisance parameters. This information most likely comes from dependencies between the nuisance parameters and the training features, so simply excluding the nuisance parameters from the training features is typically not enough.
The adversary is trained to predict the nuisance parameters given the output of the classifier. A dependency between the nuisance parameters and the predictions means that the adversary is able to learn it (i.e. reach a loss lower than that of the best constant prediction). The maximum of the minimum of the adversary loss is achieved only when there is no dependency between the predictions and the nuisances.
More formally, adversary loss is given by:
$$\mathcal{L}_{\mathrm{adv}}(\theta, \psi) = -\mathbb{E}_{x, z} \log P_\psi(z \mid f_\theta(x)) \to_\psi \min;$$
while the classifier is trained to minimize the following loss:
$$\mathcal{L}_{\mathrm{clf}} = \left[-\mathbb{E}_{x, y} \log P_\theta(y \mid x)\right] - \left[ \min_\psi \mathcal{L}_\mathrm{adv}(\theta, \psi)\right] \to_\theta \min;$$
where:
- $f_\theta$ and $P_\theta$ - classifier with parameters $\theta$ and probability distribution that corresponds to it;
- $P_\psi$ - probability distribution that corresponds to the output of adversary;
Note the minus sign before the second term in $\mathcal{L}_{\mathrm{clf}}$.
The training procedure is similar to that of a GAN.
```
class Adversary(torch.nn.Module):
def __init__(self, activation=torch.nn.Softplus()):
super(Adversary, self).__init__()
self.layer1 = torch.nn.Linear(1, 128)
self.head = torch.nn.Linear(128, 1)
self.activation = activation
def forward(self, X):
result = X
result = self.activation(self.layer1(result))
return torch.squeeze(self.head(result), dim=1)
pivoted_classifier = Classifier().to(device)
adversary = Adversary().to(device)
loss_fn_pivoted_classification = torch.nn.BCEWithLogitsLoss()
loss_fn_adversary = torch.nn.MSELoss()
```
**Warning:** be careful when using optimizers with an internal state for adversarial ($\max \min$) optimization problems: almost all popular optimizers keep an internal state (plain SGD being the exception). After an optimization step for the generator, the adversary's optimization problem changes, so the previously accumulated internal state may become invalid. This can lead to noticeable oscillations in the learning curves. It can also make the generator (the classifier in our case) and the adversary go in circles, which looks as if they have converged and is especially difficult to detect, or cause the generator to collapse, since a stale internal state slows the adversary optimizer's convergence.
One can mitigate these effects by setting the adversary optimizer's learning rate low enough and/or training the adversary for longer.
Any optimizer can be used for the generator (the classifier in our case), provided that the adversary has enough time to converge.
In practice, optimizers based on the $l_\infty$ norm (Adamax, AMSGrad, etc.) perform well for the adversary. Nevertheless, when in doubt, use SGD for the adversary.
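For illustration, a minimal sketch of the stateless option mentioned above; `adversary` refers to the module defined earlier, and the learning rate is an arbitrary choice:
```
# Sketch only: plain SGD (no momentum) keeps no internal state, so nothing
# goes stale when the classifier's parameters change between adversary steps.
optimizer_adversary_sgd = torch.optim.SGD(adversary.parameters(), lr=1e-2)
```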
```
optimizer_pivoted_classifier = torch.optim.Adam(pivoted_classifier.parameters(), lr=1e-3)
optimizer_adversary = torch.optim.Adamax(adversary.parameters(), lr=1e-3)
num_epoches = 128
num_batches = data_train.shape[0] // 32
losses_clf = np.zeros(shape=(num_epoches, num_batches))
losses_adv = np.zeros(shape=(num_epoches, num_batches))
for i in tqdm(range(num_epoches)):
for j in range(num_batches):
### training adversary
for k in range(4):
### generating batch
indx = torch.randint(0, data_train.shape[0], size=(32, ))
X_batch, z_batch = X_train[indx], z_train[indx]
optimizer_adversary.zero_grad()
predictions = pivoted_classifier(X_batch)
nuisance_predictions = adversary(torch.unsqueeze(predictions, dim=1))
loss_adversary = loss_fn_adversary(nuisance_predictions, z_batch)
loss_adversary.backward()
optimizer_adversary.step()
optimizer_pivoted_classifier.zero_grad()
### generating batch
indx = torch.randint(0, data_train.shape[0], size=(32, ))
X_batch, y_batch, z_batch = X_train[indx], y_train[indx], z_train[indx]
### training classifier
predictions = pivoted_classifier(X_batch)
nuisance_predictions = adversary(torch.unsqueeze(predictions, dim=1))
loss_classifier = loss_fn_pivoted_classification(predictions, y_batch)
loss_adversary = loss_fn_adversary(nuisance_predictions, z_batch)
losses_clf[i, j] = loss_classifier.item()
losses_adv[i, j] = loss_adversary.item()
joint_loss = loss_classifier - loss_adversary
joint_loss.backward()
optimizer_pivoted_classifier.step()
plot_losses(epoch=i, classifier=losses_clf, adversary=losses_adv)
```
If you look closely, you will see tiny (sometimes not so tiny) oscillations - note how Adamax stops them (or at least tries to). Try a different optimizer (e.g., Adam or Adagrad) or decrease the number of adversary training steps for a more pronounced effect.
### Conditional pivoting
Sometimes it is desirable to make predictions independent of the nuisance parameter within each class. Note that this might still leave some dependency between the nuisance and the overall distribution of predictions.
In this case we make the adversary **conditional**, which in practice simply means adding the target labels as an additional input.
```
class ConditionalAdversary(torch.nn.Module):
def __init__(self, activation=torch.nn.Softplus()):
super(ConditionalAdversary, self).__init__()
self.layer1 = torch.nn.Linear(2, 128)
self.head = torch.nn.Linear(128, 1)
self.activation = activation
def forward(self, X):
result = X
result = self.activation(self.layer1(result))
return torch.squeeze(self.head(result), dim=1)
conditional_pivoted_classifier = Classifier().to(device)
conditional_adversary = ConditionalAdversary().to(device)
loss_fn_conditional_pivoted_classification = torch.nn.BCEWithLogitsLoss()
loss_fn_conditional_adversary = torch.nn.MSELoss()
optimizer_conditional_pivoted_classifier = torch.optim.Adam(
conditional_pivoted_classifier.parameters(), lr=1e-3
)
optimizer_conditional_adversary = torch.optim.Adam(conditional_adversary.parameters(), lr=1e-3)
num_epoches = 128
num_batches = data_train.shape[0] // 32
losses_clf = np.zeros(shape=(num_epoches, num_batches))
losses_adv = np.zeros(shape=(num_epoches, num_batches))
for i in tqdm(range(num_epoches)):
for j in range(num_batches):
### training adversary
for k in range(4):
optimizer_conditional_adversary.zero_grad()
indx = torch.randint(0, data_train.shape[0], size=(32, ))
X_batch, y_batch, z_batch = X_train[indx], y_train[indx], z_train[indx]
predictions = conditional_pivoted_classifier(X_batch)
nuisance_predictions = conditional_adversary(
torch.stack([predictions, y_batch], dim=1)
)
loss_adversary = loss_fn_conditional_adversary(nuisance_predictions, z_batch)
loss_adversary.backward()
optimizer_conditional_adversary.step()
optimizer_conditional_pivoted_classifier.zero_grad()
indx = torch.randint(0, data_train.shape[0], size=(32, ))
X_batch, y_batch, z_batch = X_train[indx], y_train[indx], z_train[indx]
### training classifier
predictions = conditional_pivoted_classifier(X_batch)
nuisance_predictions = conditional_adversary(
torch.stack([predictions, y_batch], dim=1)
)
loss_classifier = loss_fn_conditional_pivoted_classification(predictions, y_batch)
loss_adversary = loss_fn_conditional_adversary(nuisance_predictions, z_batch)
losses_clf[i, j] = loss_classifier.item()
losses_adv[i, j] = loss_adversary.item()
joint_loss = loss_classifier - loss_adversary
joint_loss.backward()
optimizer_conditional_pivoted_classifier.step()
plot_losses(classifier=losses_clf, adversary=losses_adv)
```
## Results
```
from sklearn.metrics import roc_auc_score, log_loss
cross_entropy = lambda y, p: log_loss(y, p, eps=1e-6)
accuracy = lambda y, p: np.mean(np.where(y > 0.5, 1, 0) == np.where(p > 0.5, 1, 0))
plt.subplots(nrows=1, ncols=3, figsize=(23, 5))
plt.subplot(1, 3, 1)
plt.title('non-pivoted')
draw_response(xs, ys, grid_predictions(classifier), data_train, labels_train)
plt.subplot(1, 3, 2)
plt.title('pivoted, unconditional')
draw_response(xs, ys, grid_predictions(pivoted_classifier), data_train, labels_train)
plt.subplot(1, 3, 3)
plt.title('pivoted, conditional')
draw_response(xs, ys, grid_predictions(conditional_pivoted_classifier), data_train, labels_train)
```
The following figure shows the dependency between predictions and the nuisance parameter:
- each column corresponds to a different model;
- rows correspond to nuisance parameter bins;
- each plot shows the distribution of model predictions within the corresponding nuisance bin;
- $\mathrm{MI}$ - the (unconditional) mutual information between the nuisance parameter and model predictions;
- $\mathrm{MI}_i$ - the mutual information between the nuisance parameter and model predictions **within** the $i$-th class.
**Note** that the following mutual information estimates might be unreliable.
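For reference, one rough way such an estimate can be obtained (an added sketch; the plotting helper below presumably does something similar internally) is to discretize both variables and use a histogram-based estimator:
```
# Hypothetical, histogram-based MI estimate between model predictions and the
# nuisance parameter, assuming both inputs are 1-D numpy arrays. The number of
# bins and the estimator itself are arbitrary choices, which is exactly why
# such estimates can be unreliable.
from sklearn.metrics import mutual_info_score

def binned_mi(predictions, nuisance, bins=10):
    """Estimate mutual information (in nats) after discretizing both variables."""
    p_bins = np.digitize(predictions, np.linspace(predictions.min(), predictions.max(), bins))
    z_bins = np.digitize(nuisance, np.linspace(nuisance.min(), nuisance.max(), bins))
    return mutual_info_score(p_bins, z_bins)
```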
```
nuisance_prediction_hist([
test_predictions(classifier),
test_predictions(pivoted_classifier),
test_predictions(conditional_pivoted_classifier)
],
nuisance_test,
labels=labels_test.astype('int'),
names=['non-pivoted', 'pivoted, unconditional', 'pivoted, conditional']
)
```
Pivoted models tend to show worse (but flat) performance.
If a pivoted model shows increased performance in some regions, the model is most likely biased (i.e., it has too low capacity).
```
nuisance_metric_plot([
test_predictions(classifier),
test_predictions(pivoted_classifier),
test_predictions(conditional_pivoted_classifier)
],
labels_test, nuisance_test,
metric_fn=accuracy, metric_name='accuracy',
names=['non-pivoted', 'pivoted', 'conditional-pivoted'],
)
nuisance_metric_plot([
test_predictions(classifier),
test_predictions(pivoted_classifier),
test_predictions(conditional_pivoted_classifier)
],
labels_test, nuisance_test,
metric_fn=roc_auc_score, metric_name='ROC AUC',
names=['non-pivoted', 'pivoted', 'conditional-pivoted'],
)
nuisance_metric_plot([
test_predictions(classifier),
test_predictions(pivoted_classifier),
test_predictions(conditional_pivoted_classifier)
],
labels_test, nuisance_test,
metric_fn=cross_entropy, metric_name='cross-entropy', base_level=0.0,
names=['non-pivoted', 'pivoted', 'conditional-pivoted'],
)
```
# Distribution of insolation
**Note this should be updated to take advantage of the new xarray capabilities of the `daily_insolation` code.**
Here are some examples calculating daily average insolation at different locations and times.
These all use a function called `daily_insolation` in the module `insolation.py` to do the calculation. The code calculates daily average insolation anywhere on Earth at any time of year for a given set of orbital parameters.
To look at past orbital variations and their effects on insolation, we use the module `orbital.py` which accesses tables of values for the past 5 million years. We can easily lookup parameters for any point in the past and pass these to `daily_insolation`.
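To get a quick feel for the interface (a small added example; the array-based calls below are what the rest of this notebook relies on), a single-point call looks roughly like this:
```
from climlab.solar.insolation import daily_insolation
# Daily-mean insolation (W m-2) at 65N on calendar day 172 (near the June solstice),
# using the default present-day orbital parameters.
daily_insolation(65, 172)
```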
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from climlab import constants as const
from climlab.solar.insolation import daily_insolation
```
## Present-day orbital parameters
Calculate an array of insolation over the year and all latitudes (for present-day orbital parameters).
```
lat = np.linspace( -90., 90., 500)
days = np.linspace(0, const.days_per_year, 365)
Q = daily_insolation( lat, days )
```
And make a contour plot of Q as function of latitude and time of year.
```
ax = plt.figure( figsize=(10,8) ).add_subplot(111)
CS = ax.contour( days, lat, Q , levels = np.arange(0., 600., 50.) )
ax.clabel(CS, CS.levels, inline=True, fmt='%r', fontsize=10)
ax.set_xlabel('Days since January 1', fontsize=16 )
ax.set_ylabel('Latitude', fontsize=16 )
ax.set_title('Daily average insolation', fontsize=24 )
ax.contourf ( days, lat, Q, levels=[-500., 0.] )
plt.show()
```
Take the area-weighted global, annual average of Q...
```
print(np.sum( np.mean( Q, axis=1 ) * np.cos( np.deg2rad(lat) ) ) / np.sum( np.cos( np.deg2rad( lat ) ) ))
```
Also plot the zonally averaged insolation at a few different times of the year:
```
summer_solstice = 170
winter_solstice = 353
ax = plt.figure( figsize=(10,8) ).add_subplot(111)
ax.plot( lat, Q[:,(summer_solstice, winter_solstice)] );
ax.plot( lat, np.mean(Q, axis=1), linewidth=2 )
ax.set_xbound(-90, 90)
ax.set_xticks( range(-90,100,30) )
ax.set_xlabel('Latitude', fontsize=16 );
ax.set_ylabel('Insolation (W m$^{-2}$)', fontsize=16 );
ax.grid()
plt.show()
```
## Past orbital parameters
The `orbital.py` code allows us to look up the orbital parameters for Earth over the last 5 million years.
Make reference plots of the variation in the three orbital parameters over the last 1 million years.
```
from climlab.solar.orbital import OrbitalTable
kyears = np.arange( -1000., 1.)
#table = OrbitalTable()
orb = OrbitalTable.interp(kyear=kyears )
orb
```
The `xarray` object `orb` now holds 1 million years worth of orbital data, total of 1001 data points for each element: eccentricity `ecc`, obliquity angle `obliquity`, and solar longitude of perihelion `long_peri`.
```
fig = plt.figure( figsize = (10,10) )
ax1 = fig.add_subplot(3,1,1)
ax1.plot( kyears, orb['ecc'] )
ax1.set_title('Eccentricity $e$', fontsize=18 )
ax2 = fig.add_subplot(3,1,2)
ax2.plot( kyears, orb['ecc'] * np.sin( np.deg2rad( orb['long_peri'] ) ) )
ax2.set_title('Precessional parameter $e \sin(\Lambda)$', fontsize=18 )
ax3 = fig.add_subplot(3,1,3)
ax3.plot( kyears, orb['obliquity'] )
ax3.set_title('Obliquity (axial tilt) $\Phi$', fontsize=18 )
ax3.set_xlabel( 'Thousands of years before present', fontsize=14 )
plt.show()
```
### Annual mean insolation
Create a large array of insolation over the whole globe, whole year, and for every set of orbital parameters.
```
lat = np.linspace(-90, 90, 181)
days = np.linspace(1.,50.)/50 * const.days_per_year
Q = daily_insolation(lat, days, orb)
print(Q.shape)
Qann = np.mean(Q, axis=1) # time average over the year
print(Qann.shape)
Qglobal = np.empty_like( kyears )
for n in range( kyears.size ): # global area-weighted average
Qglobal[n] = np.sum( Qann[:,n] * np.cos( np.deg2rad(lat) ) ) / np.sum( np.cos( np.deg2rad(lat) ) )
print(Qglobal.shape)
```
We are going to create a figure showing past time variations in three quantities:
1. Global, annual mean insolation
2. Annual mean insolation at high northern latitudes
3. Summer solstice insolation at high northern latitudes
```
fig = plt.figure( figsize = (10,14) )
ax1 = fig.add_subplot(3,1,1)
ax1.plot( kyears, Qglobal )
ax1.set_title('Global, annual mean insolation', fontsize=18 )
ax1.ticklabel_format( useOffset=False )
ax2 = fig.add_subplot(3,1,2)
ax2.plot( kyears, Qann[160,:] )
ax2.set_title('Annual mean insolation at 70N', fontsize=18 )
ax3 = fig.add_subplot(3,1,3)
ax3.plot( kyears, Q[160,23,:] )
ax3.set_title('Summer solstice insolation at 70N', fontsize=18 )
plt.show()
```
And comparing with the plots of orbital variations above, we see that
1. Global annual mean insolation varies with eccentricity (slow), and the variations are very small!
2. Annual mean insolation varies with obliquity (medium). Annual mean insolation does NOT depend on precession!
3. Summer solstice insolation at high northern latitudes is affected by both precession and obliquity. The variations are large.
### Insolation changes between the Last Glacial Maximum and the end of the last ice age
Last Glacial Maximum or "LGM" occurred around 23,000 years before present, when the ice sheets were at their greatest extent. By 10,000 years ago, the ice sheets were mostly gone and the last ice age was over. Let's plot the changes in the seasonal distribution of insolation from 23 kyrs to 10 kyrs.
```
orb_0 = OrbitalTable.interp(kyear=0) # present-day orbital parameters
orb_10 = OrbitalTable.interp(kyear=-10) # orbital parameters for 10 kyrs before present
orb_23 = OrbitalTable.interp(kyear=-23) # 23 kyrs before present
Q_0 = daily_insolation( lat, days, orb_0 )
Q_10 = daily_insolation( lat, days, orb_10 ) # insolation arrays for each of the three sets of orbital parameters
Q_23 = daily_insolation( lat, days, orb_23 )
fig = plt.figure( figsize=(20,8) )
ax1 = fig.add_subplot(1,2,1)
Qdiff = Q_10 - Q_23
CS1 = ax1.contour( days, lat, Qdiff, levels = np.arange(-100., 100., 10.) )
ax1.clabel(CS1, CS1.levels, inline=True, fmt='%r', fontsize=10)
ax1.contour( days, lat, Qdiff, levels = [0.], colors = 'k' )
ax1.set_xlabel('Days since January 1', fontsize=16 )
ax1.set_ylabel('Latitude', fontsize=16 )
ax1.set_title('Insolation differences: 10 kyrs - 23 kyrs', fontsize=24 )
ax2 = fig.add_subplot(1,2,2)
ax2.plot( np.mean( Qdiff, axis=1 ), lat )
ax2.set_xlabel('W m$^{-2}$', fontsize=16 )
ax2.set_ylabel( 'Latitude', fontsize=16 )
ax2.set_title(' Annual mean differences', fontsize=24 )
ax2.set_ylim((-90,90))
ax2.grid()
plt.show()
```
The annual mean plot shows a classic obliquity signal: at 10 kyrs, the axis was close to its maximum tilt, around 24.2º. At 23 kyrs, the tilt was much weaker, only about 22.7º. In the annual mean, a stronger tilt means more sunlight to the poles and less to the equator. This is very helpful if you are trying to melt an ice sheet.
Finally, take the area-weighted global average of the difference:
```
print(np.average(np.mean(Qdiff,axis=1), weights=np.cos(np.deg2rad(lat))))
```
This confirms that the difference is tiny (and due to very small changes in the eccentricity). **Ice ages are driven by seasonal and latitudinal redistributions of solar energy**, NOT by changes in the total global amount of solar energy!
# MadMiner particle physics tutorial
# Part 3b: Training a score estimator
Johann Brehmer, Felix Kling, Irina Espejo, and Kyle Cranmer 2018-2019
In part 3b of this tutorial we will train a neural network to estimate the score. We assume that you have run parts 1 and 2a of this tutorial. If, instead of 2a, you have run part 2b, you just have to load a different filename later.
## Preparations
Make sure you've run the first tutorial before executing this notebook!
```
from __future__ import absolute_import, division, print_function, unicode_literals
import logging
import numpy as np
import matplotlib
from matplotlib import pyplot as plt
%matplotlib inline
from madminer.sampling import SampleAugmenter
from madminer import sampling
from madminer.ml import ScoreEstimator
# MadMiner output
logging.basicConfig(
format='%(asctime)-5.5s %(name)-20.20s %(levelname)-7.7s %(message)s',
datefmt='%H:%M',
level=logging.INFO
)
# Output of all other modules (e.g. matplotlib)
for key in logging.Logger.manager.loggerDict:
if "madminer" not in key:
logging.getLogger(key).setLevel(logging.WARNING)
```
## 1. Make (unweighted) training and test samples with augmented data
At this point, we have all the information we need from the simulations. But the data is not quite ready to be used for machine learning. The `madminer.sampling` class `SampleAugmenter` will take care of the remaining book-keeping steps before we can train our estimators:
First, it unweights the samples, i.e. for a given parameter vector `theta` (or a distribution `p(theta)`) it picks events `x` such that their distribution follows `p(x|theta)`. The selected samples will all come from the event file we have so far, but their frequency is changed -- some events will appear multiple times, some will disappear.
Second, `SampleAugmenter` calculates all the augmented data ("gold") that is the key to our new inference methods. Depending on the specific technique, these are the joint likelihood ratio and / or the joint score. It saves all these pieces of information for the selected events in a set of numpy files that can easily be used in any machine learning framework.
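To illustrate what "unweighting" means here (a toy sketch only, not MadMiner's actual implementation), weighted events can be turned into an unweighted sample by acceptance-rejection: each event is kept with a probability proportional to its weight.
```
import numpy as np

def toy_unweight(events, weights, seed=0):
    """Toy acceptance-rejection unweighting: keep event i with probability w_i / max(w)."""
    rng = np.random.default_rng(seed)
    accept_prob = weights / weights.max()
    keep = rng.uniform(size=len(weights)) < accept_prob
    return events[keep]
```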
```
# sampler = SampleAugmenter('data/lhe_data_shuffled.h5')
sampler = SampleAugmenter('/data_CMS/cms/cortinovis/ewdim6/data_ew_1M_az/delphes_data_shuffled.h5')
```
The relevant `SampleAugmenter` function for local score estimators is `sample_train_local()`, called below. As in part 3a of the tutorial, for the argument `theta` you can use the helper functions `sampling.benchmark()`, `sampling.benchmarks()`, `sampling.morphing_point()`, `sampling.morphing_points()`, and `sampling.random_morphing_points()`.
```
x, theta, t_xz, _ = sampler.sample_train_local(
theta=sampling.benchmark('sm'),
n_samples=500000,
folder='/data_CMS/cms/cortinovis/ewdim6/data_ew_2M_az/samples',
filename='train_score'
)
```
We can use the same data as in part 3a, so you only have to execute this if you haven't gone through tutorial 3a:
```
_ = sampler.sample_test(
theta=sampling.benchmark('sm'),
n_samples=1000,
folder='/data_CMS/cms/cortinovis/ewdim6/data_ew_2M_az/samples',
filename='test'
)
```
## 2. Train score estimator
It's now time to build a neural network. Only this time, instead of the likelihood ratio itself, we will estimate the gradient of the log likelihood with respect to the theory parameters -- the score. To be precise, the output of the neural network is an estimate of the score at some reference parameter point, for instance the Standard Model. A neural network that estimates this "local" score can be used to calculate the Fisher information at that point. The estimated score can also be used as a machine learning version of Optimal Observables, and likelihoods can be estimated based on density estimation in the estimated score space. This method for likelihood ratio estimation is called SALLY, and there is a closely related version called SALLINO. Both are explained in ["Constraining Effective Field Theories With Machine Learning"](https://arxiv.org/abs/1805.00013) and ["A Guide to Constraining Effective Field Theories With Machine Learning"](https://arxiv.org/abs/1805.00020).
The central object for this is the `madminer.ml.ScoreEstimator` class:
```
estimator = ScoreEstimator(n_hidden=(30,30))
estimator.train(
method='sally',
x='/data_CMS/cms/cortinovis/ewdim6/data_ew_2M_az/samples/x_train_score.npy',
t_xz='/data_CMS/cms/cortinovis/ewdim6/data_ew_2M_az/samples/t_xz_train_score.npy',
)
estimator.save('/data_CMS/cms/cortinovis/ewdim6/models_ew_2M_az/sally')
```
## 3. Evaluate score estimator
Let's evaluate the SM score on the test data
```
estimator.load('/data_CMS/cms/cortinovis/ewdim6/models_ew_2M_az/sally')
t_hat = estimator.evaluate_score(
x = '/data_CMS/cms/cortinovis/ewdim6/data_ew_2M_az/samples/x_test.npy'
)
```
Let's have a look at the estimated score and how it is related to the observables:
```
x = np.load('/data_CMS/cms/cortinovis/ewdim6/data_ew_2M_az/samples/x_test.npy')
fig = plt.figure(figsize=(10,4))
for i in range(2):
ax = plt.subplot(1,2,i+1)
sc = plt.scatter(x[:,0], x[:,1], c=t_hat[:,i], s=25., cmap='viridis', vmin=-1., vmax=1.)
cbar = plt.colorbar(sc)
cbar.set_label(r'$\hat{t}_' + str(i) + r'(x | \theta_{ref})$')
plt.xlabel(r'$p_{T,j1}$ [GeV]')
plt.ylabel(r'$\Delta \phi_{jj}$')
plt.xlim(10.,300.)
plt.ylim(-3.15,3.15)
plt.tight_layout()
plt.show()
```
**Note**: Click on "*Kernel*" > "*Restart Kernel and Clear All Outputs*" in [JupyterLab](https://jupyterlab.readthedocs.io/en/stable/) *before* reading this notebook to reset its output. If you cannot run this file on your machine, you may want to open it [in the cloud <img height="12" style="display: inline-block" src="../static/link/to_mb.png">](https://mybinder.org/v2/gh/webartifex/intro-to-python/develop?urlpath=lab/tree/08_mfr/00_content.ipynb).
# Chapter 8: Map, Filter, & Reduce
In this chapter, we continue the study of sequential data by looking at memory efficient ways to process the elements in a sequence. That is an important topic for the data science practitioner who must be able to work with data that does *not* fit into a single computer's memory.
As shown in [Chapter 4 <img height="12" style="display: inline-block" src="../static/link/to_nb.png">](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/develop/04_iteration/02_content.ipynb#Containers-vs.-Iterables), both the `list` objects `[0, 1, 2, 3, 4]` and `[1, 3, 5, 7, 9]` on the one side and the `range` objects `range(5)` and `range(1, 10, 2)` on the other side allow us to loop over the same numbers. However, the latter two only create *one* `int` object in every iteration while the former two create *all* `int` objects before the loop even starts. In this aspect, we consider `range` objects to be "rules" in memory that know how to calculate the numbers *without* calculating them.
In [Chapter 7 <img height="12" style="display: inline-block" src="../static/link/to_nb.png">](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/develop/07_sequences/01_content.ipynb#The-list-Type), we see how the built-in [list() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#func-list) constructor **materializes** the `range(1, 13)` object into the `list` object `[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]`. In other words, we make `range(1, 13)` calculate *all* numbers at once and store them in a `list` object for further processing.
In many cases, however, it is not necessary to do that, and, in this chapter, we look at other types of "rules" in memory and how we can compose different "rules" together to implement bigger computations.
Next, we take a step back and continue with a simple example involving the familiar `numbers` list. Then, we iteratively exchange `list` objects with "rule"-like objects *without* changing the overall computation at all. As computations involving sequential data are commonly classified into three categories **map**, **filter**, or **reduce**, we do so too for our `numbers` example.
```
numbers = [7, 11, 8, 5, 3, 12, 2, 6, 9, 10, 1, 4]
```
## Mapping
**Mapping** refers to the idea of applying a transformation to every element in a sequence.
For example, let's square each element in `numbers` and add `1` to the squares. In essence, we apply the transformation $y := x^2 + 1$ as expressed with the `transform()` function below.
```
def transform(element):
"""Map elements to their squares plus 1."""
return (element ** 2) + 1
```
With the syntax we know so far, we revert to a `for`-loop that iteratively appends the transformed elements to an initially empty `transformed_numbers` list.
```
transformed_numbers = []
for old in numbers:
new = transform(old)
transformed_numbers.append(new)
transformed_numbers
```
As this kind of data processing is so common, Python provides the [map() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#map) built-in. In its simplest usage form, it takes two arguments: A transformation `function` that takes exactly *one* positional argument and an `iterable` that provides the objects to be mapped.
We call [map() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#map) with a reference to the `transform()` function and the `numbers` list as the arguments and store the result in the variable `transformer` to inspect it.
```
transformer = map(transform, numbers)
```
We might expect to get back a materialized sequence (i.e., all elements exist in memory), and a `list` object would feel the most natural because of the type of the `numbers` argument. However, `transformer` is an object of type `map`.
```
transformer
type(transformer)
```
Like `range` objects, `map` objects generate a series of objects "on the fly" (i.e., one by one), and we use the built-in [next() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#next) function to obtain the next object in line. So, we should think of a `map` object as a "rule" stored in memory that only knows how to calculate the next object of possibly *infinitely* many.
```
next(transformer)
next(transformer)
next(transformer)
```
It is essential to understand that by creating a `map` object with the [map() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#map) built-in, *nothing* happens in memory except the creation of the `map` object. In particular, no second `list` object derived from `numbers` is created. Also, we may view `range` objects as a special case of `map` objects: They are constrained to generating `int` objects only, and the `iterable` argument is replaced with `start`, `stop`, and `step` arguments.
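As a small added illustration of this point, the numbers generated by `range(1, 10, 2)` can also be produced by a `map` "rule" that turns consecutive indices into the corresponding values:
```
def nth_value(index):
    """Map an index to the value start + step * index, here with start=1 and step=2."""
    return 1 + 2 * index

# The same numbers as range(1, 10, 2), but produced by a map "rule"
list(map(nth_value, range(5)))
```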
If we are sure that a `map` object generates a *finite* number of elements, we may materialize them into a `list` object with the built-in [list() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#func-list) constructor. Below, we "pull out" the remaining `int` objects from `transformer`, which itself is derived from a *finite* `list` object.
```
list(transformer)
```
In summary, instead of creating an empty list first and appending it in a `for`-loop as above, we write the following one-liner and obtain an equal `transformed_numbers` list.
```
transformed_numbers = list(map(transform, numbers))
transformed_numbers
```
## Filtering
**Filtering** refers to the idea of creating a subset of a sequence with a **boolean filter** `function` that indicates if an element should be kept (i.e., `True`) or not (i.e., `False`).
In the example, let's only keep the even elements in `numbers`. The `is_even()` function implements that as a filter.
```
def is_even(element):
"""Filter out odd numbers."""
if element % 2 == 0:
return True
return False
```
As `element % 2 == 0` is already a boolean expression, we could shorten `is_even()` like so.
```
def is_even(element):
"""Filter out odd numbers."""
return element % 2 == 0
```
As before, we first use a `for`-loop that appends the elements to be kept iteratively to an initially empty `even_numbers` list.
```
even_numbers = []
for number in transformed_numbers:
if is_even(number):
even_numbers.append(number)
even_numbers
```
Analogously to the `map` object above, we use the [filter() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#filter) built-in to create an object of type `filter` and assign it to `evens`.
```
evens = filter(is_even, transformed_numbers)
evens
type(evens)
```
`evens` works like `transformer` above: With the built-in [next() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#next) function we obtain the even numbers one by one. So, the "next" element in line is simply the next even `int` object the `filter` object encounters.
```
transformed_numbers
next(evens)
next(evens)
next(evens)
```
As above, we could create a materialized `list` object with the [list() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#func-list) constructor.
```
list(filter(is_even, transformed_numbers))
```
We may also chain `map` and `filter` objects derived from the original `numbers` list. As the entire cell is *one* big expression consisting of nested function calls, we read it from the inside out.
```
list(
filter(
is_even,
map(transform, numbers),
)
)
```
Using the [map() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#map) and [filter() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#filter) built-ins, we can quickly switch the order: Filter first and then transform the remaining elements. This variant equals the "*A simple Filter*" example in [Chapter 4 <img height="12" style="display: inline-block" src="../static/link/to_nb.png">](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/develop/04_iteration/03_content.ipynb#Example:-A-simple-Filter). In contrast, code with `for`-loops and `if` statements is more tedious to adapt. Additionally, `map` and `filter` objects loop "at the C level" and are a lot faster because of that. For these reasons, experienced Pythonistas tend *not* to use explicit `for`-loops so often.
```
list(
map(
transform,
filter(is_even, numbers),
)
)
```
## Reducing
Lastly, **reducing** sequential data means to summarize the elements into a single statistic.
A simple example is the built-in [sum() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#sum) function.
```
sum(
map(
transform,
filter(is_even, numbers),
)
)
```
Other straightforward examples are the built-in [min() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#min) or [max() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#max) functions.
```
min(map(transform, filter(is_even, numbers)))
max(map(transform, filter(is_even, numbers)))
```
[sum() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#sum), [min() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#min), and [max() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#max) can be regarded as special cases.
The generic way of reducing a sequence is to apply a function of *two* arguments on a rolling horizon: Its first argument is the reduction of the elements processed so far, and the second the next element to be reduced.
For illustration, let's replicate [sum() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#sum) as such a function, called `sum_alt()`. Its implementation only adds two numbers.
```
def sum_alt(sum_so_far, next_number):
"""Reduce a sequence by addition."""
return sum_so_far + next_number
```
Further, we create a *new* `map` object derived from `numbers` ...
```
evens_transformed = map(transform, filter(is_even, numbers))
```
... and loop over all *but* the first element it generates. The latter is captured separately as the initial `result` with the [next() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#next) function. We know from above that `evens_transformed` generates *six* elements. That is why we see *five* growing `result` values resembling a [cumulative sum](http://mathworld.wolfram.com/CumulativeSum.html). The first `210` is the sum of the first two elements generated by `evens_transformed`, `65` and `145`.
So, we also learn that `map` objects, and analogously `filter` objects, are *iterable* as we may loop over them.
```
result = next(evens_transformed)
for number in evens_transformed:
result = sum_alt(result, number)
print(result, end=" ") # line added for didactical purposes
```
The final `result` is the same `370` as above.
```
result
```
The [reduce() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functools.html#functools.reduce) function in the [functools <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functools.html) module in the [standard library <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/index.html) provides more convenience (and speed) replacing the `for`-loop. It takes two arguments, `function` and `iterable`, in the same way as the [map() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#map) and [filter() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#filter) built-ins.
[reduce() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functools.html#functools.reduce) is **[eager <img height="12" style="display: inline-block" src="../static/link/to_wiki.png">](https://en.wikipedia.org/wiki/Eager_evaluation)** meaning that all computations implied by the contained `map` and `filter` "rules" are executed immediately, and the code cell evaluates to `370`. On the contrary, [map() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#map) and [filter() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#filter) create **[lazy <img height="12" style="display: inline-block" src="../static/link/to_wiki.png">](https://en.wikipedia.org/wiki/Lazy_evaluation)** `map` and `filter` objects, and we have to use the [next() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#next) function to obtain the elements, one by one.
```
from functools import reduce
reduce(
sum_alt,
map(
transform,
filter(is_even, numbers),
)
)
```
## Lambda Expressions
[map() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#map), [filter() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#filter), and [reduce() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functools.html#functools.reduce) take a `function` object as their first argument, and we defined `transform()`, `is_even()`, and `sum_alt()` to be used precisely for that.
Often, such functions are used *only once* in a program. However, the primary purpose of functions is to *reuse* them. In such cases, it makes more sense to define them "anonymously" right at the position where the first argument goes.
As mentioned in [Chapter 2 <img height="12" style="display: inline-block" src="../static/link/to_nb.png">](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/develop/02_functions/00_content.ipynb#Anonymous-Functions), we use `lambda` expressions to create `function` objects *without* a name referencing them.
So, the above `sum_alt()` function could be rewritten as a `lambda` expression like so ...
```
lambda sum_so_far, next_number: sum_so_far + next_number
```
... or even shorter.
```
lambda x, y: x + y
```
With the new concepts in this section, we can rewrite the entire example in just a few lines of code *without* any `for`, `if`, and `def` statements. The resulting code is concise, easy to read, quick to modify, and even faster in execution. Most importantly, it is optimized to handle big amounts of data as *no* temporary `list` objects are materialized in memory.
```
numbers = [7, 11, 8, 5, 3, 12, 2, 6, 9, 10, 1, 4]
evens = filter(lambda x: x % 2 == 0, numbers)
transformed = map(lambda x: (x ** 2) + 1, evens)
sum(transformed)
```
If `numbers` comes as a sorted sequence of whole numbers, we may use the [range() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#func-range) built-in and get away *without* any materialized `list` object in memory at all!
```
numbers = range(1, 13)
evens = filter(lambda x: x % 2 == 0, numbers)
transformed = map(lambda x: (x ** 2) + 1, evens)
sum(transformed)
```
To additionally save the temporary variables, `numbers`, `evens`, and `transformed`, we could write the entire computation as *one* expression.
```
sum(
map(
lambda x: (x ** 2) + 1,
filter(
lambda x: x % 2 == 0,
range(1, 13),
)
)
)
```
PythonTutor visualizes the differences in the number of computational steps and memory usage:
- [Version 1 <img height="12" style="display: inline-block" src="../static/link/to_py.png">](http://pythontutor.com/visualize.html#code=def%20is_even%28element%29%3A%0A%20%20%20%20if%20element%20%25%202%20%3D%3D%200%3A%0A%20%20%20%20%20%20%20%20return%20True%0A%20%20%20%20return%20False%0A%0Adef%20transform%28element%29%3A%0A%20%20%20%20return%20%28element%20**%202%29%20%2B%201%0A%0Anumbers%20%3D%20list%28range%281,%2013%29%29%0A%0Aevens%20%3D%20%5B%5D%0Afor%20number%20in%20numbers%3A%0A%20%20%20%20if%20is_even%28number%29%3A%0A%20%20%20%20%20%20%20%20evens.append%28number%29%0A%0Atransformed%20%3D%20%5B%5D%0Afor%20number%20in%20evens%3A%0A%20%20%20%20transformed.append%28transform%28number%29%29%0A%0Aresult%20%3D%20sum%28transformed%29&cumulative=false&curInstr=0&heapPrimitives=nevernest&mode=display&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false): With `for`-loops, `if` statements, and named functions -> **116** steps and **3** `list` objects
- [Version 2 <img height="12" style="display: inline-block" src="../static/link/to_py.png">](http://pythontutor.com/visualize.html#code=numbers%20%3D%20range%281,%2013%29%0Aevens%20%3D%20filter%28lambda%20x%3A%20x%20%25%202%20%3D%3D%200,%20numbers%29%0Atransformed%20%3D%20map%28lambda%20x%3A%20%28x%20**%202%29%20%2B%201,%20evens%29%0Aresult%20%3D%20sum%28transformed%29&cumulative=false&curInstr=0&heapPrimitives=nevernest&mode=display&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false): With named `map` and `filter` objects -> **58** steps and **no** `list` object
- [Version 3 <img height="12" style="display: inline-block" src="../static/link/to_py.png">](http://pythontutor.com/visualize.html#code=result%20%3D%20sum%28map%28lambda%20x%3A%20%28x%20**%202%29%20%2B%201,%20filter%28lambda%20x%3A%20x%20%25%202%20%3D%3D%200,%20range%281,%2013%29%29%29%29&cumulative=false&curInstr=0&heapPrimitives=nevernest&mode=display&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false): Everything in *one* expression -> **55** steps and **no** `list` object
Versions 2 and 3 are the same, except for the three additional steps required to create the temporary variables. The *major* downside of Version 1 is that, in the worst case, it may need *three times* the memory as compared to the other two versions!
An experienced Pythonista would probably go with Version 2 in a production system to keep the code readable and maintainable.
The map-filter-reduce paradigm has attracted a lot of attention in recent years as it enables **[parallel computing <img height="12" style="display: inline-block" src="../static/link/to_wiki.png">](https://en.wikipedia.org/wiki/Parallel_computing)**, which becomes important when dealing with big amounts of data. The workings in memory shown in this section provide an idea why.
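As a rough added sketch of that idea (the [multiprocessing <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/multiprocessing.html) module in the standard library is just one of many options, and running it inside a notebook may require the mapped function to live in an importable module), the mapping step can be handed to a pool of worker processes while the reduction stays in the parent process:
```
from multiprocessing import Pool

def transform(element):
    """Map elements to their squares plus 1."""
    return (element ** 2) + 1

def is_even(element):
    """Filter out odd numbers."""
    return element % 2 == 0

if __name__ == "__main__":
    numbers = range(1, 13)
    evens = filter(is_even, numbers)
    with Pool(processes=4) as pool:
        # The mapping step is distributed across worker processes;
        # the final reduction is still a plain sum() in the parent process.
        result = sum(pool.map(transform, evens))
    print(result)
```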
# Lab 3 Accuracy of Quantum Phase Estimation
Prerequisite
- [Ch.3.5 Quantum Fourier Transform](https://qiskit.org/textbook/ch-algorithms/quantum-fourier-transform.html)
- [Ch.3.6 Quantum Phase Estimation](https://qiskit.org/textbook/ch-algorithms/quantum-phase-estimation.html)
Other relevant materials
- [QCQI] Michael A. Nielsen and Isaac L. Chuang. 2011. Quantum Computation and Quantum Information
```
from qiskit import *
import numpy as np
from qiskit.visualization import plot_histogram
import qiskit.tools.jupyter
from qiskit.tools.monitor import job_monitor
from qiskit.ignis.mitigation.measurement import *
import matplotlib.pyplot as plt
```
<h2 style="font-size:24px;">Part 1: Performance of Quantum Phase Estimation</h2>
<br>
<div style="background: #E8E7EB; border-radius: 5px;
-moz-border-radius: 5px;">
<p style="background: #800080;
border-radius: 5px 5px 0px 0px;
padding: 10px 0px 10px 10px;
font-size:18px;
color:white;
"><b>Goal</b></p>
<p style=" padding: 0px 0px 10px 10px;
font-size:16px;">Investigate the relationship between the number of qubits required for the desired accuracy of the phase estimation with high probability.</p>
</div>
The accuracy of the value estimated through Quantum Phase Estimation (QPE) and its probability of success depend on the number of qubits employed in the QPE circuit. Therefore, one might want to know the number of qubits necessary to achieve the targeted level of QPE performance, especially when the phase to be determined cannot be expressed as a finite binary expansion.
In Part 1 of this lab, we examine the number of qubits required to accomplish the desired accuracy and the probability of success in determining the phase through QPE.
<h3 style="font-size: 20px">1. Find the probability of obtaining the estimation for a phase value accurate to $2^{-2}$ successfully with four counting qubits.</h3>
<h4 style="font-size: 17px">📓Step A. Set up the QPE circuit with four counting qubits and save the circuit to the variable 'qc4'. Execute 'qc4' on a qasm simulator. Plot the histogram of the result.</h4>
Check the QPE chapter in Qiskit textbook ( go to `3. Example: Getting More Precision` section [here](https://qiskit.org/textbook/ch-algorithms/quantum-phase-estimation.html) ) for the circuit.
```
def qft(n):
"""Creates an n-qubit QFT circuit"""
circuit = QuantumCircuit(n)
def swap_registers(circuit, n):
for qubit in range(n//2):
circuit.swap(qubit, n-qubit-1)
return circuit
def qft_rotations(circuit, n):
"""Performs qft on the first n qubits in circuit (without swaps)"""
if n == 0:
return circuit
n -= 1
circuit.h(n)
for qubit in range(n):
circuit.cp(np.pi/2**(n-qubit), qubit, n)
qft_rotations(circuit, n)
qft_rotations(circuit, n)
swap_registers(circuit, n)
return circuit
## Start your code to create the circuit, qc4
qc4.draw()
## Run this cell to simulate 'qc4' and to plot the histogram of the result
sim = Aer.get_backend('qasm_simulator')
shots = 20000
count_qc4 = execute(qc4, sim, shots=shots).result().get_counts()
plot_histogram(count_qc4, figsize=(9,5))
```
Having performed `Step A` successfully, you will have obtained a distribution similar to the one shown below with the highest probability at `0101` which corresponds to the estimated $\phi$ value, `0.3125`.

Since the number of counting qubits used for the circuit is four, the best estimate should be accurate to $\delta = 2^{-4} = 0.0625$. However, because $\phi = 1/3$ cannot be expressed in a finite number of bits, there are multiple possible outcomes, and the estimate produced by QPE here is not always bounded by this accuracy.
Running the following cell shows the same histogram but with all possible estimated $\phi$ values on the x-axis.
```
phi_est = np.array([round(int(key, 2)/2**t,3) for key in list(count_qc4.keys())])
key_new = list(map(str, phi_est))
count_new = dict(zip(key_new, count_qc4.values()))
plot_histogram(count_new, figsize=(9,5))
```
**Suppose the outcome of the final measurement is $m$, and let $b$ be the best estimate, which is `0.3125` in this case.**
<h4 style="font-size: 17px">📓Step B. Find $e$, the maximum difference in integer from the best estimation <code>0101</code> so that all the outcomes, 'm's, would approximate $\phi$ to an accuracy $2^{-2}$ when $|m - b| \leq \frac{e}{2^{t}}$. </h4>
In this case, the values of $t$ and $b$ are $4$ and $0.3125$, respectively.
For example, under $e = 1$, the considered outcomes are `0100`, `0101`, `0110` which correspond to the values of $m$: $0.25,~0.312,~0.375$, respectively, and all of them approximate the value $\frac{1}{3}$ to an accuracy $2^{-2}$.
```
## Your code goes here
```
<h4 style="font-size: 17px">📓Step C. Compute the probability of obtaining an approximation correct to an accuracy $2^{-2}$. Verify that the computed probability value is larger or equal to $1- \frac{1}{2(2^{(t-n)}-2)}$ where $t$ is the number of counting bits and the $2^{-n}$ is the desired accuracy. </h4>
Now it is easy to evaluate the probability of success from the histogram, since all the outcomes that approximate $\phi$ to an accuracy $2^{-2}$ can be found from the maximum difference $e$ from the best estimate.
```
## Your code goes here
```
<h3 style="font-size: 20px">2. Compute the probability of success for the accuracy $2^{-2}$ when the number of counting qubits, $t$, varies from four to nine. Compare your result with the equation $t=n+log(2+\frac{1}{2\epsilon})$ when $2^{-n}$ is the desired accuracy and $\epsilon$ is 1 - probability of success.</h3>
The following plot shows the relationship between the number of counting qubits, $t$, and the minimum probability of success to approximate the phase to an accuracy $2^{-2}$. Check Ch. 5.2.1 Performance and requirements in `[QCQI]`.
```
y = lambda t, n: 1-1/(2*(2**(t-n)-2))
t_q = np.linspace(3.5, 9.5, 100 )
p_min = y(t_q, 2)
plt.figure(figsize=(7, 5))
plt.plot(t_q, p_min, label='$p_{min}$')
plt.xlabel('t: number of counting qubits')
plt.ylabel('probability of success for the accuracy $2^{-2}$')
plt.legend(loc='lower right')
plt.title('Probability of success for different number of counting qubits')
plt.show()
```
<h4 style="font-size: 17px">📓Step A. Construct QPE circuit to estimate $\phi$ when $\phi = 1/3$ with for the different number of counting qubits, $t$, when $t = [4, 5, 6, 7, 8, 9]$. Store all the circuits in a list variable 'circ' to simulate all the circuits at once as we did in Lab2. </h4>
```
## Your Code to create the list variable 'circ' goes here
# Run this cell to simulate `circ` and plot the histograms of the results
results = execute(circ, sim, shots=shots).result()
n_circ = len(circ)
counts = [results.get_counts(idx) for idx in range(n_circ)]
fig, ax = plt.subplots(n_circ,1,figsize=(25,40))
for idx in range(n_circ):
plot_histogram(counts[idx], ax=ax[idx])
plt.tight_layout()
```
<h4 style="font-size: 17px">📓Step B. Determine $e$, the maximum difference in integer from the best estimation for the different numer of counting qubits, $t = [4, 5, 6, 7, 8, 9]$. Verify the relationship $e=2^{t-n}-1$ where $n=2$ since the desired accuracy is $2^{-2}$ in this case. </h4>
```
## Your Code goes here
```
If you successfully calculated $e$ values for all the counting qubits, $t=[4,5,6,7,8,9]$, you will be able to generate the following graph that verifies the relationship $e = 2^{t-2} -1$ with the $e$ values that you computed.

<h4 style="font-size: 17px">📓Step C. Evaluate the probability of success estimating $\phi$ to an accuracy $2^{-2}$ for all the values of $t$, the number of counting qubits. Save the probabilities to the list variable, 'prob_success'. </h4>
```
## Your code to create the list variable, 'prob_success', goes here
```
<h4 style="font-size: 17px">📓Step D. Overlay the results of Step C on the graph that shows the relationship between the number of counting qubits, $t$, and the minimum probability of success to approximate the phase to an accuracy $2^{-2}$. Understand the result. </h4>
```
## Your code goes here
```

Your plot should be similar to the above one.
The line plot in the left panel shows the minimum success probability for estimating $\phi$ within the accuracy $2^{-2}$ as the number of counting qubits varies. The overlaid orange dots are the same quantities obtained from the simulation, which confirms that the line plot is indeed a lower bound. The right panel displays the same result, zoomed in by adjusting the y-axis range.
The following graph shows the relationship for different accuracy levels. The relationship $t=n+\log(2+\frac{1}{2\epsilon})$ indicates the number of counting qubits $t$ needed to estimate $\phi$ to an accuracy $2^{-n}$ with probability of success at least $1-\epsilon$, as we validated above.
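As a quick added sanity check of this formula (reading the logarithm as base 2 and rounding up to a whole number of qubits), one can plug in numbers directly:
```
# e.g. accuracy 2**-2 (n = 2) with success probability at least 0.9 (epsilon = 0.1)
n, epsilon = 2, 0.1
t_required = n + int(np.ceil(np.log2(2 + 1 / (2 * epsilon))))
print(t_required)  # -> 5 counting qubits
```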
```
t = np.linspace(5.1, 10, 100)
prob_success_n = [y(t, n) for n in [2, 3, 4]]
prob_n2, prob_n3, prob_n4 = prob_success_n[0], prob_success_n[1], prob_success_n[2]
plt.figure(figsize=(7, 5))
plt.plot(t, prob_n2, t, prob_n3, t, prob_n4, t, [1]*len(t),'--' )
plt.axis([5, 10, 0.7, 1.05])
plt.xlabel('t: number of counting qubits')
plt.ylabel('probability of success for the accuracy $2^{-n}$')
plt.legend(['n = 2', 'n = 3', 'n = 4'], loc='lower right')
plt.grid(True)
```
<h2 style="font-size:24px;">Part 2: QPE on Noisy Quantum System</h2>
<br>
<div style="background: #E8E7EB; border-radius: 5px;
-moz-border-radius: 5px;">
<p style="background: #800080;
border-radius: 5px 5px 0px 0px;
padding: 10px 0px 10px 10px;
font-size:18px;
color:white;
"><b>Goal</b></p>
<p style=" padding: 0px 0px 10px 10px;
font-size:16px;">Run the QPE circuit on a real quantum system to understand the result and limitations when using noisy quantum systems</p>
</div>
The accuracy analysis that we performed in Part 1 would not hold when the QPE circuit is executed on present-day noisy quantum systems. In Part 2, we will obtain QPE results by running the circuit on a backend from the IBM Quantum Experience to examine how noise affects the outcome and learn techniques to reduce its impact.
<h4 style="font-size: 17px">📓Step A. Load your account and select the backend from your provider. </h4>
```
## Your code goes here.
```
<h4 style="font-size: 17px">📓Step B. Generate multiple ( as many as you want ) transpiled circuits of <code>qc4</code> that you set up in Part 1 at the beginning. Choose one with the minimum circuit depth, and the other with the maximum circuit depth.</h4>
Transpile the circuit with the parameter `optimization_level = 3` to reduce the error in the result. As we learned in Lab 1, Qiskit by default uses a stochastic swap mapper to place the needed SWAP gates, so the transpiled circuit varies even under the same runtime settings. Therefore, to obtain a shorter-depth transpiled circuit and hence a smaller error in the outcome, transpile `qc4` multiple times and choose the one with the minimum circuit depth. Select the one with the maximum circuit depth as well, for comparison purposes.
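If you are unsure where to start, one possible approach (an illustrative sketch only, assuming the backend selected in Step A is stored in a variable named `backend`) is to transpile in a loop and keep the shallowest and deepest results:
```
## An illustrative sketch - adapt as you see fit
from qiskit import transpile

transpiled = [transpile(qc4, backend=backend, optimization_level=3) for _ in range(10)]
depths = [circuit.depth() for circuit in transpiled]
qc_min = transpiled[int(np.argmin(depths))]  # shallowest transpilation
qc_max = transpiled[int(np.argmax(depths))]  # deepest transpilation
print('min depth:', min(depths), ' max depth:', max(depths))
```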
```
## Your code goes here
```
<h4 style="font-size: 17px">📓Step C. Execute both circuits on the backend that you picked. Plot the histogram for the results and compare them with the simulation result in Part 1.</h4>
```
## Your code goes here
```
The following shows the sample result.

<h4 style="font-size: 17px">Step D. Measurement Error Mitigation </h4>
In the previous step, we utilized our knowledge about Qiskit transpiler to get the best result. Here, we try to mitigate the errors in the result further through the measurement mitigation technique that we learned in Lab 2.
<p>📓Construct the circuits to profile the measurement errors of all basis states using the function 'complete_meas_cal'. Obtain the measurement filter object, 'meas_filter', which will be applied to the noisy results to mitigate readout (measurement) error.
```
## Your Code goes here
```
<p>📓Plot the histogram of the results before and after the measurement error mitigation to exhibit the improvement.
```
## Your Code goes here
```
The following plot shows the sample result.

The figure below displays a simulation result together with sample final results from both the best and worst SWAP-mapping cases after applying the measurement error mitigation. In Lab 2, where the major source of error was the measurement itself, the outcomes improved significantly after the error mitigation procedure. For the QPE case, however, measurement error does not seem to be the foremost cause of the noise in the result; CNOT gate errors dominate the noise profile. Here, choosing the transpiled circuit with the least depth was the crucial step in reducing the errors in the result.

# 18 - Support Vector Machines
by [Alejandro Correa Bahnsen](albahnsen.com/)
version 0.1, Apr 2016
## Part of the class [Practical Machine Learning](https://github.com/albahnsen/PracticalMachineLearningClass)
This notebook is licensed under a [Creative Commons Attribution-ShareAlike 3.0 Unported License](http://creativecommons.org/licenses/by-sa/3.0/deed.en_US). Special thanks goes to [Jake Vanderplas](http://www.vanderplas.com)
Previously we introduced supervised machine learning.
There are many supervised learning algorithms available; here we'll go into brief detail one of the most powerful and interesting methods: **Support Vector Machines (SVMs)**.
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
plt.style.use('fivethirtyeight')
```
## Motivating Support Vector Machines
Support Vector Machines (SVMs) are a powerful supervised learning algorithm used for **classification** or for **regression**. SVMs are a **discriminative** classifier: that is, they draw a boundary between clusters of data.
Let's show a quick example of support vector classification. First we need to create a dataset:
```
from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=50, centers=2,
random_state=0, cluster_std=0.60)
plt.figure(figsize=(8,8))
plt.scatter(X[:, 0], X[:, 1], c=y, s=50);
```
A discriminative classifier attempts to draw a line between the two sets of data. Immediately we see a problem: such a line is ill-posed! For example, we could come up with several possibilities which perfectly discriminate between the classes in this example:
```
xfit = np.linspace(-1, 3.5)
plt.figure(figsize=(8,8))
plt.scatter(X[:, 0], X[:, 1], c=y, s=50)
for m, b in [(1, 0.65), (0.5, 1.6), (-0.2, 2.9)]:
plt.plot(xfit, m * xfit + b, '-k')
plt.xlim(-1, 3.5);
```
These are three *very* different separators which perfectly discriminate between these samples. Depending on which you choose, a new data point will be classified almost entirely differently!
How can we improve on this?
### Support Vector Machines: Maximizing the *Margin*
Support vector machines are one way to address this.
What support vector machines do is not only draw a line, but also consider a *region* around the line of some given width. Here's an example of what it might look like:
```
xfit = np.linspace(-1, 3.5)
plt.figure(figsize=(8,8))
plt.scatter(X[:, 0], X[:, 1], c=y, s=50)
for m, b, d in [(1, 0.65, 0.33), (0.5, 1.6, 0.55), (-0.2, 2.9, 0.2)]:
yfit = m * xfit + b
plt.plot(xfit, yfit, '-k')
plt.fill_between(xfit, yfit - d, yfit + d, edgecolor='none', color='#AAAAAA', alpha=0.4)
plt.xlim(-1, 3.5);
```
Notice here that if we want to maximize this width, the middle fit is clearly the best.
This is the intuition of **support vector machines**, which optimize a linear discriminant model in conjunction with a **margin** representing the perpendicular distance between the datasets.
#### Fitting a Support Vector Machine
Now we'll fit a Support Vector Machine Classifier to these points. While the mathematical details of the likelihood model are interesting, we'll let you read about those elsewhere. Instead, we'll just treat the scikit-learn algorithm as a black box which accomplishes the above task.
```
from sklearn.svm import SVC # "Support Vector Classifier"
clf = SVC(kernel='linear')
clf.fit(X, y)
```
To better visualize what's happening here, let's create a quick convenience function that will plot SVM decision boundaries for us:
```
import warnings
warnings.filterwarnings('ignore')
def plot_svc_decision_function(clf, ax=None):
"""Plot the decision function for a 2D SVC"""
if ax is None:
ax = plt.gca()
x = np.linspace(plt.xlim()[0], plt.xlim()[1], 30)
y = np.linspace(plt.ylim()[0], plt.ylim()[1], 30)
Y, X = np.meshgrid(y, x)
P = np.zeros_like(X)
for i, xi in enumerate(x):
for j, yj in enumerate(y):
P[i, j] = clf.decision_function([xi, yj])
# plot the margins
ax.contour(X, Y, P, colors='k',
levels=[-1, 0, 1], alpha=0.5,
linestyles=['--', '-', '--'])
plt.figure(figsize=(8,8))
plt.scatter(X[:, 0], X[:, 1], c=y, s=50)
plot_svc_decision_function(clf);
```
Notice that the dashed lines touch a couple of the points: these points are the pivotal pieces of this fit, and are known as the *support vectors* (giving the algorithm its name).
In scikit-learn, these are stored in the ``support_vectors_`` attribute of the classifier:
```
plt.figure(figsize=(8,8))
plt.scatter(X[:, 0], X[:, 1], c=y, s=50)
plot_svc_decision_function(clf)
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=200, facecolors='none');
```
Let's use IPython's ``interact`` functionality to explore how the distribution of points affects the support vectors and the discriminative fit.
(This is only available in IPython 2.0+, and will not work in a static view)
```
from ipywidgets import interact
def plot_svm(N=10):
X, y = make_blobs(n_samples=200, centers=2,
random_state=0, cluster_std=0.60)
X = X[:N]
y = y[:N]
clf = SVC(kernel='linear')
clf.fit(X, y)
plt.figure(figsize=(8,8))
plt.scatter(X[:, 0], X[:, 1], c=y, s=50)
plt.xlim(-1, 4)
plt.ylim(-1, 6)
plot_svc_decision_function(clf, plt.gca())
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=200, facecolors='none')
interact(plot_svm, N=[10, 200]);
```
Notice the unique thing about SVM is that only the support vectors matter: that is, if you moved any of the other points without letting them cross the decision boundaries, they would have no effect on the classification results!
#### Going further: Kernel Methods
Where SVM gets incredibly exciting is when it is used in conjunction with *kernels*.
To motivate the need for kernels, let's look at some data which is not linearly separable:
```
from sklearn.datasets import make_circles
X, y = make_circles(100, factor=.1, noise=.1)
clf = SVC(kernel='linear').fit(X, y)
plt.figure(figsize=(8,8))
plt.scatter(X[:, 0], X[:, 1], c=y, s=50)
# plot_svc_decision_function(clf);
```
Clearly, no linear discrimination will ever separate these data.
One way we can adjust this is to apply a **kernel**, which is some functional transformation of the input data.
For example, one simple model we could use is a **radial basis function**
```
r = np.exp(-(X[:, 0] ** 2 + X[:, 1] ** 2))
```
If we plot this along with our data, we can see the effect of it:
```
from mpl_toolkits import mplot3d
def plot_3D(elev=30, azim=30):
plt.figure(figsize=(8,8))
ax = plt.subplot(projection='3d')
ax.scatter3D(X[:, 0], X[:, 1], r, c=y, s=50)
ax.view_init(elev=elev, azim=azim)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('r')
interact(plot_3D, elev=[-90, 90], azim=(-180, 180));
```
We can see that with this additional dimension, the data becomes trivially linearly separable!
This is a relatively simple kernel; SVM has a more sophisticated version of this kernel built-in to the process. This is accomplished by using ``kernel='rbf'``, short for *radial basis function*:
```
clf = SVC(kernel='rbf')
clf.fit(X, y)
plt.figure(figsize=(8,8))
plt.scatter(X[:, 0], X[:, 1], c=y, s=50)
plot_svc_decision_function(clf)
```
Here there are effectively $N$ basis functions: one centered at each point! Through a clever mathematical trick, this computation proceeds very efficiently using the "Kernel Trick", without actually constructing the matrix of kernel evaluations.
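To make the idea of one basis function per training point concrete, here is a minimal sketch (not part of the original text) that materializes the $N \times N$ kernel matrix explicitly; the `gamma=1.0` value is an arbitrary choice for illustration, and `X` is the `make_circles` data from above:
```
from sklearn.metrics.pairwise import rbf_kernel
# Gram matrix of RBF similarities between every pair of training points.
# SVC with kernel='rbf' uses these values implicitly via the kernel trick,
# without ever constructing the high-dimensional feature map.
K = rbf_kernel(X, X, gamma=1.0)   # K[i, j] = exp(-gamma * ||x_i - x_j||^2)
print(K.shape)                    # (100, 100) -- one basis function per point
```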
<div align="center">
<h1><img width="30" src="https://madewithml.com/static/images/rounded_logo.png"> <a href="https://madewithml.com/">Made With ML</a></h1>
Applied ML · MLOps · Production
<br>
Join 30K+ developers in learning how to responsibly <a href="https://madewithml.com/about/">deliver value</a> with ML.
<br>
</div>
<br>
<div align="center">
<a target="_blank" href="https://newsletter.madewithml.com"><img src="https://img.shields.io/badge/Subscribe-30K-brightgreen"></a>
<a target="_blank" href="https://github.com/GokuMohandas/MadeWithML"><img src="https://img.shields.io/github/stars/GokuMohandas/MadeWithML.svg?style=social&label=Star"></a>
<a target="_blank" href="https://www.linkedin.com/in/goku"><img src="https://img.shields.io/badge/style--5eba00.svg?label=LinkedIn&logo=linkedin&style=social"></a>
<a target="_blank" href="https://twitter.com/GokuMohandas"><img src="https://img.shields.io/twitter/follow/GokuMohandas.svg?label=Follow&style=social"></a>
<br>
🔥 Among the <a href="https://github.com/topics/deep-learning" target="_blank">top ML</a> repositories on GitHub
</div>
<br>
<hr>
# Transformers
In this lesson we will learn how to implement the Transformer architecture to extract contextual embeddings for our text classification task.
<div align="left">
<a target="_blank" href="https://madewithml.com/courses/foundations/transformers/"><img src="https://img.shields.io/badge/📖 Read-blog post-9cf"></a>
<a href="https://github.com/GokuMohandas/MadeWithML/blob/main/notebooks/15_Transformers.ipynb" role="button"><img src="https://img.shields.io/static/v1?label=&message=View%20On%20GitHub&color=586069&logo=github&labelColor=2f363d"></a>
<a href="https://colab.research.google.com/github/GokuMohandas/MadeWithML/blob/main/notebooks/15_Transformers.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
</div>
# Overview
Transformers are a very popular architecture that leverage and extend the concept of self-attention to create very useful representations of our input data for a downstream task.
- **advantages**:
- better representation for our input tokens via contextual embeddings where the token representation is based on the specific neighboring tokens using self-attention.
- sub-word tokens, as opposed to character tokens, since they can hold more meaningful representation for many of our keywords, prefixes, suffixes, etc.
- attend (in parallel) to all the tokens in our input, as opposed to being limited by filter spans (CNNs) or memory issues from sequential processing (RNNs).
- **disadvantages**:
- computationally intensive
- require large amounts of data (mitigated by using pretrained models)
<div align="left">
<img src="https://raw.githubusercontent.com/GokuMohandas/MadeWithML/main/images/foundations/transformers/architecture.png" width="800">
</div>
<div align="left">
<small><a href="https://arxiv.org/abs/1706.03762" target="_blank">Attention Is All You Need</a></small>
</div>
# Set up
```
!pip install transformers==3.0.2 -q
import numpy as np
import pandas as pd
import random
import torch
import torch.nn as nn
SEED = 1234
def set_seeds(seed=1234):
"""Set seeds for reproducibility."""
np.random.seed(seed)
random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed) # multi-GPU
# Set seeds for reproducibility
set_seeds(seed=SEED)
# Set device
cuda = True
device = torch.device('cuda' if (
torch.cuda.is_available() and cuda) else 'cpu')
torch.set_default_tensor_type('torch.FloatTensor')
if device.type == 'cuda':
torch.set_default_tensor_type('torch.cuda.FloatTensor')
print (device)
```
## Load data
We will download the [AG News dataset](http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html), which consists of 120K text samples from 4 unique classes (`Business`, `Sci/Tech`, `Sports`, `World`)
```
import numpy as np
import pandas as pd
import re
import urllib
# Load data
url = "https://raw.githubusercontent.com/GokuMohandas/MadeWithML/main/datasets/news.csv"
df = pd.read_csv(url, header=0) # load
df = df.sample(frac=1).reset_index(drop=True) # shuffle
df.head()
# Reduce data size (too large to fit in Colab's limited memory)
df = df[:10000]
print (len(df))
```
## Preprocessing
We're going to clean up our input data first with operations such as lowercasing the text, removing stop (filler) words, applying regular-expression filters, etc.
```
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
import re
nltk.download('stopwords')
STOPWORDS = stopwords.words('english')
print (STOPWORDS[:5])
porter = PorterStemmer()
def preprocess(text, stopwords=STOPWORDS):
"""Conditional preprocessing on our text unique to our task."""
# Lower
text = text.lower()
# Remove stopwords
pattern = re.compile(r'\b(' + r'|'.join(stopwords) + r')\b\s*')
text = pattern.sub('', text)
# Remove words in parentheses
text = re.sub(r'\([^)]*\)', '', text)
# Spacing and filters
text = re.sub(r"([-;;.,!?<=>])", r" \1 ", text)
text = re.sub('[^A-Za-z0-9]+', ' ', text) # remove non alphanumeric chars
text = re.sub(' +', ' ', text) # remove multiple spaces
text = text.strip()
return text
# Sample
text = "Great week for the NYSE!"
preprocess(text=text)
# Apply to dataframe
preprocessed_df = df.copy()
preprocessed_df.title = preprocessed_df.title.apply(preprocess)
print (f"{df.title.values[0]}\n\n{preprocessed_df.title.values[0]}")
```
## Split data
```
import collections
from sklearn.model_selection import train_test_split
TRAIN_SIZE = 0.7
VAL_SIZE = 0.15
TEST_SIZE = 0.15
def train_val_test_split(X, y, train_size):
"""Split dataset into data splits."""
X_train, X_, y_train, y_ = train_test_split(X, y, train_size=train_size, stratify=y)
X_val, X_test, y_val, y_test = train_test_split(X_, y_, train_size=0.5, stratify=y_)
return X_train, X_val, X_test, y_train, y_val, y_test
# Data
X = preprocessed_df["title"].values
y = preprocessed_df["category"].values
# Create data splits
X_train, X_val, X_test, y_train, y_val, y_test = train_val_test_split(
X=X, y=y, train_size=TRAIN_SIZE)
print (f"X_train: {X_train.shape}, y_train: {y_train.shape}")
print (f"X_val: {X_val.shape}, y_val: {y_val.shape}")
print (f"X_test: {X_test.shape}, y_test: {y_test.shape}")
print (f"Sample point: {X_train[0]} → {y_train[0]}")
```
## Label encoder
```
import json
class LabelEncoder(object):
"""Label encoder for tag labels."""
def __init__(self, class_to_index={}):
self.class_to_index = class_to_index
self.index_to_class = {v: k for k, v in self.class_to_index.items()}
self.classes = list(self.class_to_index.keys())
def __len__(self):
return len(self.class_to_index)
def __str__(self):
return f"<LabelEncoder(num_classes={len(self)})>"
def fit(self, y):
classes = np.unique(y)
for i, class_ in enumerate(classes):
self.class_to_index[class_] = i
self.index_to_class = {v: k for k, v in self.class_to_index.items()}
self.classes = list(self.class_to_index.keys())
return self
def encode(self, y):
y_one_hot = np.zeros((len(y), len(self.class_to_index)), dtype=int)
for i, item in enumerate(y):
y_one_hot[i][self.class_to_index[item]] = 1
return y_one_hot
def decode(self, y):
classes = []
for i, item in enumerate(y):
index = np.where(item == 1)[0][0]
classes.append(self.index_to_class[index])
return classes
def save(self, fp):
with open(fp, 'w') as fp:
contents = {'class_to_index': self.class_to_index}
json.dump(contents, fp, indent=4, sort_keys=False)
@classmethod
def load(cls, fp):
with open(fp, 'r') as fp:
kwargs = json.load(fp=fp)
return cls(**kwargs)
# Encode
label_encoder = LabelEncoder()
label_encoder.fit(y_train)
num_classes = len(label_encoder)
label_encoder.class_to_index
# Class weights
counts = np.bincount([label_encoder.class_to_index[class_] for class_ in y_train])
class_weights = {i: 1.0/count for i, count in enumerate(counts)}
print (f"counts: {counts}\nweights: {class_weights}")
# Convert labels to tokens
print (f"y_train[0]: {y_train[0]}")
y_train = label_encoder.encode(y_train)
y_val = label_encoder.encode(y_val)
y_test = label_encoder.encode(y_test)
print (f"y_train[0]: {y_train[0]}")
print (f"decode([y_train[0]]): {label_encoder.decode([y_train[0]])}")
```
## Tokenizer
We'll be using the [BertTokenizer](https://huggingface.co/transformers/model_doc/bert.html#berttokenizer) to tokenize our input text in to sub-word tokens.
```
from transformers import DistilBertTokenizer
from transformers import BertTokenizer
# Load tokenizer and model
# tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
tokenizer = BertTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
vocab_size = len(tokenizer)
print (vocab_size)
# Tokenize inputs
encoded_input = tokenizer(X_train.tolist(), return_tensors="pt", padding=True)
X_train_ids = encoded_input["input_ids"]
X_train_masks = encoded_input["attention_mask"]
print (X_train_ids.shape, X_train_masks.shape)
encoded_input = tokenizer(X_val.tolist(), return_tensors="pt", padding=True)
X_val_ids = encoded_input["input_ids"]
X_val_masks = encoded_input["attention_mask"]
print (X_val_ids.shape, X_val_masks.shape)
encoded_input = tokenizer(X_test.tolist(), return_tensors="pt", padding=True)
X_test_ids = encoded_input["input_ids"]
X_test_masks = encoded_input["attention_mask"]
print (X_test_ids.shape, X_test_masks.shape)
# Decode
print (f"{X_train_ids[0]}\n{tokenizer.decode(X_train_ids[0])}")
# Sub-word tokens
print (tokenizer.convert_ids_to_tokens(ids=X_train_ids[0]))
```
## Datasets
We're going to create Datasets and DataLoaders to be able to efficiently create batches with our data splits.
```
class TransformerTextDataset(torch.utils.data.Dataset):
def __init__(self, ids, masks, targets):
self.ids = ids
self.masks = masks
self.targets = targets
def __len__(self):
return len(self.targets)
def __str__(self):
return f"<Dataset(N={len(self)})>"
def __getitem__(self, index):
ids = torch.tensor(self.ids[index], dtype=torch.long)
masks = torch.tensor(self.masks[index], dtype=torch.long)
targets = torch.FloatTensor(self.targets[index])
return ids, masks, targets
def create_dataloader(self, batch_size, shuffle=False, drop_last=False):
return torch.utils.data.DataLoader(
dataset=self,
batch_size=batch_size,
shuffle=shuffle,
drop_last=drop_last,
pin_memory=False)
# Create datasets
train_dataset = TransformerTextDataset(ids=X_train_ids, masks=X_train_masks, targets=y_train)
val_dataset = TransformerTextDataset(ids=X_val_ids, masks=X_val_masks, targets=y_val)
test_dataset = TransformerTextDataset(ids=X_test_ids, masks=X_test_masks, targets=y_test)
print ("Data splits:\n"
f" Train dataset:{train_dataset.__str__()}\n"
f" Val dataset: {val_dataset.__str__()}\n"
f" Test dataset: {test_dataset.__str__()}\n"
"Sample point:\n"
f" ids: {train_dataset[0][0]}\n"
f" masks: {train_dataset[0][1]}\n"
f" targets: {train_dataset[0][2]}")
# Create dataloaders
batch_size = 128
train_dataloader = train_dataset.create_dataloader(
batch_size=batch_size)
val_dataloader = val_dataset.create_dataloader(
batch_size=batch_size)
test_dataloader = test_dataset.create_dataloader(
batch_size=batch_size)
batch = next(iter(train_dataloader))
print ("Sample batch:\n"
f" ids: {batch[0].size()}\n"
f" masks: {batch[1].size()}\n"
f" targets: {batch[2].size()}")
```
## Trainer
Let's create the `Trainer` class that we'll use to facilitate training for our experiments.
```
import torch.nn.functional as F
class Trainer(object):
def __init__(self, model, device, loss_fn=None, optimizer=None, scheduler=None):
# Set params
self.model = model
self.device = device
self.loss_fn = loss_fn
self.optimizer = optimizer
self.scheduler = scheduler
def train_step(self, dataloader):
"""Train step."""
# Set model to train mode
self.model.train()
loss = 0.0
# Iterate over train batches
for i, batch in enumerate(dataloader):
# Step
batch = [item.to(self.device) for item in batch] # Set device
inputs, targets = batch[:-1], batch[-1]
self.optimizer.zero_grad() # Reset gradients
z = self.model(inputs) # Forward pass
J = self.loss_fn(z, targets) # Define loss
J.backward() # Backward pass
self.optimizer.step() # Update weights
# Cumulative Metrics
loss += (J.detach().item() - loss) / (i + 1)
return loss
def eval_step(self, dataloader):
"""Validation or test step."""
# Set model to eval mode
self.model.eval()
loss = 0.0
y_trues, y_probs = [], []
# Iterate over val batches
with torch.no_grad():
for i, batch in enumerate(dataloader):
# Step
batch = [item.to(self.device) for item in batch] # Set device
inputs, y_true = batch[:-1], batch[-1]
z = self.model(inputs) # Forward pass
J = self.loss_fn(z, y_true).item()
# Cumulative Metrics
loss += (J - loss) / (i + 1)
# Store outputs
y_prob = F.softmax(z, dim=1).cpu().numpy()
y_probs.extend(y_prob)
y_trues.extend(y_true.cpu().numpy())
return loss, np.vstack(y_trues), np.vstack(y_probs)
def predict_step(self, dataloader):
"""Prediction step."""
# Set model to eval mode
self.model.eval()
y_probs = []
# Iterate over val batches
with torch.no_grad():
for i, batch in enumerate(dataloader):
# Forward pass w/ inputs
inputs, targets = batch[:-1], batch[-1]
z = self.model(inputs)
# Store outputs
y_prob = F.softmax(z, dim=1).cpu().numpy()
y_probs.extend(y_prob)
return np.vstack(y_probs)
def train(self, num_epochs, patience, train_dataloader, val_dataloader):
best_val_loss = np.inf
for epoch in range(num_epochs):
# Steps
train_loss = self.train_step(dataloader=train_dataloader)
val_loss, _, _ = self.eval_step(dataloader=val_dataloader)
self.scheduler.step(val_loss)
# Early stopping
if val_loss < best_val_loss:
best_val_loss = val_loss
best_model = self.model
_patience = patience # reset _patience
else:
_patience -= 1
if not _patience: # 0
print("Stopping early!")
break
# Logging
print(
f"Epoch: {epoch+1} | "
f"train_loss: {train_loss:.5f}, "
f"val_loss: {val_loss:.5f}, "
f"lr: {self.optimizer.param_groups[0]['lr']:.2E}, "
f"_patience: {_patience}"
)
return best_model
```
# Transformer
## Scaled dot-product attention
The most popular type of self-attention is scaled dot-product attention from the widely-cited [Attention is all you need](https://arxiv.org/abs/1706.03762) paper. This type of attention involves projecting our encoded input sequences onto three matrices, queries (Q), keys (K) and values (V), whose weights we learn.
$ inputs \in \mathbb{R}^{N \times M \times H} $ ($N$ = batch size, $M$ = sequence length, $H$ = hidden dim)
$ Q = XW_q $ where $ W_q \in \mathbb{R}^{H \times d_q} $
$ K = XW_k $ where $ W_k \in \mathbb{R}^{H \times d_k} $
$ V = XW_v $ where $ W_v \in \mathbb{R}^{H \times d_v} $
$ attention (Q, K, V) = softmax( \frac{Q K^{T}}{\sqrt{d_k}} )V \in \mathbb{R}^{M \times d_v} $
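As a quick illustration, here is a minimal PyTorch sketch of these equations (illustrative only and not part of the lesson; the sizes `N`, `M`, `H` and `d_k` are assumed values, and the pretrained BERT model used below implements this internally):
```
import torch
import torch.nn.functional as F
N, M, H, d_k = 2, 5, 8, 8                        # assumed batch size, seq length, hidden dim, key dim
x = torch.randn(N, M, H)                         # encoded inputs
W_q, W_k, W_v = (torch.randn(H, d_k) for _ in range(3))
Q, K, V = x @ W_q, x @ W_k, x @ W_v              # project inputs onto queries, keys, values
scores = Q @ K.transpose(-2, -1) / (d_k ** 0.5)  # (N, M, M) similarity scores
weights = F.softmax(scores, dim=-1)              # attention weights over tokens
context = weights @ V                            # (N, M, d_k) contextual representations
print(context.shape)
```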
## Multi-head attention
Instead of applying self-attention only once across the entire encoded input, we can also split the input and apply self-attention in parallel (heads) to each section, then concatenate the results. This allows the different heads to learn unique representations while keeping the overall computation comparable, since we split the input into smaller subspaces (a short sketch follows the formulas below).
$ MultiHead(Q, K, V) = concat({head}_1, ..., {head}_{h})W_O $
* ${head}_i = attention(Q_i, K_i, V_i) $
* $h$ = # of self-attention heads
* $W_O \in \mathbb{R}^{hd_v \times H} $
* $H$ = hidden dim. (or dimension of the model $d_{model}$)
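Below is a hedged sketch using PyTorch's built-in `nn.MultiheadAttention`, just to make the multiple-heads idea concrete (the dimensions are assumed values; the BERT model used later applies its own multi-head attention internally):
```
import torch
import torch.nn as nn
H, h = 64, 8                                # assumed hidden dim and number of heads
mha = nn.MultiheadAttention(embed_dim=H, num_heads=h)
x = torch.randn(5, 2, H)                    # (seq_len, batch_size, hidden_dim) layout
out, attn_weights = mha(x, x, x)            # self-attention: query = key = value = x
print(out.shape, attn_weights.shape)        # (5, 2, 64) and (2, 5, 5) -- weights averaged over heads
```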
## Positional encoding
With self-attention, we aren't able to account for the sequential position of our input tokens. To address this, we can use positional encoding to create a representation of the location of each token with respect to the entire sequence. This can either be learned (with weights) or we can use a fixed function that can better extend to create positional encoding for lengths during inference that were not observed during training.
$ PE_{(pos,2i)} = sin({pos}/{10000^{2i/H}}) $
$ PE_{(pos,2i+1)} = cos({pos}/{10000^{2i/H}}) $
where:
* $pos$ = position of the token $(1...M)$
* $i$ = hidden dim $(1..H)$
This effectively allows us to represent each token's relative position using a fixed function, even for very long sequences. And because we've constrained the positional encodings to have the same dimensions as our encoded inputs, we can simply add them to the encoded inputs before feeding them into the multi-head attention layers.
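Here is a minimal NumPy sketch of the fixed sinusoidal encodings described above (illustration only; the pretrained BERT model loaded below uses learned position embeddings instead):
```
import numpy as np
def positional_encoding(M, H):
    """Return an (M, H) matrix of fixed sinusoidal positional encodings."""
    pos = np.arange(M)[:, None]                         # token positions 0..M-1
    i = np.arange(H)[None, :]                           # hidden dimensions 0..H-1
    angles = pos / np.power(10000, (2 * (i // 2)) / H)  # shared angle per sin/cos pair
    pe = np.zeros((M, H))
    pe[:, 0::2] = np.sin(angles[:, 0::2])               # even dims use sine
    pe[:, 1::2] = np.cos(angles[:, 1::2])               # odd dims use cosine
    return pe
print(positional_encoding(M=10, H=16).shape)            # (10, 16)
```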
## Architecture
And here's how it all fits together! It's an end-to-end architecture that creates these contextual representations and uses an encoder-decoder architecture to predict outcomes (one-to-one, many-to-one, many-to-many, etc.). Due to the complexity of the architecture, Transformers require massive amounts of data to train on without overfitting; however, they can be leveraged as pretrained models and fine-tuned with smaller datasets that are similar to the larger set they were initially trained on.
<div align="left">
<img src="https://raw.githubusercontent.com/GokuMohandas/MadeWithML/main/images/foundations/transformers/architecture.png" width="800">
</div>
<div align="left">
<small><a href="https://arxiv.org/abs/1706.03762" target="_blank">Attention Is All You Need</a></small>
</div>
> We're not going to implement the Transformer [from scratch](https://nlp.seas.harvard.edu/2018/04/03/attention.html) but we will use the [Hugging Face library](https://github.com/huggingface/transformers) to load a pretrained [BertModel](https://huggingface.co/transformers/model_doc/bert.html#bertmodel), which we'll use as a feature extractor and fine-tune on our own dataset.
## Model
We're going to use a pretrained [BertModel](https://huggingface.co/transformers/model_doc/bert.html#bertmodel) to act as a feature extractor. We'll only use the encoder to receive sequential and pooled outputs (`is_decoder=False` is default).
```
from transformers import BertModel
# transformer = BertModel.from_pretrained("distilbert-base-uncased")
# embedding_dim = transformer.config.dim
transformer = BertModel.from_pretrained("allenai/scibert_scivocab_uncased")
embedding_dim = transformer.config.hidden_size
class Transformer(nn.Module):
def __init__(self, transformer, dropout_p, embedding_dim, num_classes):
super(Transformer, self).__init__()
self.transformer = transformer
self.dropout = torch.nn.Dropout(dropout_p)
self.fc1 = torch.nn.Linear(embedding_dim, num_classes)
def forward(self, inputs):
ids, masks = inputs
seq, pool = self.transformer(input_ids=ids, attention_mask=masks)
z = self.dropout(pool)
z = self.fc1(z)
return z
```
> We decided to work with the pooled output, but we could have just as easily worked with the sequential output (encoder representation for each sub-token) and applied a CNN (or other decoder options) on top of it.
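For completeness, here is a hedged sketch of that alternative: a small 1D-CNN head over the per-token (sequential) output, reusing the `torch`/`nn` imports from the setup above. The layer sizes are illustrative assumptions and this class is not used in the rest of the lesson.
```
class TransformerCNN(nn.Module):
    def __init__(self, transformer, embedding_dim, num_filters, num_classes, dropout_p):
        super(TransformerCNN, self).__init__()
        self.transformer = transformer
        self.conv = nn.Conv1d(in_channels=embedding_dim, out_channels=num_filters,
                              kernel_size=3, padding=1)
        self.dropout = nn.Dropout(dropout_p)
        self.fc1 = nn.Linear(num_filters, num_classes)
    def forward(self, inputs):
        ids, masks = inputs
        seq, pool = self.transformer(input_ids=ids, attention_mask=masks)  # seq: (N, M, H)
        z = self.conv(seq.permute(0, 2, 1))      # (N, num_filters, M)
        z = torch.max(z, dim=2).values           # global max pool over the token dimension
        z = self.dropout(z)
        return self.fc1(z)
```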
```
# Initialize model
dropout_p = 0.5
model = Transformer(
transformer=transformer, dropout_p=dropout_p,
embedding_dim=embedding_dim, num_classes=num_classes)
model = model.to(device)
print (model.named_parameters)
```
## Training
```
# Arguments
lr = 1e-4
num_epochs = 100
patience = 10
# Define loss
class_weights_tensor = torch.Tensor(np.array(list(class_weights.values())))
loss_fn = nn.BCEWithLogitsLoss(weight=class_weights_tensor)
# Define optimizer & scheduler
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
optimizer, mode="min", factor=0.1, patience=5)
# Trainer module
trainer = Trainer(
model=model, device=device, loss_fn=loss_fn,
optimizer=optimizer, scheduler=scheduler)
# Train
best_model = trainer.train(num_epochs, patience, train_dataloader, val_dataloader)
```
## Evaluation
```
import json
from sklearn.metrics import precision_recall_fscore_support
def get_performance(y_true, y_pred, classes):
"""Per-class performance metrics."""
# Performance
performance = {"overall": {}, "class": {}}
# Overall performance
metrics = precision_recall_fscore_support(y_true, y_pred, average="weighted")
performance["overall"]["precision"] = metrics[0]
performance["overall"]["recall"] = metrics[1]
performance["overall"]["f1"] = metrics[2]
performance["overall"]["num_samples"] = np.float64(len(y_true))
# Per-class performance
metrics = precision_recall_fscore_support(y_true, y_pred, average=None)
for i in range(len(classes)):
performance["class"][classes[i]] = {
"precision": metrics[0][i],
"recall": metrics[1][i],
"f1": metrics[2][i],
"num_samples": np.float64(metrics[3][i]),
}
return performance
# Get predictions
test_loss, y_true, y_prob = trainer.eval_step(dataloader=test_dataloader)
y_pred = np.argmax(y_prob, axis=1)
# Determine performance
performance = get_performance(
y_true=np.argmax(y_true, axis=1), y_pred=y_pred, classes=label_encoder.classes)
print (json.dumps(performance['overall'], indent=2))
# Save artifacts
from pathlib import Path
dir = Path("transformers")
dir.mkdir(parents=True, exist_ok=True)
label_encoder.save(fp=Path(dir, "label_encoder.json"))
torch.save(best_model.state_dict(), Path(dir, "model.pt"))
with open(Path(dir, "performance.json"), "w") as fp:
json.dump(performance, indent=2, sort_keys=False, fp=fp)
```
## Inference
```
def get_probability_distribution(y_prob, classes):
"""Create a dict of class probabilities from an array."""
results = {}
for i, class_ in enumerate(classes):
results[class_] = np.float64(y_prob[i])
sorted_results = {k: v for k, v in sorted(
results.items(), key=lambda item: item[1], reverse=True)}
return sorted_results
# Load artifacts
device = torch.device("cpu")
tokenizer = BertTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
label_encoder = LabelEncoder.load(fp=Path(dir, "label_encoder.json"))
transformer = BertModel.from_pretrained("allenai/scibert_scivocab_uncased")
embedding_dim = transformer.config.hidden_size
model = Transformer(
transformer=transformer, dropout_p=dropout_p,
embedding_dim=embedding_dim, num_classes=num_classes)
model.load_state_dict(torch.load(Path(dir, "model.pt"), map_location=device))
model.to(device);
# Initialize trainer
trainer = Trainer(model=model, device=device)
# Create datasets
train_dataset = TransformerTextDataset(ids=X_train_ids, masks=X_train_masks, targets=y_train)
val_dataset = TransformerTextDataset(ids=X_val_ids, masks=X_val_masks, targets=y_val)
test_dataset = TransformerTextDataset(ids=X_test_ids, masks=X_test_masks, targets=y_test)
print ("Data splits:\n"
f" Train dataset:{train_dataset.__str__()}\n"
f" Val dataset: {val_dataset.__str__()}\n"
f" Test dataset: {test_dataset.__str__()}\n"
"Sample point:\n"
f" ids: {train_dataset[0][0]}\n"
f" masks: {train_dataset[0][1]}\n"
f" targets: {train_dataset[0][2]}")
# Dataloader
text = "The final tennis tournament starts next week."
X = preprocess(text)
encoded_input = tokenizer(X, return_tensors="pt", padding=True).to(torch.device("cpu"))
ids = encoded_input["input_ids"]
masks = encoded_input["attention_mask"]
y_filler = label_encoder.encode([label_encoder.classes[0]]*len(ids))
dataset = TransformerTextDataset(ids=ids, masks=masks, targets=y_filler)
dataloader = dataset.create_dataloader(batch_size=int(batch_size))
# Inference
y_prob = trainer.predict_step(dataloader)
y_pred = np.argmax(y_prob, axis=1)
label_encoder.index_to_class[y_pred[0]]
# Class distributions
prob_dist = get_probability_distribution(y_prob=y_prob[0], classes=label_encoder.classes)
print (json.dumps(prob_dist, indent=2))
```
## Interpretability
Let's visualize the self-attention weights from each of the attention heads in the encoder.
```
import sys
!rm -r bertviz_repo
!test -d bertviz_repo || git clone https://github.com/jessevig/bertviz bertviz_repo
if not "bertviz_repo" in sys.path:
sys.path += ["bertviz_repo"]
from bertviz import head_view
# Print input ids
print (ids)
print (tokenizer.batch_decode(ids))
# Get encoder attentions
seq, pool, attn = model.transformer(input_ids=ids, attention_mask=masks, output_attentions=True)
print (len(attn)) # 12 attention layers (heads)
print (attn[0].shape)
# HTML set up
def call_html():
import IPython
display(IPython.core.display.HTML('''
<script src="/static/components/requirejs/require.js"></script>
<script>
requirejs.config({
paths: {
base: '/static/base',
"d3": "https://cdnjs.cloudflare.com/ajax/libs/d3/3.5.8/d3.min",
jquery: '//ajax.googleapis.com/ajax/libs/jquery/2.0.0/jquery.min',
},
});
</script>
'''))
# Visualize self-attention weights
call_html()
tokens = tokenizer.convert_ids_to_tokens(ids[0])
head_view(attention=attn, tokens=tokens)
```
> Now you're ready to start the [MLOps lesson](https://madewithml.com/#mlops) to learn how to put all this foundational modeling knowledge to responsibly deliver value.
# Charts for **REINFORCE** or Monte-Carlo policy gradient
My notes on REINFORCE Algorithm.
## Symbol Lookup Table
| Symbol | Definition |
|-------------------- |---------------------------------------------------------------------------------------------- |
| $s \in S$ | $s$ denotes a state. |
| $a \in A$ | $a$ denotes an action. |
| $r \in R$ | $r$ denotes a reward. |
| $ \pi(a \vert s) $ | Policy function, returns probability of choosing action $a$ in state $s$. |
| $V(s)$ | State-Value function, Measures how good a state is. (in terms of expected reward). |
| $V^\pi (s)$ | State-Value function, When we are using policy $\pi$. |
| $Q^\pi$ | Action-value function, Measures how good an action is. |
| $Q^\pi (s, a)$ | Action-value function, How good is to take action $a$ in state $s$ when we use policy $\pi$. |
| $\gamma$ | Discount factor. |
| $G_t$ | Total return value. |
| $Q^\pi$ | Action-value function. |
| $V^\pi$ | State-value function. |
## Definition
[REINFORCE (Monte-Carlo policy gradient)](https://lilianweng.github.io/lil-log/2018/04/08/policy-gradient-algorithms.html#reinforce) relies on an estimated return by Monte-Carlo methods using episode samples to update the policy parameter $\theta$. REINFORCE works because the expectation of the sample gradient is equal to the actual gradient:
$$
\begin{eqnarray}
\nabla_{\theta}J(\theta) &=& \mathbb{E}_{\pi} [ Q^{\pi} (s, a) \nabla_\theta \ln \pi_\theta(a \vert s) ] \nonumber \\
&=& \mathbb{E}_{\pi}[G_t \nabla_\theta \ln \pi_\theta ( A_t \vert S_t)] \nonumber
\end{eqnarray}
$$
(Because $ Q^\pi (S_t, A_t) = \mathbb{E}_{\pi}[G_t \vert S_t, A_t] $)
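A minimal numeric sketch (with made-up values) of the two quantities each update needs: the discounted return $G_t$ and the loss term $-G_t \ln \pi_\theta(A_t \vert S_t)$. The `policy_optimize_step()` function later in this notebook computes the same things on real episode data.
```
# Toy trajectory: rewards and log-probabilities recorded while sampling the policy (assumed values)
rewards = [1.0, 1.0, 1.0]
log_probs = [-0.7, -0.6, -0.9]
gamma = 0.99
returns, G = [], 0.0
for r in reversed(rewards):          # G_t = r_t + gamma * G_{t+1}, computed back to front
    G = r + gamma * G
    returns.insert(0, G)
loss = sum(-lp * G_t for lp, G_t in zip(log_probs, returns))
print(returns)                       # approximately [2.97, 1.99, 1.0]
print(loss)                          # scalar whose gradient (w.r.t. theta) drives the update
```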
### Process
1. Initialize the policy parameter $\theta$ at random.
2. Generate one trajectory on policy $\pi_{\theta}: S_1, A_1, R_1, S_2, A_2, ... , S_T$.
3. For $t=1,2,...,T$:
1. Estimate the return $G_t$.
1. Update policy parameters: $\theta \leftarrow \theta + \alpha \gamma^t G_t \nabla_{\theta} \ln \pi_{\theta}(A_t \vert S_t)$
## Sources
This is just re-hash of what's already out there, nothing new per se.
1. [Lilian's Blog](https://lilianweng.github.io/lil-log/2018/04/08/policy-gradient-algorithms.html#reinforce)
1. [PyTorch's Github Repository](https://github.com/pytorch/examples/blob/master/reinforcement_learning/reinforce.py)
```
# Import all packages we want to use
from itertools import count
import numpy as np
import gym
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.distributions import Categorical
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
```
## Enough information about ``CartPole-v1``
A pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The system is controlled by applying a force of `+1` or `-1` to the cart. The pendulum starts upright, and the goal is to prevent it from falling over. A reward of `+1` is provided for every timestep that the pole remains upright. The episode ends when the pole is more than `15` degrees from vertical, or the cart moves more than `2.4` units from the center.
### Summary
| Property | Default | Note |
|-------------------- |------------ |------------------------------------------------------------------------------------------- |
| Max Episode Length | `500` | Check out this [line](https://github.com/openai/gym/blob/master/gym/envs/__init__.py#L63) |
| Action Space | `+1`, `-1` | The system is controlled by applying a force of `+1` or `-1` to the cart |
| Default reward | `+1` | A reward of `+1` is provided for every time-step that the pole remains upright |
### Sample output
<center><img src='./CartPole.gif'></center>
[Source](https://gym.openai.com/envs/CartPole-v1/)
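Before training, a quick sanity check (a sketch, not part of the original notes): run one episode with random actions, assuming the classic 4-tuple `env.step` API that this notebook uses throughout.
```
import gym
env = gym.make('CartPole-v1')
state = env.reset()
total_reward, done = 0.0, False
while not done:
    action = env.action_space.sample()             # pick 0 or 1 at random
    state, reward, done, info = env.step(action)   # classic 4-tuple API (pre-gym 0.26)
    total_reward += reward
print('Random policy collected reward:', total_reward)
```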
```
# Preparing the Cart Pole
env = gym.make('CartPole-v1')
env.seed(0)
torch.manual_seed(0)
gamma = 0.99
# A very simple NN with one hidden layer acts as a brain
# We are simply mapping observation from environment to actions using one hidden layer!
class REINFORCEBrain(nn.Module):
def __init__(self):
super(REINFORCEBrain, self).__init__()
self.affine1 = nn.Linear(4, 128)
self.affine2 = nn.Linear(128, 2)
self.saved_log_probs = []
self.rewards = []
def forward(self, x):
x = F.relu(self.affine1(x))
action_scores = self.affine2(x)
return F.softmax(action_scores, dim=1)
def total_reward_received(self):
return np.sum(self.rewards)
# No need to use GPU yet! you can call .cuda() after REINFORCEBrain() to instantiate CUDA version of Brain
policy = REINFORCEBrain()
#Defining an optimizer
optimizer = optim.Adam(policy.parameters(), lr=1e-2)
# Retrieving a value for epsilon using numpy's built-ins
eps = np.finfo(np.float32).eps.item()
# Sample from policy π and store some extra info for calculating Loss J(θ)
def select_action(state):
state = torch.from_numpy(state).float().unsqueeze(0)
# Calculating probabilities of selecting each action
probs = policy(state)
# Using Categorical helper for sampling and log_probs
m = Categorical(probs)
action = m.sample()
# Keeping log probs. We need this to calculate J(θ)
policy.saved_log_probs.append(m.log_prob(action))
# converting tensor to python scalar and returning it
return action.item()
# just sample policy and return output
# Used only for logging
def sample_policy(state):
state = torch.from_numpy(state).float().unsqueeze(0)
# Calculating probabilities of selecting each action
probs = policy(state)
return probs.detach().numpy()
def policy_optimize_step():
R = 0
policy_loss = []
rewards = []
# Discounted Reward Calculation
for r in policy.rewards[::-1]:
R = r + gamma * R
rewards.insert(0, R)
# List conversion to Tensor
rewards = torch.tensor(rewards)
# Normalizing Reward Tensor to have zero mean and unit variance
rewards = (rewards - rewards.mean()) / (rewards.std() + eps)
# Calculating Loss per action/reward
for log_prob, reward in zip(policy.saved_log_probs, rewards):
policy_loss.append(-log_prob * reward)
optimizer.zero_grad()
# converting list of tensors to array and summing all of them to create total loss
policy_loss = torch.cat(policy_loss).sum()
policy_loss.backward()
optimizer.step()
# Removing data from last episode
del policy.rewards[:]
del policy.saved_log_probs[:]
def train(num_episodes, state_bank):
# Length of each episode
ep_history = []
# Total reward gathered in each episode
rw_history = []
# Record Selected Actions
policy_output_on_state_bank = {}
for current_episode in range(num_episodes):
# Reseting the Environment
state = env.reset()
# Gathering data, with max step of 500
for t in range(500):
action = select_action(state)
state, reward, done, _ = env.step(action)
policy.rewards.append(reward)
if done:
break
# Sample from our policy to log how it changes over training
l = []
for sb in state_bank:
probs = sample_policy(sb)
l.append(probs)
policy_output_on_state_bank[str(current_episode)] = l
ep_history.append(t)
rw_history.append(policy.total_reward_received())
# Optimize our policy after gathering a full episode
policy_optimize_step()
# Logging
if (current_episode+1) % 50 == 0:
print('Episode {}\tLast Episode length: {:5d}\t'.format(current_episode, t))
return ep_history, rw_history, policy_output_on_state_bank
state_bank = np.random.uniform(-1.5, 1.5, (180, 4))
episodes_to_train = 300
ep_history, rw_history, pout = train(episodes_to_train, state_bank)
# Making plots larger!
matplotlib.rcParams['figure.figsize'] = [15, 10]
# X Axis of the plots
xx = range(episodes_to_train)
plt.subplot(2, 1, 1)
plt.plot(xx, ep_history, '.-')
plt.title('Reward and Episode Length')
plt.ylabel('Length of each Episode')
plt.subplot(2, 1, 2)
plt.plot(xx, rw_history, '.-')
plt.xlabel('Episode')
plt.ylabel('Reward')
plt.show()
```
## Policy Evolution Chart
Adding a very unintuitive plot to show how our policy decision changes over training
```
data_theta_rad = [float(x)*np.pi/180.0 for x in np.linspace(1, 360, 180)]
data_theta_rad[0] = 0
data_theta_rad[-1] = 2 * np.pi
ax1 = plt.subplot(121, polar=True)
for i in np.linspace(1, 299, 25):
data_r = np.array(pout[str(int(i))]).squeeze()[:, 0]
ax1.plot(data_theta_rad, data_r, color='r', linewidth=0.5)
ax1.set_rmax(95)
ax1.grid(True)
ax1.fill_between
ax1.set_title("Choosing Action A", va='bottom')
ax1.fill_between(data_theta_rad, 0, data_r, facecolor='r', alpha=0.01)
ax1.axes.get_xaxis().set_visible(False)
ax2 = plt.subplot(122, polar=True)
for i in np.linspace(1, 299, 25):
data_r = np.array(pout[str(int(i))]).squeeze()[:, 1]
ax2.plot(data_theta_rad, data_r, color='b', linewidth=0.5)
ax2.set_rmax(95)
ax2.grid(True)
ax2.fill_between
ax2.set_title("Choosing Action B", va='bottom')
ax2.fill_between(data_theta_rad, 0, data_r, facecolor='b', alpha=0.01)
ax2.axes.get_xaxis().set_visible(False)
plt.show()
```
## Make A Movie !!
Policy change over training.
<center><img src='./PolicyChange.gif'></center>
Below is the code used to generate this video
```
# import cv2
# from tqdm import tqdm
# import io
# from PIL import Image
# import matplotlib.pyplot as plt
# fourcc = cv2.VideoWriter_fourcc(*'DIVX')
# video = cv2.VideoWriter('output1.avi', fourcc, 20.0, (1080, 720))
# for t in tqdm(np.linspace(1, 299, 35), desc='Generating Video ...'):
# ax1 = plt.subplot(121, polar=True)
# data_r = np.array(pout[str(int(t))]).squeeze()[:, 0]
# ax1.plot(data_theta_rad, data_r, color='r', linewidth=0.5)
# ax1.set_rmax(95)
# ax1.grid(True)
# ax1.fill_between
# ax1.set_title("Choosing Action A", va='bottom')
# ax1.fill_between(data_theta_rad, 0, data_r, facecolor='r', alpha=0.01)
# ax1.axes.get_xaxis().set_visible(False)
# axes = plt.gca()
# axes.set_ylim([0, 1])
# buf = io.BytesIO()
# plt.savefig(buf, format='png')
# buf.seek(0)
# img = Image.open(buf)
# img_out_cv2 = np.array(img)
# img_out_cv2 = img_out_cv2[:, :, ::-1].copy()
# video.write(img_out_cv2)
# buf.close()
# video.release()
# save_dir = "/src/rl-advantures/figs/"
# for t in tqdm(range(300), desc='Generating Video ...'):
# ax1 = plt.subplot(121, polar=True)
# data_r = np.array(pout[str(int(t))]).squeeze()[:, 0]
# ax1.plot(data_theta_rad, data_r, color='r', linewidth=0.5)
# ax1.set_rmax(95)
# ax1.grid(True)
# ax1.fill_between
# ax1.set_title("Choosing Action A", va='bottom')
# ax1.fill_between(data_theta_rad, 0, data_r, facecolor='r', alpha=0.1)
# ax1.axes.get_xaxis().set_visible(False)
# axes = plt.gca()
# axes.set_ylim([0, 1])
# ax2 = plt.subplot(122, polar=True)
# data_r = np.array(pout[str(int(t))]).squeeze()[:, 1]
# ax2.plot(data_theta_rad, data_r, color='b', linewidth=0.5)
# ax2.set_rmax(95)
# ax2.grid(True)
# ax2.fill_between
# ax2.set_title("Choosing Action B", va='bottom')
# ax2.fill_between(data_theta_rad, 0, data_r, facecolor='b', alpha=0.1)
# ax2.axes.get_xaxis().set_visible(False)
# axes = plt.gca()
# axes.set_ylim([0, 1])
# plt.savefig(save_dir + str(t) + '.png')
# plt.clf()
```
## Notes
1. The reward plot is redundant, since every value of `rw_history` is just the corresponding `ep_history` value plus `1`.
1. Continuing the training will hurt the performance of the model!
### Useful Tools
[Markdown Table Generator](https://www.tablesgenerator.com/markdown_tables)
<h1>Quiz 1 : Comprehension</h1>
1. Name some common data preprocessing steps.
2. Explain several ways of imputing missing values.
3. When do we need to perform feature centering and scaling?
4. What does the Data Science workflow look like?
1. Common data preprocessing steps include:
- Binarization
- Mean removal
- Scaling
- Normalization
- Label encoding
2. Several ways to handle missing values:
- Dropping missing values: when a row or column contains many missing (NaN) values, that row or column is removed
- Filling with the mean/median: applies to numeric data; the column average is computed as a float and can then be converted back to an integer
- Filling with the mode: applies to categorical data; the most frequent category is used to fill in the NaN values
- Filling with bfill (backward fill) or ffill (forward fill): NaN values are filled with the previous or the following value
- KNN: missing values are imputed with the KNN algorithm based on the nearest neighboring data points
3. When a predictor column has a large distribution scale it dominates the construction of the model, while a column with a small scale has only a small influence on the model architecture, so the scales need to be adjusted so that the features are balanced.
4. The Data Science workflow:
- Collect data from a data source
- Process the data so it can be analyzed
- Build a model from that analysis
- Once the model is finished, put it into production
- Finally, monitor the model
<h1>Quiz 2 : Application</h1>
Congratulations, up to this point you have learned a lot about data science, from Python to data manipulation, visualization, and model building. Now it is time to apply all of it.
Download and use the titanic.csv data to build an ML model. Get to know this data thoroughly by performing EDA (Exploratory Data Analysis), visualization, data analysis, data preprocessing, and modeling.
<b>(Optional)</b> Download and use the titanic_test.csv data to test your model by making predictions on it. Submit the predictions to Kaggle and check the score. https://www.kaggle.com/c/titanic/submit

```
# Read titanic.csv
import pandas as pd
df = pd.read_csv('titanic.csv')
df
# EDA - Columns titanic.csv
# ============================
# PassengerId : int (Clean)
# Survived : int (Clean)
# Pclass : int (Clean)
# Name : string (Not Clean, because the values are unique strings)
# Sex : string (Not Clean, because it is categorical data that can be converted to numeric)
# Age : float (Not Clean, because it has missing values)
# SibSp : int (Clean)
# Parch : int (Clean)
# Ticket : string (Clean, because the values are unique strings)
# Fare : float (Clean)
# Cabin : (Not Clean, because it has missing values)
# Embarked : (Not Clean, because it is categorical data that can be converted to numeric and it has missing values)
# My approach: convert the Not Clean categorical data to numeric, while for the Not Clean missing values
# I impute with the KNN algorithm, because the Age, Cabin, and Embarked columns are most likely related to the other columns
# Drop the Name and Ticket columns because they are unique and can be represented by the PassengerId column
df = df.drop(['Name', 'Ticket'], axis=1)
df
# Encode the categorical columns Sex and Embarked
obj_sex = {
'male' : 0,
'female' : 1
}
obj_embarked = {
'C' : 0,
'Q' : 1,
'S' : 2
}
df['Sex'] = df['Sex'].replace(obj_sex)
df['Embarked'] = df['Embarked'].replace(obj_embarked)
# Encode the Cabin column as float
import numpy as np
df['Cabin'] = df['Cabin'].replace(np.nan, '0')
key_cabin = df['Cabin'].unique()
key_cabin.sort()
value_cabin = np.arange(0, len(df['Cabin'].unique()))
obj_cabin = dict(zip(key_cabin, value_cabin.T))
df['Cabin'] = df['Cabin'].replace(obj_cabin)
df['Cabin'] = df['Cabin'].replace(0, np.nan)
# Impute missing values in the Age, Cabin, and Embarked columns
from sklearn.impute import KNNImputer
imp = KNNImputer(n_neighbors=5)
df[['Age', 'Cabin', 'Embarked']] = imp.fit_transform(df[['Age', 'Cabin', 'Embarked']])
# The NaN values have been filled
df
df['Survived'].value_counts()
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_validate
X = df.drop('Survived', axis=1)
y = df['Survived']
from sklearn.preprocessing import StandardScaler
stdscalar = StandardScaler()
datascale = stdscalar.fit_transform(X)
X = pd.DataFrame(datascale, columns=X.columns)
X
X.describe()
df.describe()
def knn_predict(k):
model = KNeighborsClassifier(n_neighbors=k)
score = cross_validate(model, X, y, cv=10, return_train_score=True)
train_score = score['train_score'].mean()
test_score = score['test_score'].mean()
return train_score, test_score
train_scores = []
test_scores = []
for k in range(2, 100):
train_score, test_score = knn_predict(k)
train_scores.append(train_score)
test_scores.append(test_score)
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(14, 8))
ax.plot(range(2, 100), train_scores, marker='x', color='b', label='Train Scores')
ax.plot(range(2, 100), test_scores, marker='o', color='g', label='Test Scores')
ax.set_xlabel('Nilai K')
ax.set_ylabel('Score')
fig.legend()
plt.show()
from sklearn.model_selection import GridSearchCV
model = KNeighborsClassifier()
param_grid = {'n_neighbors':np.arange(5, 50), 'weights':['distance', 'uniform']}
gscv = GridSearchCV(model, param_grid=param_grid, scoring='accuracy', cv=5)
gscv.fit(X, y)
gscv.best_params_
gscv.best_score_
# Read titanic_test.csv
df_test = pd.read_csv('titanic_test.csv')
df_test
df_test = df_test.drop(['Name', 'Ticket'], axis=1)
df_test
df_test['Sex'] = df_test['Sex'].replace(obj_sex)
df_test['Embarked'] = df_test['Embarked'].replace(obj_embarked)
df_test['Cabin'] = df_test['Cabin'].replace(np.nan, '0')
key_cabin_test = df_test['Cabin'].unique()
key_cabin_test.sort()
value_cabin_test = np.arange(0, len(df_test['Cabin'].unique()))
obj_cabin_test = dict(zip(key_cabin_test, value_cabin_test.T))
df_test['Cabin'] = df_test['Cabin'].replace(obj_cabin_test)
df_test['Cabin'] = df_test['Cabin'].replace(0, np.nan)
# Missing values filled
df_test[['Age', 'Fare', 'Cabin', 'Embarked']] = imp.fit_transform(df_test[['Age', 'Fare', 'Cabin', 'Embarked']])
# Scaling
datascale_test = stdscalar.fit_transform(df_test)
X_test = pd.DataFrame(datascale_test, columns=df_test.columns)
# Prediction
y_pred = gscv.predict(X_test)
df_test['Survived'] = y_pred
df_test
df_test = df_test.drop(['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Cabin', 'Embarked'], axis=1)
df_test
df_test['Survived'].value_counts()
df_test.to_csv("titanic_test_mazharrasyad.csv", index=False)
# Walkthrough of Daily Assignment 5
import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV
train = pd.read_csv('titanic.csv')
test = pd.read_csv('titanic_test.csv')
train = train.drop(['PassengerId', 'Name', 'Ticket', 'Cabin'], axis=1).dropna()
x = train.drop('Survived', axis=1)
y = train['Survived']
x = pd.get_dummies(x)
x = pd.DataFrame(StandardScaler().fit_transform(x), columns=list(x.columns.values))
test = test.drop('Cabin', axis=1).dropna()
x_test = test.drop(['PassengerId', 'Name', 'Ticket'], axis=1)
x_test = pd.get_dummies(x_test)
x_test = pd.DataFrame(StandardScaler().fit_transform(x_test), columns=list(x_test.columns.values))
model = KNeighborsClassifier()
params = {'n_neighbors':np.arange(1, 50), 'metric':['euclidean','manhattan','minkowski'], 'weights':['distance', 'uniform']}
gscv = GridSearchCV(model, param_grid=params, cv=5, scoring='accuracy')
gscv.fit(x, y.ravel())
print(gscv.best_params_)
print(gscv.best_score_)
model = KNeighborsClassifier(metric='euclidean', n_neighbors=10, weights='uniform')
model.fit(x,y.ravel())
y_pred = model.predict(x_test)
y_pred
test['Survived'] = y_pred
test.Survived.value_counts()
test[['PassengerId', 'Survived']].to_csv('titanic_test-2.csv', index=False)
```
# Midterm Answer Script
**Name**: Ferdous Zeaul Islam
**ID**: 173 1136 042
**Course**: CSE445 (Machine Learning)
**Faculty**: Dr. Sifat Momen (Sfm1)
**Section**: 01
**Semester**: Spring 2021
### N.B- please put the diabetes.csv dataset on the same directory as the ipynb file.
```
# only need this line in jupyter
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
```
## (a) Read the dataset (which is in the csv format) using panda's dataframe.
```
diabetes_df = pd.read_csv('./diabetes.csv')
diabetes_df.shape
```
## (b) Find out the number of instances and the number of features (including the target class) in the dataset.
```
print('Number of instances in the dataset =', diabetes_df.shape[0])
print('Number of features in the dataset =', diabetes_df.shape[1])
```
## (c) Does the dataset have any missing entries? Show your workings.
```
diabetes_df.info()
```
### Explanation:
We can observe from the output of the command above that every column/feature of the dataset has a non-null count equal to the total number of instances we found in Question (b). Therefore, we can state that **to the naked eye there are no missing entries in this dataset.**
## (d) Here “Outcome” is the target class and contains values zeros or ones. Determine how many instances have the outcome values zeroes and how many have the outcome values ones. Hence or otherwise, comment on whether this dataset suffers from class imbalance problem.
```
outcome_freq = diabetes_df.Outcome.value_counts()
outcome_freq
num_total_instances = diabetes_df.shape[0]
num_outcome_zero = outcome_freq[0]
num_outcome_one = outcome_freq[1]
outcome_zero_data_percentage = round((num_outcome_zero*100)/num_total_instances, 3)
print('Percentage of data with outcome zero =', outcome_zero_data_percentage)
outcome_one_data_percentage = round((num_outcome_one*100)/num_total_instances, 3)
print('Percentage of data with outcome one =', outcome_one_data_percentage)
```
### Explanation:
With respect to "Outcome" we see that **65.104% of the instances have the value zero** and the remaining **34.896% have the value one**. Clearly, **the dataset suffers from class imbalance.**
## (e) Show the first 5 and the last 5 instances of the dataset.
```
diabetes_df.head()
diabetes_df.tail()
```
## (f) Often, in many datasets, it may appear that there exists no missing entries. However, when you look at the dataset closely, it is often found that the missing entries are replaced by a zero (0). Check if this dataset has this issue or not. Show and explain your workings.
```
diabetes_df[30:35]
diabetes_df[342:347]
diabetes_df[706:711]
diabetes_df[(diabetes_df['DiabetesPedigreeFunction'] == 0)].shape[0]
diabetes_df[(diabetes_df['Age'] == 0)].shape[0]
```
### Explanation-
Apart from the 'Pregnancy' and 'Outcome' columns, a value of 0 in any other column is nonsensical. By printing various segments of the data we see that some instances have a 0 value in the columns 'Glucose', 'BloodPressure', 'SkinThickness', 'Insulin' and 'BMI'. So we can state that **there are missing entries replaced with 0 in this dataset.** Further calculations are shown below,
```
missing_data_count = diabetes_df[ (diabetes_df['Glucose']==0) | (diabetes_df['BloodPressure']==0) | (diabetes_df['BMI']==0)
| (diabetes_df['Insulin']==0) | (diabetes_df['SkinThickness']==0) ].shape[0]
print('A total of', missing_data_count, 'instances have missing data (one or more columns invalidly contain zero).')
```
## (g) Draw a histogram for each numerical features. You may use the hist() function of the panda's dataframe. Documentation on this can be found at https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.hist.html
### In order to make the histograms for each features visually appealing, you are advised to tweak bins and figsize parameters.
```
diabetes_df.hist(bins = 9, figsize = (15, 15))
plt.show()
```
## (h) One of the ways to visualize how each attribute is correlated with other attributes is by drawing a seaborn correlation heatmap. Read the documentation on how to generate correlation heatmap using the seaborn library. The following link provides a quick overview on how to do this: https://www.geeksforgeeks.org/how-to-create-a-seaborn-correlation-heatmap-in-python/
### I strongly suggest you to adjust the figure size before using the heatmap. For instance, you can write the code plt.figure (figsize = (a,b)) before using the seaborn's heatmap [Here a and b are appropriate choices for the figure size that you need to decide on].
```
import seaborn
# help taken from ->
# https://medium.com/@szabo.bibor/how-to-create-a-seaborn-correlation-heatmap-in-python-834c0686b88e
plt.figure(figsize=(15, 8))
corr_matrix = diabetes_df.corr()
# mask to hide the upper triangle of the symmetric corr-matrix
# mask = np.triu(np.ones_like(corr_matrix, dtype=np.bool))
heatmap = seaborn.heatmap(
# correlation matrix
corr_matrix,
# mask the top triangle of the matrix
# mask=mask,
# two-contrast color, different color for + -
cmap="PiYG",
# color map range
vmin=-1, vmax=1,
# show corr values in the cells
annot=True
)
# set a title
heatmap.set_title('Correlation Heatmap', fontdict={'fontsize':20}, pad=16);
plt.show()
```
## (i) If this dataset has the issue discussed in (f), you are now required to write a function in python that will replace each zeros by the corresponding median value of the features. Note that you may require to use the numpy library.
We saw in (f) that there were some invalid zeroes in the columns- **'Glucose', 'BloodPressure', 'SkinThickness', 'Insulin' and 'BMI'**.
```
column_with_invalid_zeroes = ['Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI']
for column in column_with_invalid_zeroes:
# extract the column from original dataframe
column_data = diabetes_df[column]
# replace zero values with np.NaN
column_data = column_data.replace(0, np.NaN)
# replace np.NaN values with the median
column_data = column_data.fillna(column_data.median())
# put the column in the original dataframe
diabetes_df[column] = column_data
```
Now if we run the same code as we did of (f) to count missing values (i.e contains invalid zero),
```
missing_data_count = diabetes_df[ (diabetes_df['Glucose']==0) | (diabetes_df['BloodPressure']==0)
| (diabetes_df['BMI']==0) | (diabetes_df['Insulin']==0)
| (diabetes_df['SkinThickness']==0) ].shape[0]
print('A total of', missing_data_count, 'instances have missing data (one or more columns invalidly contain zero).')
```
**Therefore we can safely assume that the invalid zeroes have been replaced by their columns' median values.**
## (j) Split the dataset into X and y where X contains all the predictors and y contains only the entries in the target class.
```
X = diabetes_df.drop(columns=['Outcome'])
y = diabetes_df['Outcome']
diabetes_df.head()
X.head()
y.head()
```
## (k) Use the train_test_split function to split the dataset into train set and test set in the ratio 80:20.
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)
train_data_percentage = round((X_train.shape[0]/X.shape[0])*100, 2)
test_data_percentage = round((X_test.shape[0]/X.shape[0])*100, 2)
print("Test size = " + str(test_data_percentage) + "%" + " Train size = " + str(train_data_percentage) + "%")
```
## (l) Write a code to implement the zeroR classifier (i.e. a baseline classifier) on this dataset. Determine the precision, recall, F1 score, train accuracy and the test accuracy.
```
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import classification_report
# ZeroR classifier
model = DummyClassifier(strategy = 'most_frequent', random_state = 42)
# Dataset is trained and a model is created
model.fit(X_train,y_train)
y_train_predictions = model.predict(X_train)
y_test_predictions = model.predict(X_test)
print('For the train predictions:\n', classification_report(y_train, y_train_predictions))
print()
print('For the test predictions:\n', classification_report(y_test, y_test_predictions))
```
## (m) Apply the KNN classifier with the euclidean distance as the distance metric on this dataset. You need to determine a suitable value of the hyperparameter, k. One way to do this is to apply the KNN classifier with different values of k and determine the train and test accuracies. Plot a graph of train and test accuracy with respect to k and determine the value of k for which the difference between the train and the test accuracy is minimum. You may require to do feature scaling before using the KNN classifier.
Before we begin applying the KNN algorithm we need to scale our dataset. **We must scale both the train and test segments of the dataset using the same min and max values (computed from the train set) for the corresponding columns.**
```
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import MinMaxScaler
X_train.head()
scaler = MinMaxScaler()
X_train_scaled_using_library = pd.DataFrame(scaler.fit_transform(X_train), columns=X_train.columns)
X_train_scaled_using_library.hist(bins = 9, figsize = (15, 15))
plt.show()
columns = X.columns
X_train_col_min = []
X_train_col_max = []
col_idx = 0
for column in columns:
X_train_col_max.append(X_train[column].max())
X_train_col_min.append(X_train[column].min())
X_train_scaled = X_train.copy()
# MUST MAKE COLUMNS INTO FLOAT DATATYPE, OTHERWISE SCALING WILL NOT WORK
# spent 3hs for this (:
X_train_scaled[list(columns)] = X_train_scaled[list(columns)].astype(float)
for (row_idx, data) in X_train.iterrows():
col_idx = 0
for val in data:
column = columns[col_idx]
scaled_val = (val - X_train_col_min[col_idx]) / (X_train_col_max[col_idx] - X_train_col_min[col_idx])
X_train_scaled.at[row_idx, column] = float(scaled_val)
col_idx += 1
X_train_scaled.hist(bins = 9, figsize = (15, 15))
plt.show()
```
Of the two scaling runs above, the first was done using sklearn's MinMaxScaler() and the second by implementing the scaling process manually. From the two sets of column histograms above we can conclude that our manual scaling process is as accurate as sklearn's MinMaxScaler().
Now we can proceed to manually scale the test set, **using the minimum and maximum values of the train dataset**,
```
X_test_scaled = X_test.copy()
X_test_scaled[list(columns)] = X_test_scaled[list(columns)].astype(float)
for (row_idx, data) in X_test_scaled.iterrows():
col_idx = 0
for val in data:
column = columns[col_idx]
scaled_val = (val - X_train_col_min[col_idx]) / (X_train_col_max[col_idx] - X_train_col_min[col_idx])
X_test_scaled.at[row_idx, column] = scaled_val
col_idx += 1
X_test_scaled.head()
```
Now we implement a function that applies the KNN classifier for k values in the range provided as a function parameter,
```
def check_k_in_range(left, right):
k_values = []
for i in range (left, right):
k_values.append(i)
train_accuracies = []
test_accuracies = []
for k in k_values:
# k-nn classifier witk k neighbours and euclidian distance
model = KNeighborsClassifier(n_neighbors=k, metric='minkowski', p=2)
# train model
model.fit(X_train_scaled, y_train)
# train predictions
y_train_predictions = model.predict(X_train_scaled)
# train accuracy for current k value
train_accuracies.append(accuracy_score(y_train, y_train_predictions))
# test predictions
y_test_predictions = model.predict(X_test_scaled)
# test accuracy for current k value
test_accuracies.append(accuracy_score(y_test, y_test_predictions))
# plot the Test-Accuracy, Training-Accuracy VS K-value
plt.figure(figsize=(15, 8))
plt.title('Train accuracy, Test accuracy vs K-values')
plt.plot(k_values, train_accuracies, 'ro-', k_values, test_accuracies,'bv--')
plt.legend(['Training Accuracy','Test Accuracy'])
plt.xlabel('K values')
plt.ylabel('Accuracy')
min_k = 1
max_k = int(X_train.shape[0]/5)
print('Minimum k = ', min_k, 'Maximum k = ', max_k)
check_k_in_range(min_k, max_k)
```
### Explanation
From the figure we can observe that the optimal k-value lies in the range 10 to 20, because in this range the train and test accuracies are relatively close, which indicates a reduced chance of the model becoming too complex and overfitting. Let's test for k values in the range 10 to 20 now.
```
check_k_in_range(10, 20)
```
From the above graphs we can state that **k = 17 should be the optimal choice for our k-nearest-neighbour classifier.**
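For completeness, here is a minimal sketch of fitting the final model with the chosen k. It assumes the scaled splits (`X_train_scaled`, `X_test_scaled`) and labels from the cells above are still in scope, and that `accuracy_score` is imported from `sklearn.metrics` as before.
```
# Sketch: fit the chosen k-NN model once and report both accuracies (assumes the scaled splits above)
final_knn = KNeighborsClassifier(n_neighbors=17, metric='minkowski', p=2)
final_knn.fit(X_train_scaled, y_train)
print('Train accuracy:', accuracy_score(y_train, final_knn.predict(X_train_scaled)))
print('Test accuracy:', accuracy_score(y_test, final_knn.predict(X_test_scaled)))
```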
## (n) Apply the decision tree classifier with the “gini” criterion on this dataset. One of the hyperparameters of the decision tree classifier is max_depth. Apply the decision tree classifier with different values of max_depth and find the train and test accuracies. Plot a graph showing how the train and test accuracy varies with max_depth. Determine the most suitable value of max_depth. For a suitable value of max_depth, draw the decision tree.
```
from sklearn import tree
def check_decision_tree_max_depth_in_range(left, right):
max_depths = []
for i in range (left, right):
max_depths.append(i)
train_accuracies = []
test_accuracies = []
for depth in max_depths:
        # decision tree classifier with the given max_depth and 'gini' impurity measure
model = tree.DecisionTreeClassifier(criterion='gini',max_depth=depth)
# train model
model.fit(X_train, y_train)
# train predictions
y_train_predictions = model.predict(X_train)
        # train accuracy for current max_depth value
train_accuracies.append(accuracy_score(y_train, y_train_predictions))
# test predictions
y_test_predictions = model.predict(X_test)
        # test accuracy for current max_depth value
test_accuracies.append(accuracy_score(y_test, y_test_predictions))
    # plot the Test-Accuracy, Training-Accuracy vs Max-Depth
plt.figure(figsize=(15, 8))
plt.title('Train accuracy, Test accuracy vs Max-Depths')
plt.plot(max_depths, train_accuracies, 'ro-', max_depths, test_accuracies,'bv--')
plt.legend(['Training Accuracy','Test Accuracy'])
plt.xlabel('Max Depths')
plt.ylabel('Accuracy')
check_decision_tree_max_depth_in_range(1, 50)
```
It appears that the desired max_depth lies somewhere in the range 1 to 10. Let's zoom in:
```
check_decision_tree_max_depth_in_range(1, 10)
```
From the graph we can state that **max_depth = 4 is the optimal choice.** Now let's draw the decision tree for max_depth = 4 with the 'gini' impurity measure:
```
import pydotplus
from IPython.display import Image
# decision tree classifier with max_depth = 4 and impurity measure 'gini'
model = tree.DecisionTreeClassifier(criterion='gini',max_depth=4)
# train model
model.fit(X_train, y_train)
dot_data = tree.export_graphviz(model, feature_names=X_train.columns, class_names=['non-diabetic','diabetic'],
filled=True, out_file=None)
graph = pydotplus.graph_from_dot_data(dot_data)
Image(graph.create_png())
```
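If graphviz/pydotplus is not available in your environment, `sklearn.tree.export_text` produces a plain-text rendering of the same tree. A small sketch, reusing the fitted `model` from the cell above:
```
# Sketch: plain-text rendering of the fitted tree, as an alternative to the graphviz image
from sklearn.tree import export_text
print(export_text(model, feature_names=list(X_train.columns)))
```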
## (o) Read the article “How to configure k-fold cross validation” and apply 10-fold cross validation using the classifiers in m and n. Determine the performance of the classifiers (accuracy, precision, recall, f1-score and the area under the curve of the ROC curve) on this dataset. Link to the article: https://machinelearningmastery.com/how-to-configure-k-fold-cross-validation/
We have to initialize the 10-fold cross validator and our classifiers.
```
from sklearn.model_selection import StratifiedKFold, cross_val_score
# 10-fold cross validation
cv = StratifiedKFold(n_splits = 10, random_state = 42, shuffle = True)
```
Let's find the accuracy, precision, recall, f1-score and area under the ROC curve for the decision tree classifier. To choose the max_depth hyperparameter we pick the value with the highest average accuracy under 10-fold cross-validation.
```
max_depths = []
for i in range (1, 25):
max_depths.append(i)
accuracies = []
for depth in max_depths:
model = tree.DecisionTreeClassifier(criterion='gini',max_depth=depth)
accuracie_segments = cross_val_score(model, X, y, scoring = 'accuracy', cv = cv, n_jobs = 1)
accuracies.append(np.mean(accuracie_segments))
plt.figure(figsize=(15, 8))
plt.title('Avg accuracy vs Max depths')
plt.plot(max_depths, accuracies,'bv--')
plt.xlabel('Max depths')
plt.ylabel('Avg accuracy')
plt.show()
```
So, **max_depth = 5** gives the highest accuracy.
```
# decision tree classifier with max_depth=5 and 'gini' impurity measure
model_decision_tree = tree.DecisionTreeClassifier(criterion='gini',max_depth=5)
accuracies = cross_val_score(model_decision_tree, X, y, scoring = 'accuracy', cv = cv, n_jobs = 1)
precisions = cross_val_score(model_decision_tree, X, y, scoring = 'precision', cv = cv, n_jobs = 1)
recalls = cross_val_score(model_decision_tree, X, y, scoring = 'recall', cv = cv, n_jobs = 1)
f1s = cross_val_score(model_decision_tree, X, y, scoring = 'f1', cv = cv, n_jobs = 1)
aucs = cross_val_score(model_decision_tree, X, y, scoring = 'roc_auc', cv = cv, n_jobs = 1)
accuracy_decision_tree = np.mean(accuracies)
precision_decision_tree = np.mean(precisions)
recall_decision_tree = np.mean(recalls)
f1_decision_tree = np.mean(f1s)
auc_decision_tree = np.mean(aucs)
print('For the Decision Tree classifier:')
print('accuracy =', round(accuracy_decision_tree, 2)
, 'precision =', round(precision_decision_tree, 2)
, 'recall =', round(recall_decision_tree, 2)
, 'f1-score =', round(f1_decision_tree, 2)
, 'AUC =', round(auc_decision_tree, 2))
```
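Calling `cross_val_score` once per metric re-runs the cross-validation five times. As a hedged alternative, `sklearn.model_selection.cross_validate` accepts several scorers at once and computes them in a single pass over the folds:
```
# Sketch: all five metrics in one cross-validation pass
from sklearn.model_selection import cross_validate
scoring = ['accuracy', 'precision', 'recall', 'f1', 'roc_auc']
cv_results = cross_validate(model_decision_tree, X, y, scoring=scoring, cv=cv, n_jobs=1)
for metric in scoring:
    print(metric, '=', round(np.mean(cv_results['test_' + metric]), 2))
```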
Let's find the accuracy, precision, recall, f1-score and area under the ROC curve for the K-NN classifier. To choose the hyperparameter k we pick the value with the highest average accuracy under 10-fold cross-validation.
```
X_scaled = pd.DataFrame(MinMaxScaler().fit_transform(X), columns=X.columns)
k_values = []
for i in range (1, 25):
k_values.append(i)
accuracies = []
for k in k_values:
model = KNeighborsClassifier(n_neighbors=k, metric='minkowski', p=2)
accuracy_segments = cross_val_score(model, X_scaled, y, scoring = 'accuracy', cv = cv, n_jobs = 1)
accuracies.append(np.mean(accuracy_segments))
plt.figure(figsize=(15, 8))
plt.title('Avg accuracy vs K-values')
plt.plot(k_values, accuracies,'bv--')
plt.xlabel('K values')
plt.ylabel('Avg accuracy')
plt.show()
```
So, **k = 17 (or 15)** gives the highest average accuracy.
```
# k-NN classifier with k=17 neighbours and Euclidean distance (Minkowski, p=2)
model_knn = KNeighborsClassifier(n_neighbors=17, metric='minkowski', p=2)
accuracies = cross_val_score(model_knn, X_scaled, y, scoring = 'accuracy', cv = cv, n_jobs = 1)
precisions = cross_val_score(model_knn, X_scaled, y, scoring = 'precision', cv = cv, n_jobs = 1)
recalls = cross_val_score(model_knn, X_scaled, y, scoring = 'recall', cv = cv, n_jobs = 1)
f1s = cross_val_score(model_knn, X_scaled, y, scoring = 'f1', cv = cv, n_jobs = 1)
aucs = cross_val_score(model_knn, X_scaled, y, scoring = 'roc_auc', cv = cv, n_jobs = 1)
accuracy_knn = np.mean(accuracies)
precision_knn = np.mean(precisions)
recall_knn = np.mean(recalls)
f1_knn = np.mean(f1s)
auc_knn = np.mean(aucs)
print('For the K-NN classifier:')
print('accuracy =', round(accuracy_knn, 2)
, ', precision =', round(precision_knn, 2)
, ', recall =', round(recall_knn, 2)
, ', f1-score =', round(f1_knn, 2)
, ', AUC =', round(auc_knn, 2))
```
To compare performance, let's draw a bar graph of the evaluation metrics for the two classifiers.
```
labels = ['accuracy', 'precision', 'recall', 'f1', 'auc']
decision_tree_evaluation_metrics = [accuracy_decision_tree, precision_decision_tree, recall_decision_tree,
f1_decision_tree, auc_decision_tree]
knn_evaluation_metrics = [accuracy_knn, precision_knn, recall_knn, f1_knn, auc_knn]
x = np.arange(len(labels)) # the label locations
width = 0.35 # the width of the bars
fig, ax = plt.subplots()
ax.bar(x - width/2, decision_tree_evaluation_metrics, width, label='decision tree')
ax.bar(x + width/2, knn_evaluation_metrics, width, label='k-nn')
# Add some text for labels, title and custom x-axis tick labels, etc.
ax.set_ylabel('Score')
ax.set_title('Decision Tree vs KNN comparison')
ax.set_xticks(x)
ax.set_xticklabels(labels)
ax.legend()
plt.show()
```
## Making decisions with pandas
### Quintile analysis: random data
Quintile analysis is a common framework for evaluating the efficacy of security factors.
#### What is a factor ?
A factor is a method for scoring/ranking sets of securities. For a particular point in time and for a
particular set of securities, a factor can be represented as a pandas series where the index is an
array of the security identifiers and the values are the scores or ranks.
### Quintiles/Buckets
If we take factor scores over time, we can, at each point in time, split the set of securities into 5
equal buckets, or quintiles, based on the order of the factor scores. There is nothing particularly
sacred about the number 5.
We could have used 3 or 10. But we use 5 often. Finally, we track the
performance of each of the five buckets to determine if there is a meaningful difference in the
returns. We tend to focus more intently on the difference in returns of the bucket with the highest
rank relative to that of the lowest rank.
#### Generating time series data for the explanation
- **Returns**: generate random returns for a specified number of securities and periods.
- **Signals**: generate random signals for a specified number of securities and periods, with a prescribed level of correlation with the returns.

In order for a factor to be useful, there must be some information or correlation between the scores/ranks and subsequent returns. If there were no correlation, we would see no meaningful separation between the buckets. A good exercise for the reader is to duplicate this analysis with random data generated with zero correlation; a sketch of the required change follows.
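As a hedged sketch of that exercise, the only change needed relative to the generation cell below is the off-diagonal term of the covariance matrix:
```
# Sketch: zero-correlation variant of the covariance matrix used in the next cell
covariance_zero_corr = [[1., 0.],
                        [0., 1.]]
# then reuse the same np.random.multivariate_normal call with covariance_zero_corr
```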
```
import pandas as pd
import numpy as np
num_securities = 1000
num_periods = 1000
period_frequency = 'W'
start_date = "2000-12-31"
np.random.seed([3,1415])
means = [0,0]
covariance = [[1.,5e-3],
[5e-3,1.]]
# generate two correlated data sets, m[0] and m[1], with ~0.005 correlation
m = np.random.multivariate_normal(means, covariance,
(num_periods, num_securities)).T
# generating index
ids = pd.Index(['s{:05d}'.format(s) for s in range(num_securities)])
tidx = pd.date_range(start=start_date, periods=num_periods, freq=period_frequency)
```
I divide m[0] by 25 to scale down to something that looks like stock returns. I also add 1e-7 to give a
modest positive mean return.
```
security_returns = pd.DataFrame(m[0] / 25 + 1e-7, tidx, ids)
security_signals = pd.DataFrame(m[1], tidx, ids)
```
# pd.qcut - Create Quintile Buckets
```
def qcut(s, q=5):
    labels = ['q{}'.format(i) for i in range(1, q + 1)]
return pd.qcut(s, q, labels=labels)
cut = security_signals.stack().groupby(level=0).apply(qcut)
#Use these cuts as an index on our returns
returns_cut = security_returns.stack().rename('returns') \
.to_frame().set_index(cut, append=True) \
.swaplevel(2, 1).sort_index().squeeze() \
.groupby(level=[0, 1]).mean().unstack()
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(15, 5))
ax1 = plt.subplot2grid((1,3), (0,0))
ax2 = plt.subplot2grid((1,3), (0,1))
ax3 = plt.subplot2grid((1,3), (0,2))
# Cumulative Returns
returns_cut.add(1).cumprod() \
.plot(colormap='jet', ax=ax1, title="Cumulative Returns")
leg1 = ax1.legend(loc='upper left', ncol=2, prop={'size': 10}, fancybox=True)
leg1.get_frame().set_alpha(.8)
# Rolling 50 Week Return
returns_cut.add(1).rolling(50).apply(lambda x: x.prod()) \
.plot(colormap='jet', ax=ax2, title="Rolling 50 Week Return")
leg2 = ax2.legend(loc='upper left', ncol=2, prop={'size': 10}, fancybox=True)
leg2.get_frame().set_alpha(.8)
# Return Distribution
returns_cut.plot.box(vert=False, ax=ax3, title="Return Distribution")
fig.autofmt_xdate()
plt.show()
```
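As noted earlier, the headline comparison is usually the spread between the highest- and lowest-ranked buckets. A minimal sketch of that spread's cumulative return, assuming the `returns_cut` frame built above with quintile columns `q1` through `q5`:
```
# Sketch: cumulative return of a long-q5 / short-q1 quintile spread
spread = returns_cut['q5'] - returns_cut['q1']
spread.add(1).cumprod().plot(figsize=(15, 5), title='q5 minus q1 cumulative spread')
plt.show()
```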
### Visualize Quintile Correlation with scatter_matrix
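A hedged sketch of the scatter-matrix view promised by the heading, using pandas' built-in plotting helper on the `returns_cut` frame:
```
# Sketch: pairwise scatter matrix of the five quintile return series
pd.plotting.scatter_matrix(returns_cut, figsize=(10, 10), diagonal='kde', alpha=0.2)
plt.show()
```
The block below then turns to draw-down analysis: it finds each quintile's maximum draw-down and shades the corresponding window on the cumulative-return curve.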
```
def max_dd(returns):
"""returns is a series"""
r = returns.add(1).cumprod()
dd = r.div(r.cummax()).sub(1)
mdd = dd.min()
    end = dd.idxmin()  # idxmin/idxmax return index labels (argmin/argmax return positions in modern pandas)
    start = r.loc[:end].idxmax()
return mdd, start, end
def max_dd_df(returns):
"""returns is a dataframe"""
series = lambda x: pd.Series(x, ['Draw Down', 'Start', 'End'])
return returns.apply(max_dd).apply(series)
#max_dd_df(returns_cut)
draw_downs = max_dd_df(returns_cut)
fig, axes = plt.subplots(5, 1, figsize=(10, 8))
for i, ax in enumerate(axes[::-1]):
returns_cut.iloc[:, i].add(1).cumprod().plot(ax=ax)
sd, ed = draw_downs[['Start', 'End']].iloc[i]
ax.axvspan(sd, ed, alpha=0.1, color='r')
ax.set_ylabel(returns_cut.columns[i])
fig.suptitle('Maximum Draw Down', fontsize=18)
fig.tight_layout()
plt.subplots_adjust(top=.95)
```
# Logistic Regression in scikit-learn
```
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(666)
X = np.random.normal(0, 1, size=(200, 2))
y = np.array((X[:,0]**2+X[:,1])<1.5, dtype='int')
for _ in range(20):
y[np.random.randint(200)] = 1
y
plt.scatter(X[y==0,0], X[y==0,1])
plt.scatter(X[y==1,0], X[y==1,1])
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=666)
```
### Using logistic regression from scikit-learn
```
from sklearn.linear_model import LogisticRegression
log_reg = LogisticRegression()
log_reg.fit(X_train, y_train)
log_reg.score(X_train, y_train)
log_reg.score(X_test, y_test)
def plot_decision_boundary(model, axis):
x0, x1 = np.meshgrid(
np.linspace(axis[0], axis[1], int((axis[1]-axis[0])*100)).reshape(-1, 1),
np.linspace(axis[2], axis[3], int((axis[3]-axis[2])*100)).reshape(-1, 1),
)
X_new = np.c_[x0.ravel(), x1.ravel()]
y_predict = model.predict(X_new)
zz = y_predict.reshape(x0.shape)
from matplotlib.colors import ListedColormap
custom_cmap = ListedColormap(['#EF9A9A','#FFF59D','#90CAF9'])
plt.contourf(x0, x1, zz, linewidth=5, cmap=custom_cmap)
plot_decision_boundary(log_reg, axis=[-4, 4, -4, 4])
plt.scatter(X[y==0,0], X[y==0,1])
plt.scatter(X[y==1,0], X[y==1,1])
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
def PolynomialLogisticRegression(degree):
return Pipeline([
('poly', PolynomialFeatures(degree=degree)),
('std_scaler', StandardScaler()),
('log_reg', LogisticRegression())
])
poly_log_reg = PolynomialLogisticRegression(degree=2)
poly_log_reg.fit(X_train, y_train)
poly_log_reg.score(X_train, y_train)
poly_log_reg.score(X_test, y_test)
plot_decision_boundary(poly_log_reg, axis=[-4, 4, -4, 4])
plt.scatter(X[y==0,0], X[y==0,1])
plt.scatter(X[y==1,0], X[y==1,1])
poly_log_reg2 = PolynomialLogisticRegression(degree=20)
poly_log_reg2.fit(X_train, y_train)
poly_log_reg2.score(X_train, y_train)
poly_log_reg2.score(X_test, y_test)
plot_decision_boundary(poly_log_reg2, axis=[-4, 4, -4, 4])
plt.scatter(X[y==0,0], X[y==0,1])
plt.scatter(X[y==1,0], X[y==1,1])
def PolynomialLogisticRegression(degree, C):
return Pipeline([
('poly', PolynomialFeatures(degree=degree)),
('std_scaler', StandardScaler()),
('log_reg', LogisticRegression(C=C))
])
poly_log_reg3 = PolynomialLogisticRegression(degree=20, C=0.1)
poly_log_reg3.fit(X_train, y_train)
poly_log_reg3.score(X_train, y_train)
poly_log_reg3.score(X_test, y_test)
plot_decision_boundary(poly_log_reg3, axis=[-4, 4, -4, 4])
plt.scatter(X[y==0,0], X[y==0,1])
plt.scatter(X[y==1,0], X[y==1,1])
def PolynomialLogisticRegression(degree, C, penalty='l2'):
return Pipeline([
('poly', PolynomialFeatures(degree=degree)),
('std_scaler', StandardScaler()),
        ('log_reg', LogisticRegression(C=C, penalty=penalty, solver='liblinear'))  # liblinear supports both 'l1' and 'l2'
])
poly_log_reg4 = PolynomialLogisticRegression(degree=20, C=0.1, penalty='l1')
poly_log_reg4.fit(X_train, y_train)
poly_log_reg4.score(X_train, y_train)
poly_log_reg4.score(X_test, y_test)
plot_decision_boundary(poly_log_reg4, axis=[-4, 4, -4, 4])
plt.scatter(X[y==0,0], X[y==0,1])
plt.scatter(X[y==1,0], X[y==1,1])
```
# City street network orientations
Compare the spatial orientations of city street networks with OSMnx.
- [Overview of OSMnx](http://geoffboeing.com/2016/11/osmnx-python-street-networks/)
- [GitHub repo](https://github.com/gboeing/osmnx)
- [Examples, demos, tutorials](https://github.com/gboeing/osmnx-examples)
- [Documentation](https://osmnx.readthedocs.io/en/stable/)
- [Journal article/citation](http://geoffboeing.com/publications/osmnx-complex-street-networks/)
```
import matplotlib.pyplot as plt
import numpy as np
import osmnx as ox
import pandas as pd
ox.config(log_console=True, use_cache=True)
weight_by_length = False
# define the study sites as label : query
places = {'Atlanta' : 'Atlanta, GA, USA',
'Boston' : 'Boston, MA, USA',
'Buffalo' : 'Buffalo, NY, USA',
'Charlotte' : 'Charlotte, NC, USA',
'Chicago' : 'Chicago, IL, USA',
'Cleveland' : 'Cleveland, OH, USA',
'Dallas' : 'Dallas, TX, USA',
'Houston' : 'Houston, TX, USA',
'Denver' : 'Denver, CO, USA',
'Detroit' : 'Detroit, MI, USA',
'Las Vegas' : 'Las Vegas, NV, USA',
'Los Angeles' : {'city':'Los Angeles', 'state':'CA', 'country':'USA'},
'Manhattan' : 'Manhattan, NYC, NY, USA',
'Miami' : 'Miami, FL, USA',
'Minneapolis' : 'Minneapolis, MN, USA',
'Orlando' : 'Orlando, FL, USA',
'Philadelphia' : 'Philadelphia, PA, USA',
'Phoenix' : 'Phoenix, AZ, USA',
'Portland' : 'Portland, OR, USA',
'Sacramento' : 'Sacramento, CA, USA',
'San Francisco' : {'city':'San Francisco', 'state':'CA', 'country':'USA'},
'Seattle' : 'Seattle, WA, USA',
'St Louis' : 'St. Louis, MO, USA',
'Tampa' : 'Tampa, FL, USA',
'Washington' : 'Washington, DC, USA'}
# verify OSMnx geocodes each query to what you expect
gdf = ox.gdf_from_places(places.values())
gdf
```
## Get the street networks and their edge bearings
```
def reverse_bearing(x):
return x + 180 if x < 180 else x - 180
bearings = {}
for place in sorted(places.keys()):
# get the graph
query = places[place]
G = ox.graph_from_place(query, network_type='drive')
# calculate edge bearings
Gu = ox.add_edge_bearings(ox.get_undirected(G))
if weight_by_length:
# weight bearings by length (meters)
city_bearings = []
for u, v, k, d in Gu.edges(keys=True, data=True):
city_bearings.extend([d['bearing']] * int(d['length']))
b = pd.Series(city_bearings)
        bearings[place] = pd.concat([b, b.map(reverse_bearing)]).reset_index(drop=True)
else:
# don't weight bearings, just take one value per street segment
b = pd.Series([d['bearing'] for u, v, k, d in Gu.edges(keys=True, data=True)])
        bearings[place] = pd.concat([b, b.map(reverse_bearing)]).reset_index(drop=True)
```
## Visualize it
```
def count_and_merge(n, bearings):
# make twice as many bins as desired, then merge them in pairs
# prevents bin-edge effects around common values like 0° and 90°
n = n * 2
bins = np.arange(n + 1) * 360 / n
count, _ = np.histogram(bearings, bins=bins)
# move the last bin to the front, so eg 0.01° and 359.99° will be binned together
count = np.roll(count, 1)
return count[::2] + count[1::2]
# function to draw a polar histogram for a set of edge bearings
def polar_plot(ax, bearings, n=36, title=''):
bins = np.arange(n + 1) * 360 / n
count = count_and_merge(n, bearings)
_, division = np.histogram(bearings, bins=bins)
frequency = count / count.sum()
division = division[0:-1]
width = 2 * np.pi / n
ax.set_theta_zero_location('N')
ax.set_theta_direction('clockwise')
x = division * np.pi / 180
bars = ax.bar(x, height=frequency, width=width, align='center', bottom=0, zorder=2,
color='#003366', edgecolor='k', linewidth=0.5, alpha=0.7)
ax.set_ylim(top=frequency.max())
title_font = {'family':'Century Gothic', 'size':24, 'weight':'bold'}
xtick_font = {'family':'Century Gothic', 'size':10, 'weight':'bold', 'alpha':1.0, 'zorder':3}
ytick_font = {'family':'Century Gothic', 'size': 9, 'weight':'bold', 'alpha':0.2, 'zorder':3}
ax.set_title(title.upper(), y=1.05, fontdict=title_font)
ax.set_yticks(np.linspace(0, max(ax.get_ylim()), 5))
yticklabels = ['{:.2f}'.format(y) for y in ax.get_yticks()]
yticklabels[0] = ''
ax.set_yticklabels(labels=yticklabels, fontdict=ytick_font)
xticklabels = ['N', '', 'E', '', 'S', '', 'W', '']
ax.set_xticklabels(labels=xticklabels, fontdict=xtick_font)
ax.tick_params(axis='x', which='major', pad=-2)
# create figure and axes
n = len(places)
ncols = int(np.ceil(np.sqrt(n)))
nrows = int(np.ceil(n / ncols))
figsize = (ncols * 5, nrows * 5)
fig, axes = plt.subplots(nrows, ncols, figsize=figsize, subplot_kw={'projection':'polar'})
# plot each city's polar histogram
for ax, place in zip(axes.flat, sorted(places.keys())):
polar_plot(ax, bearings[place].dropna(), title=place)
# add super title and save full image
suptitle_font = {'family':'Century Gothic', 'fontsize':60, 'fontweight':'normal', 'y':1.07}
fig.suptitle('City Street Network Orientation', **suptitle_font)
fig.tight_layout()
fig.subplots_adjust(hspace=0.35)
fig.savefig('images/street-orientations.png', dpi=120, bbox_inches='tight')
plt.close()
```
# Object Detection Demo
Welcome to the object detection inference walkthrough! This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. Make sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md) before you start.
# Imports
```
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
# This is needed since the notebook is stored in the object_detection folder.
sys.path.append("..")
from object_detection.utils import ops as utils_ops
# compare parsed version components, not raw strings (string comparison mis-orders e.g. '1.10.0' and '1.4.0')
if tuple(int(v) for v in tf.__version__.split('.')[:2]) < (1, 4):
  raise ImportError('Please upgrade your tensorflow installation to v1.4.* or later!')
```
## Env setup
```
# This is needed to display the images.
%matplotlib inline
```
## Object detection imports
Here are the imports from the object detection module.
```
from utils import label_map_util
from utils import visualization_utils as vis_util
```
# Model preparation
## Variables
Any model exported using the `export_inference_graph.py` tool can be loaded here simply by changing `PATH_TO_CKPT` to point to a new .pb file.
By default we use an "SSD with Mobilenet" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies.
```
# What model to download.
MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_17'
MODEL_FILE = MODEL_NAME + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'
# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb'
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')
NUM_CLASSES = 90
```
## Download Model
```
opener = urllib.request.URLopener()
opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
tar_file = tarfile.open(MODEL_FILE)
for file in tar_file.getmembers():
file_name = os.path.basename(file.name)
if 'frozen_inference_graph.pb' in file_name:
tar_file.extract(file, os.getcwd())
```
## Load a (frozen) Tensorflow model into memory.
```
detection_graph = tf.Graph()
with detection_graph.as_default():
od_graph_def = tf.GraphDef()
with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
serialized_graph = fid.read()
od_graph_def.ParseFromString(serialized_graph)
tf.import_graph_def(od_graph_def, name='')
```
## Loading label map
Label maps map indices to category names, so that when our convolution network predicts `5`, we know that this corresponds to `airplane`. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine
```
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)
```
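As noted above, any integer-to-name mapping works here. A hand-built `category_index` in the structure the visualization utilities expect might look like the following sketch (only two illustrative entries, not the full COCO map):
```
# Sketch: minimal hand-built category index (same structure as label_map_util's output)
manual_category_index = {
    1: {'id': 1, 'name': 'person'},
    5: {'id': 5, 'name': 'airplane'},
}
```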
## Helper code
```
def load_image_into_numpy_array(image):
(im_width, im_height) = image.size
return np.array(image.getdata()).reshape(
(im_height, im_width, 3)).astype(np.uint8)
```
# Detection
```
# For the sake of simplicity we will use only 2 images:
# image1.jpg
# image2.jpg
# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = 'test_images'
TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(1, 3) ]
# Size, in inches, of the output images.
IMAGE_SIZE = (12, 8)
def run_inference_for_single_image(image, graph):
with graph.as_default():
with tf.Session() as sess:
# Get handles to input and output tensors
ops = tf.get_default_graph().get_operations()
all_tensor_names = {output.name for op in ops for output in op.outputs}
tensor_dict = {}
for key in [
'num_detections', 'detection_boxes', 'detection_scores',
'detection_classes', 'detection_masks'
]:
tensor_name = key + ':0'
if tensor_name in all_tensor_names:
tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(
tensor_name)
if 'detection_masks' in tensor_dict:
# The following processing is only for single image
detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])
detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0])
# Reframe is required to translate mask from box coordinates to image coordinates and fit the image size.
real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)
detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1])
detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1])
detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
detection_masks, detection_boxes, image.shape[0], image.shape[1])
detection_masks_reframed = tf.cast(
tf.greater(detection_masks_reframed, 0.5), tf.uint8)
# Follow the convention by adding back the batch dimension
tensor_dict['detection_masks'] = tf.expand_dims(
detection_masks_reframed, 0)
image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')
# Run inference
output_dict = sess.run(tensor_dict,
feed_dict={image_tensor: np.expand_dims(image, 0)})
# all outputs are float32 numpy arrays, so convert types as appropriate
output_dict['num_detections'] = int(output_dict['num_detections'][0])
output_dict['detection_classes'] = output_dict[
'detection_classes'][0].astype(np.uint8)
output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
output_dict['detection_scores'] = output_dict['detection_scores'][0]
if 'detection_masks' in output_dict:
output_dict['detection_masks'] = output_dict['detection_masks'][0]
return output_dict
for image_path in TEST_IMAGE_PATHS:
image = Image.open(image_path)
# the array based representation of the image will be used later in order to prepare the
# result image with boxes and labels on it.
image_np = load_image_into_numpy_array(image)
# Expand dimensions since the model expects images to have shape: [1, None, None, 3]
image_np_expanded = np.expand_dims(image_np, axis=0)
# Actual detection.
output_dict = run_inference_for_single_image(image_np, detection_graph)
# Visualization of the results of a detection.
vis_util.visualize_boxes_and_labels_on_image_array(
image_np,
output_dict['detection_boxes'],
output_dict['detection_classes'],
output_dict['detection_scores'],
category_index,
instance_masks=output_dict.get('detection_masks'),
use_normalized_coordinates=True,
line_thickness=8)
plt.figure(figsize=IMAGE_SIZE)
plt.imshow(image_np)
```
# Challenge 3
In this challenge we will practice our knowledge of probability distributions. To do so, we will split the challenge into two parts:
1. The first part has 3 questions about an artificial *data set* containing samples from a normal and a binomial distribution.
2. The second part, with 2 questions, analyses the distribution of a variable from the [Pulsar Star](https://archive.ics.uci.edu/ml/datasets/HTRU2) _data set_.
> Note: Please do not change the names of the answer functions.
## General _setup_
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as sct
import seaborn as sns
from statsmodels.distributions.empirical_distribution import ECDF
#%matplotlib inline
from IPython.core.pylabtools import figsize
figsize(12, 8)
sns.set()
```
## Part 1
### _Setup_ for part 1
```
np.random.seed(42)
dataframe = pd.DataFrame({"normal": sct.norm.rvs(20, 4, size=10000),
"binomial": sct.binom.rvs(100, 0.2, size=10000)})
```
## Start your analysis of part 1 here
```
# Your analysis of part 1 starts here.
dataframe.head()
dataframe.info()
dataframe.describe()
```
## Question 1
What is the difference between the quartiles (Q1, Q2 and Q3) of the `normal` and `binomial` variables of `dataframe`? Answer as a tuple of three elements rounded to three decimal places.
In other words, let `q1_norm`, `q2_norm` and `q3_norm` be the quartiles of the `normal` variable and `q1_binom`, `q2_binom` and `q3_binom` the quartiles of the `binomial` variable; what is the difference `(q1_norm - q1_binom, q2_norm - q2_binom, q3_norm - q3_binom)`?
```
def q1():
quantiles = dataframe.quantile([.25, .5, .75])
quantiles_diff = quantiles['normal'] - quantiles['binomial']
return tuple(quantiles_diff.round(3).to_list())
q1()
```
Food for thought:
* Did you expect values of this magnitude?
* Can you explain how distributions that look so different (discrete vs. continuous, for instance) can yield such values?
## Question 2
Consider the interval $[\bar{x} - s, \bar{x} + s]$, where $\bar{x}$ is the sample mean and $s$ is the standard deviation. What is the probability of this interval, computed from the empirical cumulative distribution function (empirical CDF) of the `normal` variable? Answer as a single scalar rounded to three decimal places.
```
def q2():
inferior = dataframe.normal.mean() - dataframe.normal.std()
superior = dataframe.normal.mean() + dataframe.normal.std()
ecdf = ECDF(dataframe.normal)
    return float(round(ecdf(superior) - ecdf(inferior), 3))
q2()
```
Food for thought:
* Is this value close to the theoretical expectation?
* Also try the intervals $[\bar{x} - 2s, \bar{x} + 2s]$ and $[\bar{x} - 3s, \bar{x} + 3s]$ (a sketch generalizing `q2()` to $k$ standard deviations follows below).
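A minimal sketch generalizing `q2()` to $k$ standard deviations, reusing `dataframe` and `ECDF` from the cells above:
```
# Sketch: empirical probability of [mean - k*s, mean + k*s] for k = 1, 2, 3
def interval_probability(k):
    mean, std = dataframe.normal.mean(), dataframe.normal.std()
    ecdf = ECDF(dataframe.normal)
    return round(float(ecdf(mean + k * std) - ecdf(mean - k * std)), 3)

for k in [1, 2, 3]:
    print(k, 'standard deviation(s):', interval_probability(k))
```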
## Question 3
What is the difference between the means and variances of the `binomial` and `normal` variables? Answer as a tuple of two elements rounded to three decimal places.
In other words, let `m_binom` and `v_binom` be the mean and variance of the `binomial` variable, and `m_norm` and `v_norm` the mean and variance of the `normal` variable. What are the differences `(m_binom - m_norm, v_binom - v_norm)`?
```
def q3():
mean_std = dataframe.describe()[1:3]
mean_std.loc['std'] **= 2
mean_std_diff = mean_std['binomial'] - mean_std['normal']
return tuple(mean_std_diff.round(3).to_list())
q3()
```
Food for thought:
* Did you expect values of this magnitude?
* What is the effect of increasing or decreasing $n$ (currently 100) on the distribution of the `binomial` variable? A short sketch exploring this follows below.
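A short sketch exploring that last question, comparing the sample mean and variance of binomial draws for a few values of $n$ while keeping $p = 0.2$ (the sample mean tracks $np$ and the variance tracks $np(1-p)$):
```
# Sketch: effect of n on the binomial sample (mean ~ n*p, variance ~ n*p*(1-p))
for n in [10, 100, 1000]:
    sample = sct.binom.rvs(n, 0.2, size=10000)
    print('n =', n, '| mean =', round(sample.mean(), 2), '| variance =', round(sample.var(), 2))
```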
## Part 2
### _Setup_ for part 2
```
stars = pd.read_csv("pulsar_stars.csv")
stars.rename({old_name: new_name
for (old_name, new_name)
in zip(stars.columns,
["mean_profile", "sd_profile", "kurt_profile", "skew_profile", "mean_curve", "sd_curve", "kurt_curve", "skew_curve", "target"])
},
axis=1, inplace=True)
stars.loc[:, "target"] = stars.target.astype(bool)
```
## Start your analysis of part 2 here
```
stars.head()
stars.info()
stars.describe()
```
## Question 4
Considering the `mean_profile` variable of `stars`:
1. Filter only the values of `mean_profile` where `target == 0` (i.e., where the star is not a pulsar).
2. Standardize the filtered `mean_profile` variable to have mean 0 and variance 1.
We will call the resulting variable `false_pulsar_mean_profile_standardized`.
Find the theoretical quantiles of a normal distribution with mean 0 and variance 1 at 0.80, 0.90 and 0.95 using the `norm.ppf()` function available in `scipy.stats`.
What are the probabilities associated with these quantiles according to the empirical CDF of `false_pulsar_mean_profile_standardized`? Answer as a tuple of three elements rounded to three decimal places.
```
def standardization(x):
return (x - x.mean()) / x.std()
def q4():
false_pulsar_mean_profile = stars.loc[stars['target'] == False]['mean_profile']
false_pulsar_mean_profile_standardized = standardization(false_pulsar_mean_profile)
ecdf = ECDF(false_pulsar_mean_profile_standardized)
ppf = pd.Series(ecdf(sct.norm.ppf([0.80, 0.90, 0.95])), [0.80, 0.90, 0.95])
return tuple(ppf.round(3).to_list())
q4()
```
Food for thought:
* Do the values found make sense?
* What might this say about the distribution of the variable `false_pulsar_mean_profile_standardized`?
## Question 5
What is the difference between the Q1, Q2 and Q3 quantiles of `false_pulsar_mean_profile_standardized` and the same theoretical quantiles of a normal distribution with mean 0 and variance 1? Answer as a tuple of three elements rounded to three decimal places.
```
def standardization(x):
return (x - x.mean()) / x.std()
def q5():
false_pulsar_mean_profile = stars.loc[stars['target'] == False]['mean_profile']
false_pulsar_mean_profile_standardized = standardization(false_pulsar_mean_profile)
ppf = pd.Series(sct.norm.ppf([0.25, 0.50, 0.75]), [0.25, 0.50, 0.75])
quantiles = false_pulsar_mean_profile_standardized.quantile([0.25, 0.50, 0.75])
return tuple((quantiles - ppf).round(3).to_list())
q5()
```
Food for thought:
* Do the values found make sense?
* What might this say about the distribution of the variable `false_pulsar_mean_profile_standardized`?
* Curiosity: some hypothesis tests for data normality use this very approach.
# Image classification transfer learning demo
1. [Introduction](#Introduction)
2. [Prerequisites and Preprocessing](#Prerequisites-and-Preprocessing)
3. [Fine-tuning the Image classification model](#Fine-tuning-the-Image-classification-model)
4. [Set up hosting for the model](#Set-up-hosting-for-the-model)
1. [Import model into hosting](#Import-model-into-hosting)
2. [Create endpoint configuration](#Create-endpoint-configuration)
3. [Create endpoint](#Create-endpoint)
5. [Perform Inference](#Perform-Inference)
## Introduction
Welcome to our end-to-end example of distributed image classification algorithm in transfer learning mode. In this demo, we will use the Amazon sagemaker image classification algorithm in transfer learning mode to fine-tune a pre-trained model (trained on imagenet data) to learn to classify a new dataset. In particular, the pre-trained model will be fine-tuned using [caltech-256 dataset](http://www.vision.caltech.edu/Image_Datasets/Caltech256/).
To get started, we need to set up the environment with a few prerequisite steps, for permissions, configurations, and so on.
## Prerequisites and Preprocessing
### Permissions and environment variables
Here we set up the linkage and authentication to AWS services. There are three parts to this:
* The roles used to give learning and hosting access to your data. This will automatically be obtained from the role used to start the notebook
* The S3 bucket that you want to use for training and model data
* The Amazon sagemaker image classification docker image which need not be changed
```
%%time
import boto3
import re
from sagemaker import get_execution_role
role = get_execution_role()
bucket='<<bucket-name>>' # customize to your bucket
containers = {'us-west-2': '433757028032.dkr.ecr.us-west-2.amazonaws.com/image-classification:latest',
'us-east-1': '811284229777.dkr.ecr.us-east-1.amazonaws.com/image-classification:latest',
'us-east-2': '825641698319.dkr.ecr.us-east-2.amazonaws.com/image-classification:latest',
'eu-west-1': '685385470294.dkr.ecr.eu-west-1.amazonaws.com/image-classification:latest'}
training_image = containers[boto3.Session().region_name]
print(training_image)
```
## Fine-tuning the Image classification model
The Caltech-256 dataset consists of images from 257 categories (the last one being a clutter category) and has 30k images, with a minimum of 80 and a maximum of about 800 images per category.
The image classification algorithm can take two types of input formats. The first is a [recordio format](https://mxnet.incubator.apache.org/tutorials/basic/record_io.html) and the other is a [lst format](https://mxnet.incubator.apache.org/how_to/recordio.html?highlight=im2rec). Files for both these formats are available at http://data.dmlc.ml/mxnet/data/caltech-256/. In this example, we will use the recordio format for training and use the training/validation split [specified here](http://data.dmlc.ml/mxnet/data/caltech-256/).
```
import os
import urllib.request
import boto3
def download(url):
filename = url.split("/")[-1]
if not os.path.exists(filename):
urllib.request.urlretrieve(url, filename)
def upload_to_s3(channel, file):
s3 = boto3.resource('s3')
data = open(file, "rb")
key = channel + '/' + file
s3.Bucket(bucket).put_object(Key=key, Body=data)
# # caltech-256
download('http://data.mxnet.io/data/caltech-256/caltech-256-60-train.rec')
download('http://data.mxnet.io/data/caltech-256/caltech-256-60-val.rec')
upload_to_s3('validation', 'caltech-256-60-val.rec')
upload_to_s3('train', 'caltech-256-60-train.rec')
```
Once we have the data available in the correct format for training, the next step is to actually train the model using the data. Before training the model, we need to setup the training parameters. The next section will explain the parameters in detail.
## Training parameters
There are two kinds of parameters that need to be set for training. The first one are the parameters for the training job. These include:
* **Input specification**: These are the training and validation channels that specify the path where training data is present. These are specified in the "InputDataConfig" section. The main parameters that need to be set is the "ContentType" which can be set to "application/x-recordio" or "application/x-image" based on the input data format and the S3Uri which specifies the bucket and the folder where the data is present.
* **Output specification**: This is specified in the "OutputDataConfig" section. We just need to specify the path where the output can be stored after training
* **Resource config**: This section specifies the type of instance on which to run the training and the number of hosts used for training. If "InstanceCount" is more than 1, then training can be run in a distributed manner.
Apart from the above set of parameters, there are hyperparameters that are specific to the algorithm. These are:
* **num_layers**: The number of layers (depth) for the network. We use 18 in this samples but other values such as 50, 152 can be used.
* **num_training_samples**: This is the total number of training samples. It is set to 15420 for caltech dataset with the current split
* **num_classes**: This is the number of output classes for the new dataset. Imagenet was trained with 1000 output classes but the number of output classes can be changed for fine-tuning. For caltech, we use 257 because it has 256 object categories + 1 clutter class
* **epochs**: Number of training epochs
* **learning_rate**: Learning rate for training
* **mini_batch_size**: The number of training samples used for each mini batch. In distributed training, the number of training samples used per batch will be N * mini_batch_size where N is the number of hosts on which training is run
After setting training parameters, we kick off training, and poll for status until training is completed, which in this example, takes between 10 to 12 minutes per epoch on a p2.xlarge machine. The network typically converges after 10 epochs.
```
# The algorithm supports multiple network depth (number of layers). They are 18, 34, 50, 101, 152 and 200
# For this training, we will use 18 layers
num_layers = 18
# we need to specify the input image shape for the training data
image_shape = "3,224,224"
# we also need to specify the number of training samples in the training set
# for caltech it is 15420
num_training_samples = 15420
# specify the number of output classes
num_classes = 257
# batch size for training
mini_batch_size = 128
# number of epochs
epochs = 2
# learning rate
learning_rate = 0.01
top_k=2
# Since we are using transfer learning, we set use_pretrained_model to 1 so that weights can be
# initialized with pre-trained weights
use_pretrained_model = 1
```
# Training
Run the training using Amazon sagemaker CreateTrainingJob API
```
%%time
import time
import boto3
from time import gmtime, strftime
s3 = boto3.client('s3')
# create unique job name
job_name_prefix = 'sagemaker-imageclassification-notebook'
timestamp = time.strftime('-%Y-%m-%d-%H-%M-%S', time.gmtime())
job_name = job_name_prefix + timestamp
training_params = \
{
# specify the training docker image
"AlgorithmSpecification": {
"TrainingImage": training_image,
"TrainingInputMode": "File"
},
"RoleArn": role,
"OutputDataConfig": {
"S3OutputPath": 's3://{}/{}/output'.format(bucket, job_name_prefix)
},
"ResourceConfig": {
"InstanceCount": 1,
"InstanceType": "ml.p2.8xlarge",
"VolumeSizeInGB": 50
},
"TrainingJobName": job_name,
"HyperParameters": {
"image_shape": image_shape,
"num_layers": str(num_layers),
"num_training_samples": str(num_training_samples),
"num_classes": str(num_classes),
"mini_batch_size": str(mini_batch_size),
"epochs": str(epochs),
"learning_rate": str(learning_rate),
"use_pretrained_model": str(use_pretrained_model)
},
"StoppingCondition": {
"MaxRuntimeInSeconds": 360000
},
#Training data should be inside a subdirectory called "train"
#Validation data should be inside a subdirectory called "validation"
#The algorithm currently only supports fullyreplicated model (where data is copied onto each machine)
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": 's3://{}/train/'.format(bucket),
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "application/x-recordio",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": 's3://{}/validation/'.format(bucket),
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "application/x-recordio",
"CompressionType": "None"
}
]
}
print('Training job name: {}'.format(job_name))
print('\nInput Data Location: {}'.format(training_params['InputDataConfig'][0]['DataSource']['S3DataSource']))
# create the Amazon SageMaker training job
sagemaker = boto3.client(service_name='sagemaker')
sagemaker.create_training_job(**training_params)
# confirm that the training job has started
status = sagemaker.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print('Training job current status: {}'.format(status))
try:
# wait for the job to finish and report the ending status
sagemaker.get_waiter('training_job_completed_or_stopped').wait(TrainingJobName=job_name)
training_info = sagemaker.describe_training_job(TrainingJobName=job_name)
status = training_info['TrainingJobStatus']
print("Training job ended with status: " + status)
except:
print('Training failed to start')
# if exception is raised, that means it has failed
message = sagemaker.describe_training_job(TrainingJobName=job_name)['FailureReason']
print('Training failed with the following error: {}'.format(message))
training_info = sagemaker.describe_training_job(TrainingJobName=job_name)
status = training_info['TrainingJobStatus']
print("Training job ended with status: " + status)
```
If you see the message,
> `Training job ended with status: Completed`
then that means training successfully completed and the output model was stored in the output path specified by `training_params['OutputDataConfig']`.
You can also view information about and the status of a training job using the AWS SageMaker console. Just click on the "Jobs" tab.
# Inference
***
A trained model does nothing on its own. We now want to use the model to perform inference. For this example, that means predicting the topic mixture representing a given document.
This section involves several steps,
1. [Create Model](#CreateModel) - Create model for the training output
1. [Create Endpoint Configuration](#CreateEndpointConfiguration) - Create a configuration defining an endpoint.
1. [Create Endpoint](#CreateEndpoint) - Use the configuration to create an inference endpoint.
1. [Perform Inference](#Perform-Inference) - Perform inference on some input data using the endpoint.
## Create Model
We now create a SageMaker Model from the training output. Using the model we can create an Endpoint Configuration.
```
%%time
import boto3
from time import gmtime, strftime
sage = boto3.Session().client(service_name='sagemaker')
model_name="test-image-classification-model"
print(model_name)
info = sage.describe_training_job(TrainingJobName=job_name)
model_data = info['ModelArtifacts']['S3ModelArtifacts']
print(model_data)
containers = {'us-west-2': '433757028032.dkr.ecr.us-west-2.amazonaws.com/image-classification:latest',
'us-east-1': '811284229777.dkr.ecr.us-east-1.amazonaws.com/image-classification:latest',
'us-east-2': '825641698319.dkr.ecr.us-east-2.amazonaws.com/image-classification:latest',
'eu-west-1': '685385470294.dkr.ecr.eu-west-1.amazonaws.com/image-classification:latest'}
hosting_image = containers[boto3.Session().region_name]
primary_container = {
'Image': hosting_image,
'ModelDataUrl': model_data,
}
create_model_response = sage.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
print(create_model_response['ModelArn'])
```
### Create Endpoint Configuration
At launch, we will support configuring REST endpoints in hosting with multiple models, e.g. for A/B testing purposes. In order to support this, customers create an endpoint configuration, that describes the distribution of traffic across the models, whether split, shadowed, or sampled in some way.
In addition, the endpoint configuration describes the instance type required for model deployment, and at launch will describe the autoscaling configuration.
```
from time import gmtime, strftime
timestamp = time.strftime('-%Y-%m-%d-%H-%M-%S', time.gmtime())
endpoint_config_name = job_name_prefix + '-epc-' + timestamp
endpoint_config_response = sage.create_endpoint_config(
EndpointConfigName = endpoint_config_name,
ProductionVariants=[{
'InstanceType':'ml.m4.xlarge',
'InitialInstanceCount':1,
'ModelName':model_name,
'VariantName':'AllTraffic'}])
print('Endpoint configuration name: {}'.format(endpoint_config_name))
print('Endpoint configuration arn: {}'.format(endpoint_config_response['EndpointConfigArn']))
```
### Create Endpoint
Lastly, the customer creates the endpoint that serves up the model, through specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 9-11 minutes to complete.
```
%%time
import time
timestamp = time.strftime('-%Y-%m-%d-%H-%M-%S', time.gmtime())
endpoint_name = job_name_prefix + '-ep-' + timestamp
print('Endpoint name: {}'.format(endpoint_name))
endpoint_params = {
'EndpointName': endpoint_name,
'EndpointConfigName': endpoint_config_name,
}
endpoint_response = sagemaker.create_endpoint(**endpoint_params)
print('EndpointArn = {}'.format(endpoint_response['EndpointArn']))
```
The endpoint is now being created. It may take some time for it to become available...
```
# get the status of the endpoint
response = sagemaker.describe_endpoint(EndpointName=endpoint_name)
status = response['EndpointStatus']
print('EndpointStatus = {}'.format(status))
# wait until the status has changed
sagemaker.get_waiter('endpoint_in_service').wait(EndpointName=endpoint_name)
# print the status of the endpoint
endpoint_response = sagemaker.describe_endpoint(EndpointName=endpoint_name)
status = endpoint_response['EndpointStatus']
print('Endpoint creation ended with EndpointStatus = {}'.format(status))
if status != 'InService':
raise Exception('Endpoint creation failed.')
```
If you see the message,
> `Endpoint creation ended with EndpointStatus = InService`
then congratulations! You now have a functioning inference endpoint. You can confirm the endpoint configuration and status by navigating to the "Endpoints" tab in the AWS SageMaker console.
We will finally create a runtime object from which we can invoke the endpoint.
## Perform Inference
Finally, the customer can now validate the model for use. They can obtain the endpoint from the client library using the result from previous operations, and generate classifications from the trained model using that endpoint.
```
import boto3
runtime = boto3.Session().client(service_name='runtime.sagemaker')
```
### Download test image
```
!wget -O /tmp/test.jpg http://www.vision.caltech.edu/Image_Datasets/Caltech256/images/008.bathtub/008_0007.jpg
file_name = '/tmp/test.jpg'
# test image
from IPython.display import Image
Image(file_name)
import json
import numpy as np
with open(file_name, 'rb') as f:
payload = f.read()
payload = bytearray(payload)
response = runtime.invoke_endpoint(EndpointName=endpoint_name,
ContentType='application/x-image',
Body=payload)
result = response['Body'].read()
# result will be in json format and convert it to ndarray
result = json.loads(result)
# the result will output the probabilities for all classes
# find the class with maximum probability and print the class index
index = np.argmax(result)
object_categories = ['ak47', 'american-flag', 'backpack', 'baseball-bat', 'baseball-glove', 'basketball-hoop', 'bat', 'bathtub', 'bear', 'beer-mug', 'billiards', 'binoculars', 'birdbath', 'blimp', 'bonsai-101', 'boom-box', 'bowling-ball', 'bowling-pin', 'boxing-glove', 'brain-101', 'breadmaker', 'buddha-101', 'bulldozer', 'butterfly', 'cactus', 'cake', 'calculator', 'camel', 'cannon', 'canoe', 'car-tire', 'cartman', 'cd', 'centipede', 'cereal-box', 'chandelier-101', 'chess-board', 'chimp', 'chopsticks', 'cockroach', 'coffee-mug', 'coffin', 'coin', 'comet', 'computer-keyboard', 'computer-monitor', 'computer-mouse', 'conch', 'cormorant', 'covered-wagon', 'cowboy-hat', 'crab-101', 'desk-globe', 'diamond-ring', 'dice', 'dog', 'dolphin-101', 'doorknob', 'drinking-straw', 'duck', 'dumb-bell', 'eiffel-tower', 'electric-guitar-101', 'elephant-101', 'elk', 'ewer-101', 'eyeglasses', 'fern', 'fighter-jet', 'fire-extinguisher', 'fire-hydrant', 'fire-truck', 'fireworks', 'flashlight', 'floppy-disk', 'football-helmet', 'french-horn', 'fried-egg', 'frisbee', 'frog', 'frying-pan', 'galaxy', 'gas-pump', 'giraffe', 'goat', 'golden-gate-bridge', 'goldfish', 'golf-ball', 'goose', 'gorilla', 'grand-piano-101', 'grapes', 'grasshopper', 'guitar-pick', 'hamburger', 'hammock', 'harmonica', 'harp', 'harpsichord', 'hawksbill-101', 'head-phones', 'helicopter-101', 'hibiscus', 'homer-simpson', 'horse', 'horseshoe-crab', 'hot-air-balloon', 'hot-dog', 'hot-tub', 'hourglass', 'house-fly', 'human-skeleton', 'hummingbird', 'ibis-101', 'ice-cream-cone', 'iguana', 'ipod', 'iris', 'jesus-christ', 'joy-stick', 'kangaroo-101', 'kayak', 'ketch-101', 'killer-whale', 'knife', 'ladder', 'laptop-101', 'lathe', 'leopards-101', 'license-plate', 'lightbulb', 'light-house', 'lightning', 'llama-101', 'mailbox', 'mandolin', 'mars', 'mattress', 'megaphone', 'menorah-101', 'microscope', 'microwave', 'minaret', 'minotaur', 'motorbikes-101', 'mountain-bike', 'mushroom', 'mussels', 'necktie', 'octopus', 'ostrich', 'owl', 'palm-pilot', 'palm-tree', 'paperclip', 'paper-shredder', 'pci-card', 'penguin', 'people', 'pez-dispenser', 'photocopier', 'picnic-table', 'playing-card', 'porcupine', 'pram', 'praying-mantis', 'pyramid', 'raccoon', 'radio-telescope', 'rainbow', 'refrigerator', 'revolver-101', 'rifle', 'rotary-phone', 'roulette-wheel', 'saddle', 'saturn', 'school-bus', 'scorpion-101', 'screwdriver', 'segway', 'self-propelled-lawn-mower', 'sextant', 'sheet-music', 'skateboard', 'skunk', 'skyscraper', 'smokestack', 'snail', 'snake', 'sneaker', 'snowmobile', 'soccer-ball', 'socks', 'soda-can', 'spaghetti', 'speed-boat', 'spider', 'spoon', 'stained-glass', 'starfish-101', 'steering-wheel', 'stirrups', 'sunflower-101', 'superman', 'sushi', 'swan', 'swiss-army-knife', 'sword', 'syringe', 'tambourine', 'teapot', 'teddy-bear', 'teepee', 'telephone-box', 'tennis-ball', 'tennis-court', 'tennis-racket', 'theodolite', 'toaster', 'tomato', 'tombstone', 'top-hat', 'touring-bike', 'tower-pisa', 'traffic-light', 'treadmill', 'triceratops', 'tricycle', 'trilobite-101', 'tripod', 't-shirt', 'tuning-fork', 'tweezer', 'umbrella-101', 'unicorn', 'vcr', 'video-projector', 'washing-machine', 'watch-101', 'waterfall', 'watermelon', 'welding-mask', 'wheelbarrow', 'windmill', 'wine-bottle', 'xylophone', 'yarmulke', 'yo-yo', 'zebra', 'airplanes-101', 'car-side-101', 'faces-easy-101', 'greyhound', 'tennis-shoes', 'toad', 'clutter']
print("Result: label - " + object_categories[index] + ", probability - " + str(result[index]))
```
### Clean up
When we're done with the endpoint, we can just delete it and the backing instances will be released. Run the following cell to delete the endpoint.
```
sage.delete_endpoint(EndpointName=endpoint_name)
```
# Concise Implementation of Softmax Regression
:label:`sec_softmax_concise`
Just as high-level APIs of deep learning frameworks
made it much easier
to implement linear regression in :numref:`sec_linear_concise`,
we will find it similarly (or possibly more)
convenient for implementing classification models. Let us stick with the Fashion-MNIST dataset
and keep the batch size at 256 as in :numref:`sec_softmax_scratch`.
```
from d2l import tensorflow as d2l
import tensorflow as tf
batch_size = 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
```
## Initializing Model Parameters
As mentioned in :numref:`sec_softmax`,
the output layer of softmax regression
is a fully-connected layer.
Therefore, to implement our model,
we just need to add one fully-connected layer
with 10 outputs to our `Sequential`.
Again, here, the `Sequential` is not really necessary,
but we might as well form the habit since it will be ubiquitous
when implementing deep models.
Again, we initialize the weights at random
with zero mean and standard deviation 0.01.
```
net = tf.keras.models.Sequential()
net.add(tf.keras.layers.Flatten(input_shape=(28, 28)))
weight_initializer = tf.keras.initializers.RandomNormal(mean=0.0, stddev=0.01)
net.add(tf.keras.layers.Dense(10, kernel_initializer=weight_initializer))
```
## Softmax Implementation Revisited
:label:`subsec_softmax-implementation-revisited`
In the previous example of :numref:`sec_softmax_scratch`,
we calculated our model's output
and then ran this output through the cross-entropy loss.
Mathematically, that is a perfectly reasonable thing to do.
However, from a computational perspective,
exponentiation can be a source of numerical stability issues.
Recall that the softmax function calculates
$\hat y_j = \frac{\exp(o_j)}{\sum_k \exp(o_k)}$,
where $\hat y_j$ is the $j^\mathrm{th}$ element of
the predicted probability distribution $\hat{\mathbf{y}}$
and $o_j$ is the $j^\mathrm{th}$ element of the logits
$\mathbf{o}$.
If some of the $o_k$ are very large (i.e., very positive),
then $\exp(o_k)$ might be larger than the largest number
we can have for certain data types (i.e., *overflow*).
This would make the denominator (and/or numerator) `inf` (infinity)
and we wind up encountering either 0, `inf`, or `nan` (not a number) for $\hat y_j$.
In these situations we do not get a well-defined
return value for cross-entropy.
One trick to get around this is to first subtract $\max(o_k)$
from all $o_k$ before proceeding with the softmax calculation.
You can verify that shifting each $o_k$ by a constant does not change the return value of softmax.
After the subtraction and normalization step,
it might be possible that some $o_j$ have large negative values
and thus that the corresponding $\exp(o_j)$ will take values close to zero.
These might be rounded to zero due to finite precision (i.e., *underflow*),
making $\hat y_j$ zero and giving us `-inf` for $\log(\hat y_j)$.
A few steps down the road in backpropagation,
we might find ourselves faced with a screenful
of the dreaded `nan` results.
Fortunately, we are saved by the fact that
even though we are computing exponential functions,
we ultimately intend to take their log
(when calculating the cross-entropy loss).
By combining these two operators
softmax and cross-entropy together,
we can escape the numerical stability issues
that might otherwise plague us during backpropagation.
As shown in the equation below, we avoid calculating $\exp(o_j)$
and can use instead $o_j$ directly due to the canceling in $\log(\exp(\cdot))$.
$$
\begin{aligned}
\log{(\hat y_j)} & = \log\left( \frac{\exp(o_j)}{\sum_k \exp(o_k)}\right) \\
& = \log{(\exp(o_j))}-\log{\left( \sum_k \exp(o_k) \right)} \\
& = o_j -\log{\left( \sum_k \exp(o_k) \right)}.
\end{aligned}
$$
We will want to keep the conventional softmax function handy
in case we ever want to evaluate the output probabilities by our model.
But instead of passing softmax probabilities into our new loss function,
we will just pass the logits and compute the softmax and its log
all at once inside the cross-entropy loss function,
which does smart things like the ["LogSumExp trick"](https://en.wikipedia.org/wiki/LogSumExp).
```
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
```
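To make the overflow problem concrete, here is a small optional sketch (not from the original text) that compares a naive softmax with the max-subtracted version on deliberately large logits:
```
# Naive softmax overflows for large logits; subtracting the maximum first
# is mathematically equivalent but numerically stable.
o = tf.constant([1000.0, 2000.0, 3000.0])
naive = tf.exp(o) / tf.reduce_sum(tf.exp(o))               # -> [nan, nan, nan]
shifted = o - tf.reduce_max(o)
stable = tf.exp(shifted) / tf.reduce_sum(tf.exp(shifted))  # -> [0., 0., 1.]
print(naive.numpy(), stable.numpy())
```
The built-in loss above applies the same idea internally, which is why we can safely pass raw logits to it.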
## Optimization Algorithm
Here, we use minibatch stochastic gradient descent
with a learning rate of 0.1 as the optimization algorithm.
Note that this is the same as we applied in the linear regression example
and it illustrates the general applicability of the optimizers.
```
trainer = tf.keras.optimizers.SGD(learning_rate=.1)
```
## Training
Next we call the training function defined in :numref:`sec_softmax_scratch` to train the model.
```
num_epochs = 10
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer)
```
As before, this algorithm converges to a solution
that achieves a decent accuracy,
albeit this time with fewer lines of code than before.
## Summary
* Using high-level APIs, we can implement softmax regression much more concisely.
* From a computational perspective, implementing softmax regression has intricacies. Note that in many cases, a deep learning framework takes additional precautions beyond these most well-known tricks to ensure numerical stability, saving us from even more pitfalls that we would encounter if we tried to code all of our models from scratch in practice.
## Exercises
1. Try adjusting the hyperparameters, such as the batch size, number of epochs, and learning rate, to see what the results are.
1. Increase the number of epochs for training. Why might the test accuracy decrease after a while? How could we fix this?
[Discussions](https://discuss.d2l.ai/t/260)
```
# Source/Reference: https://www.tensorflow.org/tutorials/structured_data/time_series
# Reasoning for explaining TF guides/tutorials:
# You will become comfortable with reading other tutorials/guides on TF2.0/Keras
# Pre-req:
# - LSTMs, RNNs, GRUs chapter
# - Previous code-walkthrough sessions
```
## Dataset
```
#imports
import tensorflow as tf
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
# global params for all matplotlib plots
mpl.rcParams['figure.figsize'] = (8, 6)
mpl.rcParams['axes.grid'] = False
# get data
# data source: Max Plank Institute, https://www.bgc-jena.mpg.de/wetter/
zip_path = tf.keras.utils.get_file( #
origin='https://storage.googleapis.com/tensorflow/tf-keras-datasets/jena_climate_2009_2016.csv.zip',
fname='jena_climate_2009_2016.csv.zip',
extract=True)
print(zip_path)
csv_path, _ = os.path.splitext(zip_path) # https://docs.python.org/3/library/os.path.html
print(csv_path)
! ls /root/.keras/datasets/
df = pd.read_csv(csv_path)
df.head()
```
Observations:
1. One reading every 10 mins
2. 1 day = 6*24 = 144 readings
3. 5 days = 144*5 = 720 readings
**Forecasting task:** Predict temperature (in deg C) in the future.
```
# univariate data: Temp vs Time
uni_data_df = df['T (degC)']
uni_data_df.index = df['Date Time']
uni_data_df.head()
uni_data_df.plot()
uni_data = uni_data_df.values # numpy ndarray from pandas
TRAIN_SPLIT = 300000 # First 300000 obs will be used as train data and rest as test data.
# 300,000 => ~2100 days worth of training data
tf.random.set_seed(13) # random seed
# Normalize data: mean centering and variance-scaling.
# NOTE: use only train data to normalize all of the data. otherwise, leakage-issue
uni_train_mean = uni_data[:TRAIN_SPLIT].mean()
uni_train_std = uni_data[:TRAIN_SPLIT].std()
uni_data = (uni_data-uni_train_mean)/uni_train_std
print(type(uni_data))
```
## Moving window average
### Pose a simple problem:
Given last 'k' values of temp-observations (only one feature <=> univariate), predict the next observation
### MWA:
Average the previous k values to predict the next value.
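As a tiny illustration (the numbers below are invented for this sketch), the MWA forecast is simply the mean of the last k observations:
```
import numpy as np

last_k = np.array([20.1, 20.4, 20.3, 20.8, 21.0])  # last k=5 temperature readings (made up)
prediction = last_k.mean()                          # MWA forecast for the next step
print(prediction)                                   # 20.52
```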
```
# This function creates the data we need for the above problem
# dataset: numpy ndarray
# start_index:
# end_index:
# history_size: k => take k values at a time
# target_size: 0 => next value in the time-series
# Output: data: (n,k) and labels (n,1)
def univariate_data(dataset, start_index, end_index, history_size, target_size):
data = []
labels = []
start_index = start_index + history_size
if end_index is None:
end_index = len(dataset) - target_size
for i in range(start_index, end_index):
indices = range(i-history_size, i)
# Reshape data from (history_size,) to (history_size, 1)
data.append(np.reshape(dataset[indices], (history_size, 1)))
labels.append(dataset[i+target_size])
return np.array(data), np.array(labels)
# use the above function to create the datasets.
univariate_past_history = 20
univariate_future_target = 0
x_train_uni, y_train_uni = univariate_data(uni_data, 0, TRAIN_SPLIT,
univariate_past_history,
univariate_future_target)
x_val_uni, y_val_uni = univariate_data(uni_data, TRAIN_SPLIT, None,
univariate_past_history,
univariate_future_target)
print(x_train_uni.shape)
print(y_train_uni.shape)
print(x_val_uni.shape)
print(y_val_uni.shape)
print ('Single window of past history')
print (x_train_uni[0])
print ('\n Target temperature to predict')
print (y_train_uni[0])
#utility function
def create_time_steps(length):
return list(range(-length, 0))
print(create_time_steps(20))
# Plotting function
# plot_data: contains labels as list
# delta: 0 => next time step given last "k" steps.
# title: plot title
# Usage: show_plot([x_train_uni[0], y_train_uni[0]], 0, 'Sample Example')
def show_plot(plot_data, delta, title):
labels = ['History', 'True Future', 'Model Prediction']
marker = ['.-', 'rx', 'go'] # dot-line, red-x, green-o refer: https://matplotlib.org/3.1.1/api/markers_api.html
time_steps = create_time_steps(plot_data[0].shape[0])
if delta:
future = delta
else:
future = 0
plt.title(title)
for i, x in enumerate(plot_data):
if i:
plt.plot(future, plot_data[i], marker[i], markersize=10,
label=labels[i])
else:
plt.plot(time_steps, plot_data[i].flatten(), marker[i], label=labels[i])
plt.legend()
plt.xlim([time_steps[0], (future+5)*2])
plt.xlabel('Time-Step')
return plt
show_plot([x_train_uni[0], y_train_uni[0]], 0, 'Sample Example')
i=20
show_plot([x_train_uni[i], y_train_uni[i]], 0, 'Sample Example')
def mwa(history):
return np.mean(history)
i=0
show_plot([x_train_uni[i], y_train_uni[i], mwa(x_train_uni[i])], 0,
'MWA Prediction Example')
i=20
show_plot([x_train_uni[i], y_train_uni[i], mwa(x_train_uni[i])], 0,
'MWA Prediction Example')
```
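As an optional extra (not part of the original tutorial), we can quantify how good the moving-window-average baseline is by computing its mean absolute error over the validation windows created above; the LSTM trained below can then be compared against this number:
```
# Mean absolute error of the MWA baseline on the standardized validation windows.
# x_val_uni and y_val_uni were built earlier in this notebook.
mwa_preds = x_val_uni.mean(axis=1).squeeze()
mwa_mae = np.abs(mwa_preds - y_val_uni).mean()
print('MWA baseline MAE (standardized units):', mwa_mae)
```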
## Univariate time-series forecasting
- Features from the history: only temperature => univariate
- Problem definition: Given last "k=20" values of temp, predict the next temp value.
```
# TF Dataset preparation
BATCH_SIZE = 256 # batch size in batch-SGD/variants
BUFFER_SIZE = 10000 # for shuffling the dataset
train_univariate = tf.data.Dataset.from_tensor_slices((x_train_uni, y_train_uni))
train_univariate = train_univariate.cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE).repeat()
#https://www.tensorflow.org/api_docs/python/tf/data/Dataset#repeat
val_univariate = tf.data.Dataset.from_tensor_slices((x_val_uni, y_val_uni))
val_univariate = val_univariate.batch(BATCH_SIZE).repeat()
print(train_univariate)
print(val_univariate)
```
<img src="https://www.tensorflow.org/tutorials/structured_data/images/time_series.png" width="50%" height="50%" />
```
# MODEL:
simple_lstm_model = tf.keras.models.Sequential([
tf.keras.layers.LSTM(8, input_shape=x_train_uni.shape[-2:]),
tf.keras.layers.Dense(1)
])
simple_lstm_model.compile(optimizer='adam', loss='mae')
# Why not GRUs?
# https://www.appliedaicourse.com/lecture/11/applied-machine-learning-online-course/3436/grus/8/module-8-neural-networks-computer-vision-and-deep-learning
# https://www.quora.com/Whats-the-difference-between-LSTM-and-GRU
# Train and evaluate
STEPS_PER_EPOCH = 200
EPOCHS = 10
# https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit
simple_lstm_model.fit(train_univariate, epochs=EPOCHS,
steps_per_epoch=STEPS_PER_EPOCH,
validation_data=val_univariate, validation_steps=50)
for x, y in val_univariate.take(5): # take 5 random inputs from validation data
plot = show_plot([x[0].numpy(), y[0].numpy(),
simple_lstm_model.predict(x)[0]], 0, 'Simple LSTM model')
plot.show()
```
## Multi-variate & single-step forecasting
- Problem definition: Given three features (p, T, rho) at each time stamp in the past, predict the temperature at a single time-stamp in the future.
```
# Features
features_considered = ['p (mbar)', 'T (degC)', 'rho (g/m**3)']
features = df[features_considered]
features.index = df['Date Time']
features.head()
features.plot(subplots=True)
# Standardize data
dataset = features.values
data_mean = dataset[:TRAIN_SPLIT].mean(axis=0)
data_std = dataset[:TRAIN_SPLIT].std(axis=0)
dataset = (dataset-data_mean)/data_std
# Same as univariate_data above.
# New params:
# step: instead of taking data every 10 min, generate one observation every `step` rows (e.g. 6 steps => 60 min)
# single_step: labels from a single timestamp or from multiple timestamps
def multivariate_data(dataset, target, start_index, end_index, history_size,
target_size, step, single_step=False):
data = []
labels = []
start_index = start_index + history_size
if end_index is None:
end_index = len(dataset) - target_size
for i in range(start_index, end_index):
indices = range(i-history_size, i, step) # step used here.
data.append(dataset[indices])
if single_step: # single_step used here.
labels.append(target[i+target_size])
else:
labels.append(target[i:i+target_size])
return np.array(data), np.array(labels)
# Generate data
past_history = 720 # 720*10 mins
future_target = 72 # 72*10 mins
STEP = 6 # one obs every 6X10min = 60 min => 1 hr
# past history: 7200 mins => 120 hrs, sampled at one observation every hour
# future_target: 720 mins => 12 hrs into the future, not the next hour
x_train_single, y_train_single = multivariate_data(dataset, dataset[:, 1], 0,
TRAIN_SPLIT, past_history,
future_target, STEP,
single_step=True)
x_val_single, y_val_single = multivariate_data(dataset, dataset[:, 1],
TRAIN_SPLIT, None, past_history,
future_target, STEP,
single_step=True)
print(x_train_single.shape)
print(y_train_single.shape)
#TF dataset
train_data_single = tf.data.Dataset.from_tensor_slices((x_train_single, y_train_single))
train_data_single = train_data_single.cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE).repeat()
val_data_single = tf.data.Dataset.from_tensor_slices((x_val_single, y_val_single))
val_data_single = val_data_single.batch(BATCH_SIZE).repeat()
print(train_data_single)
print(val_data_single)
# Model
single_step_model = tf.keras.models.Sequential()
single_step_model.add(tf.keras.layers.LSTM(32,
input_shape=x_train_single.shape[-2:]))
single_step_model.add(tf.keras.layers.Dense(1))
single_step_model.compile(optimizer=tf.keras.optimizers.RMSprop(), loss='mae')
single_step_history = single_step_model.fit(train_data_single, epochs=EPOCHS,
steps_per_epoch=STEPS_PER_EPOCH,
validation_data=val_data_single,
validation_steps=50)
# Plot train and validation loss over epochs
def plot_train_history(history, title):
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.figure()
plt.plot(epochs, loss, 'b', label='Training loss')
plt.plot(epochs, val_loss, 'r', label='Validation loss')
plt.title(title)
plt.legend()
plt.grid()
plt.show()
plot_train_history(single_step_history,
'Single Step Training and validation loss')
# plot time series and predicted values
for x, y in val_data_single.take(5):
plot = show_plot([x[0][:, 1].numpy(), y[0].numpy(),
single_step_model.predict(x)[0]], 12,
'Single Step Prediction')
plot.show()
```
## Multi-variate & multi-step forecasting
- Generate multiple future values of temperature
```
# single_step=FALSE default value
future_target = 72 # 72 future values
x_train_multi, y_train_multi = multivariate_data(dataset, dataset[:, 1], 0,
TRAIN_SPLIT, past_history,
future_target, STEP)
x_val_multi, y_val_multi = multivariate_data(dataset, dataset[:, 1],
TRAIN_SPLIT, None, past_history,
future_target, STEP)
print(x_train_multi.shape)
print(y_train_multi.shape)
print(x_val_multi.shape)
print(y_val_multi.shape)
# TF DATASET
train_data_multi = tf.data.Dataset.from_tensor_slices((x_train_multi, y_train_multi))
train_data_multi = train_data_multi.cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE).repeat()
val_data_multi = tf.data.Dataset.from_tensor_slices((x_val_multi, y_val_multi))
val_data_multi = val_data_multi.batch(BATCH_SIZE).repeat()
#plotting function
def multi_step_plot(history, true_future, prediction):
plt.figure(figsize=(12, 6))
num_in = create_time_steps(len(history))
num_out = len(true_future)
plt.grid()
plt.plot(num_in, np.array(history[:, 1]), label='History')
plt.plot(np.arange(num_out)/STEP, np.array(true_future), 'bo',
label='True Future')
if prediction.any():
plt.plot(np.arange(num_out)/STEP, np.array(prediction), 'ro',
label='Predicted Future')
plt.legend(loc='upper left')
plt.show()
for x, y in train_data_multi.take(1):
multi_step_plot(x[0], y[0], np.array([0]))
multi_step_model = tf.keras.models.Sequential()
multi_step_model.add(tf.keras.layers.LSTM(32,
return_sequences=True,
input_shape=x_train_multi.shape[-2:]))
multi_step_model.add(tf.keras.layers.LSTM(16, activation='relu'))
multi_step_model.add(tf.keras.layers.Dense(72)) # for 72 outputs
multi_step_model.compile(optimizer=tf.keras.optimizers.RMSprop(clipvalue=1.0), loss='mae')
multi_step_history = multi_step_model.fit(train_data_multi, epochs=EPOCHS,
steps_per_epoch=STEPS_PER_EPOCH,
validation_data=val_data_multi,
validation_steps=50)
plot_train_history(multi_step_history, 'Multi-Step Training and validation loss')
for x, y in val_data_multi.take(3):
multi_step_plot(x[0], y[0], multi_step_model.predict(x)[0])
```
*Poonam Ligade*
*1st Feb 2017*
----------
This notebook is like a note to self.
I am trying to understand the various components of artificial neural networks, aka deep learning.
Hope it might be useful for someone else here.
I am designing a neural net on MNIST handwritten digit images to identify their correct label, i.e. the number in the image.
You must have guessed it's an image recognition task.
MNIST is called the "Hello World" of deep learning.
Let's start!
This notebook is inspired by [Jeremy's][1] [Deep Learning][2] MOOC and the [Deep Learning with Python][3] book by Keras author [François Chollet][4].
[1]: https://www.linkedin.com/in/howardjeremy/
[2]: http://course.fast.ai/
[3]: https://www.manning.com/books/deep-learning-with-python
[4]: https://research.google.com/pubs/105096.html
**Import all required libraries**
===============================
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
%matplotlib inline
from keras.models import Sequential
from keras.layers import Dense , Dropout , Lambda, Flatten
from keras.optimizers import Adam ,RMSprop
from sklearn.model_selection import train_test_split
from keras import backend as K
from keras.preprocessing.image import ImageDataGenerator
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
from subprocess import check_output
print(check_output(["ls", "../input"]).decode("utf8"))
# Any results you write to the current directory are saved as output.
```
**Load Train and Test data**
============================
```
# create the training & test sets (pandas reads the header row automatically)
train = pd.read_csv("../input/train.csv")
print(train.shape)
train.head()
test= pd.read_csv("../input/test.csv")
print(test.shape)
test.head()
X_train = (train.iloc[:,1:].values).astype('float32') # all pixel values
y_train = train.iloc[:,0].values.astype('int32') # only labels i.e targets digits
X_test = test.values.astype('float32')
X_train
y_train
```
The output variable is an integer from 0 to 9. This is a **multiclass** classification problem.
## Data Visualization
Let's look at 3 images from the dataset with their labels.
```
#Convert train dataset to (num_images, img_rows, img_cols) format
X_train = X_train.reshape(X_train.shape[0], 28, 28)
for i in range(6, 9):
plt.subplot(330 + (i+1))
plt.imshow(X_train[i], cmap=plt.get_cmap('gray'))
plt.title(y_train[i]);
#expand 1 more dimension for the single (grayscale) colour channel
X_train = X_train.reshape(X_train.shape[0], 28, 28,1)
X_train.shape
X_test = X_test.reshape(X_test.shape[0], 28, 28,1)
X_test.shape
```
**Preprocessing the digit images**
==================================
**Feature Standardization**
-------------------------------------
It is an important preprocessing step.
It centres the data around zero mean and scales it to unit variance.
```
mean_px = X_train.mean().astype(np.float32)
std_px = X_train.std().astype(np.float32)
def standardize(x):
return (x-mean_px)/std_px
```
*One Hot encoding of labels.*
-----------------------------
A one-hot vector is a vector which is 0 in most dimensions, and 1 in a single dimension. In this case, the nth digit will be represented as a vector which is 1 in the nth dimension.
For example, 3 would be [0,0,0,1,0,0,0,0,0,0].
```
from keras.utils.np_utils import to_categorical
y_train= to_categorical(y_train)
num_classes = y_train.shape[1]
num_classes
```
Let's plot the 10th label.
```
plt.title(y_train[9])
plt.plot(y_train[9])
plt.xticks(range(10));
```
Oh, it's 3!
**Designing Neural Network Architecture**
=========================================
```
# fix random seed for reproducibility
seed = 43
np.random.seed(seed)
```
*Linear Model*
--------------
```
from keras.models import Sequential
from keras.layers.core import Lambda , Dense, Flatten, Dropout
from keras.callbacks import EarlyStopping
from keras.layers import BatchNormalization, Convolution2D , MaxPooling2D
```
Let's create a simple model using the Keras `Sequential` API.
1. A Lambda layer applies a simple custom function (such as a sum, average, exponentiation, or here our standardization) to its input.
In the 1st layer of the model we have to define the input dimensions of our data in (rows, columns, colour channels) format.
(In Theano the colour channel comes first.)
2. Flatten transforms the input into a 1D array.
3. Dense is a fully connected layer, which means all neurons in the previous layer are connected to all neurons in this layer.
In the last layer we have to specify the output dimensions/classes of the model.
Here it's 10, since we have to output 10 different digit labels.
```
model= Sequential()
model.add(Lambda(standardize,input_shape=(28,28,1)))
model.add(Flatten())
model.add(Dense(10, activation='softmax'))
print("input shape ",model.input_shape)
print("output shape ",model.output_shape)
```
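Optionally, `model.summary()` gives a quick sanity check of the layer output shapes and parameter counts (this call is not in the original notebook):
```
# Optional: inspect layer output shapes and parameter counts
model.summary()
```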
***Compile network***
-------------------
Before making the network ready for training we have to specify the following:
1. A loss function: to measure how good the network is
2. An optimizer: to update the network as it sees more data and reduce the loss value
3. Metrics: to monitor the performance of the network
```
from keras.optimizers import RMSprop
model.compile(optimizer=RMSprop(lr=0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
from keras.preprocessing import image
gen = image.ImageDataGenerator()
```
## Cross Validation
```
from sklearn.model_selection import train_test_split
X = X_train
y = y_train
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.10, random_state=42)
batches = gen.flow(X_train, y_train, batch_size=64)
val_batches=gen.flow(X_val, y_val, batch_size=64)
history=model.fit_generator(generator=batches, steps_per_epoch=batches.n, epochs=3,
validation_data=val_batches, validation_steps=val_batches.n)
history_dict = history.history
history_dict.keys()
import matplotlib.pyplot as plt
%matplotlib inline
loss_values = history_dict['loss']
val_loss_values = history_dict['val_loss']
epochs = range(1, len(loss_values) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss_values, 'bo')
# b+ is for "blue crosses"
plt.plot(epochs, val_loss_values, 'b+')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.show()
plt.clf() # clear figure
acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']
plt.plot(epochs, acc_values, 'bo')
plt.plot(epochs, val_acc_values, 'b+')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.show()
```
## Fully Connected Model
Neurons in a fully connected layer have full connections to all activations in the previous layer, as seen in regular Neural Networks.
We add another Dense layer to the model.
```
def get_fc_model():
model = Sequential([
Lambda(standardize, input_shape=(28,28,1)),
Flatten(),
Dense(512, activation='relu'),
Dense(10, activation='softmax')
])
model.compile(optimizer='Adam', loss='categorical_crossentropy',
metrics=['accuracy'])
return model
fc = get_fc_model()
fc.optimizer.lr=0.01
history=fc.fit_generator(generator=batches, steps_per_epoch=batches.n, epochs=1,
validation_data=val_batches, validation_steps=val_batches.n)
```
## Convolutional Neural Network
CNNs are extremely efficient for images.
```
from keras.layers import Convolution2D, MaxPooling2D
def get_cnn_model():
model = Sequential([
Lambda(standardize, input_shape=(28,28,1)),
Convolution2D(32,(3,3), activation='relu'),
Convolution2D(32,(3,3), activation='relu'),
MaxPooling2D(),
Convolution2D(64,(3,3), activation='relu'),
Convolution2D(64,(3,3), activation='relu'),
MaxPooling2D(),
Flatten(),
Dense(512, activation='relu'),
Dense(10, activation='softmax')
])
model.compile(Adam(), loss='categorical_crossentropy',
metrics=['accuracy'])
return model
model= get_cnn_model()
model.optimizer.lr=0.01
history=model.fit_generator(generator=batches, steps_per_epoch=batches.n, epochs=1,
validation_data=val_batches, validation_steps=val_batches.n)
```
## Data Augmentation
It is a technique of showing slightly modified or new images to the neural network to avoid overfitting and to achieve better generalization.
In case you have a very small dataset, you can use different kinds of data augmentation techniques to increase its effective size. Neural networks perform better if you provide them with more data.
Different data augmentation techniques are as follows:
1. Cropping
2. Rotating
3. Scaling
4. Translating
5. Flipping
6. Adding Gaussian noise to input images etc.
```
gen =ImageDataGenerator(rotation_range=8, width_shift_range=0.08, shear_range=0.3,
height_shift_range=0.08, zoom_range=0.08)
batches = gen.flow(X_train, y_train, batch_size=64)
val_batches = gen.flow(X_val, y_val, batch_size=64)
model.optimizer.lr=0.001
history=model.fit_generator(generator=batches, steps_per_epoch=batches.n, epochs=1,
validation_data=val_batches, validation_steps=val_batches.n)
```
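To see what the augmented generator actually produces, here is a small optional sketch (not part of the original notebook) that displays a few augmented variants of a single training digit:
```
# Show four augmented versions of one training image.
# `gen`, `X_train` and `y_train` are the objects defined above.
aug_iter = gen.flow(X_train[:1], y_train[:1], batch_size=1)
plt.figure(figsize=(8, 2))
for i in range(4):
    aug_img, _ = next(aug_iter)
    plt.subplot(1, 4, i + 1)
    plt.imshow(aug_img[0].reshape(28, 28), cmap=plt.get_cmap('gray'))
    plt.axis('off')
plt.show()
```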
## Adding Batch Normalization
Batch normalization makes hyperparameter tuning easier and helps train really deep neural networks.
```
from keras.layers.normalization import BatchNormalization
def get_bn_model():
model = Sequential([
Lambda(standardize, input_shape=(28,28,1)),
Convolution2D(32,(3,3), activation='relu'),
BatchNormalization(axis=1),
Convolution2D(32,(3,3), activation='relu'),
MaxPooling2D(),
BatchNormalization(axis=1),
Convolution2D(64,(3,3), activation='relu'),
BatchNormalization(axis=1),
Convolution2D(64,(3,3), activation='relu'),
MaxPooling2D(),
Flatten(),
BatchNormalization(),
Dense(512, activation='relu'),
BatchNormalization(),
Dense(10, activation='softmax')
])
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
return model
model= get_bn_model()
model.optimizer.lr=0.01
history=model.fit_generator(generator=batches, steps_per_epoch=batches.n, epochs=1,
validation_data=val_batches, validation_steps=val_batches.n)
```
## Submitting Predictions to Kaggle.
Make sure you use the full train dataset here to train the model before predicting on the test set.
```
model.optimizer.lr=0.01
gen = image.ImageDataGenerator()
batches = gen.flow(X, y, batch_size=64)
history=model.fit_generator(generator=batches, steps_per_epoch=batches.n, epochs=3)
predictions = model.predict_classes(X_test, verbose=0)
submissions=pd.DataFrame({"ImageId": list(range(1,len(predictions)+1)),
"Label": predictions})
submissions.to_csv("DR.csv", index=False, header=True)
```
More to come. Please upvote if you find it useful.
You can increase the number of epochs on a GPU-enabled machine to get better results.
# Clustering CIML
Clustering experiment on CIML.
**Motivation:** During CIML supervised learning on multi-class classification experiments, where the classes are the cloud operators providing the VMs that run CI jobs, the classes predicted with the best metrics were those with the highest number of samples in the dataset.
We want to evaluate whether unsupervised learning can group the cloud providers with high support into separate clusters.
Clustering algorithm: k-means.
<br>Method for deciding the number of clusters: elbow method and silhouette score.
```
from ciml import gather_results
from ciml import tf_trainer
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.cm as cmx
import matplotlib.colors as pltcolors
import matplotlib.pyplot as plt
import plotly.express as px
from plotly.subplots import make_subplots
from sklearn import metrics
from scipy.spatial.distance import cdist
from sklearn.metrics import silhouette_samples, silhouette_score
import matplotlib.cm as cm
```
## Data loading and analysis
From the supervised learning experiments on multi-class classification of CIML data, the best results were obtained with the following setup:
* Features from dstat data: User CPU `usr` and Average System Load `1m`.
* Data resolution: 1 minute
* Classes reduction: cloud providers with several regions were mapped to a single class.
* Model hyperparameters:
* NW topology: DNN with 3 hidden layers and 100 units per layer.
* Activation function: RELU.
* Output layer: Sigmoid.
* Initial learning rate: 0.05
* Optimizer: Adagrad
We will load the dataset used for this experiment and analyse the distribution of samples per cloud provider.
```
#Define datapath
data_path = '/Users/kw/ciml_data/cimlodsceu2019seed'
#dataset = 'usr_1m-10s-node_provider'
dataset = 'usr_1m-1min-node_provider'
#Dataset including classes
labels = gather_results.load_dataset(dataset, 'labels', data_path=data_path)['labels']
training_data = gather_results.load_dataset(dataset, 'training', data_path=data_path)
test_data = gather_results.load_dataset(dataset, 'test', data_path=data_path)
config = gather_results.load_model_config(dataset, data_path=data_path)
classes = training_data['classes']
examples = training_data['examples']
example_ids = training_data['example_ids']
# Create an int representation of class
unique_classes = list(set(classes))
dict_classes = dict(zip(unique_classes, list(range(len(unique_classes)))))
int_classes = [dict_classes[x] for x in classes]
df_data = pd.DataFrame(examples, columns=labels, index=example_ids)
df_data['classes'] = int_classes
```
The dataset contains 185 features and 2377 samples. Each sample is a CI job run.
```
#Let's have a look at the data
df_data.shape
```
We now list the cloud provider classes in the dataset and see how many samples the dataset contains per class.
```
#Cloud providers in the dataset and their numerical mapping
classes_count = pd.DataFrame.from_dict(dict_classes, orient='index').reset_index()
classes_count = classes_count.rename(columns={'index':'cloud_prov',0:'id'})
classes_count
#Add the total amount of samples in the dataset per cloud provider to have an overall view of the dataset
total_count = pd.DataFrame(df_data['classes'].value_counts()).add_suffix('_count').reset_index()
classes_count['count'] = classes_count.apply(
lambda x: (total_count[total_count['index']==x['id']]['classes_count']).values[0], axis=1, result_type = 'expand')
classes_count.sort_values(by='count', ascending=False)
```
## Determine the optimal number of clusters
The next step is to determine the optimal number of clusters for training our k-means clustering model.
<br>We will use the elbow method and the silhouette score to find their recommendations.
```
#Numpy representation of the dataframe df_data.
#This representation is needed for calculating the silhouette coefficients.
cluster_examples = df_data.to_numpy()
cluster_examples.shape
```
### Elbow method
In cluster analysis, the elbow method is a heuristic used in determining the number of clusters in a data set.
<br>The method consists of plotting the explained variation as a function of the number of clusters, and picking the elbow of the curve as the number of clusters to use.[1](https://en.wikipedia.org/wiki/Elbow_method_(clustering)#:~:text=In%20cluster%20analysis%2C%20the%20elbow,number%20of%20clusters%20to%20use.)
```
# k means determine k using elbow method
distortions = []
K = range(1,10)
X = cluster_examples
for k in K:
kmeanModel = KMeans(n_clusters=k).fit(X)
kmeanModel.fit(X)
distortions.append(sum(np.min(cdist(X, kmeanModel.cluster_centers_, 'euclidean'), axis=1)) / X.shape[0])
# Plot the elbow
plt.plot(K, distortions, 'bx-')
plt.xlabel('k')
plt.ylabel('Distortion')
plt.title('The Elbow Method showing the optimal k')
plt.show()
```
The elbow method suggests running k-means with 2 clusters.
### Silhouette score
The elbow method can be ambiguous, as an alternative the average silhouette method can be used.
<br>The silhouette value is a measure of how similar an object is to its own cluster (cohesion) compared
<br>to other clusters (separation). The silhouette ranges from −1 to +1, where a high value indicates that
<br>the object is well matched to its own cluster and poorly matched to neighboring clusters.
<br>If most objects have a high value, then the clustering configuration is appropriate.
<br>If many points have a low or negative value, then the clustering configuration may have too many or too few clusters. [2](https://en.wikipedia.org/wiki/Silhouette_(clustering)#:~:text=Silhouette%20refers%20to%20a%20method,consistency%20within%20clusters%20of%20data.&text=The%20silhouette%20ranges%20from%20%E2%88%921,poorly%20matched%20to%20neighboring%20clusters.)
```
X = cluster_examples
range_n_clusters = (2,3,4,5,6,7,8)
for n_clusters in range_n_clusters:
# Create a subplot with 1 row and 2 columns
fig, (ax1, ax2) = plt.subplots(1, 2)
fig.set_size_inches(18, 7)
# The 1st subplot is the silhouette plot
# The silhouette coefficient can range from -1, 1 but in this example all
# lie within [-0.1, 1]
ax1.set_xlim([-0.1, 1])
# The (n_clusters+1)*10 is for inserting blank space between silhouette
# plots of individual clusters, to demarcate them clearly.
ax1.set_ylim([0, len(X) + (n_clusters + 1) * 10])
# Initialize the clusterer with n_clusters value and a random generator
# seed of 10 for reproducibility.
clusterer = KMeans(n_clusters=n_clusters, random_state=555)
cluster_labels = clusterer.fit_predict(X)
# The silhouette_score gives the average value for all the samples.
# This gives a perspective into the density and separation of the formed
# clusters
silhouette_avg = silhouette_score(X, cluster_labels)
print("For n_clusters =", n_clusters,
"The average silhouette_score is :", silhouette_avg)
# Compute the silhouette scores for each sample
sample_silhouette_values = silhouette_samples(X, cluster_labels)
y_lower = 10
for i in range(n_clusters):
# Aggregate the silhouette scores for samples belonging to
# cluster i, and sort them
ith_cluster_silhouette_values = \
sample_silhouette_values[cluster_labels == i]
ith_cluster_silhouette_values.sort()
size_cluster_i = ith_cluster_silhouette_values.shape[0]
y_upper = y_lower + size_cluster_i
color = cm.nipy_spectral(float(i) / n_clusters)
ax1.fill_betweenx(np.arange(y_lower, y_upper),
0, ith_cluster_silhouette_values,
facecolor=color, edgecolor=color, alpha=0.7)
# Label the silhouette plots with their cluster numbers at the middle
ax1.text(-0.05, y_lower + 0.5 * size_cluster_i, str(i))
# Compute the new y_lower for next plot
y_lower = y_upper + 10 # 10 for the 0 samples
ax1.set_title("The silhouette plot for the various clusters.")
ax1.set_xlabel("The silhouette coefficient values")
ax1.set_ylabel("Cluster label")
# The vertical line for average silhouette score of all the values
ax1.axvline(x=silhouette_avg, color="red", linestyle="--")
ax1.set_yticks([]) # Clear the yaxis labels / ticks
ax1.set_xticks([-0.1, 0, 0.2, 0.4, 0.6, 0.8, 1])
# 2nd Plot showing the actual clusters formed
colors = cm.nipy_spectral(cluster_labels.astype(float) / n_clusters)
ax2.scatter(X[:, 0], X[:, 1], marker='.', s=30, lw=0, alpha=0.7,
c=colors, edgecolor='k')
# Labeling the clusters
centers = clusterer.cluster_centers_
# Draw white circles at cluster centers
ax2.scatter(centers[:, 0], centers[:, 1], marker='o',
c="white", alpha=1, s=200, edgecolor='k')
for i, c in enumerate(centers):
ax2.scatter(c[0], c[1], marker='$%d$' % i, alpha=1,
s=50, edgecolor='k')
ax2.set_title("The visualization of the clustered data.")
ax2.set_xlabel("Feature space for the 1st feature")
ax2.set_ylabel("Feature space for the 2nd feature")
plt.suptitle(("Silhouette analysis for KMeans clustering on sample data "
"with n_clusters = %d" % n_clusters),
fontsize=14, fontweight='bold')
plt.show()
```
For 2, 3, 5 and 6 clusters, the silhouette coefficient has higher values, with the best cluster separation for 2 clusters.
## Clustering Experiments
We now run the experiments using k-means with two, three, four, five and six clusters and evaluate how the cloud providers are grouped in them.
<br>First we define the functions to execute the training and create an overview of the results.
```
experiments = [2,3,4,5,6]
data_clusters = df_data.copy()
data_clusters.head()
def k_training(c):
clusterer = KMeans(n_clusters=c, random_state=555)
cluster_labels = clusterer.fit_predict(X)
k_labels = clusterer.labels_
data_clusters['clusters_'+str(c)] = k_labels
#Create a dataframe with the original dataset and the resulting cluster label found during training of k-means.
classes_totals = data_clusters['classes'].value_counts()
```
We define a function to produce an overview of the resulting clustering, including:
* List of cloud providers in each cluster.
* Percentage of the overall samples of the cloud provider included in the cluster (`p_class`).
* Percentage of the cluster covered by the cloud provider (`p_cluster`).
For example, if a cluster contains 100 samples, 80 of which come from a provider with 200 samples overall, then `p_cluster` is 80% and `p_class` is 40%.
```
def statistics(c):
clusters_totals = data_clusters['clusters_'+str(c)].value_counts()
stats = pd.DataFrame(data_clusters.groupby(by=['clusters_'+str(c),'classes'])['classes'].count())
stats = stats.add_suffix('_count').reset_index()
stats['p_class'] = (stats.apply(
lambda x: 100*x['classes_count']/classes_totals[x['classes']], axis=1, result_type = 'expand')).round(2)
stats['p_cluster'] = (stats.apply(
lambda x: 100*x['classes_count']/clusters_totals[x['clusters_'+str(c)]], axis=1, result_type = 'expand')).round(2)
stats['cloud_prov'] = stats.apply(
lambda x: (classes_count[classes_count['id']==x['classes']]['cloud_prov']).values[0], axis=1, result_type = 'expand')
return stats
```
We define a function to highlight in the table returned by `statistics` the class with the biggest coverage within a cluster.
```
def highlight_biggestclass(row):
# if row.p_cluster > 50:
# return ['background-color: cyan']*6
# else:
# return ['background-color: white']*6
return ['background-color: orange' if (row.p_cluster > 50) else 'background-color: cyan' if (row.p_class > 50) else 'background-color: white']*6
```
# Experiment runs and results
For comparison, the number of samples of each cloud provider in the original dataset:
```
classes_count.sort_values(by='count', ascending=False)
```
## Experiment with 2 clusters
```
k_training(2)
stats = statistics(2)
stats.style.apply(highlight_biggestclass, axis=1)
```
Apart from cloud operator `vexxhost`, whose samples are spread across both clusters, the remaining cloud operators are separated between the two clusters.
<br>However, this result is not significant for the aim of our experiments.
## Experiment with 3 clusters
```
k_training(3)
stats = statistics(3)
stats.style.apply(highlight_biggestclass, axis=1)
```
The clustering splits the cloud providers across clusters and is not significant.
## Experiment with 4 clusters
```
k_training(4)
stats = statistics(4)
stats.style.apply(highlight_biggestclass, axis=1)
```
Three of the cloud operators have predominance in separate clusters.
<br>Cloud operator `rax` is the one with the highest support in the dataset and dominates cluster 2, even though only 20% of the samples of its class fall there.
<br>Cloud operator `inap` is grouped in a cluster with little noise and 99.69% of its samples.
<br>Cloud operator `ovh` is grouped in a separate cluster with little noise and 99.01% of its samples.
## Experiment with 5 clusters
```
k_training(5)
stats = statistics(5)
stats.style.apply(highlight_biggestclass, axis=1)
```
<br>Cloud operator `inap` is grouped in a cluster with 99.69% of its samples and even less noise than in the experiment with 4 clusters.
<br>Cloud operators `rax` and `ovh` also have separate clusters with high class and cluster coverage. However, they are also predominant in two other clusters, as they have more samples than the remaining operators.
## Experiment with 6 clusters
```
k_training(6)
stats = statistics(6)
stats.style.apply(highlight_biggestclass, axis=1)
```
The resulting clustering is noisy, with the exception of cloud operator `inap`.
### Conclusion
Although the elbow method suggested 2 clusters and the silhouette score recommended 2 or 3 clusters as the optimal number of clusters for training, in the resulting experiments the clustering with the best differentiation among cloud providers was the one with 4 clusters.
<br>We do not consider the experiment with 2 clusters the best result, as we wanted to evaluate how many operators with high support a clustering algorithm could separate.
For experiments with more than 3 clusters, the cloud operator `inap` was grouped in a separate cluster with very little noise and 99.69% of its samples. This result indicates that the dstat data generated when running CI jobs on `inap` VMs has a combination of values discernible enough for k-means to group them efficiently.
The top three cloud operators with the highest support in the dataset (`rax`, `ovh` and `inap`) could be grouped in different clusters.
Cloud operator `rax` has the highest support and had a unique cluster only in the experiment with 2 clusters; otherwise it was split into two clusters, with the highest coverage of 79% of its samples in a single cluster for the experiments with 3 and 4 clusters. This might be due to the regions that were reduced to a single class.
Cloud operator `ovh` had the best coverage of samples in a single cluster for the experiment with 4 clusters (99%).
In general, the dstat data from the CI jobs has potential for further exploration using unsupervised learning. <br>In particular, clustering of failed CI jobs could help engineers better triage failures coming from the gate pipeline of the OpenStack CI system. This approach could be used in other CI systems as well.
<a href="https://colab.research.google.com/github/finerbrighterlighter/myanmar_covid19/blob/master/exponential_growth.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
#Libraries
```
import statsmodels.api as sm
import pandas as pd
import numpy as np
from math import pi
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.ticker as mticker
from matplotlib.ticker import StrMethodFormatter
from google.colab import files
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
%matplotlib inline
```
#Data
Since the Government fails to provide a complete and open dataset on disease status in the country, several young doctors in Myanmar volunteered on their own to monitor announcements. The data used here is collected by Dr. Nyein Chan Ko Ko.
```
data = "https://raw.githubusercontent.com/finerbrighterlighter/myanmar_covid19/master/mohs_announcement.csv"
df = pd.read_csv(data,header= 0)
df.insert(loc=0, column="case_id", value=np.arange(1,len(df)+1))
df["case_id"] = "case_" + df["case_id"].astype(str)
df["first_date"] = pd.to_datetime(df["first_date"].values, dayfirst=True, utc=False).tz_localize("Asia/Yangon")
df["qua_date"] = pd.to_datetime(df["qua_date"].values, dayfirst=True, utc=False).tz_localize("Asia/Yangon")
df["ann_date"] = pd.to_datetime(df["ann_date"].values, dayfirst=True, utc=False).tz_localize("Asia/Yangon")
df["exp_date"] = pd.to_datetime(df["exp_date"].values, dayfirst=True, utc=False).tz_localize("Asia/Yangon")
df["dsc_date"] = pd.to_datetime(df["dsc_date"].values, dayfirst=True, utc=False).tz_localize("Asia/Yangon")
df
```
# Basic Timeline ( Total cases, Daily new cases, infection spread)
```
case_df = df[["ann_date","travel"]].copy()
case_df.columns = ["date", "travel"]
case_df["overseas_inflow"] = np.where(df["travel"].isna(), 0, 1)
case_df["local_spread"] = np.where(df["travel"].notna(), 0, 1)
case_df["known_contact"] = np.where(df["travel"].notna(), 0, np.where(df["contact"]=="0", 0, np.where(df["contact"]=="1", 0, 1)))
case_df["unknown_contact"] = np.where(df["travel"].notna(), 0, np.where(df["contact"]=="0", 1, 0))
case_df["contact_blinded"] = np.where(df["travel"].notna(), 0, np.where(df["contact"]=="1", 1, 0))
case_df["date"] = pd.to_datetime(case_df["date"])
case_df.drop("travel", axis=1 , inplace=True)
case_df=case_df.groupby(["date"]).sum().reset_index()
case_df
timeline_df = pd.DataFrame(columns=["ndays","date"])
timeline_df["ndays"] = np.arange(len(pd.date_range(start=df.ann_date.min(), end=pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon"))))
timeline_df.loc[0,"date"]=df.ann_date.min()
for i in range(1,len(timeline_df)):
timeline_df.loc[i,"date"] = timeline_df.loc[i-1,"date"] + pd.Timedelta(days=1)
i=i+1
timeline_df["date"] = pd.to_datetime(timeline_df["date"])
timeline_df=timeline_df.merge(case_df,indicator=False,how='left')
timeline_df["overseas_inflow"].fillna(0, inplace=True)
timeline_df["local_spread"].fillna(0, inplace=True)
timeline_df["known_contact"].fillna(0, inplace=True)
timeline_df["unknown_contact"].fillna(0, inplace=True)
timeline_df["contact_blinded"].fillna(0, inplace=True)
timeline_df["overseas_inflow"]=timeline_df["overseas_inflow"].astype(int)
timeline_df["local_spread"]=timeline_df["local_spread"].astype(int)
timeline_df["known_contact"]=timeline_df["known_contact"].astype(int)
timeline_df["unknown_contact"]=timeline_df["unknown_contact"].astype(int)
timeline_df["contact_blinded"]=timeline_df["contact_blinded"].astype(int)
timeline_df["total"] = (timeline_df["overseas_inflow"]+timeline_df["local_spread"]).cumsum().astype(int)
timeline_df
```
# Pie (Donut) Chart
```
osf = timeline_df["overseas_inflow"].sum()/timeline_df["total"][timeline_df.index[-1]]
ls = timeline_df["local_spread"].sum()/timeline_df["total"][timeline_df.index[-1]]
ls_kc = timeline_df["known_contact"].sum()/timeline_df["total"][timeline_df.index[-1]]
ls_ukc = timeline_df["unknown_contact"].sum()/timeline_df["total"][timeline_df.index[-1]]
con_bli = timeline_df["contact_blinded"].sum()/timeline_df["total"][timeline_df.index[-1]]
# First Ring (outside)
fig, ax = plt.subplots()
ax.axis('equal')
mypie, _ = ax.pie([osf,ls], radius=2, labels=["Overseas Inflow = "+str("{:.2f}".format(osf*100))+" %", "Local Spread = "+str("{:.2f}".format(ls*100))+" %"], labeldistance=1,colors=["cornflowerblue", "lightcoral"])
plt.setp( mypie, width=0.7, edgecolor='white')
# Second Ring (Inside)
mypie2, _ = ax.pie([osf,ls_kc,ls_ukc,con_bli], radius=2-0.7, labels=[" ", "Known Contact = "+str("{:.2f}".format(ls_kc*100))+" %", "Unknown Contact = "+str("{:.2f}".format(ls_ukc*100))+" %","Contact Blinded = "+str("{:.2f}".format(con_bli*100))+" %"], labeldistance=0.8, colors=["lightsteelblue", "rosybrown", "firebrick", "coral"])
plt.setp( mypie2, width=0.5, edgecolor='white')
plt.margins(0,0)
plt.title("Total cases as of "+str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d/%m/%Y")),y=1.5)
plt.text(0, -0.5, "* out of "+str(timeline_df["total"][timeline_df.index[-1]])+" patients confirmed as of "+str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d/%m/%Y")), horizontalalignment="left", verticalalignment="bottom", transform=ax.transAxes)
# show it
tot_dist = str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d-%m-%Y"))+"_total_dist.svg"
plt.savefig(tot_dist, bbox_inches = "tight", format="svg")
plt.show()
files.download(tot_dist)
```
## Daily New Case
```
fig, ax = plt.subplots(figsize=(10,5))
ax.grid(linestyle=':', linewidth='0.5', color='silver')
ax.set_axisbelow(True)
xindex = np.arange(len(pd.date_range(start=timeline_df.date.min(), end=pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon"))))
plt.xticks(xindex,pd.date_range(start=timeline_df.date.min(), end=pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon")).strftime('%d/%m'), fontsize=10, rotation=90)
plt.gca().yaxis.set_major_locator(mticker.MultipleLocator(5))
oi_case = plt.bar(xindex, timeline_df["overseas_inflow"], color = "cornflowerblue")
ls_k_case = plt.bar(xindex, timeline_df["known_contact"], bottom=timeline_df["overseas_inflow"], color = "rosybrown")
ls_bli = plt.bar(xindex, timeline_df["contact_blinded"], bottom=timeline_df["overseas_inflow"]+timeline_df["known_contact"], color = "coral")
ls_uk_case = plt.bar(xindex, timeline_df["unknown_contact"], bottom=timeline_df["overseas_inflow"]+timeline_df["known_contact"]+timeline_df["contact_blinded"], color = "firebrick")
"""oi_case = plt.plot(xindex, timeline_df["overseas_inflow"], color = "cornflowerblue")
ls_k_case = plt.plot(xindex, timeline_df["known_contact"], color = "rosybrown")
ls_uk_case = plt.plot(xindex, timeline_df["unknown_contact"], color = "firebrick")
total = plt.plot(xindex, timeline_df["overseas_inflow"]+timeline_df["known_contact"]+timeline_df["unknown_contact"], color = "teal")"""
plt.title("Daily new cases as of "+str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d/%m/%Y")))
plt.legend((oi_case[0],ls_k_case[0],ls_bli[0],ls_uk_case[0]), ("Overseas Inflow", "Local Spread ( Known Contact )", "Local Spread ( Contact Blinded )", "Local Spread ( Unknown Contact )"),loc="lower left", bbox_to_anchor=(0, -0.4))
#plt.legend((oi_case[0],ls_k_case[0],ls_uk_case[0],total[0]), ("Overseas Inflow", "Local Spread ( Known Contact )", "Local Spread ( Unknown Contact ) or Local Spread ( Contact Blinded )","Total cases per day"),loc="lower left", bbox_to_anchor=(0, -0.4))
new_cases = str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d-%m-%Y"))+"_new_cases.svg"
plt.savefig(new_cases, bbox_inches = "tight")
plt.show()
files.download(new_cases)
```
#Mortality
```
exp_df = pd.DataFrame(columns=["ndays","date"])
exp_df["ndays"] = np.arange(len(pd.date_range(start=case_df.date.min(), end=pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon"))))
exp_df.loc[0,"date"]=case_df.date.min()
for i in range(1,len(exp_df)):
exp_df.loc[i,"date"] = exp_df.loc[i-1,"date"] + pd.Timedelta(days=1)
i=i+1
exp_df["date"] = pd.to_datetime(exp_df["date"])
exp_df=exp_df.merge(df.groupby(["exp_date"]).size().to_frame("expire"),left_on="date",right_on="exp_date",indicator=False,how='left')
exp_df["expire"].fillna(0, inplace=True)
exp_df["expire"]=exp_df["expire"].astype(int)
exp_df["total"]=exp_df["expire"].cumsum().astype(int)
exp_df
fig, ax = plt.subplots(figsize=(10,5))
ax.grid(linestyle=':', linewidth='0.5', color='silver')
ax.set_axisbelow(True)
xindex = np.arange(len(pd.date_range(start=exp_df.date.min(), end=pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon"))))
plt.xticks(xindex,pd.date_range(start=exp_df.date.min(), end=pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon")).strftime('%d/%m'), fontsize=10, rotation=90)
plt.gca().yaxis.set_major_locator(mticker.MultipleLocator(1))
expire = plt.bar(xindex,exp_df["total"], linestyle=(0, (3, 1, 1, 1, 1, 1)), color="red")
plt.title("Cummulative mortality as of "+str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d/%m/%Y")))
#plt.legend((expire),("Patient expired",),loc='upper left', bbox_to_anchor=(1, 1))
exp_cases = str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d-%m-%Y"))+"_exp.svg"
plt.savefig(exp_cases, bbox_inches = "tight")
plt.show()
files.download(exp_cases)
```
## Radar chart for underlying conditions of expired patients
```
mort_data = "https://raw.githubusercontent.com/finerbrighterlighter/myanmar_covid19/master/expired_underlying.csv"
mort_df = pd.read_csv(mort_data,header= 0)
mort_df
#mort_df.insert(loc=0, column="exp_id", value=np.arange(1,len(mort_df)+1))
#mort_df["exp_id"] = "exp_" + mort_df["exp_id"].astype(str)
mort_df.drop("case", axis=1 , inplace=True)
mort_df.drop("date", axis=1 , inplace=True)
mort_df.drop("underlying_con", axis=1 , inplace=True)
mort_df.ht_ds = mort_df.ht_ds.cumsum()
mort_df.dm = mort_df.dm.cumsum()
mort_df.ht = mort_df.ht.cumsum()
mort_df.ch_resp = mort_df.ch_resp.cumsum()
mort_df.ca = mort_df.ca.cumsum()
mort_df
Attributes =list(mort_df)
AttNo = len(Attributes)
Attributes
values = mort_df.iloc[-1].tolist()
values += values [:1]
values[:] = [x / len(mort_df) for x in values]
values
angles = [n / float(AttNo) * 2 * pi for n in range(AttNo)]
angles += angles [:1]
fig = plt.figure(figsize=(8,8))
ax = plt.subplot(111, polar=True)
#Add the attribute labels to our axes
plt.xticks(angles[:-1],["Heart Disease", "Diabetes Mellitus", "Hypertension", "Chronic Respiratory Disease", "Cancer"])
plt.gca().yaxis.set_major_formatter(mticker.PercentFormatter(1,decimals=0))
plt.ylim(0, 1.0)
#Plot the line around the outside of the filled area, using the angles and values calculated before
ax.plot(angles,values)
#Fill in the area plotted in the last line
ax.fill(angles, values, 'slategray', alpha=0.8)
#Give the plot a title and show it
ax.set_title("Underlying conditions of the expired patients", pad=20)
plt.text(1, -0.15, "* out of "+str(len(mort_df))+" patients expired as of "+str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d/%m/%Y")), horizontalalignment="left", verticalalignment="bottom", transform=ax.transAxes)
under = str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d-%m-%Y"))+"_underlying.svg"
plt.savefig(under, bbox_inches = "tight")
plt.show()
files.download(under)
```
#Status
```
dsc_df = pd.DataFrame(columns=["ndays","date"])
dsc_df["ndays"] = np.arange(len(pd.date_range(start=case_df.date.min(), end=pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon"))))
dsc_df.loc[0,"date"]=case_df.date.min()
for i in range(1,len(dsc_df)):
dsc_df.loc[i,"date"] = dsc_df.loc[i-1,"date"] + pd.Timedelta(days=1)
i=i+1
dsc_df["date"] = pd.to_datetime(dsc_df["date"])
dsc_df=dsc_df.merge(df.groupby(["dsc_date"]).size().to_frame("recovered"),left_on="date",right_on="dsc_date",indicator=False,how='left')
dsc_df["recovered"].fillna(0, inplace=True)
dsc_df["recovered"]=dsc_df["recovered"].astype(int)
dsc_df["total"]=dsc_df["recovered"].cumsum().astype(int)
dsc_df
total_df = timeline_df[["date","total"]].copy()
total_df["expire"] = exp_df["total"]
total_df["recovered"] = dsc_df["total"]
total_df["hosp"] = (total_df["total"]-total_df["expire"]-total_df["recovered"])
total_df["expire"] = total_df["expire"]/total_df["total"]
total_df["recovered"] = total_df["recovered"]/total_df["total"]
total_df["hosp"] = total_df["hosp"]/total_df["total"]
total_df
fig, ax = plt.subplots(figsize=(15,7.5))
ax.grid(linestyle=':', which="both", linewidth='0.5', color='silver')
box = dict(boxstyle="square, pad=1", facecolor="skyblue", alpha=0.25)
xindex = np.arange(len(pd.date_range(start=total_df.date.min(), end=pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon"))))
plt.xticks(xindex,pd.date_range(start=total_df.date.min(), end=pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon")).strftime('%d/%m'), fontsize=10, rotation=90)
plt.gca().yaxis.set_major_formatter(mticker.PercentFormatter(1))
plt.gca().yaxis.set_major_locator(mticker.MultipleLocator(0.2))
plt.gca().yaxis.set_minor_locator(mticker.MultipleLocator(0.05))
local_spread = plt.stackplot(xindex,[total_df["expire"],total_df["recovered"],total_df["hosp"]],labels=["Patients expired","Patients recovered","Currently under hospitalization"],colors=["black","limegreen","teal"])
plt.title("Comfirmed patients' status as of "+str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d/%m/%Y")),fontsize=15)
plt.legend(loc="lower left", bbox_to_anchor=(0, -0.3),fontsize=12)
plt.text(0, 0, "As of "+str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d/%m/%Y"))+
",\nThere is "+str("%.2f" %((total_df.loc[len(total_df)-1,"recovered"])*100))+" % recovery rate and "+
str("%.2f" %((total_df.loc[len(total_df)-1,"expire"])*100))+" % mortality rate.",
fontsize=15, linespacing= 2, bbox=box , position=(0.45,-0.25),
horizontalalignment="left", verticalalignment="bottom", transform=ax.transAxes)
status = str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d-%m-%Y"))+"_status.svg"
plt.savefig(status, bbox_inches = "tight")
plt.show()
files.download(status)
```
## Spread Trend
```
spread_trend_df = pd.DataFrame(columns=["ndays","date"])
spread_trend_df["ndays"] = np.arange(len(pd.date_range(start=case_df.date.min()-pd.Timedelta(days=5), end=pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon"))))
spread_trend_df.loc[0,"date"]=case_df.date.min()-pd.Timedelta(days=5)
for i in range(1,len(spread_trend_df)):
spread_trend_df.loc[i,"date"] = spread_trend_df.loc[i-1,"date"] + pd.Timedelta(days=1)
i=i+1
spread_trend_df["date"] = pd.to_datetime(spread_trend_df["date"])
spread_trend_df=spread_trend_df.merge(case_df,indicator=False,how='left')
spread_trend_df["overseas_inflow"].fillna(0, inplace=True)
spread_trend_df["local_spread"].fillna(0, inplace=True)
spread_trend_df["overseas_inflow"]=spread_trend_df["overseas_inflow"].astype(int)
spread_trend_df["local_spread"]=spread_trend_df["local_spread"].astype(int)
spread_trend_df["tot_overseas_inflow"]=spread_trend_df["overseas_inflow"].cumsum()
spread_trend_df["tot_local_spread"]=spread_trend_df["local_spread"].cumsum()
spread_trend_df
fig, ax = plt.subplots(figsize=(15,5))
ax.grid(linestyle=':', which="both" , linewidth='0.5', color='silver')
ax.set_axisbelow(True)
xindex = np.arange(len(pd.date_range(start=spread_trend_df.date.min(), end=pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon"))))
plt.xticks(xindex,pd.date_range(start=spread_trend_df.date.min(), end=pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon")).strftime('%d/%m'), fontsize=12, rotation=90)
plt.yticks(fontsize=12)
plt.gca().yaxis.set_major_locator(mticker.MultipleLocator(20))
plt.gca().yaxis.set_minor_locator(mticker.MultipleLocator(10))
box = dict(boxstyle='square,pad=1', facecolor="indianred", alpha=0.25)
land_close_fore = plt.axvline(x=1, color="springgreen", linestyle="--")
com_qua = plt.axvline(x=4, color="plum", linestyle="--")
visa_close = plt.axvline(x=11, color="skyblue", linestyle="--")
air_close = plt.axvline(x=12, color="gold", linestyle="--")
insein_cluster = plt.axvline(x=25, color="brown", linestyle="--")
thingyan_over= plt.axvline(x=33, color="burlywood", linestyle="--")
total = plt.plot(xindex, spread_trend_df["tot_overseas_inflow"]+spread_trend_df["tot_local_spread"], color="teal")
overseas_inflow = plt.plot(xindex, spread_trend_df["tot_overseas_inflow"], color="cornflowerblue")
local_spread = plt.plot(xindex, spread_trend_df["tot_local_spread"], color="lightcoral")
plt.title("Cumulative COVID-19 cases in Myanmar as of "+str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d/%m/%Y")),fontsize=15)
plt.legend((total[0],local_spread[0],overseas_inflow[0],land_close_fore,com_qua,visa_close,air_close,insein_cluster,thingyan_over),
("Cumulative Total", "Local Spread", "Overseas Inflow",
"Land border closed to foreigners", "All foreign entries have to undergo 14 days quarantine",
"Visa paused","International Flight Ban",
"Insein religious cluster is discovered","Thingyan Holidays are over"),
loc="lower left", bbox_to_anchor=(-0.025, -0.85),fontsize=12)
plt.text(0, 0, "The first COVID-19 patient was identified on 23rd March 2020.\n \nAs of "+str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d/%m/%Y"))+
",\nThere are "+str(spread_trend_df.loc[len(spread_trend_df)-1,"tot_overseas_inflow"]+spread_trend_df.loc[len(spread_trend_df)-1,"tot_local_spread"])+" confirmed patients in Myanmar.\n"+
str(spread_trend_df.loc[len(spread_trend_df)-1,"tot_overseas_inflow"])+" of the patients had returned from foreign countries and\n"+
str(spread_trend_df.loc[len(spread_trend_df)-1,"tot_local_spread"])+" patients were contracted within the country.",
fontsize=15, linespacing= 2, bbox=box , position=(0.45,-0.793),
horizontalalignment="left", verticalalignment="bottom", transform=ax.transAxes)
tot_cases = str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d-%m-%Y"))+"_total_cases.svg"
plt.savefig(tot_cases, bbox_inches = "tight", format="svg")
plt.show()
files.download(tot_cases)
```
# Age Distribution
```
age_df = df[["sex","age"]].copy()
age_df["age_gp"]=pd.cut(age_df.age,bins=[0,10.0,20.0,30.0,40.0,50.0,60.0,70.0,80.0,90.0,100.0],labels=["0-10","10-20","20-30","30-40","40-50","50-60","60-70","70-80","80-90","90-100"])
age_df.drop("age", axis=1 , inplace=True)
#pd.set_option('display.max_rows', 100)
# discharged
age_df["f_dsc"] = np.where(df["sex"]!="Female", 0, np.where(df["dsc_date"].notna(), 1, 0))
age_df["m_dsc"] = np.where(df["sex"]!="Male", 0, np.where(df["dsc_date"].notna(), 1, 0))
# expired
age_df["f_exp"] = np.where(df["sex"]!="Female", 0, np.where(df["dsc_date"].notna(), 0, np.where(df["exp_date"].notna(), 1,0)))
age_df["m_exp"] = np.where(df["sex"]!="Male", 0, np.where(df["dsc_date"].notna(), 0, np.where(df["exp_date"].notna(), 1,0)))
# in hospital
age_df["f_hosp"] = np.where(df["sex"]!="Female", 0, np.where(df["dsc_date"].notna(), 0, np.where(df["exp_date"].notna(), 0,1)))
age_df["m_hosp"] = np.where(df["sex"]!="Male", 0, np.where(df["dsc_date"].notna(), 0, np.where(df["exp_date"].notna(), 0,1)))
age_df
age_df = age_df.groupby(["sex","age_gp"]).sum().reset_index()
age_df["f_dsc"].fillna(0, inplace=True)
age_df["m_dsc"].fillna(0, inplace=True)
age_df["f_exp"].fillna(0, inplace=True)
age_df["m_exp"].fillna(0, inplace=True)
age_df["f_hosp"].fillna(0, inplace=True)
age_df["m_hosp"].fillna(0, inplace=True)
age_df
# these arrays are no longer necessary
# leaving them here to see the map into the dataframe
# age_gp=age_df.iloc[0:9,1].to_numpy()
# f_dsc=age_df.iloc[0:9,2].to_numpy()
# f_dsc=f_dsc*-1
# m_dsc=age_df.iloc[10:19,3].to_numpy()
# f_exp=age_df.iloc[0:9,4].to_numpy()
# f_exp=f_exp*-1
# m_exp=age_df.iloc[10:19,5].to_numpy()
# f_hosp=age_df.iloc[0:9,6].to_numpy()
# f_hosp=f_hosp*-1
# m_hosp=age_df.iloc[10:19,7].to_numpy()
fig, ax = plt.subplots(figsize=(10,5))
# grids
ax.grid(linestyle=":", which="both", linewidth="0.5", color="silver")
ax.set_axisbelow(True)
plt.axvline(x=0, color="snow")
# ticks
plt.gca().xaxis.set_major_formatter(mticker.PercentFormatter(len(df)))
plt.gca().xaxis.set_major_locator(mticker.MultipleLocator(5))
plt.gca().xaxis.set_major_formatter(StrMethodFormatter('{x:,.0f} %'))
plt.gca().xaxis.set_minor_locator(mticker.MultipleLocator(1))
plt.xticks(rotation=90)
plt.xlim((-25, 25))
# data
m_hosp = plt.barh(age_df.iloc[0:9,1], age_df.iloc[10:19,7], color = "teal")
m_dsc = plt.barh(age_df.iloc[0:9,1], age_df.iloc[10:19,3], left=age_df.iloc[10:19,7], color = "limegreen")
m_exp = plt.barh(age_df.iloc[0:9,1], age_df.iloc[10:19,5], left=age_df.iloc[10:19,7]+age_df.iloc[10:19,3], color = "black")
f_hosp = plt.barh(age_df.iloc[0:9,1], age_df.iloc[0:9,6]*-1, color = "teal")
f_dsc = plt.barh(age_df.iloc[0:9,1], age_df.iloc[0:9,2]*-1, left=age_df.iloc[0:9,6]*-1, color = "limegreen")
f_exp = plt.barh(age_df.iloc[0:9,1], age_df.iloc[0:9,4]*-1, left=(age_df.iloc[0:9,6]+age_df.iloc[0:9,2])*-1, color = "black")
# titles, subtitles, labels and legends
plt.title("Distribution of confirmed patients as of "+str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d/%m/%Y")))
plt.xlabel("Gender Distribution")
plt.ylabel("Age Distribution")
plt.text(0.25, -0.2, "Female", horizontalalignment="left", verticalalignment="center", transform=ax.transAxes)
plt.text(0.7, -0.2, "Male", horizontalalignment="left", verticalalignment="center", transform=ax.transAxes)
plt.text(0.01, -0.3, "Please consider the X axis as absolute values.", horizontalalignment="left", verticalalignment="bottom", transform=ax.transAxes)
plt.legend((m_hosp[0],m_dsc[0],m_exp[0]), ("Currently under Hospitalization", "Recovered", "Expired"),loc="lower left", bbox_to_anchor=(0, -0.55))
# download
age = str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d-%m-%Y"))+"_age.svg"
plt.savefig(age, bbox_inches = "tight", format="svg")
plt.show()
files.download(age)
```
- travel -> travel history
- region -> states and administrative regions of Myanmar where the case is quarantined
- first_date -> date of entry into the country, or of first symptom for cases with no travel history
- qua_date -> first date of hospital quarantine
- ann_date -> date of announcement by MOHS as positive
- exp_date -> date of patient's death
- dsc_date -> date of discharge
# Per Patient Timeline
```
timeline_df = pd.DataFrame(columns=["case_id"])
timeline_df["case_id"] = df["case_id"]
timeline_df["until_qua"] = (df["qua_date"]-df["first_date"]).dt.days
timeline_df["until_ann"] = (df["ann_date"]-df["qua_date"]).dt.days
timeline_df["until_first"] = (df["first_date"]-df["first_date"].min()).dt.days
timeline_df["hosp"] = np.where(df["exp_date"].isna(), np.where(df["dsc_date"].isna(), (pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon")-df["ann_date"]).dt.days, (df["dsc_date"]-df["ann_date"]).dt.days), (df["exp_date"]-df["ann_date"]).dt.days)
timeline_df["until_dsc"] = np.where(df["dsc_date"].notna(), 0.5, 0)
timeline_df["until_exp"] = np.where(df["exp_date"].notna(), 0.5, 0)
timeline_df
# case_39 expired on the same day as admission. adding 0.5 to visualize on the plot
timeline_df.loc[38, "hosp"] = timeline_df.loc[38, "hosp"] + 0.75
timeline_df.fillna(0, inplace=True)
```
## Timeline for each patient (Bar Plot)
```
fig, ax = plt.subplots(figsize=(10,30))
ax.grid(linestyle=':', linewidth='0.5', color='silver')
ax.set_axisbelow(True)
yindex = np.arange(len(timeline_df["case_id"]))
xindex = np.arange(len(pd.date_range(start=df.first_date.min(), end=pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon"))))
plt.yticks(yindex,timeline_df["case_id"], fontsize=10)
plt.xticks(xindex,pd.date_range(start=df.first_date.min(), end=pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon")).strftime('%d/%m'), fontsize=10, rotation=90)
plt.gca().invert_yaxis()
until_qua = plt.barh(yindex, timeline_df["until_qua"], left= timeline_df["until_first"], color = "tomato")
until_ann = plt.barh(yindex, timeline_df["until_ann"], left= timeline_df["until_qua"]+timeline_df["until_first"], color = "lightsalmon")
hosp = plt.barh(yindex, timeline_df["hosp"], left= timeline_df["until_ann"]+timeline_df["until_qua"]+timeline_df["until_first"], color = "teal")
until_exp = plt.barh(yindex, timeline_df["until_exp"], left= timeline_df["hosp"]+timeline_df["until_ann"]+timeline_df["until_qua"]+timeline_df["until_first"]-0.5, color = "black")
until_dsc = plt.barh(yindex, timeline_df["until_dsc"], left= timeline_df["hosp"]+timeline_df["until_exp"]+timeline_df["until_ann"]+timeline_df["until_qua"]+timeline_df["until_first"]-0.5, color = "limegreen")
plt.title("Timeline as of "+str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d/%m/%Y")))
plt.legend((until_qua[0], until_ann[0],hosp[0],until_dsc[0],until_exp[0]), ("Arrival in country or Contact with suspected carrier", "Under Hospital Quarantine","Under hospitalization","Patient recovered","Patient expired"),loc="lower left", bbox_to_anchor=(0, 0))
timeline = str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d-%m-%Y"))+"_timeline.svg"
plt.savefig(timeline, bbox_inches = "tight")
plt.show()
files.download(timeline)
```
## Time taken for action (Bar Plot)
***Deprecated***
```
"""fig, ax = plt.subplots(figsize=(40,10))
ax.grid(linestyle=':', linewidth='0.5', color='silver')
ax.set_axisbelow(True)
index = np.arange(len(timeline_df["case_id"]))
pre_incub = plt.axhline(y=14, color="teal", linestyle="--")
com_qua = plt.axhline(y=21, color="skyblue", linestyle="--")
new_incub = plt.axhline(y=28, color="aqua", linestyle="--")
p1 = plt.bar(index, timeline_df["until_qua"], color = "tomato")
p2 = plt.bar(index, timeline_df["until_ann"], bottom=timeline_df["until_qua"], color = "lightsalmon")
plt.ylabel("Days", fontsize=10)
plt.title("Days until being announced positive as of "+str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d/%m/%Y")))
plt.xticks(index,timeline_df["case_id"], fontsize=10, rotation=90)
plt.legend((p1[0], p2[0],pre_incub,com_qua,new_incub), ("Days until hospital quarantine", "Days until announcement", "Incubation period was assumed to be 14 days", "As of 11/4/2020, Community Quarantine is extended to 21 days, continuing with 7 days Home quarantine", "As of 11/4/2020, incubation period is readjusted to be 28 days "),loc="lower left", bbox_to_anchor=(0, -0.5))
days = str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d-%m-%Y"))+"_time_for_action.svg"
plt.savefig(days, bbox_inches = "tight")
plt.show()
files.download(days)"""
```
# Exponential Growth
```
sum_df = df[["ann_date","case_id"]].copy()
sum_df.columns = ["Date", "id"]
sum_df=sum_df.groupby(["Date"]).size().to_frame("Case").reset_index()
sum_df["Date"] = pd.to_datetime(sum_df["Date"])
sum_df
confirmed_df = pd.DataFrame(columns=["ndays","Date"])
#confirmed_df["ndays"] = np.arange(len(pd.date_range(start=sum_df.Date.min(), end=sum_df.Date.max())))
confirmed_df["ndays"] = np.arange(len(pd.date_range(start=sum_df.Date.min(), end=pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon"))))
confirmed_df.loc[0,"Date"]=sum_df.Date.min()
for i in range(1,len(confirmed_df)):
confirmed_df.loc[i,"Date"] = confirmed_df.loc[i-1,"Date"] + pd.Timedelta(days=1)
i=i+1
confirmed_df["Date"] = pd.to_datetime(confirmed_df["Date"])
confirmed_df=confirmed_df.merge(sum_df,indicator=False,how='left')
confirmed_df["Case"].fillna(0, inplace=True)
confirmed_df["Case"]=confirmed_df["Case"].astype(int)
confirmed_df["Case"] = confirmed_df["Case"].cumsum()
# Natural Log of Real Cases
confirmed_df["logCase"] = np.log(confirmed_df.Case).astype(float)
confirmed_df
```
Taking the natural log helps with visualization and long-term comparison, since it makes exponential data look more linear. That is why I will be plotting both the real and the natural-log line graphs.
# Model of choice
A true exponential cannot continue forever, but exponential growth is assumed until the inflection point arrives, so a linear regression is fitted to the log-transformed case counts.
## Log-Linear Regression
### Ordinary Least Squares Regression
```
X = confirmed_df.ndays
X = sm.add_constant(X)
y = confirmed_df.logCase
model = sm.OLS(y, X)
result = model.fit()
result.summary()
```
Exponential Formula<br> y = ab<sup>x</sup> <br>
a = Initial Value<br>
b = Rate of Change<br>
x = The feature ( Here it is time )<br>
b = (1+r) = Growth Rate <- Before Inflection <br>
b = (1-r) = Decay Rate <- After Inflection <br>
In the summary, "const" stands for the initial value "a".<br>
"ndays" is the coefficient of time. Since the model is fitted on the natural log of the cases, exp("ndays") is the growth factor "b", i.e. the factor by which the case count is multiplied from one day to the next.
```
def linear_predictions(t):
return np.exp(result.params["const"]) * np.exp(result.params["ndays"]) ** t
```
As we fitted our model with natural log values, we should change them back to real numbers to predict.
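As a quick illustration, here is a minimal sketch (assuming the fitted `result` from the OLS cell above) that recovers the growth parameters on the original scale; the doubling-time line is an extra convenience not used elsewhere in this notebook.
```
import numpy as np

# Back-transform the log-space OLS coefficients (assumes `result` from sm.OLS above)
a = np.exp(result.params["const"])                  # initial value "a" at ndays = 0
b = np.exp(result.params["ndays"])                  # daily growth factor "b" = 1 + r
doubling_time = np.log(2) / result.params["ndays"]  # days needed for the case count to double

print("a =", round(a, 2), " b =", round(b, 3), " doubling time ~", round(doubling_time, 1), "days")
```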
# Next Week Prediction
```
ndays = len(confirmed_df)+7
nextweek_df = pd.DataFrame(columns=["ndays","Date"])
nextweek_df["ndays"] = np.arange(ndays)
nextweek_df.loc[0,"Date"]=confirmed_df.loc[0,"Date"]
for i in range(1,len(nextweek_df)):
nextweek_df.loc[i,"Date"] = nextweek_df.loc[i-1,"Date"] + pd.Timedelta(days=1)
i=i+1
nextweek_df["Predictions"] = nextweek_df.ndays.apply(linear_predictions)
# Natural Log of Predicted Cases
nextweek_df["logPredictions"] = np.log(nextweek_df.Predictions).astype(float)
nextweek_df
```
Although I stated next week, the horizon here extends only 7 days beyond the observed data. Since our data and history are very short right now, it is not sufficient to predict far ahead without sacrificing accuracy. What is shown here is a proof of concept; once more data has accumulated, we should pursue further analysis.
# Real Number Plot
```
real = str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d-%m-%Y"))+"_real.svg"
log = str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d-%m-%Y"))+"_log.svg"
confirmed_x = pd.date_range(start=confirmed_df["Date"][confirmed_df.index[0]], end=confirmed_df["Date"][confirmed_df.index[-1]])
confirmed_y = confirmed_df["Case"].tolist()
confirmed_plot = pd.Series(data=confirmed_y, index=confirmed_x)
nextweek_x = pd.date_range(start=nextweek_df["Date"][nextweek_df.index[0]], end=nextweek_df["Date"][nextweek_df.index[-1]])
nextweek_y = nextweek_df["Predictions"].tolist()
nextweek_plot = pd.Series(data=nextweek_y, index=nextweek_x)
fig, ax = plt.subplots()
ax.plot(confirmed_plot, label="Confirmed", color="red")
ax.plot(nextweek_plot, label="Predicted", color ="blue")
ax.grid(linestyle=':', linewidth='0.5', color='silver')
ax.set_axisbelow(True)
legend = ax.legend(loc="upper left", fontsize="large")
plt.xlabel("Date")
plt.ylabel("Infections")
plt.suptitle("Predicted number of cases vs confirmed number of cases")
plt.title("As of "+str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d/%m/%Y")))
plt.xticks(rotation=90)
plt.savefig(real, bbox_inches = "tight")
plt.show()
files.download(real)
```
# Natural Log Plot
```
confirmed_logy = confirmed_df["logCase"].tolist()
confirmed_logplot = pd.Series(data=confirmed_logy, index=confirmed_x)
nextweek_logy = nextweek_df["logPredictions"].tolist()
nextweek_logplot = pd.Series(data=nextweek_logy, index=nextweek_x)
fig, ax = plt.subplots()
ax.plot(confirmed_logplot, label="Confirmed", color="red")
ax.plot(nextweek_logplot, label="Predicted", color ="blue")
ax.grid(linestyle=':', linewidth='0.5', color='silver')
ax.set_axisbelow(True)
legend = ax.legend(loc="upper left", fontsize="large")
plt.xlabel("Date")
plt.ylabel("Infections")
plt.suptitle("Predicted number of cases vs confirmed number of cases (Natural Log)")
plt.title("As of "+str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d/%m/%Y")))
plt.xticks(rotation=90)
plt.savefig(log, bbox_inches = "tight")
plt.show()
files.download(log)
%reset
```
# Day 6 - Voronoi diagram
<figure style="float: right; max-width: 25em; margin: 1em">
<img src="https://upload.wikimedia.org/wikipedia/commons/6/6d/Manhattan_Voronoi_Diagram.svg"
alt="Manhattan Voronoi diagram illustration from Wikimedia"/>
<figcaption style="font-style: italic; font-size: smaller">
Manhattan Voronoi diagram
Balu Ertl [<a href="https://creativecommons.org/licenses/by-sa/1.0">CC BY-SA 1.0</a>],<br/><a href="https://commons.wikimedia.org/wiki/File:Manhattan_Voronoi_Diagram.svg">from Wikimedia Commons</a>
</figcaption>
</figure>
* [Day 6](https://adventofcode.com/2018/day/6)
Another computational geometry problem! This time we are asked to find the largest area in a [Voronoi diagram](https://en.wikipedia.org/wiki/Voronoi_diagram).
The most efficient algorithm to produce the boundaries between the points ($O(n \log_2 n)$) is [Fortune's algorithm](https://en.wikipedia.org/wiki/Fortune%27s_algorithm), which (like [day 3](./Day%2003.ipynb)) is a [sweep line algorithm](https://en.wikipedia.org/wiki/Sweep_line_algorithm) to reduce the problem from 2 to 1 dimension. *But*, we don't need boundaries, we need *area*. A simpler method is to just use a $O(kn)$ double loop to find which of $n$ elements is closest for each of the $k$ `(x, y)` points in the map.
There are three important aspects to remember here:
1. we need to use [Manhattan distance](https://en.wikipedia.org/wiki/Taxicab_geometry), not Euclidean distance, when doing our calculations.
2. If 2 or more coordinates are equidistant from a given `(x, y)` point, that point doesn't count as area for any of the coordinates. This means we can't just use `min()` here, and `sort()` would be needlessly precise. Instead, we only need to know the *top 2 smallest distances*; if these two are equal we know we can't give the area to anyone. To get the smallest N of anything, you'd want to use a [heap queue](https://docs.python.org/3/library/heapq.html#heapq.nsmallest), which gives us the result in $O(n)$ time rather than $O(n \log_2 n)$. Or, for numpy arrays, use the [`numpy.partition()`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.partition.html) function; we just want to group the top two separate from the remainder, it doesn't matter if they are ordered any further, after all (see the short sketch after this list).
3. Per coordinate we need to track whether its area stretches to infinity, so we can disqualify it from consideration when we ask for the maximum area. Any coordinate that can claim a `(x, y)` point on the boundaries (defined as the min and max x and y coordinates) can be expected to stretch to infinity.
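Here is a small standalone sketch (toy numbers, not the puzzle input) of the "two smallest" idea with both `heapq.nsmallest()` and `numpy.partition()`:
```
import heapq
import numpy as np

distances = [7, 3, 9, 3, 12]

# two smallest via a heap queue, O(n); equal values mean the point is contested
smallest, second = heapq.nsmallest(2, distances)
print(smallest, second, smallest == second)      # 3 3 True

# same idea with numpy: partition so the two smallest end up in front (in any order)
print(np.partition(np.array(distances), 1)[:2])  # [3 3]
```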
All computations can be done with numpy arrays, and the distance calculations can be done for all points for the whole matrix in one step with the Scipy [`scipy.spatial.distance.cdist()`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.cdist.html) function, which directly supports calculating Manhattan distances.
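A tiny check of `cdist()` with the `cityblock` metric (again toy coordinates, just to show the shape of the output):
```
import numpy as np
from scipy.spatial import distance

points = np.array([[1, 1], [4, 3]])        # two input coordinates
grid = np.array([[1, 1], [2, 1], [2, 2]])  # three grid positions
print(distance.cdist(points, grid, metric='cityblock'))
# [[0. 1. 2.]
#  [5. 4. 3.]]  -> one row of Manhattan distances per input coordinate
```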
As inputs we need
- an array for all the `(x, y)` positions for the coordinates
<table>
<thead>
<tr><th>#</th><th>x</th><th>y</th></tr>
</thead>
<tbody>
<tr><th>0</th><td>1</td><td>1</td></tr>
<tr><th>1</th><td>1</td><td>6</td></tr>
<tr><th>⋮</th><td>⋮</td><td>⋮</td></tr>
<tr><th>5</th><td>8</td><td>9</td></tr>
</tbody>
</table>
- an array of all possible `(x, y)` coordinates for the grid
bounded by the min and max x, y coordinates of the input coordinates
<table>
<thead>
<tr><th>#</th><th>x</th><th>y</th></tr>
</thead>
<tbody>
<tr><th>0</th><td>1</td><td>1</td></tr>
<tr><th>1</th><td>2</td><td>1</td></tr>
<tr><th>2</th><td>3</td><td>1</td></tr>
<tr><th>⋮</th><td>⋮</td><td>⋮</td></tr>
<tr><th>55</th><td>7</td><td>8</td></tr>
</tbody>
</table>
Given these, `cdist()` will give us a big matrix with all distances:
<table>
<thead>
<tr><th>#</th><th>distances</th></tr>
</thead>
<tbody>
<tr><th>0</th><td>
<table>
<thead>
<tr><th></th><th>0</th><th>1</th><th>2</th><th>3</th><th>4</th><th>5</th><th>6</th></tr></thead>
<tbody>
<tr><th>0</th><td>0.0</td><td>1.0</td><td>2.0</td><td>3.0</td><td>4.0</td><td>5.0</td><td>6.0</td></tr>
<tr><th>1</th><td>1.0</td><td>2.0</td><td>3.0</td><td>4.0</td><td>5.0</td><td>6.0</td><td>7.0</td></tr>
<tr><th>⋮</th><td>⋮</td><td>⋮</td><td>⋮</td><td>⋮</td><td>⋮</td><td>⋮</td><td>⋮</td></tr>
<tr><th>7</th><td>7.0</td><td>8.0</td><td>9.0</td><td>10.0</td><td>11.0</td><td>12.0</td><td>13.0</td></tr>
</tbody>
</table>
</td></tr>
<tr><th>1</th><td>
<table>
<thead>
<tr><th></th><th>0</th><th>1</th><th>2</th><th>3</th><th>4</th><th>5</th><th>6</th></tr>
</thead>
<tbody>
<tr><th>0</th><td>5.0</td><td>6.0</td><td>7.0</td><td>8.0</td><td>9.0</td><td>10.0</td><td>11.0</td></tr>
<tr><th>1</th><td>4.0</td><td>5.0</td><td>6.0</td><td>7.0</td><td>8.0</td><td>9.0</td><td>10.0</td></tr>
<tr><th>⋮</th><td>⋮</td><td>⋮</td><td>⋮</td><td>⋮</td><td>⋮</td><td>⋮</td><td>⋮</td></tr>
<tr><th>7</th><td>2.0</td><td>3.0</td><td>4.0</td><td>5.0</td><td>6.0</td><td>7.0</td><td>8.0</td></tr>
</tbody>
</table>
</td></tr>
<tr><th>⋮</th><td>⋮</td></tr>
<tr><th>5</th><td>
<table>
<thead>
<tr><th></th><th>0</th><th>1</th><th>2</th><th>3</th><th>4</th><th>5</th><th>6</th></tr>
</thead>
<tbody>
<tr><th>0</th><td>15.0</td><td>14.0</td><td>13.0</td><td>12.0</td><td>11.0</td><td>10.0</td><td>9.0</td></tr>
<tr><th>1</th><td>14.0</td><td>13.0</td><td>12.0</td><td>11.0</td><td>10.0</td><td>9.0</td><td>8.0</td></tr>
<tr><th>⋮</th><td>⋮</td><td>⋮</td><td>⋮</td><td>⋮</td><td>⋮</td><td>⋮</td><td>⋮</td></tr>
<tr><th>7</th><td>8.0</td><td>7.0</td><td>6.0</td><td>5.0</td><td>4.0</td><td>3.0</td><td>2.0</td></tr>
</tbody>
</table>
</td></tr>
</tbody>
</table>
All that then remains to be done is find the *ids* of the input coordinates (integer index) that have the lowest distance at each point, remove the points at which the lowest distance and second lowest distance are equal (contested distances), remove the ids that claim area at the edges (those have infinite area), then count the ids and return the highest of those counts.
```
import numpy as np
from scipy.spatial import distance
def _manhattan_distances_matrix(coords):
"""Produce a len(coords) matrix of manhattan distances at all possible x, y"""
x = np.arange(coords[..., 0].min(), coords[..., 0].max() + 1)
y = np.arange(coords[..., 1].min(), coords[..., 1].max() + 1)
# arrays with len(x) x len(y) x and y coordinates
xx, yy = np.meshgrid(x, y)
# array of all possible [x, y]
all_xy = np.stack((xx, yy), axis=-1).reshape(-1, 2)
# calculate distances for all points at all coordinates; essentially a
# len(coordinates) set of matrices of distances
return distance.cdist(coords, all_xy, metric='cityblock').reshape(-1, *xx.shape)
def _claimed_area(coords):
"""matrix of claimed areas by id; -1 is used to mark equidistanc areas.
"""
distances = _manhattan_distances_matrix(coords)
# What coordinate ids win for a given x, y position?
coord_ids = distances.argmin(axis=0)
# wherever the top and second best distance are the same, clear the
# claim for a candidate id
candidate, next_ = np.partition(distances, 2, axis=0)[:2]
coord_ids[candidate == next_] = -1
return coord_ids
def find_max_area(coords):
"""How large is the largest non-infinite area covered?"""
coord_ids = _claimed_area(coords)
# Any candidate id that's at the edge has infinite area, clear those
# from consideration
is_infinite = np.union1d(
# top and bottom
np.unique(coord_ids[[0, -1], :]),
# left and right
np.unique(coord_ids[:, [0, -1]]),
)
coord_ids[np.isin(coord_ids, is_infinite)] = -1
# now we have a matrix of all positions on the infinite grid (bounded
# by the min and max coordinates) with non-infinite areas marked by
# the id of the coordinates that claim that position. -1 marks spots
# not claimable or part of an infinite area. All we need to do now is
# count the ids != -1, and return the maximum
_, counts = np.unique(coord_ids[coord_ids != -1], return_counts=True)
return counts.max()
testcoords = np.genfromtxt('''\
1, 1
1, 6
8, 3
3, 4
5, 5
8, 9'''.splitlines(), delimiter=',', dtype=int)
assert find_max_area(testcoords) == 17
%matplotlib inline
from PIL import Image
import matplotlib.pyplot as plt
def visualise(coords, cmap='rainbow', unclaimed='black', centers='white', ratio=1):
coord_ids = _claimed_area(coords)
vmin, vmax = coord_ids[coord_ids != -1].min(), coord_ids.max()
# mark the coordinate centers with a separate colour; coordinates
# must first be normalised as we don't necessarily start at 0, 0
# anymore.
normalised_coords = coords - coords.min(axis=0)
coord_ids[tuple(normalised_coords.T)[::-1]] = vmax + 1
# Generate a PIL image, using a matplotlib palette
# resample a matplotlib colour map to cover our coords count (vmax + 1)
p = plt.get_cmap(cmap)._resample(vmax + 1)
# -1 is given one colour
p.set_under(unclaimed)
# vmax + 1 another
p.set_over(centers)
# map points through the resampled palette
img = Image.fromarray(p(coord_ids, bytes=True))
if ratio != 1:
img = img.resize((int(img.size[0] * ratio), int(img.size[1] * ratio)))
return img
visualise(testcoords, ratio=35)
import aocd
data = aocd.get_data(day=6, year=2018)
coords = np.genfromtxt(data.splitlines(), delimiter=',', dtype=np.uint)
print('Part 1:', find_max_area(coords))
# All the coordinates mapped out:
visualise(coords)
```
## Part 2
This part is actually easier: we can sum all the distances that `cdist()` gives us for all possible positions and count how many of these fall below the threshold. Numpy was made for this kind of work.
```
def area_within_threshold(coords, threshold):
distances = _manhattan_distances_matrix(coords)
return (distances.sum(axis=0) < threshold).sum()
testthreshold = 32
assert area_within_threshold(testcoords, testthreshold) == 16
def plot_area(coords, threshold):
cmap = plt.get_cmap('cool')
cmap.set_over('black')
distance = _manhattan_distances_matrix(coords).sum(axis=0)
plt.axis('off')
plt.imshow(distance, vmax=threshold - 1, cmap=cmap)
plot_area(testcoords, testthreshold)
threshold = 10000
print('Part 2:', area_within_threshold(coords, threshold))
plot_area(coords, threshold)
```
# A Simple Look at DIIS in SCF
> Created: 2019-10-23
This document gives a brief account of SCF DIIS, taking a GGA-type calculation as the representative case.
DIIS is an algorithm used (almost) exclusively to accelerate self-consistent field convergence. The algorithmic and mathematical details of DIIS are not developed here; C. David Sherrill's notes [^Sherrill-note] and the Psi4NumPy Jupyter notebook [^psi4numpy-note] are recommended as further reading.
This note uses PySCF's DIIS implementation to extrapolate the Fock matrix. We describe how, after the DIIS step at iteration $t$, the Fock matrix for iteration $t+1$ is updated. We will **not** write a DIIS program from scratch. On the one hand, this is a matter of program complexity: a general DIIS implementation should allow vectors from intermediate iterations to be added and removed conveniently, handle the linear dependencies that arise when solving the matrix equation, and still be reasonably efficient. On the other hand, once the DIIS update step is understood, the DIIS program is in principle understood as well; the remaining details are only a matter of time and patience.
```
import numpy as np
import scipy
from pyscf import gto, dft, lib
from functools import partial
np.einsum = partial(np.einsum, optimize=["greedy", 1024 ** 3 * 2 / 8])
np.set_printoptions(5, linewidth=150, suppress=True)
```
## Using PySCF's DIIS
### Molecular system and DIIS setup
First, our molecular system is an asymmetric hydrogen peroxide molecule.
```
mol = gto.Mole()
mol.atom = """
O 0.0 0.0 0.0
O 0.0 0.0 1.5
H 1.0 0.0 0.0
H 0.0 0.7 1.0
"""
mol.basis = "6-31G"
mol.verbose = 0
mol.build()
nao = nmo = mol.nao
nocc = mol.nelec[0]
nvir = nmo - nocc
so, sv, sa = slice(0, nocc), slice(nocc, nmo), slice(0, nmo)
```
To keep the code simple, we use the PySCF DFT self-consistent-field object `scf_eng` to generate Fock matrices and density matrices.
```
scf_eng = dft.RKS(mol)
scf_eng.xc = "B3LYPg"
S = mol.intor("int1e_ovlp")
mo_occ = np.zeros(nmo)
mo_occ[:nocc] = 2
```
We first show briefly, with the program below, how DIIS is used in PySCF. Specific member functions of the DIIS class are introduced later; the DIIS object used for this demonstration is also created by the program below.
The SCF program below is taken from the pyxdh documentation [^pyxdh-note], with some modifications and simplifications. Its overall structure follows the discussion in Chapter 3 of Szabo [^Szabo-Ostlund.Dover.1996], and it is also quite similar to many demonstration programs in Psi4NumPy and PySCF.
There are two places where it differs from Szabo's Chapter 3. One line is
```python
D = coef * D + (1 - coef) * D_old
```
This line is only a modification for the naive SCF. Szabo's Chapter 3 can be called naive SCF: the Fock matrix is simply diagonalized to obtain molecular orbitals, and the resulting density is fed back into the Fock matrix. This line instead mixes the density of the previous iteration $D_{\mu \nu}^{t-1}$ with the density of the current iteration $D_{\mu \nu}^{t}$, and the mixed density is fed into the Fock matrix. It exists solely to prevent the naive SCF from oscillating instead of converging, and is not used for the DIIS-accelerated schemes.
The other line is
```python
F = func(diis=diis, F=F, C=C, mo_occ=mo_occ)
```
This line selects the DIIS update scheme. In this document, the DIIS update schemes are
- `func_no_special`: naive SCF, no DIIS
- `diis_err_deviation`: update the DIIS state with the Fock matrix difference between iterations, $\Delta F_{\mu \nu}^{t} = F_{\mu \nu}^{t} - F_{\mu \nu}^{t - 1}$
- `diis_err_gradient`: update the DIIS state with the occupied-virtual Fock matrix $F_{ai}^{t}$
This somewhat unusual and not very intuitive way of selecting the DIIS update scheme is used purely to save code space in the document and avoid redundancy.
```
def scf_process(func, coef=1.0, maxcycle=128):
diis = lib.diis.DIIS()
C = e = NotImplemented # Orbital (canonical) coefficient
D = np.zeros((nao, nao)) # Density in this iteration
D_old = np.zeros((nao, nao)) + 1e-4 # Density in last iteration
count = 0 # Iteration count (1, 2, ...)
while (not np.allclose(D, D_old)): # atol=1e-8, rtol=1e-5
if count > maxcycle:
raise ValueError("SCF not converged!")
count += 1
D_old = D
F = scf_eng.get_fock(dm=D) # Generate initial Fock matrix from Density
if count > 1: # avoid the case: C = NotImplemented
F = func(diis=diis, F=F, C=C, mo_occ=mo_occ) # Different DIIS approaches
# func_no_special : nothing happens
# diis_err_deviation : F = diis.update(F)
# diis_err_gradient : F = diis.update(F, scf_eng.get_grad(C, mo_occ))
e, C = scipy.linalg.eigh(F, S) # Solve FC = SCe
D = scf_eng.make_rdm1(mo_coeff=C, mo_occ=mo_occ) # D = 2 * C(occ).T @ C(occ)
D = coef * D + (1 - coef) * D_old # For convergence of original SCF
# func_no_special: D = 0.3 * D + 0.7 * D_old
# other cases : nothing happens
E_tot = scf_eng.energy_tot(dm=D)
print("SCF Converged in ", count, " loops")
print("Total energy (B3LYP) ", E_tot, " a.u.")
```
### Acceleration from the DIIS schemes
Now we can look at the effect of each DIIS update scheme.
**Naive SCF**
```
def func_no_special(*args, **kwargs):
return kwargs["F"]
scf_process(func_no_special, coef=0.3)
```
**DIIS: Fock matrix difference**
```
def diis_err_deviation(*args, **kwargs):
diis, F = kwargs["diis"], kwargs["F"]
return diis.update(F)
scf_process(diis_err_deviation)
```
**DIIS: occupied-virtual Fock matrix**
```
def diis_err_gradient(*args, **kwargs):
diis, F, C, mo_occ = kwargs["diis"], kwargs["F"], kwargs["C"], kwargs["mo_occ"]
return diis.update(F, scf_eng.get_grad(C, mo_occ))
scf_process(diis_err_gradient)
```
Although in principle the occupied-virtual Fock matrix approach should be better, for the present system the Fock matrix difference approach converges faster.
We can see that with PySCF's DIIS class (the `DIIS_helper` class in Psi4NumPy's [helper_HF.py](https://github.com/psi4/psi4numpy/blob/master/Self-Consistent-Field/helper_HF.py) is similar), DIIS in practice only requires one extra line relative to the naive SCF, which very conveniently accelerates convergence:
```python
F = diis.update(F) # diis_err_deviation
```
or
```python
F = diis.update(F, scf_eng.get_grad(C, mo_occ)) # diis_err_gradient
```
Simply put, at each iteration the Fock matrix is updated once more using the information from the Fock matrices stored in previous iterations.
## DIIS details
In this section we analyze the DIIS state at iteration 6 in some detail, using the occupied-virtual Fock matrix update scheme, and from it derive the Fock matrix after the 6th DIIS update.
The DIIS object `diis` at the 6th iteration, together with the pre-update Fock matrix `F_old` $F_{\mu \nu}^{t=6}$ and the post-update Fock matrix `F` $\mathscr{F}_{\mu \nu}$, are obtained by limiting the number of iterations:
```
diis = lib.diis.DIIS()
C = e = NotImplemented # Orbital (canonical) coefficient
D = np.zeros((nao, nao)) # Density in this iteration
D_old = np.zeros((nao, nao)) + 1e-4 # Density in last iteration
count = 0 # Iteration count (1, 2, ...)
F_old = NotImplemented # Variable in last iteration
while (not np.allclose(D, D_old)): # atol=1e-8, rtol=1e-5
count += 1
D_old = D
F = scf_eng.get_fock(dm=D) # Generate initial Fock matrix from Density
if count == 6:
F_old = F.copy()
F = diis.update(F, scf_eng.get_grad(C, mo_occ))
break
elif count > 1: # avoid the case: C = NotImplemented
F = diis.update(F, scf_eng.get_grad(C, mo_occ))
e, C = scipy.linalg.eigh(F, S) # Solve FC = SCe
D = scf_eng.make_rdm1(mo_coeff=C, mo_occ=mo_occ) # D = 2 * C(occ).T @ C(occ)
```
As an implementation detail, the updated Fock matrix $\mathscr{F}_{\mu \nu}$ can also be obtained through `diis.extrapolate`:
```
np.allclose(F, diis.extrapolate().reshape(nmo, nmo))
```
Besides returning the updated Fock matrix, `diis.update` also stores the current Fock matrix and the error information into `diis`, in preparation for the DIIS update of the next iteration.
### What DIIS stores
In general, DIIS stores two kinds of content: the vectors to be extrapolated $p_I^t$ and the error vectors $e_J^t$.
The purpose of using DIIS in SCF is to extrapolate the Fock matrix using information from several previous iterations, obtaining a better Fock matrix at the current iteration $t$. Therefore, the vector to be extrapolated $p_I^t$ is the Fock matrix $F_{\mu \nu}^t$ computed at iteration $t$.
A slightly odd point here is that $p_I^t$ is a vector with a single index $I$, whereas the Fock matrix in the atomic-orbital basis $F_{\mu \nu}^t$ is a matrix with two indices. In practice, $p_I^t$ is simply $F_{\mu \nu}^t$ flattened into a one-dimensional vector. The vectors $p_I^t$ can be obtained through `diis.get_vec`; we collect them into the variable `vecs`, whose indices are stored as $(t, I)$:
```
vecs = np.array([diis.get_vec(i) for i in range(diis.get_num_vec())])
```
We denote the error information of each iteration by $e_J^t$. For the occupied-virtual Fock matrix update scheme, $e_J^t$ is the virtual-occupied Fock matrix in the molecular-orbital basis, $F_{ai}^t$. We know that once the SCF has converged, $F_{ai} = 0$; during the SCF iterations, however, this quantity is generally not zero, and it would not be an exaggeration to say that the whole point of the SCF procedure is to achieve $F_{ai} = 0$. Therefore, the size of $F_{ai}^t$ can be regarded as a criterion for how well the SCF has converged, and we define $e_J^t$ as the flattened $F_{ai}^t$.
The error vectors $e_J^t$ can be obtained through `diis.get_err_vec`; we collect them into the variable `err_vecs`, whose indices are stored as $(t, J)$:
```
err_vecs = np.array([diis.get_err_vec(i) for i in range(diis.get_num_vec())])
```
Note that the dimensions indexed by $I$ in $p_I^t$ and by $J$ in $e_J^t$ need not be the same.
```
print(vecs.shape)
print(err_vecs.shape)
```
:::{note}
From the discussion and code above, we have only run 6 iterations, during which the error vectors and the vectors to be extrapolated were stored only 5 times (counting $t$ from 1, the extrapolation information covers $t \in [2, 6]$). We define the set of stored iteration indices $t$ as $\mathscr{T} = \{2, 3, 4, 5, 6\}$; when retrieving these vectors programmatically from the PySCF DIIS object `diis`, however, the indices `0, 1, 2, 3, 4` should be used.
To perform the extrapolation, DIIS stores many extrapolation and error vectors; for large molecules this takes a lot of memory. For that reason (and also for convergence considerations), DIIS usually keeps only a fairly small number of extrapolation and error vectors. PySCF's DIIS by default stores information from only 6 iterations. This means that if we ran 15 iterations, at most 6 extrapolation matrices would be kept, and the remaining extrapolation or error vectors would be discarded.
To simplify the discussion, this document does not cover how already-stored extrapolation and error vectors are discarded.
:::
### DIIS extrapolation: theory
With all the extrapolation vectors $p_I^t$ and error vectors $e_J^t$ in hand, we can form the extrapolated result $\mathscr{p}_I = \mathscr{F}_{\mu \nu}$. The seemingly problematic conversion between the single index and the double index in this formula is handled in practice by a simple reshape of the matrix.
The extrapolation means
$$
\mathscr{p}_I = \sum_{t \in \mathscr{T}} w_t p_I^t
$$
Here $\mathscr{T}$ denotes the set of iteration indices whose extrapolation vectors are currently stored by DIIS; in this example it happens to contain all iteration counts starting from 2. If the current iteration count were very large but DIIS were only allowed to store at most 6 extrapolation vectors, the range $\mathscr{T}$ of the summation index $t$ would drop the older iteration counts, keeping the number of elements $|\mathscr{T}|$ at no more than 6.
We artificially impose a normalization condition on the weights $w_t$:
$$
\sum_{t \in \mathscr{T}} w_t = 1
$$
If we assume that the extrapolated vectors $p_I^t$ and their error vectors $e_J^t$ are linearly related, then the error $\mathscr{e}_J$ of the extrapolated vector $\mathscr{p}_I$ should satisfy
$$
\mathscr{e}_J = \sum_{t \in \mathscr{T}} w_t e_J^t
$$
We want to minimize the error $\Vert \mathscr{e}_J \Vert_2^2$ while satisfying the normalization condition on $w_t$; using the method of Lagrange multipliers, we construct the following loss function
$$
\begin{align}
\mathscr{L} (\{w_t\}_{t \in \mathscr{T}}, \lambda) &= \Vert \mathscr{e}_J \Vert_2^2 + 2 \lambda \left( \sum_{t \in \mathscr{T}} w_t - 1 \right) \\
&= \sum_J \sum_{t \in \mathscr{T}} w_t e_J^t \cdot \sum_{s \in \mathscr{T}} w_s e_J^s + 2 \lambda \left( \sum_{t \in \mathscr{T}} w_t - 1 \right)
\end{align}
$$
We now define
$$
B_{ts} = \sum_{J} e_J^t e_J^s
$$
so the loss function can be written as
$$
\mathscr{L} (\{w_t\}_{t \in \mathscr{T}}, \lambda) = \sum_{t, s \in \mathscr{T}} w_t B_{ts} w_s + 2 \lambda \left( \sum_{t \in \mathscr{T}} w_t - 1 \right)
$$
Taking the partial derivative of this loss function with respect to $w_t$ gives
$$
\frac{\partial \mathscr{L}}{\partial w_t} = 2 \sum_{s \in \mathscr{T}} B_{ts} w_s + 2 \lambda
$$
We clearly want the partial derivative of the loss function with respect to $w_t$ to vanish; combining this with the normalization condition $\sum_{t \in \mathscr{T}} w_t = 1$, we obtain the following matrix equation:
$$
\begin{align}
\begin{pmatrix}
0 & 1 & 1 & \cdots \\
1 & B_{t_0 t_0} & B_{t_0 t_1} & \cdots \\
1 & B_{t_1 t_0} & B_{t_1 t_1} & \\
\vdots & \vdots & & \ddots \\
\end{pmatrix}
\begin{pmatrix}
\lambda \\ w_{t_0} \\ w_{t_1} \\ \vdots
\end{pmatrix}
=
\begin{pmatrix}
1 \\ 0 \\ 0 \\ \vdots
\end{pmatrix}
\end{align}
$$
where $t_0, t_1, \cdots \in \mathscr{T}$ are distinct indices. Solving this equation yields the weights $w_t$, which in turn give $\mathscr{F}_{\mu \nu} = \mathscr{p}_I = \sum_{t \in \mathscr{T}} w_t p_I^t$, achieving our goal.
### DIIS extrapolation: implementation
First, we point out that a private attribute of `diis`, `diis._H`, stores exactly the matrix on the left-hand side of the matrix equation:
```
A = diis._H[:diis.get_num_vec()+1, :diis.get_num_vec()+1]
A
```
We can easily construct the submatrix below the first row and to the right of the first column, $B_{ts} = \sum_{J} e_J^t e_J^s$:
```
np.einsum("tI, sI -> ts", err_vecs, err_vecs)
```
We can solve the matrix equation directly:
```
b = np.zeros(diis.get_num_vec() + 1)
b[0] = 1
w = np.linalg.solve(A, b)
w = w[1:]
w
```
We can then form the extrapolated Fock matrix `F_ex` through $\mathscr{F}_{\mu \nu} = \mathscr{p}_I = \sum_{t \in \mathscr{T}} w_t p_I^t$ and compare it with the extrapolated Fock matrix `F` given by `diis`:
```
F_ex = np.einsum("t, tI -> I", w, vecs).reshape(nmo, nmo)
np.allclose(F_ex, F)
```
:::{tip}
When solving for the DIIS weight vector $w_t$, linear dependencies can occur; that is, $B_{ts}$ may be numerically rank-deficient. In that case, solving the matrix equation may fail.
One remedy is simply to switch DIIS off and let the naive SCF finish the job. Since DIIS has already converged the electron density to a fairly good state, the naive SCF can be expected to converge normally from there.
Another remedy is to diagonalize the matrix $\mathbf{A}$ of the matrix equation $\mathbf{A} \boldsymbol{x} = \boldsymbol{b}$, discard the eigenvalues of very small absolute value together with their eigenvectors, and solve the linear system in the remaining subspace. This is the approach used in PySCF's DIIS implementation.
:::
### A remark on the Fock matrix difference scheme
The computation for the Fock matrix difference scheme is almost identical to that of the occupied-virtual Fock matrix scheme. The only difference is the choice of error vector (see the short sketch after this list):
- occupied-virtual Fock matrix scheme: $e_J^t = F_{ai}^t$
- Fock matrix difference scheme: $e_J^t = \Delta F_{\mu \nu}^{t} = F_{\mu \nu}^{t} - F_{\mu \nu}^{t - 1}$
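The sketch below is only illustrative and does not reproduce PySCF internals; it shows how the two error-vector choices could be assembled for one iteration, assuming `F_t` and `F_prev` are AO-basis Fock matrices from consecutive iterations, `C` are the MO coefficients, and `nocc` is the number of occupied orbitals.
```python
import numpy as np

def diis_error_vectors(F_t, F_prev, C, nocc):
    """Illustrative e_J^t for the two schemes (assumed shapes: F is nao x nao, C is nao x nmo)."""
    F_mo = C.T @ F_t @ C                   # Fock matrix transformed to the MO basis
    err_grad = F_mo[nocc:, :nocc].ravel()  # occupied-virtual block F_ai, flattened
    err_diff = (F_t - F_prev).ravel()      # Fock matrix difference, flattened
    return err_grad, err_diff
```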
[^Sherrill-note]: <http://vergil.chemistry.gatech.edu/notes/diis/diis.pdf>
[^psi4numpy-note]: <https://github.com/psi4/psi4numpy/blob/master/Tutorials/03_Hartree-Fock/3b_rhf-diis.ipynb>
[^pyxdh-note]: <https://py-xdh.readthedocs.io/zh_CN/latest/qcbasic/proj_xyg3.html>
[^Szabo-Ostlund.Dover.1996]: Szabo, A.; Ostlund, N. S. *Modern Quantum Chemistry: Introduction to Advanced Electronic Structure Theory (Dover Books on Chemistry)*; Dover Publications, 1996.

<a href="https://hub.callysto.ca/jupyter/hub/user-redirect/git-pull?repo=https%3A%2F%2Fgithub.com%2Fcallysto%2Fcurriculum-notebooks&branch=master&subPath=Science/ReflectionsOfLightByPlaneAndSphericalMirrors/reflections-of-light-by-plane-and-spherical-mirrors.ipynb&depth=1" target="_parent"><img src="https://raw.githubusercontent.com/callysto/curriculum-notebooks/master/open-in-callysto-button.svg?sanitize=true" width="123" height="24" alt="Open in Callysto"/></a>
```
from IPython.display import display, Math, Latex, HTML
HTML('''<script>
function code_toggle() {
if (code_shown){
$('div.input').hide('500');
$('#toggleButton').val('Show Code')
} else {
$('div.input').show('500');
$('#toggleButton').val('Hide Code')
}
code_shown = !code_shown
}
$( document ).ready(function(){
code_shown=false;
$('div.input').hide()
});
</script>
<form action="javascript:code_toggle()"><input type="submit" id="toggleButton" value="Show Code"></form>''')
from helper import *
%matplotlib inline
```
# Reflection of Light
## by Plane and Spherical Mirrors
## Introduction
When light shines onto the surface of an object, some of the light is reflected, while the rest is either absorbed or transmitted. We can imagine the light consisting of many narrow beams that travel in straight-line paths called **rays**. The light rays that strike the surface are called the **incident rays**. The light rays that reflect off the surface are called the **reflected rays**. This model of light is called the **ray model**, and it can be used to describe many aspects of light, including the reflection and formation of images by plane and spherical mirrors.
## Law of Reflection
<img src="Images/law_of_reflection.svg" width="50%"/>
To measure the angles of the incident and reflected rays, we first draw the **normal**, which is the line perpendicular to the surface. The **angle of incidence, $\theta_{i}$,** is the angle between the incident ray and the normal. Likewise, the **angle of reflection, $\theta_{r}$,** is the angle between the reflected ray and the normal. The incident ray, the reflected ray, and the normal to the reflecting surface all lie within the same plane. This is shown in the figure above. Notice that the angle of reflection is equal to the angle of incidence. This is known as the **law of reflection**, and it can be expressed by the following equation:
$$\theta_{r} = \theta_{i}$$
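As a quick aside, the same law can be written in vector form: the reflected direction is $\vec{r} = \vec{d} - 2(\vec{d} \cdot \hat{n})\hat{n}$. The short sketch below (not part of the hidden helper code) checks this for a 45° incident ray on a horizontal mirror.
```
import numpy as np

def reflect(d, n):
    """Reflect an incident direction d off a surface with normal n (law of reflection)."""
    n = n / np.linalg.norm(n)       # ensure the normal is a unit vector
    return d - 2 * np.dot(d, n) * n

incident = np.array([1.0, -1.0])    # travelling down and to the right, 45 degrees to the normal
normal = np.array([0.0, 1.0])       # normal of a horizontal mirror
print(reflect(incident, normal))    # [1. 1.] -> reflected at 45 degrees, as expected
```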
Use the slider below to change the angle of incidence. This changes the angle between the incident ray and the normal. Notice how the angle of reflection also changes when the slider is moved.
```
interactive_plot = widgets.interactive(f, Angle=widgets.IntSlider(value=45,min=0,max=90,step=15,continuous_update=False))
output = interactive_plot.children[-1]
output.layout.height = '280px'
interactive_plot
```
**Question:** *When the angle of incidence increases, what happens to the angle of reflection?*
```
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = "The angle of reflection increases."
option_2 = "The angle of reflection decreases."
option_3 = "The angle of reflection remains constant."
option_4 = "The angle of reflection equals zero."
multiple_choice(option_1, option_2, option_3, option_4)
```
## Specular and Diffuse Reflections
For a very smooth surface, such as a mirror, almost all of the light is reflected to produce a **specular reflection**. In a specular reflection, the reflected light rays are parallel to one another and point in the same direction. This allows specular reflections to form images. If the surface is not very smooth, then the light may bounce off of the surface in various directions. This produces a **diffuse reflection**. Diffuse reflections cannot form images.
<img src="Images/specular_diffuse_reflections.svg" width="70%"/>
**Note:** The law of reflection still applies to diffuse reflections, even though the reflected rays are pointing in various directions. We can imagine that each small section of the rough surface is like a flat plane orientated differently than the sections around it. Since each of these sections is orientated differently, the angle of incidence is different at each section. This causes the reflected rays to scatter.
**Question:** *Which of the following is an example of a specular reflection?*
```
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = "The reflection off a clean window."
option_2 = "The reflection off a wooden deck."
option_3 = "The reflection off a carpet floor."
option_4 = "The reflection off a table cloth."
multiple_choice(option_1, option_2, option_3, option_4)
```
**Question:** *Which of the following is an example of a diffuse reflection?*
```
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = "The reflection off a concrete sidewalk."
option_2 = "The reflection off a mirror."
option_3 = "The reflection off the surface of a still lake."
option_4 = "The reflection off a polished sheet of metal."
multiple_choice(option_1, option_2, option_3, option_4)
```
## Image Formation by Plane Mirrors
A **plane mirror** is simply a mirror made from a flat (or planar) surface. These types of mirrors are commonly found in bedroom or bathroom fixtures. When an object is reflected in a plane mirror, the image of the object appears to be located behind the mirror. This is because our brains interpret the reflected light rays entering our eyes as having travelled in straight-line paths. The light rays entering our eyes simply do not contain enough information for our brains to differentiate between a straight-line path and a path that changed direction due to a reflection.
<img src="Images/plane_mirror_reflection.svg" width="60%"/>
Notice in the figure above that the light rays do not actually converge at the location where the image appears to be formed (behind the mirror). Since the light rays do not actually go behind the mirror, they are represented as projections using dashed lines. If a film were placed at the image location behind the mirror, it would not be able to capture the image. As a result, this type of image is called a **virtual image**.
For objects reflected in a plane mirror, the distance of the image from the mirror, $d_{i}$, is always equal to the distance of the object from the mirror, $d_{o}$. If the object is moved toward the mirror, the image of the object will also move toward the mirror such that the object and the image are always equidistant from the surface of the mirror.
Use the slider below to change the object distance. Notice how the image distance also changes when the slider is moved.
```
interactive_plot = widgets.interactive(f,Distance=widgets.IntSlider(value=30,min=10,max=50,step=10,continuous_update=False))
output = interactive_plot.children[-1]
output.layout.height = '280px'
interactive_plot
#Print question
distance = round(random.uniform(5,10),1)
print("If you stand " + str(distance) + " m in front of a plane mirror, how many metres behind the mirror is your virtual image?")
#Answer calculation
answer = distance
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = str(round((answer),1)) + " m"
option_2 = str(round((answer * 2),1)) + " m"
option_3 = str(round((answer / 2),1)) + " m"
option_4 = str(round((answer / 4),1)) + " m"
multiple_choice(option_1, option_2, option_3, option_4)
#Print question
distance = round(random.uniform(5,10),1)
print("If you stand " + str(distance) + " m in front of a plane mirror, how many metres will separate you from your virtual image?")
#Answer calculation
answer = (distance * 2)
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = str(round((answer),1)) + " m"
option_2 = str(round((answer * 2),1)) + " m"
option_3 = str(round((answer / 2),1)) + " m"
option_4 = str(round((answer / 4),1)) + " m"
multiple_choice(option_1, option_2, option_3, option_4)
```
## Spherical Mirrors
Two common types of curved mirror are formed from a section of a sphere. If the reflection takes place on the inside of the spherical section, then the mirror is called a **concave mirror**. The reflecting surface of a concave mirror curves inward and away from the viewer. If the reflection takes place on the outside of the spherical section, then the mirror is called a **convex mirror**. The reflecting surface of a convex mirror curves outward and toward the viewer.
<img src="Images/concave_convex_mirrors.svg" width="75%"/>
The **centre of curvature, $C$,** is the point located at the centre of the sphere used to create the mirror. The **vertex, $V$,** is the point located at the geometric centre of the mirror itself. The **focus, $F$,** is the point located midway between the centre of curvature and the vertex. The line passing through the centre of curvature and the vertex is called the **principal axis**. Notice that the focus also lies on the principal axis.
When an incident ray parallel to the principal axis strikes the mirror, the reflected ray always passes through the focus. When an incident ray passes through the focus and strikes the mirror, the reflected ray is always parallel to the principal axis. (In the above diagrams, reverse the arrow directions to see this case). These properties make the focus particularly useful when examining spherical mirrors.
**Note:** The distance from the centre of curvature to the vertex is equal to the **radius, $R$,** of the sphere used to create the mirror. Any straight line drawn from the centre to any point on the surface of a spherical mirror will have a length equal to the radius. The distance from the vertex to the focus is called the **focal length, $f$**. This distance is equal to half the distance of the radius.
$$f = \frac{R}{2}$$
```
#Print question
radius = round(random.uniform(10,30),1)
print("If the radius of a curved mirror is " + str(radius) + " cm, how many centimetres is the focal length?")
#Answer calculation
answer = radius/2
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = str(round((answer),1)) + " cm"
option_2 = str(round((answer * 2),1)) + " cm"
option_3 = str(round((answer / 2),1)) + " cm"
option_4 = str(round((answer * 4),1)) + " cm"
multiple_choice(option_1, option_2, option_3, option_4)
#Print question
focal_length = round(random.uniform(5,15),1)
print("If the focal length of a curved mirror is " + str(focal_length) + " cm, how many centimetres is the radius of curvature?")
#Answer calculation
answer = focal_length*2
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = str(round((answer),1)) + " cm"
option_2 = str(round((answer * 2),1)) + " cm"
option_3 = str(round((answer / 2),1)) + " cm"
option_4 = str(round((answer / 4),1)) + " cm"
multiple_choice(option_1, option_2, option_3, option_4)
```
## Image Formation by Spherical Mirrors
A simple way to determine the position and characteristics of an image formed by the rays reflected from a spherical mirror is to construct a **ray diagram**. A ray diagram is used to show the path taken by light rays as they reflect from an object or mirror. This was used to find the image created by a plane mirror in the previous section. When constructing a ray diagram, we need only concern ourselves with finding the location of a single point on the reflected image. To do this, any point on the object may be chosen, but for consistency, we will choose the topmost point for the diagrams shown below. Any rays may be chosen, but there are three particular rays that are easy to draw:
* **Ray 1:** This ray is drawn parallel to the principal axis from a point on the object to the surface of the mirror. Since the incident ray is parallel to the principal axis, the reflected ray must pass through the focus.
* **Ray 2:** This ray is drawn from a point on the object and through the focus. Since the incident ray passes through the focus, the reflected ray must be parallel to the principal axis.
* **Ray 3:** This ray is drawn from a point on the object and through the centre of curvature. This ray is therefore perpendicular to the mirror's surface (incident angle = 0). As such, the reflected ray must return along the same path and pass through the centre of curvature.
The point at which any two of these three rays converge can be used to find the location and characteristics of the reflected image.
### Concave Mirrors
The characteristics of an image formed in a concave mirror depend on the position of the object. There are essentially five cases. Each of these five cases is demonstrated below:
### Case 1: Object Located at a Distance Greater than $C$
In the first case, the distance of the object from the mirror is greater than the radius used to define the centre of curvature. In other words, the object is further away from the mirror than the centre of curvature. In this example, we can draw any two of the three rays mentioned above to find the image of the reflected object.
**Note:** You only need to draw two of the three rays to find the image of the reflected object.
```
output_case_1 = widgets.Output()
frame_case_1 = 1
#Toggle images
def show_svg_case_1():
global frame_case_1
if frame_case_1 == 0:
display(SVG("Images/case_1_0.svg"))
frame_case_1 = 1
elif frame_case_1 == 1:
display(SVG("Images/case_1_1.svg"))
frame_case_1 = 2
elif frame_case_1 == 2:
display(SVG("Images/case_1_2.svg"))
frame_case_1 = 3
elif frame_case_1 == 3:
display(SVG("Images/case_1_3.svg"))
frame_case_1 = 0
button_case_1 = widgets.Button(description="Toggle rays", button_style = 'success')
display(button_case_1)
def on_submit_button_case_1_clicked(b):
with output_case_1:
clear_output(wait=True)
show_svg_case_1()
with output_case_1:
display(SVG("Images/case_1_0.svg"))
button_case_1.on_click(on_submit_button_case_1_clicked)
display(output_case_1)
```
**Question:** *Which options from the dropdown menus best describe the image formed by the reflected rays shown above?*
```
#Create dropdown menus
dropdown1_1 = widgets.Dropdown(options={' ':0,'Beyond C': 1, 'At C': 2, 'Between C and F': 3, 'At F': 4, 'Between F and V': 5, 'Beyond V': 6, 'Not applicable (no image)': 7}, value=0, description='Position',)
dropdown1_2 = widgets.Dropdown(options={' ':0,'Larger than the object': 1, 'Same size as the object': 2, 'Smaller than the object': 3, 'Not applicable (no image)': 4}, value=0, description='Relative size',)
dropdown1_3 = widgets.Dropdown(options={' ':0,'Upright': 1, 'Inverted': 2, 'Not applicable (no image)': 3}, value=0, description='Orientation',)
dropdown1_4 = widgets.Dropdown(options={' ':0,'Real': 1, 'Virtual': 2, 'Not applicable (no image)': 3}, value=0, description='Type',)
#Display menus as 2x2 table
container1_1 = widgets.VBox(children=[dropdown1_1,dropdown1_2])
container1_2 = widgets.VBox(children=[dropdown1_3,dropdown1_4])
display(widgets.HBox(children=[container1_1, container1_2])), print(" ", end='\r')
#Evaluate input
def check_answer1(b):
answer1_1 = dropdown1_1.label
answer1_2 = dropdown1_2.label
answer1_3 = dropdown1_3.label
answer1_4 = dropdown1_4.label
if answer1_1 == "Between C and F" and answer1_2 == "Smaller than the object" and answer1_3 == "Inverted" and answer1_4 == "Real":
print("Correct! ", end='\r')
elif answer1_1 != ' ' and answer1_2 != ' ' and answer1_3 != ' ' and answer1_4 != ' ':
print("Try again.", end='\r')
else:
print(" ", end='\r')
dropdown1_1.observe(check_answer1, names='value')
dropdown1_2.observe(check_answer1, names='value')
dropdown1_3.observe(check_answer1, names='value')
dropdown1_4.observe(check_answer1, names='value')
```
### Case 2: Object Located at $C$
In the second case, the distance of the object from the mirror is equal to the radius used to define the centre of curvature.
In other words, the object is located at the centre of curvature. In this case, we can draw only two rays to find the image of the reflected object. We cannot draw a ray passing through the centre of curvature because the object is located at $C$.
```
output_case_2 = widgets.Output()
frame_case_2 = 1
#Toggle images
def show_svg_case_2():
global frame_case_2
if frame_case_2 == 0:
display(SVG("Images/case_2_0.svg"))
frame_case_2 = 1
elif frame_case_2 == 1:
display(SVG("Images/case_2_1.svg"))
frame_case_2 = 2
elif frame_case_2 == 2:
display(SVG("Images/case_2_2.svg"))
frame_case_2 = 0
button_case_2 = widgets.Button(description="Toggle rays", button_style = 'success')
display(button_case_2)
def on_submit_button_case_2_clicked(b):
with output_case_2:
clear_output(wait=True)
show_svg_case_2()
with output_case_2:
display(SVG("Images/case_2_0.svg"))
button_case_2.on_click(on_submit_button_case_2_clicked)
display(output_case_2)
```
**Question:** *Which options from the dropdown menus best describe the image formed by the reflected rays shown above?*
```
#Create dropdown menus
dropdown2_1 = widgets.Dropdown(options={' ':0,'Beyond C': 1, 'At C': 2, 'Between C and F': 3, 'At F': 4, 'Between F and V': 5, 'Beyond V': 6, 'Not applicable (no image)': 7}, value=0, description='Position',)
dropdown2_2 = widgets.Dropdown(options={' ':0,'Larger than the object': 1, 'Same size as the object': 2, 'Smaller than the object': 3, 'Not applicable (no image)': 4}, value=0, description='Relative size',)
dropdown2_3 = widgets.Dropdown(options={' ':0,'Upright': 1, 'Inverted': 2, 'Not applicable (no image)': 3}, value=0, description='Orientation',)
dropdown2_4 = widgets.Dropdown(options={' ':0,'Real': 1, 'Virtual': 2, 'Not applicable (no image)': 3}, value=0, description='Type',)
#Display menus as 2x2 table
container2_1 = widgets.VBox(children=[dropdown2_1,dropdown2_2])
container2_2 = widgets.VBox(children=[dropdown2_3,dropdown2_4])
display(widgets.HBox(children=[container2_1, container2_2])), print(" ", end='\r')
#Evaluate input
def check_answer2(b):
answer2_1 = dropdown2_1.label
answer2_2 = dropdown2_2.label
answer2_3 = dropdown2_3.label
answer2_4 = dropdown2_4.label
if answer2_1 == "At C" and answer2_2 == "Same size as the object" and answer2_3 == "Inverted" and answer2_4 == "Real":
print("Correct! ", end='\r')
elif answer2_1 != ' ' and answer2_2 != ' ' and answer2_3 != ' ' and answer2_4 != ' ':
print("Try again.", end='\r')
else:
print(" ", end='\r')
dropdown2_1.observe(check_answer2, names='value')
dropdown2_2.observe(check_answer2, names='value')
dropdown2_3.observe(check_answer2, names='value')
dropdown2_4.observe(check_answer2, names='value')
```
### Case 3: Object Located between $C$ and $F$
In the third case, the distance of the object from the mirror is less than the radius used to define the centre of curvature, but greater than the focal length. In other words, the object is located between $F$ and $C$. In this case, we can find the image of the reflected object using two rays as shown below. If the mirror is large enough, a third ray that passes through $C$ can also be drawn.
```
output_case_3 = widgets.Output()
frame_case_3 = 1
#Toggle images
def show_svg_case_3():
global frame_case_3
if frame_case_3 == 0:
display(SVG("Images/case_3_0.svg"))
frame_case_3 = 1
elif frame_case_3 == 1:
display(SVG("Images/case_3_1.svg"))
frame_case_3 = 2
elif frame_case_3 == 2:
display(SVG("Images/case_3_2.svg"))
frame_case_3 = 0
button_case_3 = widgets.Button(description="Toggle rays", button_style = 'success')
display(button_case_3)
def on_submit_button_case_3_clicked(b):
with output_case_3:
clear_output(wait=True)
show_svg_case_3()
with output_case_3:
display(SVG("Images/case_3_0.svg"))
button_case_3.on_click(on_submit_button_case_3_clicked)
display(output_case_3)
```
**Question:** *Which options from the dropdown menus best describe the image formed by the reflected rays shown above?*
```
#Create dropdown menus
dropdown3_1 = widgets.Dropdown(options={' ':0,'Beyond C': 1, 'At C': 2, 'Between C and F': 3, 'At F': 4, 'Between F and V': 5, 'Beyond V': 6, 'Not applicable (no image)': 7}, value=0, description='Position',)
dropdown3_2 = widgets.Dropdown(options={' ':0,'Larger than the object': 1, 'Same size as the object': 2, 'Smaller than the object': 3, 'Not applicable (no image)': 4}, value=0, description='Relative size',)
dropdown3_3 = widgets.Dropdown(options={' ':0,'Upright': 1, 'Inverted': 2, 'Not applicable (no image)': 3}, value=0, description='Orientation',)
dropdown3_4 = widgets.Dropdown(options={' ':0,'Real': 1, 'Virtual': 2, 'Not applicable (no image)': 3}, value=0, description='Type',)
#Display menus as 2x2 table
container3_1 = widgets.VBox(children=[dropdown3_1,dropdown3_2])
container3_2 = widgets.VBox(children=[dropdown3_3,dropdown3_4])
display(widgets.HBox(children=[container3_1, container3_2])), print(" ", end='\r')
#Evaluate input
def check_answer3(b):
answer3_1 = dropdown3_1.label
answer3_2 = dropdown3_2.label
answer3_3 = dropdown3_3.label
answer3_4 = dropdown3_4.label
if answer3_1 == "Beyond C" and answer3_2 == "Larger than the object" and answer3_3 == "Inverted" and answer3_4 == "Real":
print("Correct! ", end='\r')
elif answer3_1 != ' ' and answer3_2 != ' ' and answer3_3 != ' ' and answer3_4 != ' ':
print("Try again.", end='\r')
else:
print(" ", end='\r')
dropdown3_1.observe(check_answer3, names='value')
dropdown3_2.observe(check_answer3, names='value')
dropdown3_3.observe(check_answer3, names='value')
dropdown3_4.observe(check_answer3, names='value')
```
### Case 4: Object Located at $F$
In the fourth case, the distance of the object from the mirror is equal to the focal length. In other words, the object is located at the focus. In this case, we can draw only two rays to find the image of the reflected object. We cannot draw a ray passing through the focus because the object is located at $F$. Notice that the reflected rays are parallel and therefore do not intersect. As a consequence, no image is formed.
```
output_case_4 = widgets.Output()
frame_case_4 = 1
#Toggle images
def show_svg_case_4():
global frame_case_4
if frame_case_4 == 0:
display(SVG("Images/case_4_0.svg"))
frame_case_4 = 1
elif frame_case_4 == 1:
display(SVG("Images/case_4_1.svg"))
frame_case_4 = 2
elif frame_case_4 == 2:
display(SVG("Images/case_4_2.svg"))
frame_case_4 = 0
button_case_4 = widgets.Button(description="Toggle rays", button_style = 'success')
display(button_case_4)
def on_submit_button_case_4_clicked(b):
with output_case_4:
clear_output(wait=True)
show_svg_case_4()
with output_case_4:
display(SVG("Images/case_4_0.svg"))
button_case_4.on_click(on_submit_button_case_4_clicked)
display(output_case_4)
```
**Question:** *Which options from the dropdown menus best describe the image formed by the reflected rays shown above?*
```
#import ipywidgets as widgets
#Create dropdown menus
dropdown4_1 = widgets.Dropdown(options={' ':0,'Beyond C': 1, 'At C': 2, 'Between C and F': 3, 'At F': 4, 'Between F and V': 5, 'Beyond V': 6, 'Not applicable (no image)': 7}, value=0, description='Position',)
dropdown4_2 = widgets.Dropdown(options={' ':0,'Larger than the object': 1, 'Same size as the object': 2, 'Smaller than the object': 3, 'Not applicable (no image)': 4}, value=0, description='Relative size',)
dropdown4_3 = widgets.Dropdown(options={' ':0,'Upright': 1, 'Inverted': 2, 'Not applicable (no image)': 3}, value=0, description='Orientation',)
dropdown4_4 = widgets.Dropdown(options={' ':0,'Real': 1, 'Virtual': 2, 'Not applicable (no image)': 3}, value=0, description='Type',)
#Display menus as 2x2 table
container4_1 = widgets.VBox(children=[dropdown4_1,dropdown4_2])
container4_2 = widgets.VBox(children=[dropdown4_3,dropdown4_4])
display(widgets.HBox(children=[container4_1, container4_2])), print(" ", end='\r')
#Evaluate input
def check_answer4(b):
answer4_1 = dropdown4_1.label
answer4_2 = dropdown4_2.label
answer4_3 = dropdown4_3.label
answer4_4 = dropdown4_4.label
if answer4_1 == "Not applicable (no image)" and answer4_2 == "Not applicable (no image)" and answer4_3 == "Not applicable (no image)" and answer4_4 == "Not applicable (no image)":
print("Correct! ", end='\r')
elif answer4_1 != ' ' and answer4_2 != ' ' and answer4_3 != ' ' and answer4_4 != ' ':
print("Try again.", end='\r')
else:
print(" ", end='\r')
dropdown4_1.observe(check_answer4, names='value')
dropdown4_2.observe(check_answer4, names='value')
dropdown4_3.observe(check_answer4, names='value')
dropdown4_4.observe(check_answer4, names='value')
```
### Case 5: Object Located between $F$ and $V$
In the fifth case, the distance of the object from the mirror is less than the focal length. In other words, the object is located between $F$ and $V$. In this case, we can find the image of the reflected object using two rays as shown below. Notice that the reflected rays do not actually converge. However, the projections of the reflected rays *do* converge behind the mirror. Therefore, a virtual image is formed.
```
output_case_5 = widgets.Output()
frame_case_5 = 1
#Toggle images
def show_svg_case_5():
global frame_case_5
if frame_case_5 == 0:
display(SVG("Images/case_5_0.svg"))
frame_case_5 = 1
elif frame_case_5 == 1:
display(SVG("Images/case_5_1.svg"))
frame_case_5 = 2
elif frame_case_5 == 2:
display(SVG("Images/case_5_2.svg"))
frame_case_5 = 0
button_case_5 = widgets.Button(description="Toggle rays", button_style = 'success')
display(button_case_5)
def on_submit_button_case_5_clicked(b):
with output_case_5:
clear_output(wait=True)
show_svg_case_5()
with output_case_5:
display(SVG("Images/case_5_0.svg"))
button_case_5.on_click(on_submit_button_case_5_clicked)
display(output_case_5)
```
**Question:** *Which options from the dropdown menus best describe the image formed by the reflected rays shown above?*
```
#Create dropdown menus
dropdown5_1 = widgets.Dropdown(options={' ':0,'Beyond C': 1, 'At C': 2, 'Between C and F': 3, 'At F': 4, 'Between F and V': 5, 'Beyond V': 6, 'Not applicable (no image)': 7}, value=0, description='Position',)
dropdown5_2 = widgets.Dropdown(options={' ':0,'Larger than the object': 1, 'Same size as the object': 2, 'Smaller than the object': 3, 'Not applicable (no image)': 4}, value=0, description='Relative size',)
dropdown5_3 = widgets.Dropdown(options={' ':0,'Upright': 1, 'Inverted': 2, 'Not applicable (no image)': 3}, value=0, description='Orientation',)
dropdown5_4 = widgets.Dropdown(options={' ':0,'Real': 1, 'Virtual': 2, 'Not applicable (no image)': 3}, value=0, description='Type',)
#Display menus as 2x2 table
container5_1 = widgets.VBox(children=[dropdown5_1,dropdown5_2])
container5_2 = widgets.VBox(children=[dropdown5_3,dropdown5_4])
display(widgets.HBox(children=[container5_1, container5_2])), print(" ", end='\r')
#Evaluate input
def check_answer5(b):
answer5_1 = dropdown5_1.label
answer5_2 = dropdown5_2.label
answer5_3 = dropdown5_3.label
answer5_4 = dropdown5_4.label
if answer5_1 == "Beyond V" and answer5_2 == "Larger than the object" and answer5_3 == "Upright" and answer5_4 == "Virtual":
print("Correct! ", end='\r')
elif answer5_1 != ' ' and answer5_2 != ' ' and answer5_3 != ' ' and answer5_4 != ' ':
print("Try again.", end='\r')
else:
print(" ", end='\r')
dropdown5_1.observe(check_answer5, names='value')
dropdown5_2.observe(check_answer5, names='value')
dropdown5_3.observe(check_answer5, names='value')
dropdown5_4.observe(check_answer5, names='value')
```
### Convex Mirrors
For reflections in convex mirrors, the location of the object does not change the general characteristics of the image. The image will always be between $F$ and $V$, smaller than the object, upright, and virtual.
```
output_convex = widgets.Output()
frame_convex = 1
#Toggle images
def show_svg_convex():
global frame_convex
if frame_convex == 0:
display(SVG("Images/convex_mirror_reflection_0.svg"))
frame_convex = 1
elif frame_convex == 1:
display(SVG("Images/convex_mirror_reflection_1.svg"))
frame_convex = 2
elif frame_convex == 2:
display(SVG("Images/convex_mirror_reflection_2.svg"))
frame_convex = 0
button_convex = widgets.Button(description="Toggle rays", button_style = 'success')
display(button_convex)
def on_submit_button_convex_clicked(b):
with output_convex:
clear_output(wait=True)
show_svg_convex()
with output_convex:
display(SVG("Images/convex_mirror_reflection_0.svg"))
button_convex.on_click(on_submit_button_convex_clicked)
display(output_convex)
```
**Question:** *Which options from the dropdown menus best describe the image formed by the reflected rays shown above?*
```
#Create dropdown menus
dropdown6_1 = widgets.Dropdown(options={' ':0,'Beyond C': 1, 'At C': 2, 'Between C and F': 3, 'At F': 4, 'Between F and V': 5, 'Beyond V': 6, 'Not applicable (no image)': 7}, value=0, description='Position',)
dropdown6_2 = widgets.Dropdown(options={' ':0,'Larger than the object': 1, 'Same size as the object': 2, 'Smaller than the object': 3, 'Not applicable (no image)': 4}, value=0, description='Relative size',)
dropdown6_3 = widgets.Dropdown(options={' ':0,'Upright': 1, 'Inverted': 2, 'Not applicable (no image)': 3}, value=0, description='Orientation',)
dropdown6_4 = widgets.Dropdown(options={' ':0,'Real': 1, 'Virtual': 2, 'Not applicable (no image)': 3}, value=0, description='Type',)
#Display menus as 2x2 table
container6_1 = widgets.VBox(children=[dropdown6_1,dropdown6_2])
container6_2 = widgets.VBox(children=[dropdown6_3,dropdown6_4])
display(widgets.HBox(children=[container6_1, container6_2])), print(" ", end='\r')
#Evaluate input
def check_answer6(b):
answer6_1 = dropdown6_1.label
answer6_2 = dropdown6_2.label
answer6_3 = dropdown6_3.label
answer6_4 = dropdown6_4.label
if answer6_1 == "Between F and V" and answer6_2 == "Smaller than the object" and answer6_3 == "Upright" and answer6_4 == "Virtual":
print("Correct! ", end='\r')
elif answer6_1 != ' ' and answer6_2 != ' ' and answer6_3 != ' ' and answer6_4 != ' ':
print("Try again.", end='\r')
else:
print(" ", end='\r')
dropdown6_1.observe(check_answer6, names='value')
dropdown6_2.observe(check_answer6, names='value')
dropdown6_3.observe(check_answer6, names='value')
dropdown6_4.observe(check_answer6, names='value')
```
## Conclusions
In this notebook, the reflection of light off of plane and spherical mirrors was examined. In summary:
* Light can be thought of as a collection of narrow beams called **rays** which travel in straight-line paths. This conceptualization of light is called the **ray model**.
* The **law of reflection** states that the angle of reflection is equal to the angle of incidence.
$$\theta_{r} = \theta_{i}$$
* A **specular reflection** is characterized by having reflected rays that are parallel and pointing in the same direction.
* A **diffuse reflection** is characterized by having reflected rays pointing in various directions.
* **Plane mirrors** always produce a virtual image behind the mirror. This image has the same size and orientation as the object, and the image and object are always equidistant from the mirror.
* **A spherical mirror** is formed from a section of a sphere. If the reflecting surface is on the inside of the spherical section, the mirror is **concave**. If it is on the outside, the mirror is **convex**.
* A **ray diagram** can be used to find the location and characteristics of a reflection in concave and convex mirrors. For concave mirrors, the characteristics of the possible images are summarized as follows:
Object position | Image position | Relative size | Orientation | Type
--- | --- | --- | --- | ---
Beyond C | Between C and F | Smaller than the object | Inverted | Real
At C | At C | Same size as the object | Inverted | Real
Between C and F | Beyond C | Larger than the object | Inverted | Real
At F | (No image) | (No image) | (No image) | (No image)
Between F and V | Beyond V | Larger than the object | Upright | Virtual
* The images formed by a convex mirror are always between $F$ and $V$, smaller than the object, upright, and virtual.
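The concave-mirror table can also be read as a small lookup: given the object position, it returns the image characteristics. The sketch below simply encodes the summary table above (the dictionary and its keys are illustrative and not part of the original lesson).
```
# Minimal sketch: image characteristics for a concave mirror, keyed by object position.
# This only encodes the summary table above; no new physics is introduced.
concave_images = {
    "Beyond C":        ("Between C and F", "Smaller than the object", "Inverted", "Real"),
    "At C":            ("At C",            "Same size as the object", "Inverted", "Real"),
    "Between C and F": ("Beyond C",        "Larger than the object",  "Inverted", "Real"),
    "At F":            ("No image",        "No image",                "No image", "No image"),
    "Between F and V": ("Beyond V",        "Larger than the object",  "Upright",  "Virtual"),
}

position, size, orientation, image_type = concave_images["Between C and F"]
print(position, size, orientation, image_type)
```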
Images in this notebook represent original artwork.
[](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md)
| true |
code
| 0.421969 | null | null | null | null |
|
# Road Following - Live demo
In this notebook, we will use the model we trained to move the JetBot smoothly on a track.
### Load Trained Model
We will assume that you have already downloaded ``best_steering_model_xy.pth`` to your workstation as instructed in the "train_model.ipynb" notebook. Now, you should upload the model file to the JetBot into this notebook's directory. Once that's finished, there should be a file named ``best_steering_model_xy.pth`` in this notebook's directory.
> Please make sure the file has uploaded fully before calling the next cell
Execute the code below to initialize the PyTorch model. This should look very familiar from the training notebook.
```
import torchvision
import torch
model = torchvision.models.resnet18(pretrained=False)
model.fc = torch.nn.Linear(512, 2)
```
Next, load the trained weights from the ``best_steering_model_xy.pth`` file that you uploaded.
```
model.load_state_dict(torch.load('best_steering_model_xy.pth'))
```
Currently, the model weights are located in CPU memory. Execute the code below to transfer them to the GPU device.
```
device = torch.device('cuda')
model = model.to(device)
model = model.eval().half()
```
### Creating the Pre-Processing Function
We have now loaded our model, but there's a slight issue: the format that we trained our model on doesn't exactly match the format of the camera, so we need to do some preprocessing. This involves the following steps:
1. Convert from HWC layout to CHW layout
2. Normalize using the same parameters as we did during training (our camera provides values in the [0, 255] range and training loaded images in the [0, 1] range, so we need to scale by 255.0)
3. Transfer the data from CPU memory to GPU memory
4. Add a batch dimension
```
import torchvision.transforms as transforms
import torch.nn.functional as F
import cv2
import PIL.Image
import numpy as np
mean = torch.Tensor([0.485, 0.456, 0.406]).cuda().half()
std = torch.Tensor([0.229, 0.224, 0.225]).cuda().half()
def preprocess(image):
image = PIL.Image.fromarray(image)
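# note: to_tensor converts the HWC uint8 frame to a CHW float tensor and already
# scales [0, 255] values to [0, 1], so no explicit division by 255 is needed here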
image = transforms.functional.to_tensor(image).to(device).half()
image.sub_(mean[:, None, None]).div_(std[:, None, None])
return image[None, ...]
```
Awesome! We've now defined our pre-processing function which can convert images from the camera format to the neural network input format.
Now, let's start and display our camera. You should be pretty familiar with this by now.
```
from IPython.display import display
import ipywidgets
import ipywidgets.widgets as widgets
import traitlets
from jetbot import Camera, bgr8_to_jpeg
image_widget = widgets.Image(format='jpeg', width=224, height=224)
display(image_widget)
```
We'll also create our robot instance which we'll need to drive the motors.
```
from jetbot import Robot
robot = Robot(driver_board = "dfrobot")
```
Now, we will define sliders to control JetBot
> Note: We have initialized the slider values to the best known configurations, however these might not work for your dataset; therefore please increase or decrease the sliders according to your setup and environment
1. Speed Control (speed_gain_slider): To start your JetBot increase ``speed_gain_slider``
2. Steering Gain Control (steering_gain_slider): If you see the JetBot wobbling, you need to reduce ``steering_gain_slider`` until it is smooth
3. Steering Bias control (steering_bias_slider): If you see the JetBot biased towards the extreme right or extreme left side of the track, you should adjust this slider until the JetBot starts following the line or track in the center. This accounts for motor biases as well as camera offsets
> Note: You should play around with the above-mentioned sliders at lower speed to get smooth JetBot road following behavior.
```
speed_gain_slider = ipywidgets.FloatSlider(min=0.0, max=1.0, step=0.01, description='speed gain')
steering_gain_slider = ipywidgets.FloatSlider(min=0.0, max=1.0, step=0.01, value=0.2, description='steering gain')
steering_dgain_slider = ipywidgets.FloatSlider(min=0.0, max=0.5, step=0.001, value=0.0, description='steering kd')
steering_bias_slider = ipywidgets.FloatSlider(min=-0.3, max=0.3, step=0.01, value=0.0, description='steering bias')
slider_box = ipywidgets.VBox([speed_gain_slider, steering_gain_slider, steering_dgain_slider, steering_bias_slider])
x_slider = ipywidgets.FloatSlider(min=-1.0, max=1.0, description='x')
y_slider = ipywidgets.FloatSlider(min=0, max=1.0, orientation='vertical', description='y')
steering_slider = ipywidgets.FloatSlider(min=-1.0, max=1.0, description='steering')
speed_slider = ipywidgets.FloatSlider(min=0, max=1.0, orientation='vertical', description='speed')
y_box = ipywidgets.HBox([y_slider, speed_slider])
xy_box = ipywidgets.VBox([y_box,x_slider, steering_slider])
final_box = ipywidgets.HBox([xy_box,slider_box])
display(final_box)
```
Next, we'll create a function that will get called whenever the camera's value changes. This function will do the following steps
1. Pre-process the camera image
2. Execute the neural network
3. Compute the approximate steering value
4. Control the motors using proportional / derivative control (PD)
```
angle = 0.0
angle_last = 0.0
cap = cv2.VideoCapture(1)
cap.set(cv2.CAP_PROP_FRAME_WIDTH,640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT,480)
try:
while True:
ret, frame = cap.read()
new_color_image = cv2.resize(frame, (224, 224), interpolation=cv2.INTER_CUBIC)
xy = model(preprocess(new_color_image)).detach().float().cpu().numpy().flatten()
x = xy[0]
y = (0.5 - xy[1]) / 2.0
x_slider.value = x
y_slider.value = y
speed_slider.value = speed_gain_slider.value
angle = np.arctan2(x, y)
pid = angle * steering_gain_slider.value + (angle - angle_last) * steering_dgain_slider.value
angle_last = angle
steering_slider.value = pid + steering_bias_slider.value
robot.left_motor.value = max(min(speed_slider.value + steering_slider.value, 1.0), 0.0)
robot.right_motor.value = max(min(speed_slider.value - steering_slider.value, 1.0), 0.0)
#image_widget.value = cv2.imencode('.jpg',new_color_image)[1].tobytes()
finally:
cap.release()
robot.stop()
```
Cool! We've created our neural network execution loop. Logic like this can also be wrapped in a function and attached to the camera with the `observe` function; the cell above instead reads frames directly from an OpenCV capture in a `while` loop, so running it starts driving immediately.
>WARNING: This code will move the robot!! Please make sure your robot has clearance and is on the Lego track or the track you collected data on. The road follower should work, but the neural network is only as good as the data it's trained on!
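For reference, here is a minimal sketch of the `observe`-based variant. It is not executed in this notebook (the `while`-loop cell above is the variant actually used) and it assumes the `jetbot` `Camera` class imported earlier, plus the `model`, `preprocess`, `robot`, slider and angle variables defined above.
```
# Sketch only: callback-style road following using the jetbot Camera widget.
camera = Camera.instance(width=224, height=224)

def execute(change):
    global angle, angle_last
    image = change['new']  # latest camera frame (HWC, uint8)
    xy = model(preprocess(image)).detach().float().cpu().numpy().flatten()
    x = xy[0]
    y = (0.5 - xy[1]) / 2.0
    angle = np.arctan2(x, y)
    pid = angle * steering_gain_slider.value + (angle - angle_last) * steering_dgain_slider.value
    angle_last = angle
    steering = pid + steering_bias_slider.value
    robot.left_motor.value = max(min(speed_gain_slider.value + steering, 1.0), 0.0)
    robot.right_motor.value = max(min(speed_gain_slider.value - steering, 1.0), 0.0)

execute({'new': camera.value})          # run once to initialise
camera.observe(execute, names='value')  # then let the camera trigger it on every new frame
# to stop: camera.unobserve(execute, names='value'); robot.stop()
```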
### Conclusion
That's it for this live demo! Hopefully you had some fun seeing your JetBot moving smoothly on the track, following the road!!!
If your JetBot wasn't following the road very well, try to spot where it fails. The beauty is that we can collect more data for these failure scenarios and the JetBot should get even better :)
| true |
code
| 0.633722 | null | null | null | null |
|
# Introduction to Brian part 3: Simulations
If you haven’t yet read parts 1 and 2 on Neurons and Synapses, go read them first.
This tutorial is about managing the slightly more complicated tasks that crop up in research problems, rather than the toy examples we've been looking at so far. So we cover things like inputting sensory data, modelling experimental conditions, etc.
As before we start by importing the Brian package and setting up matplotlib for IPython:
```
from brian2 import *
%matplotlib inline
```
## Multiple runs
Let's start by looking at a very common task: doing multiple runs of a simulation with some parameter that changes. Let's start off with something very simple, how does the firing rate of a leaky integrate-and-fire neuron driven by Poisson spiking neurons change depending on its membrane time constant? Let's set that up.
```
# remember, this is here for running separate simulations in the same notebook
start_scope()
# Parameters
num_inputs = 100
input_rate = 10*Hz
weight = 0.1
# Range of time constants
tau_range = linspace(1, 10, 30)*ms
# Use this list to store output rates
output_rates = []
# Iterate over range of time constants
for tau in tau_range:
# Construct the network each time
P = PoissonGroup(num_inputs, rates=input_rate)
eqs = '''
dv/dt = -v/tau : 1
'''
G = NeuronGroup(1, eqs, threshold='v>1', reset='v=0', method='exact')
S = Synapses(P, G, on_pre='v += weight')
S.connect()
M = SpikeMonitor(G)
# Run it and store the output firing rate in the list
run(1*second)
output_rates.append(M.num_spikes/second)
# And plot it
plot(tau_range/ms, output_rates)
xlabel(r'$\tau$ (ms)')
ylabel('Firing rate (sp/s)');
```
Now if you're running the notebook, you'll see that this was a little slow to run. The reason is that for each loop, you're recreating the objects from scratch. We can improve that by setting up the network just once. We store a copy of the state of the network before the loop, and restore it at the beginning of each iteration.
```
start_scope()
num_inputs = 100
input_rate = 10*Hz
weight = 0.1
tau_range = linspace(1, 10, 30)*ms
output_rates = []
# Construct the network just once
P = PoissonGroup(num_inputs, rates=input_rate)
eqs = '''
dv/dt = -v/tau : 1
'''
G = NeuronGroup(1, eqs, threshold='v>1', reset='v=0', method='exact')
S = Synapses(P, G, on_pre='v += weight')
S.connect()
M = SpikeMonitor(G)
# Store the current state of the network
store()
for tau in tau_range:
# Restore the original state of the network
restore()
# Run it with the new value of tau
run(1*second)
output_rates.append(M.num_spikes/second)
plot(tau_range/ms, output_rates)
xlabel(r'$\tau$ (ms)')
ylabel('Firing rate (sp/s)');
```
That's a very simple example of using store and restore, but you can use it in much more complicated situations. For example, you might want to run a long training run, and then run multiple test runs afterwards. Simply put a store after the long training run, and a restore before each testing run.
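As a minimal sketch of that pattern (the snapshot name `'trained'` and the run durations below are illustrative, not taken from the model above), named snapshots keep the training and test phases separate:
```
# Sketch of the train-once / test-many pattern with named snapshots.
run(10*second)            # long training run
store('trained')          # snapshot the trained network

for trial in range(5):
    restore('trained')    # go back to the trained state
    run(1*second)         # each test run starts from the same point
```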
You can also see that the output curve is very noisy and doesn't increase monotonically like we'd expect. The noise is coming from the fact that we run the Poisson group afresh each time. If we only wanted to see the effect of the time constant, we could make sure that the spikes were the same each time (although note that really, you ought to do multiple runs and take an average). We do this by running just the Poisson group once, recording its spikes, and then creating a new `SpikeGeneratorGroup` that will output those recorded spikes each time.
```
start_scope()
num_inputs = 100
input_rate = 10*Hz
weight = 0.1
tau_range = linspace(1, 10, 30)*ms
output_rates = []
# Construct the Poisson spikes just once
P = PoissonGroup(num_inputs, rates=input_rate)
MP = SpikeMonitor(P)
# We use a Network object because later on we don't
# want to include these objects
net = Network(P, MP)
net.run(1*second)
# And keep a copy of those spikes
spikes_i = MP.i
spikes_t = MP.t
# Now construct the network that we run each time
# SpikeGeneratorGroup gets the spikes that we created before
SGG = SpikeGeneratorGroup(num_inputs, spikes_i, spikes_t)
eqs = '''
dv/dt = -v/tau : 1
'''
G = NeuronGroup(1, eqs, threshold='v>1', reset='v=0', method='exact')
S = Synapses(SGG, G, on_pre='v += weight')
S.connect()
M = SpikeMonitor(G)
# Store the current state of the network
net = Network(SGG, G, S, M)
net.store()
for tau in tau_range:
# Restore the original state of the network
net.restore()
# Run it with the new value of tau
net.run(1*second)
output_rates.append(M.num_spikes/second)
plot(tau_range/ms, output_rates)
xlabel(r'$\tau$ (ms)')
ylabel('Firing rate (sp/s)');
```
You can see that now there is much less noise and it increases monotonically because the input spikes are the same each time, meaning we're seeing the effect of the time constant, not the random spikes.
Note that in the code above, we created `Network` objects. The reason is that in the loop, if we just called `run` it would try to simulate all the objects, including the Poisson neurons ``P``, and we only want to run that once. We use `Network` to specify explicitly which objects we want to include.
The techniques we've looked at so far are the conceptually most simple way to do multiple runs, but not always the most efficient. Since there's only a single output neuron in the model above, we can simply duplicate that output neuron and make the time constant a parameter of the group.
```
start_scope()
num_inputs = 100
input_rate = 10*Hz
weight = 0.1
tau_range = linspace(1, 10, 30)*ms
num_tau = len(tau_range)
P = PoissonGroup(num_inputs, rates=input_rate)
# We make tau a parameter of the group
eqs = '''
dv/dt = -v/tau : 1
tau : second
'''
# And we have num_tau output neurons, each with a different tau
G = NeuronGroup(num_tau, eqs, threshold='v>1', reset='v=0', method='exact')
G.tau = tau_range
S = Synapses(P, G, on_pre='v += weight')
S.connect()
M = SpikeMonitor(G)
# Now we can just run once with no loop
run(1*second)
output_rates = M.count/second # firing rate is count/duration
plot(tau_range/ms, output_rates)
xlabel(r'$\tau$ (ms)')
ylabel('Firing rate (sp/s)');
```
You can see that this is much faster again! It's a little bit more complicated conceptually, and it's not always possible to do this trick, but it can be much more efficient if it's possible.
Let's finish with this example by having a quick look at how the mean and standard deviation of the interspike intervals depends on the time constant.
```
trains = M.spike_trains()
isi_mu = full(num_tau, nan)*second
isi_std = full(num_tau, nan)*second
for idx in range(num_tau):
train = diff(trains[idx])
if len(train)>1:
isi_mu[idx] = mean(train)
isi_std[idx] = std(train)
errorbar(tau_range/ms, isi_mu/ms, yerr=isi_std/ms)
xlabel(r'$\tau$ (ms)')
ylabel('Interspike interval (ms)');
```
Notice that we used the ``spike_trains()`` method of `SpikeMonitor`. This is a dictionary with keys being the indices of the neurons and values being the array of spike times for that neuron.
## Changing things during a run
Imagine an experiment where you inject current into a neuron, and change the amplitude randomly every 10 ms. Let's see if we can model that using a Hodgkin-Huxley type neuron.
```
start_scope()
# Parameters
area = 20000*umetre**2
Cm = 1*ufarad*cm**-2 * area
gl = 5e-5*siemens*cm**-2 * area
El = -65*mV
EK = -90*mV
ENa = 50*mV
g_na = 100*msiemens*cm**-2 * area
g_kd = 30*msiemens*cm**-2 * area
VT = -63*mV
# The model
eqs_HH = '''
dv/dt = (gl*(El-v) - g_na*(m*m*m)*h*(v-ENa) - g_kd*(n*n*n*n)*(v-EK) + I)/Cm : volt
dm/dt = 0.32*(mV**-1)*(13.*mV-v+VT)/
(exp((13.*mV-v+VT)/(4.*mV))-1.)/ms*(1-m)-0.28*(mV**-1)*(v-VT-40.*mV)/
(exp((v-VT-40.*mV)/(5.*mV))-1.)/ms*m : 1
dn/dt = 0.032*(mV**-1)*(15.*mV-v+VT)/
(exp((15.*mV-v+VT)/(5.*mV))-1.)/ms*(1.-n)-.5*exp((10.*mV-v+VT)/(40.*mV))/ms*n : 1
dh/dt = 0.128*exp((17.*mV-v+VT)/(18.*mV))/ms*(1.-h)-4./(1+exp((40.*mV-v+VT)/(5.*mV)))/ms*h : 1
I : amp
'''
group = NeuronGroup(1, eqs_HH,
threshold='v > -40*mV',
refractory='v > -40*mV',
method='exponential_euler')
group.v = El
statemon = StateMonitor(group, 'v', record=True)
spikemon = SpikeMonitor(group, variables='v')
figure(figsize=(9, 4))
for l in range(5):
group.I = rand()*50*nA
run(10*ms)
axvline(l*10, ls='--', c='k')
axhline(El/mV, ls='-', c='lightgray', lw=3)
plot(statemon.t/ms, statemon.v[0]/mV, '-b')
plot(spikemon.t/ms, spikemon.v/mV, 'ob')
xlabel('Time (ms)')
ylabel('v (mV)');
```
In the code above, we used a loop over multiple runs to achieve this. That's fine, but it's not the most efficient way to do it because each time we call ``run`` we have to do a lot of initialisation work that slows everything down. It also won't work as well with the more efficient standalone mode of Brian. Here's another way.
```
start_scope()
group = NeuronGroup(1, eqs_HH,
threshold='v > -40*mV',
refractory='v > -40*mV',
method='exponential_euler')
group.v = El
statemon = StateMonitor(group, 'v', record=True)
spikemon = SpikeMonitor(group, variables='v')
# we replace the loop with a run_regularly
group.run_regularly('I = rand()*50*nA', dt=10*ms)
run(50*ms)
figure(figsize=(9, 4))
# we keep the loop just to draw the vertical lines
for l in range(5):
axvline(l*10, ls='--', c='k')
axhline(El/mV, ls='-', c='lightgray', lw=3)
plot(statemon.t/ms, statemon.v[0]/mV, '-b')
plot(spikemon.t/ms, spikemon.v/mV, 'ob')
xlabel('Time (ms)')
ylabel('v (mV)');
```
We've replaced the loop that had multiple ``run`` calls with a ``run_regularly``. This makes the specified block of code run every ``dt=10*ms``. The ``run_regularly`` lets you run code specific to a single `NeuronGroup`, but sometimes you might need more flexibility. For this, you can use `network_operation` which lets you run arbitrary Python code (but won't work with the standalone mode).
```
start_scope()
group = NeuronGroup(1, eqs_HH,
threshold='v > -40*mV',
refractory='v > -40*mV',
method='exponential_euler')
group.v = El
statemon = StateMonitor(group, 'v', record=True)
spikemon = SpikeMonitor(group, variables='v')
# we replace the loop with a network_operation
@network_operation(dt=10*ms)
def change_I():
group.I = rand()*50*nA
run(50*ms)
figure(figsize=(9, 4))
for l in range(5):
axvline(l*10, ls='--', c='k')
axhline(El/mV, ls='-', c='lightgray', lw=3)
plot(statemon.t/ms, statemon.v[0]/mV, '-b')
plot(spikemon.t/ms, spikemon.v/mV, 'ob')
xlabel('Time (ms)')
ylabel('v (mV)');
```
Now let's extend this example to run on multiple neurons, each with a different capacitance to see how that affects the behaviour of the cell.
```
start_scope()
N = 3
eqs_HH_2 = '''
dv/dt = (gl*(El-v) - g_na*(m*m*m)*h*(v-ENa) - g_kd*(n*n*n*n)*(v-EK) + I)/C : volt
dm/dt = 0.32*(mV**-1)*(13.*mV-v+VT)/
(exp((13.*mV-v+VT)/(4.*mV))-1.)/ms*(1-m)-0.28*(mV**-1)*(v-VT-40.*mV)/
(exp((v-VT-40.*mV)/(5.*mV))-1.)/ms*m : 1
dn/dt = 0.032*(mV**-1)*(15.*mV-v+VT)/
(exp((15.*mV-v+VT)/(5.*mV))-1.)/ms*(1.-n)-.5*exp((10.*mV-v+VT)/(40.*mV))/ms*n : 1
dh/dt = 0.128*exp((17.*mV-v+VT)/(18.*mV))/ms*(1.-h)-4./(1+exp((40.*mV-v+VT)/(5.*mV)))/ms*h : 1
I : amp
C : farad
'''
group = NeuronGroup(N, eqs_HH_2,
threshold='v > -40*mV',
refractory='v > -40*mV',
method='exponential_euler')
group.v = El
# initialise with some different capacitances
group.C = array([0.8, 1, 1.2])*ufarad*cm**-2*area
statemon = StateMonitor(group, variables=True, record=True)
# we go back to run_regularly
group.run_regularly('I = rand()*50*nA', dt=10*ms)
run(50*ms)
figure(figsize=(9, 4))
for l in range(5):
axvline(l*10, ls='--', c='k')
axhline(El/mV, ls='-', c='lightgray', lw=3)
plot(statemon.t/ms, statemon.v.T/mV, '-')
xlabel('Time (ms)')
ylabel('v (mV)');
```
So that runs, but something looks wrong! The injected currents look like they're different for all the different neurons! Let's check:
```
plot(statemon.t/ms, statemon.I.T/nA, '-')
xlabel('Time (ms)')
ylabel('I (nA)');
```
Sure enough, it's different each time. But why? We wrote ``group.run_regularly('I = rand()*50*nA', dt=10*ms)`` which seems like it should give the same value of I for each neuron. But, like threshold and reset statements, ``run_regularly`` code is interpreted as being run separately for each neuron, and because I is a parameter, it can be different for each neuron. We can fix this by making I into a *shared* variable, meaning it has the same value for each neuron.
```
start_scope()
N = 3
eqs_HH_3 = '''
dv/dt = (gl*(El-v) - g_na*(m*m*m)*h*(v-ENa) - g_kd*(n*n*n*n)*(v-EK) + I)/C : volt
dm/dt = 0.32*(mV**-1)*(13.*mV-v+VT)/
(exp((13.*mV-v+VT)/(4.*mV))-1.)/ms*(1-m)-0.28*(mV**-1)*(v-VT-40.*mV)/
(exp((v-VT-40.*mV)/(5.*mV))-1.)/ms*m : 1
dn/dt = 0.032*(mV**-1)*(15.*mV-v+VT)/
(exp((15.*mV-v+VT)/(5.*mV))-1.)/ms*(1.-n)-.5*exp((10.*mV-v+VT)/(40.*mV))/ms*n : 1
dh/dt = 0.128*exp((17.*mV-v+VT)/(18.*mV))/ms*(1.-h)-4./(1+exp((40.*mV-v+VT)/(5.*mV)))/ms*h : 1
I : amp (shared) # everything is the same except we've added this shared
C : farad
'''
group = NeuronGroup(N, eqs_HH_3,
threshold='v > -40*mV',
refractory='v > -40*mV',
method='exponential_euler')
group.v = El
group.C = array([0.8, 1, 1.2])*ufarad*cm**-2*area
statemon = StateMonitor(group, 'v', record=True)
group.run_regularly('I = rand()*50*nA', dt=10*ms)
run(50*ms)
figure(figsize=(9, 4))
for l in range(5):
axvline(l*10, ls='--', c='k')
axhline(El/mV, ls='-', c='lightgray', lw=3)
plot(statemon.t/ms, statemon.v.T/mV, '-')
xlabel('Time (ms)')
ylabel('v (mV)');
```
Ahh, that's more like it!
## Adding input
Now let's think about a neuron being driven by a sinusoidal input. Let's go back to a leaky integrate-and-fire to simplify the equations a bit.
```
start_scope()
A = 2.5
f = 10*Hz
tau = 5*ms
eqs = '''
dv/dt = (I-v)/tau : 1
I = A*sin(2*pi*f*t) : 1
'''
G = NeuronGroup(1, eqs, threshold='v>1', reset='v=0', method='euler')
M = StateMonitor(G, variables=True, record=True)
run(200*ms)
plot(M.t/ms, M.v[0], label='v')
plot(M.t/ms, M.I[0], label='I')
xlabel('Time (ms)')
ylabel('v')
legend(loc='best');
```
So far, so good and the sort of thing we saw in the first tutorial. Now, what if that input current were something we had recorded and saved in a file? In that case, we can use `TimedArray`. Let's start by reproducing the picture above but using `TimedArray`.
```
start_scope()
A = 2.5
f = 10*Hz
tau = 5*ms
# Create a TimedArray and set the equations to use it
t_recorded = arange(int(200*ms/defaultclock.dt))*defaultclock.dt
I_recorded = TimedArray(A*sin(2*pi*f*t_recorded), dt=defaultclock.dt)
eqs = '''
dv/dt = (I-v)/tau : 1
I = I_recorded(t) : 1
'''
G = NeuronGroup(1, eqs, threshold='v>1', reset='v=0', method='exact')
M = StateMonitor(G, variables=True, record=True)
run(200*ms)
plot(M.t/ms, M.v[0], label='v')
plot(M.t/ms, M.I[0], label='I')
xlabel('Time (ms)')
ylabel('v')
legend(loc='best');
```
Note that for the example where we put the ``sin`` function directly in the equations, we had to use the ``method='euler'`` argument because the exact integrator wouldn't work here (try it!). However, ``TimedArray`` is considered to be constant over its time step and so the linear integrator can be used. This means you won't get the same behaviour from these two methods for two reasons. Firstly, the numerical integration methods ``exact`` and ``euler`` give slightly different results. Secondly, ``sin`` is not constant over a timestep whereas ``TimedArray`` is.
Now just to show that ``TimedArray`` works for arbitrary currents, let's make a weird "recorded" current and run it on that.
```
start_scope()
A = 2.5
f = 10*Hz
tau = 5*ms
# Let's create an array that couldn't be
# reproduced with a formula
num_samples = int(200*ms/defaultclock.dt)
I_arr = zeros(num_samples)
for _ in range(100):
a = randint(num_samples)
I_arr[a:a+100] = rand()
I_recorded = TimedArray(A*I_arr, dt=defaultclock.dt)
eqs = '''
dv/dt = (I-v)/tau : 1
I = I_recorded(t) : 1
'''
G = NeuronGroup(1, eqs, threshold='v>1', reset='v=0', method='exact')
M = StateMonitor(G, variables=True, record=True)
run(200*ms)
plot(M.t/ms, M.v[0], label='v')
plot(M.t/ms, M.I[0], label='I')
xlabel('Time (ms)')
ylabel('v')
legend(loc='best');
```
Finally, let's finish on an example that actually reads in some data from a file. See if you can work out how this example works.
```
start_scope()
from matplotlib.image import imread
img = (1-imread('brian.png'))[::-1, :, 0].T
num_samples, N = img.shape
ta = TimedArray(img, dt=1*ms)
A = 1.5
tau = 2*ms
eqs = '''
dv/dt = (A*ta(t, i)-v)/tau+0.8*xi*tau**-0.5 : 1
'''
G = NeuronGroup(N, eqs, threshold='v>1', reset='v=0', method='euler')
M = SpikeMonitor(G)
run(num_samples*ms)
plot(M.t/ms, M.i, '.k', ms=3)
xlim(0, num_samples)
ylim(0, N)
xlabel('Time (ms)')
ylabel('Neuron index');
```
| true |
code
| 0.611498 | null | null | null | null |
|
# Quantum states with high dimensional entanglement
This notebook allows visualizing the 20 circuits of the second pilot study, with mention of their depth and gate counts.
At the end, a toy protocol of ballot transmission is presented with experimental verification.
```
import numpy as np
import copy
from qiskit import QuantumRegister, QuantumCircuit, ClassicalRegister, Aer, execute, transpile, assemble
from qiskit.tools.visualization import *
from qiskit.ignis.mitigation.measurement import (complete_meas_cal, tensored_meas_cal,
CompleteMeasFitter, TensoredMeasFitter)
import json
import time
from qiskit.tools.monitor import job_monitor
from scipy.spatial.distance import cdist
import matplotlib.pyplot as plt
from c_utils import new_cut # circuit building utilities
from o_utils import ora
data_directory = "data2_files/" # this directory for 2d pilot project data
def json_dic_loader(dic_name):
f = open(data_directory+dic_name+'.json')
return json.load(f)
def json_dic_dumper(dic, dic_name):
with open(data_directory+dic_name+'.json', 'w') as f:
json.dump(dic,f)
```
## Set up the simulator and layout for 5 qubits
```
simulator = Aer.get_backend('qasm_simulator')
#specify the layout of the devices
used_qubits = 5
qubit_list = [0,1,2,3,4]
program_name="AL2" # This for a mix of W/Psi+ and W_bar/Phi+ separable states (2d pilot project)
Flag_char = "DS" # use the joint set
if len(Flag_char) >= 2:
unique_char = "M" # for "mixed"
else:
unique_char = Flag_char
```
```
# These dictionaries for the devices used in the study
if program_name == "QAD":
fidelity_dic = {'ibmq_athens': 0.925110, 'ibmq_valencia': 0.809101, 'ibmq_ourense': 0.802380,"ibmqx2": 0.627392,
'ibmq_santiago': 0.919399, 'ibmq_vigo': 0.908840, 'ibmq_lima':0.771835, 'ideal_device': 1.0}
data_directory = "data_files/"
elif program_name == "AL2":
fidelity_dic = {'ibmq_athens': 0.910145, 'ibmq_valencia': 0.794262, 'ibmq_ourense': 0.818974, "ibmqx2": 0.359528,
'ibmq_santiago': 0.900024, 'ibmq_vigo': 0.841831, 'ibmq_quito': 0.840260, 'ibmq_lima':0.771835,
'ibmq_belem':0.842281,'ideal_device': 1.0}
data_directory = "data2_files/"
QV_dic = {'ibmq_athens': 32.0, 'ibmq_valencia': 16.0, 'ibmq_ourense': 8.0,"ibmqx2": 8.0, 'ibmq_santiago': 32.0,
'ibmq_vigo': 16.0, 'ideal_device': np.inf, 'ibmq_quito': 16.0, 'ibmq_lima': 8.0,'ibmq_belem':16.0}  # ibmq_lima QV assumed to be 8.0; the original entry mistakenly held the label "Lim"
dev_dic = {'ibmq_santiago': "San",'ibmq_athens': "Ath", 'ibmq_valencia': "Val", 'ibmq_vigo': 'Vig','ibmq_ourense': "Our",
"ibmqx2": 'Yor', 'ibmq_quito': "Qui", 'ibmq_lima': "Lim", 'ibmq_belem': "Bel",'ideal_device': "Ide" }
# specify the device: here first the ideal noise-free device
project_device = 'ideal_device'
device_name = dev_dic[project_device]
# specify the nb of id gates between state creation and measurements
# zero for the ideal device of course
id_gates = 0
str_nb_id = str(id_gates)
zfilled = str_nb_id.zfill(4-len(str_nb_id))
# tail of the file names for RAM storage
mitig_name = program_name + "_" + device_name
project_name = mitig_name + "_" + unique_char + zfilled
print(mitig_name)
print(project_name)
# establish the result label list
# meas_calibs will be used for mitigation in the real device section
qr = QuantumRegister(used_qubits) #
meas_calibs, label_list = complete_meas_cal(qubit_list=qubit_list, qr=qr, circlabel='mcal')
nb_labels=len(label_list)
print(nb_labels,label_list)
len(meas_calibs)
# permutation list
# here it is simple to write down the list,
# but a version using itertools will be welcome for >5 qubit projects (a sketch follows below)
q_perm = [[0, 1, 2, 3, 4], [0, 1, 3, 2, 4], [0, 1, 4, 2, 3], [0, 2, 3, 1, 4], [0, 2, 4, 1, 3],
[0, 3, 4, 1, 2], [1, 2, 3, 0, 4], [1, 2, 4, 0, 3], [1, 3, 4, 0, 2], [2, 3, 4, 0, 1]]
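# equivalent construction with itertools (a sketch: the first three indices carry the
# 3-qubit state and the last two the Bell pair, matching the ordering of q_perm above)
from itertools import combinations
q_perm_itertools = [list(c) + [q for q in range(used_qubits) if q not in c] for c in combinations(range(used_qubits), 3)]
assert q_perm_itertools == q_perm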
```
## Create the quantum states
```
# define the two subsets of 10 separable states
if program_name == "QAD":
state_1a = ["W","Phi+"]
state_1b = ["GHZ","Psi+"]
elif program_name == "ALT" or "AL2":
state_1a = ["W","Psi+"]
state_1b = ["Wbar","Phi+"]
l_states = state_1a+state_1b
l_states
# version 20 circuits for demonstration
# (in the version run on real devices: two batches of 10 circuits)
# these circuits limited to state creation are ready to be saved
# for ultimately building circuits adapted to noisy simulator and real devices
# as option, these circuits will include a row of id gates between creation and measurements
circ_ori = []
for i_s in range(0,len(l_states),2):
for perm in q_perm:
mycircuit = QuantumCircuit(used_qubits, used_qubits)
mycircuit = new_cut.circuit_builder(mycircuit, perm, l_states[i_s],l_states[i_s+1])
circ_ori.append(mycircuit)
# add measurement section to the circuit set newly created:
nb_states = len(circ_ori)
circ_ideal = copy.deepcopy(circ_ori)
for i_state in range(nb_states):
new_cut.add_barrier_and_measure(circ_ideal[i_state],qubit_list)
```
## Obtain result distributions on noise free simulator
```
# execute on noise free simulator
s_sim = 12000
job_simul = execute(circ_ideal, backend=simulator, shots=s_sim)
tot_results_simul = job_simul.result()
# establish a dictionary of count results on noise free simulator:
# (this step is only useful if ram storage is performed)
void_counts = dict(zip(label_list, np.full(2**used_qubits,0.0))) #, dtype=int)))
tot_results_sim_dic = {}
ideal_dic = {}
for i_state in range(nb_states):
counts_simul = copy.deepcopy(void_counts)
counts_simul.update(tot_results_simul.get_counts(i_state))
ideal_dic[str(i_state)]=counts_simul
```
Example of circuit for separable state of the first type : $|W\rangle\otimes|\Psi^+\rangle$
```
i_state_test = 1
print(device_name, "circuit #",i_state_test)
circ_ideal[i_state_test].draw(output='mpl')
print(device_name, "circuit #",i_state_test)
plot_histogram(ideal_dic[str(i_state_test)],
legend=['noise free simulation'],
color = "b", figsize=(10.,5.))
```
Example of circuit for separable state of the second type : $|W\rangle^{\otimes X}\otimes|\Phi^+\rangle$
```
i_state_test = 11
print(device_name, "circuit #",i_state_test)
circ_ideal[i_state_test].draw(output='mpl')
print(device_name, "circuit #",i_state_test)
plot_histogram(ideal_dic[str(i_state_test)],
legend=['noise free simulation'],
color = "b", figsize=(10.,5.))
```
### Obtain the matrix of probability distribution of shape(nb_state,nb_labels) used by the classifier
```
def print_first_and_last_row(PDM):
print("first and last rows of the probability distribution matrix of dimension "+str(nb_states)+"x"+str(nb_labels))
print(np.round(PDM[0:1,:],4))
print(" ...")
print(np.round(PDM[-1:,:],4))
PD_ideal = np.ndarray((nb_states,nb_labels))
for i_state in range(nb_states):
PD_ideal[i_state, :] = list(ideal_dic[str(i_state)].values())
# now a little trick to get the ideal values from the simulator approximated values
with np.errstate(divide='ignore'): # ignore the divide by zero warning
PD_ideal = 1/np.round(s_sim/(PD_ideal))
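# each raw count c becomes 1/round(s_sim/c): the nearest unit fraction (zero counts map to 0);
# this recovers the exact ideal probabilities here because every nonzero outcome
# probability of these separable W/Bell product states is a unit fraction (1/6)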
# have a look at the matrix head and tail:
print_first_and_last_row(PD_ideal)
```
# Real device section
```
from qiskit import IBMQ
IBMQ.load_account()
provider = IBMQ.get_provider(hub='ibm-q')
print(provider.backends())
project_device = 'ibmq_quito' # you may choose a different backend here
device_name = dev_dic[project_device]
mitig_name = program_name + "_" + device_name
print(mitig_name)
# determine here the backend
device = provider.get_backend(project_device) # the backend names are listed here above
properties = device.properties()
coupling_map = device.configuration().coupling_map
```
### Load circuits run on real device
```
id_gates = 0 # choice of 0 or 256 at this time
str_nb_id = str(id_gates)
zfilled = str_nb_id.zfill(4-len(str_nb_id))
project_name = mitig_name + "_" + unique_char + zfilled
print(project_name)
circuit_dic = json_dic_loader("circuit_"+ project_name)
real_circs = []
for i_state in list(range(nb_states)):
real_circs.append(QuantumCircuit().from_qasm_str(circuit_dic[str(i_state)]))
for i_state in range(20):
print(i_state,
"depth",real_circs[i_state].depth(),
"size", real_circs[i_state].size(),
"cx",real_circs[i_state].num_nonlocal_gates(),
json.dumps(real_circs[i_state].count_ops()))
i_state_test = 11 # choose here a particular state to study
print(project_device, "circuit #",i_state_test,
"circuit depth:",real_circs[i_state_test].depth())
print('gates = ',real_circs[i_state_test].count_ops())
real_circs[i_state_test].draw(output='mpl')
```
## Histogram on simulator
```
job_simul = execute(real_circs[i_state_test], backend=simulator, shots=s_sim)
print(project_device, "circuit #",i_state_test, "on noise free simulator")
simul_results = job_simul.result().get_counts()
plot_histogram(simul_results,
legend=['noise free simulation'],
color = "b", figsize=(10.,5.))
```
# Results on real device
### Obtain mitigation filter
```
# retrieve the corresponding measurement mitigation filter obtained at experimental time
# use a fake job because the calibration results were stored as dictionary
simulator = Aer.get_backend('qasm_simulator')
fake_job_cal = execute(meas_calibs, backend=simulator, shots=1)
fake_cal_results = fake_job_cal.result()
cal_results_dic = json_dic_loader("cal_results_dic_"+mitig_name)
if 'date' in cal_results_dic.keys():
str(cal_results_dic['date'])
cal_results = fake_cal_results.from_dict(cal_results_dic)
meas_fitter = CompleteMeasFitter(cal_results, label_list, qubit_list=qubit_list, circlabel='mcal')
meas_filter = meas_fitter.filter
# have a look at the average measurement fidelity of this device:
print("Average Measurement Fidelity was: %f" % meas_fitter.readout_fidelity(), "for",project_device)
```
### Obtain the matrix of probability distribution of shape(nb_state,nb_labels) used by the classifier
```
empirical_dic = json_dic_loader('experimental_'+project_name)
test_dic = json_dic_loader('test_'+project_name)
def rectify_counts(tot_res, test_cqi,mitigation,m_filter) :
void_counts = dict(zip(label_list, np.zeros(2**used_qubits)))
try:
counts_results_real_test = tot_res[test_cqi]
except KeyError as error:
counts_results_real_test = tot_res[str(test_cqi)]
raw_counts_test = copy.deepcopy(void_counts)
raw_counts_test.update(counts_results_real_test)
if mitigation:
mitigated_results_test = meas_filter.apply(raw_counts_test, method = 'least_squares')
returned_counts = copy.deepcopy(void_counts)
returned_counts.update(mitigated_results_test)
else:
returned_counts = copy.deepcopy(raw_counts_test)
return returned_counts
def get_clean_matrix(dic, mitigation,m_filter):
clean_matrix = np.ndarray((nb_states,nb_labels))
for i_state in range(nb_states):
rectified_counts = rectify_counts(dic,i_state, mitigation,m_filter) # get a rectified counts dictionary
clean_matrix[i_state, :] = list(rectified_counts.values())
clean_matrix = clean_matrix/clean_matrix.sum(axis=1, keepdims=True)
return clean_matrix
def obtain_pooled_PDM(mitigation):
PD_exper = get_clean_matrix(empirical_dic, mitigation=mitigation,
m_filter=meas_filter)
PD_test = get_clean_matrix(test_dic, mitigation=mitigation,
m_filter=meas_filter)
return PD_exper + PD_test
PD_tot = obtain_pooled_PDM(mitigation=False)/2
PD_totm = obtain_pooled_PDM(mitigation=True)/2
print(project_device, "circuit #",i_state_test,
"circuit depth:",real_circs[i_state_test].depth())
print('gates = ',real_circs[i_state_test].count_ops())
ideal_results = dict(zip(label_list,PD_ideal[i_state_test]))
real_results = dict(zip(label_list,PD_tot[i_state_test]))
mit_results = dict(zip(label_list,PD_totm[i_state_test]))
plot_histogram([ideal_results, real_results, mit_results],
legend=['ideal device','real results on\n '+ project_device, 'after measurement\n errror mitigation'],
color =["b","r","g"],
bar_labels=False,
figsize=(10.,5.))
# Matrix of distances between distributions
# Numbers in squares are "distances expressed per thousand", thus from 0 to 1000
Y_dist_tot = cdist(PD_tot,PD_ideal, metric='sqeuclidean')
# !cdist(1st matrix -> Y rows, 2d matrix -> Y columns)
# adapted from https://stackoverflow.com/questions/40887753/display-matrix-values-and-colormap:
fig, ax = plt.subplots(figsize=(10.,10.))
figsize=(20.,20.)
min_val, max_val = np.min(Y_dist_tot), np.max(Y_dist_tot)
ax.matshow(Y_dist_tot, cmap=plt.cm.Reds)
for i in range(20):
for j in range(20):
c = round(1000*Y_dist_tot[j,i])
ax.text(i, j, str(c), va='center', ha='center')
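# A minimal nearest-distribution classifier sketch (illustrative, not necessarily the
# classifier used in the study): assign each measured distribution to the ideal state
# whose distribution is closest; perfect recovery corresponds to a rate of 1.0.
predicted_state = np.argmin(Y_dist_tot, axis=1)
recovery_rate = np.mean(predicted_state == np.arange(nb_states))
print("nearest-distribution recovery rate:", recovery_rate)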
import qiskit.tools.jupyter
%qiskit_version_table
```
| true |
code
| 0.367951 | null | null | null | null |
|
```
# default_exp layers
```
# Useful Layers
> Some Pytorch layers needed for MetNet
```
#export
from fastai.vision.all import *
from fastai.text.all import WeightDropout, RNNDropout
```
## ConvLSTM / ConvGRU layers
### CGRU
https://github.com/jhhuang96/ConvLSTM-PyTorch/blob/master/ConvRNN.py
In a GRU cell the output and the hidden state are the same tensor, so the last output must equal the last hidden state (this is checked at the end of the notebook).
```
#export
class ConvGRUCell(Module):
def __init__(self, input_dim, hidden_dim, kernel_size=(3,3), bias=True, activation=F.tanh, batchnorm=False):
"""
Initialize ConvGRU cell.
Parameters
----------
input_dim: int
Number of channels of input tensor.
hidden_dim: int
Number of channels of hidden state.
kernel_size: (int, int)
Size of the convolutional kernel.
bias: bool
Whether or not to add the bias.
"""
self.input_dim = input_dim
self.hidden_dim = hidden_dim
self.kernel_size = kernel_size if isinstance(kernel_size, (tuple, list)) else [kernel_size]*2
self.padding = self.kernel_size[0] // 2, self.kernel_size[1] // 2
self.bias = bias
self.activation = activation
self.batchnorm = batchnorm
self.conv_zr = nn.Conv2d(in_channels=self.input_dim + self.hidden_dim,
out_channels=2 * self.hidden_dim,
kernel_size=self.kernel_size,
padding=self.padding,
bias=self.bias)
self.conv_h1 = nn.Conv2d(in_channels=self.input_dim,
out_channels=self.hidden_dim,
kernel_size=self.kernel_size,
padding=self.padding,
bias=self.bias)
self.conv_h2 = nn.Conv2d(in_channels=self.hidden_dim,
out_channels=self.hidden_dim,
kernel_size=self.kernel_size,
padding=self.padding,
bias=self.bias)
self.reset_parameters()
def forward(self, input, h_prev=None):
#init hidden on forward
if h_prev is None:
h_prev = self.init_hidden(input)
combined = torch.cat((input, h_prev), dim=1) # concatenate along channel axis
combined_conv = F.sigmoid(self.conv_zr(combined))
z, r = torch.split(combined_conv, self.hidden_dim, dim=1)
h_ = self.activation(self.conv_h1(input) + r * self.conv_h2(h_prev))
h_cur = (1 - z) * h_ + z * h_prev
return h_cur
def init_hidden(self, input):
bs, ch, h, w = input.shape
return one_param(self).new_zeros(bs, self.hidden_dim, h, w)
def reset_parameters(self):
#self.conv.reset_parameters()
nn.init.xavier_uniform_(self.conv_zr.weight, gain=nn.init.calculate_gain('tanh'))
self.conv_zr.bias.data.zero_()
nn.init.xavier_uniform_(self.conv_h1.weight, gain=nn.init.calculate_gain('tanh'))
self.conv_h1.bias.data.zero_()
nn.init.xavier_uniform_(self.conv_h2.weight, gain=nn.init.calculate_gain('tanh'))
self.conv_h2.bias.data.zero_()
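# note: the batchnorm branch below expects self.bn1 / self.bn2, which are not
# created in this cell, so batchnorm=True would fail as written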
if self.batchnorm:
self.bn1.reset_parameters()
self.bn2.reset_parameters()
cgru_cell = ConvGRUCell(16, 32, 3)
cgru_cell(torch.rand(1, 16, 16, 16)).shape
```
Let's check:
```
#export
class ConvGRU(nn.Module):
def __init__(self, input_dim, hidden_dim, kernel_size, n_layers, batch_first=True,
bias=True, activation=F.tanh, input_p=0.2, hidden_p=0.1, batchnorm=False):
super(ConvGRU, self).__init__()
self._check_kernel_size_consistency(kernel_size)
# Make sure that both `kernel_size` and `hidden_dim` are lists having len == num_layers
kernel_size = self._extend_for_multilayer(kernel_size, n_layers)
hidden_dim = self._extend_for_multilayer(hidden_dim, n_layers)
activation = self._extend_for_multilayer(activation, n_layers)
if not len(kernel_size) == len(hidden_dim) == len(activation) == n_layers:
raise ValueError('Inconsistent list length.')
self.input_dim = input_dim
self.hidden_dim = hidden_dim
self.kernel_size = kernel_size
self.n_layers = n_layers
self.batch_first = batch_first
self.bias = bias
self.input_p = input_p
self.hidden_p = hidden_p
cell_list = []
for i in range(self.n_layers):
cur_input_dim = self.input_dim if i == 0 else self.hidden_dim[i-1]
cell_list.append(ConvGRUCell(input_dim=cur_input_dim,
hidden_dim=self.hidden_dim[i],
kernel_size=self.kernel_size[i],
bias=self.bias,
activation=activation[i],
batchnorm=batchnorm))
self.cell_list = nn.ModuleList(cell_list)
self.input_dp = RNNDropout(input_p)
self.hidden_dps = nn.ModuleList([nn.Dropout(hidden_p) for l in range(n_layers)])
self.reset_parameters()
def __repr__(self):
s = f'ConvGru(in={self.input_dim}, out={self.hidden_dim[0]}, ks={self.kernel_size[0]}, '
s += f'n_layers={self.n_layers}, input_p={self.input_p}, hidden_p={self.hidden_p})'
return s
def forward(self, input, hidden_state=None):
"""
Parameters
----------
input_tensor:
5-D Tensor either of shape (t, b, c, h, w) or (b, t, c, h, w)
hidden_state:
Returns
-------
last_state_list, layer_output
"""
input = self.input_dp(input)
cur_layer_input = torch.unbind(input, dim=int(self.batch_first))
if hidden_state is None:
hidden_state = self.get_init_states(cur_layer_input[0])
seq_len = len(cur_layer_input)
layer_output_list = []
last_state_list = []
for l, (gru_cell, hid_dp) in enumerate(zip(self.cell_list, self.hidden_dps)):
h = hidden_state[l]
output_inner = []
for t in range(seq_len):
h = gru_cell(input=cur_layer_input[t], h_prev=h)
output_inner.append(h)
cur_layer_input = torch.stack(output_inner) #list to array
if l != self.n_layers - 1: cur_layer_input = hid_dp(cur_layer_input) # no hidden dropout after the last layer
last_state_list.append(h)
layer_output = torch.stack(output_inner, dim=int(self.batch_first))
last_state_list = torch.stack(last_state_list, dim=0)
return layer_output, last_state_list
def reset_parameters(self):
for c in self.cell_list:
c.reset_parameters()
def get_init_states(self, input):
init_states = []
for gru_cell in self.cell_list:
init_states.append(gru_cell.init_hidden(input))
return init_states
@staticmethod
def _check_kernel_size_consistency(kernel_size):
if not (isinstance(kernel_size, tuple) or (isinstance(kernel_size, list)
and all([isinstance(elem, tuple) for elem in kernel_size]))):
raise ValueError('`kernel_size` must be tuple or list of tuples')
@staticmethod
def _extend_for_multilayer(param, num_layers):
if not isinstance(param, list):
param = [param] * num_layers
return param
cgru = ConvGRU(16, 32, (3, 3), 2)
cgru
layer_output, last_state_list = cgru(torch.rand(1,10,16,6,6))
layer_output.shape
last_state_list.shape
layer_output, last_state_list = cgru(torch.rand(1,10,16,6,6), last_state_list)
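# sanity check of the claim above: for a GRU, the last output of the top layer
# equals that layer's final hidden state
assert torch.equal(layer_output[:, -1], last_state_list[-1])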
```
# Export -
```
# hide
from nbdev.export import *
notebook2script()
```
| true |
code
| 0.77518 | null | null | null | null |
|
## Custom camera projection
User defined ray distribution: ray origins and directions in camera textures.
```
%matplotlib notebook
import matplotlib.pyplot as plt
import numpy as np
from plotoptix import NpOptiX
from plotoptix.materials import m_flat
from plotoptix.geometry import PinnedBuffer
```
Create the raytracer:
```
width = 1000
height = 1000
def update_image(rt: NpOptiX) -> None:
img.set_data(rt._img_rgba)
plt.draw()
rt = NpOptiX(on_launch_finished=update_image, width=width, height=height, start_now=False)
```
Setup for the usual image making - camera, lights and ambient, postprocessing. This setup is needed only to show what the scene looks like; it is not needed for calculating and displaying the hit and distance information.
```
rt.set_param(min_accumulation_step=4, max_accumulation_frames=100)
rt.set_float("tonemap_exposure", 1.0)
rt.set_float("tonemap_gamma", 2.2)
rt.add_postproc("Gamma")
rt.setup_light("light1", pos=[-15, 10, 30], color=2, radius=8)
rt.setup_light("light2", pos=[-15, -10, 30], color=2, radius=8)
rt.set_ambient([0.03, 0.04, 0.05])
rt.set_background(0)
rt.setup_camera("cam1", eye=[20, 25, 60], fov=25)
```
Create and upload two surfaces. Use the default, *diffuse* material for the moment.
```
# a smooth surface that will be used as a source of rays later:
rxz = (-10, 10)
n = 500
xz = np.linspace(rxz[0], rxz[1], n)
X, Z = np.meshgrid(xz, xz)
Y1 = np.sin(np.sqrt(X**2 + Z**2)) - 1
rt.set_data_2d("surface1", Y1, c=[0.9, 0.8, 0.7],
range_x=rxz, range_z=rxz,
make_normals=True)
# second surface, placed above the first one (a coarser mesh, for fun)
rxz2 = (-15, 15)
n2 = 20
xz2 = np.linspace(rxz2[0], rxz2[1], n2)
X2, Z2 = np.meshgrid(xz2, xz2)
Y2 = np.sin(np.sqrt((1.5*X2+0.3)**2 + (Z2+0.3)**2)) + 2
rt.set_data_2d("surface2", Y2, c=[0.7, 0.8, 0.9],
range_x=rxz2, range_z=rxz2,
make_normals=False)
```
Show the output image here:
```
plt.figure(1)
img = plt.imshow(np.zeros((height,width)), cmap=plt.get_cmap("plasma"))
plt.tight_layout()
```
Start the ray tracing:
```
rt.start()
```
Have a look from another angle to see the other surface better:
```
rt.update_camera("cam1", eye=[20, -15, 60], fov=25)
```
Now, change the configuration to the **custom projection** camera.
- prepare textures with ray origins and directions
- use flat shading material for performance
- display hit info instead of rgb data
```
# use mesh data of the `surface1` geometry to create textures
with PinnedBuffer(rt.geometry_data["surface1"], "Positions") as P:
eye = np.zeros((n,n,4), dtype=np.float32)
eye[:,:,:3] = P.reshape(n,n,3)
rt.set_texture_2d("eye", eye)
with PinnedBuffer(rt.geometry_data["surface1"], "Vectors") as N:
rdir = np.zeros((n,n,4), dtype=np.float32)
rdir[:,:,:3] = N.reshape(n,n,3)
rt.set_texture_2d("dir", rdir)
rt.setup_camera("cam2", cam_type="CustomProjXYZtoDir", textures=["eye", "dir"])
```
Display distance from the ray origin to the first hit:
```
# NOTE: no need for multiple passes if only the distance is calculated, so set up just 1 pass:
rt.set_param(min_accumulation_step=1, max_accumulation_frames=1)
# flat shading material - no secondary rays are traced
rt.setup_material("flat", m_flat)
rt.update_data_2d("surface1", mat="flat")
rt.update_data_2d("surface2", mat="flat")
# and the new callback function
def update_image(rt: NpOptiX) -> None:
dist = rt._hit_pos[:,:,3].reshape(rt._height, rt._width) # hit distance from the ray origin
fid = rt._geo_id[:,:,1].reshape(rt._height, rt._width) # face id data, or empty region signature
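# rays that missed all geometry carry the 0xFFFFFFFF signature in _geo_id,
# so they are excluded when computing the display range below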
dmax = np.amax(dist[fid < 0xFFFFFFFF])
dmin = np.amin(dist[fid < 0xFFFFFFFF])
img.set_data(dist) # update figure using distance data
img.set_clim(vmin=dmin, vmax=dmax)
plt.draw()
rt.set_launch_finished_cb(update_image)
```
Close the ray-tracer.
```
rt.close()
```
| true |
code
| 0.601594 | null | null | null | null |