# Sentiment Analysis
## Using XGBoost in SageMaker
_Deep Learning Nanodegree Program | Deployment_
---
In this example of using Amazon's SageMaker service we will construct a tree-based (XGBoost) model to predict the sentiment of a movie review. You may have seen a version of this example in a previous lesson, although there it was done using the sklearn package. Instead, we will be using the XGBoost package as it is provided to us by Amazon.
## Instructions
Some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `# TODO: ...` comment. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.
> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by clicking it (double-clicking for Markdown cells) or by pressing **Enter** while it is highlighted.
## Step 1: Downloading the data
The dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.
> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.
We begin by using some Jupyter Notebook magic to download and extract the dataset.
```
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
```
## Step 2: Preparing the data
The data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
```
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
# Return the unified training data, test data, training labels, and test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
```
## Step 3: Processing the data
Now that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be usable by our machine learning algorithm. To begin with, we remove any HTML formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
```
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Remove non-alphanumeric characters and convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [PorterStemmer().stem(w) for w in words] # stem
return words
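# Quick illustrative check (not part of the original notebook): the preprocessing
# strips HTML, lower-cases, drops stopwords, and stems each remaining word, e.g.
# the call below should return something like ['movi', 'amaz', 'truli', 'best'].
print(review_to_words("This movie was <br/>AMAZING, truly the best!"))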
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
words_train = [review_to_words(review) for review in data_train]
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
```
### Extract Bag-of-Words features
For the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set, so our transformer can only use the training set to construct a representation.
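As a quick illustration of what the Bag-of-Words transform produces (a toy example with made-up tokens, not the real 5,000-word vocabulary used below), each document becomes a vector of token counts over a vocabulary learned from the training documents only:
```
from sklearn.feature_extraction.text import CountVectorizer

# Toy documents that are already tokenized, mirroring the output of review_to_words
toy_train = [["great", "movi", "great"], ["bad", "movi"]]
toy_vec = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)
print(toy_vec.fit_transform(toy_train).toarray())  # per-document counts over the learned vocabulary
print(toy_vec.vocabulary_)                         # token -> column index
```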
```
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib  # NOTE: on newer scikit-learn versions, use `import joblib` directly
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: .toarray() converts the vectorizer's sparse output into dense NumPy arrays
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
```
## Step 4: Classification using XGBoost
Now that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker.
### Writing the dataset
The XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts: the data we will train the model with and a validation set. Then, we will write those datasets to files and upload the files to S3. In addition, we will write the test set input to a file and upload that file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
```
import pandas as pd
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
test_y = pd.DataFrame(test_y)
test_X = pd.DataFrame(test_X)
```
The documentation for the XGBoost algorithm in SageMaker specifies that the saved datasets should contain no headers or index column and that, for the training and validation data, the label should appear first in each row.
For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
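As a concrete illustration of that layout (toy, made-up values rather than the real bag-of-words features), a label-first row with no header or index looks like this:
```
import pandas as pd

# Toy illustration: two samples with a binary label and three feature counts.
toy_y = pd.DataFrame([1, 0])
toy_X = pd.DataFrame([[0, 3, 1], [2, 0, 0]])
# Label first, then features, no header and no index -- the format expected
# by the SageMaker XGBoost algorithm for training and validation data.
print(pd.concat([toy_y, toy_X], axis=1).to_csv(header=False, index=False))
```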
```
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/xgboost'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# Next, save the test data to test.csv in the data_dir directory. Note that we do not save the associated ground truth
# labels, instead we will use them later to compare with our model output.
test_X.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set train_X, val_X, train_y and val_y to None.
train_X = val_X = train_y = val_y = None
```
### Uploading Training / Validation files to S3
Amazon's S3 service allows us to store files that can be accessed both by built-in training models, such as the XGBoost model we will be using, and by custom models such as the one we will see a little later.
For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low-level functionality of SageMaker, which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high-level functionality, in which certain choices have been made on the user's behalf. The low-level approach benefits from allowing the user a great deal of flexibility, while the high-level approach makes development much quicker. For our purposes we will opt for the high-level approach, although using the low-level approach is certainly an option.
Recall the method `upload_data()`, which is a member of the object representing our current SageMaker session. This method uploads the data to the default bucket (which is created if it does not exist) under the path described by the `key_prefix` variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.
For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
```
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-xgboost'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
```
### (TODO) Creating a hypertuned XGBoost model
Now that the data has been uploaded it is time to create the XGBoost model. As in the Boston Housing notebook, the first step is to create an estimator object which will be used as the *base* of your hyperparameter tuning job.
```
from sagemaker import get_execution_role
# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')
# TODO: Create a SageMaker estimator using the container location determined in the previous cell.
# It is recommended that you use a single training instance of type ml.m4.xlarge. It is also
# recommended that you use 's3://{}/{}/output'.format(session.default_bucket(), prefix) as the
# output path.
xgb = sagemaker.estimator.Estimator(container, # The name of the training container
role, # The IAM role to use (our current role in this case)
train_instance_count=1, # The number of instances to use for training
train_instance_type='ml.m4.xlarge', # The type of instance to use for training
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
# Where to save the output (the model artifacts)
sagemaker_session=session) # The current SageMaker session
# TODO: Set the XGBoost hyperparameters in the xgb object. Don't forget that in this case we have a binary
# label so we should be using the 'binary:logistic' objective.
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=200)
```
### (TODO) Create the hyperparameter tuner
Now that the base estimator has been set up, we need to construct a hyperparameter tuner object which we will use to request that SageMaker construct a hyperparameter tuning job.
**Note:** Training a single sentiment analysis XGBoost model takes longer than training a Boston Housing XGBoost model, so if you don't want the hyperparameter tuning job to take too long, make sure not to set the total number of models (jobs) too high.
```
# First, make sure to import the relevant objects used to construct the tuner
from sagemaker.tuner import IntegerParameter, ContinuousParameter, HyperparameterTuner
# TODO: Create the hyperparameter tuner object
xgb_hyperparameter_tuner = HyperparameterTuner(estimator = xgb, # The estimator object to use as the basis for the training jobs.
objective_metric_name = 'validation:rmse', # The metric used to compare trained models.
objective_type = 'Minimize', # Whether we wish to minimize or maximize the metric.
max_jobs = 6, # The total number of models to train
max_parallel_jobs = 3, # The number of models to train in parallel
hyperparameter_ranges = {
'max_depth': IntegerParameter(3, 12),
'eta' : ContinuousParameter(0.05, 0.5),
'min_child_weight': IntegerParameter(2, 8),
'subsample': ContinuousParameter(0.5, 0.9),
'gamma': ContinuousParameter(0, 10),
})
```
### Fit the hyperparameter tuner
Now that the hyperparameter tuner object has been constructed, it is time to fit the various models and find the best performing model.
```
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')
xgb_hyperparameter_tuner.fit({'train': s3_input_train, 'validation': s3_input_validation})
```
Remember that the tuning job is constructed and run in the background, so if we want to see the progress of the tuning job we need to call the `wait()` method.
```
xgb_hyperparameter_tuner.wait()
```
### (TODO) Testing the model
Now that we've run our hyperparameter tuning job, it's time to see how well the best performing model actually performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not real time. That is, we don't necessarily need to use our model's results immediately; instead, we can perform inference on a large number of samples. An example of this in industry might be producing an end-of-month report. This method of inference is also useful to us because it means we can perform inference on our entire test set.
Remember that in order to create a transformer object to perform the batch transform job, we need a trained estimator object. We can get one using the `attach()` method, which creates an estimator object attached to the best training job.
```
xgb_hyperparameter_tuner.best_training_job()
# TODO: Create a new estimator object attached to the best training job found during hyperparameter tuning
xgb_attached = sagemaker.estimator.Estimator.attach(xgb_hyperparameter_tuner.best_training_job())
```
Now that we have an estimator object attached to the correct training job, we can proceed as we normally would and create a transformer object.
```
# TODO: Create a transformer object from the attached estimator. Using an instance count of 1 and an instance type of ml.m4.xlarge
# should be more than enough.
xgb_transformer = xgb_attached.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
```
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
```
# TODO: Start the transform job. Make sure to specify the content type and the split type of the test data.
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
```
Currently the transform job is running, but it is doing so in the background. Since we wish to wait until the transform job is done and would like a bit of feedback, we can run the `wait()` method.
```
xgb_transformer.wait()
```
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
```
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
```
The last step is to read in the output from our model, convert it into something a little more usable (in this case we want the sentiment to be either `1`, positive, or `0`, negative), and then compare it to the ground-truth labels.
```
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
```
## Optional: Clean up
The default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
```
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
```
# <center> COVID-19 Spread Simulation </center>
## <center> https://github.com/DmitrySerg/COVID-19 </center>
Let's play a little Plague Inc.

**Author: Dmitry Sergeev**
**Senior Data Scientist @ ŌURA Health, Head of Data Science Programme @ Otus**
**Telegram: @dmitryserg**
Data description: This project uses a collection of datasets describing the current outbreak of coronavirus disease (COVID-19). The data includes: world airport locations, connections, and the estimated number of flights per month between them (taken from https://www.flightconnections.com/); estimated population at the country and city level (http://worldpopulationreview.com/world-cities/); and the current outbreak monitoring data provided by the Johns Hopkins University Center for Systems Science and Engineering (JHU CSSE) on the number of confirmed, recovered, and death cases (https://github.com/CSSEGISandData/COVID-19/tree/master/csse_covid_19_data).
```
import pandas as pd
import numpy as np
import networkx as nx
import pickle
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.graph_objects as go
from tqdm import tqdm_notebook
from multiprocessing import Pool
from collections import ChainMap
from joblib import Parallel, delayed
from scipy.special import logit, expit
import warnings
warnings.filterwarnings("ignore")
from sir_model import SIR
airport_df = pd.read_csv("../data/airport_df_preprocessed.csv", index_col=0)
airport_df.head()
connections = pd.read_csv("../data/connections_preprocessed.csv", index_col=0)
connections.head()
```
# <center> Network infection spread </center>
To model the infection spread through the airline traffic network we need to calculate the probability that a given susceptible city is infected by its neighbouring infected cities on a given day.
We consider a city infected if at least one infected plane has landed in it. Hence, we first need to calculate the probability that a plane coming from an infected city is itself infected. From that, we can calculate the probability that the city becomes infected.
Each simulated day, we recalculate the probabilities of infection spread based on the estimated size of the infected population in the already-infected cities. That approach proved to be surprisingly accurate and was able to "predict" major COVID-19 outbreaks, e.g. in Western Europe or the USA.
---
$$P(\text{plane infected}) = \frac{I}{N}$$
$$P(\text{new city infected}) = 1 - P(\text{all incoming planes are healthy}) =$$
$$= 1 - P(\text{all planes from city A are healthy}) \cdot P(\text{all planes from city B are healthy}) \cdot \ldots =$$
$$= 1 - \left[\left(1 - \frac{I_A}{N_A}\right)^{f_A} \cdot \left(1 - \frac{I_B}{N_B}\right)^{f_B} \cdot \ldots\right]$$
$I$ - number of infected in the city, $N$ - total population of the city, $f$ - flights from the city per day
```
def prob_infected_plane(I, N):
"""
I - number of infected in the city
N - total population in the city
"""
return I/N
def prob_city_infected(infectious_sources, populations_sources, daily_flights_sources):
"""
Calculates the probability that the city will be infected by any of the incoming planes
Formula used:
P(new city infected) = 1 - P(all incoming planes are healthy) =
= 1 - [P(all planes from city A are healthy) * P(all planes from city B are healthy) * ...] =
= 1 - [(1 - I_A/N_A) ^ f_A * (1 - I_B/N_B) ^ f_B * ...]
"""
prob_all_planes_healthy = 1
for I, N, f in zip(infectious_sources, populations_sources, daily_flights_sources):
prob_all_planes_healthy *= (1-prob_infected_plane(I, N)) ** f
return 1 - prob_all_planes_healthy
prob_city_infected([10, 20], [10000, 20000], [100, 50])
```
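For the example call above, each source city has a 0.1% infected share (10/10,000 and 20/20,000), so the susceptible city stays healthy only if all $100 + 50 = 150$ incoming planes are healthy: $P(\text{infected}) = 1 - 0.999^{150} \approx 0.14$.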
# <center> City infection spread (SIR) </center>
To model the spread of infection within a particular city we use a homogeneous [Susceptible-Infectious-Recovered/Removed (SIR)](https://www.maa.org/press/periodicals/loci/joma/the-sir-model-for-spread-of-disease-the-differential-equation-model) model with several assumptions. Although quite simplistic, the model proves to be a reasonable approximation of the COVID-19 infection spread, for several reasons (a minimal sketch of such a model follows the list below):
1. A person becomes infectious already during the incubation period (source: [Johns Hopkins University](https://www.jhsph.edu/news/news-releases/2020/new-study-on-COVID-19-estimates-5-days-for-incubation-period.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+JHSPHNews+%28Public+Health+News+Headlines+from+Johns+Hopkins%29)). That means there is a direct transition from Susceptible to Infectious, bypassing the Exposed step of the SEIR model.
2. There is no vaccine at the moment, so it is impossible to prevent the disease from spreading using traditional herd-immunization strategies. For the SIR model this means that the entire city population is susceptible unless strict quarantine is enforced (more on that later).
3. The long incubation period (up to 14 days, ibid.) and the asymptomatic course for the majority of the infected allow the disease to spread undetected until the first symptomatic infections are detected and tested. That once again aligns with the initial dynamics of the SIR model.
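The `SIR` class used below is imported from `sir_model.py`, which is not included in this notebook. As a rough, illustrative sketch only (assuming the real class exposes roughly `SIR(N, I0, beta, gamma, days)` with `.run()` returning the `S`, `I`, `R` trajectories; the actual implementation may differ), a discrete-time SIR with a per-day `beta` schedule could look like this:
```
import numpy as np

class SIRSketch:
    """Minimal discrete-time SIR model (illustrative sketch, not the notebook's sir_model.SIR)."""
    def __init__(self, N, I0, beta, gamma, days):
        self.N, self.I0, self.gamma, self.days = N, I0, gamma, days
        beta = np.asarray(beta, dtype=float)
        # Accept either a scalar beta or a per-day schedule (this is what enables dynamic R)
        self.beta = np.repeat(beta, days) if beta.ndim == 0 else beta[:days]

    def run(self):
        S, I, R = [self.N - self.I0], [self.I0], [0.0]
        for t in range(self.days - 1):
            new_infections = self.beta[t] * S[-1] * I[-1] / self.N
            new_recoveries = self.gamma * I[-1]
            S.append(S[-1] - new_infections)
            I.append(I[-1] + new_infections - new_recoveries)
            R.append(R[-1] + new_recoveries)
        return np.array(S), np.array(I), np.array(R)
```
Allowing `beta` to vary by day is what makes the dynamic reproduction-number modelling described next possible.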
The major idea that we've implemented to address the changes in the infection rate due to social distancing and quarantine measures is dynamically modelling the reproduction number **R**. The idea is straightforward: adjust **R** in response to the preventive measures. As a baseline, we took the Wuhan example of preventive measures and their approximate timelines.
- During the first days, the infection spreads largely undetected, hence the **R** value is close to its upper bound.
- On average, after roughly **14 days**, the first social distancing measures come into force, which drives **R** down to its average values.
- Finally, after approximately a one-month period, strict quarantine measures are enforced, including travel bans, area lockdowns, etc. This results in the **R** value dropping to its minimum.
```
def calculate_reproduction_number(
max_R, min_R,
simulation_days,
intervention_start=30,
intervention_period=14
):
"""
:max_R: maximal possible R value during the simulation
:min_R: minimal possible R value during the simulation
:simulation_days: number of days in the simulation run
:intervention_start: number of days after which R starts going down
:intervention_period: number of days it takes from the intervention start
to drive R value to its minimal values
"""
reproduction_n = np.repeat(max_R, intervention_start)
reproduction_intervention = expit(np.linspace(-5, 3, num=intervention_period))[::-1]
reproduction_intervention = reproduction_intervention * (max_R - min_R) + min_R
reproduction_n = np.concatenate(
(
reproduction_n,
reproduction_intervention,
np.repeat(min_R, simulation_days)
)
)
return reproduction_n
```
# Example of dynamic R modelling
```
max_R = 5.
min_R = 1.
simulation_days=100
intervention_start=30
intervention_period=30
example_R = calculate_reproduction_number(
max_R, min_R,
simulation_days=simulation_days,
intervention_start=intervention_start,
intervention_period=intervention_period
)
plt.figure(figsize=(10, 6))
plt.plot(example_R[:80], linewidth=2, color='red')
plt.title("Dynamic Reprodiction number modelling")
plt.text(0, max_R-0.2, f"R0: {max_R}")
plt.text(70, min_R+0.2, f"R60: {min_R}")
plt.vlines(intervention_start, min_R, max_R,
label='Intervention start', linestyles='dashed')
plt.vlines(
intervention_start+intervention_period, min_R, max_R,
label='Intervention period', linestyles='dashed')
plt.ylabel("R")
plt.xlabel("Days from start of infection")
plt.legend()
sns.despine()
plt.show()
```
# Comparing to Johns Hopkins Data
```
main_link = 'https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/'
CONFIRMED = pd.read_csv(main_link+'time_series_covid19_confirmed_global.csv' )
DEATHS = pd.read_csv(main_link+'time_series_covid19_deaths_global.csv')
RECOVERED = pd.read_csv(main_link+'time_series_covid19_recovered_global.csv')
country = 'China'
confirmed_country = CONFIRMED[CONFIRMED['Country/Region']==country].sum()[4:].values
first_case = np.argwhere(confirmed_country)[0][0]
confirmed_country = confirmed_country[first_case:]
deaths_country = DEATHS[DEATHS['Country/Region']==country].sum()[4:].values[first_case:]
recovered_country = RECOVERED[RECOVERED['Country/Region']==country].sum()[4:].values[first_case:]
plt.figure(figsize=(14, 7))
plt.title(country, fontsize=14)
plt.plot(
confirmed_country,
linewidth=2,
label='Confirmed',
color='red'
)
plt.plot(
deaths_country,
linewidth=2,
label='Deaths',
color='black'
)
plt.plot(
recovered_country,
linewidth=2,
label='Recovered',
color='green'
)
plt.legend()
plt.xlabel('Days from start of infection')
plt.ylabel("Reported number of cases")
plt.grid(alpha=0.4)
sns.despine()
plt.show()
```
# Important assumption
Only about 10% of cases, on average, were reported in the official China statistics (an estimate from Johns Hopkins University). Accordingly, in the cell below the 444 initially confirmed cases are scaled up to 444 / 0.1 = 4,440 assumed actual infections.
```
max_R0 = 4
min_R0 = 1
SIMULATION_DAYS = 150
beta = calculate_reproduction_number(
max_R0, min_R0, SIMULATION_DAYS, intervention_start=30, intervention_period=14
)
# Dividing the initial number of cases by 0.1 to adjust for the unobserved cases
sir_model = SIR(8364977, I0=444/0.1, beta=beta/14, gamma=0.0576*2, days=SIMULATION_DAYS)
S, I, R = sir_model.run()
sir_model.plot_results(S, I, R)
fig, axs = plt.subplots(2, 2, figsize=(15, 8))
axs[0, 0].plot(confirmed_country)
axs[0, 0].set_title("Confirmed actual")
sns.despine()
axs[0, 1].plot(I)
axs[0, 1].set_title("Infected predicted")
axs[0, 1].set_xlim(0, len(confirmed_country))
axs[1, 0].plot(deaths_country+recovered_country)
axs[1, 0].set_title("Recovered + deaths actual")
axs[1, 0].set_xlabel("Days from start of infection")
axs[1, 1].plot(R)
sns.despine()
axs[1, 1].set_title("Recovered + deaths predicted")
axs[1, 1].set_xlim(0, len(confirmed_country))
axs[1, 1].set_xlabel("Days from start of infection")
plt.show()
```
# <center> COVID-19 Spread Simulation </center>
```
# taking unique source-destination pairs
connections = connections.groupby(["source_airport", 'dest_airport'], as_index=False).agg({
"destination_flights":np.nansum,
"lat_source":min,
"long_source":min,
"lat_dest":min,
"long_dest":min
})
```
## NetworkX graph
```
connections_graph = nx.from_pandas_edgelist(
connections,
source = 'source_airport',
target = 'dest_airport',
create_using = nx.DiGraph()
)
len(connections_graph.nodes)
```
## Auxiliary data models
```
def city_neighbours(city_name, connections_graph=connections_graph):
city_airports = CITY_TO_AIRPORT_CODE[city_name]
neighbours = []
for city in city_airports:
neighbours.extend(list(connections_graph.neighbors(city)))
return neighbours
AIRPORT_CODE_TO_CITY = airport_df[['City', 'IATA']].set_index("IATA").to_dict()['City']
CITY_TO_AIRPORT_CODE = airport_df[['City', 'IATA']].groupby("City")['IATA'].unique().to_dict()
CITY_TO_AIRPORT_CODE = {k:list(v) for k, v in CITY_TO_AIRPORT_CODE.items()}
CITY_POPULATION = airport_df[['City', 'city_population']].set_index("City").to_dict()['city_population']
CITY_NEIGHBOURS = {}
for city in airport_df.City.unique():
try:
CITY_NEIGHBOURS[city] = city_neighbours(city)
except:
continue
NUMBER_OF_FLIGHTS = dict(zip(tuple(
zip(
connections.source_airport,
connections.dest_airport
)),
connections.destination_flights
))
```
## Simulation functions
```
def get_city_neighbours(city_name):
return CITY_NEIGHBOURS[city_name]
def get_healthy_airports(airports):
airports = list(set(airports) - set(INFECTED_AIRPORTS))
return airports
def get_infected_airports(airports):
airports = list(set(airports).intersection(set(INFECTED_AIRPORTS)))
return airports
def airports_to_cities(airports):
return list(set([AIRPORT_CODE_TO_CITY[code] for code in airports]))
def get_number_of_flights(source, destination):
if not isinstance(source, list):
source = [source]
if not isinstance(destination, list):
destination = [destination]
flights = 0
for source in source:
for dest in destination:
flights+=NUMBER_OF_FLIGHTS[(source, dest)]
return flights
def get_infected_number(city_name, simulation_day):
infection_day = INFECTED_CITIES[city_name]['day']
return INFECTED_CITIES[city_name]['infected'][simulation_day-infection_day]
def calculate_infection_prob(current_susceptible_city, DAY):
current_susceptible_airports = CITY_TO_AIRPORT_CODE[current_susceptible_city]
current_infected_neighbours = get_infected_airports(get_city_neighbours(current_susceptible_city))
flights = []
infected_populations = []
total_populations = []
for infected_neighbour in current_infected_neighbours:
infected_city_name = AIRPORT_CODE_TO_CITY[infected_neighbour]
flights.append(get_number_of_flights(infected_neighbour, current_susceptible_airports))
infected_populations.append(get_infected_number(infected_city_name, DAY))
total_populations.append(CITY_POPULATION[infected_city_name])
infection_probability = prob_city_infected(infected_populations, total_populations, flights)
return infection_probability
def run_neighbour_simulation(current_susceptible_city, current_infection_source_city, DAY):
infection_probability = calculate_infection_prob(current_susceptible_city, DAY)
if np.random.random() < infection_probability:
S, I, R = run_sir(
city_population=CITY_POPULATION[current_susceptible_city],
first_infected_number=100
)
return {current_susceptible_city:{
'day':DAY,
'infected':I,
'susceptible':S,
'recovered':R,
'from': current_infection_source_city
}}
def run_infectious_city_simulation(current_infection_source_city, DAY):
neighbour_airports = get_city_neighbours(current_infection_source_city)
susceptible_airports = get_healthy_airports(neighbour_airports)
susceptible_cities = airports_to_cities(susceptible_airports)
results = []
for current_susceptible_city in tqdm_notebook(
susceptible_cities,
leave=False,
desc='susceptible',
disable=True
):
try:
results.append(run_neighbour_simulation(current_susceptible_city, current_infection_source_city, DAY))
except:
continue
results = [res for res in results if res]
return results
```
## Configurations for scenarios
- Realistic: the intervention starts after 30 days, and it takes 14 days to enforce a lockdown
- Optimistic: the intervention starts after 14 days, and it takes 7 days to enforce a lockdown
- Pessimistic: the intervention starts after 60 days, and it takes 30 days to enforce a lockdown (the values hard-coded in the configuration cell below; see the sketch after this list)
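As a convenience, the three scenarios above could be expressed as a small lookup table. This is a hypothetical helper (not part of the original notebook); `SCENARIO_PARAMS` and `example_schedule` are illustrative names, and the configuration cell that follows simply hard-codes the pessimistic values directly:
```
# Hypothetical helper: map the scenario names above to their intervention parameters.
SCENARIO_PARAMS = {
    "optimistic":  dict(intervention_start=14, intervention_period=7),
    "realistic":   dict(intervention_start=30, intervention_period=14),
    "pessimistic": dict(intervention_start=60, intervention_period=30),
}

# Example: build the dynamic R schedule for a named scenario.
example_schedule = calculate_reproduction_number(
    max_R=4, min_R=1, simulation_days=200, **SCENARIO_PARAMS["pessimistic"]
)
```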
```
NUMBER_OF_SIMULATIONS = 10
SIMULATION_DAYS = 200
VERBOSE = True
max_R = 4
min_R = 1
GAMMA = 0.0576*2
REPRODUCTION_NUMBER = calculate_reproduction_number(
max_R, min_R,
SIMULATION_DAYS,
intervention_start=60,
intervention_period=30)
def run_sir(
city_population, first_infected_number,
reproduction_number=REPRODUCTION_NUMBER,
gamma=GAMMA, days=SIMULATION_DAYS
):
sir_model = SIR(
city_population, I0=first_infected_number, beta=reproduction_number/14, gamma=gamma, days=days
)
S, I, R = sir_model.run()
return S, I, R
```
# Simulation run
```
INFECTED_CITIES = {}
INFECTED_AIRPORTS = []
NEW_INFECTED = {}
for simulation_run in tqdm_notebook(range(NUMBER_OF_SIMULATIONS), leave=False):
# Always start at Wuhan on day 0
S, I, R = run_sir(CITY_POPULATION['Wuhan'], 444/0.1)
INFECTED_CITIES = {'Wuhan':{'day':0, 'infected':I, 'susceptible':S, 'recovered':R, 'from':'Wuhan'}}
INFECTED_AIRPORTS = ['WUH']
for DAY in tqdm_notebook(range(0, SIMULATION_DAYS), desc='Day', leave=False):
CHECKED_SUSCEPTIBLE_CITIES = []
for current_infection_source_city in tqdm_notebook(
INFECTED_CITIES.keys(),
leave=False,
desc='infection sources',
disable=not VERBOSE
):
results = run_infectious_city_simulation(current_infection_source_city, DAY)
NEW_INFECTED.update(dict(ChainMap(*results)))
INFECTED_CITIES.update(NEW_INFECTED)
NEW_INFECTED = {}
INFECTED_AIRPORTS = sum([CITY_TO_AIRPORT_CODE[city] for city in INFECTED_CITIES.keys()], [])
with open(f"../simulation_data/INFECTED_CITIES_mild_{simulation_run}", 'wb') as f:
pickle.dump(INFECTED_CITIES, f)
```
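Once a run has finished, each saved pickle can be loaded back for analysis. A minimal sketch, assuming at least one `INFECTED_CITIES_mild_*` file was written by the loop above:
```
# Reload one saved simulation run and inspect when and from where each city got infected.
import pickle

with open("../simulation_data/INFECTED_CITIES_mild_0", "rb") as f:
    infected_cities = pickle.load(f)

for city, info in list(infected_cities.items())[:5]:
    print(city, "infected on day", info["day"], "from", info["from"])
```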
|
github_jupyter
|
import pandas as pd
import numpy as np
import networkx as nx
import pickle
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.graph_objects as go
from tqdm import tqdm_notebook
from multiprocessing import Pool
from collections import ChainMap
from joblib import Parallel, delayed
from scipy.special import logit, expit
import warnings
warnings.filterwarnings("ignore")
from sir_model import SIR
airport_df = pd.read_csv("../data/airport_df_preprocessed.csv", index_col=0)
airport_df.head()
connections = pd.read_csv("../data/connections_preprocessed.csv", index_col=0)
connections.head()
def prob_infected_plane(I, N):
"""
I - number of infected in the city
N - total population in the city
"""
return I/N
def prob_city_infected(infectious_sources, populations_sources, daily_flights_sources):
"""
Calculates the probability that the city will be infected by any of the incoming planes
Formula used:
P(new city infected) = 1 - P(all incoming places are healthy) = \
= P(all planes from city A are healthy) * P(all planes from city B are healthy) * ...
= 1 - [(1 - I_A/N_A) ^ f_A * (1 - I_B/N_B) ^ f_B * ...]
"""
prob_all_planes_healthy = 1
for I, N, f in zip(infectious_sources, populations_sources, daily_flights_sources):
prob_all_planes_healthy *= (1-prob_infected_plane(I, N)) ** f
return 1 - prob_all_planes_healthy
prob_city_infected([10, 20], [10000, 20000], [100, 50])
def calculate_reproduction_number(
max_R, min_R,
simulation_days,
intervention_start=30,
intervention_period=14
):
"""
:max_R: maximal possible R value during the simulation
:min_R: minimal possible R value during the simulation
:simulation_days: number of days in the simulation run
:intervention_start: number of days after which R starts going down
:intervention_period: number of days it takes from the intervention start
to drive R value to its minimal values
"""
reproduction_n = np.repeat(max_R, intervention_start)
reproduction_intervention = expit(np.linspace(-5, 3, num=intervention_period))[::-1]
reproduction_intervention = reproduction_intervention * (max_R - min_R) + min_R
reproduction_n = np.concatenate(
(
reproduction_n,
reproduction_intervention,
np.repeat(min_R, simulation_days)
)
)
return reproduction_n
max_R = 5.
min_R = 1.
simulation_days=100
intervention_start=30
intervention_period=30
example_R = calculate_reproduction_number(
max_R, min_R,
simulation_days=simulation_days,
intervention_start=intervention_start,
intervention_period=intervention_period
)
plt.figure(figsize=(10, 6))
plt.plot(example_R[:80], linewidth=2, color='red')
plt.title("Dynamic Reprodiction number modelling")
plt.text(0, max_R-0.2, f"R0: {max_R}")
plt.text(70, min_R+0.2, f"R60: {min_R}")
plt.vlines(intervention_start, min_R, max_R,
label='Intervention start', linestyles='dashed')
plt.vlines(
intervention_start+intervention_period, min_R, max_R,
label='Intervention period', linestyles='dashed')
plt.ylabel("R")
plt.xlabel("Days from start of infection")
plt.legend()
sns.despine()
plt.show()
main_link = 'https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/'
CONFIRMED = pd.read_csv(main_link+'time_series_covid19_confirmed_global.csv' )
DEATHS = pd.read_csv(main_link+'time_series_covid19_deaths_global.csv')
RECOVERED = pd.read_csv(main_link+'time_series_covid19_recovered_global.csv')
country = 'China'
confirmed_country = CONFIRMED[CONFIRMED['Country/Region']==country].sum()[4:].values
first_case = np.argwhere(confirmed_country)[0][0]
confirmed_country = confirmed_country[first_case:]
deaths_country = DEATHS[DEATHS['Country/Region']==country].sum()[4:].values[first_case:]
recovered_country = RECOVERED[RECOVERED['Country/Region']==country].sum()[4:].values[first_case:]
plt.figure(figsize=(14, 7))
plt.title(country, fontsize=14)
plt.plot(
confirmed_country,
linewidth=2,
label='Confirmed',
color='red'
)
plt.plot(
deaths_country,
linewidth=2,
label='Deaths',
color='black'
)
plt.plot(
recovered_country,
linewidth=2,
label='Recovered',
color='green'
)
plt.legend()
plt.xlabel('Days from start of infection')
plt.ylabel("Reported number of cases")
plt.grid(alpha=0.4)
sns.despine()
plt.show()
max_R0 = 4
min_R0 = 1
SIMULATION_DAYS = 150
beta = calculate_reproduction_number(
max_R0, min_R0, SIMULATION_DAYS, intervention_start=30, intervention_period=14
)
# Dividing the initial number of cases by 0.1 to adjust for the unobserved cases
sir_model = SIR(8364977, I0=444/0.1, beta=beta/14, gamma=0.0576*2, days=SIMULATION_DAYS)
S, I, R = sir_model.run()
sir_model.plot_results(S, I, R)
fig, axs = plt.subplots(2, 2, figsize=(15, 8))
axs[0, 0].plot(confirmed_country)
axs[0, 0].set_title("Confirmed actual")
sns.despine()
axs[0, 1].plot(I)
axs[0, 1].set_title("Infected predicted")
axs[0, 1].set_xlim(0, len(confirmed_country))
axs[1, 0].plot(deaths_country+recovered_country)
axs[1, 0].set_title("Recovered + deaths actual")
axs[1, 0].set_xlabel("Days from start of infection")
axs[1, 1].plot(R)
sns.despine()
axs[1, 1].set_title("Recovered + deaths predicted")
axs[1, 1].set_xlim(0, len(confirmed_country))
axs[1, 1].set_xlabel("Days from start of infection")
plt.show()
# taking unique source-destination pairs
connections = connections.groupby(["source_airport", 'dest_airport'], as_index=False).agg({
"destination_flights":np.nansum,
"lat_source":min,
"long_source":min,
"lat_dest":min,
"long_dest":min
})
connections_graph = nx.from_pandas_edgelist(
connections,
source = 'source_airport',
target = 'dest_airport',
create_using = nx.DiGraph()
)
len(connections_graph.nodes)
def city_neighbours(city_name, connections_graph=connections_graph):
city_airports = CITY_TO_AIRPORT_CODE[city_name]
neighbours = []
for city in city_airports:
neighbours.extend(list(connections_graph.neighbors(city)))
return neighbours
AIRPORT_CODE_TO_CITY = airport_df[['City', 'IATA']].set_index("IATA").to_dict()['City']
CITY_TO_AIRPORT_CODE = airport_df[['City', 'IATA']].groupby("City")['IATA'].unique().to_dict()
CITY_TO_AIRPORT_CODE = {k:list(v) for k, v in CITY_TO_AIRPORT_CODE.items()}
CITY_POPULATION = airport_df[['City', 'city_population']].set_index("City").to_dict()['city_population']
CITY_NEIGHBOURS = {}
for city in airport_df.City.unique():
try:
CITY_NEIGHBOURS[city] = city_neighbours(city)
except:
continue
NUMBER_OF_FLIGHTS = dict(zip(tuple(
zip(
connections.source_airport,
connections.dest_airport
)),
connections.destination_flights
))
def get_city_neighbours(city_name):
return CITY_NEIGHBOURS[city_name]
def get_healthy_airports(airports):
airports = list(set(airports) - set(INFECTED_AIRPORTS))
return airports
def get_infected_airports(airports):
airports = list(set(airports).intersection(set(INFECTED_AIRPORTS)))
return airports
def airports_to_cities(airports):
return list(set([AIRPORT_CODE_TO_CITY[code] for code in airports]))
def get_number_of_flights(source, destination):
if not isinstance(source, list):
source = [source]
if not isinstance(destination, list):
destination = [destination]
flights = 0
    for src in source:
        for dest in destination:
            flights += NUMBER_OF_FLIGHTS[(src, dest)]
return flights
def get_infected_number(city_name, simulation_day):
infection_day = INFECTED_CITIES[city_name]['day']
return INFECTED_CITIES[city_name]['infected'][simulation_day-infection_day]
def calculate_infection_prob(current_susceptible_city, DAY):
current_susceptible_airports = CITY_TO_AIRPORT_CODE[current_susceptible_city]
current_infected_neighbours = get_infected_airports(get_city_neighbours(current_susceptible_city))
flights = []
infected_populations = []
total_populations = []
for infected_neighbour in current_infected_neighbours:
infected_city_name = AIRPORT_CODE_TO_CITY[infected_neighbour]
flights.append(get_number_of_flights(infected_neighbour, current_susceptible_airports))
infected_populations.append(get_infected_number(infected_city_name, DAY))
total_populations.append(CITY_POPULATION[infected_city_name])
infection_probability = prob_city_infected(infected_populations, total_populations, flights)
return infection_probability
def run_neighbour_simulation(current_susceptible_city, current_infection_source_city, DAY):
infection_probability = calculate_infection_prob(current_susceptible_city, DAY)
if np.random.random() < infection_probability:
S, I, R = run_sir(
city_population=CITY_POPULATION[current_susceptible_city],
first_infected_number=100
)
return {current_susceptible_city:{
'day':DAY,
'infected':I,
'susceptible':S,
'recovered':R,
'from': current_infection_source_city
}}
def run_infectious_city_simulation(current_infection_source_city, DAY):
neighbour_airports = get_city_neighbours(current_infection_source_city)
susceptible_airports = get_healthy_airports(neighbour_airports)
susceptible_cities = airports_to_cities(susceptible_airports)
results = []
for current_susceptible_city in tqdm_notebook(
susceptible_cities,
leave=False,
desc='susceptible',
disable=True
):
        try:
            results.append(run_neighbour_simulation(current_susceptible_city, current_infection_source_city, DAY))
        except:
            # skip candidate cities with missing flight counts or population data
            continue
results = [res for res in results if res]
return results
NUMBER_OF_SIMULATIONS = 10
SIMULATION_DAYS = 200
VERBOSE = True
max_R = 4
min_R = 1
GAMMA = 0.0576*2
REPRODUCTION_NUMBER = calculate_reproduction_number(
max_R, min_R,
SIMULATION_DAYS,
intervention_start=60,
intervention_period=30)
def run_sir(
city_population, first_infected_number,
reproduction_number=REPRODUCTION_NUMBER,
gamma=GAMMA, days=SIMULATION_DAYS
):
sir_model = SIR(
city_population, I0=first_infected_number, beta=reproduction_number/14, gamma=gamma, days=days
)
S, I, R = sir_model.run()
return S, I, R
INFECTED_CITIES = {}
INFECTED_AIRPORTS = []
NEW_INFECTED = {}
for simulation_run in tqdm_notebook(range(NUMBER_OF_SIMULATIONS), leave=False):
# Always start at Wuhan on day 0
S, I, R = run_sir(CITY_POPULATION['Wuhan'], 444/0.1)
INFECTED_CITIES = {'Wuhan':{'day':0, 'infected':I, 'susceptible':S, 'recovered':R, 'from':'Wuhan'}}
INFECTED_AIRPORTS = ['WUH']
for DAY in tqdm_notebook(range(0, SIMULATION_DAYS), desc='Day', leave=False):
CHECKED_SUSCEPTIBLE_CITIES = []
for current_infection_source_city in tqdm_notebook(
INFECTED_CITIES.keys(),
leave=False,
desc='infection sources',
disable=not VERBOSE
):
results = run_infectious_city_simulation(current_infection_source_city, DAY)
NEW_INFECTED.update(dict(ChainMap(*results)))
INFECTED_CITIES.update(NEW_INFECTED)
NEW_INFECTED = {}
INFECTED_AIRPORTS = sum([CITY_TO_AIRPORT_CODE[city] for city in INFECTED_CITIES.keys()], [])
with open(f"../simulation_data/INFECTED_CITIES_mild_{simulation_run}", 'wb') as f:
pickle.dump(INFECTED_CITIES, f)
| 0.539711 | 0.935582 |
# Advent of Code 2019 - Day 4
## Part 1
```
# Input
min = 146810
max = 612564
# Brute force solution :-(
matches = []
for pwd in [str(i) for i in range(min, max + 1)]:
has_adjacent = False
has_increasing = True
start_idx = 1
end_idx = len(pwd) - 2
for idx in range(start_idx, end_idx):
is_start = True if idx == start_idx else False
is_end = True if idx == end_idx else False
prev_digit = int(pwd[idx-1])
curr_digit = int(pwd[idx])
next_digit = int(pwd[idx+1])
next_next_digit = int(pwd[idx+2])
if (prev_digit > curr_digit
or curr_digit > next_digit
or next_digit > next_next_digit):
has_increasing = False
if (prev_digit == curr_digit
or curr_digit == next_digit
or next_digit == next_next_digit):
has_adjacent = True
if has_adjacent and has_increasing:
matches.append(pwd)
print(f'Solution: {len(matches)}')
```
## Part 2
```
# Brute force solution :-(
def is_valid_password(password):
has_adjacent = False
has_increasing = True
start_idx = 1
end_idx = len(password) - 2
for idx in range(start_idx, end_idx):
is_start = True if idx == start_idx else False
is_end = True if idx == (end_idx - 1) else False
prev_digit = int(password[idx-1])
curr_digit = int(password[idx])
next_digit = int(password[idx+1])
next_next_digit = int(password[idx+2])
if (prev_digit > curr_digit
or curr_digit > next_digit
or next_digit > next_next_digit):
has_increasing = False
if (is_start and (
prev_digit == curr_digit
and curr_digit != next_digit
)):
has_adjacent = True
if (is_end and (
curr_digit != next_digit
and next_digit == next_next_digit
)):
has_adjacent = True
if (prev_digit != curr_digit
and curr_digit == next_digit
and next_digit != next_next_digit
):
has_adjacent = True
return True if has_adjacent and has_increasing else False
for password in ['112233', '123444', '111122']:
print(f'{password}: {is_valid_password(password)}')
# Brute force solution :-(
matches = []
for password in [str(i) for i in range(min, max + 1)]:
if is_valid_password(password):
matches.append(password)
print(f'Solution: {len(matches)}')
```
|
github_jupyter
|
# Input
min = 146810
max = 612564
# Brute force solution :-(
matches = []
for pwd in [str(i) for i in range(min, max + 1)]:
has_adjacent = False
has_increasing = True
start_idx = 1
end_idx = len(pwd) - 2
for idx in range(start_idx, end_idx):
is_start = True if idx == start_idx else False
is_end = True if idx == end_idx else False
prev_digit = int(pwd[idx-1])
curr_digit = int(pwd[idx])
next_digit = int(pwd[idx+1])
next_next_digit = int(pwd[idx+2])
if (prev_digit > curr_digit
or curr_digit > next_digit
or next_digit > next_next_digit):
has_increasing = False
if (prev_digit == curr_digit
or curr_digit == next_digit
or next_digit == next_next_digit):
has_adjacent = True
if has_adjacent and has_increasing:
matches.append(pwd)
print(f'Solution: {len(matches)}')
# Brute force solution :-(
def is_valid_password(password):
has_adjacent = False
has_increasing = True
start_idx = 1
end_idx = len(password) - 2
for idx in range(start_idx, end_idx):
is_start = True if idx == start_idx else False
is_end = True if idx == (end_idx - 1) else False
prev_digit = int(password[idx-1])
curr_digit = int(password[idx])
next_digit = int(password[idx+1])
next_next_digit = int(password[idx+2])
if (prev_digit > curr_digit
or curr_digit > next_digit
or next_digit > next_next_digit):
has_increasing = False
if (is_start and (
prev_digit == curr_digit
and curr_digit != next_digit
)):
has_adjacent = True
if (is_end and (
curr_digit != next_digit
and next_digit == next_next_digit
)):
has_adjacent = True
if (prev_digit != curr_digit
and curr_digit == next_digit
and next_digit != next_next_digit
):
has_adjacent = True
return True if has_adjacent and has_increasing else False
for password in ['112233', '123444', '111122']:
print(f'{password}: {is_valid_password(password)}')
# Brute force solution :-(
matches = []
for password in [str(i) for i in range(min, max + 1)]:
if is_valid_password(password):
matches.append(password)
print(f'Solution: {len(matches)}')
| 0.173568 | 0.639609 |
```
import ipywidgets as widgets
#https://towardsdatascience.com/bring-your-jupyter-notebook-to-life-with-interactive-widgets-bc12e03f0916
from IPython.display import display
#Have a table to compare when the best time to use radar vs ultrasound (cross reference on the slides)
speed_of_ultrasound_m_per_sec={ 'tissue':1540, #In most tissue
'water':1481,
'air': 343
}
speed_of_light_m_per_sec={ 'air': 299792458
}
```
Our goal is to understand the effect of the speed of a wave on how far it travels and how that influences how many computations a processor can perform while the signal is in flight.
If we define the time the wave is traveling as:
\begin{equation}
\Delta t=t_{end}-t_{start}
\end{equation}
We can find the distance the wave travels given we know its speed (c)
\begin{equation}
d=c\Delta t
\end{equation}
This means that for a round-trip travel time $\Delta t$, the distance to the target is
\begin{equation}
d=\frac{c}{2}\Delta t
\end{equation}
The two types of waves we will be examining are ultrasound and radio waves. In air ultrasound has a speed of:
\begin{equation}
c_{ultrasound}=343 \frac{m}{s}
\end{equation}
while radio waves have a speed of:
\begin{equation}
c_{light}=299,792,458 \frac{m}{s}
\end{equation}
Below find the time it would take for an ultrasound wave to travel 10 meters (+/- 1m). Compare this distance with the distance radio waves would travel. What is the ratio of the difference?
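For example, a quick back-of-the-envelope check (a sketch using the speed dictionaries defined above; the 10 m figure comes from the question):
```
t_10m_sec = 10.0 / speed_of_ultrasound_m_per_sec['air']            # time for ultrasound to cover 10 m
d_light_m = speed_of_light_m_per_sec['air'] * t_10m_sec            # distance a radio wave covers in that time
print(f"Ultrasound needs {t_10m_sec:.4f} s to travel 10 m in air.")
print(f"In that time a radio wave travels {d_light_m:,.0f} m, a ratio of {d_light_m/10.0:,.0f}x.")
```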
```
from ipywidgets import Layout
style = {'description_width': 'initial'}
time_traveled_output = widgets.Output()
time_traveled_slider = widgets.IntSlider(
min=1,
max=100,
step=10,
description='Time traveled (milliseconds):',
value=3,
layout=Layout(width='auto', height='80px'),
style=style
)
def time_traveled_eventhandler(change):
time_traveled_output.clear_output()
with time_traveled_output:
t_milliseconds = time_traveled_slider.value
t_sec = t_milliseconds/1000
print(f"For a time of {t_milliseconds} milliseconds the distance traveled for ultrasound is {(speed_of_ultrasound_m_per_sec['air']*t_sec):.1f} m")
print(f"For a time of {t_milliseconds} milliseconds the distance traveled for light is {(speed_of_light_m_per_sec['air']*t_sec):.1f} m")
time_traveled_slider.observe(handler=time_traveled_eventhandler,type='change')
display(time_traveled_slider)
display(time_traveled_output)
```
Assume you have a car with both ultrasound and radar sensors. Your task is to design a system that can detect a car when it is 10 meters in front of your car. Change the variables `t_ultrasound_sec` and `t_radar_sec` in the cell below to the correct values for a 20 meter round trip time.
```
t_ultrasound_sec=0.1
t_radar_sec=0.1
#Do not change anything below this line
d_m=10.0
d_ultrasound_m = t_ultrasound_sec*speed_of_ultrasound_m_per_sec['air']
d_radar_m = t_radar_sec*speed_of_light_m_per_sec['air']
print(f"An ultrasound signal completes a round trip distance of {d_ultrasound_m:.1f} meters in {t_ultrasound_sec} seconds.")
print(f"A radar signal completes a round trip distance of {d_radar_m:.1f} meters in {t_ultrasound_sec} seconds.")
```
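As a sanity check (not part of the original exercise), the required values follow directly from $t = d/c$ with a 20 meter round trip:
```
d_round_trip_m = 20.0
t_ultrasound_check = d_round_trip_m / speed_of_ultrasound_m_per_sec['air']   # roughly 0.058 s
t_radar_check = d_round_trip_m / speed_of_light_m_per_sec['air']             # roughly 6.7e-8 s
print(f"Ultrasound round trip: {t_ultrasound_check:.4f} s, radar round trip: {t_radar_check:.2e} s")
```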
In this final section let's look at how many instructions a CPU that can complete 600 million floating-point operations per second (600 MFLOPS) can process within each round-trip time. Which signal is the easiest to perform signal processing on, and by what factor? Later we will investigate what level of processing can be completed by common algorithms such as the Fast Fourier Transform.
```
CPU_flops=600e6
print(f"The number of flops a CPU can use to process the ultrasound signal is: {int(CPU_flops*t_ultrasound_sec)} flops")
print(f"The number of flops a CPU can use to process the radar signal is: {int(CPU_flops*t_radar_sec)} flops")
```
|
github_jupyter
|
import ipywidgets as widgets
#https://towardsdatascience.com/bring-your-jupyter-notebook-to-life-with-interactive-widgets-bc12e03f0916
from IPython.display import display
#Have a table to compare when the best time to use radar vs ultrasound (cross reference on the slides)
speed_of_ultrasound_m_per_sec={ 'tissue':1540, #In most tissue
'water':1481,
'air': 343
}
speed_of_light_m_per_sec={ 'air': 299792458
}
from ipywidgets import Layout
style = {'description_width': 'initial'}
time_traveled_output = widgets.Output()
time_traveled_slider = widgets.IntSlider(
min=1,
max=100,
step=10,
description='Time traveled (milliseconds):',
value=3,
layout=Layout(width='auto', height='80px'),
style=style
)
def time_traveled_eventhandler(change):
time_traveled_output.clear_output()
with time_traveled_output:
t_milliseconds = time_traveled_slider.value
t_sec = t_milliseconds/1000
print(f"For a time of {t_milliseconds} milliseconds the distance traveled for ultrasound is {(speed_of_ultrasound_m_per_sec['air']*t_sec):.1f} m")
print(f"For a time of {t_milliseconds} milliseconds the distance traveled for light is {(speed_of_light_m_per_sec['air']*t_sec):.1f} m")
time_traveled_slider.observe(handler=time_traveled_eventhandler,type='change')
display(time_traveled_slider)
display(time_traveled_output)
t_ultrasound_sec=0.1
t_radar_sec=0.1
#Do not change anything below this line
d_m=10.0
d_ultrasound_m = t_ultrasound_sec*speed_of_ultrasound_m_per_sec['air']
d_radar_m = t_radar_sec*speed_of_light_m_per_sec['air']
print(f"An ultrasound signal completes a round trip distance of {d_ultrasound_m:.1f} meters in {t_ultrasound_sec} seconds.")
print(f"A radar signal completes a round trip distance of {d_radar_m:.1f} meters in {t_ultrasound_sec} seconds.")
CPU_flops=600e6
print(f"The number of flops a CPU can use to process the ultrasound signal is: {int(CPU_flops*t_ultrasound_sec)} flops")
print(f"The number of flops a CPU can use to process the radar signal is: {int(CPU_flops*t_radar_sec)} flops")
| 0.582966 | 0.955026 |
<img width="10%" alt="Naas" src="https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160"/>
# Reddit - Get Hot Posts From Subreddit
<a href="https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/Reddit/Reddit_Get_Hot_Posts_From_Subreddit.ipynb" target="_parent"><img src="https://naasai-public.s3.eu-west-3.amazonaws.com/open_in_naas.svg"/></a>
**Tags:** #reddit #subreddit #data #hottopics #rss #information #opendata #snippet #dataframe
**Author:** [Yaswanthkumar GOTHIREDDY](https://www.linkedin.com/in/yaswanthkumargothireddy/)
This notebook explains how to get hot posts from a subreddit. A subreddit is a specific online community, and the posts associated with it, on the social media website Reddit.
## Input
### Install packages
```
!pip install praw
import praw
import pandas as pd
import numpy as np
from datetime import datetime
```
### Choose Subreddit topic
```
SUBREDDIT = "Python" #example: "CryptoCurrency"
```
### Setup App to connect to Reddit API
* To get data from reddit, you need to [create a reddit app](https://www.reddit.com/prefs/apps) which queries the reddit API.
* Select “script” as the type of app.
* Name your app and give it a description.
* Set-up the redirect uri to be http://localhost:8080.
* Once you click on “create app”, you will get a box showing you your "client_id" and "client_secrets".
* "user_agent" is the name of your app.
If you need help on setting up and getting your API credentials, please visit ---> [Get Reddit API Credentials](https://www.jcchouinard.com/get-reddit-api-credentials-with-praw/)
```
MY_CLIENT_ID = 'EtAr0o-oKbVuEnPOFbrRqQ'
MY_CLIENT_SECRET = 'LmNpsZuFM-WXyZULAayVyNsOhMd_ug'
MY_USER_AGENT = 'script by u/naas'
```
## Model
#### Connect with the reddit API
```
reddit = praw.Reddit(client_id=MY_CLIENT_ID, client_secret=MY_CLIENT_SECRET, user_agent=MY_USER_AGENT)
```
#### Get the subreddit level data
```
posts =[]
for post in reddit.subreddit(SUBREDDIT).hot(limit=50):
posts.append([post.title, post.score, post.id, post.subreddit, post.url, post.num_comments, post.selftext, post.created])
posts = pd.DataFrame(posts,columns=['title', 'score', 'id', 'subreddit', 'url', 'num_comments', 'body', 'created'])
```
* If you need more variables, check the `vars()` function.
* Usage: `vars(post)` will give you the post-level variables.
#### Convert unix timestamp to interpretable date-time
```
posts['created']=pd.to_datetime(posts["created"],unit='s')
```
## Output
```
posts.head()
```
Hint: Filter data using "created" variable for past 24 hours hot posts
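For example, a sketch of that filter (assuming the `posts` dataframe built above; the timestamps are in UTC):
```
from datetime import datetime, timedelta

cutoff = datetime.utcnow() - timedelta(hours=24)      # 24 hours ago, UTC
recent_posts = posts[posts['created'] >= cutoff]      # hot posts created in the last day
recent_posts.head()
```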
## Additional Resources
- More info on the PRAW package used: https://praw.readthedocs.io/en/stable/
|
github_jupyter
|
!pip install praw
import praw
import pandas as pd
import numpy as np
from datetime import datetime
SUBREDDIT = "Python" #example: "CryptoCurrency"
MY_CLIENT_ID = 'EtAr0o-oKbVuEnPOFbrRqQ'
MY_CLIENT_SECRET = 'LmNpsZuFM-WXyZULAayVyNsOhMd_ug'
MY_USER_AGENT = 'script by u/naas'
reddit = praw.Reddit(client_id=MY_CLIENT_ID, client_secret=MY_CLIENT_SECRET, user_agent=MY_USER_AGENT)
posts =[]
for post in reddit.subreddit(SUBREDDIT).hot(limit=50):
posts.append([post.title, post.score, post.id, post.subreddit, post.url, post.num_comments, post.selftext, post.created])
posts = pd.DataFrame(posts,columns=['title', 'score', 'id', 'subreddit', 'url', 'num_comments', 'body', 'created'])
posts['created']=pd.to_datetime(posts["created"],unit='s')
posts.head()
| 0.269518 | 0.816772 |
**Notes for the docker container:**
Docker command to run this notebook locally:
Note: replace `dir_montar` with the path of the directory you want to map to `/datos` inside the docker container.
```
dir_montar=<full path on my machine to my directory> #put here the path to the directory to mount, for example:
#dir_montar=/Users/erick/midirectorio.
```
Run:
```
$docker run --rm -v $dir_montar:/datos --name jupyterlab_prope_r_kernel_tidyverse -p 8888:8888 -d palmoreck/jupyterlab_prope_r_kernel_tidyverse:2.1.4
```
Go to `localhost:8888` and enter the jupyterlab password: `qwerty`
To stop the docker container:
```
docker stop jupyterlab_prope_r_kernel_tidyverse
```
Documentation for the docker image `palmoreck/jupyterlab_prope_r_kernel_tidyverse:2.1.4` is at this [link](https://github.com/palmoreck/dockerfiles/tree/master/jupyterlab/prope_r_kernel_tidyverse).
---
To run this notebook use:
[docker](https://www.docker.com/) (installed **locally** with [Get docker](https://docs.docker.com/install/)) and run the commands at the beginning of the notebook **locally**.
Or click one of the following buttons:
[](https://mybinder.org/v2/gh/palmoreck/dockerfiles-for-binder/jupyterlab_prope_r_kernel_tidyerse?urlpath=lab/tree/Propedeutico/Python/clases/3_algebra_lineal/3_minimos_cuadrados.ipynb) this option creates an individual machine on a Google server, clones the repository, and lets you run the jupyter notebooks.
[](https://repl.it/languages/python3) this option does not clone the repository and does not run the jupyter notebooks, but it allows collaborative execution of Python instructions with [repl.it](https://repl.it/). Clicking it will create new ***repls*** under your ***repl.it*** user.
# Linear least squares
Suppose measurements of a phenomenon of interest have been made at different points $x_i$, resulting in quantities $y_i$ $\forall i=0,1,\dots, m$ (there are $m+1$ points), and moreover the $y_i$'s contain random noise caused by measurement errors:
<img src="https://dl.dropboxusercontent.com/s/z0ksltumd4ibyjp/mcuadrados_1.jpg?dl=0" height="400" width="400">
The goal of linear least squares is to construct a curve $f(x|\beta)$ that "best" fits the data $(x_i,y_i)$, $\forall i=0,1,\dots,m$. The term "best" means that the sum: $$\displaystyle \sum_{i=0}^m (y_i -f(x_i|\beta))^2$$ is as small as possible, that is, that the sum of the squared vertical distances between $y_i$ and $f(x_i|\beta)$ $\forall i=0,1,\dots,m$ is minimal. For example:
<img src="https://dl.dropboxusercontent.com/s/z31rni7hrp6w91s/mcuadrados_2.jpg?dl=0" height="400" width="400">
**Note:**
* The notation $f(x|\beta)$ is used to denote that $\beta$ is a vector of parameters to be estimated, specifically $\beta_0, \beta_1, \dots, \beta_n$, that is: $n+1$ parameters to estimate.
## The model in linear (also called ordinary) least squares
In linear least squares one assumes $f(x|\beta) = \displaystyle \sum_{j=0}^n\beta_j\phi_j(x)$ with $\phi_j: \mathbb{R} \rightarrow \mathbb{R}$ known functions, which gives great flexibility in the fitting process.
**Note:**
* If $n=m$ then we have an interpolation problem.
* $x$ is called the **regressor** variable.
## How do we fit the model above?
In what follows we **assume** $n+1 \leq m+1$ (we have more points $(x_i,y_i)$ than parameters to estimate).
To perform the least squares fit we use the **normal equations**: $$A^TA\beta=A^Ty$$ where $A$ is built from the $\phi_j$'s evaluated at the points $x_i$, the vector $\beta$ contains the parameters $\beta_j$ to be estimated, and the vector $y$, the **response** variable, is built from the points $y_i$:
$$A = \left[\begin{array}{cccc}
\phi_0(x_0) &\phi_1(x_0)&\dots&\phi_n(x_0)\\
\phi_0(x_1) &\phi_1(x_1)&\dots&\phi_n(x_1)\\
\vdots &\vdots& \vdots&\vdots\\
\phi_0(x_n) &\phi_1(x_n)&\dots&\phi_n(x_n)\\
\vdots &\vdots& \vdots&\vdots\\
\phi_0(x_{m-1}) &\phi_1(x_{m-1})&\dots&\phi_n(x_{m-1})\\
\phi_0(x_m) &\phi_1(x_m)&\dots&\phi_n(x_m)
\end{array}
\right] \in \mathbb{R}^{(m+1)\times(n+1)},
\beta=
\left[\begin{array}{c}
\beta_0\\
\beta_1\\
\vdots \\
\beta_n
\end{array}
\right] \in \mathbb{R}^{n+1},
y=
\left[\begin{array}{c}
y_0\\
y_1\\
\vdots \\
y_m
\end{array}
\right] \in \mathbb{R}^{m + 1}
$$
and if $A$ has full column rank (its $n+1$ columns are linearly independent), we compute the $QR$ factorization of $A$: $A = QR$, and then: $$A^TA\beta = A^Ty$$
Since $A=QR$ we have $A^TA = (R^TQ^T)(QR)$ and $A^T = R^TQ^T$, so:
$$(R^TQ^T)(QR) \beta = R^TQ^T y$$
and using the fact that $Q$ has orthonormal columns:
$$R^TR\beta = R^TQ^Ty$$
Since $A$ has $n+1$ linearly independent columns, the matrix $R$ is invertible, so $R^T$ is too, and we finally obtain the system of equations to solve:
$$R\beta = Q^Ty$$
## Example: linear regression
In the case of linear regression there are two models we can fit: a model with an intercept and one without it. The choice depends on the data.
### Model with intercept
We fit a model of the form $f(x|\beta) = \beta_0 + \beta_1 x$ to the data $(x_i,y_i)$, $\forall i=0,1,\dots,m$.
**Note:** In this case we choose $\phi_0(x) = 1$, $\phi_1(x) = x$, and we have to estimate two parameters: $\beta_0, \beta_1$.
#### Numerical example in numpy:
```
import numpy as np
import matplotlib.pyplot as plt
import pprint
np.set_printoptions(precision = 2) # show only two decimal places
np.random.seed(1989) # for reproducibility
mpoints = 20
x = np.random.randn(mpoints)
y = -3*x + np.random.normal(2,1,mpoints)
```
##### The example data
```
plt.plot(x,y, 'r*')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Puntos ejemplo')
plt.show()
```
##### The fit
We can use the `polyfit` function in the `numpy` package to perform the fit (see [numpy.polyfit](https://docs.scipy.org/doc/numpy/reference/generated/numpy.polyfit.html)).
The third argument of polyfit specifies the degree of the polynomial to fit. We will use `ndegree = 1` because we want to fit a straight line.
```
ndegree = 1
coefficients = np.polyfit(x,y,ndegree)
```
Once the call to polyfit is made, the coefficients of $x$ are returned ordered from highest degree to lowest.
```
pprint.pprint(coefficients)
```
So our polynomial is: $$p_{1}(x) = -2.65x + 2.03$$
and thus our fitted betas are $\hat{\beta_0} = 2.03$, $\hat{\beta_1} = -2.65$.
We denote by $\hat{y}_i$ the fitted value for the data point $x_i$, that is: $\hat{y}_i = f(x_i|\hat{\beta}) = \displaystyle \sum_{j=0}^n\hat{\beta}_j\phi_j(x_i)$.
##### The plot
Now we would like to plot the model on the interval $[min(x),max(x)]$, where $min(x)$ is the minimum-valued entry of the numpy array $x$ and $max(x)$ its maximum-valued entry.
To do this we must obtain the fitted values by evaluating $p_1(x)$ at the values of $x$:
```
y_hat_numpy = coefficients[1] + coefficients[0] * x
y_hat_numpy
plt.plot(x, y_hat_numpy, 'k-',
x, y, 'r*')
plt.legend(['modelo lineal','datos'], loc='best')
plt.show()
```
Let's plot with $x$'s that we have not observed or measured:
```
(np.min(x),np.max(x))
mpoints_new = 30
x_new_values = np.linspace(-3,5,mpoints_new)
y_hat_numpy_new = coefficients[1] + coefficients[0] * x_new_values
y_hat_numpy_new
y_hat_numpy_new.shape
x_new_values.shape
plt.plot(x_new_values, y_hat_numpy_new, 'k-',
x, y, 'r*')
plt.legend(['modelo lineal','datos'], loc='best')
plt.show()
x_new_values[x_new_values.size-1]
y_hat_numpy_new[y_hat_numpy_new.size-1]
```
**We can also obtain the above with the QR factorization, building the matrix A explicitly**
```
A=np.ones((mpoints,2))
A[:,1] = x
A
Q,R = np.linalg.qr(A)
```
We solve the system $R\beta = Q^Ty$
```
beta = np.linalg.solve(R,Q.T@y)
pprint.pprint(beta)
y_hat_QR = A@beta
```
**Note: the line above is equivalent to doing `y_hat_QR = beta[0] + beta[1]*x`**
```
plt.plot(x, y_hat_QR , 'k-',x, y, 'r*')
plt.legend(['modelo lineal','datos'], loc='best')
plt.show()
```
### Model without intercept
We fit a model of the form $f(x|\beta) = \beta_1 x$ to the data $(x_i,y_i)$, $\forall i=0,1,\dots,m$.
**Note:** In this case we choose $\phi_1(x) = x$ and there is no $\phi_0$, so only $\beta_1$ has to be estimated.
#### Numerical example in numpy:
**(Homework) Exercise: perform the corresponding fit for this case with `QR` in a jupyter notebook. Make a plot.** A sketch of one possible approach follows.
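A minimal sketch of one possible solution (an illustration only, reusing the `x` and `y` arrays generated above):
```
A_no_intercept = x.reshape(-1, 1)                  # single column: phi_1(x) = x, no column of ones
Q, R = np.linalg.qr(A_no_intercept)
beta_1 = np.linalg.solve(R, Q.T@y)                 # solves R*beta = Q^T y
y_hat_no_intercept = A_no_intercept@beta_1
plt.plot(x, y_hat_no_intercept, 'k-', x, y, 'r*')
plt.legend(['no-intercept model', 'data'], loc='best')
plt.show()
```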
## Once the fit is done...
We perform a **residual analysis**, make a **plot** of the model if the dimensions we are working in allow it, and compute the **mean squared error**.
Residual $i$ is $r_i = y_i - \hat{y}_i$ and represents the discrepancy between the data and the model.
The mean squared error is computed as: $$ECM(\hat{y}) = \frac{1}{m} \displaystyle \sum_{i=0}^m(y_i-\hat{y}_i)^2$$
$m$ is the number of points.
**(Homework) Exercise: create a module named `utils.py` containing the function:**
```
def MSE(y, y_hat):
"""
Compute mean squared error.
See: https://en.wikipedia.org/wiki/Mean_squared_error
Args:
y (numpy 1d array of floats): actual values of data.
y_hat (numpy 1d array of floats): estimated values via model.
Returns:
ecm (float): mean squared error result.
"""
```
**This function implements the mean squared error formula.** A possible implementation is sketched below.
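A minimal sketch of the function body (one possible implementation; the exercise asks you to write your own in `utils.py`):
```
import numpy as np

def MSE(y, y_hat):
    """Compute the mean squared error between observed and fitted values."""
    y = np.asarray(y)
    y_hat = np.asarray(y_hat)
    ecm = np.mean((y - y_hat)**2)    # average of the squared residuals
    return ecm
```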
## Example: fitting a model by linear least squares with nonlinear $\phi_j$ functions
Note that the model we have been using, $f(x|\beta) = \displaystyle \sum_{j=0}^n\beta_j\phi_j(x)$, allows choosing the $\phi_j$'s as functions from $\mathbb{R}$ to $\mathbb{R}$, so we have a wide range of possibilities for fitting curves to data.
As an example we will use the dataset **data_for_nbook_3_minimos_cuadrados.txt**, which you can download into the same directory as this *ipynb* or by clicking [here](https://drive.google.com/file/d/1Ht7d2E1LWw7EIrrkULFQ_7-5nGVxHT4P/view?usp=sharing), and we will fit three models of the form:
$$f_1(x|\beta) = \beta_0 + \beta_1 \frac{x}{x+1}$$
$$f_2(x|\beta) = \beta_0 + \beta_1x + \beta_2x^2$$
$$f_3(x|\beta) = \beta_0 + \beta_1\text{log}(x+1)$$
taking as response variable the second column of the data, labeled $y$.
```
datos = np.loadtxt('data_for_nbook_3_minimos_cuadrados.txt', skiprows=1)
```
see: [numpy.loadtxt](https://docs.scipy.org/doc/numpy/reference/generated/numpy.loadtxt.html)
```
x = datos[:,0]
y = datos[:,1]
plt.plot(x, y, 'r^')
plt.legend(['datos'], loc='best')
plt.title('datos')
plt.show()
```
### Model 1
```
phi_1 = lambda var: var/(var+1)
```
We build the matrix A as follows:
```
mpoints, = x.shape
A=np.ones((mpoints,2))
A[:,1] = phi_1(x)
```
We compute the QR factorization and plot:
```
Q,R = np.linalg.qr(A)
beta = np.linalg.solve(R,Q.T@y)
print('beta')
pprint.pprint(beta)
y_hat_QR = A@beta
```
**Note: the line above is equivalent to doing `y_hat_QR = beta[0] + beta[1]*phi_1(x)`**
```
plt.plot(x, y_hat_QR , 'k-',x, y, 'r^')
plt.legend(['modelo1','datos'], loc='best')
plt.show()
```
### Model 2
We do not have to build A since this is a degree-2 polynomial, so we use numpy's `polyfit`:
```
ndegree = 2
coefficients = np.polyfit(x,y,ndegree)
pprint.pprint(coefficients)
y_hat_numpy = coefficients[2] + coefficients[1] * x + coefficients[0] * x**2
plt.plot(x, y_hat_numpy, 'k-',x, y, 'r^')
plt.legend(['modelo polinomio cuadrático','datos'], loc='best')
plt.show()
```
### Model 3
**(Homework) Exercise: fit model 3. Compute the ECM of each model and plot the three models in a single figure. Which model has the smallest ECM? Do this exercise in a jupyter notebook.** A sketch for the model 3 fit follows.
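A minimal sketch for the model 3 fit (an illustration only; computing and comparing the ECM of the three models is left as the exercise):
```
phi_log = lambda var: np.log(var + 1)       # phi_1 for model 3
A3 = np.ones((mpoints, 2))
A3[:, 1] = phi_log(x)
Q3, R3 = np.linalg.qr(A3)
beta3 = np.linalg.solve(R3, Q3.T@y)
y_hat_model3 = A3@beta3                     # beta3[0] + beta3[1]*log(x+1)
plt.plot(x, y_hat_model3, 'k-', x, y, 'r^')
plt.legend(['model 3', 'data'], loc='best')
plt.show()
```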
|
github_jupyter
|
dir_montar=<ruta completa de mi máquina a mi directorio>#aquí colocar la ruta al directorio a montar, por ejemplo:
#dir_montar=/Users/erick/midirectorio.
$docker run --rm -v $dir_montar:/datos --name jupyterlab_prope_r_kernel_tidyverse -p 8888:8888 -d palmoreck/jupyterlab_prope_r_kernel_tidyverse:2.1.4
docker stop jupyterlab_prope_r_kernel_tidyverse
import numpy as np
import matplotlib.pyplot as plt
import pprint
np.set_printoptions(precision = 2) #sólo dos decimales que se muestren
np.random.seed(1989) #para reproducibilidad
mpoints = 20
x = np.random.randn(mpoints)
y = -3*x + np.random.normal(2,1,mpoints)
plt.plot(x,y, 'r*')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Puntos ejemplo')
plt.show()
ndegree = 1
coefficients = np.polyfit(x,y,ndegree)
pprint.pprint(coefficients)
y_hat_numpy = coefficients[1] + coefficients[0] * x
y_hat_numpy
plt.plot(x, y_hat_numpy, 'k-',
x, y, 'r*')
plt.legend(['modelo lineal','datos'], loc='best')
plt.show()
(np.min(x),np.max(x))
mpoints_new = 30
x_new_values = np.linspace(-3,5,mpoints_new)
y_hat_numpy_new = coefficients[1] + coefficients[0] * x_new_values
y_hat_numpy_new
y_hat_numpy_new.shape
x_new_values.shape
plt.plot(x_new_values, y_hat_numpy_new, 'k-',
x, y, 'r*')
plt.legend(['modelo lineal','datos'], loc='best')
plt.show()
x_new_values[x_new_values.size-1]
y_hat_numpy_new[y_hat_numpy_new.size-1]
A=np.ones((mpoints,2))
A[:,1] = x
A
Q,R = np.linalg.qr(A)
beta = np.linalg.solve(R,Q.T@y)
pprint.pprint(beta)
y_hat_QR = A@beta
plt.plot(x, y_hat_QR , 'k-',x, y, 'r*')
plt.legend(['modelo lineal','datos'], loc='best')
plt.show()
def MSE(y, y_hat):
"""
Compute mean squared error.
See: https://en.wikipedia.org/wiki/Mean_squared_error
Args:
y (numpy 1d array of floats): actual values of data.
y_hat (numpy 1d array of floats): estimated values via model.
Returns:
ecm (float): mean squared error result.
"""
datos = np.loadtxt('data_for_nbook_3_minimos_cuadrados.txt', skiprows=1)
x = datos[:,0]
y = datos[:,1]
plt.plot(x, y, 'r^')
plt.legend(['datos'], loc='best')
plt.title('datos')
plt.show()
phi_1 = lambda var: var/(var+1)
mpoints, = x.shape
A=np.ones((mpoints,2))
A[:,1] = phi_1(x)
Q,R = np.linalg.qr(A)
beta = np.linalg.solve(R,Q.T@y)
print('beta')
pprint.pprint(beta)
y_hat_QR = A@beta
plt.plot(x, y_hat_QR , 'k-',x, y, 'r^')
plt.legend(['modelo1','datos'], loc='best')
plt.show()
ndegree = 2
coefficients = np.polyfit(x,y,ndegree)
pprint.pprint(coefficients)
y_hat_numpy = coefficients[2] + coefficients[1] * x + coefficients[0] * x**2
plt.plot(x, y_hat_numpy, 'k-',x, y, 'r^')
plt.legend(['modelo polinomio cuadrático','datos'], loc='best')
plt.show()
| 0.650467 | 0.785309 |
<p style="font-family: Arial; font-size:2.75em;color:purple; font-style:bold"><br>
## Regression with scikit-learn<br><br> using Soccer Dataset
<br></p>
We will again be using the open dataset from the popular site <a href="https://www.kaggle.com">Kaggle</a> that we used in Week 1 for our example.
Recall that this <a href="https://www.kaggle.com/hugomathien/soccer">European Soccer Database</a> has more than 25,000 matches and more than 10,000 players for European professional soccer seasons from 2008 to 2016.
**Note:** Please download the file *database.sqlite* if you don't yet have it in your *Week-7-MachineLearning* folder.
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
## Import Libraries<br><br></p>
```
import sqlite3
import pandas as pd
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from math import sqrt
```
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
## Read Data from the Database into pandas
<br><br></p>
```
# Create your connection.
cnx = sqlite3.connect('database.sqlite')
df = pd.read_sql_query("SELECT * FROM Player_Attributes", cnx)
df.head()
df.shape
df.columns
```
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
## Declare the Columns You Want to Use as Features
<br><br></p>
```
features = [
'potential', 'crossing', 'finishing', 'heading_accuracy',
'short_passing', 'volleys', 'dribbling', 'curve', 'free_kick_accuracy',
'long_passing', 'ball_control', 'acceleration', 'sprint_speed',
'agility', 'reactions', 'balance', 'shot_power', 'jumping', 'stamina',
'strength', 'long_shots', 'aggression', 'interceptions', 'positioning',
'vision', 'penalties', 'marking', 'standing_tackle', 'sliding_tackle',
'gk_diving', 'gk_handling', 'gk_kicking', 'gk_positioning',
'gk_reflexes']
```
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
## Specify the Prediction Target
<br><br></p>
```
target = ['overall_rating']
```
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
## Clean the Data<br><br></p>
```
df = df.dropna()
```
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
## Extract Features and Target ('overall_rating') Values into Separate Dataframes
<br><br></p>
```
X = df[features]
y = df[target]
```
Let us look at a typical row from our features:
```
X.iloc[2]
```
Let us also display our target values:
```
y
```
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
## Split the Dataset into Training and Test Datasets
<br><br></p>
```
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=324)
```
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
## (1) Linear Regression: Fit a model to the training set
<br><br></p>
```
regressor = LinearRegression() # regressor is a linear regression object
regressor.fit(X_train, y_train)
```
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
## Perform Prediction using Linear Regression Model
<br><br></p>
```
y_prediction = regressor.predict(X_test)
y_prediction
```
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
### What is the mean of the expected target value in the test set?
<br><br></p>
```
y_test.describe()
```
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
## Evaluate Linear Regression Accuracy using Root Mean Square Error
<br><br></p>
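For reference, the quantity computed in the next cell is the root of the average squared prediction error over the $N$ test examples:
$$RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2}$$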
```
RMSE = sqrt(mean_squared_error(y_true = y_test, y_pred = y_prediction))
print(RMSE)
```
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
## (2) Decision Tree Regressor: Fit a new regression model to the training set
<br><br></p>
```
regressor = DecisionTreeRegressor(max_depth=20)
regressor.fit(X_train, y_train)
```
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
### Perform Prediction using Decision Tree Regressor
<br><br></p>
```
y_prediction = regressor.predict(X_test)
y_prediction
```
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
For comparison: what is the mean of the expected target value in the test set?
<br><br></p>
```
y_test.describe()
```
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br>
## Evaluate Decision Tree Regression Accuracy using Root Mean Square Error
<br><br></p>
```
RMSE = sqrt(mean_squared_error(y_true = y_test, y_pred = y_prediction))
print(RMSE)
```
|
github_jupyter
|
import sqlite3
import pandas as pd
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from math import sqrt
# Create your connection.
cnx = sqlite3.connect('database.sqlite')
df = pd.read_sql_query("SELECT * FROM Player_Attributes", cnx)
df.head()
df.shape
df.columns
features = [
'potential', 'crossing', 'finishing', 'heading_accuracy',
'short_passing', 'volleys', 'dribbling', 'curve', 'free_kick_accuracy',
'long_passing', 'ball_control', 'acceleration', 'sprint_speed',
'agility', 'reactions', 'balance', 'shot_power', 'jumping', 'stamina',
'strength', 'long_shots', 'aggression', 'interceptions', 'positioning',
'vision', 'penalties', 'marking', 'standing_tackle', 'sliding_tackle',
'gk_diving', 'gk_handling', 'gk_kicking', 'gk_positioning',
'gk_reflexes']
target = ['overall_rating']
df = df.dropna()
X = df[features]
y = df[target]
X.iloc[2]
y
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=324)
regressor = LinearRegression() # regressor is a linear regression object
regressor.fit(X_train, y_train)
y_prediction = regressor.predict(X_test)
y_prediction
y_test.describe()
RMSE = sqrt(mean_squared_error(y_true = y_test, y_pred = y_prediction))
print(RMSE)
regressor = DecisionTreeRegressor(max_depth=20)
regressor.fit(X_train, y_train)
y_prediction = regressor.predict(X_test)
y_prediction
y_test.describe()
RMSE = sqrt(mean_squared_error(y_true = y_test, y_pred = y_prediction))
print(RMSE)
| 0.620162 | 0.927692 |
# Basics of probability
We'll start by reviewing some basics of probability theory. I will use some simple examples - dice and roulette - to illustrate basic probability concepts. We'll also use these simple examples to build intuition on several properties of probabilities - the law of total probability, independence, conditional probability, and Bayes's rule - that can be generalized, and will be used throughout the course.
# Probability Spaces
To work with probabilities we need to define three things:
1. A sample space, $\Omega$
2. An event space, $\mathcal F$
3. A probability function, $P$
Read more about probability spaces on <a href="https://en.wikipedia.org/wiki/Probability_space"> Wikipedia</a>
## Discrete Sample Space
**Roulette**
A simple example to illustrate the concept of probability spaces is the roulette. Here we'll consider an American roulette wheel with 38 equally probable numbers.

- ***Sample Space***:<br>
The sample space is the space of all possible outcomes.
$$\Omega=\{\color{green}{00},\color{green}0,\color{red}1,2,\color{red}3,4,\ldots, \color{red}{36}\}$$
- ***Event Space:***<br>
The event space is the set of all subsets of the sample space:
$$\mathcal F=\left\{
\{\color{green}{00}\},
\{\color{green}{0}\},
\{\color{red}1\},
\{2\},\{\color{red}3\}\ldots,
\{\color{green}{00},\color{green}0\},\ldots,
\{ \color{red}1,\ldots,
\color{red}36\}\right\}$$
- ***Probability:***<br>
For a roulette wheel the probability is defined as $P=1/38$ for each of the 38 possible outcomes in the sample space. Each event also has an associated probability.
We note a couple of things. The sample space and the event space do not uniquely define the probability. For example, we could have a biased roulette wheel (perhaps using a magnet and a metal ball), such that the ball is more likely to fall on particular numbers. In that case, the probability of individual outcomes in the sample space may not be equal. However, as we discuss more below, the total sum of probabilities across the possible outcomes still has to equal 1, unless there is a chance that the ball falls off the roulette table and none of the outcomes is hit.
Note also that outcomes are different from events. A single outcome, e.g. a roulette roll of $\color{green}{00}$, is associated with multiple possible events. It helps to think of an event as a possible bet, and the event space as *the space of all possible bets*. Any bet you make on the roulette can be expressed as a subset of $\mathcal F$, and has a probability associated with it.
For example, consider a bet on a single number (e.g. on $\color{red}7$), also called a straight-up bet. This event is equivalent to the outcome of the roulette being in the set $E_1=\{\color{red}7\}$. The probability of this event is $P(E_1)=1/38$.

Alternatively consider a bet on red. This event is equivalent to the outcome being in $E_2=\left\{\color{red}{1},\color{red}{2},\color{red}{3},\ldots,\color{red}{36}\right\}$, and its probability is $P(E_2)=18/38$.

*Note*: Formally, the event space is a $\sigma$-algebra, and the probability function is a measure.
## Infinite Sample Spaces
Why do we need to go through these definitions of event spaces and sample spaces? For probability spaces with a finite number of possible outcomes we can assign a probability to each outcome and it becomes trivial to compute the probability of events. However, that is no longer the case when we start working with infinite sample spaces, such as an interval on the real line. For example, if the sample space of a random process is the interval $\Omega=[0,1]\subset \mathbb R$, there are an infinite number of possible outcomes, and thus not all of them can have finite (non-zero) probability. In that case, we can only assign finite probabilities to sub-intervals, or subsets of the sample space. In other words, *in the most general case we can only assign finite probabilities to members of the event space $\mathcal F$*. However, the same rules of probabilities apply for both infinite and finite sample spaces, and it is easier to get an intuition for them on small, finite spaces.
For purposes of this class, we don't need to worry about probability spaces, event spaces, and probability functions. However, simple examples such as these are useful in illustrating some very general properties of probabilities that we *will* use extensively in the class, especially in the chapters on statistical inference and Bayesian data analysis.
<hr style="border:1px solid black
"> </hr>
# Calculus of Probabilities
## Combining probabilities
The probability of roulette bets illustrates a general rule about combining probabilities. How do we compute the probability of an event? Consider two friends, Alice and Bob, betting on the roulette, and say we want to compute the probability that at least one of them wins. We'll call this event $E$.
First, let's say each of them bets on one of the green zeros. Rolling a green zero is equivalent to rolling either $\color{green}0$ OR $\color{green}{00}$. The two friends going home with some money is associated with the subset $E=\{\color{green}0,\color{green}{00}\}$ of the event space. If we also associate Alice winning with event $A$, rolling $\color{green}{00}$, and Bob winning with event $B$, rolling $\color{green}{0}$, we can write:
$$P(E)=P(A \text{ OR } B)\equiv P(A \lor B)$$
Notice that
- $\{\color{green}{00,0}\}$=$\{\color{green}{00}\}\cup\{\color{green}{0}\}$
- $\{\color{green}{00}\}\cap\{\color{green}{0}\}=\phi$
- $P(\{\color{green}{00}\}\cup\{\color{green}{0}\})=P(\{\color{green}{00}\})+P(\{\color{green}{0}\})$
On the other hand, consider a case where Alice bets on all the numbers between $1$ and $6$, with win probability $P(A)=6/38$, and Bob bets on his favourite numbers, $3$ and $33$, with win probability $P(B)=2/38$. At least one of them winning something is associated with $E=\{1,2,3,4,5,6,33\}$, with $P(E)=7/38$. Notice that in this case $P(E)\neq P(A)+P(B)$, because the two bets overlap on the number $3$.
```{important}
Thus, the general rule of combining probabilities is:
$$P(A \text{ OR } B)=P(A \cup B)=P(A)+P(B)-P(A\cap B)$$
with
$$P(A \text{ OR } B)=P(A \cup B)=P(A)+P(B)\text{, if } A\cap B=\phi$$
```
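A quick way to sanity-check this rule is to enumerate the roulette outcomes directly (a Python sketch, not part of the original notes):
```
# the 38 outcomes of an American roulette wheel
outcomes = {'00', '0'} | {str(n) for n in range(1, 37)}
A = {str(n) for n in range(1, 7)}          # Alice bets on 1-6
B = {'3', '33'}                            # Bob bets on 3 and 33
P = lambda event: len(event) / len(outcomes)
print(P(A | B), P(A) + P(B) - P(A & B))    # both evaluate to 7/38
```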
## Valid Probability Functions
In order for a function on the event space to be a valid probability function, it needs to obey the following properties:
```{important}
- $P:\Omega \rightarrow [0,1]$
- $P(\phi)=0$, where $\phi$ is the null-set
- $P(\Omega)=1$
- If $E_1$ and $E_2$ are mutually exclusive events (i.e. $E_1 \cap E_2=\phi$), then the probability of $E_1$ OR $E_2$ occurring is
$$P(E_1 \text{ OR } E_2)=P(E_1\cup E_2)=P(E_1)+P(E_2)$$
```
## Joint Probabilities
The joint probability of two events is the probability of both being true simultaneously. Consider the probability of both Alice AND Bob winning. We would denote this as
$$P(A\text{ AND }B)\equiv P(A\land B)\equiv P(A,B)$$
The latter notation is the one we will be using a lot throughout the course. If each of them bets on one of the zeros, and there is thus no overlap between their bets, the probability of both winning is zero. In the second case, where Alice bets on $1$ through $6$ and Bob bets on $3$ and $33$, the probability of both of them winning is the probability of rolling a $3$, which is the intersection of the two events. This is a general rule of probability calculus:
```{important}
The joint probability of two events is the probability of their intersection:
- $P(A,B)=P(A\cap B)$
- $P(A,B)=0 \text{ if } A\cap B=\phi$, and we would say that $A$ and $B$ are mutually exclusive
```
<hr style="border:1px solid black
"> </hr>
# Conditional Probabilities & Independence
Here we'll review some definitions and properties of conditional probabilities and independence that we will use throughout the course. This is more easily done when we have two random processes occurring simultaneously. So we will consider the roll of two independent and *fair* dice.
The sample space, $\Omega$ is illustrated by the following table:

The event space is the set of all subsets of $\Omega$ and the probability function for a fair dice is the constant function $P=1/36$.
For example, the *event* $E$="a total roll of 3" is the union of two different outcomes: "red dice rolls 1 and green dice rolls 2" and "red dice rolls 2 and green dice rolls 1", i.e. $\{\color{red}1,\color{green}2\}$ and $\{\color{red}2,\color{green}1\}$. The probability of rolling a 3 is thus $P(\{\color{red}1,\color{green}2\})+P(\{\color{red}2,\color{green}1\})=2/36$.
We can also define events applicable to a single dice. For example, "the red dice rolled a 1". The probability of the red dice rolling a 1 is 1/6, and it can also be written as the union of all the six outcomes in which the red dice rolls 1.
## Independent Events
The roll of one dice should not affect the roll of the other. The two dice are ***independent***.
Example: the probability of rolling a <span style="color:blue">⚅</span> <span style="color:red">⚅</span> is 1/36. It is also the probability of rolling both a <span style="color:blue">⚅</span> and a <span style="color:red">⚅</span> at the same time. The probability for each dice is 1/6, so their combined probability is
$P($<span style="color:blue">⚅</span> <span style="color:red">⚅</span>$)=P($<span style="color:blue">⚅</span>$)P($<span style="color:red">⚅</span>$)$
```{important}
<b>Definition</b>: Two events are independent iff:
$$P(A \text{ AND } B)=P(A,B)=P(A)P(B)$$
```
Fun sidenote: although it's harder, we can define independent events for a single dice! Consider the two events:
- A: "Green dice rolled even" with probability $P(A)=1/2$
- B: "Green dice rolled less than three", with probability $P(B)=1/3$<br>
The joint probability of both events happening is the probability of the dice rolling a 2, so it is $P(A,B)=1/6=P(A)P(B)$
## Conditional Probability
Consider the previous example, with two independent events. How can two events on the same dice be independent? It's easier to think through this process sequentially, or *conditionally*. Think of the probability of both "rolling even" and "rolling less than three" as "the probability of rolling even" and, subsequently, also "rolling less than three". The truth values of those statements do not change if you look at them one at a time.
If you roll even, the probability of rolling less than 3 is 1/3, i.e. the probability of rolling a 2.
If you roll odd, the probability of rolling <3 is still 1/3, i.e. the probability of rolling a 1.
So whether you roll odd or even does not impact the P of rolling less than three. Thus, the two events are independent.
This notion of whether the realization of one event affects the probability of another event is quantified using conditional probabilities.
***Notation***: $P(A|B)$=The conditional probability of event $A$ being true **given** that event $B$ is true.
Examples: What is the probability of the event "A" ="rolling a combined 10"? It is the probability of the events consisting of {{<span style="color:blue">⚃</span> <span style="color:red">⚅</span>},{<span style="color:blue">⚄</span> <span style="color:red">⚄</span>},{<span style="color:blue">⚅</span> <span style="color:red">⚃</span>}} and it is 3/36. Now, what is the probability of rolling a combined 10, **given** that the red dice rolled a 4. Well, it is the probability of the green dice rolling a 6, which is 1/6.
- $P("10")=3/36$
- $P("10"|$<span style="color:red">⚃</span>$)=1/6$
Let's use our sequential thinking to come up with a very useful property of conditional probabilities. Consider the joint event of "A" = "rolling a combined 10" and "B" = "the red dice rolling 4".
The probability of both being true is equal to 1/36. But you can think of it as the probability of rolling a 4 with the red dice (P(B)=1/6), and then rolling a "10" given that the red dice rolled 4 (P(A|B)=1/6).
```{important}
<b>Definition</b>: The following relationships hold between joint and conditional probabilities:
$$P(A,B)=P(A|B)P(B)$$
$$P(A|B)=\frac{P(A,B)}{P(B)}$$
```
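These relationships are easy to verify by enumerating the 36 equally likely rolls (a Python sketch, not part of the original notes):
```
from itertools import product

rolls = set(product(range(1, 7), repeat=2))     # (red, green) outcomes of the two dice
A = {r for r in rolls if sum(r) == 10}          # "a combined roll of 10"
B = {r for r in rolls if r[0] == 4}             # "the red dice rolled 4"
P = lambda E: len(E) / len(rolls)
print(P(A))                 # 3/36
print(P(A & B) / P(B))      # P(A|B) = 1/6
```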
## Bayes Rule
The above relation between conditional probabilities and joint probabilities leads us to one of the most useful formulas in statistics: Bayes' Rule.
Notice that we can write $P(A,B)$ as either $P(A|B)P(B)$ or $P(B|A)P(A)$. We can manipulate this relation to understand how the conditional probability of $A$ given $B$ is related to the conditional probability of $B$ given $A$:
$$P(A,B)=P(A|B)P(B)=P(B|A)P(A)$$
```{important}
**Bayes' Rule**:
$$P(A|B)=\frac{P(B|A)P(A)}{P(B)}$$
```
## Law of Total probability
```{important}
The law of total probability says that if we have a partition of the sample space, $A_n$, such that $A_i\cap A_j=\phi$ for $i\neq j$ and $\cup_{n} A_n = \Omega$, then
$$P(E)=\sum_n P(E|A_n)P(A_n)$$
```
This should be intuitive with the fair dice example. For example, let $E$ be the event 'a total roll of $D=6$ was rolled'. A partition $A_n$ could be 'dice $X$ rolled $n$' for $n$ between 1 and 6. Thus, the total probability of $D=6$ is the probability of rolling a total of six given that $X$ rolled 1, times $P(A_1)$, plus the probability of rolling a total of six given that $X$ rolled 2, times $P(A_2)$, and so on.
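Continuing the two-dice example, the law can be checked numerically (a sketch; `rolls` and `P` are assumed to be defined as in the snippet above):
```
E = {r for r in rolls if sum(r) == 6}                                # total roll D = 6
partition = [{r for r in rolls if r[0] == n} for n in range(1, 7)]   # A_n: "red dice rolled n"
total = sum(P(E & A_n) / P(A_n) * P(A_n) for A_n in partition)       # sum of P(E|A_n) P(A_n)
print(P(E), total)                                                   # both equal 5/36
```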
<hr style="border:1px solid black
"> </hr>
# Random variables
***Definition***: A random variable is a real-valued function whose values depend on the outcomes of a random phenomenon.
$$X:\Omega \rightarrow \mathbb R$$
Consider the case of a single fair dice, with possible values, i.e. sample space:
$$\Omega=\{1,2,3,4,5,6\}$$
We can define a random variable $X$ whose value is equal to the dice roll. This random variable could take ***discrete*** values between 1 and 6.
$$X:\{1,2,3,4,5,6\} \rightarrow \{1,2,3,4,5,6\} $$
If the dice is fair, then the probability of X taking each value is the same, and equal to 1/6. We would call this a discrete uniform random variable.
At this point it may seem like I'm inventing new terminology. For example, why do we need to call $X$ a random variable, and talk about the possibility that it takes on different values? It seems like the probability of X taking on each value is just the probability of each event in $\Omega$?
Here is another example of a random variable on the same sample space: $Z$ is a random variable which takes the value $Z=0$ if the dice roll is odd and $Z=1$ if the dice roll is even. Thus, even though $X$ and $Z$ are associated with the same sample space and events, they take on different values. In this case, since $Z$ takes on only two values, 0 and 1, $Z$ would be called a Bernoulli random variable.
$$Z:\{1,2,3,4,5,6\} \rightarrow \{0,1\} $$
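As a small illustration (a sketch, not part of the original notes), both random variables can be written as plain functions of the same die roll:
```
import random

X = lambda roll: roll                      # X returns the face value itself
Z = lambda roll: 0 if roll % 2 else 1      # Z = 0 for an odd roll, 1 for an even roll
roll = random.randint(1, 6)                # one realization of the random phenomenon
print(roll, X(roll), Z(roll))
```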
|
github_jupyter
|
## Valid Probability Functions
In order for a function on the event space to be a valid probability function, it neeeds to obey the following properties
## Joint Probabilities
The joint probability of two events is the probability of both being true simultaneously. Consider the probability of both Alice AND Bob winning. We would denote this as
$$P(A\text{ AND }B)\equiv P(A\land B)\equiv P(A,B)$$
The latter notation is the one we will be using a lot throughout the course. If each of them bets on one of the zeros, and there is thus no overlap between their bets, the probability of both winning is zero. In the second case, where Alice bets on $1$ through $6$ and Bob bets on $3$ and $33$, the probability of both of them winning is the probability of rolling a $3$, whith the intersection of the two events. This is a general rule of probability calculus:
<hr style="border:1px solid black
"> </hr>
# Conditional Probabilities & Independence
Here we'll review some definitions and properties of conditional probabilities and independence that we will use throughout the course. This is more easily done when we have two random processes occuring simultaneously. So we will consider the roll of two independent and *fair* dice.
The sample space, $\Omega$ is illustrated by the following table:

The event space is the set of all subsets of $\Omega$ and the probability function for a fair dice is a constant functoin $P=1/36$.
For example the *event* $E$="a total roll of 3" is the intersection of two different outcome "red dice rolls 1 and greeen dice rolls 2" is different from "red dice rolls 2 and green dice rolls 1", i.e. $\{\color{red}1,\color{green}2\} ,\{\color{green}2,\color{red}1\}$ The probability of rolling a 3 is thus $P(\{\color{red}1,\color{green}2\})+P(\{\color{green}1,\color{red}2)\}=2/36$.
We can also define events applicable to a single dice. For example, "the red dice rolled a 1". The probability of the red dice rolling a 1 is 1/6, and it can also be written as the union of all the six outcomes in which the red dice rolls 1.
## Independent Events
The role of one dice should not affect the role of the other. The two dice are ***independent***.
Example: what is the probability of rolling a <span style="color:blue">⚅</span> <span style="color:red">⚅</span> is 1/36. It is also the probability of rolling both a <span style="color:blue">⚅</span> and a <span style="color:red">⚅</span> at the same time. The probability for each dice 1/6, so their combined probability is
$P($<span style="color:blue">⚅</span> <span style="color:red">⚅</span>$)=P($<span style="color:blue">⚅</span>$)P($<span style="color:red">⚅</span>$)$
Fun sidenote: although it's harder, we can define independent events for a single dice! Consider the two events:
- A: "Green dice rolled even" with probability $P(A)=1/2$
- B: "Green dice rolled less than three", with probability $P(B)=1/3$<br>
The joint probability of both events happening is the probability of the dice rolling a 2, so it is $P(A,B)=1/6=P(A)P(B)$
## Conditional Probability
Consider the previous example, with two independent events. How can two events on the same dice be independent? It's easier to think through this process sequentially, or *conditionally*. Think of the probability of both "rolling even" and "rolling less than three" as "the probability of rolling even" and, subsequently, also "rolling less than three". The truth values of those statemetns do not change if you look at them one at a time.
If you roll even, the probability of rolling less than 3 is 1/3, i.e. the probability of rolling a 2.
If you roll odd, the probability of rolling <3 is still 1/3, i.e. the probability of rolling a 1.
So whether you roll odd or even does not impact the P of rolling less than three. Thus, the two events are independent.
This notion of whether the realization of one event affects the probability of another event is quantified using conditoinal probabilities.
***Notation***: $P(A|B)$=The conditional probability of event $A$ being true **given** that event $B$ is true.
Examples: What is the probability of the event "A" ="rolling a combined 10"? It is the probability of the events consisting of {{<span style="color:blue">⚃</span> <span style="color:red">⚅</span>},{<span style="color:blue">⚄</span> <span style="color:red">⚄</span>},{<span style="color:blue">⚅</span> <span style="color:red">⚃</span>}} and it is 3/36. Now, what is the probability of rolling a combined 10, **given** that the red dice rolled a 4. Well, it is the probability of the green dice rolling a 6, which is 1/6.
- $P("10")=3/36$
- $P("10"|$<span style="color:red">⚃</span>$)=1/6$
Let's use our sequential thinking to derive a very useful property of conditional probabilities. Consider the joint event of $A$="rolling a combined 10" and $B$="the red dice rolling a 4".
The probability of both being true is equal to 1/36. But you can think of it as the probability of rolling a 4 with the red dice ($P(B)=1/6$), and then rolling a "10" given that the red dice rolled 4 ($P(A|B)=1/6$), so that $P(A,B)=P(B)P(A|B)=1/36$.
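The same enumeration trick confirms these numbers (again a minimal sketch, not part of the notes above):
```
from fractions import Fraction

outcomes = [(r, g) for r in range(1, 7) for g in range(1, 7)]
prob = lambda event: Fraction(len(event), len(outcomes))

A = [(r, g) for r, g in outcomes if r + g == 10]  # combined roll of 10
B = [(r, g) for r, g in outcomes if r == 4]       # red dice rolled 4
AB = [o for o in A if o in B]

p_A_given_B = Fraction(len(AB), len(B))  # condition on B by restricting the sample space
print(prob(A), p_A_given_B, prob(AB), prob(B) * p_A_given_B)  # 1/12 (=3/36), 1/6, 1/36, 1/36
```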
## Bayes Rule
The above relation between conditional probabilities and joint probabilities leads us to one of the most useful formulas in statistics: Bayes' Rule.
Notice that we can write $P(A,B)$ as either $P(A|B)P(B)$ or $P(B|A)P(A)$. We can manipulate this relation to see how the conditional probability of $A$ given $B$ relates to the conditional probability of $B$ given $A$:
$$P(A,B)=P(A|B)P(B)=P(B|A)P(A)$$
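Dividing both sides by $P(B)$ (assuming $P(B)>0$) gives Bayes' Rule:
$$P(A|B)=\frac{P(B|A)P(A)}{P(B)}$$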
## Law of Total Probability
# Introduction to Programming
Topics for today will include:
- Mozilla Developer Network [(MDN)](https://developer.mozilla.org/en-US/)
- Python Documentation [(Official Documentation)](https://docs.python.org/3/)
- Importance of Design
- Functions
- Built in Functions
## Mozilla Developer Network [(MDN)](https://developer.mozilla.org/en-US/)
---
The Mozilla Developer Network is a great resource for all things web dev. This site is good for learning about standards as well as finding quick information about whatever you're trying to do, web-dev-wise.
This will be a major resource going forward when it comes to doing things with HTML and CSS
You'll often find that you're not the first to try to do something. That being said, you need to start to get comfortable looking for information on your own when things go wrong.
## Python Documentation [(Official Documentation)](https://docs.python.org/3/)
---
This section is similar to the one above. Python has a lot of resources out there that we can utilize when we're stuck or need some help with something that we may not have encountered before.
Since this is the official documentation page for the language, you may often be given too much information, or find what you wanted but in the wrong form or for the wrong version of the language. It is up to you to learn how to utilize these resources and use them to your advantage.
## Importance of Design
---
So this is a topic whose importance I didn't learn until I was in the workforce. Design is a major influence on the way that code is built, and it has a significant effect on the industry.
Let's pretend we have a client that wants us to do the following:
- Write a function which will count the number of times any one character appears in a string of characters.
- Write a main function which takes the character to be counted from the user and calls the function, outputting the result to the user.
For example, are you like Android, taking the latest and greatest and putting it into phones in an unregulated hardware market, thus leaving great variability in the market for your brand? Or are you an Apple, where you control the full stack? Your hardware and software may not be bleeding edge, but it's seamless and uniform.
What does the market want? What are you good at? Do you have people around you that can fill your gaps?
Here's a blurb from a friend about the matter:
>Design, often paired with the phrase "design thinking", is an approach and method of problem solving that builds empathy for user(s) of a product, resulting in the creation of a seamless and delightful user experience tailored to the user's needs.
>Design thinks holistically about the experience that a user would go through when encountering and interacting with a product or technology. Design understands the user and their needs in great detail so that the product team can build the product and experience that fits what the user is looking for. We don't want to create products for the sake of creating them, we want to ensure that there is a need for it by a user.
>Design not only focuses on the actual interface design of a product, but can also ensure the actual technology has a seamless experience as well. Anything that blocks potential users from wanting to buy a product or prohibits current users from utilizing the product successfully, design wants to investigate. We ensure all pieces fit together from the user's standpoint, and we work to build a bridge between the technology and the user, who doesn't need to understand the technical depths of the product.
### Sorting Example [(Toptal Sorting Algorithms)](https://www.toptal.com/developers/sorting-algorithms)
---
Hypothetical: a client comes to you and wants you to sort a list of numbers. How do you optimally sort a list like `[2, 5, 6, 1, 4, 3]`?
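In practice the answer usually starts with the language's built-in tools before reaching for a custom algorithm; a minimal sketch (Python's built-in sort is Timsort, which handles most everyday cases well):
```
numbers = [2, 5, 6, 1, 4, 3]

print(sorted(numbers))                # new sorted list: [1, 2, 3, 4, 5, 6]
print(sorted(numbers, reverse=True))  # descending: [6, 5, 4, 3, 2, 1]

numbers.sort()  # or sort the list in place
print(numbers)
```
The more interesting design questions are about the data itself: how big is the list, is it already mostly sorted, and does it even fit in memory?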
### Design Thinking [(IBM Design Thinking)](https://www.ibm.com/design/thinking/)
---
As this idea starts to grow you come to realize that different companies have different design methodologies. IBM has its own version of Design Thinking; you can find more information about that at the site linked in the title. IBM is very focused on staying closely aligned with its customers in most aspects.
What we're mostly going to take from this is that there are entire careers birthed from thinking before you act. That being said, we're going to harp on a couple parts of this.
### Knowing what your requirements are
---
One of the most common scenarios to come across is a product that is announced as the thing that's going to change everything. In the planning phase everyone agrees that the idea is amazing and going to solve all of our problems.
Then we get down the line and things start to fall apart: we run out of time, things run late or don't come in on time, and everything gets pushed out.
Scope creep ensues.
This is typically the result of not agreeing on what our requirements are. Something as basic as agreeing on what needs to be done has to be discussed and checked on thoroughly. We do this because two people are rarely thinking exactly the same thing.
You need to be on the same page as your client and your fellow developers as well. If you don't know ask.
### Planning Things Out
---
We have an idea on what we want to do. So now we just write it? No, not quite. We need to have a rough plan on how we're going to do things. Do we want to use functions, do we need a quick solution, is this going to be verbose and complex?
It's important to look at what we can set up for ourselves. We don't need to make things difficult by planning things out poorly. This means allotting time for things like getting stuck and brainstorming.
### Breaking things down
---
Personally I like to take my problem and scale it down into an easy example. In the case of our problem, the client may want to process a text like Moby Dick; we can start with a sentence and work our way up!
Taking the time to break things into multiple pieces and figure out what goes where is an art in itself.
```
def char_finder(character, string):
total = 0
for char in string:
if char == character:
total += 1
return total
if __name__ == "__main__":
output = char_finder('o', 'Quick brown fox jumped over the lazy dog')
print(output)
```
## Functions
---
This is an integral piece of how we do things in any programming language. Functions allow us to package up code we've written and reuse it whenever we like.
We'll often be using functions similar to how we use variables and our data types.
### Making Our Own Functions
---
So to make a function we'll be using the `def` keyword followed by a name and then parameters. We've seen this a couple times now in code examples.
```
def exampleName(exampleParameter1, exampleParameter2):
print(exampleParameter1, exampleParameter2)
```
There are many ways to write functions; for example, we can declare that the function is going to return a specific data type.
```
def exampleName(exampleParameter1, exampleParameter2) -> any:
print(exampleParameter1, exampleParameter2)
```
We can also specify the types that the parameters are going to be.
```
def exampleName(exampleParameter1: any, exampleParameter2: any) -> any:
print(exampleParameter1, exampleParameter2)
```
Writing functions is only one part of the fun. We still have to be able to use them.
### Using functions
---
Using functions is fairly simple. To use a function all we have to do is give the function name followed by parentheses. This should seem familiar.
```
def exampleName(exampleParameter1: int, exampleParameter2: int) -> int:
    # print(exampleParameter1, exampleParameter2)
    return exampleParameter1 + exampleParameter2

print(exampleName(10, 94))
```
### Functions In Classes
---
Now, we've mentioned classes before. Classes can have functions too, but they're used a little differently: functions that stem from classes (methods) are usually called with dot notation.
```
class Person:
def __init__(self, weight: int, height: int, name: str):
self.weight = weight
self.height = height
self.name = name
def who_is_this(self):
print("This person's name is " + self.name)
print("This person's weight is " + str(self.weight) + " pounds")
print("This person's height is " + str(self.height) + " inches")
if __name__ == "__main__":
Kipp = Person(225, 70, "Aaron Kippins")
Kipp.who_is_this()
```
## Built-in Functions and Modules
---
Speaking of dot notation, it is often used with built-in functions. Built-in functions are functions that come along with the language. These tend to be very useful because, as we start to visit more complex issues, they allow us to do complex things with ease in some cases.
We also have functions that belong to a particular class, i.e. special things that can be done with objects of a certain type.
Alongside those we also have modules. Modules are classes or functions that other people wrote, which we can import into our code to use.
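As a quick sketch, importing a module from the standard library and using it looks like this:
```
import random  # random ships with Python's standard library

# Roll a six-sided die a few times.
for _ in range(3):
    print(random.randint(1, 6))
```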
### Substrings
---
```
string = "I want to go home!"
print(string[0:12], "to Cancun!")
# print(string[0:1])
```
### title, upper and lower
---
```
alpha_sentence = 'Quick brown fox jumped over the lazy dog'
print(alpha_sentence.title())
print(alpha_sentence.upper())
print(alpha_sentence.lower())
if alpha_sentence.lower().islower():
print("sentence is all lowercase")
```
### Exponents
---
```
print(2 ** 5)
```
### math.sqrt()
---
```
import math
math.sqrt(4)
```
### Integer Division vs Float Division
---
```
print(4//2)
print(4/2)
```
### abs()
---
```
abs(-10)
```
### String Manipulation
---
```
dummy_string = "Hey there I'm just a string for the example about to happen."
print(dummy_string.center(70, "-"))
print(dummy_string.partition("o"))
print(dummy_string.swapcase())
print(dummy_string.split(" "))
```
### Array Manipulation
---
```
arr = [2, 5, 6, 1, 4, 3]
arr.sort()
print(arr)
print(arr[3])
# sorted(arr)
print(arr[1:3])
```
### Insert and Pop, Append and Remove
---
```
arr.append(7)      # add 7 to the end
print(arr)
arr.pop()          # remove (and return) the last element
print(arr)
arr.insert(0, 10)  # insert 10 at index 0
print(arr)
arr.remove(10)     # remove the first occurrence of 10
print(arr)
```
```
# hide
# skip
!git clone https://github.com/benihime91/gale # install gale on colab
!pip install -e "gale[dev]"
# default_exp classification.model.heads
# hide
%load_ext nb_black
%load_ext autoreload
%autoreload 2
%matplotlib inline
# hide
import warnings
from nbdev.export import *
from nbdev.showdoc import *
warnings.filterwarnings("ignore")
```
# Heads
> A head is a regular `torch.nn.Module` that can be attached to a backbone.
For image classification, a `head` typically contains a pooling layer along with the classifier.
```
# export
import logging
from collections import namedtuple
from dataclasses import dataclass
from typing import *
import torch
import torch.nn.functional as F
from fastcore.all import L, ifnone, store_attr
from omegaconf import MISSING
from timm.models.layers.classifier import _create_fc, _create_pool
from torch import nn
from gale.classification.model.backbones import filter_weight_decay
from gale.core_classes import BasicModule
from gale.torch_utils import trainable_params
from gale.utils.activs import ACTIVATION_REGISTRY
from gale.utils.shape_spec import ShapeSpec
from gale.utils.structures import IMAGE_CLASSIFIER_HEADS
_logger = logging.getLogger(__name__)
# hide
from fastcore.test import *
from gale.utils.logger import setup_logger
from omegaconf import MISSING, DictConfig, OmegaConf
setup_logger()
_logger = logging.getLogger("gale.classification.models.backbones")
# export
_all_ = ["IMAGE_CLASSIFIER_HEADS"]
# export
class ImageClassificationHead(BasicModule):
"""
Abstract class for ImageClassification Heads
"""
_hypers = namedtuple("hypers", field_names=["lr", "wd"])
@property
def hypers(self) -> Tuple:
"""
Returns list of parameters like `lr` and `wd`
for each param group
"""
lrs = []
wds = []
for p in self.build_param_dicts():
lrs.append(p["lr"])
wds.append(p["weight_decay"])
return self._hypers(lrs, wds)
show_doc(ImageClassificationHead.hypers)
```
## FullyConnectedHead
```
# export
class FullyConnectedHead(ImageClassificationHead):
"""
Classifier head w/ configurable global pooling and dropout.
From - https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/layers/classifier.py
"""
def __init__(
self,
input_shape: ShapeSpec,
num_classes: int,
pool_type: str = "avg",
drop_rate: float = 0.0,
use_conv: bool = False,
lr: float = 2e-03,
wd: float = 0,
filter_wd: bool = False,
):
super(FullyConnectedHead, self).__init__()
self.drop_rate = drop_rate
in_planes = input_shape.channels
# fmt: off
self.global_pool, num_pooled_features = _create_pool(in_planes, num_classes, pool_type, use_conv=use_conv)
# fmt: on
self.fc = _create_fc(num_pooled_features, num_classes, use_conv=use_conv)
self.flatten_after_fc = use_conv and pool_type
store_attr("lr, wd, filter_wd")
def forward(self, x):
x = self.global_pool(x)
if self.drop_rate:
x = F.dropout(x, p=float(self.drop_rate), training=self.training)
x = self.fc(x)
return x
def build_param_dicts(self) -> Any:
# fmt: off
if self.filter_wd:
ps = filter_weight_decay(self, lr=self.lr, weight_decay=self.wd)
else:
ps = [{"params": trainable_params(self),"lr": self.lr,"weight_decay": self.wd}]
return ps
```
Arguments to `FullyConnectedHead`:
- `input_shape` (ShapeSpec): input shape
- `num_classes` (int): Number of classes for the head.
- `pool_type` (str): The pooling layer to use. Check [here](https://github.com/rwightman/pytorch-image-models/blob/9a1bd358c7e998799eed88b29842e3c9e5483e34/timm/models/layers/adaptive_avgmax_pool.py#L79).
- `drop_rate` (float): If >0.0 then applies dropout between the pool_layer and the fc layer.
- `use_conv` (bool): Use a convolutional layer as the final fc layer.
- `lr` (float): Learning rate for the modules.
- `wd` (float): Weight decay for the modules.
- `filter_wd` (bool): Filter out `bias`, `bn` from `weight_decay`.
```
input_shape = ShapeSpec(channels=512)
tst = FullyConnectedHead(input_shape, 10)
tst
# hide
o = tst(torch.randn(2, 512, 2, 2))
o.shape
```
### Dataclass
```
# export
@dataclass
class FCHeadDataClass:
"""
Base config for `FullyConnectedHead`
"""
num_classes: int = MISSING
pool_type: str = "avg"
drop_rate: float = 0.0
use_conv: bool = False
lr: float = 1e-03
wd: float = 0.0
input_shape = ShapeSpec(channels=512)
c = OmegaConf.structured(FCHeadDataClass(num_classes=10))
tst = FullyConnectedHead.from_config_dict(c, input_shape=input_shape)
tst
```
## FastaiHead
```
# export
class FastaiHead(ImageClassificationHead):
"""
Model head that takes `in_planes` features, runs through `lin_ftrs`, and out `num_classes` classes.
From -
https://github.com/fastai/fastai/blob/8b1da8765fc07f1232c20fa8dc5e909d2835640c/fastai/vision/learner.py#L76
"""
def __init__(
self,
input_shape: ShapeSpec,
num_classes: int,
act: str = "ReLU",
lin_ftrs: Optional[List] = None,
ps: Union[List, int] = 0.5,
concat_pool: bool = True,
first_bn: bool = True,
bn_final: bool = False,
lr: float = 2e-03,
wd: float = 0,
filter_wd: bool = False,
):
super(FastaiHead, self).__init__()
in_planes = input_shape.channels
pool = "catavgmax" if concat_pool else "avg"
pool, nf = _create_pool(in_planes, num_classes, pool, use_conv=False)
# fmt: off
lin_ftrs = [nf, 512, num_classes] if lin_ftrs is None else [nf] + lin_ftrs + [num_classes]
# fmt: on
bns = [first_bn] + [True] * len(lin_ftrs[1:])
ps = L(ps)
if len(ps) == 1:
ps = [ps[0] / 2] * (len(lin_ftrs) - 2) + ps
act = ifnone(act, "ReLU")
# fmt: off
actns = [ACTIVATION_REGISTRY.get(act)(inplace=True)] * (len(lin_ftrs) - 2) + [None]
if bn_final:
actns[-1] = ACTIVATION_REGISTRY.get(act)(inplace=True)
# fmt: on
self.layers = [pool]
for ni, no, bn, p, actn in zip(lin_ftrs[:-1], lin_ftrs[1:], bns, ps, actns):
self.layers += nn.Sequential(
nn.BatchNorm1d(ni), nn.Dropout(p), nn.Linear(ni, no, bias=not bns), actn
)
if bn_final:
self.layers.append(nn.BatchNorm1d(lin_ftrs[-1], momentum=0.01))
self.layers = nn.Sequential(*[l for l in self.layers if l is not None])
store_attr("lr, wd, filter_wd")
def forward(self, xb: torch.Tensor) -> Any:
return self.layers(xb)
def build_param_dicts(self) -> Any:
if self.filter_wd:
ps = filter_weight_decay(self.layers, lr=self.lr, weight_decay=self.wd)
else:
# fmt: off
ps = [{"params": trainable_params(self.layers),"lr": self.lr,"weight_decay": self.wd}]
# fmt: on
return ps
```
The head begins with `AdaptiveConcatPool2d` if `concat_pool=True` otherwise, it uses traditional average pooling. Then it uses a Flatten layer before going on blocks of `BatchNorm`, `Dropout` and `Linear` layers.
Those blocks start at `in_planes`, then go through every element of `lin_ftrs` (defaults to [512]), and end at `num_classes`. `ps` is a list of probabilities used for the dropouts; if you pass a single value, half of it is used for every dropout except the last one, which uses the full value.
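To make that concrete, here is a minimal sketch (plain Python mirroring the constructor logic above; the numbers are made up) of how a single `ps` value gets expanded:
```
ps = [0.5]
lin_ftrs = [1024, 512, 10]  # hypothetical: nf=1024 pooled features, one hidden layer of 512, 10 classes

if len(ps) == 1:
    ps = [ps[0] / 2] * (len(lin_ftrs) - 2) + ps

print(ps)  # [0.25, 0.5] -> 0.25 dropout before the hidden layer, 0.5 before the classifier
```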
Arguments to `FastaiHead`:
- `input_shape` (ShapeSpec): input shape
- `num_classes` (int): Number of classes for the head.
- `act` (str): name of the activation function to use. If None uses the default activations else the name must be in ACTIVATION_REGISTRY. Activation layers are used after every block (`BatchNorm`, `Dropout` and `Linear` layers) if it is not the last block.
- `lin_ftrs` (List): Features of the Linear layers. (defaults to [512])
- `ps` (List): list of probabilities used for the dropouts.
- `concat_pool` (bool): Whether to use `AdaptiveConcatPool2d` or `AdaptiveAveragePool2d`.
- `first_bn` (bool): BatchNorm Layer after pool.
- `bn_final` (bool): Final Layer is BatchNorm.
- `lr` (float): Learning rate for the modules.
- `wd` (float): Weight decay for the modules.
- `filter_wd` (bool): Filter out `bias`, `bn` from `weight_decay`.
```
input_shape = ShapeSpec(channels=512)
tst = FastaiHead(input_shape=input_shape, num_classes=10)
tst
# hide
o = tst(torch.randn(2, 512, 2, 2))
o.shape
```
### Dataclass
```
# export
@dataclass
class FastaiHeadDataClass:
num_classes: int = MISSING
act: str = "ReLU"
lin_ftrs: Optional[List] = None
ps: Any = 0.5
concat_pool: bool = True
first_bn: bool = True
bn_final: bool = False
lr: float = 0.002
wd: float = 0
filter_wd: bool = False
conf = OmegaConf.structured(FastaiHeadDataClass(num_classes=10))
tst = FastaiHead.from_config_dict(conf, input_shape=input_shape)
tst
```
## Export -
```
# hide
notebook2script("04a_classification.models.heads.ipynb")
```
```
# Code source: Sebastian Curi and Andreas Krause, based on Jaques Grobler (sklearn demos).
# License: BSD 3 clause
# We start importing some modules and running some magic commands
%matplotlib inline
%reload_ext autoreload
%load_ext autoreload
%autoreload 2
# General math and plotting modules.
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from scipy.special import erfinv
# Project files.
from utilities.util import gradient_descent
from utilities.classifiers import Logistic
from utilities.regressors import TStudent
from utilities.regularizers import L2Regularizer
from utilities.load_data import polynomial_data, linear_separable_data
from utilities import plot_helpers
# Widget and formatting modules
import IPython
import ipywidgets
from ipywidgets import interact, interactive, interact_manual, fixed
from matplotlib import rcParams
# If in your browser the figures are not nicely visualized, change the following line.
rcParams['figure.figsize'] = (10, 5)
rcParams['font.size'] = 16
# Machine Learning library.
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn import datasets
from sklearn.linear_model import SGDRegressor, Ridge, LogisticRegression
from sklearn.model_selection import cross_val_score
def get_regression_dataset(dataset, X=None, n_samples=200, noise=0, w=None):
if X is None:
X = np.random.randn(n_samples)
if dataset == 'cos':
Y = np.cos(1.5 * np.pi * X) + noise * np.random.randn(X.shape[0])
elif dataset == 'sinc':
Y = X * np.sin(1.5 * np.pi * X) + noise * np.random.randn(X.shape[0])
elif dataset == 'linear':
X = np.atleast_2d(X).T
Phi = PolynomialFeatures(degree=1, include_bias=True).fit_transform(X)
Y = Phi @ w[:2] + noise * np.random.randn(X.shape[0])
elif dataset == 'linear-features':
X = np.atleast_2d(X).T
Phi = PolynomialFeatures(degree=len(w) - 1, include_bias=True).fit_transform(X)
Y = Phi @ w + noise * np.random.randn(X.shape[0])
return X, Y
def get_classification_dataset(dataset, n_samples=200, noise=0.3):
if dataset == 'linear':
X, Y = linear_separable_data(n_samples, noise=noise, dim=2)
Y = (Y + 1) // 2
elif dataset == '2-blobs':
X, Y = datasets.make_classification(n_classes=2, n_features=2, n_informative=2, n_redundant=0,
n_clusters_per_class=1, n_samples=n_samples, random_state=8)
elif dataset == '3-blobs':
X, Y = datasets.make_classification(n_classes=3, n_features=2, n_informative=2, n_redundant=0,
n_clusters_per_class=1, n_samples=n_samples, random_state=8)
elif dataset == '4-blobs':
X, Y = datasets.make_classification(n_classes=4, n_features=2, n_informative=2, n_redundant=0,
n_clusters_per_class=1, n_samples=n_samples, random_state=8)
elif dataset == 'circles':
X, Y = datasets.make_circles(n_samples=n_samples, factor=.5, noise=.05)
elif dataset == 'moons':
X, Y = datasets.make_moons(n_samples=n_samples, noise=.05)
elif dataset == 'iris':
X, Y = datasets.load_iris(return_X_y=True)
X = X[:, :2]
elif dataset == 'imbalanced':
X, Y = linear_separable_data(n_samples, noise=noise, dim=2, num_negative=int(n_samples * 0.2))
Y = (Y + 1) // 2
return X, Y
```
# Probabilistic Regression
We compare a regressor that uses a Gaussian likelihood with one that uses a Student-t likelihood (the degrees of freedom are set by the `Nu` slider below).
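The intuition is that the Student-t's heavier tails penalize large residuals far less than the Gaussian does, so outliers pull the fit around less. A minimal side-by-side sketch (using `scipy.stats` directly, not the demo's own `TStudent` class; the numbers are illustrative):
```
import numpy as np
from scipy import stats

residuals = np.array([0.1, 1.0, 5.0, 50.0])
sigma, nu = 0.5, 2.0

gauss_nll = -stats.norm.logpdf(residuals, scale=sigma)        # grows quadratically with the residual
student_nll = -stats.t.logpdf(residuals, df=nu, scale=sigma)  # grows only logarithmically
print(np.round(gauss_nll, 1))
print(np.round(student_nll, 1))
```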
```
rcParams['figure.figsize'] = (10, 6)
rcParams['font.size'] = 16
def probabilistic_regression(dataset, nu, n_samples, degree, alpha, noise, noise_type):
np.random.seed(0)
# DATASET
w = np.random.randn(1 + degree)
X = np.sort(np.random.rand(n_samples))
_, y = get_regression_dataset(dataset, X=X, noise=0, w=w)
ymean = np.mean(y)
if noise_type == 'gaussian':
y += noise * np.random.randn(*y.shape)
elif noise_type == 'heavy-tailed':
y += noise * np.random.standard_cauchy(*y.shape)
y = y - np.mean(y)
# REGRESSION
polynomial_features = PolynomialFeatures(degree=degree, include_bias=False)
Phi = polynomial_features.fit_transform(X[:, np.newaxis])
Phimean = Phi.mean(axis=0)
normal = Ridge(alpha=alpha)
normal.fit(Phi - Phimean, y)
student = TStudent(x=Phi - Phimean, y=y, nu=nu, sigma=noise)
regularizer = L2Regularizer(alpha, include_bias=False)
opts = {'eta0': 0.1, 'n_iter': 1000, 'batch_size': min(n_samples, 64), 'n_samples': X.shape[0],
'algorithm': 'SGD'}
gradient_descent(normal.coef_, student, regularizer, opts=opts)
# PREDICT
X_plot = np.linspace(-1, 2, 100)
Phi_plot = polynomial_features.fit_transform(X_plot[:, np.newaxis]) - Phimean
_, Y_plot = get_regression_dataset(dataset, X=X_plot, noise=0, w=w)
Y_plot -= ymean
# PLOTS
plt.plot(X_plot, student.predict(Phi_plot), 'g-', label="Student")
plt.plot(X_plot, normal.predict(Phi_plot), 'r-', label="Normal")
plt.plot(X_plot, Y_plot, 'b--', label="True function")
plt.scatter(X, y, edgecolor='b', s=20)
plt.xlabel("x")
plt.ylabel("y")
plt.xlim((-0.5, 1.5))
plt.ylim((-1 + np.min(Y_plot), 1 + np.max(Y_plot)))
plt.legend(loc="upper left", ncol=4)
plt.show()
interact(probabilistic_regression, dataset=['cos', 'sinc', 'linear', 'linear-features'],
nu=ipywidgets.FloatLogSlider(value=1, min=-2, max=4, step=0.01, readout_format='.4f',
description='Nu:', continuous_update=False),
n_samples=ipywidgets.IntSlider(value=300, min=30, max=1500, step=1,
description='Samples:', continuous_update=False),
degree=ipywidgets.IntSlider(value=1, min=1, max=15, step=1,
description='Degree:', continuous_update=False),
noise=ipywidgets.FloatSlider(value=0.1, min=0, max=1, step=0.01, readout_format='.2f',
description='Noise level:', continuous_update=False),
alpha=ipywidgets.BoundedFloatText(value=0, min=0, max=1000, step=0.0001,
description='Reg Coef.:', continuous_update=False),
noise_type=['gaussian', 'heavy-tailed']
);
```
# Probabilistic Classification (Logistic Regression)
```
rcParams['figure.figsize'] = (20, 6)
rcParams['font.size'] = 22
num_points_w = ipywidgets.IntSlider(value=300, min=30, max=1500, step=1, description='Number of samples:',
style={'description_width': 'initial'}, continuous_update=False)
noise_w = ipywidgets.FloatSlider(value=0.1, min=0, max=1, step=0.01, readout_format='.2f', description='Noise level:',
style={'description_width': 'initial'}, continuous_update=False)
reg_w = ipywidgets.BoundedFloatText(value=0, min=0, max=1000, step=0.0001, description='Regularization:',
style={'description_width': 'initial'}, continuous_update=False)
batch_size_w = ipywidgets.IntSlider(value=16, min=1, max=64, step=1, description='Batch Size:',
style={'description_width': 'initial'}, continuous_update=False)
lr_w = ipywidgets.FloatLogSlider(value=0.3, min=-4, max=1, step=0.1, readout_format='.4f', description='Learning Rate:',
style={'description_width': 'initial'}, continuous_update=False)
num_iter_w = ipywidgets.IntSlider(value=50, min=10, max=200, step=1, description='Num Iter:',
style={'description_width': 'initial'}, continuous_update=False)
def logistic_SGD(dataset, num_points, noise, reg, batch_size, lr, num_iter):
# np.random.seed(42)
# DATASET
X, Y = get_classification_dataset(dataset, num_points, noise)
Y = 2 * Y - 1
if X.shape[1] == 2:
ones = np.ones((X.shape[0], 1))
X = np.concatenate((X, ones), axis=-1)
Xtest, Ytest = get_classification_dataset(dataset, int(0.1 * num_points), noise)
Ytest = 2 * Ytest - 1
if Xtest.shape[1] == 2:
ones = np.ones((Xtest.shape[0], 1))
Xtest = np.concatenate((Xtest, ones), axis=-1)
indexes = np.arange(0, X.shape[0], 1)
np.random.shuffle(indexes)
X, Y = X[indexes], Y[indexes]
# REGRESSION
classifier = Logistic(X, Y)
classifier.load_test_data(Xtest, Ytest)
regularizer = L2Regularizer(reg)
np.random.seed(42)
w0 = np.random.randn(3, )
opts = {'eta0': lr,
'n_iter': num_iter,
'batch_size': min(batch_size, X.shape[0]),
'n_samples': X.shape[0],
'algorithm': 'SGD',
}
try:
trajectory, indexes = gradient_descent(w0, classifier, regularizer, opts)
# PLOTS
contour_plot = plt.subplot(121)
error_plot = plt.subplot(122)
opt = {'marker': 'ro', 'fillstyle': 'full', 'label': '+ Train', 'size': 8}
plot_helpers.plot_data(X[np.where(Y == 1)[0], 0], X[np.where(Y == 1)[0], 1], fig=contour_plot, options=opt)
opt = {'marker': 'bs', 'fillstyle': 'full', 'label': '- Train', 'size': 8}
plot_helpers.plot_data(X[np.where(Y == -1)[0], 0], X[np.where(Y == -1)[0], 1], fig=contour_plot, options=opt)
opt = {'marker': 'ro', 'fillstyle': 'none', 'label': '+ Test', 'size': 8}
plot_helpers.plot_data(Xtest[np.where(Ytest == 1)[0], 0], Xtest[np.where(Ytest == 1)[0], 1], fig=contour_plot, options=opt)
opt = {'marker': 'bs', 'fillstyle': 'none', 'label': '- Test', 'size': 8}
plot_helpers.plot_data(Xtest[np.where(Ytest == -1)[0], 0], Xtest[np.where(Ytest == -1)[0], 1], fig=contour_plot, options=opt)
contour_opts = {'n_points': 100, 'x_label': '$x$', 'y_label': '$y$', 'sgd_point': True, 'n_classes': 4}
error_opts = {'epoch': 5, 'x_label': '$t$', 'y_label': 'error'}
opts = {'contour_opts': contour_opts, 'error_opts': error_opts}
plot_helpers.classification_progression(X, Y, trajectory, indexes, classifier,
contour_plot=contour_plot, error_plot=error_plot,
options=opts)
except KeyboardInterrupt:
pass
interact_manual(logistic_SGD, dataset=['linear', 'moons', 'circles', 'imbalanced'],
num_points=num_points_w, noise=noise_w, reg=reg_w, batch_size=batch_size_w,
lr=lr_w, num_iter=num_iter_w);
```
# Multi-class Logistic Regression
```
rcParams['figure.figsize'] = (20, 15)
rcParams['font.size'] = 16
def multi_class_lr(dataset):
# DATASET
X, y = get_classification_dataset(dataset, 200)
X = X[:, :2]
# REGRESSION
model = LogisticRegression().fit(X, y)
# PREDICT
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = .02 # step size in the mesh
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
xy = np.c_[xx.ravel(), yy.ravel()]
C = model.predict(xy)
P = model.predict_proba(xy)
H = -(P * model.predict_log_proba(xy)).sum(axis=1)
PP = P[:, 1]
P = P.max(axis=1)
# Put the result into a color plot
C = C.reshape(xx.shape)
P = P.reshape(xx.shape)
PP = PP.reshape(xx.shape)
H = H.reshape(xx.shape)
# PLOTS
fig, axes = plt.subplots(2, 2)
axes[0, 0].set_title('Classification Boundary')
axes[0, 0].contourf(xx, yy, C, cmap=plt.cm.jet, alpha=0.5)
axes[0, 1].set_title('Prediction Probabilities')
cf = axes[0, 1].contourf(xx, yy, P, cmap=plt.cm.cividis_r, alpha=0.5, vmin=1. / len(np.unique(y)), vmax=1)
m = plt.cm.ScalarMappable(cmap=plt.cm.cividis_r)
m.set_array(P)
m.set_clim(1. / len(np.unique(y)), 1.)
cbar = plt.colorbar(m, ax=axes[0, 1])
axes[1, 0].set_title('Probabilistic Boundary')
if len(np.unique(C)) == 2:
axes[1, 0].contourf(xx, yy, PP, cmap=plt.cm.jet, alpha=0.5)
else:
axes[1, 0].contourf(xx, yy, P * C, cmap=plt.cm.jet, alpha=0.5)
axes[1, 1].set_title('Entropy')
cf = axes[1, 1].contourf(xx, yy, H, cmap=plt.cm.cividis_r, alpha=0.5)
# Plot also the training points
plt.colorbar(cf, ax=axes[1, 1])
for row in axes:
for ax in row:
ax.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.jet)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xticks(())
ax.set_yticks(())
plt.show()
interact(multi_class_lr, dataset=['3-blobs', '4-blobs', 'iris', 'linear', 'imbalanced', '2-blobs', 'circles', 'moons']);
```
# Doubtful Logistic Regression
```
rcParams['figure.figsize'] = (20, 6)
rcParams['font.size'] = 16
def doubtful_logistic_regression(dataset, min_prob):
np.random.seed(42)
# DATASET
X, y = get_classification_dataset(dataset, 200)
X = X[:, :2]
# REGRESSION
model = LogisticRegression().fit(X, y)
# PREDICT
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = .02 # step size in the mesh
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
xy = np.c_[xx.ravel(), yy.ravel()]
P = model.predict_proba(xy)
C = 2 * model.predict(xy)
H = -(model.predict_log_proba(xy) * P).sum(axis=1)
P = P.max(axis=1)
    # Doubtful step
C[np.where(P < min_prob)[0]] = 1
C = C.reshape(xx.shape)
P = P.reshape(xx.shape)
H = H.reshape(xx.shape)
# PLOTS
fig, axes = plt.subplots(1, 2)
axes[0].set_title('Classification Boundary')
axes[0].contourf(xx, yy, C, cmap=plt.cm.jet, alpha=0.5)
axes[1].set_title('Probability')
cf = axes[1].contourf(xx, yy, P, cmap=plt.cm.cividis_r, alpha=0.5)
m = plt.cm.ScalarMappable(cmap=plt.cm.cividis_r)
m.set_array(P)
m.set_clim(1. / len(np.unique(y)), 1.)
cbar = plt.colorbar(m, ax=axes[1])
# Plot also the training points
for ax in axes:
ax.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.jet)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xticks(())
ax.set_yticks(())
plt.show()
interact(doubtful_logistic_regression, dataset=['linear', 'imbalanced', '2-blobs', '3-blobs', '4-blobs', 'circles', 'moons', 'iris'],
min_prob=ipywidgets.FloatSlider(value=0.75, min=0.25, max=1, step=0.01, continuous_update=False));
```
# Cost Sensitive Classification (Logistic Regression)
```
rcParams['figure.figsize'] = (20,8)
rcParams['font.size'] = 16
def cost_sensitive_logistic_regression(dataset, cost_ratio):
# cost_ratio = cost_false_positive / cost_false_negative
np.random.seed(0)
min_positive_prob = 1 / (1 + cost_ratio)
# DATASET
X, y = get_classification_dataset(dataset, 200)
X = X[:, :2]
# REGRESSION
model = LogisticRegression().fit(X, y)
# PREDICT
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = .02 # step size in the mesh
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
xy = np.c_[xx.ravel(), yy.ravel()]
P = model.predict_proba(xy)
C = 2 * model.predict(xy)
H = -(model.predict_log_proba(xy) * P).sum(axis=1)
# Cost Sensitive Step
C[np.where(P[:, 1] < min_positive_prob)[0]] = 0
C[np.where(P[:, 1] >= min_positive_prob)[0]] = 1
P = P.max(axis=1)
C = C.reshape(xx.shape)
P = P.reshape(xx.shape)
H = H.reshape(xx.shape)
# PLOTS
fig, axes = plt.subplots(1, 2)
axes[0].set_title('Classification Boundary')
axes[0].contourf(xx, yy, C, cmap=plt.cm.jet, alpha=0.5, vmin=0, vmax=1)
axes[1].set_title('Prediction Probabilities')
cf = axes[1].contourf(xx, yy, P, cmap=plt.cm.cividis_r, alpha=0.5, vmin=1. / len(np.unique(y)), vmax=1)
m = plt.cm.ScalarMappable(cmap=plt.cm.cividis_r)
m.set_array(P)
m.set_clim(1. / len(np.unique(y)), 1.)
cbar = plt.colorbar(m, ax=axes[1])
for ax in axes:
ax.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.jet, vmin=0, vmax=1)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xticks(())
ax.set_yticks(())
plt.show()
interact(cost_sensitive_logistic_regression,
dataset=['linear', 'imbalanced', '2-blobs', 'moons'],
cost_ratio=ipywidgets.FloatLogSlider(value=1, min=-3, max=4, step=0.1, continuous_update=False));
```
# Cost-Sensitive Linear Regression
```
rcParams['figure.figsize'] = (10, 6)
rcParams['font.size'] = 16
def cost_sensitive_linear_regression(dataset, over_estimation_cost_ratio, degree, alpha, n_samples, noise):
np.random.seed(42)
ratio = 1 / (1 + over_estimation_cost_ratio)
# DATASET
w_star = np.array([1, 0.2, -0.3, 4])
X = np.sort(np.random.rand(n_samples))
_, f = get_regression_dataset(dataset, n_samples=200, X=X, noise=0, w=w_star)
_, y = get_regression_dataset(dataset, n_samples=200, X=X, noise=noise, w=w_star)
# REGRESSION
Phi = PolynomialFeatures(degree=degree, include_bias=True).fit_transform(np.atleast_2d(X).T)
w_hat = Ridge(alpha=alpha, fit_intercept=False).fit(Phi, y).coef_
# PREDICT
X_test = np.linspace(-1, 2, 100)
_, f_test = get_regression_dataset(dataset, n_samples=200, X=X_test, noise=0, w=w_star)
Phi_test = PolynomialFeatures(degree=degree, include_bias=True).fit_transform(np.atleast_2d(X_test).T)
y_equal = Phi_test @ w_hat
# COST SENSITIVITY
y_sensitive = y_equal + noise * np.sqrt(2) * erfinv(2 * ratio - 1)
# PLOT
plt.plot(X, y, '*')
plt.plot(X_test, y_sensitive, label='Cost Sensitive')
plt.plot(X_test, y_equal, label='Linear Regression')
plt.plot(X_test, f_test, label='True Function')
plt.legend(loc='upper left', ncol=4)
plt.ylim(-2, 2);
interact(cost_sensitive_linear_regression, dataset=['cos', 'sinc', 'linear', 'linear-features'],
over_estimation_cost_ratio=ipywidgets.FloatLogSlider(value=0.1, min=-3, max=3, step=0.1,
readout_format='.4f',
description='Ratio:', continuous_update=False),
n_samples=ipywidgets.IntSlider(value=30, min=30, max=1500, step=1,
description='N Samples:', continuous_update=False),
degree=ipywidgets.IntSlider(value=1, min=1, max=9, step=1,
description='Poly Degree:', continuous_update=False),
alpha=ipywidgets.BoundedFloatText(value=0, min=0, max=1000, step=0.0001,
description='Reg Coef.:', continuous_update=False),
noise=ipywidgets.FloatSlider(value=0.3, min=0, max=1, step=0.01, readout_format='.2f',
description='Noise level:', continuous_update=False)
);
```
# Uncertainty Sampling in Logistic Regression
```
rcParams['figure.figsize'] = (16, 5)
rcParams['font.size'] = 16
queried_set = {}
def uncertainty_sampling(dataset, criterion, noise):
query_button = ipywidgets.Button(description="Query new point")
update_button = ipywidgets.Button(description="Update Model")
restart_button = ipywidgets.Button(description="Restart")
X, Y = get_classification_dataset(dataset, 200, noise=noise)
num_classes = len(np.unique(Y)) - 1
X = X[:, :2]
indexes = np.arange(X.shape[0])
index_set = set([i for i in indexes])
def plot(model, X, Y, queried_set, next_idx=None, display_query=True):
neg_i = np.where(Y == 0)[0]
pos_i = np.where(Y == 1)[0]
queried_idx = [i for i in queried_set]
non_queried_idx = [i for i in index_set.difference(queried_set)]
qX, qY = X[queried_idx], Y[queried_idx]
nX, nY = X[non_queried_idx], Y[non_queried_idx]
# Model prediction contours.
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = .02
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
xy = np.c_[xx.ravel(), yy.ravel()]
P = model.predict_proba(xy).max(axis=1).reshape(xx.shape)
C = model.predict(xy).reshape(xx.shape)
H = -(model.predict_proba(xy) * model.predict_log_proba(xy)).sum(axis=1).reshape(xx.shape)
# PLOTS
fig, axes = plt.subplots(1, 2)
axes[0].set_title('Classification Boundary')
axes[0].contourf(xx, yy, C, cmap=plt.cm.jet, alpha=0.5, vmin=0, vmax=num_classes)
if criterion == 'max-entropy':
axes[1].set_title('Entropy')
cf = axes[1].contourf(xx, yy, H, cmap=plt.cm.cividis_r, alpha=0.5)
m = plt.cm.ScalarMappable(cmap=plt.cm.cividis_r)
m.set_array(H)
cbar = plt.colorbar(m, ax=axes[1])
cbar.set_label('Predicted Entropy', rotation=270, labelpad=20)
elif criterion == 'min-probability':
axes[1].set_title('Probability')
cf = axes[1].contourf(xx, yy, P, cmap=plt.cm.cividis_r, alpha=0.5)
m = plt.cm.ScalarMappable(cmap=plt.cm.cividis_r)
m.set_array(P)
cbar = plt.colorbar(m, ax=axes[1])
cbar.set_label('Predicted Probability', rotation=270, labelpad=20)
# Plot also the training points
for ax in axes:
ax.scatter(qX[:, 0], qX[:, 1], c=qY, marker='o', s=200, cmap=plt.cm.jet, vmin=0, vmax=num_classes)
ax.scatter(nX[:, 0], nX[:, 1], c=nY, marker='o', alpha=0.3, s=20, cmap=plt.cm.jet, vmin=0, vmax=num_classes)
if next_idx is not None:
ax.scatter(X[[next_idx], 0], X[[next_idx], 1], c=Y[[next_idx]], s=400, marker='*',
cmap=plt.cm.jet, vmin=0, vmax=num_classes)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xticks(())
ax.set_yticks(())
IPython.display.clear_output(wait=True)
IPython.display.display(plt.gcf())
plt.close()
if display_query:
display(query_button)
else:
display(update_button)
display(restart_button)
def update_model(b):
global queried_set, model
queried_idx = [i for i in queried_set]
model = LogisticRegression(C=10).fit(X[queried_idx], Y[queried_idx])
plot(model, X, Y, queried_set, next_idx=None, display_query=True)
def restart(b):
global queried_set
queried_set = set()
classes = np.unique(Y)
for c in classes:
i = np.random.choice(np.where(Y == c)[0])
queried_set.add(i)
update_model(None)
def append_point(b):
global queried_set, model
query_points = X
probs = model.predict_proba(X).max(axis=1)
H = model.predict_log_proba(X) * model.predict_proba(X)
H = H.sum(axis=1)
queried_idx = [i for i in queried_set]
probs[queried_idx] = float('Inf')
H[queried_idx] = float('Inf')
if criterion == 'max-entropy':
i = np.argmin(H)
elif criterion == 'min-probability':
i = np.argmin(probs)
plot(model, X, Y, queried_set, i, display_query=False)
queried_set.add(i)
query_button.on_click(append_point)
update_button.on_click(update_model)
restart_button.on_click(restart)
restart(None);
interact(uncertainty_sampling,
dataset=['linear', 'imbalanced', '2-blobs', '3-blobs', '4-blobs', 'iris', 'circles', 'moons'],
criterion=['min-probability', 'max-entropy'],
noise=ipywidgets.FloatSlider(value=0.25, min=0, max=1, step=0.01, readout_format='.2f',
continuous_update=False));
```
|
github_jupyter
|
# Code source: Sebastian Curi and Andreas Krause, based on Jaques Grobler (sklearn demos).
# License: BSD 3 clause
# We start importing some modules and running some magic commands
%matplotlib inline
%reload_ext autoreload
%load_ext autoreload
%autoreload 2
# General math and plotting modules.
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from scipy.special import erfinv
# Project files.
from utilities.util import gradient_descent
from utilities.classifiers import Logistic
from utilities.regressors import TStudent
from utilities.regularizers import L2Regularizer
from utilities.load_data import polynomial_data, linear_separable_data
from utilities import plot_helpers
# Widget and formatting modules
import IPython
import ipywidgets
from ipywidgets import interact, interactive, interact_manual, fixed
from matplotlib import rcParams
# If in your browser the figures are not nicely vizualized, change the following line.
rcParams['figure.figsize'] = (10, 5)
rcParams['font.size'] = 16
# Machine Learning library.
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn import datasets
from sklearn.linear_model import SGDRegressor, Ridge, LogisticRegression
from sklearn.model_selection import cross_val_score
def get_regression_dataset(dataset, X=None, n_samples=200, noise=0, w=None):
if X is None:
X = np.random.randn(n_samples)
if dataset == 'cos':
Y = np.cos(1.5 * np.pi * X) + noise * np.random.randn(X.shape[0])
elif dataset == 'sinc':
Y = X * np.sin(1.5 * np.pi * X) + noise * np.random.randn(X.shape[0])
elif dataset == 'linear':
X = np.atleast_2d(X).T
Phi = PolynomialFeatures(degree=1, include_bias=True).fit_transform(X)
Y = Phi @ w[:2] + noise * np.random.randn(X.shape[0])
elif dataset == 'linear-features':
X = np.atleast_2d(X).T
Phi = PolynomialFeatures(degree=len(w) - 1, include_bias=True).fit_transform(X)
Y = Phi @ w + noise * np.random.randn(X.shape[0])
return X, Y
def get_classification_dataset(dataset, n_samples=200, noise=0.3):
if dataset == 'linear':
X, Y = linear_separable_data(n_samples, noise=noise, dim=2)
Y = (Y + 1) // 2
elif dataset == '2-blobs':
X, Y = datasets.make_classification(n_classes=2, n_features=2, n_informative=2, n_redundant=0,
n_clusters_per_class=1, n_samples=n_samples, random_state=8)
elif dataset == '3-blobs':
X, Y = datasets.make_classification(n_classes=3, n_features=2, n_informative=2, n_redundant=0,
n_clusters_per_class=1, n_samples=n_samples, random_state=8)
elif dataset == '4-blobs':
X, Y = datasets.make_classification(n_classes=4, n_features=2, n_informative=2, n_redundant=0,
n_clusters_per_class=1, n_samples=n_samples, random_state=8)
elif dataset == 'circles':
X, Y = datasets.make_circles(n_samples=n_samples, factor=.5, noise=.05)
elif dataset == 'moons':
X, Y = datasets.make_moons(n_samples=n_samples, noise=.05)
elif dataset == 'iris':
X, Y = datasets.load_iris(return_X_y=True)
X = X[:, :2]
elif dataset == 'imbalanced':
X, Y = linear_separable_data(n_samples, noise=noise, dim=2, num_negative=int(n_samples * 0.2))
Y = (Y + 1) // 2
return X, Y
rcParams['figure.figsize'] = (10, 6)
rcParams['font.size'] = 16
def probabilistic_regression(dataset, nu, n_samples, degree, alpha, noise, noise_type):
np.random.seed(0)
# DATASET
w = np.random.randn(1 + degree)
X = np.sort(np.random.rand(n_samples))
_, y = get_regression_dataset(dataset, X=X, noise=0, w=w)
ymean = np.mean(y)
if noise_type == 'gaussian':
y += noise * np.random.randn(*y.shape)
elif noise_type == 'heavy-tailed':
y += noise * np.random.standard_cauchy(*y.shape)
y = y - np.mean(y)
# REGRESSION
polynomial_features = PolynomialFeatures(degree=degree, include_bias=False)
Phi = polynomial_features.fit_transform(X[:, np.newaxis])
Phimean = Phi.mean(axis=0)
normal = Ridge(alpha=alpha)
normal.fit(Phi - Phimean, y)
student = TStudent(x=Phi - Phimean, y=y, nu=nu, sigma=noise)
regularizer = L2Regularizer(alpha, include_bias=False)
opts = {'eta0': 0.1, 'n_iter': 1000, 'batch_size': min(n_samples, 64), 'n_samples': X.shape[0],
'algorithm': 'SGD'}
gradient_descent(normal.coef_, student, regularizer, opts=opts)
# PREDICT
X_plot = np.linspace(-1, 2, 100)
Phi_plot = polynomial_features.fit_transform(X_plot[:, np.newaxis]) - Phimean
_, Y_plot = get_regression_dataset(dataset, X=X_plot, noise=0, w=w)
Y_plot -= ymean
# PLOTS
plt.plot(X_plot, student.predict(Phi_plot), 'g-', label="Student")
plt.plot(X_plot, normal.predict(Phi_plot), 'r-', label="Normal")
plt.plot(X_plot, Y_plot, 'b--', label="True function")
plt.scatter(X, y, edgecolor='b', s=20)
plt.xlabel("x")
plt.ylabel("y")
plt.xlim((-0.5, 1.5))
plt.ylim((-1 + np.min(Y_plot), 1 + np.max(Y_plot)))
plt.legend(loc="upper left", ncol=4)
plt.show()
interact(probabilistic_regression, dataset=['cos', 'sinc', 'linear', 'linear-features'],
nu=ipywidgets.FloatLogSlider(value=1, min=-2, max=4, step=0.01, readout_format='.4f',
description='Nu:', continuous_update=False),
n_samples=ipywidgets.IntSlider(value=300, min=30, max=1500, step=1,
description='Samples:', continuous_update=False),
degree=ipywidgets.IntSlider(value=1, min=1, max=15, step=1,
description='Degree:', continuous_update=False),
noise=ipywidgets.FloatSlider(value=0.1, min=0, max=1, step=0.01, readout_format='.2f',
description='Noise level:', continuous_update=False),
alpha=ipywidgets.BoundedFloatText(value=0, min=0, max=1000, step=0.0001,
description='Reg Coef.:', continuous_update=False),
noise_type=['gaussian', 'heavy-tailed']
);
rcParams['figure.figsize'] = (20, 6)
rcParams['font.size'] = 22
num_points_w = ipywidgets.IntSlider(value=300, min=30, max=1500, step=1, description='Number of samples:',
style={'description_width': 'initial'}, continuous_update=False)
noise_w = ipywidgets.FloatSlider(value=0.1, min=0, max=1, step=0.01, readout_format='.2f', description='Noise level:',
style={'description_width': 'initial'}, continuous_update=False)
reg_w = ipywidgets.BoundedFloatText(value=0, min=0, max=1000, step=0.0001, description='Regularization:',
style={'description_width': 'initial'}, continuous_update=False)
batch_size_w = ipywidgets.IntSlider(value=16, min=1, max=64, step=1, description='Batch Size:',
style={'description_width': 'initial'}, continuous_update=False)
lr_w = ipywidgets.FloatLogSlider(value=0.3, min=-4, max=1, step=0.1, readout_format='.4f', description='Learning Rate:',
style={'description_width': 'initial'}, continuous_update=False)
num_iter_w = ipywidgets.IntSlider(value=50, min=10, max=200, step=1, description='Num Iter:',
style={'description_width': 'initial'}, continuous_update=False)
def logistic_SGD(dataset, num_points, noise, reg, batch_size, lr, num_iter):
# np.random.seed(42)
# DATASET
X, Y = get_classification_dataset(dataset, num_points, noise)
Y = 2 * Y - 1
if X.shape[1] == 2:
ones = np.ones((X.shape[0], 1))
X = np.concatenate((X, ones), axis=-1)
Xtest, Ytest = get_classification_dataset(dataset, int(0.1 * num_points), noise)
Ytest = 2 * Ytest - 1
if Xtest.shape[1] == 2:
ones = np.ones((Xtest.shape[0], 1))
Xtest = np.concatenate((Xtest, ones), axis=-1)
indexes = np.arange(0, X.shape[0], 1)
np.random.shuffle(indexes)
X, Y = X[indexes], Y[indexes]
# REGRESSION
classifier = Logistic(X, Y)
classifier.load_test_data(Xtest, Ytest)
regularizer = L2Regularizer(reg)
np.random.seed(42)
w0 = np.random.randn(3, )
opts = {'eta0': lr,
'n_iter': num_iter,
'batch_size': min(batch_size, X.shape[0]),
'n_samples': X.shape[0],
'algorithm': 'SGD',
}
try:
trajectory, indexes = gradient_descent(w0, classifier, regularizer, opts)
# PLOTS
contour_plot = plt.subplot(121)
error_plot = plt.subplot(122)
opt = {'marker': 'ro', 'fillstyle': 'full', 'label': '+ Train', 'size': 8}
plot_helpers.plot_data(X[np.where(Y == 1)[0], 0], X[np.where(Y == 1)[0], 1], fig=contour_plot, options=opt)
opt = {'marker': 'bs', 'fillstyle': 'full', 'label': '- Train', 'size': 8}
plot_helpers.plot_data(X[np.where(Y == -1)[0], 0], X[np.where(Y == -1)[0], 1], fig=contour_plot, options=opt)
opt = {'marker': 'ro', 'fillstyle': 'none', 'label': '+ Test', 'size': 8}
plot_helpers.plot_data(Xtest[np.where(Ytest == 1)[0], 0], Xtest[np.where(Ytest == 1)[0], 1], fig=contour_plot, options=opt)
opt = {'marker': 'bs', 'fillstyle': 'none', 'label': '- Test', 'size': 8}
plot_helpers.plot_data(Xtest[np.where(Ytest == -1)[0], 0], Xtest[np.where(Ytest == -1)[0], 1], fig=contour_plot, options=opt)
contour_opts = {'n_points': 100, 'x_label': '$x$', 'y_label': '$y$', 'sgd_point': True, 'n_classes': 4}
error_opts = {'epoch': 5, 'x_label': '$t$', 'y_label': 'error'}
opts = {'contour_opts': contour_opts, 'error_opts': error_opts}
plot_helpers.classification_progression(X, Y, trajectory, indexes, classifier,
contour_plot=contour_plot, error_plot=error_plot,
options=opts)
except KeyboardInterrupt:
pass
interact_manual(logistic_SGD, dataset=['linear', 'moons', 'circles', 'imbalanced'],
num_points=num_points_w, noise=noise_w, reg=reg_w, batch_size=batch_size_w,
lr=lr_w, num_iter=num_iter_w);
rcParams['figure.figsize'] = (20, 15)
rcParams['font.size'] = 16
def multi_class_lr(dataset):
# DATASET
X, y = get_classification_dataset(dataset, 200)
X = X[:, :2]
# REGRESSION
model = LogisticRegression().fit(X, y)
# PREDICT
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = .02 # step size in the mesh
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
xy = np.c_[xx.ravel(), yy.ravel()]
C = model.predict(xy)
P = model.predict_proba(xy)
H = -(P * model.predict_log_proba(xy)).sum(axis=1)
PP = P[:, 1]
P = P.max(axis=1)
# Put the result into a color plot
C = C.reshape(xx.shape)
P = P.reshape(xx.shape)
PP = PP.reshape(xx.shape)
H = H.reshape(xx.shape)
# PLOTS
fig, axes = plt.subplots(2, 2)
axes[0, 0].set_title('Classification Boundary')
axes[0, 0].contourf(xx, yy, C, cmap=plt.cm.jet, alpha=0.5)
axes[0, 1].set_title('Prediction Probabilities')
cf = axes[0, 1].contourf(xx, yy, P, cmap=plt.cm.cividis_r, alpha=0.5, vmin=1. / len(np.unique(y)), vmax=1)
m = plt.cm.ScalarMappable(cmap=plt.cm.cividis_r)
m.set_array(P)
m.set_clim(1. / len(np.unique(y)), 1.)
cbar = plt.colorbar(m, ax=axes[0, 1])
axes[1, 0].set_title('Probabilistic Boundary')
if len(np.unique(C)) == 2:
axes[1, 0].contourf(xx, yy, PP, cmap=plt.cm.jet, alpha=0.5)
else:
axes[1, 0].contourf(xx, yy, P * C, cmap=plt.cm.jet, alpha=0.5)
axes[1, 1].set_title('Entropy')
cf = axes[1, 1].contourf(xx, yy, H, cmap=plt.cm.cividis_r, alpha=0.5)
# Plot also the training points
plt.colorbar(cf, ax=axes[1, 1])
for row in axes:
for ax in row:
ax.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.jet)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xticks(())
ax.set_yticks(())
plt.show()
interact(multi_class_lr, dataset=['3-blobs', '4-blobs', 'iris', 'linear', 'imbalanced', '2-blobs', 'circles', 'moons']);
rcParams['figure.figsize'] = (20, 6)
rcParams['font.size'] = 16
def doubtful_logistic_regression(dataset, min_prob):
np.random.seed(42)
# DATASET
X, y = get_classification_dataset(dataset, 200)
X = X[:, :2]
# REGRESSION
model = LogisticRegression().fit(X, y)
# PREDICT
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = .02 # step size in the mesh
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
xy = np.c_[xx.ravel(), yy.ravel()]
P = model.predict_proba(xy)
C = 2 * model.predict(xy)
H = -(model.predict_log_proba(xy) * P).sum(axis=1)
P = P.max(axis=1)
    # Doubtful step: low-confidence predictions are assigned to a separate "doubt" class
C[np.where(P < min_prob)[0]] = 1
C = C.reshape(xx.shape)
P = P.reshape(xx.shape)
H = H.reshape(xx.shape)
# PLOTS
fig, axes = plt.subplots(1, 2)
axes[0].set_title('Classification Boundary')
axes[0].contourf(xx, yy, C, cmap=plt.cm.jet, alpha=0.5)
axes[1].set_title('Probability')
cf = axes[1].contourf(xx, yy, P, cmap=plt.cm.cividis_r, alpha=0.5)
m = plt.cm.ScalarMappable(cmap=plt.cm.cividis_r)
m.set_array(P)
m.set_clim(1. / len(np.unique(y)), 1.)
cbar = plt.colorbar(m, ax=axes[1])
# Plot also the training points
for ax in axes:
ax.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.jet)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xticks(())
ax.set_yticks(())
plt.show()
interact(doubtful_logistic_regression, dataset=['linear', 'imbalanced', '2-blobs', '3-blobs', '4-blobs', 'circles', 'moons', 'iris'],
min_prob=ipywidgets.FloatSlider(value=0.75, min=0.25, max=1, step=0.01, continuous_update=False));
rcParams['figure.figsize'] = (20,8)
rcParams['font.size'] = 16
def cost_sensitive_logistic_regression(dataset, cost_ratio):
# cost_ratio = cost_false_positive / cost_false_negative
np.random.seed(0)
min_positive_prob = 1 / (1 + cost_ratio)
# DATASET
X, y = get_classification_dataset(dataset, 200)
X = X[:, :2]
# REGRESSION
model = LogisticRegression().fit(X, y)
# PREDICT
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = .02 # step size in the mesh
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
xy = np.c_[xx.ravel(), yy.ravel()]
P = model.predict_proba(xy)
C = 2 * model.predict(xy)
H = -(model.predict_log_proba(xy) * P).sum(axis=1)
# Cost Sensitive Step
C[np.where(P[:, 1] < min_positive_prob)[0]] = 0
C[np.where(P[:, 1] >= min_positive_prob)[0]] = 1
P = P.max(axis=1)
C = C.reshape(xx.shape)
P = P.reshape(xx.shape)
H = H.reshape(xx.shape)
# PLOTS
fig, axes = plt.subplots(1, 2)
axes[0].set_title('Classification Boundary')
axes[0].contourf(xx, yy, C, cmap=plt.cm.jet, alpha=0.5, vmin=0, vmax=1)
axes[1].set_title('Prediction Probabilities')
cf = axes[1].contourf(xx, yy, P, cmap=plt.cm.cividis_r, alpha=0.5, vmin=1. / len(np.unique(y)), vmax=1)
m = plt.cm.ScalarMappable(cmap=plt.cm.cividis_r)
m.set_array(P)
m.set_clim(1. / len(np.unique(y)), 1.)
cbar = plt.colorbar(m, ax=axes[1])
for ax in axes:
ax.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.jet, vmin=0, vmax=1)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xticks(())
ax.set_yticks(())
plt.show()
interact(cost_sensitive_logistic_regression,
dataset=['linear', 'imbalanced', '2-blobs', 'moons'],
cost_ratio=ipywidgets.FloatLogSlider(value=1, min=-3, max=4, step=0.1, continuous_update=False));
rcParams['figure.figsize'] = (10, 6)
rcParams['font.size'] = 16
def cost_sensitive_linear_regression(dataset, over_estimation_cost_ratio, degree, alpha, n_samples, noise):
np.random.seed(42)
ratio = 1 / (1 + over_estimation_cost_ratio)
# DATASET
w_star = np.array([1, 0.2, -0.3, 4])
X = np.sort(np.random.rand(n_samples))
_, f = get_regression_dataset(dataset, n_samples=200, X=X, noise=0, w=w_star)
_, y = get_regression_dataset(dataset, n_samples=200, X=X, noise=noise, w=w_star)
# REGRESSION
Phi = PolynomialFeatures(degree=degree, include_bias=True).fit_transform(np.atleast_2d(X).T)
w_hat = Ridge(alpha=alpha, fit_intercept=False).fit(Phi, y).coef_
# PREDICT
X_test = np.linspace(-1, 2, 100)
_, f_test = get_regression_dataset(dataset, n_samples=200, X=X_test, noise=0, w=w_star)
Phi_test = PolynomialFeatures(degree=degree, include_bias=True).fit_transform(np.atleast_2d(X_test).T)
y_equal = Phi_test @ w_hat
# COST SENSITIVITY
y_sensitive = y_equal + noise * np.sqrt(2) * erfinv(2 * ratio - 1)
# PLOT
plt.plot(X, y, '*')
plt.plot(X_test, y_sensitive, label='Cost Sensitive')
plt.plot(X_test, y_equal, label='Linear Regression')
plt.plot(X_test, f_test, label='True Function')
plt.legend(loc='upper left', ncol=4)
plt.ylim(-2, 2);
interact(cost_sensitive_linear_regression, dataset=['cos', 'sinc', 'linear', 'linear-features'],
over_estimation_cost_ratio=ipywidgets.FloatLogSlider(value=0.1, min=-3, max=3, step=0.1,
readout_format='.4f',
description='Ratio:', continuous_update=False),
n_samples=ipywidgets.IntSlider(value=30, min=30, max=1500, step=1,
description='N Samples:', continuous_update=False),
degree=ipywidgets.IntSlider(value=1, min=1, max=9, step=1,
description='Poly Degree:', continuous_update=False),
alpha=ipywidgets.BoundedFloatText(value=0, min=0, max=1000, step=0.0001,
description='Reg Coef.:', continuous_update=False),
noise=ipywidgets.FloatSlider(value=0.3, min=0, max=1, step=0.01, readout_format='.2f',
description='Noise level:', continuous_update=False)
);
rcParams['figure.figsize'] = (16, 5)
rcParams['font.size'] = 16
queried_set = {}
def uncertainty_sampling(dataset, criterion, noise):
query_button = ipywidgets.Button(description="Query new point")
update_button = ipywidgets.Button(description="Update Model")
restart_button = ipywidgets.Button(description="Restart")
X, Y = get_classification_dataset(dataset, 200, noise=noise)
num_classes = len(np.unique(Y)) - 1
X = X[:, :2]
indexes = np.arange(X.shape[0])
index_set = set([i for i in indexes])
def plot(model, X, Y, queried_set, next_idx=None, display_query=True):
neg_i = np.where(Y == 0)[0]
pos_i = np.where(Y == 1)[0]
queried_idx = [i for i in queried_set]
non_queried_idx = [i for i in index_set.difference(queried_set)]
qX, qY = X[queried_idx], Y[queried_idx]
nX, nY = X[non_queried_idx], Y[non_queried_idx]
# Model prediction contours.
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = .02
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
xy = np.c_[xx.ravel(), yy.ravel()]
P = model.predict_proba(xy).max(axis=1).reshape(xx.shape)
C = model.predict(xy).reshape(xx.shape)
H = -(model.predict_proba(xy) * model.predict_log_proba(xy)).sum(axis=1).reshape(xx.shape)
# PLOTS
fig, axes = plt.subplots(1, 2)
axes[0].set_title('Classification Boundary')
axes[0].contourf(xx, yy, C, cmap=plt.cm.jet, alpha=0.5, vmin=0, vmax=num_classes)
if criterion == 'max-entropy':
axes[1].set_title('Entropy')
cf = axes[1].contourf(xx, yy, H, cmap=plt.cm.cividis_r, alpha=0.5)
m = plt.cm.ScalarMappable(cmap=plt.cm.cividis_r)
m.set_array(H)
cbar = plt.colorbar(m, ax=axes[1])
cbar.set_label('Predicted Entropy', rotation=270, labelpad=20)
elif criterion == 'min-probability':
axes[1].set_title('Probability')
cf = axes[1].contourf(xx, yy, P, cmap=plt.cm.cividis_r, alpha=0.5)
m = plt.cm.ScalarMappable(cmap=plt.cm.cividis_r)
m.set_array(P)
cbar = plt.colorbar(m, ax=axes[1])
cbar.set_label('Predicted Probability', rotation=270, labelpad=20)
# Plot also the training points
for ax in axes:
ax.scatter(qX[:, 0], qX[:, 1], c=qY, marker='o', s=200, cmap=plt.cm.jet, vmin=0, vmax=num_classes)
ax.scatter(nX[:, 0], nX[:, 1], c=nY, marker='o', alpha=0.3, s=20, cmap=plt.cm.jet, vmin=0, vmax=num_classes)
if next_idx is not None:
ax.scatter(X[[next_idx], 0], X[[next_idx], 1], c=Y[[next_idx]], s=400, marker='*',
cmap=plt.cm.jet, vmin=0, vmax=num_classes)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xticks(())
ax.set_yticks(())
IPython.display.clear_output(wait=True)
IPython.display.display(plt.gcf())
plt.close()
if display_query:
display(query_button)
else:
display(update_button)
display(restart_button)
def update_model(b):
global queried_set, model
queried_idx = [i for i in queried_set]
model = LogisticRegression(C=10).fit(X[queried_idx], Y[queried_idx])
plot(model, X, Y, queried_set, next_idx=None, display_query=True)
def restart(b):
global queried_set
queried_set = set()
classes = np.unique(Y)
for c in classes:
i = np.random.choice(np.where(Y == c)[0])
queried_set.add(i)
update_model(None)
def append_point(b):
global queried_set, model
query_points = X
probs = model.predict_proba(X).max(axis=1)
H = model.predict_log_proba(X) * model.predict_proba(X)
H = H.sum(axis=1)
queried_idx = [i for i in queried_set]
probs[queried_idx] = float('Inf')
H[queried_idx] = float('Inf')
if criterion == 'max-entropy':
i = np.argmin(H)
elif criterion == 'min-probability':
i = np.argmin(probs)
plot(model, X, Y, queried_set, i, display_query=False)
queried_set.add(i)
query_button.on_click(append_point)
update_button.on_click(update_model)
restart_button.on_click(restart)
restart(None);
interact(uncertainty_sampling,
dataset=['linear', 'imbalanced', '2-blobs', '3-blobs', '4-blobs', 'iris', 'circles', 'moons'],
criterion=['min-probability', 'max-entropy'],
noise=ipywidgets.FloatSlider(value=0.25, min=0, max=1, step=0.01, readout_format='.2f',
continuous_update=False));
```
import os
import sys
import cv2
import pytesseract
from PIL import Image, ImageDraw,ImageFont
from pytesseract import Output
nb_dir = '/'.join(os.getcwd().split('/')[:-1])
sys.path.append(nb_dir)
sys.path.append(os.path.split(nb_dir)[0])
import config
import src.utilities.app_context as app_context
app_context.init()
# path to the craft models; the weights can be pulled from the production branch of the repo
config.CRAFT_MODEL_PATH= nb_dir + '/utilities/craft_pytorch/model/craft_mlt_25k.pth'
config.CRAFT_REFINE_MODEL_PATH = nb_dir + '/utilities/craft_pytorch/model/craft_refiner_CTW1500.pth'
from src.services.main import TextDetection
#base_dir = '/home/dhiraj/Documents/Anuwad/anuvaad/anuvaad-etl/anuvaad-extractor/block-merger/src/notebooks/sample-data/input'
base_dir = '/home/naresh/anuvaad/anuvaad-etl/anuvaad-extractor/document-processor/word-detector/craft/upload/'
filename = 'good.pdf'
#filename = 'hamlet_1.pdf'
file_format = 'pdf'
language = 'hi'
app_context.application_context = {
"input":{
"inputs": [
{
"file": {
"identifier": "string",
"name": filename,
"type": file_format
},
"config": {
"OCR": {
"option": "HIGH_ACCURACY",
"language": "hi"
}
}
}
]}
}
resp = TextDetection(app_context,base_dir)
import json
for k in range(2,5):
config.LANGUAGE_LINK_THRESOLDS['en']['link_threshold']=k*0.2
for i in range(3):
config.LANGUAGE_LINE_THRESOLDS['en']['low_text'] = i*0.2
for j in range(3):
config.LANGUAGE_LINE_THRESOLDS['en']['text_threshold'] = j*0.2
resp = TextDetection(app_context,base_dir)
file = "/home/naresh/word_detector/"+str(k*0.2)+"_"+str(i*0.2)+"_"+str(j*0.2)+"_.json"
json_object = json.dumps(resp, indent = 4)
with open(file, "w") as outfile:
outfile.write(json_object)
#json.dump(resp, out_file)
#print(resp['rsp']['outputs'][0]['page_info'][0])
            # note: draw_box, file_index, page_index and save_dir are defined in the cells below
            image = draw_box(resp,resp['rsp']['outputs'][file_index]['page_info'][page_index],save_dir ,str(k*0.2)+"_"+str(i*0.2)+"_"+str(j*0.2),color="green", save=True)
file_index = 0
page_index =0
filepath = resp['rsp']['outputs'][file_index]['page_info'][page_index]
save_dir = '/home/naresh/word_detector_benchmark/'
#resp['rsp']['outputs'][file_index]['pages'][page_index]['words'][0]
def draw_box(resp,filepath,save_dir, thresh,color="green", save=False):
image = Image.open(filepath)
draw = ImageDraw.Draw(image)
for i in resp['rsp']['outputs'][file_index]['pages'][page_index]['words']:
#font = ImageFont.truetype("sans-serif.ttf", 30)
draw.text((10, 10),thresh,(255,0,255))
print("kkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk",i)
draw.rectangle(((i['boundingBox']['vertices'][0]['x'], i['boundingBox']['vertices'][0]['y']), (i['boundingBox']['vertices'][2]['x'],i['boundingBox']['vertices'][2]['y'])), outline=color,width=3)
save_filepath = os.path.join(save_dir, "bbox_"+str(thresh)+os.path.basename(filepath))
if save:
image.save(save_filepath)
return image
#local draw box
def draw_box(resp,filepath,save_dir,color="green", save=False):
image = Image.open(filepath)
draw = ImageDraw.Draw(image)
for i in resp['rsp']['outputs'][file_index]['pages'][page_index]['words']:
draw.rectangle(((i['boundingBox']['vertices'][0]['x'], i['boundingBox']['vertices'][0]['y']), (i['boundingBox']['vertices'][2]['x'],i['boundingBox']['vertices'][2]['y'])), outline=color,width=3)
save_filepath = os.path.join(save_dir, "bbox_"+os.path.basename(filepath))
if save:
image.save(save_filepath)
return image
image =draw_box(resp,filepath,save_dir,color="green", save=True)
def draw_tess(filepath,save_dir):
img = cv2.imread(filepath)
name = filepath.split("/")[-1]
print(name)
h, w, c = img.shape
boxes = pytesseract.image_to_boxes(img)
#try:
d = pytesseract.image_to_data(img, output_type=Output.DICT)
n_boxes = len(d['text'])
for i in range(n_boxes):
if int(d['conf'][i]) > 1:
(x, y, w, h) = (d['left'][i], d['top'][i], d['width'][i], d['height'][i])
img = cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite(save_dir+name, img)
print(save_dir+name)
# except:
# pass
draw_tess(filepath,save_dir)
```
<small><small><i>
Introduction to Python for Bioinformatics - available at https://github.com/kipkurui/Python4Bioinformatics.
</i></small></small>
```
from IPython.display import HTML
```
# Control Flow Statements
The key thing to note about Python's control flow statements and program structure is that it uses _indentation_ to mark blocks. Hence the amount of white space (space or tab characters) at the start of a line is very important. This generally helps to make code more readable but can catch out new users of python.
## Conditionals
Conditionals in Python allow us to test conditions and change the program's behaviour depending on the outcome of those tests. The Boolean values `True` and `False` are used in conditionals.
### If
```python
if some_condition:
    code block
```
Take note of the **:** at the end of the condition. The indented statements that follow are called a
block. The first unindented statement marks the end of the block. Code is executed in blocks.
```
x = 12
if x > 10:
print("Hello")
```
### If-else
```python
if some_condition:
algorithm1
else:
    algorithm2
```
If the condition is True then algorithm1 is executed. If not, algorithm2 under the else clause is executed.
```
x = 12
if 10 < x < 11:
print("hello")
else:
print("world")
```
### Else if
Sometimes there are more than two possibilities and we need more than two branches. One way to express a computation like that is a **chained conditional**. You can have as many `elif` statements as you'd like, but there can be at most one `else` statement, and it must come at the end.
```python
if some_condition:
algorithm
elif some_condition:
algorithm
else:
    algorithm
```
```
x = 10
y = 12
if x > y:
print("x>y")
elif x < y:
print("x<y")
else:
print("x=y")
```
An `if` statement inside another `if` statement (or inside an `if-elif` or `if-else`) is called a nested `if` statement.
```
x = 10
y = 12
if x > y:
print( "x>y")
elif x < y:
print( "x<y")
if x==10:
print ("x=10")
else:
print ("invalid")
else:
print ("x=y")
```
## Loops
### For
Loops allow us to repeat some code a given number of times. For example, we can print an invitation to a party for each of our friends using a for loop. In this case, it keeps printing invitations until we have invited all our friends; that is the terminating condition of the loop.
```
names = ["Joe","Zoe","Brad","Angelina","Zuki","Thandi","Paris"]
for name in names:
invite = "Hi %s. Please come to my party on Saturday!" % name
print(invite)
```
In short:
```python
for variable in something:
    algorithm
```
When looping over integers, the **range()** function, which generates a range of integers, is useful (a short example follows the list below):
* range(n) = 0, 1, ..., n-1
* range(m,n)= m, m+1, ..., n-1
* range(m,n,s)= m, m+s, m+2s, ..., m + ((n-m-1)//s) * s
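A quick way to see the three forms side by side (purely illustrative):
```python
# The three forms of range(), turned into lists so the values are visible
print(list(range(5)))         # [0, 1, 2, 3, 4]
print(list(range(2, 7)))      # [2, 3, 4, 5, 6]
print(list(range(1, 10, 3)))  # [1, 4, 7]
```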
Once again, let's use [Python Visualizer](https://goo.gl/vHxi2f) to understand loops.
```
%%html
<iframe width="900" height="400" frameborder="0" src="http://pythontutor.com/iframe-embed.html#code=for%20f%20in%20%5B%22Joe%22,%22Zoe%22,%22Brad%22,%22Angelina%22,%22Zuki%22,%22Thandi%22,%22Paris%22%5D%3A%0A%20%20%20%20invite%20%3D%20%22Hi%20%22%20%2B%20f%20%2B%20%22.%20%20Please%20come%20to%20my%20party%20on%20Saturday!%22%0A%20%20%20%20print%28invite%29&codeDivHeight=400&codeDivWidth=350&cumulative=false&curInstr=15&heapPrimitives=nevernest&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false"> </iframe>
for ch in 'abc':
print(ch)
total = 0
for i in range(5):
total += i
for i,j in [(1,2),(3,1)]:
total += i**j
print("total =",total)
```
In the above example, `i` iterates over 0, 1, 2, 3, 4. On each pass it takes the next value and executes the code inside the loop. It is also possible to iterate over a nested list, as illustrated below.
```
list_of_lists = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
for list1 in list_of_lists:
print(list1)
```
A use case for a nested for loop here would be:
```
list_of_lists = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
total=0
for list1 in list_of_lists:
for x in list1:
total = total+x
print(total)
```
There are many helper functions that make **for** loops even more powerful and easier to use, for example **enumerate()**, **zip()**, **sorted()** and **reversed()**.
```
print("reversed: ",end="")
for ch in reversed("abc"):
print(ch,end=";")
print("\nenuemerated: ")
for i,ch in enumerate("abc"):
print(i,"=",ch,end="; ")
print("\nzip'ed: ")
for a,x in zip("abc","xyz"):
print(a,":",x)
```
### While
```python
while some_condition:
    algorithm
```
A while loop checks a condition and keeps executing the block until the condition becomes False. The loop terminates when the condition is no longer met.
#### Example
* Write a program to manage bank withdrawals at the ATM
In the example below, the code sometimes does not behave as expected in a Jupyter Notebook. See the script bank.py.
```
acountbal = 50000
choice = input("Please enter 'b' to check balance or 'w' to withdraw: ")
while choice != 'q':
if choice.lower() in ('w','b'):
if choice.lower() == 'b':
print("Your balance is: %d" % acountbal)
print("Anything else?")
choice = input("Enter b for balance, w to withdraw or q to quit: ")
print(choice.lower())
else:
withdraw = float(input("Enter amount to withdraw: "))
if withdraw <= acountbal:
print("here is your: %.2f" % withdraw)
acountbal = acountbal - withdraw
print("Anything else?")
choice = input("Enter b for balance, w to withdraw or q to quit: ")
#choice = 'q'
else:
print("You have insufficient funds: %.2f" % acountbal)
else:
print("Wrong choice!")
choice = input("Please enter 'b' to check balance or 'w' to withdraw: ")
```
### Your turn
Expand the script in the previous cell to also manage ATM deposits
```
i = 1
while i < 3:
print(i ** 2)
i = i+1
print('Bye')
dna = 'ATGCGGACCTAT'
base = 'C'
i = 0 # counter
j = 0 # string index
while j < len(dna):
if dna[j] == base:
i += 1
j += 1
print(j)
```
If the condition does not change to False at some point, we end up with an infinite loop. For example, if you follow the shampoo directions 'lather, rinse, repeat' literally, you may never finish washing your hair. That is an infinite loop.
Use a **for loop** if you know, before you start looping, the maximum number of times that you’ll need to execute the body.
### Break
Loops execute until a given number of iterations is reached or the condition becomes False. You can `break` out of a loop early when some condition becomes true during execution.
```
for i in range(100):
print(i)
if i>=7:
break
```
### Continue
The `continue` statement skips the rest of the current iteration and moves on to the next one. It is useful when, for certain values, you want to bypass part of the loop body without leaving the loop.
```
for i in range(10):
if i>4:
print("Ignored",i)
continue
    # this statement is not reached if i > 4
print("Processed",i)
```
## Catching exceptions
To break out of deeply nested execution it is sometimes useful to raise an exception.
A try block allows you to catch exceptions that happen anywhere during the execution of the try block:
```python
try:
code
except <Exception Type> as <variable name>:
# deal with error of this type
except:
    # deal with any error
```
```
try:
count=0
while True:
print('First here')
while True:
print('Then here')
while True:
print('Finally here')
print("Looping")
count = count + 1
if count > 3:
float('ywed')
#raise Exception("abort") # exit every loop or function
except Exception as e: # this is where we go when an exception is raised
print("Caught exception:",e)
```
This can also be useful to handle unexpected system errors more gracefully:
```
try:
for i in [2,1.5,0.0,3]:
inverse = 1.0/i
print("The inverse of %f is %f" % (i,inverse))
except ValueError: # catches ValueError only (not raised in this example)
print("Cannot calculate inverse of %f" % i)
except ZeroDivisionError:
print("Cannot divide by zero")
except:
print("No idea whhat went wrong")
```
### Exercise
1. Create a while loop that starts with x = 0 and increments x until x is equal to 5. Each iteration should print to the console.
2. Repeat the previous problem, but this time the loop should skip printing x = 5 to the console while still printing the values of x from 6 to 10.
3. Create a for loop that prints values from 4 to 10 to the console.
```
#Question one
i = 0
while i < 6:
print("x=",i)
i = i+1
#question 2
i = 0
while i < 11:
if i==5:
i+=1
continue
print("x=",i)
i = i+1
#question 3
for i in range(4,11):
print(i)
#Bank question
acountbal = 50000
choice = input("Please enter 'b' to check balance or 'w' to withdraw or 'd' to deposit: ")
while choice != 'q':
if choice.lower() in ('w','b','d'):
if choice.lower() == 'b':
print("Your balance is: %d" % acountbal)
print("Anything else?")
choice = input("Enter b for balance, w to withdraw,'d' to deposit or q to quit: ")
print(choice.lower())
elif choice.lower()== 'w':
withdraw = float(input("Enter amount to withdraw: "))
if withdraw <= acountbal:
print("here is your: %.2f" % withdraw)
acountbal = acountbal - withdraw
print("Anything else?")
choice = input("Enter b for balance, w to withdraw,'d' to deposit or q to quit: ")
else:
print("You have insufficient funds: %.2f" % acountbal)
choice=input("Enter b for balance, w to withdraw,'d' to deposit or 'q' to quit: ")
elif choice.lower()=='d':
deposit=float(input("Enter amount to deposit"))
print("The amount has been deposited successfully")
acountbal = acountbal + deposit
print("Your account balance is: %d" % acountbal)
print("Anything else?")
choice = input("Enter b for balance, w to withdraw,'d' to deposit or 'q' to quit: ")
else:
print("Wrong choice!")
choice = input("Please enter 'b' to check balance, 'w' to withdraw or 'd' to deposit: ")
```
```
import requests
from bs4 import BeautifulSoup
import pandas as pd
headers={"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0.2 Safari/605.1.15"
}
drug_base_url='''http://code.nhsa.gov.cn:8000/hc/stdSpecification/getStdSpecificationListData.html?batchNumber=&_search=false&rows=25&page={}&sidx=specification_code&sord=asc'''
drug_full_url=drug_base_url.format(1)
drug_detail=requests.get(drug_full_url,headers=headers)
drug_detail=drug_detail.json()
drug_detail['records']
pages=drug_detail['total']
pages
rows=pd.DataFrame(drug_detail['rows'])
rows.head()
drugs=[]
drug_base_url='''http://code.nhsa.gov.cn:8000/hc/stdSpecification/getStdSpecificationListData.html?batchNumber=&_search=false&rows=25&page={}&sidx=specification_code&sord=asc'''
for page in range(1,pages+1):
drug_full_url=drug_base_url.format(page)
drug_detail=requests.get(drug_full_url,headers=headers)
if drug_detail.status_code == 200:
drug_detail=drug_detail.json()
rows=pd.DataFrame(drug_detail['rows'])
drugs.append(rows)
else:
print(drug_full_url)
CHS_MATERIAL_Table=pd.concat(drugs,axis=0,ignore_index=True)
CHS_MATERIAL_Table.head(20)
len(CHS_MATERIAL_Table)
CHS_MATERIAL_Table.to_csv('./LOOKUP_TABLES/CHS_MATERIAL.csv',encoding='utf-8',index=False)
```
#### version 20200628
```
drug_base_url='''http://code.nhsa.gov.cn:8000/hc/stdPublishData/getStdPublicDataList.html?\
releaseVersion=20200628&batchNumber=20200628&_search=false&rows=50&page={}&sord=asc'''
drug_base_url
drug_full_url=drug_base_url.format(1)
drug_full_url
drug_detail=requests.get(drug_full_url,headers=headers)
drug_detail=drug_detail.json()
drug_detail['records']
pages=drug_detail['total']
pages
rows=pd.DataFrame(drug_detail['rows'])
rows.head()
drugs=[]
for page in range(1,pages+1):
drug_full_url=drug_base_url.format(page)
drug_detail=requests.get(drug_full_url,headers=headers)
if drug_detail.status_code == 200:
drug_detail=drug_detail.json()
rows=pd.DataFrame(drug_detail['rows'])
drugs.append(rows)
else:
print(drug_full_url)
CHS_MATERIAL_Table=pd.concat(drugs,axis=0,ignore_index=True)
CHS_MATERIAL_Table.head(20)
len(CHS_MATERIAL_Table)
CHS_MATERIAL_Table.to_csv('./LOOKUP_TABLES/CHS_MATERIAL_20200628.csv',encoding='utf-8',index=False)
m_detail_base_url='''http://code.nhsa.gov.cn:8000/hc/stdPublishData/getStdPublicDataListDetail.html?\
specificationCode={}\
&releaseVersion=20200628\
&_search=false\
&rows=25\
&page=1\
&sord=asc'''
m_detail_base_url
m_detail_full_url=m_detail_base_url.format(CHS_MATERIAL_Table.specificationCode.to_list()[0])
m_detail=requests.get(m_detail_full_url,headers=headers)
m_details=[]
for sp_code in CHS_MATERIAL_Table.specificationCode.to_list():
m_detail_full_url=m_detail_base_url.format(sp_code)
m_detail=requests.get(m_detail_full_url,headers=headers)
if m_detail.status_code == 200:
m_detail=m_detail.json()
rows=pd.DataFrame(m_detail['rows'])
m_details.append(rows)
else:
print(m_detail_full_url)
CHS_MATERIAL_DETAIL_Table=pd.concat(m_details,axis=0,ignore_index=True)
CHS_MATERIAL_DETAIL_Table.head(20)
CHS_MATERIAL_DETAIL_Table.to_csv('./LOOKUP_TABLES/CHS_MATERIAL_DETAIL_20200628.csv',encoding='utf-8',index=False)
len(CHS_MATERIAL_DETAIL_Table)
```
# Practical sessions
## 0
Installation of Python + scientific ecosystem + opencv + opengl
- virtual classroom -> web page -> install
- git or unzip master
- full anaconda or miniconda
- linux, windows, mac
- test the webcam scripts and verify opengl, dlib, etc.
- basic use of jupyter
- Python refresher
- Exercise: crop and join images to obtain [something like this](../images/demos/ej-c0.png).
Optional:
- compiling opencv
- docker
## 1
Exercise checking FOV / sizes / distances.
Capture devices
- umucv (install with --upgrade) (update_umucv.sh)
- webcam.py with raw opencv
- stream.py, autostream options, key effects, --help, --dev=help
- webcams
- videos
- folder of images
- phone
- youtube
- tv urls
- example of inverted crop
## 2
More utilities
- spyder
- PYTHONPATH
- webcam control with v4l2-ctl, vlc, gucview
- wzoom.py (for Windows windows that have no zoom)
- help_window.py
- save_video.py
- extend mouse.py:
- circles at the marked positions (cv.circle)
- text coordinates (cv.putText (e.g. in hello.py) or umucv.util.putText)
- mark only the last two points (hint: collections.deque)
- reproduce code/medidor.py showing the distance in pixels
- Given the FOV: show the angle of the marked directions.
## 3
deque.py
roi.py:
- add the mean gray level of the crop
- save the crop and show cv.absdiff with respect to the current frame, displaying its mean or its maximum.
(This serves as a starting point for the ACTIVIDAD exercise)
## 4
- clarifications about the COLOR exercise
- spectral demo
- trackbar.py
- filters demo
Exercise (a minimal sketch follows this list):
- implementation of a Gaussian filter with a sigma trackbar over the whole image, monochrome ([example](../images/demos/ej-c4-0.png)).
- add box and median filters
- measure and display computation times in the different cases
(This serves as a starting point for the optional FILTROS exercise)
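A minimal sketch of the first point, using plain OpenCV instead of the course's stream.py/trackbar helpers (window and trackbar names are arbitrary):
```python
import cv2 as cv

cv.namedWindow("smooth")
cv.createTrackbar("sigma", "smooth", 3, 20, lambda v: None)

cap = cv.VideoCapture(0)                 # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
    sigma = max(cv.getTrackbarPos("sigma", "smooth"), 1)
    t0 = cv.getTickCount()
    smooth = cv.GaussianBlur(gray, (0, 0), sigma)    # kernel size derived from sigma
    ms = 1000 * (cv.getTickCount() - t0) / cv.getTickFrequency()
    cv.putText(smooth, f"sigma={sigma}  {ms:.1f} ms", (10, 25),
               cv.FONT_HERSHEY_SIMPLEX, 0.7, 255)
    cv.imshow("smooth", smooth)
    if cv.waitKey(1) == 27:              # ESC quits
        break
cap.release()
cv.destroyAllWindows()
```
Swapping in cv.blur or cv.medianBlur at the same point makes it easy to compare the box and median filters and their timings.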
## 5
HOG
- (asynchronous capture)
- (HOG theory, a simple implementation)
- hog0.py in detail
- pedestrian.py, multi-scale detection
- DLIB facelandmarks.py: HOG face detector with landmarks
Exercise: blink detection, inpainting eyes, etc. (a possible starting point is sketched below)
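One possible starting point for blink detection is the eye aspect ratio (EAR) computed from the dlib landmarks. This is only a sketch: it assumes the usual `shape_predictor_68_face_landmarks.dat` file is available in the working directory, and the 0.2 threshold is a rough guess.
```python
import cv2 as cv
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed path

def eye_aspect_ratio(p):
    # p: the 6 landmark points of one eye; a small EAR means the eye is closed
    a = np.linalg.norm(p[1] - p[5])
    b = np.linalg.norm(p[2] - p[4])
    c = np.linalg.norm(p[0] - p[3])
    return (a + b) / (2 * c)

cap = cv.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
    for face in detector(gray):
        lm = predictor(gray, face)
        pts = np.array([[lm.part(i).x, lm.part(i).y] for i in range(68)])
        ear = (eye_aspect_ratio(pts[36:42]) + eye_aspect_ratio(pts[42:48])) / 2
        color = (0, 0, 255) if ear < 0.2 else (0, 255, 0)   # red when a blink is likely
        cv.rectangle(frame, (face.left(), face.top()),
                     (face.right(), face.bottom()), color, 2)
    cv.imshow("blink", frame)
    if cv.waitKey(1) == 27:
        break
```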
## 6
Corner detection and Lucas-Kanade optical flow
- LK/*.py
We are going to build a "tracker" of interest points based on the Lucas-Kanade method.
The first step is to build a corner detector from scratch, computing the response image given by the smallest eigenvalue of the covariance matrix of the local gradient distribution at each pixel (`corners0.py`). In fact this operation is directly available in opencv through cv.goodFeaturesToTrack (`corners1.py`, `corners2.py`).
The next example shows how to find directly with `cv.calcOpticalFlowPyrLK` the position of the detected points in the next frame, without having to recompute new points and match them with the previous ones (`lk_track0.py`).
Next we extend the code to generate new points periodically and to create a list of trajectories ("tracks") that is kept up to date on every frame (`lk_track1.py`).
Finally, we extend the previous code so that new points are only generated in areas of the image where there are none, and we improve the detection of the following positions with a very robust quality criterion that requires the backward prediction of the new points to coincide with the initial point. If there is no mutual match, the point and its trajectory are discarded (`lk_tracks.py`).
Exercises (a minimal LK sketch follows this list):
- Analyze the tracks to determine in which direction the camera is moving (UP, DOWN, LEFT, RIGHT, [FORWARD, BACKWARD])
- Study the possibility of tracking a ROI.
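A rough sketch of the core LK loop, with the mean track displacement turned into a dominant motion label; the thresholds and the re-detection of corners on every frame are simplifications of what lk_tracks.py does:
```python
import cv2 as cv
import numpy as np

cap = cv.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv.cvtColor(prev, cv.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
    p0 = cv.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.1, minDistance=10)
    if p0 is not None:
        p1, st, err = cv.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
        good_new, good_old = p1[st == 1], p0[st == 1]
        if len(good_new) > 0:
            dx, dy = (good_new - good_old).mean(axis=0)  # mean apparent image motion
            if abs(dx) > 2 or abs(dy) > 2:               # ignore small jitter
                # note: the image moves opposite to the camera pan direction
                label = ("RIGHT" if dx > 0 else "LEFT") if abs(dx) > abs(dy) \
                        else ("DOWN" if dy > 0 else "UP")
                cv.putText(frame, label, (10, 30), cv.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
            for x, y in good_new.reshape(-1, 2):
                cv.circle(frame, (int(x), int(y)), 3, (0, 255, 0), -1)
    cv.imshow("LK", frame)
    prev_gray = gray
    if cv.waitKey(1) == 27:
        break
```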
## 7
We experiment with the SIFT interest point detector.
Our goal is to obtain a set of "keypoints", each with its descriptor (a feature vector that describes the neighbourhood of the point), which makes it possible to find it again in future images. This has an immediate application to object recognition and, later on, to visual geometry.
We start with the code example code/SIFT/sift0.py, which simply computes and displays the interest points. It is interesting to observe the effect of the method's parameters and the computation time as a function of the image size (which you can change with --size or --resize).
The next example, code/SIFT/sift1.py, shows a first attempt at establishing correspondences. The results are rather poor because all possible matches are accepted.
Finally, in code/SIFT/sift.py we apply a selection criterion to remove many wrong correspondences (although not all of them). This is in principle enough for object recognition. (Later we will see a much better way of removing wrong correspondences, which is necessary for geometry applications.)
The mandatory **SIFT** exercise is a simple extension of this code. The idea is to store a set of models (with texture, so that they have enough keypoints!) such as book covers, records, video games, etc., and recognize them based on the proportion of detected matches.
In the second part of the class we experiment with an mjpg server and create telegram bots (explained at the end of this document) to communicate easily with the computer vision applications from the phone.
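The essence of the matching step with the ratio test, sketched for two static images (the file names are placeholders):
```python
import cv2 as cv

img1 = cv.imread("model.png", cv.IMREAD_GRAYSCALE)   # placeholder file names
img2 = cv.imread("scene.png", cv.IMREAD_GRAYSCALE)

sift = cv.SIFT_create()
k1, d1 = sift.detectAndCompute(img1, None)
k2, d2 = sift.detectAndCompute(img2, None)

matcher = cv.BFMatcher(cv.NORM_L2)
good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
        if m.distance < 0.75 * n.distance]           # Lowe's ratio test

print(f"{len(good)} good matches out of {len(k1)} model keypoints")
out = cv.drawMatches(img1, k1, img2, k2, good, None,
                     flags=cv.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
cv.imshow("matches", out); cv.waitKey(0)
```
The proportion len(good)/len(k1) is a reasonable score for the recognition exercise.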
## 8
Shape recognition using frequency-domain (Fourier) descriptors.
Our goal is to write a program that recognizes the club shape, as shown [in this screenshot](../images/demos/shapedetect.png). If you don't have a deck of cards at hand you can use --dev=dir:../images/card*.png for the tests, although ideally it should work with a live camera.
We will work with the code examples in the `code/shapes` folder and, as usual, we will add functionality little by little. At each new step the comments explain the changes with respect to the previous one.
We start with the example shapes/trebol1.py, which simply sets up a basic capture loop, binarizes the image and shows the detected contours. Several ways of performing the binarization are shown and you can experiment with them, but in principle the proposed automatic method usually works well in many cases.
The second step, shapes/trebol2.py, merges the visualization into a single window and selects the dark contours of reasonable size. This is not essential for our application, but it is useful to become familiar with the concept of the orientation of a contour.
In shapes/trebol3.py we read a model of the club silhouette from an image in the repository and show it in a window.
In shapes/trebol3b.py we build a utility to view graphically the frequency components as the ellipses that make up the figure. We can see the components at their natural size, including the main frequency, [as here](../images/demos/full-components.png), or removing the main frequency and enlarging the size of the following ones, which are the basis of the shape descriptor, [as shown here](../images/demos/shape-components.png). Note that the ellipse configurations are similar when they correspond to the same silhouette.
In shapes/trebol4.py we define the function that computes the invariant descriptor. It is essentially based on computing the relative sizes of these ellipses. The code explains how invariance to the desired transformations is achieved: position, size, rotation, starting point of the contour and measurement noise.
Finally, in shapes/trebol5.py we compute the descriptor of the model and, in the capture loop, the descriptors of the detected dark contours, in order to mark the silhouettes whose descriptor is very similar to that of the club.
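A schematic version of the invariant computed in trebol4.py can be written in a few lines with numpy's FFT (the exact normalization and number of harmonics used in the course code may differ):
```python
import numpy as np

def frequential_invariant(contour, n_harmonics=10):
    """contour: (N,2) array with the x,y points of a closed silhouette."""
    z = contour[:, 0] + 1j * contour[:, 1]    # complex signal
    F = np.fft.fft(z)
    F[0] = 0                                  # drop DC -> translation invariance
    mags = np.abs(F)                          # modulus -> rotation / start-point invariance
    main = max(mags[1], mags[-1])             # dominant frequency (largest ellipse)
    # relative sizes of the low-frequency components -> scale invariance
    feats = np.hstack([mags[2:2 + n_harmonics], mags[-1 - n_harmonics:-1]]) / main
    return feats
```
Two silhouettes can then be compared with np.linalg.norm(feats1 - feats2), accepting a match when the distance is below a small threshold.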
## 8b
In this sub-session we are going to do several activities. We need some packages. On Linux they are:
    sudo apt install tesseract-ocr tesseract-ocr-spa libtesseract-dev
    pip install tesserocr
    sudo apt install libzbar-dev
    pip install pyzbar
[zbar Windows installer](http://zbar.sourceforge.net) -> download
Mac and Windows users: look into how to install tesseract.
1) First we look at the script [code/ocr.py](../code/ocr.py), whose purpose is to run the OCR on the live camera. We use the python package `tesserocr`. We will verify that it works with a static image, but ideally you should try it with the live camera.
    ./ocr.py --dev=dir:../images/texto/bo0.png
It is designed to mark a single line of text, [as shown here](../images/demos/ocr.png). That screenshot was made with the image bo1.png available in the same folder, which is out of focus, but even so the OCR works well.
(On windows it seems that pytesseract has to be used instead of tesserocr, which requires adapting the script.)
To show the complexity of an OCR we show the result of the script `crosscorr.py` on images/texto.png, to see that pixel-by-pixel comparison is not enough to obtain satisfactory results. On that same image, binarization and connected-component extraction fails to separate individual letters.
Finally we demonstrate with `spectral.py` that the 2D Fourier transform makes it possible to detect the angle and the separation between text lines.
2) The second example is `code/zbardemo.png`, which shows the use of the pyzbar package to read barcodes ([example](../images/demos/barcode.png)) and QR codes ([example](../images/demos/qr.png)) with the camera. In barcodes some reference points are detected, and in QR codes the 4 corners of the square are detected, which can be useful as a reference in some geometry applications. (A minimal pyzbar sketch follows below.)
4) demo of `grabcut.py` to interactively segment an image. We try it with images/puzzle3.png.
5) We run the opencv face detector with the live webcam and compare it with the DLIB detector.
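Decoding barcodes and QR codes on the live image only takes a few lines with pyzbar (a minimal sketch):
```python
import cv2 as cv
from pyzbar.pyzbar import decode

cap = cv.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    for code in decode(frame):                    # barcodes and QR codes
        pts = [(p.x, p.y) for p in code.polygon]  # 4 corners for QR codes
        for a, b in zip(pts, pts[1:] + pts[:1]):
            cv.line(frame, a, b, (0, 255, 0), 2)
        cv.putText(frame, code.data.decode(), (pts[0][0], pts[0][1] - 10),
                   cv.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv.imshow("zbar", frame)
    if cv.waitKey(1) == 27:
        break
```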
## 9
Today we are going to rectify the plane of the table using artificial markers.
First we will work with polygonal markers. Our goal is to detect a marker like the one that appears in the video `images/rot4.mjpg`. We go to the `code/polygon` folder.
The first step (`polygon0.py`) is to detect polygonal shapes with the right number of sides from the detected contours.
Next (`polygon1.py`) we keep the polygons that can really correspond to the marker. This is done by checking whether there is a homography that relates the real marker and its possible image with sufficient precision.
Finally (`polygon2.py`) obtains the rectified plane.
"Virtual" information can also be added to the original image, for example the coordinate axes defined by the marker (`polygon3.py`).
As a second activity, the `code/elipses` folder shows how to detect a marker based on 4 circles.
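The rectification step itself essentially amounts to estimating a homography and warping; a hedged sketch with made-up coordinates and output size:
```python
import cv2 as cv
import numpy as np

# image corners of the detected marker, in the same order as the model (made-up values)
img_pts = np.array([[321, 210], [410, 223], [402, 301], [310, 290]], dtype=np.float32)
# model corners in "marker units", scaled and shifted to pixels of the output image
model_pts = 100 * np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=np.float32) + 200

H, _ = cv.findHomography(img_pts, model_pts)

frame = cv.imread("table.png")                   # placeholder image
rectified = cv.warpPerspective(frame, H, (600, 600))
cv.imshow("rectified plane", rectified); cv.waitKey(0)
```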
## 10
In this session we are going to extract the camera matrix from the marker used in the previous session, which will allow us to add three-dimensional virtual objects to the scene and to determine the position of the camera in space.
We go to the `code/pose` folder, where we find the following code examples:
`pose0.py` includes the complete code to extract contours, detect the polygonal marker, extract the camera matrix and draw a cube on top of the marker.
`pose1.py` does the same with umucv functions.
`pose2.py` tries to hide the marker and draws an object that changes size.
`pose3.py` explains how to project an image into the scene escaping from the plane of the marker.
`pose3D.py` is a slightly more advanced example that uses the pyqtgraph package to show the position of the camera in space in 3D.
In the **RA** exercise you can try to make the behaviour of the virtual object depend on user actions (e.g. pointing at a point on the plane with the mouse) or on objects found in the scene.
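The pose machinery used by these scripts can be approximated with cv.solvePnP and cv.projectPoints; a rough sketch with made-up corner coordinates and an approximate calibration matrix:
```python
import cv2 as cv
import numpy as np

# 3D model of the square marker (Z=0 plane) and its detected image corners (made up)
model = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=np.float32)
img_pts = np.array([[321, 210], [410, 223], [402, 301], [310, 290]], dtype=np.float32)

f = 800.0                                        # rough focal length in pixels
K = np.array([[f, 0, 320], [0, f, 240], [0, 0, 1]], dtype=np.float32)

ok, rvec, tvec = cv.solvePnP(model, img_pts, K, None)

# project the top face of a unit cube sitting on the marker
# (the sign of the Z offset depends on the marker orientation)
cube_top = model + np.array([0, 0, -1], dtype=np.float32)
proj, _ = cv.projectPoints(cube_top, rvec, tvec, K, None)

frame = cv.imread("scene.png")                   # placeholder image
for p in proj.reshape(-1, 2).astype(int):
    cv.circle(frame, tuple(p), 4, (0, 0, 255), -1)
cv.imshow("pose", frame); cv.waitKey(0)
```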
## 11
A brief introduction to scikit-learn and keras.
First we review some basic concepts in the [machine learning](machine-learning.ipynb) notebook.
This session is devoted to getting a simple convolutional network up and running. The task we are going to solve is handwritten digit recognition. For this reason it is convenient to first write a few numbers on a sheet of paper, with a pen whose stroke is not too thin, and without worrying too much about writing them well. They can have different sizes, but they should not be very rotated. To develop the program and test it comfortably you can work with a still image, but the idea is that the program should work with the live camera.
We will work in the [code/DL/CNN](../code/DL/CNN) folder, where we have the different stages of the exercise and a test image.
The first step is `digitslive-1.py`, which simply finds the ink blobs that may be candidate digits.
In `digitslive-2.py` we normalize the size of the detections so that we can use the MNIST database.
In `digitslive-3.py` we implement a Gaussian classifier with PCA dimensionality reduction and run it on the live image. (It works quite well but, e.g., it makes one mistake on the test image.)
Finally, in `digitslive-4.py` we implement the classification with a convolutional network using the **keras** package. We use precomputed weights. (This machine no longer makes the previous mistake.)
As always, at each stage of the exercise the comments explain the code that is being added.
Once this is achieved, the practical session has a second activity, which consists of **training the weights** of (for example) this same convolutional network. To do this on our own computer without losing patience we need a GPU with CUDA and libCUDNN. Installing everything needed may not be trivial.
A very practical alternative is to use [google colab](https://colab.research.google.com/), which provides free virtual machines with a GPU and a jupyter notebook environment (slightly modified but compatible). To try it, log in with your google account and open a new notebook. In the **Runtime** menu option select **Change runtime type** and set the hardware accelerator to GPU. In a notebook cell, copy directly the contents of the file `cnntest.py` found in this same directory where we are working today. When the cell is evaluated, the database will be downloaded and a training process will start. Each epoch takes about 4s. You can compare with what you get with the CPU on your own computer. You can run a more complete training, save the weights and download them to your machine.
As a curiosity, you can compare with what the tesseract OCR would achieve, and save some cases of digits that are well drawn but that the network misclassifies.
Finally, we train an autoencoder (the [bottleneck](bottleneck.ipynb) notebook) and compare the result with the PCA dimensionality reduction explained at the beginning.
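For reference, a small convolutional network of the kind used in this exercise can be defined and trained in a few lines of keras; the architecture and hyperparameters here are illustrative, not necessarily the ones used in digitslive-4.py or cnntest.py:
```python
from tensorflow import keras
from tensorflow.keras import layers

(xtrain, ytrain), (xtest, ytest) = keras.datasets.mnist.load_data()
xtrain = xtrain[..., None] / 255.0              # (60000, 28, 28, 1), values in [0, 1]
xtest = xtest[..., None] / 255.0

model = keras.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(xtrain, ytrain, epochs=3, batch_size=128, validation_data=(xtest, ytest))
model.save_weights("digits.h5")                 # the weights can then be downloaded from colab
```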
## 12
In this session we are going to run some more advanced deep learning models.
The code examples have been tested on LINUX. On Windows or Mac it may be necessary to make modifications; to avoid wasting too much time my recommendation is to try them first in a virtual machine.
If you have a recent nvidia GPU, the ideal is to install CUDA and libCUDNN to get higher processing speed. If you don't have a GPU there is no problem, all the models work on the CPU. (The deep learning exercises that require training are optional and can be trained on COLAB.)
To run the inception, YOLO and face recognition machines we need the following packages:
    pip install face_recognition tensorflow==1.15.0 keras easydict
The *openpose* body landmark detection requires some additional installation steps that we will explain later.
(Version 1.15.0 of tensorflow is needed for YOLO and openpose. It will produce some warnings of little importance. If we have a more recent version of tensorflow we can do `pip install --upgrade tensorflow=1.15.0` or create a special conda environment for this topic.)
1) To try **face recognition** we go to the code/DL/facerec folder. DLIB must be correctly installed.
The models are stored in the `gente` directory. As an example we have the members of Monty Python:
    ./facerec.py --dev=dir:../../../images/monty-python*
(Remember that the images selected with --dev=dir: are advanced by clicking with the mouse on the small preview window.)
You can put photos of yourself and your family in the `gente` folder to test with the webcam or with other photos.
With small modifications of this program you can solve the ANON exercise: select a face in the live image by clicking on it with the mouse, in order to hide it (blurring or pixelating it) whenever it is recognized in the following frames.
This version of face recognition has no GPU acceleration (perhaps it can be configured). If we reduce the image size a bit it runs quite smoothly.
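The core of this kind of recognition with the face_recognition package fits in a few lines; a sketch for comparing one known face against a test image (the file names are placeholders):
```python
import face_recognition

known = face_recognition.load_image_file("gente/graham_chapman.png")    # placeholder file
known_enc = face_recognition.face_encodings(known)[0]

test = face_recognition.load_image_file("photo.png")                    # placeholder file
locs = face_recognition.face_locations(test)
encs = face_recognition.face_encodings(test, locs)
for (top, right, bottom, left), enc in zip(locs, encs):
    dist = face_recognition.face_distance([known_enc], enc)[0]
    verdict = "match" if dist < 0.6 else "unknown"   # 0.6 is the library's usual threshold
    print(f"face at ({left},{top})-({right},{bottom}): distance {dist:.2f} -> {verdict}")
```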
2) Para probar la máquina **inception** nos movemos a la carpeta code/DL/inception.
./inception0.py
(Se descargará el modelo del la red). Se puede probar con las fotos incluidas en la carpeta con `--dev=dir:*.png`. La versión `inception1.py` captura en hilo aparte y muestra en consola las 5 categorías más probables.
Aunque se supone que consigue buenos resultados en las competiciones, sobre imágenes naturales comete bastante errores.
3) El funcionamiento de **YOLO** es mucho mejor. Nos vamos a la carpeta code/DL y ejecutamos lo siguiente para para descargar el código y los datos de esta máquina (y de openpose).
bash get.sh
Nos metemos en code/DL/yolo y ejecutamos:
/.yolo-v3.py
Se puede probar también con las imágenes de prueba incluidas añadiendo `--dev=dir:*.png`.
El artículo de [YOLO V3](https://pjreddie.com/media/files/papers/YOLOv3.pdf) es interesante. En la sección 5 el autor explica que abandonó esta línea de investigación por razones éticas. Os recomiendo que la leáis. Como curiosidad, hace unos días apareció [YOLO V4](https://arxiv.org/abs/2004.10934).
4) Para probar **openpose** nos vamos a code/DL/openpose. Los archivos necesarios ya se han descargado en el paso anterior, pero necesitamos instalar algunos paquetes. El proceso se explica en el README.
En la carpeta `docker` hay un script para ejecutar una imagen docker que tiene instalados todos los paquetes que hemos estamos usando en la asignatura. Es experimental. No perdaís ahora tiempo con esto si no estáis familiarizados con docker.
El tema de deep learning en visión artificial es amplísimo. Para estudiarlo en detalle hace falta (como mínimo) una asignatura avanzada (master). Nuestro objetivo es familizarizarnos un poco con algunas de las máquinas preentrenadas disponibles para hacernos una idea de sus ventajas y limitaciones.
Si estáis interesados en estos temas el paso siguiente es adaptar alguno de estos modelos a un problema propio mediante "transfer learning", que consiste en utilizar las primeras etapas de una red preentrenada para transformar nuestros datos y ajustar un clasificador sencillo. Alternativamente, se puede reajustar los pesos de un modelo preentrenado, fijando las capas iniciales al principio. Para remediar la posible falta de ejemplos se utilizan técnicas de "data augmentation", que generan variantes de los ejemplos de entrenamiento con múltiples transformaciones.
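This sketch is not part of the course code; it only illustrates the transfer-learning idea with keras. The base network (MobileNetV2), the input size, and the number of classes are placeholders that you would adapt to your own problem.
```
from tensorflow import keras

# pretrained convolutional base without its classification head
base = keras.applications.MobileNetV2(weights="imagenet", include_top=False,
                                      input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # freeze the pretrained layers at first

# small classifier fitted on top of the transferred features
model = keras.Sequential([
    base,
    keras.layers.Dense(5, activation="softmax"),  # 5 = number of your classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# data augmentation: generate transformed variants of the training images
augmenter = keras.preprocessing.image.ImageDataGenerator(
    rotation_range=10, zoom_range=0.1, horizontal_flip=True, rescale=1 / 255.0)
# model.fit(augmenter.flow_from_directory("my_dataset", target_size=(224, 224)), epochs=5)
```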
mediapipe
UNET
## training dlib
- (optional) DLIB imglab labeling tool. Training a HOG SVM detector with the DLIB tools:
    - download and unpack the dlib source
    - go to examples/faces
    - put imglab inside (it has to be compiled, but we have a precompiled version in robot/material/va)
    - show training.xml and testing.xml (others can be created)
    - put train_detector.py and run_detector.py from code/hog inside
    - ./train_detector training.xml testing.xml (creates detector.svm)
    - ./run_detector detector.svm --dev=dir:\*.jpg (or also --dev=dir:/path/to/umucv/images/monty\*)
## correlation filter
We discuss the cross-correlation method for object detection, which is the same criterion used to find the position of *corners* in successive frames, and then we look at the demo of the discriminative correlation filter. A minimal template-matching sketch follows the file list below.
- crosscorr.py
- dcf.py
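The same cross-correlation criterion is available directly in OpenCV. This is only a minimal sketch, with placeholder file names, of locating a small template inside a larger image by normalized cross-correlation.
```
import cv2 as cv

img = cv.imread("scene.png", cv.IMREAD_GRAYSCALE)       # image to search in (placeholder)
templ = cv.imread("template.png", cv.IMREAD_GRAYSCALE)  # patch to look for (placeholder)

# response map: normalized cross-correlation score at every position
response = cv.matchTemplate(img, templ, cv.TM_CCOEFF_NORMED)
_, best_score, _, best_loc = cv.minMaxLoc(response)

h, w = templ.shape
print("best match at", best_loc, "with score", best_score)
cv.rectangle(img, best_loc, (best_loc[0] + w, best_loc[1] + h), 255, 2)
```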
## flask server
The `server.py` example shows how to build a simple web server with *flask* that serves a snapshot of the current webcam image, and `mjpegserver.py` shows how to build a streaming server in mjpeg format. A minimal sketch of the snapshot idea is shown below.
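This is not the repository's `server.py`, just a minimal sketch of the snapshot idea; it assumes a webcam at index 0 and an arbitrary port.
```
import cv2 as cv
from flask import Flask, Response

app = Flask(__name__)
cam = cv.VideoCapture(0)  # webcam index 0 is an assumption

@app.route("/shot")
def shot():
    ok, frame = cam.read()                 # grab the current frame
    if not ok:
        return "no frame available", 500
    ok, jpg = cv.imencode(".jpg", frame)   # JPEG-encode it in memory
    return Response(jpg.tobytes(), mimetype="image/jpeg")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```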
## telegram bot
We are going to play with a telegram bot that lets us communicate conveniently with our computer from the mobile phone, without needing a public internet address.
We simply need:
pip install python-telegram-bot
The `bot/bot0.py` example sends the computer's IP address to your phone (useful if you need to connect over ssh to a machine with a dynamic IP).
The `bot/bot1.py` example shows how to send an image to your phone when something happens. In this case it is sent when a key is pressed, but normally you would detect some event automatically with the computer vision techniques we are studying.
The `bot/bot2.py` example shows how to make the bot respond to commands. The /hello command returns a greeting, the /stop command stops the program, and the /image command returns a capture from our webcam. (Capture in a separate thread is used.)
The `bot/bot3.py` example shows how to handle commands with arguments and how to process an image sent by the user.
This exercise is very useful for conveniently sending an image taken with the phone camera to our computer vision programs, without having to write a dedicated mobile application. Some of the exercises we are doing can easily be adapted to be tried through a bot of this kind.
To create your own bot you have to contact the telegram bot "BotFather", which will guide you step by step and give you the access token. Then the "IDBot" will tell you the numeric id of your user. A minimal command-answering sketch follows.
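This is not the repository's `bot2.py`, only a minimal sketch of a command-answering bot; it assumes the pre-v20 python-telegram-bot API (version 13.x) and a token obtained from BotFather.
```
from telegram.ext import Updater, CommandHandler

TOKEN = "PUT-YOUR-BOTFATHER-TOKEN-HERE"  # placeholder

def hello(update, context):
    # reply in the same chat that sent the /hello command
    update.message.reply_text("Hello from my computer!")

updater = Updater(TOKEN)
updater.dispatcher.add_handler(CommandHandler("hello", hello))
updater.start_polling()   # start listening for commands
updater.idle()            # keep running until Ctrl-C
```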
# Anomaly Detection (Air Handling Units) - Model Inferencing
This Jupyter notebook demonstrates how to use the SAS Event Stream Processing ESPPy module to perform anomaly detection with a model previously created and stored in an analytic store file.
Additional resources for this use case can be found at the SAS GitHub page for [Anomaly Detection in Air Handling Units](https://github.com/sassoftware/iot-anomaly-detection-hvac) including an example of deploying an offline model in SAS Event Stream Processing Studio.
### 0. Setup the Environment
First, import the necessary packages to run this notebook.
```
import esppy
import ipywidgets as widgets
from esppy.espapi.visuals import Visuals
from inspect import getsource
import time
import datetime
%matplotlib inline
import pandas as pd
```
Use Pandas to read in the data.
```
ahu_data = pd.read_csv('/demo/Event_Stream_Processing/data/ahu_scr.csv', header=0)
```
Set the value of <code>display.image_scale</code> to 0.75. This value enables you to better visualize the project as you add more windows.
```
esppy.options.display.image_scale = 0.75
```
Next, establish a connection with the ESP server for the project and an additional one to be used by the PLOTLY graphic library APIs.
```
esp = esppy.ESP('http://localhost:5001')
conn = esp.createServerConnection(interval=0)
esp
```
### 1. Create a Project
Create a project using <code>esp.create_project</code> and name it **proj**.
```
proj = esp.create_project('AHU_AnomalyDetection')
```
Now create a continuous query and call it <code>cq</code>. Add this query to the project. Continuous queries run automatically and periodically on streaming data.
To learn more about continuous queries in SAS ESP, see [Continuous Queries.](https://go.documentation.sas.com/?cdcId=espcdc&cdcVersion=6.2&docsetId=espstudio&docsetTarget=n05qoojb1v1ly3n1kkje39jmu7bb.htm&locale=en)
```
cq = esp.ContinuousQuery(name='Query_1')
proj.add_query(cq)
```
### 2. Create Project Windows
First, create a Source window and call it src_data. Source windows accept streaming data or raw data files.
Create a schema that corresponds to **ahu_scr** data in the Source window.
To read more about Source windows, see [Using Source Windows.](https://go.documentation.sas.com/?cdcId=espcdc&cdcVersion=6.2&docsetId=espcreatewindows&docsetTarget=p1h7eov9msvacmn1st88y7nlun1e.htm&locale=en)
```
src = esp.SourceWindow(schema=('Key_ID*:int64', 'seq_id:string','datetime:string',
'AHU:string','CHW_VALVE:double', 'CHW_VALVE_POSIT:double', 'DIS_AIR_TEMP:double',
'DUCT_PRESS_ACTV:double','MAX_CO2_VAL:double','MIXED_AIR_TEMP:double','RTRN_AIR_TEMP:double',
'SUPPL_FAN_SP:double'),
index_type='empty', insert_only=True, name='src_data')
cq.add_window(src)
#cq
```
Create a second Source window that receives the request to load the model.
```
model_request=esp.SourceWindow(schema=('req_id*:int64', 'req_key:string', 'req_val:string'),
index_type='empty', insert_only=True, name='model_request')
cq.add_window(model_request)
#cq
```
Next, create a window to read the model that is stored in the analytic store file. Call it Model_Reader.
To read more about Model Reader windows, see [Using Model Reader Windows.](https://go.documentation.sas.com/?cdcId=espcdc&cdcVersion=6.2&docsetId=espcreatewindows&docsetTarget=n0jsa0omxyf6m4n1ava0ekx1bli4.htm&locale=en)
```
model_reader = esp.ModelReaderWindow(name='Model_Reader', model_type="astore")
cq.add_window(model_reader)
#cq
```
Create a Score window that uses the offline analytic store model to score the **ahu_scr** data.
```
score_svdd = esp.ScoreWindow(name='Score_SVDD',
schema=('Key_ID*:int64', 'seq_id:string','datetime:string', 'AHU:string',
'_SVDDDISTANCE_:double',
'_SVDDSCORE_:double'))
score_svdd.add_connector('fs', conn_name='sub', conn_type='subscribe',
properties={'type':'sub', 'fstype':'csv', 'fsname': '/user/my_data/svdd_out.csv', 'snapshot':True,'header':'full'})
score_svdd.add_offline_model(model_type='astore')
cq.add_window(score_svdd)
#cq
```
After the windows have been created and added to the project, you can connect them by creating edges. To read more about connectors, see [Overview to Connectors.](https://go.documentation.sas.com/?cdcId=espcdc&cdcVersion=6.2&docsetId=espca&docsetTarget=p1nhdjrc9n0nnmn1fxqnyc0nihzz.htm&locale=en)
```
src.add_target(score_svdd, role='data')
model_request.add_target(model_reader, role='request')
model_reader.add_target(score_svdd, role='model')
cq.to_graph(schema=True)
```
### 3. Save Project to XML and Load It into the ESP Server
You have the option of saving the project that you created to an XML file. Here, save ahu_anomaly_detection.xml to the /user/my_code folder.
```
proj.save_xml('/user/my_code/ahu_anomaly_detection.xml')
```
Next, load the project to the ESP server using <code>esp.load_project</code>.
```
esp.load_project(proj)
```
### 4. Subscribe to the Project Windows
Subscribe to the project windows in order to receive output from them.
```
src.subscribe()
model_request.subscribe()
score_svdd.subscribe()
model_reader.subscribe()
ahuScoring = conn.getEventCollection("AHU_AnomalyDetection/Query_1/Score_SVDD")
```
### 5. Publish Offline Model and Load the Model with Data for Scoring
Create a publisher on the model_request window to publish the model load request, so that the project knows which analytic store model to use.
The instruction <code>time.sleep(5)</code> is included to give the server time to load the model before data is published for scoring.
```
pubmodel = model_request.create_publisher(blocksize=1, rate=0, pause=0,
opcode='insert', format='csv')
pubmodel.send('i,n,1,"action","load"\n')
pubmodel.send('i,n,2,"type","astore"\n')
pubmodel.send('i,n,3,"reference","/demo/Event_Stream_Processing/data/ahu_svdd.astore"\n')
pubmodel.send('i,n,4,,\n')
pubmodel.close()
time.sleep(5)
```
Assign the raw data to a working variable and set its index name to match the schema key.
```
mydata= ahu_data
mydata.index.name='Key_ID'
mydata
```
Use the <code>publish_events()</code> method to publish data to the Source window. The <code>subscribe()</code> method creates a DataFrame for the Source window in its <code>data</code> attribute. After the DataFrame exists, you can use Pandas DataFrame methods on the window object as if it were a DataFrame (a quick check is sketched after the next cell):
```
src.publish_events(mydata, pause=1000)
src.subscribe()
src.info()
model_reader
```
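As a quick check (a sketch only, assuming the Score window has already received events), any DataFrame-style call can be made directly on a subscribed window object:
```
# peek at the first few scored events buffered in the subscribed Score window
# (the window behaves like a Pandas DataFrame once subscribe() has been called)
score_svdd.head()
```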
### 6. Display the Scoring Results
Show the scoring results from the 'score_svdd' window output.
```
score_svdd
```
Optionally, use the interface with the PLOTLY graphic library to show the results in a customized format.
```
conn = esp.createServerConnection()
sf_input = conn.getEventStream("AHU_AnomalyDetection/Query_1/Score_SVDD",maxevents=20)
visuals = Visuals(colormap="sas_corporate",border="1px solid blue")
table = visuals.createTable(sf_input,values=["Key_ID","_SVDDDISTANCE_","_SVDDSCORE_"],title="Values Generated",
show_controls=True, width="35%")
chart=visuals.createBubbleChart(sf_input,y="_SVDDDISTANCE_", size="_SVDDDISTANCE_",color="_SVDDSCORE_",title="SVDD Distance")
widgets.HBox([table,chart])
# proj.delete()
```
# Extract search results with BeautifulSoup: PBS.org - part 03
In our previous notebook, we scraped only one page of the results. At the time of writing, there were 30 pages. By adding an extra for-loop to the code, we will traverse all the pages. But before we do this, we will make the code dynamic so that you can scrape the site for other keywords if you want to.
### 1. Retrieve how many pages there are
This will vary per website, but luckily PBS.org displays the final page in the pagination overview. If you click on a link, you see the URL of your browser changes into something like:
`https://www.pbs.org/newshour/search-results?q=%22artificial+intelligence%22&pnb=2`, where `&pnb=2` is the current page. Again, this will differ from site to site, but it gives us a convenient way to page through the results for now.
So now we need to know how many pages there are. Looking at the HTML code, the best strategy is to get the last element with the class `pagination__number`.
```
import requests
from bs4 import BeautifulSoup
# we need the %22 or " to ensure that we get the combination artificial intelligence
url = 'https://www.pbs.org/newshour/search-results?q=%22artificial%20intelligence%22'
# get url
page = requests.get(url)
# transform to soup
soup = BeautifulSoup(page.content, 'html')
# search for pagination links
pages = soup.find_all(class_='pagination__number')
# [-1] selects last item in a list
last_page = pages[-1].get_text()
# convert to int
number_of_pages = int(last_page)
```
### 2. Create URL list
Now that we have the total number of pages, we can create the URL list. The `url_list` should be:
`['https://www.pbs.org/newshour/search-results?q=%22artificial+intelligence%22&pnb=1',
'https://www.pbs.org/newshour/search-results?q=%22artificial+intelligence%22&pnb=2', ...
'https://www.pbs.org/newshour/search-results?q=%22artificial+intelligence%22&pnb=30'`
This can be achieved by using a for-loop with a `range()`
```
url_list = []
# code goes here
url_list
```
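One possible solution sketch, reusing `number_of_pages` from the cell above and the URL pattern shown earlier:
```
# build one search-results URL per page, from 1 up to the last page
base_url = 'https://www.pbs.org/newshour/search-results?q=%22artificial+intelligence%22&pnb='

url_list = []
for page_number in range(1, number_of_pages + 1):
    url_list.append(base_url + str(page_number))

url_list[:3]  # quick check of the first few URLs
```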
### 3. Retrieve all the article URLs and save them in a list
Use the `url_list` and collect all the URLs of the articles of each page. The `article_list` should only contain the URLs of the articles.
```
import requests
from bs4 import BeautifulSoup
import time
article_list = []
for url in url_list:
print('Retrieving',url)
# code goes here
article_list
```
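A possible sketch of the missing part is shown below. Note that the CSS class used to find the article links (`search-result__title`) is an assumption for illustration only; inspect the actual search-results HTML and adjust the selector to whatever element wraps the article links.
```
import requests
from bs4 import BeautifulSoup
import time

article_list = []
for url in url_list:
    print('Retrieving', url)
    page = requests.get(url)
    soup = BeautifulSoup(page.content, 'html')
    # hypothetical selector -- replace with the real class of the article links
    for link in soup.find_all('a', class_='search-result__title'):
        article_list.append(link.get('href'))
    time.sleep(1)  # be polite to the server between requests

len(article_list)
```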
### 4. Go through the list of articles and save the individual files.
Look at the previous notebooks in order to solve this part. Don't forget to use `article_list`. This can take some time to complete (roughly 15 minutes).
```
# code goes here
```
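One possible sketch, assuming `article_list` holds full article URLs; the output folder and the filename scheme (last part of the URL) are arbitrary choices:
```
import os
import time
import requests

os.makedirs('articles', exist_ok=True)
for article_url in article_list:
    page = requests.get(article_url)
    # derive a simple filename from the last part of the URL
    filename = article_url.rstrip('/').split('/')[-1] + '.html'
    with open(os.path.join('articles', filename), 'w', encoding='utf-8') as f:
        f.write(page.text)
    time.sleep(1)  # stay friendly to the server
```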
# A platform for automated nanomole-scale reaction screening and micromole-scale synthesis in flow
DOI: 10.1126/science.aap9112
DAMITH PERERA, JOSEPH W. TUCKER, SHALINI BRAHMBHATT, CHRISTOPHER J. HELAL, ASHLEY CHONG, WILLIAM FARRELL, PAUL RICHARDSON, NEAL W. SACH *Science*, **2018**, *359*, 429-434.
Import schema and helper functions
```
import ord_schema
from datetime import datetime
from ord_schema.proto import dataset_pb2
from ord_schema.proto import reaction_pb2
from ord_schema.units import UnitResolver
from ord_schema import validations
from ord_schema import message_helpers
unit_resolver = UnitResolver()
from tqdm import tqdm
```
# Define a single reaction
Single reaction from the SI to be used as a template for the remaining entries.
**Define reaction inputs**:
These reaction conditions are not typical batch or flow, but individual droplets. We take the authors' characterization of the system at face value, where they report an approximate 1:100 dilution of a concentrated 5x1 uL reaction slug. This is equivalent to the addition of 495 uL additional solvent after preparation.
- Reactant 1 is 0.0004 mmol in 1 uL solvent
- Reactant 2 is 1 equiv (0.0004 mmol) in 1 uL solvent
- Reagent 1 (base) is 2.5 equiv (0.001 mmol) in 1 uL solvent
- Ligand is 0.125 equiv (5.0e-5 mmol) in 1 uL solvent
- Catalyst is 0.0625 equiv (2.5e-5 mmol) in 1 uL solvent
- Solvent is 9:1 ratio with water, 495 uL total (49.5 uL water, 445.5 uL solvent)
```
# Define Reaction
reaction = reaction_pb2.Reaction()
reaction.identifiers.add(value=r"Suzuki-Miyaura coupling", type="NAME")
# Reactant 1
reaction.inputs["reactant_1"].addition_order = 1
solute = reaction.inputs["reactant_1"].components.add()
solvent = reaction.inputs["reactant_1"].components.add()
solute.CopyFrom(
message_helpers.build_compound(
name="placeholder",
smiles="placeholder",
role="reactant",
amount="0.4 nmol",
prep=None,
is_limiting=True,
prep_details=None,
)
)
solvent.CopyFrom(
message_helpers.build_compound(
name="placeholder",
smiles="placeholder",
role="solvent",
amount="1 uL",
prep=None,
is_limiting=False,
prep_details=None,
)
)
solvent.amount.volume_includes_solutes = True
# Reactant 2
reaction.inputs["reactant_2"].addition_order = 1
solute = reaction.inputs["reactant_2"].components.add()
solvent = reaction.inputs["reactant_2"].components.add()
solute.CopyFrom(
message_helpers.build_compound(
name="placeholder",
smiles="placeholder",
role="reactant",
amount="0.4 nmol",
prep=None,
is_limiting=True,
prep_details=None,
)
)
solvent.CopyFrom(
message_helpers.build_compound(
name="DMF",
smiles="CN(C)C=O",
role="solvent",
amount="1 uL",
prep=None,
is_limiting=False,
prep_details=None,
)
)
solvent.amount.volume_includes_solutes = True
# Reagent 1 = Base
reaction.inputs["base"].addition_order = 1
solute = reaction.inputs["base"].components.add()
solvent = reaction.inputs["base"].components.add()
solute.CopyFrom(
message_helpers.build_compound(
name="placeholder",
smiles="placeholder",
role="reagent",
amount="1 nmol",
prep=None,
is_limiting=False,
prep_details=None,
)
)
solvent.CopyFrom(
message_helpers.build_compound(
name="placeholder",
smiles="placeholder",
role="solvent",
amount="1 uL",
prep=None,
is_limiting=False,
prep_details=None,
)
)
solvent.amount.volume_includes_solutes = True
# Ligand
reaction.inputs["ligand"].addition_order = 1
solute = reaction.inputs["ligand"].components.add()
solvent = reaction.inputs["ligand"].components.add()
solute.CopyFrom(
message_helpers.build_compound(
name="placeholder",
smiles="placeholder",
role="reagent",
amount="0.05 nmol",
prep=None,
is_limiting=False,
prep_details=None,
)
)
solvent.CopyFrom(
message_helpers.build_compound(
name="toluene",
smiles="Cc1ccccc1",
role="solvent",
amount="1 uL",
prep=None,
is_limiting=False,
prep_details=None,
)
)
solvent.amount.volume_includes_solutes = True
# Catalyst
reaction.inputs["catalyst"].addition_order = 1
solute = reaction.inputs["catalyst"].components.add()
solvent = reaction.inputs["catalyst"].components.add()
solute.CopyFrom(
message_helpers.build_compound(
name="Pd(OAc)2",
smiles="[Pd+2].[O-]C(=O)C.[O-]C(=O)C",
role="catalyst",
amount="0.025 nmol",
prep=None,
is_limiting=False,
prep_details=None,
)
)
solvent.CopyFrom(
message_helpers.build_compound(
name="1,3,5-triethylbenzene",
smiles="CCC1=CC(=CC(=C1)CC)CC",
role="solvent",
amount="1 uL",
prep=None,
is_limiting=False,
prep_details=None,
)
)
solvent.amount.volume_includes_solutes = True
# Extra solvent -- added last
reaction.inputs["carrier solvent"].addition_order = 2
solvent1 = reaction.inputs["carrier solvent"].components.add()
solvent2 = reaction.inputs["carrier solvent"].components.add()
solvent1.CopyFrom(
message_helpers.build_compound(
name="placeholder",
smiles="placeholder",
role="solvent",
amount="445.5 uL",
prep=None,
is_limiting=False,
prep_details=None,
)
)
solvent2.CopyFrom(
message_helpers.build_compound(
name="water",
smiles="O",
role="solvent",
amount="49.5 uL",
prep=None,
is_limiting=False,
prep_details=None,
)
)
```
Define reaction setup & conditions
```
# Reaction run as ~1 uL slugs in a Hastelloy flow coil; the coil is treated as the vessel
reaction.setup.vessel.CopyFrom(
reaction_pb2.Vessel(
type="TUBE",
material=dict(type="CUSTOM", details="Hastelloy"),
volume=unit_resolver.resolve("710 uL"),
)
)
reaction.setup.is_automated = True
reaction.setup.automation_platform = "reaction segment preparation unit (RSPU), Agilent 1100 Infinity HPLC system"
# Reaction prepared in glove box, presumed sensitivity
reaction.notes.is_sensitive_to_moisture = True
reaction.notes.is_sensitive_to_oxygen = True
# Heated - not specified how
t_conds = reaction.conditions.temperature
t_conds.control.type = t_conds.TemperatureControl.DRY_ALUMINUM_PLATE  # closest available enum value
t_conds.control.details = "Hastelloy coil (vessel) placed on IKA hotplate"
t_conds.setpoint.CopyFrom(reaction_pb2.Temperature(units="CELSIUS", value=100))
# System run in flow at 100 bar, but explicitly not specified how
p_conds = reaction.conditions.pressure
p_conds.control.type = p_conds.PressureControl.PRESSURIZED
p_conds.setpoint.CopyFrom(reaction_pb2.Pressure(units="BAR", value=100))
# Although these reactions are being treated as small batch reactors in flow, we
# can define the flow conditions. Note that no reaction inputs have a defined
# continuous flow rate.
f_conds = reaction.conditions.flow
f_conds.type = f_conds.CUSTOM
f_conds.details = "Droplet reactor"
f_conds.pump_type = "Agilent G1311 quaternary pump"
f_conds.tubing.CopyFrom(
reaction_pb2.FlowConditions.Tubing(
diameter=unit_resolver.resolve("0.5 millimeter"), type="CUSTOM", details="Hastelloy"
)
)
# No safety notes
reaction.notes.safety_notes = ""
```
All residence times are 1 minute, at which time the crude products are sampled by LCMS. Product yield is determined both as a percent area by UV and as a raw mass ion count. Here, we treat the percent area by UV as the reaction yield for the record but also keep the raw mass ion count as a piece of processed data.
```
outcome = reaction.outcomes.add()
outcome.reaction_time.CopyFrom(unit_resolver.resolve("1 minute"))
# Analyses: UPLC. Only report product yield by percent area (LC)
# Note using LCMS but split into LC and MS
outcome.analyses["LCMS"].type = reaction_pb2.Analysis.LCMS
outcome.analyses["LCMS"].details = (
r"0.1% AcOH/NH4COOH/Water based gradient over 1.4 minutes"
r" running from 5-95% MeCN using a Waters Acquity UPLC BEH C18 30 x 2.1 mm"
r" column at 80 °C with a flow rate of 2.5ml/min and a detection wavelength of 210-360nm."
r"5μL injections were made directly and ionization monitored in ES+ positive mode."
)
outcome.analyses["LCMS"].instrument_manufacturer = "Agilent"
outcome.analyses["LCMS"].data["product yield by UV"].float_value = 0 # placeholder
outcome.analyses["LCMS"].data["product mass ion count"].float_value = 0 # placeholder
# Define product identity
product = outcome.products.add()
product.identifiers.add(value=r"CC1=CC=C2C(C=NN2C3OCCCC3)=C1C4=CC=C(N=CC=C5)C5=C4", type="SMILES")
product.is_desired_product = True
product.reaction_role = reaction_pb2.ReactionRole.PRODUCT
# Two raw measurements include percent area UV and raw mass ion count
measurement = product.measurements.add(analysis_key="LCMS", type="COUNTS", is_normalized=False)
# The UV product yield percent area was used to indicate yield
measurement = product.measurements.add(analysis_key="LCMS", type="AREA")
measurement.is_normalized = True # peak areas are relative
measurement.percentage.value = -999 # placeholder
# Reaction provenance
reaction.provenance.city = r"San Diego, CA"
reaction.provenance.doi = r"10.1126/science.aap9112"
reaction.provenance.publication_url = r"https://science.sciencemag.org/content/359/6374/429"
reaction.provenance.record_created.time.value = datetime.now().strftime("%m/%d/%Y, %H:%M:%S")
reaction.provenance.record_created.person.CopyFrom(
reaction_pb2.Person(name="Connor W. Coley", organization="MIT", orcid="0000-0002-8271-8723", email="ccoley@mit.edu")
)
```
Examine this prototypical reaction entry (it still contains placeholder values; each completed reaction is validated later in the loop)
```
reaction
```
# Full HTE Data Set
```
import pandas as pd
import os
if not os.path.isfile("aap9112_Data_File_S1.xlsx"):
!wget https://github.com/Open-Reaction-Database/ord-schema/raw/main/examples/10_Perera_Science_Suzuki/aap9112_Data_File_S1.xlsx
data = pd.read_excel("aap9112_Data_File_S1.xlsx", usecols=range(16))
data
reactions = []
for _, row in tqdm(data.iterrows()):
new_reaction = reaction_pb2.Reaction()
new_reaction.CopyFrom(reaction)
# Update reactant 1 name & SMILES
reactant_1_name, reactant_1_smiles, reactant_1_solvent_name, reactant_1_solvent_smiles = {
"6-chloroquinoline": (
"6-chloroquinoline",
"C1=CC2=C(C=CC(=C2)Cl)N=C1",
"1,3-diethylbenzene",
"CCC1=CC(=CC=C1)CC",
),
"6-Bromoquinoline": (
"6-Bromoquinoline",
"C1=CC2=C(C=CC(=C2)Br)N=C1",
"1,3-diethylbenzene",
"CCC1=CC(=CC=C1)CC",
),
"6-triflatequinoline": (
"6-triflatequinoline",
"O=S(OC1=CC=C2N=CC=CC2=C1)(C(F)(F)F)=O",
"1,3-diethylbenzene",
"CCC1=CC(=CC=C1)CC",
),
"6-Iodoquinoline": ("6-Iodoquinoline", "C1=CC2=C(C=CC(=C2)I)N=C1", "1,3-diethylbenzene", "CCC1=CC(=CC=C1)CC"),
"6-quinoline-boronic acid hydrochloride": (
"6-quinoline-boronic acid hydrochloride",
"OB(C1=CC=C2N=CC=CC2=C1)O.Cl",
"water",
"O",
),
"Potassium quinoline-6-trifluoroborate": (
"Potassium quinoline-6-trifluoroborate",
"F[B-](C1=CC=C2N=CC=CC2=C1)(F)F.[K+]",
"water",
"O",
),
"6-Quinolineboronic acid pinacol ester": (
"6-Quinolineboronic acid pinacol ester",
"CC1(C)C(C)(C)OB(O1)C2=CC=C3N=CC=CC3=C2",
"DMF",
"CN(C)C=O",
),
}[row["Reactant_1_Name"].strip()]
new_reaction.inputs["reactant_1"].components[0].identifiers[0].value = reactant_1_smiles
new_reaction.inputs["reactant_1"].components[0].identifiers[1].value = reactant_1_name
new_reaction.inputs["reactant_1"].components[1].identifiers[0].value = reactant_1_solvent_smiles
new_reaction.inputs["reactant_1"].components[1].identifiers[1].value = reactant_1_solvent_name
# Update reactant 2 SMILES, remove name identifier
reactant_2_smiles = {
"2a, Boronic Acid": ("CC1=CC=C2C(C=NN2C3OCCCC3)=C1B(O)O"),
"2b, Boronic Ester": ("CC1=CC=C2C(C=NN2C3OCCCC3)=C1B4OC(C)(C)C(C)(C)O4"),
"2c, Trifluoroborate": ("CC1=CC=C2C(C=NN2C3OCCCC3)=C1[B-](F)(F)F.[K+]"),
"2d, Bromide": ("CC1=CC=C2C(C=NN2C3OCCCC3)=C1Br"),
}[row["Reactant_2_Name"].strip()]
new_reaction.inputs["reactant_2"].components[0].identifiers[0].value = reactant_2_smiles
del new_reaction.inputs["reactant_2"].components[0].identifiers[1] # no name
# Update reagent 1 (base) name & SMILES
base_name, base_smiles, base_solvent_name, base_solvent_smiles = {
"NaOH": ("NaOH", "[OH-].[Na+]", "water", "O"),
"NaHCO3": ("NaHCO3", "C(=O)(O)[O-].[Na+]", "water", "O"),
"CsF": ("CsF", "[F-].[Cs+]", "water", "O"),
"K3PO4": ("K3PO4", "[K+].[K+].[K+].[O-]P([O-])([O-])=O", "water", "O"),
"KOH": ("KOH", "[OH-].[K+]", "water", "O"),
"LiOtBu": ("LiOtBu", "[Li+].CC(C)(C)[O-]", "hexane", "CCCCCC"),
"Et3N": ("Et3N", "CCN(CC)CC", "THF", "C1CCOC1"),
"None": (None, None, "water", "O"),
}[row["Reagent_1_Short_Hand"].strip()]
new_reaction.inputs["base"].components[1].identifiers[0].value = base_solvent_smiles
new_reaction.inputs["base"].components[1].identifiers[1].value = base_solvent_name
if base_smiles is None:
del new_reaction.inputs["base"].components[0]
else:
new_reaction.inputs["base"].components[0].identifiers[0].value = base_smiles
new_reaction.inputs["base"].components[0].identifiers[1].value = base_name
# Update ligand
ligand_name, ligand_smiles = {
"P(tBu)3": ("P(tBu)3", "CC(C)(C)P(C(C)(C)C)C(C)(C)C"),
"P(Ph)3": ("P(Ph)3", "c3c(P(c1ccccc1)c2ccccc2)cccc3"),
"AmPhos": ("AmPhos", "CC(C)(C)P(C1=CC=C(C=C1)CNC)C(C)(C)C"),
"P(Cy)3": ("P(Cy)3", "C1(CCCCC1)P(C2CCCCC2)C3CCCCC3"),
"P(o-Tol)3": ("P(o-Tol)3", "CC1=CC=CC=C1P(C2=CC=CC=C2C)C3=CC=CC=C3C"),
"CataCXium A": ("CataCXium A", "CCCCP(C12CC3CC(C1)CC(C3)C2)C45CC6CC(C4)CC(C6)C5"),
"SPhos": ("SPhos", "COc1cccc(c1c2ccccc2P(C3CCCCC3)C4CCCCC4)OC"),
"dtbpf": ("dtbpf", "CC(C)(C)P(C1=C[CH-]C=C1)C(C)(C)C.CC(C)(C)P(C1=C[CH-]C=C1)C(C)(C)C.[Fe+2]"),
"XPhos": ("XPhos", "P(c2ccccc2c1c(cc(cc1C(C)C)C(C)C)C(C)C)(C3CCCCC3)C4CCCCC4"),
"dppf": ("dppf", "C1=CC=C(C=C1)P([C-]2C=CC=C2)C3=CC=CC=C3.C1=CC=C(C=C1)P([C-]2C=CC=C2)C3=CC=CC=C3.[Fe+2]"),
"Xantphos": ("Xantphos", "O6c1c(cccc1P(c2ccccc2)c3ccccc3)C(c7cccc(P(c4ccccc4)c5ccccc5)c67)(C)C"),
"None": (None, None),
}[row["Ligand_Short_Hand"].strip()]
if ligand_smiles is None:
del new_reaction.inputs["ligand"].components[0]
else:
new_reaction.inputs["ligand"].components[0].identifiers[0].value = ligand_smiles
new_reaction.inputs["ligand"].components[0].identifiers[1].value = ligand_name
# Update solvent
solvent_name, solvent_smiles = {
"MeCN": ("acetonitrile", "CC#N"),
"THF": ("THF", "C1CCOC1"),
"DMF": ("DMF", "CN(C)C=O"),
"MeOH": ("methanol", "CO"),
"MeOH/H2O_V2 9:1": ("methanol", "CO"),
"THF_V2": ("THF", "C1CCOC1"),
}[row["Solvent_1_Short_Hand"].strip()]
new_reaction.inputs["carrier solvent"].components[0].identifiers[0].value = solvent_smiles
new_reaction.inputs["carrier solvent"].components[0].identifiers[1].value = solvent_name
    # Record measurements: the AREA measurement (index 1) carries the UV percent area,
    # and the COUNTS measurement (index 0) carries the raw mass ion count
    new_reaction.outcomes[0].products[0].measurements[1].percentage.value = row["Product_Yield_PCT_Area_UV"]
    new_reaction.outcomes[0].products[0].measurements[0].float_value.value = row["Product_Yield_Mass_Ion_Count"]
# Record raw data in analysis
new_reaction.outcomes[0].analyses["LCMS"].data["product yield by UV"].float_value = row["Product_Yield_PCT_Area_UV"]
new_reaction.outcomes[0].analyses["LCMS"].data["product mass ion count"].float_value = row[
"Product_Yield_Mass_Ion_Count"
]
# Validate
output = validations.validate_message(new_reaction)
for error in output.errors:
print(error)
# Append
reactions.append(new_reaction)
print(f"Generated {len(reactions)} reactions")
# Inspect random reaction from this set
reactions[15]
# Example of writing
# dataset = dataset_pb2.Dataset(reactions=reactions)
# message_helpers.write_message(dataset, 'perera_dataset.pbtxt')
```
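To actually persist the full set, here is a minimal sketch following the commented-out lines above; it reuses the modules imported at the top of the notebook, and the output file name is arbitrary.
```
# bundle the reactions into a Dataset message, validate it, and write it to disk
dataset = dataset_pb2.Dataset(reactions=reactions)

# validate_message also accepts Dataset messages and reports any problems found
output = validations.validate_message(dataset)
for error in output.errors:
    print(error)

message_helpers.write_message(dataset, "perera_suzuki_dataset.pbtxt")
```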
|
github_jupyter
|
import ord_schema
from datetime import datetime
from ord_schema.proto import dataset_pb2
from ord_schema.proto import reaction_pb2
from ord_schema.units import UnitResolver
from ord_schema import validations
from ord_schema import message_helpers
unit_resolver = UnitResolver()
from tqdm import tqdm
# Define Reaction
reaction = reaction_pb2.Reaction()
reaction.identifiers.add(value=r"Suzuki-Miyaura coupling", type="NAME")
# Reactant 1
reaction.inputs["reactant_1"].addition_order = 1
solute = reaction.inputs["reactant_1"].components.add()
solvent = reaction.inputs["reactant_1"].components.add()
solute.CopyFrom(
message_helpers.build_compound(
name="placeholder",
smiles="placeholder",
role="reactant",
amount="0.4 nmol",
prep=None,
is_limiting=True,
prep_details=None,
)
)
solvent.CopyFrom(
message_helpers.build_compound(
name="placeholder",
smiles="placeholder",
role="solvent",
amount="1 uL",
prep=None,
is_limiting=False,
prep_details=None,
)
)
solvent.amount.volume_includes_solutes = True
# Reactant 2
reaction.inputs["reactant_2"].addition_order = 1
solute = reaction.inputs["reactant_2"].components.add()
solvent = reaction.inputs["reactant_2"].components.add()
solute.CopyFrom(
message_helpers.build_compound(
name="placeholder",
smiles="placeholder",
role="reactant",
amount="0.4 nmol",
prep=None,
is_limiting=True,
prep_details=None,
)
)
solvent.CopyFrom(
message_helpers.build_compound(
name="DMF",
smiles="CN(C)C=O",
role="solvent",
amount="1 uL",
prep=None,
is_limiting=False,
prep_details=None,
)
)
solvent.amount.volume_includes_solutes = True
# Reagent 1 = Base
reaction.inputs["base"].addition_order = 1
solute = reaction.inputs["base"].components.add()
solvent = reaction.inputs["base"].components.add()
solute.CopyFrom(
message_helpers.build_compound(
name="placeholder",
smiles="placeholder",
role="reagent",
amount="1 nmol",
prep=None,
is_limiting=False,
prep_details=None,
)
)
solvent.CopyFrom(
message_helpers.build_compound(
name="placeholder",
smiles="placeholder",
role="solvent",
amount="1 uL",
prep=None,
is_limiting=False,
prep_details=None,
)
)
solvent.amount.volume_includes_solutes = True
# Ligand
reaction.inputs["ligand"].addition_order = 1
solute = reaction.inputs["ligand"].components.add()
solvent = reaction.inputs["ligand"].components.add()
solute.CopyFrom(
message_helpers.build_compound(
name="placeholder",
smiles="placeholder",
role="reagent",
amount="0.05 nmol",
prep=None,
is_limiting=False,
prep_details=None,
)
)
solvent.CopyFrom(
message_helpers.build_compound(
name="toluene",
smiles="Cc1ccccc1",
role="solvent",
amount="1 uL",
prep=None,
is_limiting=False,
prep_details=None,
)
)
solvent.amount.volume_includes_solutes = True
# Catalyst
reaction.inputs["catalyst"].addition_order = 1
solute = reaction.inputs["catalyst"].components.add()
solvent = reaction.inputs["catalyst"].components.add()
solute.CopyFrom(
message_helpers.build_compound(
name="Pd(OAc)2",
smiles="[Pd+2].[O-]C(=O)C.[O-]C(=O)C",
role="catalyst",
amount="0.025 nmol",
prep=None,
is_limiting=False,
prep_details=None,
)
)
solvent.CopyFrom(
message_helpers.build_compound(
name="1,3,5-triethylbenzene",
smiles="CCC1=CC(=CC(=C1)CC)CC",
role="solvent",
amount="1 uL",
prep=None,
is_limiting=False,
prep_details=None,
)
)
solvent.amount.volume_includes_solutes = True
# Extra solvent -- added last
reaction.inputs["carrier solvent"].addition_order = 2
solvent1 = reaction.inputs["carrier solvent"].components.add()
solvent2 = reaction.inputs["carrier solvent"].components.add()
solvent1.CopyFrom(
message_helpers.build_compound(
name="placeholder",
smiles="placeholder",
role="solvent",
amount="445.5 uL",
prep=None,
is_limiting=False,
prep_details=None,
)
)
solvent2.CopyFrom(
message_helpers.build_compound(
name="water",
smiles="O",
role="solvent",
amount="49.5 uL",
prep=None,
is_limiting=False,
prep_details=None,
)
)
# Reactions performed in 1556 well plate
reaction.setup.vessel.CopyFrom(
reaction_pb2.Vessel(
type="TUBE",
material=dict(type="CUSTOM", details="Hastelloy"),
volume=unit_resolver.resolve("710 uL"),
)
)
reaction.setup.is_automated = True
reaction.setup.automation_platform = "reaction segment preparation unit (RSPU), Agilent 1100 Infinity HPLC system"
# Reaction prepared in glove box, presumed sensitivity
reaction.notes.is_sensitive_to_moisture = True
reaction.notes.is_sensitive_to_oxygen = True
# Heated - not specified how
t_conds = reaction.conditions.temperature
t_conds.control.type = t_conds.TemperatureControl.DRY_ALUMINUM_PLATE # close
t_conds.control.details = "Hastelloy coil (vessel) placed on IKA hotplate"
t_conds.setpoint.CopyFrom(reaction_pb2.Temperature(units="CELSIUS", value=100))
# System run in flow at 100 bar, but explicitly not specified how
p_conds = reaction.conditions.pressure
p_conds.control.type = p_conds.PressureControl.PRESSURIZED
p_conds.setpoint.CopyFrom(reaction_pb2.Pressure(units="BAR", value=100))
# Although these reactions are being treated as small batch reactors in flow, we
# can define the flow conditions. Note that no reaction inputs have a defined
# continuous flow rate.
f_conds = reaction.conditions.flow
f_conds.type = f_conds.CUSTOM
f_conds.details = "Droplet reactor"
f_conds.pump_type = "Agilent G1311 quaternary pump"
f_conds.tubing.CopyFrom(
reaction_pb2.FlowConditions.Tubing(
diameter=unit_resolver.resolve("0.5 millimeter"), type="CUSTOM", details="Hastelloy"
)
)
# No safety notes
reaction.notes.safety_notes = ""
outcome = reaction.outcomes.add()
outcome.reaction_time.CopyFrom(unit_resolver.resolve("1 minute"))
# Analyses: UPLC. Only report product yield by percent area (LC)
# Note using LCMS but split into LC and MS
outcome.analyses["LCMS"].type = reaction_pb2.Analysis.LCMS
outcome.analyses["LCMS"].details = (
r"0.1% AcOH/NH4COOH/Water based gradient over 1.4 minutes"
r" running from 5-95% MeCN using a Waters Acquity UPLC BEH C18 30 x 2.1 mm"
r" column at 80 °C with a flow rate of 2.5ml/min and a detection wavelength of 210-360nm."
r"5μL injections were made directly and ionization monitored in ES+ positive mode."
)
outcome.analyses["LCMS"].instrument_manufacturer = "Agilent"
outcome.analyses["LCMS"].data["product yield by UV"].float_value = 0 # placeholder
outcome.analyses["LCMS"].data["product mass ion count"].float_value = 0 # placeholder
# Define product identity
product = outcome.products.add()
product.identifiers.add(value=r"CC1=CC=C2C(C=NN2C3OCCCC3)=C1C4=CC=C(N=CC=C5)C5=C4", type="SMILES")
product.is_desired_product = True
product.reaction_role = reaction_pb2.ReactionRole.PRODUCT
# Two raw measurements include percent area UV and raw mass ion count
measurement = product.measurements.add(analysis_key="LCMS", type="COUNTS", is_normalized=False)
# The UV product yield percent area was used to indicate yield
measurement = product.measurements.add(analysis_key="LCMS", type="AREA")
measurement.is_normalized = True # peak areas are relative
measurement.percentage.value = -999 # placeholder
# Reaction provenance
reaction.provenance.city = r"San Diego, CA"
reaction.provenance.doi = r"10.1126/science.aar5169"
reaction.provenance.publication_url = r"https://science.sciencemag.org/content/359/6374/429"
reaction.provenance.record_created.time.value = datetime.now().strftime("%m/%d/%Y, %H:%M:%S")
reaction.provenance.record_created.person.CopyFrom(
reaction_pb2.Person(name="Connor W. Coley", organization="MIT", orcid="0000-0002-8271-8723", email="ccoley@mit.edu")
)
reaction
import pandas as pd
import os
if not os.path.isfile("aap9112_Data_File_S1.xlsx"):
!wget https://github.com/Open-Reaction-Database/ord-schema/raw/main/examples/10_Perera_Science_Suzuki/aap9112_Data_File_S1.xlsx
data = pd.read_excel("aap9112_Data_File_S1.xlsx", usecols=range(16))
data
reactions = []
for _, row in tqdm(data.iterrows()):
new_reaction = reaction_pb2.Reaction()
new_reaction.CopyFrom(reaction)
# Update reactant 1 name & SMILES
reactant_1_name, reactant_1_smiles, reactant_1_solvent_name, reactant_1_solvent_smiles = {
"6-chloroquinoline": (
"6-chloroquinoline",
"C1=CC2=C(C=CC(=C2)Cl)N=C1",
"1,3-diethylbenzene",
"CCC1=CC(=CC=C1)CC",
),
"6-Bromoquinoline": (
"6-Bromoquinoline",
"C1=CC2=C(C=CC(=C2)Br)N=C1",
"1,3-diethylbenzene",
"CCC1=CC(=CC=C1)CC",
),
"6-triflatequinoline": (
"6-triflatequinoline",
"O=S(OC1=CC=C2N=CC=CC2=C1)(C(F)(F)F)=O",
"1,3-diethylbenzene",
"CCC1=CC(=CC=C1)CC",
),
"6-Iodoquinoline": ("6-Iodoquinoline", "C1=CC2=C(C=CC(=C2)I)N=C1", "1,3-diethylbenzene", "CCC1=CC(=CC=C1)CC"),
"6-quinoline-boronic acid hydrochloride": (
"6-quinoline-boronic acid hydrochloride",
"OB(C1=CC=C2N=CC=CC2=C1)O.Cl",
"water",
"O",
),
"Potassium quinoline-6-trifluoroborate": (
"Potassium quinoline-6-trifluoroborate",
"F[B-](C1=CC=C2N=CC=CC2=C1)(F)F.[K+]",
"water",
"O",
),
"6-Quinolineboronic acid pinacol ester": (
"6-Quinolineboronic acid pinacol ester",
"CC1(C)C(C)(C)OB(O1)C2=CC=C3N=CC=CC3=C2",
"DMF",
"CN(C)C=O",
),
}[row["Reactant_1_Name"].strip()]
new_reaction.inputs["reactant_1"].components[0].identifiers[0].value = reactant_1_smiles
new_reaction.inputs["reactant_1"].components[0].identifiers[1].value = reactant_1_name
new_reaction.inputs["reactant_1"].components[1].identifiers[0].value = reactant_1_solvent_smiles
new_reaction.inputs["reactant_1"].components[1].identifiers[1].value = reactant_1_solvent_name
# Update reactant 2 SMILES, remove name identifier
reactant_2_smiles = {
"2a, Boronic Acid": ("CC1=CC=C2C(C=NN2C3OCCCC3)=C1B(O)O"),
"2b, Boronic Ester": ("CC1=CC=C2C(C=NN2C3OCCCC3)=C1B4OC(C)(C)C(C)(C)O4"),
"2c, Trifluoroborate": ("CC1=CC=C2C(C=NN2C3OCCCC3)=C1[B-](F)(F)F.[K+]"),
"2d, Bromide": ("CC1=CC=C2C(C=NN2C3OCCCC3)=C1Br"),
}[row["Reactant_2_Name"].strip()]
new_reaction.inputs["reactant_2"].components[0].identifiers[0].value = reactant_2_smiles
del new_reaction.inputs["reactant_2"].components[0].identifiers[1] # no name
# Update reagent 1 (base) name & SMILES
base_name, base_smiles, base_solvent_name, base_solvent_smiles = {
"NaOH": ("NaOH", "[OH-].[Na+]", "water", "O"),
"NaHCO3": ("NaHCO3", "C(=O)(O)[O-].[Na+]", "water", "O"),
"CsF": ("CsF", "[F-].[Cs+]", "water", "O"),
"K3PO4": ("K3PO4", "[K+].[K+].[K+].[O-]P([O-])([O-])=O", "water", "O"),
"KOH": ("KOH", "[OH-].[K+]", "water", "O"),
"LiOtBu": ("LiOtBu", "[Li+].CC(C)(C)[O-]", "hexane", "CCCCCC"),
"Et3N": ("Et3N", "CCN(CC)CC", "THF", "C1CCOC1"),
"None": (None, None, "water", "O"),
}[row["Reagent_1_Short_Hand"].strip()]
new_reaction.inputs["base"].components[1].identifiers[0].value = base_solvent_smiles
new_reaction.inputs["base"].components[1].identifiers[1].value = base_solvent_name
if base_smiles is None:
del new_reaction.inputs["base"].components[0]
else:
new_reaction.inputs["base"].components[0].identifiers[0].value = base_smiles
new_reaction.inputs["base"].components[0].identifiers[1].value = base_name
# Update ligand
ligand_name, ligand_smiles = {
"P(tBu)3": ("P(tBu)3", "CC(C)(C)P(C(C)(C)C)C(C)(C)C"),
"P(Ph)3": ("P(Ph)3", "c3c(P(c1ccccc1)c2ccccc2)cccc3"),
"AmPhos": ("AmPhos", "CC(C)(C)P(C1=CC=C(C=C1)CNC)C(C)(C)C"),
"P(Cy)3": ("P(Cy)3", "C1(CCCCC1)P(C2CCCCC2)C3CCCCC3"),
"P(o-Tol)3": ("P(o-Tol)3", "CC1=CC=CC=C1P(C2=CC=CC=C2C)C3=CC=CC=C3C"),
"CataCXium A": ("CataCXium A", "CCCCP(C12CC3CC(C1)CC(C3)C2)C45CC6CC(C4)CC(C6)C5"),
"SPhos": ("SPhos", "COc1cccc(c1c2ccccc2P(C3CCCCC3)C4CCCCC4)OC"),
"dtbpf": ("dtbpf", "CC(C)(C)P(C1=C[CH-]C=C1)C(C)(C)C.CC(C)(C)P(C1=C[CH-]C=C1)C(C)(C)C.[Fe+2]"),
"XPhos": ("XPhos", "P(c2ccccc2c1c(cc(cc1C(C)C)C(C)C)C(C)C)(C3CCCCC3)C4CCCCC4"),
"dppf": ("dppf", "C1=CC=C(C=C1)P([C-]2C=CC=C2)C3=CC=CC=C3.C1=CC=C(C=C1)P([C-]2C=CC=C2)C3=CC=CC=C3.[Fe+2]"),
"Xantphos": ("Xantphos", "O6c1c(cccc1P(c2ccccc2)c3ccccc3)C(c7cccc(P(c4ccccc4)c5ccccc5)c67)(C)C"),
"None": (None, None),
}[row["Ligand_Short_Hand"].strip()]
if ligand_smiles is None:
del new_reaction.inputs["ligand"].components[0]
else:
new_reaction.inputs["ligand"].components[0].identifiers[0].value = ligand_smiles
new_reaction.inputs["ligand"].components[0].identifiers[1].value = ligand_name
# Update solvent
solvent_name, solvent_smiles = {
"MeCN": ("acetonitrile", "CC#N"),
"THF": ("THF", "C1CCOC1"),
"DMF": ("DMF", "CN(C)C=O"),
"MeOH": ("methanol", "CO"),
"MeOH/H2O_V2 9:1": ("methanol", "CO"),
"THF_V2": ("THF", "C1CCOC1"),
}[row["Solvent_1_Short_Hand"].strip()]
new_reaction.inputs["carrier solvent"].components[0].identifiers[0].value = solvent_smiles
new_reaction.inputs["carrier solvent"].components[0].identifiers[1].value = solvent_name
# Record measurements
new_reaction.outcomes[0].products[0].measurements[0].percentage.value = row["Product_Yield_PCT_Area_UV"]
new_reaction.outcomes[0].products[0].measurements[1].float_value.value = row["Product_Yield_Mass_Ion_Count"]
# Record raw data in analysis
new_reaction.outcomes[0].analyses["LCMS"].data["product yield by UV"].float_value = row["Product_Yield_PCT_Area_UV"]
new_reaction.outcomes[0].analyses["LCMS"].data["product mass ion count"].float_value = row[
"Product_Yield_Mass_Ion_Count"
]
# Validate
output = validations.validate_message(new_reaction)
for error in output.errors:
print(error)
# Append
reactions.append(new_reaction)
print(f"Generated {len(reactions)} reactions")
# Inspect random reaction from this set
reactions[15]
# Example of writing
# dataset = dataset_pb2.Dataset(reactions=reactions)
# message_helpers.write_message(dataset, 'perera_dataset.pbtxt')
| 0.465873 | 0.771456 |
# Introduction to Data Science with Python - Data ICMC-USP
This material was developed by Data, an outreach group on machine learning and data science made up of students from the Instituto de Ciências Matemáticas e de Computação at USP.
This notebook is accompanied by a video course, which can be found [here](https://www.youtube.com/playlist?list=PLFE-LjWAAP9SfEuLXf3qrpw4szKWjlYq9)
To learn more about Data's activities, visit our website and follow us on social media:
- [Website](http://data.icmc.usp.br/)
- [Twitter](https://twitter.com/data_icmc)
- [LinkedIn](https://www.linkedin.com/school/data-icmc/)
- [Facebook](https://www.facebook.com/dataICMC/)
Enjoy the material!
## Pandas
Pandas is a library for manipulating tabular data, with many functions for handling `.csv`, `.xls`, `.json` and other file formats. Pandas integrates naturally with NumPy and works analogously to it in many ways. Like NumPy, the library is quite extensive, so we will only go through the basic concepts here; checking the official documentation at https://pandas.pydata.org/docs/ is highly recommended.
```
import numpy as np
# É comum importar pandas como pd
import pandas as pd
```
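As a tiny illustration of the NumPy integration mentioned above, a DataFrame can be built directly from a NumPy array or from a dict of columns; the values below are made up for the example.
```
# A small hand-made DataFrame: NumPy arrays and Python lists both become columns
example_df = pd.DataFrame({'a': np.arange(3), 'b': [0.1, 0.2, 0.3]})
example_df
```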
### Reading files
Data can come in files of many different formats, and for most of them Pandas provides reading functions. Throughout this course we will work with `.csv` files, since it is the most common file type.
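For reference, the other formats mentioned above have analogous readers. The file names below are placeholders rather than files shipped with this material, so the calls are left commented out.
```
# Hypothetical files -- adjust the paths to your own data before running
# df_xls = pd.read_excel('data/my_spreadsheet.xlsx')   # needs an engine such as openpyxl
# df_json = pd.read_json('data/my_records.json')
```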
```
# Assim que fazemos a leitura de arquivos csv
df = pd.read_csv('data/titanic.csv')
# Vamos ver o tipo desse dado que acabamos de ler
type(df)
```
### DataFrame
The DataFrame is the main object in Pandas. It is essentially the representation of a spreadsheet/table, but with many functions that allow inspection and manipulation.
```
# Podemos observar as primeiras linhas do DataFrame com .head()
df.head()
# DataFrames possuem shape, assim como arrays
df.shape
# Podemos acessar as colunas do DataFrame
df.columns
# Podemos conferir o tipo de dado de cada coluna
df.dtypes
```
This is a good time to review some descriptive statistics (for a more in-depth treatment of the subject see http://www.portalaction.com.br/estatistica-basica/estatisticas-descritivas)
#### Aggregation functions
Just as in NumPy, we can apply several aggregation functions to the data. By default these functions are applied to each column.
```
# Soma
df.sum()
# Média
df.mean()
```
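Aggregations are not limited to one function at a time. As a small sketch on the same df, `.agg()` accepts a list of functions, and `axis=1` aggregates along rows instead of columns (restricted here to the numeric columns).
```
# Several statistics at once, numeric columns only
df.select_dtypes('number').agg(['mean', 'std'])
# Summing along rows instead of columns
df.select_dtypes('number').sum(axis=1).head()
```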
#### Checking the number of null values
Null values are table entries that are empty (think of an Excel cell with nothing in it). A field may be left unfilled for different reasons, and it is up to the data scientist to know how to handle each case.
It is important to know that most machine learning algorithms do not work with null values, so it is important to handle them during data preparation.
```
# Podemos usar a funçã isna e somar para ver o numero de nulos em cada coluna
df.isna().sum()
```
#### Looking at DataFrame statistics
We can look at the statistics of each column of the DataFrame using the `.describe()` method
```
df.describe()
```
### Indexing DataFrames
```
#Para essa parte iremos definir um DataFrame simples por questões didáticas
df_toy = pd.DataFrame(data=np.random.randint(20, size=(8, 4)),
index=['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'],
columns=['Col 1', 'Col 2', 'Col 3', 'Col 4'])
df_toy
```
#### Simple indexing
```
# Podemos selecionar uma das colunas do nosso DataFrame
coluna_um = df_toy['Col 1']
coluna_um
# Cada coluna é um objeto do tipo Series
type(coluna_um)
# Ao inves de selecionar um unica coluna podemos pegar um conjunto usado uma lista
coluna_um_quatro = df_toy[['Col 1', 'Col 4']]
coluna_um_quatro
# Usando slices dentro dos [] selecionamos um conjunto de linhas
df_toy[2:6]
# Podemos selecionar linhas usando booleanos
df_toy[[True, True, False, True, False, False, False, True]]
# Isso permite manipulaçoes interessantes
col1_maior_10 = df_toy['Col 1'] > 10
df_toy[col1_maior_10]
```
These are the simple ways of indexing a DataFrame, but as you can see they leave out many possibilities; for those we have `.loc` and `.iloc`. Let's look at each of them.
#### *.loc* (indexing by label)
The index holds the label of each row, while the column names are the labels of each column. We can use these labels to index specific sets of values in the DataFrame.
```
# Podemos selecionar uma linha pela sua label
df_toy.loc['a', :]
# Podemos também selecionar colunas
df_toy.loc[:, 'Col 2']
# Podemos fazer cortes de qualquer forma
df_toy.loc[['a', 'd', 'e'], ['Col 1', 'Col 3']]
```
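`loc` also combines a boolean mask over the rows with a list of column labels, which is one of the most common selection patterns; a quick sketch on `df_toy`:
```
# Rows where 'Col 1' > 10, keeping only two of the columns
df_toy.loc[df_toy['Col 1'] > 10, ['Col 2', 'Col 3']]
```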
#### *.iloc* (indexing by position)
With iloc, rows and columns are identified by their position, that is, the 1st, 2nd, ... rows and columns. Let's make exactly the same selections as with loc, but using iloc.
```
# Vamos selecionar a primeira linha
df_toy.iloc[0, :]
# Vamos selecionar a segunda coluna
df_toy.iloc[:, 1]
# Também podemos fazer cortes de qualquer forma
df_toy.iloc[[0, 3, 4], [0, 2]]
```
Both loc and iloc also allow selecting columns with booleans
```
# Selecionando colunas com bool
df_toy.iloc[:, [True, False, False, True]]
```
The official Pandas documentation has a detailed page on how indexing works: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html
### Operations on DataFrames
Now that we know how to manipulate DataFrames, we can start working on our data, either to make discoveries or to prepare it for a machine learning model.
```
df.head()
```
#### Adding/removing columns and rows
- Adding a column
Pandas recommends that whenever you change values in a DataFrame, whether adding an entire column as we are doing here or changing a single value, you should use `.loc` and `.iloc`. This is because what an indexing operation returns is not always a view of the original data; in some situations it can be a copy. Read more in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
```
df.loc[:, 'Coluna Aleatoria'] = np.random.randint(100, size=(df.shape[0],))
df.head()
```
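To make the recommendation above concrete, here is the kind of chained assignment to avoid, next to the `.loc` form that replaces it; the toy frame is used so the Titanic data is left untouched.
```
# Anti-pattern: chained indexing may operate on a temporary copy
# df_toy[df_toy['Col 1'] > 10]['Col 2'] = 0   # may raise SettingWithCopyWarning
# Recommended: one .loc call that addresses the rows and the column together
df_toy.loc[df_toy['Col 1'] > 10, 'Col 2'] = 0
df_toy
```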
- Removing a row
```
# Removendo uma linha (identificamos a linha pela label)
df.drop(0) # Removendo a linha
df.head() # Vendo o resultado
```
Oops... The row we wanted to remove is still there. That is because the removal operation (like many DataFrame operations) does not change the original DataFrame; it returns a new DataFrame with the changes applied. We can store that return value in a variable (even the same variable that holds the original DataFrame) or pass a special parameter telling Pandas to perform the change on the DataFrame itself.
```
# Salvando na variavel
df = df.drop(0)
# Ou podemos indicar com o padrametro in_place
# df.drop(0, inplace=True)
```
- Removing a column
```
# Para remover uma coluna precisamos avisar que estamos querendo remover uma coluna
df = df.drop(columns=['Coluna Aleatoria'])
df.head()
```
#### Handling null values
We have seen that our data may contain null values, but not what to do about them. We basically have three options for dealing with missing data:
- We can remove the rows that contain missing data
- We can remove the columns that contain missing data
- We can replace the missing data with some value (the mean, for example)
Let's analyze our DataFrame and see which kind of measure we can adopt
```
# Vamos ver quantos valores nulos temos em cada coluna
df.isna().sum()
# Talvez seja interessante saber qual porcentagem dos valores é nulo
df.isna().sum() / df.shape[0]
```
We have 3 columns with null values, but each of them will be handled differently.
- **Age**: About 20% of the values in this column are null, so we will replace the missing entries with the mean of the column
- **Cabin**: This column is missing 77% of its values, so there is not much we can do with it; we will drop it entirely.
- **Embarked**: Here only 0.2% of the rows have missing data, so we will simply remove those rows from our data
Two methods will help us deal with nulls: `.fillna()` and `.dropna()`
`.fillna()` returns a copy of the object it is called on, with the null values replaced by the value passed as a parameter
```
# Vamos calcular a média das idades
idade_media = df['Age'].mean()
# Substituindo a coluna pela coluna com NaN sustituidos
df.loc[:, 'Age'] = df['Age'].fillna(idade_media)
# Vamos ver se até agora aconteceu o que queriamos
df.isna().sum()
```
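`.fillna()` also accepts a dict mapping column names to fill values, which is handy when different columns need different treatments; a small sketch assigned to a new variable so the steps below are unchanged (the fill values are illustrative).
```
# A different fill value per column: median age, and a placeholder label for Cabin
df_filled = df.fillna({'Age': df['Age'].median(), 'Cabin': 'Unknown'})
df_filled.isna().sum()
```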
`.dropna()` returns a DataFrame without the columns or rows that contain null values.
It has some important parameters: `axis` indicates whether to drop rows (0) or columns (1), `subset` gives the labels along the other axis to consider when looking for missing values, `thresh` sets the minimum number of non-null values required for a row/column to be kept, and `inplace` indicates whether the operation should be performed on the DataFrame itself or whether a copy should be returned.
```
# Vamos começar removendo a coluna Cabin
df.dropna(axis=1, thresh=600, inplace=True)
df.head()
# Agora vamos remover as linhas que possuem Embarked nulo
df.dropna(axis=0, subset=['Embarked'], inplace=True)
# Conferindo se limpamos tudo
df.isna().sum()
```
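Because `thresh` is easy to misread, a toy frame makes the rule visible: a row is kept only if it has at least `thresh` non-null values (the values below are made up).
```
toy = pd.DataFrame({'x': [1, np.nan, 3], 'y': [np.nan, np.nan, 6]})
toy.dropna(thresh=2)   # keeps only the last row, the only one with 2 non-null values
```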
#### Applying functions to columns
In many situations we want to apply functions to columns, most often to create new columns. Let's see the right way to do this, but first let's look at what <span style="color:red;">should not be done</span>.
Suppose we want to create a column called 'Faixa etaria' (age group) built from the 'Age' column.
```
# Vamos criar uma função que faz essa transformação
def calc_faixa_etaria(idade):
if idade < 13:
return 'Criança'
elif idade < 18:
return 'Adolescente'
elif idade < 60:
return 'Adulto'
else:
return 'Idoso'
# A primeira ideia pode ser fazer algo assim
faixas_etarias = []
for i in range(df.shape[0]):
idade = df.iloc[i, 5] # Age é a coluna 5
faixa = calc_faixa_etaria(idade)
faixas_etarias.append(faixa)
faixas_etarias[8:14]
```
But this is extremely inefficient when the DataFrame is large!
The correct solution is to use the `.apply()` method, which applies a function to each entry and returns a `pd.Series` with all the results.
```
faixas_etarias = df['Age'].apply(calc_faixa_etaria)
faixas_etarias.iloc[8:14]
df.loc[:, 'Faixa etaria'] = faixas_etarias
df.head()
```
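For purely numeric binning like this, `pd.cut` is a vectorized alternative to `.apply()` that is usually faster still; the sketch below assumes non-negative ages and reproduces the same four labels.
```
# Same age bands built with pd.cut instead of a Python-level function
faixas_cut = pd.cut(df['Age'],
                    bins=[0, 13, 18, 60, np.inf],
                    labels=['Criança', 'Adolescente', 'Adulto', 'Idoso'],
                    right=False)
faixas_cut.iloc[8:14]
```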
#### Saving DataFrames
```
# Usamos index=False para não salvarmos a coluna de index
df.to_csv('data/dados_editados.csv', index=False)
```
|
github_jupyter
|
import numpy as np
# É comum importar pandas como pd
import pandas as pd
# Assim que fazemos a leitura de arquivos csv
df = pd.read_csv('data/titanic.csv')
# Vamos ver o tipo desse dado que acabamos de ler
type(df)
# Podemos observar as primeiras linhas do DataFrame com .head()
df.head()
# DataFrames possuem shape, assim como arrays
df.shape
# Podemos acessar as colunas do DataFrame
df.columns
# Podemos conferir o tipo de dado de cada coluna
df.dtypes
# Soma
df.sum()
# Média
df.mean()
# Podemos usar a funçã isna e somar para ver o numero de nulos em cada coluna
df.isna().sum()
df.describe()
#Para essa parte iremos definir um DataFrame simples por questões didáticas
df_toy = pd.DataFrame(data=np.random.randint(20, size=(8, 4)),
index=['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'],
columns=['Col 1', 'Col 2', 'Col 3', 'Col 4'])
df_toy
# Podemos selecionar uma das colunas do nosso DataFrame
coluna_um = df_toy['Col 1']
coluna_um
# Cada coluna é um objeto do tipo Series
type(coluna_um)
# Ao inves de selecionar um unica coluna podemos pegar um conjunto usado uma lista
coluna_um_quatro = df_toy[['Col 1', 'Col 4']]
coluna_um_quatro
# Usando slices dentro dos [] selecionamos um conjunto de linhas
df_toy[2:6]
# Podemos selecionar linhas usando booleanos
df_toy[[True, True, False, True, False, False, False, True]]
# Isso permite manipulaçoes interessantes
col1_maior_10 = df_toy['Col 1'] > 10
df_toy[col1_maior_10]
# Podemos selecionar uma linha pela sua label
df_toy.loc['a', :]
# Podemos também selecionar colunas
df_toy.loc[:, 'Col 2']
# Podemos fazer cortes de qualquer forma
df_toy.loc[['a', 'd', 'e'], ['Col 1', 'Col 3']]
# Vamos selecionar a primeira linha
df_toy.iloc[0, :]
# Vamos selecionar a segunda coluna
df_toy.iloc[:, 1]
# Também podemos fazer cortes de qualquer forma
df_toy.iloc[[0, 3, 4], [0, 2]]
# Selecionando colunas com bool
df_toy.iloc[:, [True, False, False, True]]
df.head()
df.loc[:, 'Coluna Aleatoria'] = np.random.randint(100, size=(df.shape[0],))
df.head()
# Removendo uma linha (identificamos a linha pela label)
df.drop(0) # Removendo a linha
df.head() # Vendo o resultado
# Salvando na variavel
df = df.drop(0)
# Ou podemos indicar com o padrametro in_place
# df.drop(0, inplace=True)
# Para remover uma coluna precisamos avisar que estamos querendo remover uma coluna
df = df.drop(columns=['Coluna Aleatoria'])
df.head()
# Vamos ver quantos valores nulos temos em cada coluna
df.isna().sum()
# Talvez seja interessante saber qual porcentagem dos valores é nulo
df.isna().sum() / df.shape[0]
# Vamos calcular a média das idades
idade_media = df['Age'].mean()
# Substituindo a coluna pela coluna com NaN sustituidos
df.loc[:, 'Age'] = df['Age'].fillna(idade_media)
# Vamos ver se até agora aconteceu o que queriamos
df.isna().sum()
# Vamos começar removendo a coluna Cabin
df.dropna(axis=1, thresh=600, inplace=True)
df.head()
# Agora vamos remover as linhas que possuem Embarked nulo
df.dropna(axis=0, subset=['Embarked'], inplace=True)
# Conferindo se limpamos tudo
df.isna().sum()
# Vamos criar uma função que faz essa transformação
def calc_faixa_etaria(idade):
if idade < 13:
return 'Criança'
elif idade < 18:
return 'Adolescente'
elif idade < 60:
return 'Adulto'
else:
return 'Idoso'
# A primeira ideia pode ser fazer algo assim
faixas_etarias = []
for i in range(df.shape[0]):
idade = df.iloc[i, 5] # Age é a coluna 5
faixa = calc_faixa_etaria(idade)
faixas_etarias.append(faixa)
faixas_etarias[8:14]
faixas_etarias = df['Age'].apply(calc_faixa_etaria)
faixas_etarias.iloc[8:14]
df.loc[:, 'Faixa etaria'] = faixas_etarias
df.head()
# Usamos index=False para não salvarmos a coluna de index
df.to_csv('data/dados_editados.csv', index=False)
| 0.282889 | 0.985882 |
# Changes In The Daily Growth Rate
> Changes in the daily growth rate for select countries.
- comments: true
- author: Thomas Wiecki
- categories: [growth]
- image: images/covid-growth.png
- permalink: /growth-analysis/
```
#hide
from pathlib import Path
loadpy = Path('load_covid_data.py')
if not loadpy.exists():
! wget https://raw.githubusercontent.com/github/covid19-dashboard/master/_notebooks/load_covid_data.py
#hide
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import pandas as pd
import seaborn as sns
import load_covid_data
sns.set_context('talk')
plt.style.use('seaborn-whitegrid')
#hide
df = load_covid_data.load_data(drop_states=True)
annotate_kwargs = dict(
s='Based on COVID Data Repository by Johns Hopkins CSSE ({})\nBy Thomas Wiecki'.format(df.index.max().strftime('%B %d, %Y')),
xy=(0.05, 0.01), xycoords='figure fraction', fontsize=10)
#hide
# Country names seem to change quite a bit
df.country.unique()
#hide
european_countries = ['Italy', 'Germany', 'France (total)', 'Spain', 'United Kingdom (total)',
'Iran']
large_engl_countries = ['US', 'Canada (total)', 'Australia (total)']
asian_countries = ['Singapore', 'Japan', 'Korea, South', 'Hong Kong']
south_american_countries = ['Argentina', 'Brazil', 'Colombia', 'Chile']
country_groups = [european_countries, large_engl_countries, asian_countries, south_american_countries]
line_styles = ['-', ':', '--', '-.']
#hide
def plot_countries(df, countries, min_confirmed=100, ls='-', col='confirmed'):
for country in countries:
df_country = df.loc[(df.country == country) & (df.confirmed >= min_confirmed)]
if len(df_country) == 0:
continue
df_country.reset_index()[col].plot(label=country, ls=ls)
sns.set_palette(sns.hls_palette(8, l=.45, s=.8)) # 8 countries max
fig, ax = plt.subplots(figsize=(12, 8))
for countries, ls in zip(country_groups, line_styles):
plot_countries(df, countries, ls=ls)
x = np.linspace(0, plt.xlim()[1] - 1)
ax.plot(x, 100 * (1.33) ** x, ls='--', color='k', label='33% daily growth')
ax.set(yscale='log',
title='Exponential growth of COVID-19 across countries',
xlabel='Days from first 100 confirmed cases',
ylabel='Confirmed cases (log scale)')
ax.get_yaxis().set_major_formatter(matplotlib.ticker.ScalarFormatter())
ax.legend(bbox_to_anchor=(1.0, 1.0))
ax.annotate(**annotate_kwargs)
sns.despine();
#hide
fig, ax = plt.subplots(figsize=(12, 8))
for countries, ls in zip(country_groups, line_styles):
plot_countries(df, countries, ls=ls)
x = np.linspace(0, plt.xlim()[1] - 1)
ax.plot(x, 100 * (1.33) ** x, ls='--', color='k', label='33% daily growth')
ax.set(title='Exponential growth of COVID-19 across countries',
xlabel='Days from first 100 confirmed cases',
ylabel='Confirmed cases', ylim=(0, 30000))
ax.legend(bbox_to_anchor=(1.0, 1.0))
ax.annotate(**annotate_kwargs)
sns.despine();
#hide_input
plt.rcParams['axes.titlesize'] = 24
smooth_days = 4
fig, ax = plt.subplots(figsize=(14, 8))
df['pct_change'] = (df
.groupby('country')
.confirmed
.pct_change()
.rolling(smooth_days)
.mean()
)
for countries, ls in zip(country_groups, line_styles):
(df.set_index('country')
.loc[countries]
.loc[lambda x: x.confirmed > 100]
.reset_index()
.set_index('days_since_100')
.groupby('country', sort=False)['pct_change']
.plot(ls=ls)
)
ax.set(ylim=(0, 1),
xlim=(0, 20),
title='Are we seeing changes in daily growth rate?',
xlabel='Days from first 100 confirmed cases',
ylabel='Daily percent change (smoothed over {} days)'.format(smooth_days),
)
ax.axhline(.33, ls='--', color='k')
ax.get_yaxis().set_major_formatter(matplotlib.ticker.ScalarFormatter())
ax.legend(bbox_to_anchor=(1.0, .155))
sns.despine()
ax.annotate(**annotate_kwargs);
# This creates a preview image for the blog post and home page
fig.savefig('../images/covid-growth.png')
```
## Appendix: German ICU Capacity
```
#hide_input
sns.set_palette(sns.hls_palette(8, l=.45, s=.8)) # 8 countries max
fig, ax = plt.subplots(figsize=(12, 8))
p_crit = .05
# 28000 ICU beds total, 80% occupied
icu_germany = 28000
icu_germany_free = .2
df_tmp = df.loc[lambda x: (x.country == 'Germany') & (x.confirmed > 100)].critical_estimate
df_tmp.plot(ax=ax)
x = np.linspace(0, 30, 30)
pd.Series(index=pd.date_range(df_tmp.index[0], periods=30),
data=100*p_crit * (1.33) ** x).plot(ax=ax,ls='--', color='k', label='33% daily growth')
ax.axhline(icu_germany, color='.3', ls='-.', label='Total ICU beds')
ax.axhline(icu_germany * icu_germany_free, color='.5', ls=':', label='Free ICU beds')
ax.set(yscale='log',
title='When will Germany run out of ICU beds?',
ylabel='Expected critical cases (assuming {:.0f}% critical)'.format(100 * p_crit),
)
ax.get_yaxis().set_major_formatter(matplotlib.ticker.ScalarFormatter())
ax.legend(bbox_to_anchor=(1.0, 1.0))
sns.despine()
ax.annotate(**annotate_kwargs);
```
Updated daily by [GitHub Actions](https://github.com/features/actions).
This visualization was made by [Thomas Wiecki](https://twitter.com/twiecki)[^1].
[^1]: Data sourced from ["2019 Novel Coronavirus COVID-19 (2019-nCoV) Data Repository by Johns Hopkins CSSE"](https://systems.jhu.edu/research/public-health/ncov/) [GitHub repository](https://github.com/CSSEGISandData/COVID-19) and recreates the (pay-walled) plot in the [Financial Times]( https://www.ft.com/content/a26fbf7e-48f8-11ea-aeb3-955839e06441). This code is provided under the [BSD-3 License](https://github.com/twiecki/covid19/blob/master/LICENSE). Link to [original notebook](https://github.com/twiecki/covid19/blob/master/covid19_growth.ipynb).
|
github_jupyter
|
#hide
from pathlib import Path
loadpy = Path('load_covid_data.py')
if not loadpy.exists():
! wget https://raw.githubusercontent.com/github/covid19-dashboard/master/_notebooks/load_covid_data.py
#hide
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import pandas as pd
import seaborn as sns
import load_covid_data
sns.set_context('talk')
plt.style.use('seaborn-whitegrid')
#hide
df = load_covid_data.load_data(drop_states=True)
annotate_kwargs = dict(
s='Based on COVID Data Repository by Johns Hopkins CSSE ({})\nBy Thomas Wiecki'.format(df.index.max().strftime('%B %d, %Y')),
xy=(0.05, 0.01), xycoords='figure fraction', fontsize=10)
#hide
# Country names seem to change quite a bit
df.country.unique()
#hide
european_countries = ['Italy', 'Germany', 'France (total)', 'Spain', 'United Kingdom (total)',
'Iran']
large_engl_countries = ['US', 'Canada (total)', 'Australia (total)']
asian_countries = ['Singapore', 'Japan', 'Korea, South', 'Hong Kong']
south_american_countries = ['Argentina', 'Brazil', 'Colombia', 'Chile']
country_groups = [european_countries, large_engl_countries, asian_countries, south_american_countries]
line_styles = ['-', ':', '--', '-.']
#hide
def plot_countries(df, countries, min_confirmed=100, ls='-', col='confirmed'):
for country in countries:
df_country = df.loc[(df.country == country) & (df.confirmed >= min_confirmed)]
if len(df_country) == 0:
continue
df_country.reset_index()[col].plot(label=country, ls=ls)
sns.set_palette(sns.hls_palette(8, l=.45, s=.8)) # 8 countries max
fig, ax = plt.subplots(figsize=(12, 8))
for countries, ls in zip(country_groups, line_styles):
plot_countries(df, countries, ls=ls)
x = np.linspace(0, plt.xlim()[1] - 1)
ax.plot(x, 100 * (1.33) ** x, ls='--', color='k', label='33% daily growth')
ax.set(yscale='log',
title='Exponential growth of COVID-19 across countries',
xlabel='Days from first 100 confirmed cases',
ylabel='Confirmed cases (log scale)')
ax.get_yaxis().set_major_formatter(matplotlib.ticker.ScalarFormatter())
ax.legend(bbox_to_anchor=(1.0, 1.0))
ax.annotate(**annotate_kwargs)
sns.despine();
#hide
fig, ax = plt.subplots(figsize=(12, 8))
for countries, ls in zip(country_groups, line_styles):
plot_countries(df, countries, ls=ls)
x = np.linspace(0, plt.xlim()[1] - 1)
ax.plot(x, 100 * (1.33) ** x, ls='--', color='k', label='33% daily growth')
ax.set(title='Exponential growth of COVID-19 across countries',
xlabel='Days from first 100 confirmed cases',
ylabel='Confirmed cases', ylim=(0, 30000))
ax.legend(bbox_to_anchor=(1.0, 1.0))
ax.annotate(**annotate_kwargs)
sns.despine();
#hide_input
plt.rcParams['axes.titlesize'] = 24
smooth_days = 4
fig, ax = plt.subplots(figsize=(14, 8))
df['pct_change'] = (df
.groupby('country')
.confirmed
.pct_change()
.rolling(smooth_days)
.mean()
)
for countries, ls in zip(country_groups, line_styles):
(df.set_index('country')
.loc[countries]
.loc[lambda x: x.confirmed > 100]
.reset_index()
.set_index('days_since_100')
.groupby('country', sort=False)['pct_change']
.plot(ls=ls)
)
ax.set(ylim=(0, 1),
xlim=(0, 20),
title='Are we seeing changes in daily growth rate?',
xlabel='Days from first 100 confirmed cases',
ylabel='Daily percent change (smoothed over {} days)'.format(smooth_days),
)
ax.axhline(.33, ls='--', color='k')
ax.get_yaxis().set_major_formatter(matplotlib.ticker.ScalarFormatter())
ax.legend(bbox_to_anchor=(1.0, .155))
sns.despine()
ax.annotate(**annotate_kwargs);
# This creates a preview image for the blog post and home page
fig.savefig('../images/covid-growth.png')
#hide_input
sns.set_palette(sns.hls_palette(8, l=.45, s=.8)) # 8 countries max
fig, ax = plt.subplots(figsize=(12, 8))
p_crit = .05
# 28000 ICU beds total, 80% occupied
icu_germany = 28000
icu_germany_free = .2
df_tmp = df.loc[lambda x: (x.country == 'Germany') & (x.confirmed > 100)].critical_estimate
df_tmp.plot(ax=ax)
x = np.linspace(0, 30, 30)
pd.Series(index=pd.date_range(df_tmp.index[0], periods=30),
data=100*p_crit * (1.33) ** x).plot(ax=ax,ls='--', color='k', label='33% daily growth')
ax.axhline(icu_germany, color='.3', ls='-.', label='Total ICU beds')
ax.axhline(icu_germany * icu_germany_free, color='.5', ls=':', label='Free ICU beds')
ax.set(yscale='log',
title='When will Germany run out of ICU beds?',
ylabel='Expected critical cases (assuming {:.0f}% critical)'.format(100 * p_crit),
)
ax.get_yaxis().set_major_formatter(matplotlib.ticker.ScalarFormatter())
ax.legend(bbox_to_anchor=(1.0, 1.0))
sns.despine()
ax.annotate(**annotate_kwargs);
| 0.532182 | 0.856932 |
```
# Libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
import cv2
from PIL import Image
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import KFold
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from keras.optimizers import Adam
from keras.layers import Dropout
#importing training dataset
train=pd.read_csv('../input/gtsrb-german-traffic-sign/Train.csv')
X_train=train['Path']
y_train=train.ClassId
train
data_dir = "../input/gtsrb-german-traffic-sign"
train_imgpath= list((data_dir + '/' + str(train.Path[i])) for i in range(len(train.Path)))
for i in range(0,9):
plt.subplot(331+i)
seed=np.random.randint(0,39210)
im = Image.open(train_imgpath[seed])
plt.imshow(im)
plt.show()
```
# Preprocessing the images
Converting the images into arrays of shape (28, 28, 3)
```
train_data=[]
train_labels=[]
path = "../input/gtsrb-german-traffic-sign/"
for i in range(len(train.Path)):
image=cv2.imread(train_imgpath[i])
image_from_array = Image.fromarray(image, 'RGB')
size_image = image_from_array.resize((28,28))
train_data.append(np.array(size_image))
train_labels.append(train.ClassId[i])
X=np.array(train_data)
y=np.array(train_labels)
#Spliting the images into train and validation sets
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split( X, y, test_size=0.20, random_state=7777)
X_train = X_train.astype('float32')/255
X_val = X_val.astype('float32')/255
#Using one hote encoding for the train and validation labels
from keras.utils import to_categorical
y_train = to_categorical(y_train, 43)
y_val = to_categorical(y_val, 43)
```
# CNN Model
Grid search to determine the number of layers and the number of neurons in each layer of the sequential model.
```
def create_model(layers):
cnn = tf.keras.models.Sequential()
cnn.add(tf.keras.layers.Conv2D(filters=32, kernel_size=3, padding="same", activation="relu", input_shape=[28, 28, 3]))
cnn.add(tf.keras.layers.MaxPool2D(pool_size=2, strides=2, padding='valid'))
cnn.add(tf.keras.layers.Conv2D(filters=32, kernel_size=3, padding="same", activation="relu"))
cnn.add(tf.keras.layers.MaxPool2D(pool_size=2, strides=2, padding='valid'))
cnn.add(tf.keras.layers.Flatten())
for i, nodes in enumerate(layers):
cnn.add(tf.keras.layers.Dense(units=nodes, activation='relu'))
cnn.add(tf.keras.layers.Dense(units=43, activation='softmax'))
cnn.compile(optimizer = 'Adam', loss = 'categorical_crossentropy', metrics = ['accuracy'])
return cnn
model = KerasClassifier(build_fn=create_model, verbose=1)
layers = [[128],(256, 128),(200, 150, 120)]
param_grid = dict(layers=layers)
grid = GridSearchCV(estimator=model, param_grid=param_grid, verbose=1)
grid_results = grid.fit(X_train,y_train, validation_data=(X_val, y_val))
print("Best: {0}, using {1}".format(grid_results.best_score_, grid_results.best_params_))
means = grid_results.cv_results_['mean_test_score']
stds = grid_results.cv_results_['std_test_score']
params = grid_results.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
print('{0} ({1}) with: {2}'.format(mean, stdev, param))
```
Grid Search to determine the batch size
```
def create_model1():
cnn = tf.keras.models.Sequential()
cnn.add(tf.keras.layers.Conv2D(filters=32, kernel_size=3, padding="same", activation="relu", input_shape=[28, 28, 3]))
cnn.add(tf.keras.layers.MaxPool2D(pool_size=2, strides=2, padding='valid'))
cnn.add(tf.keras.layers.Conv2D(filters=32, kernel_size=3, padding="same", activation="relu"))
cnn.add(tf.keras.layers.MaxPool2D(pool_size=2, strides=2, padding='valid'))
cnn.add(tf.keras.layers.Flatten())
cnn.add(tf.keras.layers.Dense(units=256, activation='relu'))
cnn.add(tf.keras.layers.Dense(units=128, activation='relu'))
cnn.add(tf.keras.layers.Dense(units=43, activation='softmax'))
cnn.compile(optimizer = 'Adam', loss = 'categorical_crossentropy', metrics = ['accuracy'])
return cnn
model = KerasClassifier(build_fn = create_model1, verbose = 1)
batch_size = [20,40]
param_grid = dict(batch_size=batch_size)
grid = GridSearchCV(estimator = model, param_grid = param_grid, verbose = 1)
grid_results = grid.fit(X_train,y_train, validation_data=(X_val, y_val))
print("Best: {0}, using {1}".format(grid_results.best_score_, grid_results.best_params_))
means = grid_results.cv_results_['mean_test_score']
params = grid_results.cv_results_['params']
for mean,param in zip(means,params):
print('{0} with: {1}'.format(mean,param))
```
Grid Search to determine the dropout rate
```
def create_model2(dropout):
# create model
cnn = tf.keras.models.Sequential()
cnn.add(tf.keras.layers.Conv2D(filters=32, kernel_size=3, padding="same", activation="relu", input_shape=[28, 28, 3]))
cnn.add(tf.keras.layers.MaxPool2D(pool_size=2, strides=2, padding='valid'))
cnn.add(tf.keras.layers.Conv2D(filters=32, kernel_size=3, padding="same", activation="relu"))
cnn.add(tf.keras.layers.MaxPool2D(pool_size=2, strides=2, padding='valid'))
cnn.add(tf.keras.layers.Flatten())
cnn.add(tf.keras.layers.Dense(units=256, activation='relu'))
cnn.add(Dropout(dropout))
cnn.add(tf.keras.layers.Dense(units=128, activation='relu'))
cnn.add(Dropout(dropout))
cnn.add(tf.keras.layers.Dense(units=43, activation='softmax'))
cnn.compile(optimizer = 'Adam', loss = 'categorical_crossentropy', metrics = ['accuracy'])
return cnn
model = KerasClassifier(build_fn = create_model2, verbose = 1, batch_size=20)
dropout = [0.0, 0.1, 0.2]
param_grid = dict(dropout=dropout)
grid = GridSearchCV(estimator = model, param_grid = param_grid, verbose = 1)
grid_results = grid.fit(X_train,y_train, validation_data=(X_val, y_val))
print("Best: {0}, using {1}".format(grid_results.best_score_, grid_results.best_params_))
means = grid_results.cv_results_['mean_test_score']
params = grid_results.cv_results_['params']
for mean,param in zip(means,params):
print('{0} with: {1}'.format(mean,param))
```
Inputting the chosen parameters into the final model (for better accuracy, you can run the grid search multiple times, zooming into the range around each best value found before); a sketch of such a refined grid follows below.
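As a sketch of that zooming-in idea, a second and narrower grid centred on the best values found above could look like the cell below. The ranges are illustrative rather than results from this run, and the fit call is left commented out because it is expensive.
```
# Hypothetical second-pass grid around the best batch size and dropout found earlier
model = KerasClassifier(build_fn=create_model2, verbose=1)
param_grid = dict(batch_size=[15, 20, 25], dropout=[0.05, 0.1, 0.15])
grid = GridSearchCV(estimator=model, param_grid=param_grid, verbose=1)
# grid_results = grid.fit(X_train, y_train, validation_data=(X_val, y_val))
```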
```
#Definition of the DNN model
cnn = tf.keras.models.Sequential()
cnn.add(tf.keras.layers.Conv2D(filters=32, kernel_size=3, padding="same", activation="relu", input_shape=[28, 28, 3]))
cnn.add(tf.keras.layers.MaxPool2D(pool_size=2, strides=2, padding='valid'))
cnn.add(tf.keras.layers.Conv2D(filters=32, kernel_size=3, padding="same", activation="relu"))
cnn.add(tf.keras.layers.MaxPool2D(pool_size=2, strides=2, padding='valid'))
cnn.add(tf.keras.layers.Flatten())
cnn.add(tf.keras.layers.Dense(units=256, activation='relu'))
cnn.add(Dropout(0.1))
cnn.add(tf.keras.layers.Dense(units=128, activation='relu'))
cnn.add(Dropout(0.1))
cnn.add(tf.keras.layers.Dense(units=43, activation='softmax'))
# compile the model
cnn.compile(optimizer = 'Adam', loss = 'categorical_crossentropy', metrics = ['accuracy'])
history = cnn.fit(X_train, y_train, batch_size=20, epochs=20,validation_data=(X_val, y_val))
```
Plotting the values of accuracy and loss vs epoch to visually determine the suitable number of epochs required
```
import matplotlib.pyplot as plt
plt.figure(0)
plt.plot(history.history['accuracy'], label='training accuracy')
plt.plot(history.history['val_accuracy'], label='val accuracy')
plt.title('Accuracy')
plt.xlabel('epochs')
plt.ylabel('accuracy')
plt.legend()
plt.figure(1)
plt.plot(history.history['loss'], label='training loss')
plt.plot(history.history['val_loss'], label='val loss')
plt.title('Loss')
plt.xlabel('epochs')
plt.ylabel('loss')
plt.legend()
```
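An alternative to reading the curves by eye is to stop training automatically once the validation loss stops improving. A possible sketch with a Keras callback, not part of the original run, is shown below.
```
from tensorflow.keras.callbacks import EarlyStopping

# Stop once val_loss has not improved for 3 consecutive epochs, keeping the best weights
early_stop = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)
# history = cnn.fit(X_train, y_train, batch_size=20, epochs=50,
#                   validation_data=(X_val, y_val), callbacks=[early_stop])
```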
# Preparing the test data
```
test=pd.read_csv('../input/gtsrb-german-traffic-sign/Test.csv')
X_test=test['Path']
y_test=test.ClassId
data_dir = "../input/gtsrb-german-traffic-sign"
test_imgpath= list((data_dir + '/' + str(test.Path[i])) for i in range(len(test.Path)))
test_data=[]
test_labels=[]
path = "../input/gtsrb-german-traffic-sign/"
for i in range(len(test.Path)):
image=cv2.imread(test_imgpath[i])
image_from_array = Image.fromarray(image, 'RGB')
size_image = image_from_array.resize((28,28))
test_data.append(np.array(size_image))
test_labels.append(test.ClassId[i])
X_test=np.array(test_data)
y_test=np.array(test_labels)
X_test = X_test.astype('float32')/255
#predictions-
pred = cnn.predict_classes(X_test)
from sklearn.metrics import accuracy_score
accuracy_score(y_test, pred)
```
|
github_jupyter
|
# Libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
import cv2
from PIL import Image
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import KFold
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from keras.optimizers import Adam
from keras.layers import Dropout
#importing training dataset
train=pd.read_csv('../input/gtsrb-german-traffic-sign/Train.csv')
X_train=train['Path']
y_train=train.ClassId
train
data_dir = "../input/gtsrb-german-traffic-sign"
train_imgpath= list((data_dir + '/' + str(train.Path[i])) for i in range(len(train.Path)))
for i in range(0,9):
plt.subplot(331+i)
seed=np.random.randint(0,39210)
im = Image.open(train_imgpath[seed])
plt.imshow(im)
plt.show()
train_data=[]
train_labels=[]
path = "../input/gtsrb-german-traffic-sign/"
for i in range(len(train.Path)):
image=cv2.imread(train_imgpath[i])
image_from_array = Image.fromarray(image, 'RGB')
size_image = image_from_array.resize((28,28))
train_data.append(np.array(size_image))
train_labels.append(train.ClassId[i])
X=np.array(train_data)
y=np.array(train_labels)
#Spliting the images into train and validation sets
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split( X, y, test_size=0.20, random_state=7777)
X_train = X_train.astype('float32')/255
X_val = X_val.astype('float32')/255
#Using one hote encoding for the train and validation labels
from keras.utils import to_categorical
y_train = to_categorical(y_train, 43)
y_val = to_categorical(y_val, 43)
def create_model(layers):
cnn = tf.keras.models.Sequential()
cnn.add(tf.keras.layers.Conv2D(filters=32, kernel_size=3, padding="same", activation="relu", input_shape=[28, 28, 3]))
cnn.add(tf.keras.layers.MaxPool2D(pool_size=2, strides=2, padding='valid'))
cnn.add(tf.keras.layers.Conv2D(filters=32, kernel_size=3, padding="same", activation="relu"))
cnn.add(tf.keras.layers.MaxPool2D(pool_size=2, strides=2, padding='valid'))
cnn.add(tf.keras.layers.Flatten())
for i, nodes in enumerate(layers):
cnn.add(tf.keras.layers.Dense(units=nodes, activation='relu'))
cnn.add(tf.keras.layers.Dense(units=43, activation='softmax'))
cnn.compile(optimizer = 'Adam', loss = 'categorical_crossentropy', metrics = ['accuracy'])
return cnn
model = KerasClassifier(build_fn=create_model, verbose=1)
layers = [[128],(256, 128),(200, 150, 120)]
param_grid = dict(layers=layers)
grid = GridSearchCV(estimator=model, param_grid=param_grid, verbose=1)
grid_results = grid.fit(X_train,y_train, validation_data=(X_val, y_val))
print("Best: {0}, using {1}".format(grid_results.best_score_, grid_results.best_params_))
means = grid_results.cv_results_['mean_test_score']
stds = grid_results.cv_results_['std_test_score']
params = grid_results.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
print('{0} ({1}) with: {2}'.format(mean, stdev, param))
def create_model1():
cnn = tf.keras.models.Sequential()
cnn.add(tf.keras.layers.Conv2D(filters=32, kernel_size=3, padding="same", activation="relu", input_shape=[28, 28, 3]))
cnn.add(tf.keras.layers.MaxPool2D(pool_size=2, strides=2, padding='valid'))
cnn.add(tf.keras.layers.Conv2D(filters=32, kernel_size=3, padding="same", activation="relu"))
cnn.add(tf.keras.layers.MaxPool2D(pool_size=2, strides=2, padding='valid'))
cnn.add(tf.keras.layers.Flatten())
cnn.add(tf.keras.layers.Dense(units=256, activation='relu'))
cnn.add(tf.keras.layers.Dense(units=128, activation='relu'))
cnn.add(tf.keras.layers.Dense(units=43, activation='softmax'))
cnn.compile(optimizer = 'Adam', loss = 'categorical_crossentropy', metrics = ['accuracy'])
return cnn
model = KerasClassifier(build_fn = create_model1, verbose = 1)
batch_size = [20,40]
param_grid = dict(batch_size=batch_size)
grid = GridSearchCV(estimator = model, param_grid = param_grid, verbose = 1)
grid_results = grid.fit(X_train,y_train, validation_data=(X_val, y_val))
print("Best: {0}, using {1}".format(grid_results.best_score_, grid_results.best_params_))
means = grid_results.cv_results_['mean_test_score']
params = grid_results.cv_results_['params']
for mean,param in zip(means,params):
print('{0} with: {1}'.format(mean,param))
def create_model2(dropout):
# create model
cnn = tf.keras.models.Sequential()
cnn.add(tf.keras.layers.Conv2D(filters=32, kernel_size=3, padding="same", activation="relu", input_shape=[28, 28, 3]))
cnn.add(tf.keras.layers.MaxPool2D(pool_size=2, strides=2, padding='valid'))
cnn.add(tf.keras.layers.Conv2D(filters=32, kernel_size=3, padding="same", activation="relu"))
cnn.add(tf.keras.layers.MaxPool2D(pool_size=2, strides=2, padding='valid'))
cnn.add(tf.keras.layers.Flatten())
cnn.add(tf.keras.layers.Dense(units=256, activation='relu'))
cnn.add(Dropout(dropout))
cnn.add(tf.keras.layers.Dense(units=128, activation='relu'))
cnn.add(Dropout(dropout))
cnn.add(tf.keras.layers.Dense(units=43, activation='softmax'))
cnn.compile(optimizer = 'Adam', loss = 'categorical_crossentropy', metrics = ['accuracy'])
return cnn
model = KerasClassifier(build_fn = create_model2, verbose = 1, batch_size=20)
dropout = [0.0, 0.1, 0.2]
param_grid = dict(dropout=dropout)
grid = GridSearchCV(estimator = model, param_grid = param_grid, verbose = 1)
grid_results = grid.fit(X_train,y_train, validation_data=(X_val, y_val))
print("Best: {0}, using {1}".format(grid_results.best_score_, grid_results.best_params_))
means = grid_results.cv_results_['mean_test_score']
params = grid_results.cv_results_['params']
for mean,param in zip(means,params):
print('{0} with: {1}'.format(mean,param))
#Definition of the DNN model
cnn = tf.keras.models.Sequential()
cnn.add(tf.keras.layers.Conv2D(filters=32, kernel_size=3, padding="same", activation="relu", input_shape=[28, 28, 3]))
cnn.add(tf.keras.layers.MaxPool2D(pool_size=2, strides=2, padding='valid'))
cnn.add(tf.keras.layers.Conv2D(filters=32, kernel_size=3, padding="same", activation="relu"))
cnn.add(tf.keras.layers.MaxPool2D(pool_size=2, strides=2, padding='valid'))
cnn.add(tf.keras.layers.Flatten())
cnn.add(tf.keras.layers.Dense(units=256, activation='relu'))
cnn.add(Dropout(0.1))
cnn.add(tf.keras.layers.Dense(units=128, activation='relu'))
cnn.add(Dropout(0.1))
cnn.add(tf.keras.layers.Dense(units=43, activation='softmax'))
# compile the model
cnn.compile(optimizer = 'Adam', loss = 'categorical_crossentropy', metrics = ['accuracy'])
history = cnn.fit(X_train, y_train, batch_size=20, epochs=20,validation_data=(X_val, y_val))
import matplotlib.pyplot as plt
plt.figure(0)
plt.plot(history.history['accuracy'], label='training accuracy')
plt.plot(history.history['val_accuracy'], label='val accuracy')
plt.title('Accuracy')
plt.xlabel('epochs')
plt.ylabel('accuracy')
plt.legend()
plt.figure(1)
plt.plot(history.history['loss'], label='training loss')
plt.plot(history.history['val_loss'], label='val loss')
plt.title('Loss')
plt.xlabel('epochs')
plt.ylabel('loss')
plt.legend()
test=pd.read_csv('../input/gtsrb-german-traffic-sign/Test.csv')
X_test=test['Path']
y_test=test.ClassId
data_dir = "../input/gtsrb-german-traffic-sign"
test_imgpath= list((data_dir + '/' + str(test.Path[i])) for i in range(len(test.Path)))
test_data=[]
test_labels=[]
path = "../input/gtsrb-german-traffic-sign/"
for i in range(len(test.Path)):
image=cv2.imread(test_imgpath[i])
image_from_array = Image.fromarray(image, 'RGB')
size_image = image_from_array.resize((28,28))
test_data.append(np.array(size_image))
test_labels.append(test.ClassId[i])
X_test=np.array(test_data)
y_test=np.array(test_labels)
X_test = X_test.astype('float32')/255
#predictions-
pred = cnn.predict_classes(X_test)
from sklearn.metrics import accuracy_score
accuracy_score(y_test, pred)
| 0.778733 | 0.693499 |
```
import pandas
import numpy
import matplotlib.pyplot as plt
amazon=numpy.random.normal(1.21,0.3,250000)
wallyworld=numpy.random.normal(1.11,0.33,250000)
usps=numpy.random.normal(1.20,0.12,250000)
boxes = pandas.DataFrame({'Amazon Branded Boxes':amazon,'Walmart Branded Boxes':wallyworld,'US Postal Service Branded Boxes':usps})
boxes.to_csv('boxes.csv', index = False)
```
# <font color=darkblue>ENGR 1330-2022 Exam 3-Laboratory Portion </font>
**LAST NAME, FIRST NAME**
**R00000000**
ENGR 1330 Exam 3M - Demonstrate Laboratory/Programming Skills
---
**Download** (right-click, save target as ...) this page as a jupyterlab notebook from: [s22-ex3-deployM.ipynb](http://54.243.252.9/engr-1330-webroot/5-ExamProblems/Exam3/Exam3/spring2022/s22-ex3-deploy.ipynb)
**If you are unable to download the file, create an empty notebook and copy paste the problems into Markdown cells and Code cells (problem-by-problem)**
## Exercise 1 (5 pts.)
The file [boxes.csv](http://54.243.252.9/engr-1330-webroot/5-ExamProblems/Exam3/spring2022/boxes.csv) below contains impact strength values, in foot-pounds, of packaging materials for several branded boxes.
Download the file and read it into a dataframe.
<!--```
import requests
remote_url="http://54.243.292.9/engr-1330-webroot/5-ExamProblems/Exam3/spring2022/boxes.csv" # set the url
rget = requests.get(remote_url, allow_redirects=True) # get the remote resource, follow imbedded links
localfile = open('boxes.csv','wb') # open connection to a local file same name as remote
localfile.write(rget.content) # extract from the remote the contents,insert into the local file same name
localfile.close() # close connection to the local file
```-->
```
# download script goes here
# read dataframe script goes here
```
Describe the dataframe. How many columns are in the dataframe? What are the column names?
```
# your script/answers go here
```
Exercise 2 (15 pts.) Produce a histogram of the Amazon series and the Walmart series on the same plot. Plot Amazon using red, and Walmart using blue.
> - Import suitable package to build histograms
> - Apply package with plotting call to produce two histograms on same figure space
> - Label plot and axes with suitable annotation
Comment on the histograms: do they overlap?
```
# your script goes here
```
Exercise 3 (5 pts.) Determine the mean strength and the standard deviation of the Amazon and USPS brands. Which one has a greater mean value? Which one has the greater standard deviation?
```
# your script goes here
```
Exercise 4 (5 pts.) Test the Amazon data for normality and interpret the results.
```
# your script here
```
Exercise 5 (5 pts.) Test the Walmart data for normality and interpret the results.
```
# your script here
```
Exercise 6 (15 pts.) Determine if there is evidence of a difference in mean strength between the two brands.
Use an appropriate hypothesis test to support your assertion at a level of significance of $\alpha = 0.10$.
> - Choose a test and justify choice
> - Import suitable package to run the test
> - Apply the test and interpret the results
> - Report result with suitable annotation
|
github_jupyter
|
import pandas
import numpy
import matplotlib.pyplot as plt
amazon=numpy.random.normal(1.21,0.3,250000)
wallyworld=numpy.random.normal(1.11,0.33,250000)
usps=numpy.random.normal(1.20,0.12,250000)
boxes = pandas.DataFrame({'Amazon Branded Boxes':amazon,'Walmart Branded Boxes':wallyworld,'US Postal Service Branded Boxes':usps})
boxes.to_csv('boxes.csv', index = False)
import requests
remote_url="http://54.243.292.9/engr-1330-webroot/5-ExamProblems/Exam3/spring2022/boxes.csv" # set the url
rget = requests.get(remote_url, allow_redirects=True) # get the remote resource, follow imbedded links
localfile = open('boxes.csv','wb') # open connection to a local file same name as remote
localfile.write(rget.content) # extract from the remote the contents,insert into the local file same name
localfile.close() # close connection to the local file
# download script goes here
# read dataframe script goes here
# your script/answers go here
# your script goes here
# your script goes here
# your script here
# your script here
| 0.339061 | 0.770983 |
```
import numpy as np
import pandas as pd
import random
import seaborn as sns
from matplotlib import pyplot as plt
df = pd.read_csv("C:/Users/Jaswinder Singh/Downloads/SEM-2/DAPA/CA_2/Code/IBM Watson_dataset/watson_data.csv")
df.head()
df.info()
df.dtypes
# Viewing numerical and categorical variables
numeric_var = [key for key in dict(df.dtypes)
if dict(df.dtypes)[key]
in ['float64','float32','int32','int64']] # Numeric Variable
categorical_var = [key for key in dict(df.dtypes)
if dict(df.dtypes)[key] in ['object'] ] # Categorical Varible
numeric_var
categorical_var
categorical_var[1]
for i in range(len(categorical_var)):
print(categorical_var[i], '--', df[categorical_var[i]].unique())
# cols = {'Variable_Name': categorical_var, 'Categories': df[categorical_var].unique()}
# df2 = pd.DataFrame(data = cols)
df_cat = pd.DataFrame()
df_cat['State'] = df['State']
df_cat['Response'] = df['Response']
df_cat['Coverage'] = df['Coverage']
df_cat['Education'] = df['Education']
df_cat['Employment_Status'] = df['EmploymentStatus']
df_cat['Gender'] = df['Gender']
df_cat['Location Code'] = df['Location Code']
df_cat['Marital Status'] = df['Marital Status']
df_cat['Policy Type'] = df['Policy Type']
df_cat['Policy'] = df['Policy']
df_cat['Renew Offer Type'] = df['Renew Offer Type']
df_cat['Sales Channel'] = df['Sales Channel']
df_cat['Vehicle Class'] = df['Vehicle Class']
df_cat['Vehicle Size'] = df['Vehicle Size']
df_cat.head()
plt.figure(figsize = (15,8))
plt.grid()
sns.boxplot(x=df['Response'], y=df['Customer Lifetime Value'], hue = df['Renew Offer Type'], showfliers = False)
plt.figure(figsize = (15,8))
plt.grid()
sns.boxplot(x=df['Response'], y=df['Customer Lifetime Value'], hue = df['Renew Offer Type'])
continous_var_df = df.select_dtypes(include=['int64','float'])
#Correlation heat map
plt.figure(figsize=(10,5))
sns.heatmap(continous_var_df.corr(), annot = True, linewidths=.5, cmap="GnBu")
plt.show()
```
# Modelling
```
df = df.drop(['Income', 'Months Since Last Claim', 'Months Since Policy Inception',
'Number of Open Complaints','Customer',
'State','Response','Education',
'Effective To Date','EmploymentStatus',
'Gender','Location Code','Marital Status',
'Policy Type','Policy','Sales Channel','Vehicle Size'], axis=1)
df.head()
df['Coverage'] = df['Coverage'].factorize()[0]
# Map the categorical labels to integer codes.
# (Assigning 0 to the whole column first would erase the labels before they could be compared.)
df['Renew Offer Type'] = df['Renew Offer Type'].map(
    {'Offer1': 0, 'Offer2': 1, 'Offer3': 2, 'Offer4': 3})
df['Vehicle Class'] = df['Vehicle Class'].map(
    {'Two-Door Car': 0, 'Four-Door Car': 1, 'SUV': 2,
     'Luxury SUV': 3, 'Sports Car': 4, 'Luxury Car': 5})
y = df['Customer Lifetime Value']
df_features = ['Coverage', 'Monthly Premium Auto', 'Number of Policies', 'Renew Offer Type', 'Total Claim Amount', 'Vehicle Class']
X = df[df_features]
X.head()
# imports needed when the notebook is run top to bottom
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor

train_X, val_X, train_y, val_y = train_test_split(X, y, train_size=0.8, test_size=0.2, random_state = 0)
#Random Forest Regressor
def get_mae(n_estimators, train_X, val_X, train_y, val_y):
model = RandomForestRegressor(n_estimators=n_estimators, random_state=0)
model.fit(train_X, train_y)
preds_val = model.predict(val_X)
mae = mean_absolute_error(val_y, preds_val)
return(mae)
from sklearn.metrics import mean_absolute_error
candidate_n_estimators = [25, 50, 100, 250, 500, 1000]
for n_estimators in candidate_n_estimators:
my_mae = get_mae(n_estimators, train_X, val_X, train_y, val_y)
print("n_estimators: %d \t\t Mean Absolute Error: %d" %(n_estimators, my_mae))
best_model = RandomForestRegressor(n_estimators=50, random_state=0)
best_model.fit(train_X, train_y)
preds_val = best_model.predict(val_X)
mae = mean_absolute_error(val_y, preds_val)
print("Best Model is RandomForestRegressor with the lowest MAE %d." %mae)
df_1 = pd.DataFrame({'Actual': val_y, 'Predicted': preds_val})
df_1
```
# Modelling
```
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
```
# Numpy
We have seen Python's basic data structures in the last section. They are great, but they lack specialized features for data analysis: things like adding rows or columns and operating on 2-d matrices are not readily available. So we will use *numpy* for such functionality.
```
# Import library.
import numpy as np
```
Numpy operates on *nd* arrays. These are similar to lists, but they contain homogeneous elements and make it easier to store 2-d data.
```
l1 = [1,2,3,4] # declare a list l1.
nd1 = np.array(l1) # convert list l1 to numpy array.
print(nd1) # print numpy array nd1.
l2 = [5,6,7,8]
nd2 = np.array([l1,l2])
print(nd2)
```
Some useful attributes of np.array():
```
print(nd2.shape) # print shape of numpy array (It will return shape as tuple).
print(nd2.size) # The size of numpy array (Total number of elements in array).
print(nd2.dtype) # The data-type of elements in the array.
```
### Question 1
Create an identity 2d-array or matrix (with ones across the diagonal).
[**Hint:** You can also use the **np.identity()** function]
```
np.identity(2) # will return an identity matrix of size 2x2 (as identity matrices are always square matrix).
```
### Question 2
Create a 2d-array or matrix of order 3x3 with values = 9,8,7,6,5,4,3,2,1 arranged in the same order.
Use: **np.array()** function
```
# We could use np.matrix as well, but the numpy documentation says np.matrix will be deprecated
# and may stop working in the future, so we use np.array instead.
# np.matrix supports at most 2 dimensions, whereas np.array supports n dimensions.
d1 = np.array([[9,8,7],[6,5,4],[3,2,1]])
d1
#we can also use the arange method and then reshape the array into the desired matrix
d2 = np.arange(start = 9, stop = 0, step = -1)
d2 = d2.reshape(3,3)
d2
```
### Question 3
Interchange both the rows and columns of the given matrix.
[**Hint:** You can use the transpose **.T**]
```
d1.T # Return transpose of array d1.
```
### Question 4
Add + 1 to all the elements in the given matrix.
```
d1 + 1 # Adds the constant to each element of the array.
```
Similarly you can do operations like scalar subtraction, division, and multiplication (operating on each element in the matrix).
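For example, reusing the matrix `d1` from Question 2, each of these operates element-wise:
```
print(d1 - 1)   # subtract 1 from every element
print(d1 * 2)   # multiply every element by 2
print(d1 / 3)   # divide every element by 3
```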
### Question 5
Find the mean of all elements in the given matrix nd6.
nd6 = [[ 1 4 9 121 144 169]
[ 16 25 36 196 225 256]
[ 49 64 81 289 324 361]]
Use: **.mean()** function
```
nd6 = np.matrix([[ 1, 4, 9, 121, 144, 169], [ 16, 25, 36, 196, 225, 256], [ 49, 64, 81, 289, 324, 361]]) # declare numpy array
nd6 # print array.
nd6.mean() # Returns mean of the elements of array.
```
### Question 7
Find the dot product of two given matrices.
[**Hint:** Use **np.dot()**]
```
np.dot(d1, nd6) # Returns the dot product of d1 and nd6.
```
### Array Slicing/Indexing:
- Now we'll learn to access multiple elements or a range of elements from an array.
```
x = np.arange(20) # Creates a numpy array with 20 integers from 0 to 19.
x
```
### Question 8
Print the array elements from start to 4th position
```
x[:5] # Returns the first 5 elements of array.
```
### Question 9
- Return elements from first position with step size 2. (Difference between consecutive elements is 2)
```
x[0::2] # Returns the elements of array from 1st element to further elements with step size of 2.
```
### Question 10
Reverse the array using array indexing:
```
x[::-1] # Returns the reversed array.
```
# Expressibility of Quantum Neural Networks
<em>Copyright (c) 2021 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.</em>
## Overview
In quantum machine learning, the **expressibility** of a quantum neural network (QNN) is a key factor in whether a learning task can succeed. Roughly speaking, the more expressive the QNN ansatz, the more likely it is that the optimization can reach the global optimum. This tutorial first introduces the basic concept of expressibility, then uses Paddle Quantum to visualize, on the Bloch sphere, how the expressibility of different ansatzes differs. Finally, it presents a quantitative measure of expressibility and evaluates the built-in QNN templates of Paddle Quantum at different circuit depths.
## Basic concepts
Let us first recall the basic workflow of a quantum machine learning algorithm. We usually design a loss function $\mathcal{L}$ and minimize it by optimizing over a unitary transformation $U$:
$$
\min_U\mathcal{L}(U)=\min_U \text{tr}[HU\rho_{in}U^\dagger],\tag{1}
$$
The mathematics behind the algorithm guarantees that, if we could search over every possible unitary, the minimum of the loss function would correspond to the solution of our problem. In practice, we parameterize the unitary with a quantum neural network:
$$
U=U(\vec{\theta})=U_D(\vec{\theta}_D)\dots U_1(\vec{\theta}_1),\tag{2}
$$
where each $U_j(\vec{\theta}_j),j\in[1,D]$ denotes one layer of the QNN and $\vec{\theta}_j$ denotes that layer's parameters. By tuning the parameters $\vec{\theta}$ of the QNN we optimize the unitary $U$ and thereby minimize the loss $\mathcal{L}$:
$$
\min_{\vec{\theta}}\mathcal{L}(\vec{\theta})=\min_{\vec{\theta}} \text{tr}[HU(\vec{\theta})\rho_{in}U(\vec{\theta})^\dagger].\tag{3}
$$
A careful reader may already have noticed a limitation of QNNs: for a given ansatz, **sweeping over all of its parameters does not necessarily sweep over all unitaries**. As a simple example, if we only allow a single $R_Y$ rotation as the single-qubit QNN $U(\theta)=R_Y(\theta)$, then (up to a global phase) $U(\theta)$ clearly cannot represent any unitary whose matrix elements have non-zero imaginary parts. If both $R_Y$ and $R_Z$ rotations are allowed and we build the QNN as $U(\vec{\theta})=R_Z(\theta_1)R_Y(\theta_2)R_Z(\theta_3)$, then $U(\vec{\theta})$ can represent (up to a global phase) every single-qubit unitary [1].
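As a quick numerical sanity check of that last claim (a sketch using plain NumPy, assuming the standard convention $R_Y(\theta)=e^{-i\theta Y/2}$, $R_Z(\theta)=e^{-i\theta Z/2}$), the Hadamard gate — which no single $R_Y(\theta)$ can realize even up to a global phase — is reproduced by a $R_Z$-$R_Y$-$R_Z$ sequence with the first angle set to zero:
```
# Sketch: verify H = e^{i*pi/2} * Rz(0) @ Ry(pi/2) @ Rz(pi) numerically
import numpy as np

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)

def rz(t):
    return np.array([[np.exp(-1j * t / 2), 0],
                     [0, np.exp(1j * t / 2)]], dtype=complex)

hadamard = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
candidate = np.exp(1j * np.pi / 2) * rz(0) @ ry(np.pi / 2) @ rz(np.pi)
print(np.allclose(candidate, hadamard))   # True: the Z-Y-Z ansatz reaches this gate
```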
If we define the expressibility of a QNN as **how many unitaries the circuit can realize as its parameters $\vec{\theta}$ are swept**, then a more expressive QNN is more likely to contain the unitaries that bring the loss $\mathcal{L}$ to its global minimum; conversely, if a QNN $U_{weak}$ is so weakly expressive that it contains no unitary minimizing the loss, a quantum machine learning task built on optimizing $U_{weak}$ will very likely fail.
Next, using Paddle Quantum, we build an intuition for expressibility by observing how well single-qubit unitaries cover the Bloch sphere.
## An intuitive picture: covering the Bloch sphere
In the simple single-qubit case we can directly observe how a QNN spreads a fixed input over the surface of the Bloch sphere. For a given ansatz $U(\vec{\theta})$, the input is usually fixed (say $|0\rangle$); sampling the parameters $\vec{\theta}$ uniformly therefore scatters the output states $U(\vec{\theta})|0\rangle$ over the Bloch sphere. Clearly, the wider and more uniformly the outputs cover the sphere, the more expressive the ansatz $U$ is, and the more likely it is to contain the global optimum of the loss function.
To implement this in Paddle Quantum, we first import the necessary packages:
```
import numpy as np
from numpy.random import random
import paddle
from paddle_quantum.circuit import UAnsatz
from paddle_quantum.utils import plot_state_in_bloch_sphere
```
First, we allow only a single $R_Y$ rotation as the single-qubit QNN $U(\theta)=R_Y(\theta)$. Sampling the parameter $\theta$ uniformly in $[0,2\pi]$ and applying $U(\theta)$ to the fixed input $|0\rangle$ gives the output distribution of the QNN. With Paddle Quantum's built-in plot_state_in_bloch_sphere function we can view the distribution of $U(\theta)|0\rangle$ on the Bloch sphere directly:
```
num_qubit = 1      # number of qubits
num_sample = 2000  # number of samples
outputs_y = list() # store the sampled circuit outputs
for _ in range(num_sample):
    # initialize the quantum neural network
    cir = UAnsatz(num_qubit)
    # sample the parameter theta uniformly in [0, 2 pi]
    theta = paddle.to_tensor(2 * np.pi * random(size=1), dtype='float64')
    # apply the Ry rotation gate
    cir.ry(theta, 0)
    # density matrix of the output state
    rho = cir.run_density_matrix()
    outputs_y.append(rho.numpy())
# Paddle Quantum's built-in plot_state_in_bloch_sphere function
# plot_state_in_bloch_sphere(outputs_y, save_gif=True, filename='figures/bloch_y.gif')
```

We can see that the outputs of the QNN $U(\theta)=R_Y(\theta)$ only cover a single ring on the Bloch sphere (even though the distribution along that ring is uniform). Similarly, we look at the output distributions of the two-parameter network $U(\vec{\theta})=R_Y(\theta_1)R_Z(\theta_2)$ and the three-parameter network $U(\vec{\theta})=R_Y(\theta_1)R_Z(\theta_2)R_Y(\theta_3)$:
```
outputs_yz = list()  # store the sampled circuit outputs
for _ in range(num_sample):
    # initialize the quantum neural network
    cir = UAnsatz(num_qubit)
    # sample the parameters theta uniformly in [0, 2 pi]
    theta = paddle.to_tensor(2 * np.pi * random(size=2), dtype='float64')
    # apply the Ry rotation gate
    cir.ry(theta[0], 0)
    # apply the Rz rotation gate
    cir.rz(theta[1], 0)
    # density matrix of the output state
    rho = cir.run_density_matrix()
    outputs_yz.append(rho.numpy())
# plot_state_in_bloch_sphere(outputs_yz, save_gif=True, filename='figures/bloch_yz.gif')
outputs_yzy = list()  # store the sampled circuit outputs
for _ in range(num_sample):
    # initialize the quantum neural network
    cir = UAnsatz(num_qubit)
    # sample the parameters theta uniformly in [0, 2 pi]
    theta = paddle.to_tensor(2 * np.pi * random(size=3), dtype='float64')
    # apply the Ry rotation gate
    cir.ry(theta[0], 0)
    # apply the Rz rotation gate
    cir.rz(theta[1], 0)
    # apply the Ry rotation gate
    cir.ry(theta[2], 0)
    # density matrix of the output state
    rho = cir.run_density_matrix()
    outputs_yzy.append(rho.numpy())
# plot_state_in_bloch_sphere(outputs_yzy, save_gif=True, filename='figures/bloch_yzy.gif')
```


We can see that the outputs of $U(\vec{\theta})=R_Y(\theta_1)R_Z(\theta_2)$ now reach the whole surface of the Bloch sphere, although the distribution is denser near the two poles ($|0\rangle$ and $|1\rangle$), while the outputs of $U(\vec{\theta})=R_Y(\theta_1)R_Z(\theta_2)R_Y(\theta_3)$ are spread fairly uniformly over the sphere.
In the low-dimensional single-qubit case the Bloch sphere gives a qualitative picture of expressibility. In general multi-qubit applications, however, we need statistical tools for a quantitative analysis. Below we introduce the K-L divergence between fidelity distributions as a quantitative measure of expressibility and compute it for a commonly used ansatz.
## Quantitative analysis: K-L divergence
### Fidelity distributions and the K-L divergence
In [2], the authors propose a quantitative measure of expressibility based on the probability distribution of fidelities between QNN output states. For any QNN $U(\vec{\theta})$, sample its parameters twice (call the samples $\vec{\phi}$ and $\vec{\psi}$); the fidelity between the two output states, $F=|\langle0|U(\vec{\phi})^\dagger U(\vec{\psi})|0\rangle|^2$, then follows some probability distribution:
$$
F\sim{P}(f).\tag{4}
$$
Reference [2] shows that when the QNN $U$ is distributed uniformly over all unitaries (we then say $U$ follows the Haar distribution), the fidelity distribution $P_\text{Haar}(f)$ satisfies
$$
P_\text{Haar}(f)=(2^{n}-1)(1-f)^{2^n-2}.\tag{5}
$$
Paddle Quantum provides a function that samples unitaries directly from the Haar distribution. Let us look at the fidelity distribution of the output states of Haar-random unitaries:
```
from paddle_quantum.utils import haar_unitary, state_fidelity
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter
from scipy import integrate

# helper that plots a histogram
def plot_hist(data, num_bin, title_str):
    def to_percent(y, position):
        return str(np.around(y*100, decimals=2)) + '%'
    plt.hist(data, weights=[1./len(data)]*len(data), bins=np.linspace(0, 1, num=num_bin), facecolor="blue", edgecolor="black", alpha=0.7)
    plt.xlabel("Fidelity")
    plt.ylabel("frequency")
    plt.title(title_str)
    formatter = FuncFormatter(to_percent)
    plt.gca().yaxis.set_major_formatter(formatter)
    plt.show()

# compute the fidelity distribution of the outputs of Haar-sampled unitaries
def p_F_haar(n, s, b=50, draw=False):
    f_list = list()
    # start sampling
    for i in range(s):
        # sample the first unitary
        u1 = haar_unitary(n)
        # output of the unitary on input |0>
        phi1 = u1[:,0]
        rho1 = np.outer(phi1, phi1.conj())
        # sample the second unitary
        u2 = haar_unitary(n)
        phi2 = u2[:,0]
        # output of the unitary on input |0>
        rho2 = np.outer(phi2, phi2.conj())
        # fidelity between the two sampled outputs
        f_list.append(state_fidelity(rho1, rho2)**2)
    f_list = np.array(f_list)
    # plot the probability distribution
    if draw:
        title_str = "haar, %d qubit(s)" % num_qubit
        plot_hist(f_list, b, title_str)
    sample_distribution, _ = np.histogram(f_list, bins=np.linspace(0, 1, num=b), density=True)
    # theoretical value of the distribution from the formula, used later for the K-L divergence
    theory_distribution = np.zeros_like(sample_distribution)
    for index in range(len(theory_distribution)):
        def p_continues(f):
            return (2 ** n - 1) * (1 - f) ** (2 ** n - 2)
        lower = 1/b*index
        upper = lower + 1/b
        theory_distribution[index], _ = integrate.quad(p_continues,lower,upper)
    return sample_distribution, theory_distribution

num_qubit = 1
p_haar_1qubit, theory_haar_1qubit = p_F_haar(num_qubit, num_sample, draw=True)
num_qubit = 2
p_haar_2qubit, theory_haar_2qubit = p_F_haar(num_qubit, num_sample, draw=True)
```
We can see that the fidelities roughly follow $P_\text{Haar}$. Similarly, we can compute the output fidelity distributions of the single-qubit QNNs $R_Y(\theta)$, $R_Y(\theta_1)R_Z(\theta_2)$ and $R_Y(\theta_1)R_Z(\theta_2)R_Y(\theta_3)$ defined earlier:
```
# compute the fidelity distribution of the QNN outputs
def p_F_qnn(n, s, g, b=50, draw=False):
    f_list = list()
    rho_sample = outputs_y
    title_str = "Ry"
    if g == 2:
        rho_sample = outputs_yz
        title_str = "Ry-Rz"
    elif g == 3:
        rho_sample = outputs_yzy
        title_str = "Ry-Rz-Ry"
    # compute the fidelity distribution from the previously sampled states
    for index in range(int(s / 2)):
        rho1 = rho_sample[index]
        rho2 = rho_sample[index+int(num_sample / 2)]
        f_list.append(state_fidelity(rho1, rho2)**2)
    f_list = np.array(f_list)
    # plot the probability distribution
    if draw:
        plot_hist(f_list, b, title_str)
    distribution, _ = np.histogram(f_list, bins=np.linspace(0, 1, num=b), density=True)
    return distribution

num_qubit = 1
p_y = p_F_qnn(num_qubit, num_sample, 1, draw=True)
p_yz = p_F_qnn(num_qubit, num_sample, 2, draw=True)
p_yzy = p_F_qnn(num_qubit, num_sample, 3, draw=True)
```
We can see that the output fidelity distribution of the $R_Y$-$R_Z$-$R_Y$ network is the closest to that of uniformly random unitaries. The K-L divergence (also called relative entropy) from statistics measures the difference between two probability distributions. For two discrete distributions $P,Q$ the K-L divergence is defined as
$$
D_{KL}(P||Q)=\sum_jP(j)\ln\frac{P(j)}{Q(j)}.\tag{6}
$$
Writing the fidelity distribution of a QNN's outputs as $P_\text{QNN}(f)$, the expressibility of the QNN is defined as the K-L divergence between $P_\text{QNN}(f)$ and $P_\text{Haar}(f)$ [2]:
$$
\text{Expr}_\text{QNN}=D_{KL}(P_\text{QNN}(f)||P_\text{Haar}(f)).\tag{7}
$$
Hence the closer $P_\text{QNN}(f)$ is to $P_\text{Haar}(f)$, the smaller $\text{Expr}$ (approaching 0) and the stronger the expressibility of the QNN; conversely, the larger $\text{Expr}$, the weaker the expressibility.
We can now compute the expressibility of the single-qubit QNNs $R_Y(\theta)$, $R_Y(\theta_1)R_Z(\theta_2)$ and $R_Y(\theta_1)R_Z(\theta_2)R_Y(\theta_3)$ directly from this definition:
```
from scipy.stats import entropy

# use scipy's entropy function to compute the relative entropy (i.e. the K-L divergence)
expr_y = entropy(p_y, theory_haar_1qubit)
expr_yz = entropy(p_yz, theory_haar_1qubit)
expr_yzy = entropy(p_yzy, theory_haar_1qubit)
print("The expressibility of the Ry, Ry-Rz, and Ry-Rz-Ry networks is %.2f, %.2f, and %.2f respectively." % (expr_y, expr_yz, expr_yzy))
```
### Evaluating the expressibility of a QNN ansatz
We now have a tool, the K-L divergence, for quantitatively studying the expressibility of any QNN ansatz. As a practical application, let us examine how the expressibility of Paddle Quantum's built-in complex_entangled_layer ansatz changes with circuit depth. Here we fix the circuit width to 4 qubits.
```
# compute the fidelity distribution for the complex_entangled_layer ansatz
def p_F_cel(n, d, s, b=50, draw=False):
    f_list = list()
    for index in range(int(s / 2)):
        if 2 * index % 400 == 0:
            print(" sampling the %d-th sample..." % (2 * index))
        cir1 = UAnsatz(n)
        # sample the parameters theta uniformly in [0, 2 pi]
        theta1 = paddle.to_tensor(2 * np.pi * random(size=(d, n, 3)), dtype='float64')
        # apply the complex_entangled_layer layers
        cir1.complex_entangled_layer(theta1, d, range(n))
        # state vector of the output state
        rho1 = cir1.run_state_vector()
        cir2 = UAnsatz(n)
        # sample the parameters theta uniformly in [0, 2 pi]
        theta2 = paddle.to_tensor(2 * np.pi * random(size=(d, n, 3)), dtype='float64')
        # apply the complex_entangled_layer layers
        cir2.complex_entangled_layer(theta2, d, range(n))
        # state vector of the output state
        rho2 = cir2.run_state_vector()
        # compute the fidelity
        f_list.append(abs(np.inner(rho1.numpy(), rho2.numpy().conj()))**2)
    print(" sampling finished")
    f_list = np.array(f_list)
    # plot the probability distribution
    if draw:
        title_str = "complex entangled layer, %d layer(s)" % d
        plot_hist(f_list, b, title_str)
    distribution, _ = np.histogram(f_list, bins=np.linspace(0, 1, num=b), density=True)
    return distribution

# set the circuit width and the maximum depth
num_qubit = 4
max_depth = 3
# fidelity distribution for Haar sampling
print("Fidelity distribution of the outputs of Haar-sampled unitaries:")
p_haar_4qubit, theory_haar_4qubit = p_F_haar(num_qubit, num_sample, draw=True)
Expr_cel = list()
# compute the expressibility of the QNN at different depths
for DEPTH in range(1, max_depth + 1):
    print("Sampling circuits of depth %d..." % DEPTH)
    p_cel = p_F_cel(num_qubit, DEPTH, num_sample, draw=True)
    expr = entropy(p_cel, theory_haar_4qubit)
    Expr_cel.append(expr)
# compare the expressibility at different depths
print("The expressibility of the networks at depths 1, 2, and 3 is", np.around(Expr_cel, decimals=4))
plt.plot(range(1, max_depth + 1), Expr_cel, marker='>')
plt.xlabel("depth")
plt.yscale('log')
plt.ylabel("Expr.")
plt.xticks(range(1, max_depth + 1))
plt.title("Expressibility vs Circuit Depth")
plt.show()
```
We can see that the expressibility of the QNN increases with the circuit depth. Interested readers are encouraged to compute the expressibility of other built-in Paddle Quantum ansatzes and to compare the expressibility of different ansatzes.
_______
## References
[1] Nielsen, Michael A., and Isaac L. Chuang. "Quantum Computation and Quantum Information." Cambridge University Press, 2010.
[2] Sim, Sukin, Peter D. Johnson, and Alán Aspuru‐Guzik. "Expressibility and entangling capability of parameterized quantum circuits for hybrid quantum‐classical algorithms." [Advanced Quantum Technologies 2.12 (2019): 1900070](https://onlinelibrary.wiley.com/doi/abs/10.1002/qute.201900070).
There are 4 questions, with points weighting given in the question. Write Python code to solve each question.
Points will be deducted for
- Functions or classes without `docstrings`
- Grossly inefficient or redundant code
- Excessively verbose code
- Use of *magic* numbers
Partial credit may be given for incomplete or wrong answers but not if you do not attempt the question.
**IMPORTANT**
- This is an **open book** exam meant to evaluate fluency with linear algebra and optimization in Python
- Use a stopwatch to record the time you took to complete the exam in the cell below **honestly**:
- Under 2 hours - No penalty
- Between 2-3 hours - 5 points penalty
- More than 3 hours or **no time reported** - 10 points penalty
- Upload the notebook to Sakai when done
**Honor Code**: You agree to follow the Duke Honor code when taking this exam.
**Self-reported time taken**: It is your responsibility to time your exam.
<font color=red>Fill in total time in hours and minutes in the cell below</font>
1h 46min
**1**. (20 points)
In school, to help remember when the spelling should be "ei" or "ie", students are often taught the rule "i before e except after c". For example, "piece" and "conceive" fit this rule.
- Find all occurrences of words in the book `alice.txt` that violate this rule (10 points)
- Make a table of how often each such word occurs in decreasing order of the count (10 points)
```
import string
import re
import numpy as np
import pandas as pd
with open("alice.txt") as f:
texts = f.read()
words = texts.strip().lower().translate(str.maketrans('-', ' ', string.punctuation)).split()
violate = []
for word in words:
flag = False
if "cie" in word:
flag = True
else:
index = [m.start() for m in re.finditer('ei', word)]
for idx in index:
if idx == 0:
flag = True
break
elif word[idx - 1] != 'c':
flag = True
break
violate.append(flag)
occur = sum(violate)
print(occur)
word_violate = list(np.array(words)[np.array(violate)])
vocab = set(word_violate)
cnt = np.zeros(len(vocab), dtype = 'int')
for i, word in enumerate(vocab):
cnt[i] = word_violate.count(word)
df = pd.DataFrame(cnt, columns=['occurence'], index=vocab)
df.sort_values(by = 'occurence', ascending = False)
```
**2**. (20 points)
A grayscale figure of a Mandelbrot set is loaded for you.
- Compress the figure by reconstructing a rank k version, where is k is the number of singular values > 1e-9 (5 points)
- Calculate the Frobenius norm of the difference between the original and reconstructed image (5 points)
- Calculate the number of bytes needed to store the original image and the data needed to reconstruct the rank k image (5 points)
- What is the dimension of the null space of the reconstructed rank k image? (5 points)
```
from skimage import color, io
import matplotlib.pyplot as plt
%matplotlib inline
img = color.rgb2gray(color.rgba2rgb(io.imread('mandelbrot-250x250.png')))
plt.imshow(img, cmap='gray')
pass
import scipy.linalg as la
U, s, Vt = la.svd(img, full_matrices = False)
k = np.sum(s > 1e-9)
img_new = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
plt.imshow(img_new, cmap='gray')
pass
la.norm(img - img_new)
print(img.size * img.itemsize)
print(U[:, :k].size * U[:, :k].itemsize + s.size * s.itemsize + Vt[:k, :].size * Vt[:k, :].itemsize)
len(img_new) - np.linalg.matrix_rank(img_new)
```
**3**. (20 points)
Let the columns of $A$ represent the basis vectors for a plane in $\mathbb{R}^3$
$$
A = \pmatrix{1 & 2\\2 & 3\\3 & 4}
$$
- Construct a matrix $P$ that projects a vector $v \in \mathbb{R}^3$ onto this plane (5 points)
- Find the vector on the plane that is closest to the vector $\pmatrix{3\\4\\6}$ (5 points)
- Let $v = \pmatrix{3\\4\\6}$. Find the coordinates of $\text{proj}_A v$ with respect to the basis vectors of the plane (5 points)
- Find the distance between $\text{proj}_A v$ and $v$ using projection (5 points)
```
A = np.array([[1, 2], [2, 3], [3, 4]])
P = A @ la.inv(A.T @ A) @ A.T
P
v = np.array([3, 4, 6])
proj_v = P @ v
proj_v
Q = np.eye(len(A)) - P
Q @ v
```
<font color=red>-6 coordinates? distance?</font>
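A sketch of the two parts flagged above: the coordinates of $\text{proj}_A v$ in the column basis of $A$ come from the normal equations $A^TA\,c = A^Tv$, and the distance is the norm of the orthogonal component $(I-P)v$ already computed as `Q @ v`:
```
# Sketch only: coordinates of proj_A(v) in the basis {columns of A}, and the distance
import numpy as np
import scipy.linalg as la

A = np.array([[1, 2], [2, 3], [3, 4]])
v = np.array([3, 4, 6])

coords = la.solve(A.T @ A, A.T @ v)   # solves (A^T A) c = A^T v, so proj_A(v) = A @ coords
print("coordinates:", coords)

P = A @ la.inv(A.T @ A) @ A.T
residual = v - P @ v                  # component of v orthogonal to the plane
print("distance:", la.norm(residual))
```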
**4** (30 points)
Given the function $f(x) = x^3 - 5x^2 + x + 1$,
- Perform a single quadratic interpolation starting with the points (0, 2, 5) and return the next bracket (10 points)
- Plot the function and the quadratic interpolation showing the interpolated points for $x \in (-1, 6)$ (5 points)
- Find a local minimum using Newton's method starting at the point x=4 with a tolerance of $10^{-4}$ for $\delta x$. Return the value of $x$ and $f(x)$ at that point (10 points)
- Find all roots of the function using the companion matrix method (5 points)
For the optimization problems, stop when a tolerance of $10^{-4}$ is reached for $x$. Do not use any library functions from `scipy.optimize` or `scipy.interpolate` or `np.roots` (you can use them for checking but not for solving)
```
from scipy.interpolate import interp1d
def f(x):
'''Definition of function f(x)'''
return x**3 - 5*x**2 + x + 1
def f_qua_intp(x, x0, y0):
'''Calculate the quadratic interpolation function'''
s = 0.0
for i in range(len(x0)):
xi = np.delete(x0, i)
s += y0[i] * np.prod(x - xi)/np.prod(x0[i] - xi)
return s
x0 = np.array([0,2,5])
y0 = f(x0)
f2 = lambda x: f_qua_intp(x, x0, y0)
```
<font color=red>-3 next bracket?</font>
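For the "next bracket" part flagged above, one common convention (a sketch, not the graded solution) is to take the vertex of the fitted parabola as the new trial point and keep the three points that still bracket the minimum. With $f(0)=1$, $f(2)=-9$, $f(5)=6$ the interpolating parabola is $2x^2-9x+1$ with vertex at $x=2.25$, and since $f(2.25)<f(2)$ the next bracket is $(2,\,2.25,\,5)$:
```
# Sketch only: one step of successive parabolic interpolation on a 3-point bracket
import numpy as np

def f(x):
    '''Objective function f(x).'''
    return x**3 - 5*x**2 + x + 1

def next_bracket(xs, func):
    '''Return the next 3-point bracket after one quadratic interpolation step.'''
    a, b, c = xs
    fa, fb, fc = func(a), func(b), func(c)
    # vertex of the parabola through (a, fa), (b, fb), (c, fc)
    num = (b - a)**2 * (fb - fc) - (b - c)**2 * (fb - fa)
    den = (b - a) * (fb - fc) - (b - c) * (fb - fa)
    x_new = b - 0.5 * num / den
    # keep x_new plus the two old points that still bracket the minimum
    if x_new > b:
        return (b, x_new, c) if func(x_new) < fb else (a, b, x_new)
    return (a, x_new, b) if func(x_new) < fb else (x_new, b, c)

print(next_bracket((0, 2, 5), f))   # -> (2, 2.25, 5)
```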
```
xs = np.linspace(-1, 6, num=10000, endpoint=True)
plt.plot(xs, [f2(x) for x in xs])
plt.plot(xs, f(xs),'red')
pass
def df(x):
'''The derivative function of f(x)'''
return 3*x**2 - 10*x + 1
def d2f(x):
'''The second order derivative function of f(x)'''
return 6*x - 10
x0 = 4
xval = x0
limit = 1e-4
notconv = True
while(notconv):
funval = df(xval)
nextval = xval - funval / d2f(xval)
if abs(funval) < limit:
notconv = False
else:
xval = nextval
print(xval)
funval
```
<font color=red>-2 not min</font>
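The note above concerns what gets reported: the loop keys its stopping rule to $|f'(x)|$ and finally displays the derivative value rather than $f(x)$. A sketch (reusing `f`, `df`, and `d2f` from the cells above) that stops on $|\delta x| < 10^{-4}$, as the question asks, and reports $x$ and $f(x)$:
```
# Sketch only: Newton's method on f'(x) with the tolerance applied to the change in x
def newton_min(x0, tol=1e-4):
    '''Newton iteration for a stationary point of f, stopping when |delta x| < tol.'''
    x = x0
    while True:
        step = df(x) / d2f(x)
        x -= step
        if abs(step) < tol:
            return x

x_min = newton_min(4)
print("x    =", x_min)        # local minimum near x ~ 3.23 (d2f > 0 there)
print("f(x) =", f(x_min))     # value of the objective itself, not of its derivative
```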
```
poly = np.array([1,-5,1,1])
A = np.r_[(-poly[1:] / poly[0])[None, :], np.c_[np.eye(2), np.zeros(2)[:, None]]]
A
la.eigvals(A)
```
## the boring stuff
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import time
import xgboost as xgb
import lightgbm as lgb
import category_encoders as cat_ed
import gc, mlcrate, glob
from gplearn.genetic import SymbolicTransformer
from fastai.imports import *
from fastai.structured import *
from pandas_summary import DataFrameSummary
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier, RandomForestRegressor
from IPython.display import display
from catboost import CatBoostClassifier, CatBoostRegressor
from scipy.cluster import hierarchy as hc
from collections import Counter
from sklearn import metrics
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.metrics import roc_auc_score, log_loss
from sklearn.model_selection import KFold, StratifiedKFold
from sklearn.model_selection import GridSearchCV
from sklearn.decomposition import PCA, TruncatedSVD, FastICA, FactorAnalysis
from sklearn.random_projection import GaussianRandomProjection, SparseRandomProjection
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score, log_loss
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
# will ignore all warning from sklearn, seaborn etc..
def ignore_warn(*args, **kwargs):
pass
warnings.warn = ignore_warn
pd.option_context("display.max_rows", 1000);
pd.option_context("display.max_columns", 1000);
PATH = os.getcwd()
df_raw = pd.read_csv(f'{PATH}\\train_new_agg_feats.csv', low_memory=False)
df_test = pd.read_csv(f'{PATH}\\test_new_agg_feats.csv', low_memory=False)
def display_all(df):
with pd.option_context("display.max_rows", 100):
with pd.option_context("display.max_columns", 100):
display(df)
def make_submission(probs):
sample = pd.read_csv(f'{PATH}\\sample_submission.csv')
submit = sample.copy()
submit['Upvotes'] = probs
return submit
df_raw.shape,
df_raw.get_ftype_counts()
display_all(df_raw.isnull().sum().sort_index()/len(df_raw))
```
## random
```
df_raw.head()
df_raw = pd.get_dummies(df_raw, 'tag', columns=['Tag'])
df_test = pd.get_dummies(df_test, 'tag', columns=['Tag'])
```
## Bazooka ! (anokas)
```
man_train_list = df_raw.Username.unique()
man_test_list = df_test.Username.unique()
man_not_in_test = set(man_train_list) - set(man_test_list)
man_not_in_train = set(man_test_list) - set(man_train_list)
# keep only the training rows whose Username also appears in the test set
df_raw = df_raw[~df_raw['Username'].isin(man_not_in_test)]
model=CatBoostRegressor(iterations=500, learning_rate= 0.06, depth = 8, loss_function='RMSE')
model.fit(df_raw, target)
preds = model.predict(df_test) - 1;
preds[:10]
submit = make_submission(preds)
submit.to_csv(f'{PATH}\\Adi_catboost_with rf_feats_310818.csv', index=None)
```
## RF
```
def rmse(x,y): return math.sqrt(((x-y)**2).mean())
def print_score(m):
    res = ['RMSE X_train', rmse(m.predict(X_train), y_train), '\n RMSE X_valid', rmse(m.predict(X_valid), y_valid),
           '\n R**2 Train', m.score(X_train, y_train), '\n R**2 Valid', m.score(X_valid, y_valid)]
if hasattr(m, 'oob_score_'): res.append(['\n OOB_Score', m.oob_score_])
print(res)
target = df_raw.target
df_raw.drop('target', axis=1,inplace=True)
df_raw.drop('Username', axis=1,inplace=True)
df_test.drop('Username', axis=1,inplace=True)
from sklearn.model_selection import train_test_split
X_train, X_valid, y_train, y_valid = train_test_split(df_raw, target, test_size=0.2, random_state=42)
def split_vals(a,n): return a[:n].copy(), a[n:].copy()
n_valid = 30000
n_trn = len(df_raw)-n_valid
raw_train, raw_valid = split_vals(df_raw, n_trn)
X_train, X_valid = split_vals(df_raw, n_trn)
y_train, y_valid = split_vals(target, n_trn)
X_train.shape, y_train.shape, X_valid.shape
df_raw.drop(['Reputation', 'Answers', 'Views'], axis=1, inplace=True)
df_test.drop(['Reputation', 'Answers', 'Views'], axis=1, inplace=True)
m = RandomForestRegressor(n_estimators=40, n_jobs=-1, oob_score=True, max_depth= 8)
m.fit(X_train, y_train)
print_score(m)
df_raw.head()
df_raw.columns
for i in df_raw.columns:
sns.distplot(df_raw[i])
plt.show()
```
## Setups
```
import psycopg2
import pandas as pd
from nltk import word_tokenize
from sklearn.feature_extraction.text import CountVectorizer
from nltk.stem import WordNetLemmatizer
import string
import re
from wordcloud import WordCloud
from sklearn.feature_extraction.text import TfidfVectorizer
import scipy
import pickle
import tqdm
from sklearn.linear_model import LogisticRegression
from sklearn.grid_search import GridSearchCV
from sklearn.pipeline import Pipeline
wordnet_lemmatizer = WordNetLemmatizer()
dbname = 'bills_db'
username = 'Joel'
import os
import yaml
import sys
os.chdir('..')
from src.ingest.get_bills import get_us_bills
from src.ingest.get_bills import get_ny_bills
from src.ingest.get_bills import get_subjects
from src.wrangle.create_features import make_feat_union
from src.analyze.run_model import create_model
from src.analyze.run_model import run_model
from src.wrangle.create_features import make_x_values
from src.wrangle.create_features import make_y_values
from src.wrangle.create_features import lemmatize_tokens
from src.wrangle.create_features import tokenize
from src.wrangle.create_features import my_preproc_text
from src.wrangle.create_features import my_preproc_title
from src.analyze.run_model import get_y_probs
from src.report.store_db import store_us_db
from src.report.store_db import store_ny_db
from src.report.make_roc_curve import make_roc_curve
from src.utils.get_time_stamp import get_time_stamp
con = psycopg2.connect(database = dbname, user = username)
```
#### Rerun only if the underlying data has changed
#### query:
sql_query = """
SELECT * FROM us_bills;
"""
us_bills = pd.read_sql_query(sql_query,con)
us_X = make_x_values(us_bills)
us_tf_vect_raw = CountVectorizer(stop_words='english', tokenizer=tokenize, preprocessor=my_preproc_text)
us_tf_text_raw = us_tf_vect_raw.fit_transform(us_X)
us_tf_vect_clean = CountVectorizer(stop_words='english', tokenizer=tokenize, preprocessor=my_preproc_text,
min_df=10, max_df=0.4)
us_tf_text_clean = us_tf_vect_clean.fit_transform(us_X)
pickle.dump((us_bills, us_X), open('../presentations/data/us_data.p', 'wb'))
pickle.dump((us_tf_vect_raw, us_tf_text_raw, us_tf_vect_clean, us_tf_text_clean),
open('../presentations/data/us_tf.p', 'wb'))
#### Rerun only if the underlying data has changed
con = psycopg2.connect(database = dbname, user = username)
#### query:
sql_query = """
SELECT * FROM ny_bills;
"""
ny_bills = pd.read_sql_query(sql_query,con)
ny_X = make_x_values(ny_bills)
ny_tf_vect_raw = CountVectorizer(stop_words='english', tokenizer=tokenize, preprocessor=my_preproc_text)
ny_tf_text_raw = ny_tf_vect_raw.fit_transform(ny_X)
ny_tf_vect_clean = CountVectorizer(stop_words='english', tokenizer=tokenize, preprocessor=my_preproc_text,
min_df=10, max_df=0.4)
ny_tf_text_clean = ny_tf_vect_clean.fit_transform(ny_X)
pickle.dump((ny_bills, ny_X), open('../presentations/data/ny_data.p', 'wb'))
pickle.dump((ny_tf_vect_raw, ny_tf_text_raw, ny_tf_vect_clean, ny_tf_text_clean),
open('../presentations/data/ny_tf.p', 'wb'))
```
us_bills, us_x = pickle.load(open('../presentations/data/us_data.p', 'rb'))
us_tf_vect_raw, us_tf_text_raw, us_tf_vect_clean, us_tf_text_clean = pickle.load(
open('../presentations/data/us_tf.p', 'rb'))
ny_bills, ny_x = pickle.load(open('../presentations/data/ny_data.p', 'rb'))
ny_tf_vect_raw, ny_tf_text_raw, ny_tf_vect_clean, ny_tf_text_clean = pickle.load(
open('../presentations/data/ny_tf.p', 'rb'))
```
## Slide 4
```
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
```
#### here we set some aesthetic parameters so that all of our figures are nice and big
plt.rcParams['figure.figsize'] = (3, 8)
plt.rcParams['font.size'] = 20
sns.set(style="white", context="talk")
```
#plt.rcParams.keys()
column_sums = us_tf_text_raw.sum(axis=0)
label_size = 11
figsize = (10, 3)
sum_df = pd.DataFrame(column_sums.transpose(), index=us_tf_vect_raw.get_feature_names(), columns=['word_counts'])
us_top_20 = sum_df.sort_values(by='word_counts', ascending=False)[0:20]
plt.figure(figsize=(3,4))
plt.hist(sum_df['word_counts'], 20, log=True)
plt.ylabel("Unique Words", size=15)
plt.xlabel("Word Count", size=15)
plt.ylim(0.1)
plt.xticks(size=15)
plt.yticks(size=15)
plt.title("U.S. Word Frequency", size=15)
plt.locator_params(axis='x', nbins=3)
us_top_20.sort_values(by='word_counts').plot(kind='barh', legend=None, figsize=figsize)
plt.ylabel("Unique Words", size=label_size)
plt.xlabel("Word Count", size=label_size)
plt.yticks(size=label_size)
plt.xticks(size=label_size)
plt.title("Word Counts for Top 20 Words in Bills for 114th U.S. Congress", size=label_size)
```
#### To build a word cloud
all_words = [word for word in tqdm.tqdm(vect.get_feature_names()) for i in range(0,sum_df.ix[word,0])]
one_text = " ".join(all_words)
wordcloud = WordCloud().generate(one_text)
plt.imshow(wordcloud)
plt.axis("off")
plt.show()
```
ny_column_sums = ny_tf_text_raw.sum(axis=0)
ny_sum_df = pd.DataFrame(ny_column_sums.transpose(), index=ny_tf_vect_raw.get_feature_names(), columns=['word_counts'])
ny_top_20 = ny_sum_df.sort_values(by='word_counts', ascending=False)[0:20]
```
plt.hist(ny_sum_df['word_counts'], 50, log=True)
plt.ylabel("Number of Unique Words with Given Word Count")
plt.xlabel("Word Count of Unique Words")
plt.ylim(0.1)
plt.title("Histogram of Word Frequency in Bills for 2015 Session of New York Legislature")
```
ny_top_20.sort_values(by='word_counts').plot(kind='barh', legend=None, figsize=figsize)
plt.ylabel("Unique Words", size=label_size)
plt.xlabel("Word Count", size=label_size)
plt.yticks(size=label_size)
plt.xticks(size=label_size)
plt.title("Word Counts for Top 20 Words in Bills for 2015 Session of New York Legislature", size=label_size)
```
ny_all_words = [word for word in tqdm.tqdm(ny_vect.get_feature_names()) for i in range(0,ny_sum_df.ix[word,0])]
ny_one_text = " ".join(ny_all_words)
wordcloud = WordCloud().generate(ny_one_text)
plt.imshow(wordcloud)
plt.axis("off")
plt.show()
## Slide 5
### Cleaned by focusing only on words in at least 10 documents and fewer than 40% of documents
```
us_clean_column_sums = us_tf_text_clean.sum(axis=0)
us_clean_sum_df = pd.DataFrame(us_clean_column_sums.transpose(), index=us_tf_vect_clean.get_feature_names(), columns=['word_counts'])
us_clean_top_20 = us_clean_sum_df.sort_values(by='word_counts', ascending=False)[0:20]
plt.figure(figsize=(3,4))
plt.hist(us_clean_sum_df['word_counts'], 20, log=True)
plt.ylabel("Unique Words", size=15)
plt.xlabel("Word Count", size=15)
plt.ylim(0.1)
plt.xticks(size=15)
plt.yticks(size=15)
plt.title("U.S. Reduced Frequency", size=15)
plt.locator_params(axis='x', nbins=3)
us_clean_top_20.sort_values(by='word_counts').plot(kind='barh', legend=None, figsize=figsize)
plt.ylabel("Unique Words", size=label_size)
plt.xlabel("Word Count", size=label_size)
plt.yticks(size=label_size)
plt.xticks(size=label_size)
plt.title("Cleaned Word Counts for Top 20 Words in Bills for 114th U.S. Congress", size=label_size)
```
us_clean_all_words = [word for word in tqdm.tqdm(us_clean_vect.get_feature_names()) for i in range(0,us_clean_sum_df.ix[word,0])]
us_clean_one_text = " ".join(us_clean_all_words)
wordcloud = WordCloud().generate(us_clean_one_text)
plt.imshow(wordcloud)
plt.axis("off")
plt.show()
### NY Clean Data
```
ny_clean_column_sums = ny_tf_text_clean.sum(axis=0)
ny_clean_sum_df = pd.DataFrame(ny_clean_column_sums.transpose(), index=ny_tf_vect_clean.get_feature_names(), columns=['word_counts'])
ny_clean_top_20 = ny_clean_sum_df.sort_values(by='word_counts', ascending=False)[0:20]
```
plt.hist(ny_clean_sum_df['word_counts'], 50, log=True)
plt.ylabel("Number of Unique Words with Given Word Count")
plt.xlabel("Word Count of Unique Words")
plt.ylim(0.1)
plt.title("Histogram of Word Frequency in Bills for 114th U.S. Congress")
```
ny_clean_top_20.sort_values(by='word_counts').plot(kind='barh', legend=None, figsize=figsize)
plt.ylabel("Unique Words", size=label_size)
plt.xlabel("Word Count", size=label_size)
plt.yticks(size=label_size)
plt.xticks(size=label_size)
plt.title("Cleaned Word Counts for Top 20 Words in Bills for 2015 Session of New York Legislature", size=label_size)
```
ny_clean_all_words = [word for word in tqdm.tqdm(ny_clean_vect.get_feature_names()) for i in range(0,ny_clean_sum_df.ix[word,0])]
ny_clean_one_text = " ".join(ny_clean_all_words)
wordcloud = WordCloud().generate(ny_clean_one_text)
plt.imshow(wordcloud)
plt.axis("off")
plt.show()
## Slide 6
### Build for ROC Curves and Confusion Matrices
```
con = psycopg2.connect(database = dbname, user = username)
sql_str = """
SELECT bill_num, subject FROM bill_subject
WHERE subject='Health'
"""
sub_bills = pd.read_sql_query(sql_str, con)
y_health = make_y_values(us_bills, sub_bills, 'Health')
sql_str = """
SELECT bill_num, subject FROM bill_subject
WHERE subject='Intellectual property'
"""
sub_bills = pd.read_sql_query(sql_str, con)
y_ip = make_y_values(us_bills, sub_bills, 'Intellectual property')
lr = LogisticRegression(penalty='l2', C=10)
pipeline = Pipeline(steps=[("tf", us_tf_vect_clean), ('lr', lr)])
ymlfile = open("configs.yml", 'r')
cfg = yaml.load(ymlfile)
ymlfile.close()
import src.report.make_roc_curve
reload(src.report.make_roc_curve)
make_roc_curve(pipeline, us_x, y_ip, 0.9, 'Intellectual Property', cfg)
results_health = pickle.load(open('../presentations/figures/roc_health_tf_2016-09-24-13-52-01.p', 'rb'))
results_health[4]
results_ip = pickle.load(open('../presentations/figures/split_data_intellectual property_2016-09-24-14-49-24.p', 'rb'))
results_ip[4]
```
## Slide 7
### Produce density plots for TF-IDF
We would need to get count vectors for each of the words
us_tfidf_vect = TfidfVectorizer(stop_words='english', tokenizer=tokenize, preprocessor=my_preproc_text, min_df=10, max_df=0.4)
us_tfidf_text = us_tfidf_vect.fit_transform(us_x)
pickle.dump((us_tfidf_vect, us_tfidf_text), open('../presentations/data/us_tfidf.p', 'wb'))
ny_tfidf_vect = TfidfVectorizer(stop_words='english', tokenizer=tokenize, preprocessor=my_preproc_text, min_df=10, max_df=0.4)
ny_tfidf_text = ny_tfidf_vect.fit_transform(ny_x)
pickle.dump((ny_tfidf_vect, ny_tfidf_text), open('../presentations/data/ny_tfidf.p', 'wb'))
```
us_tfidf_vect, us_tfidf_text = pickle.load(open('../presentations/data/us_tfidf.p', 'rb'))
ny_tfidf_vect, ny_tfidf_text = pickle.load(open('../presentations/data/ny_tfidf.p', 'rb'))
tfidf_us_column_sums = us_tfidf_text.sum(axis=0)
tfidf_us_sum_df = pd.DataFrame(tfidf_us_column_sums.transpose(), index=us_tfidf_vect.get_feature_names(), columns=['word_counts'])
tfidf_us_top_20 = tfidf_us_sum_df.sort_values(by='word_counts', ascending=False)[0:20]
plt.figure(figsize=(3,4))
plt.hist(tfidf_us_sum_df['word_counts'], 20, log=True)
plt.ylabel("Word Count", size=15)
plt.xlabel("Densities", size=15)
plt.ylim(0.1)
plt.xticks(size=15)
plt.yticks(size=15)
plt.title("U.S. Word Densities", size=15)
plt.locator_params(axis='x', nbins=3)
tfidf_us_top_20.sort_values(by='word_counts').plot(kind='barh', legend=None, figsize=(6,5))
plt.ylabel("Unique Words", size=label_size+2)
plt.xlabel("Word Density", size=label_size+2)
plt.yticks(size=label_size+2)
plt.xticks(size=label_size+2)
plt.title("Top 20 Word Densities in Bills for 114th U.S. Congress", size=label_size+2)
tfidf_ny_column_sums = ny_tfidf_text.sum(axis=0)
tfidf_ny_sum_df = pd.DataFrame(tfidf_ny_column_sums.transpose(), index=ny_tfidf_vect.get_feature_names(), columns=['word_counts'])
tfidf_ny_top_20 = tfidf_ny_sum_df.sort_values(by='word_counts', ascending=False)[0:20]
```
```
plt.hist(tfidf_ny_sum_df['word_counts'], 50, log=True)
plt.ylabel("Count of Words with Given Density")
plt.xlabel("Densities of Unique Words")
plt.ylim(0.1)
plt.title("Histogram of Word Densities in Bills for 2015 Session of New York Legislature")
```
```
tfidf_ny_top_20.plot(kind='barh', legend=None, figsize=figsize)
plt.ylabel("Unique Words", size=label_size)
plt.xlabel("Word Density", size=label_size)
plt.title("Top Word Densities in Bills for 2015 New York Legislative Session", size=label_size)
```
```
import numpy as np
import matplotlib.pyplot as plt
N = 10
data = np.random.random((N, 4))
labels = ['point{0}'.format(i) for i in range(N)]
plt.subplots_adjust(bottom = 0.1)
plt.scatter(
data[:, 0], data[:, 1], marker = 'o', c = data[:, 2], s = data[:, 3]*1500,
cmap = plt.get_cmap('Spectral'))
for label, x, y in zip(labels, data[:, 0], data[:, 1]):
plt.annotate(
label,
xy = (x, y), xytext = (-20, 20),
textcoords = 'offset points', ha = 'right', va = 'bottom',
bbox = dict(boxstyle = 'round,pad=0.5', fc = 'yellow', alpha = 0.5),
arrowprops = dict(arrowstyle = '->', connectionstyle = 'arc3,rad=0'))
plt.show()
tfidf_ny_top_10 = tfidf_ny_sum_df.sort_values(by='word_counts', ascending=False)[0:10]
ny_tfs = ny_clean_sum_df[ny_clean_sum_df.index.isin(tfidf_ny_top_10.index)]
ny_idfs = tfidf_ny_top_10/ny_tfs
labels = ny_tfs.sort_index().index
#plt.subplots_adjust(bottom = 0.1)
y = ny_tfs.sort_index()['word_counts']
x = ny_idfs.sort_index()['word_counts']
plt.scatter(
x, y, marker = 'o', s = tfidf_ny_top_10.sort_index()['word_counts']*0.5,
c = tfidf_ny_top_10.sort_index()['word_counts'], cmap = plt.get_cmap('Spectral_r'))
for label, x, y in zip(labels, x, y):
plt.annotate(
label,
xy = (x, y), xytext = (-40, 40),
textcoords = 'offset points', ha = 'left', va = 'bottom',
bbox = dict(boxstyle = 'round,pad=0.5', fc = 'yellow', alpha = 0.5),
arrowprops = dict(arrowstyle = '->', connectionstyle = 'arc3,rad=0'))
plt.xlim(0.005,0.02)
plt.show()
```
## Slide 9
```
tfidf_health = pickle.load(open('/Users/Joel/Desktop/Insight/bill_taxonomy/models/model_Health_2016-09-23-13-22-32.p'))
make_roc_curve(tfidf_health.best_estimator_, us_x, y_health, 0.9, 'Health', cfg)
final_health = pickle.load(open('../presentations/figures/split_data_health_2016-09-24-16-25-03.p'))
final_health[4]
tfidf_ip = pickle.load(open('/Users/Joel/Desktop/Insight/bill_taxonomy/models/presentation_models/model_Intellectual property_2016-09-23-15-07-14.p'))
tfidf_ip.best_score_
make_roc_curve(tfidf_ip.best_estimator_, us_x, y_ip, 0.8, 'Intellectual Property', cfg)
final_ip = pickle.load(open('../presentations/figures/split_data_intellectual property_2016-09-24-17-00-41.p'))
final_ip[4]
final_tax = pickle.load(open('models/model_Taxation_2016-09-26-08-30-51.p'))
final_tax.best_params_
sql_str = """
SELECT bill_num, subject FROM bill_subject
WHERE subject='Taxation'
"""
sub_bills = pd.read_sql_query(sql_str, con)
y_tax = make_y_values(us_bills, sub_bills, 'Taxation')
make_roc_curve(final_tax.best_estimator_, us_x, y_tax, 0.8, 'Taxation', cfg)
subject = "Bank accounts, deposits, capital"
if (subject.split(' ')[0] == 'Bank'):
subject = subject.replace('capital', 'and capital')
subject = subject.replace(' ', '_')
subject = subject.replace(',', '')
print subject
```
## Slide 10
```
best_est_lr = tfidf_ip.best_estimator_.steps[1][1]
feats = tfidf_ip.best_estimator_.steps[0][1]
feat_names = feats.get_feature_names()
weights = [(feat_names[i], best_est_lr.coef_[0][i]) for i in tqdm.tqdm(range(0, len(best_est_lr.coef_[0])))]
sort_weights = sorted(weights, key=lambda (a,b): abs(b), reverse=True)[0:10]
# Don't think I need this anymore but afraid to get rid of it
# feat_vect = [s[0].split('_')[1] + ': ' + s[1] for s in top20_df['feature'].str.split('__')]
top10_df = pd.DataFrame(sort_weights, columns=['feature', 'coefficient'])
feat_vect = [s[0].split('_')[1] + ': ' + s[1] for s in top10_df['feature'].str.split('__')]
top10_df.ix[:, 'feature'] = feat_vect
top10_df.set_index('feature', inplace=True)
top10_df.sort_values(by='coefficient').plot(kind='barh', legend=None, figsize=(8,6))
plt.ylabel("Feature", size=25)
plt.xlabel("Coefficient", size=25)
plt.xticks(size=25)
plt.yticks(size=25)
plt.title("Coefficient Weights for Intellectual Property", size=25)
```
## Notes from the production of Slide 8
```
svc_model = pickle.load(open('/Users/Joel/Desktop/Insight/bill_taxonomy/models/tfidf_models2/model_health_svc.p'))
svc_model.best_score_
X_svc = make_x_values(us_bills)
len(X_svc)
sql_str = """
SELECT bill_num, subject FROM bill_subject
WHERE subject='Health'
"""
sub_bills = pd.read_sql_query(sql_str, con)
y = make_y_values(us_bills, sub_bills, 'Health' )
svc_model.best_estimator_.predict
svc_model = pickle.load(open('/Users/Joel/Desktop/Insight/bill_taxonomy/models/tfidf_models2/model_health_svc.p'))
nb_model = svc_model
get_time_stamp()
from nltk.stem import WordNetLemmatizer
wordnet_lemmatizer = WordNetLemmatizer()
wordnet_lemmatizer.lemmatize('striking')
```
[source](../api/alibi_detect.ad.model_distillation.rst)
# Model distillation
## Overview
[Model distillation](https://arxiv.org/abs/1503.02531) is a technique that is used to transfer knowledge from a large network to a smaller network. Typically, it consists of training a second model with a simplified architecture on soft targets (the output distributions or the logits) obtained from the original model.
Here, we apply model distillation to obtain harmfulness scores by comparing the output distributions of the original model with those of the distilled model, in order to detect adversarial data, malicious data drift or data corruption.
We use the following definitions of harmful and harmless data points (a short illustrative sketch follows this list):
* Harmful data points are defined as inputs for which the model's predictions on the uncorrupted data are correct while the model's predictions on the corrupted data are wrong.
* Harmless data points are defined as inputs for which the model's predictions on the uncorrupted data are correct and the model's predictions on the corrupted data remain correct.
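As a concrete toy illustration of this bookkeeping (the variable names below are made up and not part of alibi-detect's API):
```python
# Toy illustration only: compare predictions on clean vs. corrupted inputs.
import numpy as np

y_true         = np.array([0, 1, 1, 0])
pred_clean     = np.array([0, 1, 1, 1])  # model predictions on the uncorrupted data
pred_corrupted = np.array([0, 0, 1, 1])  # model predictions on the corrupted data

correct_clean     = pred_clean == y_true
correct_corrupted = pred_corrupted == y_true

harmful  = correct_clean & ~correct_corrupted  # correct before, wrong after corruption
harmless = correct_clean & correct_corrupted   # correct before and after corruption
print(harmful)   # [False  True False False]
print(harmless)  # [ True False  True False]
```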
Analogously to the [adversarial AE detector](https://arxiv.org/abs/2002.09364), which is also part of the library, the model distillation detector picks up drift that reduces the performance of the classification model.
The detector can be used as follows:
* Given an input $x$, an adversarial score $S(x)$ is computed. $S(x)$ equals the value of the loss function employed for distillation, calculated between the original model's output and the distilled model's output on $x$ (a small illustrative sketch follows this list).
* If $S(x)$ is above a threshold (explicitly defined or inferred from training data), the instance is flagged as adversarial.
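For intuition, here is a minimal sketch of such a score using the KL divergence between the two output distributions (in the spirit of the `'kld'` loss type listed under Usage); this is only an illustration, not alibi-detect's internal implementation:
```python
import numpy as np

def harmfulness_score(p_model, p_distilled, eps=1e-12):
    "Per-instance KL divergence between the original and distilled output distributions."
    p = np.clip(p_model, eps, 1.)
    q = np.clip(p_distilled, eps, 1.)
    return np.sum(p * np.log(p / q), axis=-1)

p_model     = np.array([[.9, .1], [.8, .2]])    # original model's softmax outputs
p_distilled = np.array([[.88, .12], [.3, .7]])  # distilled model's softmax outputs
print(harmfulness_score(p_model, p_distilled))  # the second instance scores much higher
```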
## Usage
### Initialize
Parameters:
* `threshold`: threshold value above which the instance is flagged as an adversarial instance.
* `distilled_model`: `tf.keras.Sequential` instance containing the model used for distillation. Example:
```python
distilled_model = tf.keras.Sequential(
[
tf.keras.InputLayer(input_shape=(input_dim,)),
tf.keras.layers.Dense(output_dim, activation=tf.nn.softmax)
]
)
```
* `model`: the classifier as a `tf.keras.Model`. Example:
```python
inputs = tf.keras.Input(shape=(input_dim,))
hidden = tf.keras.layers.Dense(hidden_dim)(inputs)
outputs = tf.keras.layers.Dense(output_dim, activation=tf.nn.softmax)(hidden)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
```
* `loss_type`: type of loss used for distillation. Supported losses: 'kld', 'xent'.
* `temperature`: temperature used for model prediction scaling. A temperature < 1 sharpens the prediction probability distribution, which can be beneficial for prediction distributions with high entropy (see the short illustration after this parameter list).
* `data_type`: can specify data type added to metadata. E.g. *'tabular'* or *'image'*.
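The following minimal sketch (plain NumPy; the function name is made up and not part of alibi-detect) shows how dividing logits by a temperature below 1 sharpens the resulting softmax distribution:
```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    z = logits / temperature
    z = z - z.max()            # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.5])
print(softmax_with_temperature(logits, 1.0))  # baseline distribution
print(softmax_with_temperature(logits, 0.5))  # temperature < 1: sharper, lower entropy
```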
Initialized detector example:
```python
from alibi_detect.ad import ModelDistillation
ad = ModelDistillation(
distilled_model=distilled_model,
model=model,
temperature=0.5
)
```
### Fit
We then need to train the detector. The following parameters can be specified:
* `X`: training batch as a numpy array.
* `loss_fn`: loss function used for training. Defaults to the custom model distillation loss.
* `optimizer`: optimizer used for training. Defaults to [Adam](https://arxiv.org/abs/1412.6980) with learning rate 1e-3.
* `epochs`: number of training epochs.
* `batch_size`: batch size used during training.
* `verbose`: boolean whether to print training progress.
* `log_metric`: additional metrics whose progress will be displayed if verbose equals True.
* `preprocess_fn`: optional data preprocessing function applied per batch during training.
```python
ad.fit(X_train, epochs=50)
```
The threshold for the adversarial / harmfulness score can be set via ```infer_threshold```. We need to pass a batch of instances $X$ and specify what percentage of those we consider to be normal via `threshold_perc`. Even if we only have normal instances in the batch, it might be best to set the threshold value a bit lower (e.g. $95$%) since the model could have misclassified training instances.
```python
ad.infer_threshold(X_train, threshold_perc=95, batch_size=64)
```
### Detect
We detect adversarial / harmful instances by simply calling `predict` on a batch of instances `X`. We can also return the instance level score by setting `return_instance_score` to True.
The prediction takes the form of a dictionary with `meta` and `data` keys. `meta` contains the detector's metadata while `data` is also a dictionary which contains the actual predictions stored in the following keys:
* `is_adversarial`: boolean whether instances are above the threshold and therefore adversarial instances. The array is of shape *(batch size,)*.
* `instance_score`: contains instance level scores if `return_instance_score` equals True.
```python
preds_detect = ad.predict(X, batch_size=64, return_instance_score=True)
```
## Examples
### Image
[Harmful drift detection through model distillation on CIFAR10](../examples/cd_distillation_cifar10.nblink)
# Data Processing
```
"""
The regions of interest are labeled and the training set (70% of the data)
and test set (30% of the data) are created.
@author: Juan Felipe Latorre Gil - jflatorreg@unal.edu.co
"""
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
def class_matching(df_features):
"""
Regions of interest are labeled.
Parameters:
----------
df_features: DataFrame
DataFrame with the information and characteristics of the regions of interest.
Returns:
-------
df_features_labeled: DataFrame
DataFrame with the information, labels and characteristics of the regions of interest.
"""
df = pd.concat([df_features,
lab_wname,
lab_bin],
axis=1)
df.reset_index(inplace=True, drop=True)
df.dropna(axis=0, inplace=True)
df = df.loc[~df.lab_wname.isin(['1_vfar','1_ago']),:]
df['lab_gt'] = df['lab_gt'].astype(int)
return df
def split(X, y):
"""
Split the dataset into 70% for training and 30% for testing.
Parameters:
----------
X: numpy.array
Array with the characteristics of the regions of interest.
y: numpy.array
Array with labels of regions of interest.
Returns:
-------
X_train: numpy.array
Array with the characteristics of the regions of interest for training.
X_test: numpy.array
Array with the characteristics of the regions of interest for test.
y_train: numpy.array
Array with labels of regions of interest for training.
y_test: numpy.array
Array with labels of regions of interest for test.
"""
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.3,
random_state=42,
stratify=y,
shuffle=True)
return X_train, X_test, y_train, y_test
path_mannot = './data/trainds_mannot.txt'
path_low = './results/df_features_low.csv'
path_med = './results/df_features_med.csv'
path_high = './results/df_features_high.csv'
path_times = './results/df_times.csv'
Path_save = './results'
```
# Load Data
```
df_features_low = pd.read_csv(path_low)
df_features_med = pd.read_csv(path_med)
df_features_high = pd.read_csv(path_high)
df_times = pd.read_csv(path_times)
df_features_low.head()
df_features_med.head()
df_features_high.head()
df_times
```
# Get Labels
```
gt = pd.read_csv(path_mannot, header=None, usecols=[0,1,2], sep='\t',
names=['onset','offset','label'])
idx_annotated = (gt.label.str[1]=='_')
lab_wname = gt['label']
lab_wname.loc[~idx_annotated] = np.nan
lab_bin = lab_wname.str[0]
lab_bin.name = 'lab_gt'
lab_wname.name = 'lab_wname'
```
# Label Regions of Interest
```
df_features_low = class_matching(df_features_low)
df_features_med = class_matching(df_features_med)
df_features_high = class_matching(df_features_high)
X_low = df_features_low.loc[:,df_features_low.columns.str.startswith('shp')].values
y_low = df_features_low.loc[:,'lab_gt'].values
X_low.shape
X_med = df_features_med.loc[:,df_features_med.columns.str.startswith('shp')].values
y_med = df_features_med.loc[:,'lab_gt'].values
X_med.shape
X_high = df_features_high.loc[:,df_features_high.columns.str.startswith('shp')].values
y_high = df_features_high.loc[:,'lab_gt'].values
X_high.shape
```
# Split Data
```
X_train_low, X_test_low, y_train_low, y_test_low = split(X_low, y_low)
X_train_med, X_test_med, y_train_med, y_test_med = split(X_med, y_med)
X_train_high, X_test_high, y_train_high, y_test_high = split(X_high, y_high)
```
## Save Data
```
np.save('./results/X_train_low.npy', X_train_low)
np.save('./results/X_test_low.npy', X_test_low)
np.save('./results/y_train_low.npy', y_train_low)
np.save('./results/y_test_low.npy', y_test_low)
np.save('./results/X_train_med.npy', X_train_med)
np.save('./results/X_test_med.npy', X_test_med)
np.save('./results/y_train_med.npy', y_train_med)
np.save('./results/y_test_med.npy', y_test_med)
np.save('./results/X_train_high.npy', X_train_high)
np.save('./results/X_test_high.npy', X_test_high)
np.save('./results/y_train_high.npy', y_train_high)
np.save('./results/y_test_high.npy', y_test_high)
```
# Representing data in memory
A typical program outline calls for us to load data from disk and place it into memory, organized into data structures. The way we represent data in memory is critical to building programs. This is particularly true with data science programs because processing data is our focus.
First, let's get something straight about data. Data elements have *values* and *types*, such as `32` and *integer* or `"hi"` and *string*. We build the data structures by combining and organizing these data elements, such as a list of integers.
We also have a special element called a *pointer* or *reference* that refers to another element. It's like a phone number "points at" a phone but is not the phone itself. Using the pointer we can get to the phone. A list of pointers is like a phone book with references to humans but the phone book is not actually a list of humans. (We will see later that even when we do something simple like `x=3`, the variable `x` is secretly a pointer to an integer object with the value of 3.)
Next, let's take a small detour into computer architecture to get a handle on what it means to load something into memory.
## Computer architecture detour
A computer consists of three primary components: a disk to hold data, a memory (that is wiped upon power off), and a processor (CPU) to process that data. Here is a picture of an actual CPU and some memory chips:
<img src="images/cpu-memory.png" width="400">
Computer memory (RAM == random access memory) is much faster but usually much smaller than the disk and all memory is lost when the computer powers off. Think of memory as your working or scratch space and the disk as your permanent storage. Memory chips are kind of like human short-term memory that is prone to disappearing versus a piece of paper which is slower to read and write but *persistent*.
The memory is broken up into discrete cells of a fixed size. The size of a cell is one *byte*, which consists of 8 *bits*, binary on/off digits. It is sufficient to hold a number between 0 and 255. Each cell is identified by an integer address, just like the numbers on mailboxes (see image below and to the right). Processors can ask for the data at a particular address and can store a piece of data at a specific memory location as well. For example, here is an abstract representation of byte-addressable computer memory:
<table border="0">
<tr>
<td><img src="images/addresses.png" width="80">
<td><img src="images/mailboxes.png" width="70">
</table>
In this case, the memory has value 100 at address 0. At address 1, the memory has value 0. Address 4 has the maximum value we can store in a single byte: 255.
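As a small sketch of this idea in Python (just an illustration), a `bytes` object behaves like a tiny byte-addressable memory whose cells hold values 0 through 255; the values below mirror the picture:
```
cells = bytes([100, 0, 0, 0, 255])
print(cells[0])   # value stored at "address" 0 -> 100
print(cells[4])   # value stored at "address" 4 -> 255
```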
**Everything from actual numbers to music to videos is stored using one or more of these atomic storage units called bytes.**
**Everything is stored as a number or sequence of numbers in a computer, even text.**
Data lives either in memory, on the disk, or can be retrieved from a network. As part of producing a programming plan, you need to know where the data resides so you can incorporate loading that data into memory as part of the plan.
### Computer architecture metrics
Here are the key units we use in computer architecture:
* Kilo. $10^3 = 1,000$ or often $2^{10} = 1024$
* Mega. $10^6 = 1,000,000$
* Giga. $10^9 = 1,000,000,000$
* Tera. $10^{12} = 1,000,000,000,000$
You need to know these units because you need to know whether a data set fits in memory or whether it fits on the disk or even how long it will take to transfer across the network.
For example, when I started out, my first microcomputer had 16k of RAM, but my desktop now has 32G of RAM. What is the ratio of memory size increase?
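Here's a quick back-of-the-envelope check (a sketch using powers of two):
```
old = 16 * 2**10   # 16K bytes
new = 32 * 2**30   # 32G bytes
print(new // old)  # 2097152, roughly a 2 million-fold increase
```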
CPUs execute instructions to the heartbeat of a clock, which is where we get the term clock rate. MHz (million hertz == cycles/second) and GHz (billion) are the typical units of clock ticks per second. My desktop has a 4GHz clock rate, which means that it can execute approximately 4 giga- or billion instructions per second. That's a lot.
If your network is, say, 100Mbits/second, then you can transfer an 800Mbit (100M byte) file in 8 seconds.
How big is the San Francisco phonebook (uncompressed)? How fast can you transfer that phonebook across an 8Mbit/second network?
## Programming language view of memory
Programming languages present us with a higher level view of the memory in two ways: we can use names to refer to locations in memory and each memory cell can hold integer and real number values of arbitrary size (they do have a limit, but let's keep things simple for now). For example, here are two named values stored in memory:
<img src="images/named-memory.png" width="90">
```
units = 923
price = 8.02
```
<img src="images/redbang.png" width="30" align="left">When referring to the kind of thing a value represents, we use the word **type**. The type of the "units" cell is integer and the type of "price" is real number (or floating-point number).
```
type(units)
type(price)
```
Another very common value type is *string*, which is really a list of characters. We use strings to hold place names, book titles, and any other text-based values. We can think of strings as being a single value because the programming language hides the details. Strings can be arbitrarily long and the programming language stores the characters as a sequence of bytes in memory. Each character takes one or two bytes. In other words, we think of it as
<img src="images/strings.png" width="110">
```
name = "Mary"
type(name)
```
but it is really more like this:
<img src="images/strings2.png" width="110">
Using package [lolviz](https://github.com/parrt/lolviz) we can visualize even simple types like strings:
```
from lolviz import *
strviz(name)
objviz(name) # render as list of char
```
These basic data types
* integer numbers
* floating-point numbers
* strings
are our building blocks. If we arrange some of these blocks together, we can create more complex structures.
## Data structures
### List
The most common *data structure* is the **list**, which is just a sequence of memory cells. Because we're all familiar with spreadsheets, let's visualize these data structures using a spreadsheet. Columns in a spreadsheet are really lists, such as the following lists/columns of integers, floating-point numbers, and strings:
<table border="0">
<tr>
<td><img src="images/int-list.png" width="60">
<td><img src="images/float-list.png" width="80">
<td><img src="images/names-list.png" width="139">
</tr>
</table>
```
Quantity = [6, 49, 27, 30, 19, 21, 12, 22, 21]
type(Quantity)
len(Quantity)
objviz(Quantity)
```
We can think of the rows of a spreadsheet as lists also. For example, the header row of a spreadsheet is really a list of strings:
<img src="images/header-row.png" width="750">
```
headers = ['Date', 'Quantity', 'Unit Price', 'Shipping']
type(headers)
objviz(headers)
```
All of these lists have one thing in common: the type of element is the same. They are *homogeneous*. But, we can also have lists with *heterogeneous* elements, which is typically what we see in spreadsheet rows:
<img src="images/sample-row.png" width="800">
```
arow = ['10/13/10', 6, 38.94, 35, 'Muhammed MacIntyre']
```
or
```
from datetime import date
arow = [date(2010, 10, 13), 6, 38.94, 35, 'Muhammed MacIntyre']
arow
type(arow)
listviz(arow)
```
Heterogeneous lists are typically used to group bits of information about a particular entity. In machine learning, we call this a **feature vector**, an **instance**, or an **observation**. For example, an apples versus oranges classifier might have feature vectors containing weight (number), volume (number), and color (string). The important point here is that a list can also be used as a way to aggregate features about a particular entity. The sequence of the elements is less important than the fact that they are contained (aggregated) within the same list.
### Tuple
A tuple is an immutable list and is often used for returning multiple values from a function. It's also a simple way to group a number of related elements, such as:
```
me = ('parrt',607)
me
```
We index the elements just like we do with a list:
```
print(me[0])
print(me[1])
```
But, there's no way to change the elements, as there is with a list. If we do:
```python
me[0] = 'tombu'
```
the result is an error:
```
TypeError: 'tuple' object does not support item assignment
```
Here's an example of pulling apart a tuple using the multiple assignment statement:
```
userid,office = me
print(userid)
print(office)
```
Tuples are a great way to group related items without having to create a formal Python class definition.
### Set
If we enforce a rule that all elements within a list are unique, then we get a **set**. Sets are unordered.
```
ids = {100, 103, 121, 102, 113, 113, 113, 113}
ids
type(ids)
objviz(ids)
```
We can do lots of fun set arithmetic:
```
{100,102}.union({109})
{100,102}.intersection({100,119})
```
### Tables (list of lists)
Spreadsheets arrange rows one after the other, which programmers interpret as a *list of lists.* In the analytics or database world, we call this a **table**:
<img src="images/rows.png" width="700">
In this example, each row represents a sales transaction.
The input to machine learning algorithms is often a table where each row aggregates the data associated with a specific instance or observation. These tables are called **dataframes** and will become your BFF.
```
from pandas import DataFrame
df = DataFrame(data=[[99,'parrt'],[101,'sri'],[42,'kayla']],
columns=['ID','user'])
df
df.values
df.columns
df.user
objviz(df.values)
```
### Matrix
If the table elements are all numbers, we call it a **matrix**. Here's a matrix with 5 rows and 2 columns:
<img src="images/matrix.png" width="110">
Let me introduce you to another of your new BFF, `numpy`:
```
import numpy as np
A = np.array([[19,11],
[21,15],
[103,18],
[99,13],
[8,2]])
print(A)
```
That is a matrix with shape 5 rows, 2 columns:
```
A.shape
```
There are many ways to represent or lay out things in memory. In this case, we can view the matrix as a list of lists using lolviz:
```
lolviz(A.tolist())
```
Or as a matrix
```
objviz(A)
```
We can do lots of matrix math with numpy:
```
objviz(A+A)
objviz(A*99)
objviz(A.T) #transpose
```
Here's a system of equations: $A x = b$, $x = A^{-1} b$:
\begin{equation*}
\begin{bmatrix}
38 & 22\\
42 & 30
\end{bmatrix}
\begin{bmatrix}
x_1 \\
x_2
\end{bmatrix} =
\begin{bmatrix}
3 \\
5
\end{bmatrix}
\end{equation*}
Using numpy, we can solve that using the inverse of $A$.
```
from numpy.linalg import inv
A = np.array([[38, 22], [42, 30]])
b = np.array([3, 5])
x = inv(A).dot(b)
objviz(x)
```
Here's some more stuff about the shape of various numpy $n$-dimensional arrays:
```
x = np.array([3, 5]) # vertical vector with 2 rows
y = np.array([[3, 5]]) # matrix with 1 row and 2 columns
z = np.array([[3],[5]]) # matrix with 2 rows, 1 column
print(x.shape)
print(y.shape)
print(z.shape)
```
The tuple `(2,)` means a one-dimensional vector with 2 rows. We can't use notation `(2)` because that's just an expression that means 2 rather than a tuple. It's a quirk but necessary.
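A quick check makes the quirk concrete:
```
print(type((2)))   # <class 'int'>: parentheses alone are just grouping
print(type((2,)))  # <class 'tuple'>: the trailing comma makes it a tuple
```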
### Dictionary
If we arrange two lists side-by-side and kind of glue them together, we get a **dictionary**. Dictionaries map one value to another, just like a dictionary in the real world maps a word to a definition. Here is a sample dictionary that maps a movie title to the year it was nominated for an Oscar award:
<img src="images/dict.png" width="220">
```
movies = {'Amadeus':1984, 'Witness':1985}
print(movies)
objviz(movies)
print(movies.keys())
print(movies.values())
movies['Amadeus']
```
```
movies['foo']
```
gets a KeyError:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-40-72c06b90f573> in <module>()
----> 1 movies['foo'] # gets a KeyError
KeyError: 'foo'
```
## Traversing data structures
The spreadsheet model is a good one for understanding data structures but it's important to keep in mind that computers process one element (number or string) at a time.
As humans, we can look at the spreadsheet or data structure from above in its entirety, but programs must **walk** or **traverse** the elements of a data structure one after the other. It's kind of like sliding a magnifying glass over the elements of a list:
<img src="images/int-list-item.png" width="230">
This notion of traversal abstracts to any **sequence** (or **stream**) of elements, not just lists. For example, we will eventually traverse the lines of a text file or a sequence of filenames obtained from the operating system. Sequences are extremely powerful because they allow us to process data that is much bigger than the memory of our computer. We can process the data piecemeal, whereas a list requires all elements to be in memory at once.
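As a rough illustration of the stream idea (a sketch; the generator below is just an example and isn't used elsewhere):
```
def squares(n):
    for i in range(n):
        yield i*i   # hand out one element at a time; nothing is stored in a list

for s in squares(5):
    print(s)
```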
Typically we iterate through the elements of a list with a `for`-each statement:
```
for q in Quantity:
print(q)
```
Here, the type of the objects pointed to by `q` is `int`. We can also iterate through that list using an indexed loop:
```
for i in range(len(Quantity)):
print(Quantity[i])
```
For lists and other structures that fit completely in memory, we often find a **reverse traversal** useful, which examines elements from last to first:
```
for q in reversed(Quantity):
print(q)
```
Walking a dictionary is also easy but we have to decide whether we want to walk the keys or the values:
```
movies = {'Amadeus':1984, 'Witness':1985}
for m in movies: # walk keys
print(m)
for m in movies.values(): # walk values
print(m)
```
## Summary
Here are the commonly-used data types:
* integer numbers like -2, 0, 99
* real numbers (floating-point numbers) like -2.3, 99.1932
* strings like "Mary", "President Obama"
And here are the commonly-used data structures:
* ordered list
* set (just an unordered, unique list)
* list of lists such as tables or matrices with rows and columns
* tuples are immutable lists
* dictionary such as mapping a student name to their student ID; we can think of this as a table where each row in the table associates the key with a value.
Remember that all variable names are actually indirect references to a memory location. Everything is a pointer to the data in the implementation. That means we can have two variable names that refer to the same memory location and hence the variables are aliased. Changing one variable's elements appears to change the other variable's elements.
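Here's a tiny sketch of that aliasing behavior:
```
a = [1, 2, 3]
b = a            # b is an alias for a, not a copy
b[0] = 99
print(a)         # [99, 2, 3] -- changing b's elements changed a's too
print(a is b)    # True: both names point at the same list object
```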
Now that we know what data looks like in memory, let's consider a [computation model](computation.ipynb).
```
#default_exp inference.export
```
# inference.export
> This module contains the main functionality for extracting the transform parameters from DataLoaders
```
#hide
from nbdev.showdoc import *
#export
from fastai.vision.all import *
```
## Vision
As an example, we will look at the pets dataset. We will define a series of transforms in our Pipelines and then attempt to extract them.
```
path = untar_data(URLs.PETS)
fnames = get_image_files(path/'images')
pat = r'(.+)_\d+.jpg$'
batch_tfms = [*aug_transforms(), Normalize.from_stats(*imagenet_stats)]
item_tfms = RandomResizedCrop(460, min_scale=0.75, ratio=(1.,1.))
bs=64
dls = ImageDataLoaders.from_name_re(path, fnames, pat, batch_tfms=batch_tfms,
item_tfms=item_tfms, bs=bs)
# Cell
def to_list(b):
"Recursively make any `L()` or CategoryMap to list"
def _inner(o):
if isinstance(o,L) or isinstance(o, CategoryMap):
return list(o)
elif isinstance(o, Tensor):
return np.array(to_detach(o))
else: return o
for k in b.keys():
b[k] = apply(_inner,b[k])
return b
#export
def _gen_dict(tfm):
"Grabs the `attrdict` and transform name from `tfm`"
tfm_dict = attrdict(tfm, *tfm.store_attrs.split(','))
if 'partial' in tfm.name:
tfm_name = tfm.name[1].split(' --')[0]
else:
tfm_name = tfm.name.split(' --')[0]
return tfm_dict, tfm_name
#export
def _make_tfm_dict(tfms, type_tfm=False):
"Extracts transform params from `tfms`"
tfm_dicts = {}
for tfm in tfms:
if hasattr(tfm, 'store_attrs') and not isinstance(tfm, AffineCoordTfm):
            if type_tfm or tfm.split_idx != 0:
tfm_dict,name = _gen_dict(tfm)
tfm_dict = to_list(tfm_dict)
tfm_dicts[name] = tfm_dict
return tfm_dicts
dls.after_batch[2].fs[1].__dict__
_make_tfm_dict(dls.after_item)
dls.after_batch[2].fs[1].__dict__
my_d = dls.after_batch[2].fs[0].__dict__.copy()
my_d.pop('change')
from fastai.vision.augment import _BrightnessLogit
RandTransform??
def extract_logits(tfm):
    "Scratch helper: grab the transform's class name and attribute dict"
    name = tfm.__class__.__name__
    t_d = tfm.__dict__
    return name, t_d
dls.after_batch[2].fs[0].__class__.__name__
ab_dict = {}
for tfm in dls.after_batch:
if isinstance(tfm, AffineCoordTfm) or isinstance(tfm, LightingTfm):
if hasattr(tfm, 'aff_fs'):
for t in tfm.aff_fs:
ab_dict[t.func.__name__] = t.keywords
elif hasattr(tfm, 'coord_fs'):
for t in tfm.coord_fs:
t_d,n = _gen_dict(t)
ab_dict[n] = t_d
        elif hasattr(tfm, 'fs'):
            for t in tfm.fs:
                t_d,n = _gen_dict(t)
                ab_dict[n] = t_d
#hide
test_eq(len(_make_tfm_dict(dls.tfms, True)), 1)
ab_dict = _make_tfm_dict(dls.after_batch)
in_('Normalize', ab_dict.keys());
not in_('Flip', ab_dict.keys());
it_dict = _make_tfm_dict(dls.after_item)
in_('RandomResizedCrop', it_dict.keys())
not in_('ToTensor', it_dict.keys());
#export
@typedispatch
def _extract_tfm_dicts(dl:TfmdDL):
"Extracts all transform params from `dl`"
type_tfm,use_images = True,False
attrs = ['tfms','after_item','after_batch']
tfm_dicts = {}
for attr in attrs:
tfm_dicts[attr] = _make_tfm_dict(getattr(dl, attr), type_tfm)
if attr == 'tfms':
if getattr(dl,attr)[0][1].name == 'PILBase.create':
use_images=True
if attr == 'after_item': tfm_dicts[attr]['ToTensor'] = {'is_image':use_images}
type_tfm = False
return tfm_dicts
#export
def get_information(dls): return _extract_tfm_dicts(dls[0])
```
### get_information
This function will take any set of `DataLoaders` and extract the transforms that are important during inference, along with their parameters
```
tfm_info = get_information(dls)
#hide
test_eq(len(tfm_info),3)
test_eq(tfm_info.keys(), ['tfms','after_item','after_batch'])
```
For vision it will contain `tfms`, `after_item`, and `after_batch`
First, our `type` transforms:
```
tfm_info['tfms']
```
Then the `item` transforms:
```
tfm_info['after_item']
```
And finally our batch transforms:
```
tfm_info['after_batch']
```
## Tabular
Next we'll look at a tabular example. We will use the `ADULT_SAMPLE` dataset here:
```
#export
from fastai.tabular.all import *
path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path/'adult.csv')
splits = RandomSplitter()(range_of(df))
cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race']
cont_names = ['age', 'fnlwgt', 'education-num']
procs = [Categorify, FillMissing, Normalize]
y_names = 'salary'
to = TabularPandas(df, procs=procs, cat_names=cat_names, cont_names=cont_names,
y_names=y_names, splits=splits)
dls = to.dataloaders()
dls.normalize
#export
@typedispatch
def _extract_tfm_dicts(dl:TabDataLoader):
"Extracts all transform params from `dl`"
types = 'normalize,fill_missing,categorify'
if hasattr(dl, 'categorize'): types += ',categorize'
if hasattr(dl, 'regression_setup'): types += ',regression_setup'
tfms = {}
name2idx = {name:n for n,name in enumerate(dl.dataset) if name in dl.cat_names or name in dl.cont_names}
idx2name = {v:k for k,v in name2idx.items()}
cat_idxs = {name2idx[name]:name for name in dl.cat_names}
cont_idxs = {name2idx[name]:name for name in dl.cont_names}
names = {'cats':cat_idxs, 'conts':cont_idxs}
tfms['encoder'] = names
for t in types.split(','):
tfm = getattr(dl, t)
tfms[t] = to_list(attrdict(tfm, *tfm.store_attrs.split(',')))
categorize = dl.procs.categorify.classes.copy()
for i,c in enumerate(categorize):
categorize[c] = {a:b for a,b in enumerate(categorize[c])}
categorize[c] = {v: k for k, v in categorize[c].items()}
categorize[c].pop('#na#')
categorize[c][np.nan] = 0
tfms['categorify']['classes'] = categorize
new_dict = {}
for k,v in tfms.items():
if k == 'fill_missing':
k = 'FillMissing'
new_dict.update({k:v})
else:
new_dict.update({k.capitalize():v})
return new_dict
```
The usage is the exact same:
```
tfm_dicts = get_information(dls)
#hide
test_eq(len(tfm_dicts),5)
```
However, our keys are different. By default it will have `Normalize`, `FillMissing`, and `Categorify`, and then, depending on what is available, it will store either `Categorize` or `Regression_setup` to tell us about our outputs.
Here is an example from `Normalize`:
```
tfm_dicts['Normalize']
```
`FillMissing`:
```
tfm_dicts['FillMissing']
```
And `Categorify`:
```
tfm_dicts['Categorify']['classes'].keys()
```
And finally `Categorize` (since we have a classification problem):
```
tfm_dicts['Categorize']
```
## Exporting
To export, a new `to_fastinference` function has been added
```
#export
@patch
def to_fastinference(x:Learner, data_fname='data', model_fname='model', path=Path('.')):
"Export data for `fastinference_onnx` or `_pytorch` to use"
if not isinstance(path,Path): path = Path(path)
dicts = get_information(x.dls)
with open(path/f'{data_fname}.pkl', 'wb') as handle:
pickle.dump(dicts, handle, protocol=pickle.HIGHEST_PROTOCOL)
torch.save(x.model, path/f'{model_fname}.pkl')
doc(Learner.to_fastinference)
```
Params:
* `data_fname`: Filename to save our extracted `DataLoader` information, default is `data`
* `model_fname`: Filename to save our current model, default is `model`
* `path`: Path to save our model and data to, default is `.`
Exported files will have the extension `.pkl`
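Once exported (see the example below), you could peek at the saved artifacts like this; this is only a rough sketch assuming the default filenames above, and the `fastinference_pytorch` and `fastinference_onnx` packages provide their own loaders:
```
import pickle, torch
from pathlib import Path

path = Path('.')
with open(path/'data.pkl', 'rb') as f:
    data_dict = pickle.load(f)          # the transform-parameter dictionaries
model = torch.load(path/'model.pkl')    # the saved PyTorch model
print(data_dict.keys())
```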
```
learn = tabular_learner(dls, [200,100], metrics=[accuracy])
learn.to_fastinference(path='../../')
```
Simply call `learn.to_fastinference` and it will export everything needed for `fastinference_pytorch` or `fastinference_onnx`
```
learn.to_fastinference(data_fname = 'data', model_fname = 'model', path = Path('.'))
#hide
"""
# TODO: Text
Things to save:
* `data.vocab`
* `data.o2i`
* Tokenizer
* All the rules in `text.core`:
[<function fastai.text.core.fix_html>,
<function fastai.text.core.replace_rep>,
<function fastai.text.core.replace_wrep>,
<function fastai.text.core.spec_add_spaces>,
<function fastai.text.core.rm_useless_spaces>,
<function fastai.text.core.replace_all_caps>,
<function fastai.text.core.replace_maj>,
<function fastai.text.core.lowercase>]
- Ensure that `L` is in the library
"""
```

### Egeria Hands-On Lab
# Welcome to the Open Discovery Lab
**NOTE - This lab is under construction and is only partly completed**
## Introduction
Egeria is an open source project that provides open standards and implementation libraries to connect tools,
catalogs and platforms together so they can share information about data and technology (called metadata).
In this hands-on lab you will get a chance to run an Egeria metadata server, configure discovery services in a discovery engine and run the discovery engine in an Engine Host OMAG server.
## What is open discovery?
[Metadata discovery](https://egeria-project.org/features/discovery-and-stewardship/overview/) is the
ability to automatically analyze and create metadata about assets. Egeria provides an [Open Discovery Framework (ODF)](https://egeria-project.org/frameworks/odf/overview/) that defines open interfaces for components that implement specific types of metadata discovery. These components can then be called from tools offered by different vendors through the open APIs.
We call this ability to invoke metadata discovery components from many different vendor tools, **open discovery**.
The Open Discovery Framework (ODF) provides standard interfaces for **discovery services**. This is the ODF
name for the metadata discovery components. The ODF interfaces control how a discovery service is started and stopped, how it can access the existing metadata about an asset, and store any additional information about the asset that it discovers.
Discovery services are specialist **governance services**. They are grouped together into a collection of related capabilities called a **governance engine**. The same discovery service may be used in multiple governance engines.
Egeria provides a governance server called the **engine host server** that can host one or more governance engines.
The engine host server has APIs to call the discovery services in order to drive the analysis of a specific asset, and then to view the results. The discovery services can also scan through all assets, running specific analyses on any they find.
Governance engines tend to be paired and deployed close to the data platforms they are analyzing because the discovery services
tend to make many calls to access the content of the asset. It is not uncommon for an organization to deploy multiple governance engines if their data is distributed.
A discovery service connects to a metadata server to retrieve and store metadata about the asset.
It uses the Discovery Engine OMAS APIs and events of the metadata server.
A single metadata server can support many governance engines.
The Governance Engine OMAS supports the
maintenance of the discovery services' and governance engines' definitions.

> **Figure 1:** governance engine deployments
A particular discovery engine may be assigned to run in multiple servers. This is useful if the type of
data it is able to analyze is distributed across different locations.
The exercises that follow take you through the process of defining discovery engines and services, verifying that
they are available in the engine host server and then running discovery requests against various assets.
## The scenario
Peter Profile is Coco Pharmaceuticals' Information Analyst. He is experienced in managing and analyzing data.
In this lab, Peter is setting up automated metadata discovery services for use when new data sets are
sent to Coco Pharmaceuticals' data lake. These data sets come from both internal systems and external partners
such as hospitals that are participating in clinical trials.

Peter's colleague, **Gary Geeke**, the IT Infrastructure leader at Coco Pharmaceuticals,
has already configured an engine host server called `governDL01` for Peter to use
(see the **[Server Configuration](../egeria-server-config.ipynb)** lab).

> **Figure 2:** Coco Pharmaceuticals' OMAG Server Platforms
The `governDL01` server is running on the Data Lake OMAG Server Platform, along with `cocoMDS1`,
which is the metadata server that `governDL01` will use to retrieve and store metadata.
The first step is to ensure all of the platforms and servers are running.
```
# Start up the metadata servers
%run ../common/environment-check.ipynb
print("Start up the Engine Host Server")
activatePlatform(dataLakePlatformName, dataLakePlatformURL, [governDL01Name])
print("Done. ")
```
----
You should see that both the metadata server `cocoMDS1` and the engine host server `governDL01` are started.
If any of the platforms are not running, follow [this link to set up and run the platform](https://egeria-project.org/education/open-metadata-labs/overview/). If any server is reporting that it is not configured then
run the steps in the [Server Configuration](../egeria-server-config.ipynb) lab to configure
the servers. Then re-run the previous step to ensure all of the servers are started.
----
The `governDL01` server has been configured to run the Asset Analysis Open Metadata Engine Service (OMES). Asset Analysis OMES is able to host Open Discovery Framework (ODF) discovery engines. It has been configured to host two discovery engines. The command below lists the discovery engines and their status.
```
printGovernanceEngineStatuses(governDL01Name, governDL01PlatformName, governDL01PlatformURL, petersUserId)
```
The status code `ASSIGNED` means that the governance engine was listed in the Engine Host's configuration
document - i.e. the governance engine was assigned to this server - but the Engine Host has not yet been
able to retrieve the configuration for the governance engine from the metadata server (`cocoMDS1`).
When the basic governance engine properties have been retrieved from the metadata server, the status code
becomes `CONFIGURING` and more descriptive information is returned with the status.
When governance services are registered with the governance engine, the status moves to `RUNNING` and it is possible to see the list of supported request types for the governance engine.
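While working through the next steps, it can be handy to re-check the engine status as the configuration propagates. The loop below is only a convenience sketch that re-runs the status helper loaded by this notebook a few times; it is not part of the official lab flow:
```
import time

# Re-print the governance engine statuses a few times while configuration propagates
for attempt in range(3):
    printGovernanceEngineStatuses(governDL01Name, governDL01PlatformName, governDL01PlatformURL, petersUserId)
    time.sleep(10)
```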
The next step in the lab is to add configuration for the discovery engine to `cocoMDS1` until the
`AssetDiscovery` discovery engine is running.
## Exercise 1 - Configuring the Governance Engine with Open Discovery Services
Figure 3 shows the structure of the configuration that needs to be stored in the metadata server for
a governance engine.
The discovery engine has a set of descriptive properties. These are linked to a list of discovery request types.
The discovery request types are memorable names for the types of analysis that the users of the discovery
engines will want to run. Each request type also includes a default set of analysis parameters that can be overridden when
a specific discovery request is made.
Each discovery request type is further linked either to a discovery service or a **discovery pipeline**.
(A discovery pipeline is a discovery service that coordinates the execution of other discovery services.)
When a discovery request is made it specifies a discovery request type. The discovery engine runs the
discovery service or discovery pipeline linked to the requested discovery type.

> **Figure 3:** Structure of discovery engine configuration
The discovery engine is configured using calls to the Discovery Engine OMAS running in the metadata server `cocoMDS1`. The first configuration call is to store the discovery engine properties.
```
assetDiscoveryEngineName = "AssetDiscovery"
assetDiscoveryEngineDisplayName = "Asset Discovery Engine"
assetDiscoveryEngineDescription = "Extracts metadata about an asset on request."
assetDiscoveryEngineGUID = createGovernanceEngine(cocoMDS1Name,
cocoMDS1PlatformName,
cocoMDS1PlatformURL,
petersUserId,
"OpenDiscoveryEngine",
assetDiscoveryEngineName,
assetDiscoveryEngineDisplayName,
assetDiscoveryEngineDescription)
print (" ")
print ("The guid for the " + assetDiscoveryEngineName + " discovery engine is: " + assetDiscoveryEngineGUID)
print (" ")
```
----
The properties for the discovery engine are now on `cocoMDS1`. This configuration will eventually propagate to
the server `governDL01` through the Discovery Engine OMAS events. However, to propagate the
configuration immediately, there is a `refresh configuration` REST API call that can be made to the Asset Analysis
OMES to request that it calls the metadata server to retrieve its configuration.
```
refreshGovernanceEngineConfig(governDL01Name, governDL01PlatformName, governDL01PlatformURL, petersUserId, assetDiscoveryEngineName)
```
----
When the status of the discovery engines is requested, the AssetDiscovery discovery engine is now showing `CONFIGURING`. This means the discovery engine is defined, but it does not have any discovery request types
defined and hence cannot run any discovery services. It is effectively "empty".
```
printGovernanceEngineStatuses(governDL01Name, governDL01PlatformName, governDL01PlatformURL, petersUserId)
```
----
To complete the configuration of the discovery engine it needs at least one discovery service registered.
The next set of calls creates the definition for a discovery service and then registers it with the discovery
engine. The registration request is the point where the discovery
request types are linked to the discovery service as shown in **figure 3** above.
The definition of the discovery service is independent of the registration with the discovery engine because
discovery services can be reused in multiple discovery pipelines and engines.
```
discoveryServiceName = "csv-asset-discovery-service"
discoveryServiceDisplayName = "CSV Asset Discovery Service"
discoveryServiceDescription = "Discovers columns for CSV Files."
discoveryServiceProviderClassName = "org.odpi.openmetadata.adapters.connectors.discoveryservices.CSVDiscoveryServiceProvider"
discoveryServiceRequestType = "small-csv"
discoveryServiceGUID = createGovernanceService(cocoMDS1Name,
cocoMDS1PlatformName,
cocoMDS1PlatformURL,
petersUserId,
"OpenDiscoveryService",
discoveryServiceName,
discoveryServiceDisplayName,
discoveryServiceDescription,
discoveryServiceProviderClassName,
{})
if discoveryServiceGUID:
registerGovernanceServiceWithEngine(cocoMDS1Name,
cocoMDS1PlatformName,
cocoMDS1PlatformURL,
petersUserId,
assetDiscoveryEngineGUID,
discoveryServiceGUID,
discoveryServiceRequestType)
print (" ")
print ("Service registered as: " + discoveryServiceGUID)
print (" ")
print ("Done. ")
refreshGovernanceEngineConfig(governDL01Name, governDL01PlatformName, governDL01PlatformURL, petersUserId, assetDiscoveryEngineName)
print ("Done. ")
```
----
Now the discovery engine has sufficient configuration to offer a useful service to its callers.
```
printGovernanceEngineStatuses(governDL01Name, governDL01PlatformName, governDL01PlatformURL, petersUserId)
```
----
Asset Analysis OMES is ready to run automated discovery requests on the **AssetDiscovery** discovery engine. The **AssetQuality** discovery engine will be configured in a later release of Egeria when the quality management function is enabled.
----
## Exercise 2 - Analysing Assets
The next exercise is to run a metadata discovery service. It is work in progress and will be added soon.
The commands below do not currently work because the discovery service is incomplete.
```
# reportGUID = runDiscoveryService(governDL01Name, governDL01PlatformName, governDL01PlatformURL, petersUserId, "AssetDiscovery", "small-csv", asset1guid)
```
This is how to query the result of a discovery request.
```
# Return the report header
#getDiscoveryReport(governDL01Name, governDL01PlatformName, governDL01PlatformURL, petersUserId, "AssetDeduplicator", reportGUID)
# Return the annotations
#getDiscoveryReportAnnotations(governDL01Name, governDL01PlatformName, governDL01PlatformURL, petersUserId, "AssetDeduplicator", reportGUID)
```
----
## Exercise 3 - Exploring Asset Contents
The next exercise is to run metadata discovery on a new asset to discover its schema (structure) and the
characteristics of its content.
__Details coming soon ...__
----
## Exercise 4 - Assessing the quality of assets
The final exercise is to use metadata discovery to report on errors in the data from an asset and provide an assessment of its quality.
__Details coming soon ...__
|
github_jupyter
|
# Start up the metadata servers
%run ../common/environment-check.ipynb
print("Start up the Engine Host Server")
activatePlatform(dataLakePlatformName, dataLakePlatformURL, [governDL01Name])
print("Done. ")
printGovernanceEngineStatuses(governDL01Name, governDL01PlatformName, governDL01PlatformURL, petersUserId)
assetDiscoveryEngineName = "AssetDiscovery"
assetDiscoveryEngineDisplayName = "Asset Discovery Engine"
assetDiscoveryEngineDescription = "Extracts metadata about an asset on request."
assetDiscoveryEngineGUID = createGovernanceEngine(cocoMDS1Name,
cocoMDS1PlatformName,
cocoMDS1PlatformURL,
petersUserId,
"OpenDiscoveryEngine",
assetDiscoveryEngineName,
assetDiscoveryEngineDisplayName,
assetDiscoveryEngineDescription)
print (" ")
print ("The guid for the " + assetDiscoveryEngineName + " discovery engine is: " + assetDiscoveryEngineGUID)
print (" ")
refreshGovernanceEngineConfig(governDL01Name, governDL01PlatformName, governDL01PlatformURL, petersUserId, assetDiscoveryEngineName)
printGovernanceEngineStatuses(governDL01Name, governDL01PlatformName, governDL01PlatformURL, petersUserId)
discoveryServiceName = "csv-asset-discovery-service"
discoveryServiceDisplayName = "CSV Asset Discovery Service"
discoveryServiceDescription = "Discovers columns for CSV Files."
discoveryServiceProviderClassName = "org.odpi.openmetadata.adapters.connectors.discoveryservices.CSVDiscoveryServiceProvider"
discoveryServiceRequestType = "small-csv"
discoveryServiceGUID = createGovernanceService(cocoMDS1Name,
cocoMDS1PlatformName,
cocoMDS1PlatformURL,
petersUserId,
"OpenDiscoveryService",
discoveryServiceName,
discoveryServiceDisplayName,
discoveryServiceDescription,
discoveryServiceProviderClassName,
{})
if discoveryServiceGUID:
registerGovernanceServiceWithEngine(cocoMDS1Name,
cocoMDS1PlatformName,
cocoMDS1PlatformURL,
petersUserId,
assetDiscoveryEngineGUID,
discoveryServiceGUID,
discoveryServiceRequestType)
print (" ")
print ("Service registered as: " + discoveryServiceGUID)
print (" ")
print ("Done. ")
refreshGovernanceEngineConfig(governDL01Name, governDL01PlatformName, governDL01PlatformURL, petersUserId, assetDiscoveryEngineName)
print ("Done. ")
printGovernanceEngineStatuses(governDL01Name, governDL01PlatformName, governDL01PlatformURL, petersUserId)
# reportGUID = runDiscoveryService(governDL01Name, governDL01PlatformName, governDL01PlatformURL, petersUserId, "AssetDiscovery", "small-csv", asset1guid)
# Return the report header
#getDiscoveryReport(governDL01Name, governDL01PlatformName, governDL01PlatformURL, petersUserId, "AssetDeduplicator", reportGUID)
# Return the annotations
#getDiscoveryReportAnnotations(governDL01Name, governDL01PlatformName, governDL01PlatformURL, petersUserId, "AssetDeduplicator", reportGUID)
| 0.375248 | 0.955527 |
# DQN
P.S. This is not my code.
I mainly copied it from higgsfield's RL-Adventure.
This is a proof of concept for embeddings generation. Although it is possible to generate new embeddings using transfer learning, I would advise you to use this only for testing purposes.
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
import numpy as np
import pandas as pd
from tqdm.auto import tqdm
from IPython.display import clear_output
import gc  # needed for gc.collect() after the ratings dataframe is deleted
import matplotlib.pyplot as plt
%matplotlib inline
# == recnn ==
import sys
sys.path.append("../../")
import recnn
device = torch.device('cuda')
# ---
frame_size = 10
batch_size = 10
embed_dim = 128
# ---
tqdm.pandas()
# https://drive.google.com/open?id=1kTyu05ZmtP2MA33J5hWdX8OyUYEDW4iI
# download ml20m dataset yourself
ratings = pd.read_csv('../../data/ml-20m/ratings.csv')
keys = list(sorted(ratings['movieId'].unique()))
key_to_id = dict(zip(keys, range(len(keys))))
user_dict, users = recnn.data.prepare_dataset(ratings, key_to_id, frame_size)
del ratings
gc.collect()
clear_output(True)
clear_output(True)
print('Done!')
class DuelDQN(nn.Module):
def __init__(self, input_dim, action_dim):
super(DuelDQN, self).__init__()
self.feature = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU())
self.advantage = nn.Sequential(nn.Linear(128, 128), nn.ReLU(),
nn.Linear(128, action_dim))
self.value = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 1))
def forward(self, x):
x = self.feature(x)
advantage = self.advantage(x)
value = self.value(x)
return value + advantage - advantage.mean()
def dqn_update(step, batch, params, learn=True):
batch = [i.to(device) for i in batch]
items, next_items, ratings, next_ratings, action, reward, done = batch
b_size = items.size(0)
state = torch.cat([embeddings(items).view(b_size, -1), ratings], 1)
next_state = torch.cat([embeddings(next_items).view(b_size, -1), next_ratings], 1)
q_values = dqn(state)
with torch.no_grad():
next_q_values = target_dqn(next_state)
q_value = q_values.gather(1, action.unsqueeze(1)).squeeze(1)
next_q_value = next_q_values.max(1)[0]
expected_q_value = reward + params['gamma'] * next_q_value * (1 - done)
loss = (q_value - expected_q_value).pow(2).mean()
if learn:
writer.add_scalar('value/train', loss, step)
embeddings_optimizer.zero_grad()
value_optimizer.zero_grad()
loss.backward()
        torch.nn.utils.clip_grad_norm_(dqn.parameters(), 1)  # max_norm must be positive; clip the gradient norm to 1
embeddings_optimizer.step()
value_optimizer.step()
else:
writer.add_histogram('q_values', q_values, step)
writer.add_scalar('value/test', loss, step)
return loss.item()
def run_tests():
test_batch = next(iter(test_dataloader))
losses = dqn_update(step, test_batch, params, learn=False)
return losses
def soft_update(net, target_net, soft_tau=1e-2):
for target_param, param in zip(target_net.parameters(), net.parameters()):
target_param.data.copy_(
target_param.data * (1.0 - soft_tau) + param.data * soft_tau
)
# === DQN settings ===
params = {
'gamma' : 0.99,
'value_lr' : 1e-5,
'embeddings_lr': 1e-5,
}
# === end ===
dqn = DuelDQN((embed_dim + 1) * frame_size, len(keys)).to(device)
target_dqn = DuelDQN((embed_dim + 1) * frame_size, len(keys)).to(device)
embeddings = nn.Embedding(len(keys), embed_dim).to(device)
embeddings.load_state_dict(torch.load('../../models/embeddings/dqn.pt'))
target_dqn.load_state_dict(dqn.state_dict())
target_dqn.eval()
value_optimizer = recnn.optim.RAdam(dqn.parameters(),
lr=params['value_lr'])
embeddings_optimizer = recnn.optim.RAdam(embeddings.parameters(),
lr=params['embeddings_lr'])
writer = SummaryWriter(log_dir='../../runs')
n_epochs = 100
batch_size = 25
epoch_bar = tqdm(total=n_epochs)
train_users = users[:-5000]
test_users = users[-5000:]
def prepare_batch_wrapper(x):
batch = recnn.data.prepare_batch_static_size(x, frame_size=frame_size)
return batch
train_user_dataset = recnn.data.UserDataset(train_users, user_dict)
test_user_dataset = recnn.data.UserDataset(test_users, user_dict)
train_dataloader = DataLoader(train_user_dataset, batch_size=batch_size,
shuffle=True, num_workers=4,collate_fn=prepare_batch_wrapper)
test_dataloader = DataLoader(test_user_dataset, batch_size=batch_size,
shuffle=True, num_workers=4,collate_fn=prepare_batch_wrapper)
torch.cuda.empty_cache()
# --- config ---
plot_every = 30
# --- end ---
step = 1
train_loss = []
test_loss = []
test_step = []
mem_usage = []
torch.cuda.reset_max_memory_allocated()
for epoch in range(n_epochs):
epoch_bar.update(1)
for batch in tqdm(train_dataloader):
loss = dqn_update(step, batch, params)
train_loss.append(loss)
step += 1
        if step % 30 == 0:
torch.cuda.empty_cache()
soft_update(dqn, target_dqn)
if step % plot_every == 0:
clear_output(True)
print('step', step)
mem_usage.append(torch.cuda.max_memory_allocated())
test_ = run_tests()
test_step.append(step)
test_loss.append(test_)
plt.plot(train_loss)
plt.plot(test_step, test_loss)
plt.show()
np.set_printoptions(precision=5)
np.set_printoptions(suppress=True)
print(embeddings(torch.tensor([[686]]).to(device)).detach().cpu().numpy())
torch.save(embeddings.state_dict(), "../../models/embeddings/dqn.pt")
plt.plot(mem_usage)
```
|
github_jupyter
|
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
import numpy as np
import pandas as pd
from tqdm.auto import tqdm
from IPython.display import clear_output
import gc  # needed for gc.collect() after the ratings dataframe is deleted
import matplotlib.pyplot as plt
%matplotlib inline
# == recnn ==
import sys
sys.path.append("../../")
import recnn
device = torch.device('cuda')
# ---
frame_size = 10
batch_size = 10
embed_dim = 128
# ---
tqdm.pandas()
# https://drive.google.com/open?id=1kTyu05ZmtP2MA33J5hWdX8OyUYEDW4iI
# download ml20m dataset yourself
ratings = pd.read_csv('../../data/ml-20m/ratings.csv')
keys = list(sorted(ratings['movieId'].unique()))
key_to_id = dict(zip(keys, range(len(keys))))
user_dict, users = recnn.data.prepare_dataset(ratings, key_to_id, frame_size)
del ratings
gc.collect()
clear_output(True)
clear_output(True)
print('Done!')
class DuelDQN(nn.Module):
def __init__(self, input_dim, action_dim):
super(DuelDQN, self).__init__()
self.feature = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU())
self.advantage = nn.Sequential(nn.Linear(128, 128), nn.ReLU(),
nn.Linear(128, action_dim))
self.value = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 1))
def forward(self, x):
x = self.feature(x)
advantage = self.advantage(x)
value = self.value(x)
return value + advantage - advantage.mean()
def dqn_update(step, batch, params, learn=True):
batch = [i.to(device) for i in batch]
items, next_items, ratings, next_ratings, action, reward, done = batch
b_size = items.size(0)
state = torch.cat([embeddings(items).view(b_size, -1), ratings], 1)
next_state = torch.cat([embeddings(next_items).view(b_size, -1), next_ratings], 1)
q_values = dqn(state)
with torch.no_grad():
next_q_values = target_dqn(next_state)
q_value = q_values.gather(1, action.unsqueeze(1)).squeeze(1)
next_q_value = next_q_values.max(1)[0]
expected_q_value = reward + params['gamma'] * next_q_value * (1 - done)
loss = (q_value - expected_q_value).pow(2).mean()
if learn:
writer.add_scalar('value/train', loss, step)
embeddings_optimizer.zero_grad()
value_optimizer.zero_grad()
loss.backward()
        torch.nn.utils.clip_grad_norm_(dqn.parameters(), 1)  # max_norm must be positive; clip the gradient norm to 1
embeddings_optimizer.step()
value_optimizer.step()
else:
writer.add_histogram('q_values', q_values, step)
writer.add_scalar('value/test', loss, step)
return loss.item()
def run_tests():
test_batch = next(iter(test_dataloader))
losses = dqn_update(step, test_batch, params, learn=False)
return losses
def soft_update(net, target_net, soft_tau=1e-2):
for target_param, param in zip(target_net.parameters(), net.parameters()):
target_param.data.copy_(
target_param.data * (1.0 - soft_tau) + param.data * soft_tau
)
# === DQN settings ===
params = {
'gamma' : 0.99,
'value_lr' : 1e-5,
'embeddings_lr': 1e-5,
}
# === end ===
dqn = DuelDQN((embed_dim + 1) * frame_size, len(keys)).to(device)
target_dqn = DuelDQN((embed_dim + 1) * frame_size, len(keys)).to(device)
embeddings = nn.Embedding(len(keys), embed_dim).to(device)
embeddings.load_state_dict(torch.load('../../models/embeddings/dqn.pt'))
target_dqn.load_state_dict(dqn.state_dict())
target_dqn.eval()
value_optimizer = recnn.optim.RAdam(dqn.parameters(),
lr=params['value_lr'])
embeddings_optimizer = recnn.optim.RAdam(embeddings.parameters(),
lr=params['embeddings_lr'])
writer = SummaryWriter(log_dir='../../runs')
n_epochs = 100
batch_size = 25
epoch_bar = tqdm(total=n_epochs)
train_users = users[:-5000]
test_users = users[-5000:]
def prepare_batch_wrapper(x):
batch = recnn.data.prepare_batch_static_size(x, frame_size=frame_size)
return batch
train_user_dataset = recnn.data.UserDataset(train_users, user_dict)
test_user_dataset = recnn.data.UserDataset(test_users, user_dict)
train_dataloader = DataLoader(train_user_dataset, batch_size=batch_size,
shuffle=True, num_workers=4,collate_fn=prepare_batch_wrapper)
test_dataloader = DataLoader(test_user_dataset, batch_size=batch_size,
shuffle=True, num_workers=4,collate_fn=prepare_batch_wrapper)
torch.cuda.empty_cache()
# --- config ---
plot_every = 30
# --- end ---
step = 1
train_loss = []
test_loss = []
test_step = []
mem_usage = []
torch.cuda.reset_max_memory_allocated()
for epoch in range(n_epochs):
epoch_bar.update(1)
for batch in tqdm(train_dataloader):
loss = dqn_update(step, batch, params)
train_loss.append(loss)
step += 1
        if step % 30 == 0:
torch.cuda.empty_cache()
soft_update(dqn, target_dqn)
if step % plot_every == 0:
clear_output(True)
print('step', step)
mem_usage.append(torch.cuda.max_memory_allocated())
test_ = run_tests()
test_step.append(step)
test_loss.append(test_)
plt.plot(train_loss)
plt.plot(test_step, test_loss)
plt.show()
np.set_printoptions(precision=5)
np.set_printoptions(suppress=True)
print(embeddings(torch.tensor([[686]]).to(device)).detach().cpu().numpy())
torch.save(embeddings.state_dict(), "../../models/embeddings/dqn.pt")
plt.plot(mem_usage)
| 0.783947 | 0.766031 |
<a href="https://colab.research.google.com/github/AngieCat26/MujeresDigitales/blob/main/clase3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
**CONDITIONAL STRUCTURES**
---
When building a conditional statement, keep in mind that these instructions are designed to support decision making.
Example:
If Pedro takes the left-hand route to the store he will arrive sooner; otherwise he will take longer.
There are several kinds of conditional statements, some simple and some compound, and a condition always evaluates to one of two results:
1. True
2. False
In Python these results are the boolean values True and False.
To use the conditionals shown below, recall the comparison operators covered in the previous class:
1. Equal to
2. Not equal to
3. Less than
4. Greater than
5. Less than or equal to
6. Greater than or equal to
In compound conditions we can chain comparisons with the logical operators (see the sketch after this list):
1. and
2. or
3. not
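A minimal sketch (with made-up values, not part of the original lesson) of how the comparison and logical operators combine in Python:
```
edad = 20
tiene_permiso = True

# and: both conditions must be True
print(edad >= 18 and tiene_permiso)   # True

# or: at least one condition must be True
print(edad < 18 or tiene_permiso)     # True

# not: inverts the boolean value
print(not tiene_permiso)              # False
```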
To apply the comparison and logical operators we rely on what we call a **FLOWCHART**, which helps organize the ideas behind a decision.
**THE IF STATEMENT**
This statement evaluates whether a condition is true or false: when it is true, the instruction (or instructions) placed on the lines immediately after the condition are executed.
```
num = int(input("Escribe un número cualquiera "))
if num == 200:
print("Escribiste 200")
```
**THE ELSE STATEMENT**
This statement specifies the actions to perform when the condition is false.
```
num = int(input("Escribe un número cualquiera "))
if num == 200:
print ("Escribiste 200")
else:
print("El número escrito no es 200")
```
**THE ELIF STATEMENT**
It means "otherwise, if" and lets you chain conditionals together.
```
num = int(input("Escribe un número cualquiera "))
if num == 200:
print ("Escribiste 200")
elif num > 200:
print("El número escrito es MAYOR a 200")
else:
print("El número escrito no es 200")
```
**COMPOUND CONDITIONALS**
When a situation involves more than one condition, and the conditions depend on each other, it can be handled with if statements or with a careful use of elif. However, with many conditions this style of programming often needs many more lines of code.
In those cases we need logical operators such as AND and OR.
```
x = int(input("valor"))
if 0 < x:
if x < 10:
print("x es un numero positivo ")
x = int(input("valor"))
if 0 < x and x < 10:
print("x es un numero de un solo digito")
```
The compound boolean expression above is equivalent to the nested conditional expression.
**ITERATIVE CONTROL STRUCTURES**
Variables are key in iterative control structures, since they are the link between each iteration and the condition being evaluated.
**WHAT IS ITERATION?**
Iteration is running a piece of code as many times as needed until the established conditions are met, as in the sketch below.
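A minimal sketch (made-up values, not part of the original lesson) of iteration with a while loop that repeats until its condition stops being true:
```
contador = 1
while contador <= 5:              # the condition is checked before every iteration
    print("Iteration number", contador)
    contador = contador + 1       # updating the variable eventually ends the loop
```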
*Flags*
Flags are variables that take a binary, preferably boolean, value and indicate a state.
Example
```
suma = False
total = 0
a = 3
b = 10
if(suma == False):
total = a + b
suma = True
tareas = (4.5*10)/100
talleres = (4.0*25)/100
asistencia = (5.0*5)/100
participacion = (4.0*15)/100
suma = tareas + talleres + asistencia + participacion
proyecto = 5.0 - suma
print("las estudiantes tendrían que sacar en el proyecto: ", proyecto)
```
|
github_jupyter
|
num = int(input("Escribe un número cualquiera "))
if num == 200:
print("Escribiste 200")
num = int(input("Escribe un número cualquiera "))
if num == 200:
print ("Escribiste 200")
else:
print("El número escrito no es 200")
num = int(input("Escribe un número cualquiera "))
if num == 200:
print ("Escribiste 200")
elif num > 200:
print("El número escrito es MAYOR a 200")
else:
print("El número escrito no es 200")
x = int(input("valor"))
if 0 < x:
if x < 10:
print("x es un numero positivo ")
x = int(input("valor"))
if 0 < x and x < 10:
print("x es un numero de un solo digito")
suma = False
total = 0
a = 3
b = 10
if(suma == False):
total = a + b
suma = True
tareas = (4.5*10)/100
talleres = (4.0*25)/100
asistencia = (5.0*5)/100
participacion = (4.0*15)/100
suma = tareas + talleres + asistencia + participacion
proyecto = 5.0 - suma
print("las estudiantes tendrían que sacar en el proyecto: ", proyecto)
| 0.074595 | 0.983375 |
<a href="https://colab.research.google.com/github/gustavolq/Bootcamp-DataScience-Alura/blob/main/Modulo_05/Aulas/MachineLearning_Modelos_Metricas_Validacao.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Machine Learning: Models, Metrics and Validation
Hello! Welcome to my notebook for the fifth module of Alura's Applied Data Science Bootcamp.
In this module we will look at the metrics used to judge how good a model is, for example F1 score, precision, the confusion matrix and AUC.
We will also learn how to build a workflow for Data Science projects and how to apply cross-validation.
As in Module 04, we will work with the COVID-19 data from the Sírio-Libanês hospital made available on [Kaggle](https://www.kaggle.com/S%C3%ADrio-Libanes/covid19).
## 1. Machine Learning Workflow
### 1.1 Areas where Machine Learning is used
- Scaling tasks that humans are good at, such as image recognition, sentiment analysis and character recognition.
- Helping with tasks that humans are not good at, such as DeepFake detection, credit limits, fraud detection and revenue forecasting.
### 1.2 Machine Learning Workflow
#### 1.2.1 Goal and business impact
- Define the goal clearly.
- How can we measure the impact?
#### 1.2.2 Data acquisition and transformation
- Where is the data (relational and non-relational databases; CSV, XML and JSON files; web scraping and web crawling; real-time streaming...)?
- Pre-process the data (check variable types, check for NaN values, check the data structure, normalization and standardization, balancing of the target variable...).
It is also important to run an exploratory analysis of the categorical and numerical variables to see how the data is distributed.
At this stage we can use bar plots, box plots, histograms, frequency tables and scatter plots. We should also identify correlations between the predictor variables and look for outliers in the dataset.
#### 1.2.3 Model development
- Split the data into training and test sets using randomness (70/75% training and 30/25% test) - see the sketch after this list.
- Classification, regression or clustering? Supervised or unsupervised learning?
- Model evaluation (confusion matrix for classification; residuals and R2 for regression).
- Classification metrics (accuracy, F-score, recall, precision...).
- Model optimization (hyperparameter tuning, feature selection, cross-validation...).
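A minimal sketch of the train/test split described above, using placeholder arrays (the notebook applies the same idea to the real data later on):
```
from sklearn.model_selection import train_test_split
import numpy as np

# Placeholder feature matrix and target, just to illustrate a 75/25 random split
X = np.arange(100).reshape(50, 2)
y = np.array([0, 1] * 25)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    stratify=y, random_state=42)
print(X_train.shape, X_test.shape)
```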
#### 1.2.4 Communicating results, deployment and monitoring
- Deploy the model or deliver the results.
```
import pandas as pd
import numpy as np
dados = pd.read_excel("https://github.com/gustavolq/Bootcamp-DataScience-Alura/blob/main/Modulo_05/Dataset/Kaggle_Sirio_Libanes_ICU_Prediction.xlsx?raw=true")
dados.head()
def preenche_tabela(dados) :
features_continuas_colunas = dados.iloc[:, 13:-2].columns
features_continuas = dados.groupby("PATIENT_VISIT_IDENTIFIER", as_index = False)[features_continuas_colunas].fillna(method = 'bfill').fillna(method = 'ffill')
features_categoricas = dados.iloc[:, :13]
saida = dados.iloc[:, -2:]
dados_finais = pd.concat([features_categoricas, features_continuas, saida], ignore_index = True, axis = 1)
dados_finais.columns = dados.columns
return dados_finais
dados_limpos = preenche_tabela(dados)
a_remover = dados_limpos.query("WINDOW == '0-2' and ICU == 1")['PATIENT_VISIT_IDENTIFIER'].values
dados_limpos = dados_limpos.query("PATIENT_VISIT_IDENTIFIER not in @a_remover")
dados_limpos = dados_limpos.dropna()
dados_limpos.describe()
def prepare_window(rows) : # rows = linhas de 1 paciente
if(np.any(rows['ICU'])) :
rows.loc[rows["WINDOW"] == "0-2", "ICU"] = 1
return rows.loc[rows["WINDOW"] == "0-2"]
dados_limpos = dados_limpos.groupby("PATIENT_VISIT_IDENTIFIER").apply(prepare_window)
#---------------- Transformação em Categoria ----------------
#dados_limpos['AGE_PERCENTIL'] = dados_limpos['AGE_PERCENTIL'].astype('category').cat.codes
#dados_limpos['AGE_PERCENTIL'] = pd.Series(pd.Categorical('dados_limpos['AGE_PERCENTIL']).cat.codes
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
le.fit(dados_limpos['AGE_PERCENTIL'])
dados_limpos['AGE_PERCENTIL'] = le.transform(dados_limpos['AGE_PERCENTIL'])
#-----------------------------------------------------------
dados_limpos.head()
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score
np.random.seed(73246)
#x_columns = dados.describe().columns
x_columns = dados.columns # ---> Inserimos a coluna AGE_PERCENTIL (não estava sendo utilizada no dados.describe().columns)
y = dados_limpos['ICU']
# x = dados_limpos[x_columns].drop(["ICU"], axis = 1)
x = dados_limpos[x_columns].drop(["ICU", 'WINDOW'], axis = 1)
x_train, x_test, y_train, y_test = train_test_split(x, y, stratify = y)
modelo = DummyClassifier()
modelo.fit(x_train, y_train)
y_prediction = modelo.predict(x_test)
accuracy_score(y_test, y_prediction)
modelo = LogisticRegression(max_iter = 10000)
modelo.fit(x_train, y_train)
y_prediction = modelo.predict(x_test)
accuracy_score(y_test, y_prediction)
```
## 2. Evaluation Metrics
```
# Criação de um modelo de árvore de decisão
from sklearn.tree import DecisionTreeClassifier
modelo_tree = DecisionTreeClassifier()
modelo_tree.fit(x_train, y_train)
y_prediction = modelo_tree.predict(x_test)
from sklearn.tree import plot_tree
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize = (25, 20))
plot_tree(modelo_tree, filled = True)
plt.show()
# Accuracy = Tudo que eu acertei
accuracy_score(y_test, y_prediction)
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay, plot_confusion_matrix
print(confusion_matrix(y_test, y_prediction))
plot_confusion_matrix(modelo_tree, x_test, y_test)
plt.show()
#cm = confusion_matrix(y_test, y_prediction, labels=modelo.classes_)
#ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=modelo.classes_).plot()
```
- TP (True Positive) = 24
- TN (True Negative) = 34
- FP (False Positive) = 13
- FN (False Negative) = 17
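From these counts we can recompute by hand the metrics that `classification_report` prints, assuming the positive class is ICU = 1:
```
# Worked example using the counts above (TP=24, TN=34, FP=13, FN=17)
TP, TN, FP, FN = 24, 34, 13, 17

precision = TP / (TP + FP)                                  # 24/37 ≈ 0.649
recall    = TP / (TP + FN)                                  # 24/41 ≈ 0.585
f1        = 2 * precision * recall / (precision + recall)   # ≈ 0.615
accuracy  = (TP + TN) / (TP + TN + FP + FN)                 # 58/88 ≈ 0.659

print(precision, recall, f1, accuracy)
```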
```
from sklearn.metrics import classification_report
# precision : Tudo que estou falando como positivo (TP e FP) -> TP / TP + FP
# recall : Todas as classes positivas, quantas eu realmente classifiquei corretamente
# f1-score : 2 * ((precision * recall) / (precision + recall))
print(classification_report(y_test, y_prediction))
```
## 2.1 ROC Curve and AUC
- Area Under the Curve (AUC): measures the area under the ROC curve, which plots the true positive rate (y-axis) against the false positive rate (x-axis); the closer the curve gets to the top-left corner, the better the model.
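To see the points behind the curve, here is a small sketch using `sklearn.metrics.roc_curve` and reusing `modelo_tree`, `x_test` and `y_test` from the cells above:
```
from sklearn.metrics import roc_curve

# Probability assigned to the positive class (ICU = 1) for each test sample
probs = modelo_tree.predict_proba(x_test)[:, 1]

# One (FPR, TPR) pair per decision threshold; the AUC is the area under these points
fpr, tpr, thresholds = roc_curve(y_test, probs)
print(fpr[:5], tpr[:5])
```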
```
from sklearn.metrics import roc_auc_score
prob_tree = modelo_tree.predict_proba(x_test)
roc_auc_score(y_test, prob_tree[:,1])
prob_tree
# Criação de uma função
def roda_modelo(modelo, dados) :
np.random.seed(73246)
x_columns = dados.columns
y = dados['ICU']
x = dados[x_columns].drop(["ICU", 'WINDOW'], axis = 1)
x_train, x_test, y_train, y_test = train_test_split(x, y, stratify = y, test_size = 0.15)
modelo.fit(x_train, y_train)
y_pred = modelo.predict(x_test)
prob_predict = modelo.predict_proba(x_test)
auc = roc_auc_score(y_test, prob_predict[:,1])
print(f"AUC : {auc}")
print("\nClassification Report")
print(classification_report(y_test, y_pred))
roda_modelo(modelo, dados_limpos)
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import plot_roc_curve
from sklearn.model_selection import train_test_split
np.random.seed(73246)
x_columns = dados_limpos.columns
y = dados_limpos['ICU']
x = dados_limpos[x_columns].drop(["ICU", 'WINDOW'], axis = 1)
X_train, X_test, y_train, y_test = train_test_split(x, y, stratify = y, test_size = 0.15)
classifiers = [('LogisticRegression', LogisticRegression(max_iter = 10000, random_state = 73246)),
('DecisionTreeClassifier', DecisionTreeClassifier(random_state = 73246))]
fig, ax = plt.subplots(figsize = (8,6))
for name, cls in classifiers:
modelo = cls.fit(X_train, y_train)
plot_roc_curve(modelo, X_test, y_test, ax=ax)
plt.plot([0,1], [0,1], color='orange', linestyle='--')
ax.set_yticks(np.arange(0, 1.1, step = 0.1))
ax.set_xticks(np.arange(0, 1.1, step = 0.1))
def roda_modelo(modelos, dados) :
X = dados[x_columns].drop(["ICU", 'WINDOW'], axis = 1)
y = dados['ICU']
resultados_dataframe = pd.DataFrame(columns = ['modelo', 'roc_auc_mean', 'roc_auc_std'])
resultados_cv_results = {}
for name, modelo in modelos :
kfold = KFold(10, True, random_state = 73246)
cv_results = cross_val_score(modelo, X, y, cv = kfold, scoring = 'roc_auc')
resultados_cv_results[name] = cv_results * 100
resultados_dataframe = resultados_dataframe.append({'modelo' : name, 'roc_auc_mean' : cv_results.mean(), 'roc_auc_std' : cv_results.std()}, ignore_index = True)
return resultados_dataframe, resultados_cv_results
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
modelos = []
modelos.append(('LR', LogisticRegression(max_iter = 10000)))
modelos.append(('LDA', LinearDiscriminantAnalysis()))
modelos.append(('NB', GaussianNB()))
modelos.append(('KNN', KNeighborsClassifier()))
modelos.append(('CART', DecisionTreeClassifier()))
modelos.append(('SVM', SVC()))
resultados_df, resultados_cv = roda_modelo(modelos, dados_limpos)
resultados_df['min_interval'] = resultados_df['roc_auc_mean'] - 2 * resultados_df['roc_auc_std']
resultados_df['max_interval'] = resultados_df['roc_auc_mean'] + 2 * resultados_df['roc_auc_std']
resultados_df.sort_values(by = 'roc_auc_mean', ascending = False)
modelos
import seaborn as sns
sns.set()
fig, ax = plt.subplots(figsize = (12,8))
plt.boxplot(resultados_cv.values())
ax.set_xticklabels(resultados_cv.keys())
plt.show()
```
## 3. Cross-Validation
```
from sklearn.model_selection import cross_validate
from sklearn.model_selection import StratifiedKFold
# Stratified = Proporção de casos positivos para quem precisa de UTI terão a mesma proporção
cv = StratifiedKFold(n_splits = 5, shuffle = True)
cross_validate(modelo, x, y, cv = cv)
from sklearn.model_selection import RepeatedStratifiedKFold
np.random.seed(84612)
cv = RepeatedStratifiedKFold(n_splits = 5, n_repeats = 10)
cross_validate(modelo, x, y, cv = cv)
np.random.seed(84612)
cross_val_score(modelo, x, y, cv = cv)
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier
from sklearn.ensemble import RandomForestClassifier
import numpy as np
def roda_modelo_cv(modelos, dados) :
x_columns = dados.columns
X = dados[x_columns].drop(["PATIENT_VISIT_IDENTIFIER","ICU", 'WINDOW'], axis = 1)
y = dados['ICU']
resultados_dataframe = pd.DataFrame(columns = ['modelo', 'roc_auc_mean', 'roc_auc_std', 'min_interval', 'max_interval'])
cv = RepeatedStratifiedKFold(n_splits = 5, n_repeats = 10)
for name, cls in modelos :
np.random.seed(89461)
# results = cross_validate(cls, X, y, cv = cv)['train_score']
results = cross_val_score(cls, X, y, cv = cv, scoring = 'roc_auc')
resultados_dataframe = resultados_dataframe.append({'modelo' : name, 'roc_auc_mean' : np.mean(results), 'roc_auc_std' : np.std(results), 'min_interval' : (np.mean(results) - 2 * np.std(results)), 'max_interval' : (np.mean(results) + 2 * np.std(results))},
ignore_index = True)
return resultados_dataframe
classifiers = [('LogisticRegression', LogisticRegression(max_iter = 10000)),
('DecisionTreeClassifier', DecisionTreeClassifier()),
('XGBoost', XGBClassifier(eval_metric='mlogloss')),
('RandomForest', RandomForestClassifier())]
df_result = roda_modelo_cv(classifiers, dados_limpos)
df_result
```
## 4. Overfitting and RandomForest
```
def outro_roda_modelo_cv(modelos, dados) :
x_columns = dados.columns
X = dados[x_columns].drop(["PATIENT_VISIT_IDENTIFIER","ICU", 'WINDOW'], axis = 1)
y = dados['ICU']
resultados_dataframe = pd.DataFrame(columns = ['modelo', 'roc_auc_mean_train', 'roc_auc_mean_test', 'roc_auc_std', 'min_interval', 'max_interval'])
cv = RepeatedStratifiedKFold(n_splits = 5, n_repeats = 10)
for name,cls in modelos :
np.random.seed(89461)
results = cross_validate(cls, X, y, cv = cv, scoring='roc_auc', return_train_score=True)
train_mean = np.mean(results['train_score'])
test_mean = np.mean(results['test_score'])
roc_std = np.std(results['test_score'])
min_interval = test_mean - 2 * roc_std
max_interval = test_mean + 2 * roc_std
resultados_dataframe = resultados_dataframe.append({'modelo' : name,
'roc_auc_mean_train' : train_mean,
'roc_auc_mean_test' : test_mean,
'roc_auc_std' : roc_std,
'min_interval' : min_interval,
'max_interval' : max_interval},
ignore_index = True)
return resultados_dataframe
classifiers = [('LogisticRegression', LogisticRegression(max_iter = 10000)),
('DecisionTreeClassifier', DecisionTreeClassifier()),
('XGBoost', XGBClassifier(eval_metric='mlogloss')),
('RandomForest', RandomForestClassifier())]
df_result = outro_roda_modelo_cv(classifiers, dados_limpos)
df_result
classifiers = []
for i in np.arange(1, 16) :
classifiers.append((f'DecisionTreeClassifier_{i}', DecisionTreeClassifier(max_depth=i)))
df_result = outro_roda_modelo_cv(classifiers, dados_limpos)
df_result
x = np.arange(1, 16)
plt.plot(x, df_result['roc_auc_mean_train'])
plt.plot(x, df_result['roc_auc_mean_test'])
```
## 5. Correlated Features
We will build a correlation matrix to identify which variables are most strongly correlated with each other.
This step is important for feature selection.
```
dados_limpos.info()
cor_cols = [coluna for coluna in dados_limpos.columns if coluna not in dados_limpos.select_dtypes(exclude = 'float64').columns]
alta_corr = 0.95
matrix_corr = dados_limpos.loc[:, cor_cols].corr().abs()
exclude_columns = [coluna for coluna in matrix_corr.columns if any(matrix_corr[coluna] > alta_corr)]
exclude_columns
matrix_upper = matrix_corr.where(np.triu(np.ones(matrix_corr.shape), k = 1).astype(np.bool))
exclude_columns = [coluna for coluna in matrix_upper.columns if any(matrix_upper[coluna] > alta_corr)]
exclude_columns
def remove_corr_var(dados, valor_corte) :
matrix_corr = dados.loc[:, cor_cols].corr().abs()
matrix_upper = matrix_corr.where(np.triu(np.ones(matrix_corr.shape), k = 1).astype(np.bool))
exclude_columns = [coluna for coluna in matrix_upper.columns if any(matrix_upper[coluna] > valor_corte)]
return dados.drop(exclude_columns, axis = 1)
dados_limpos_sem_corr = remove_corr_var(dados_limpos, 0.95)
dados_limpos_sem_corr
classifiers = [('LogisticRegression', LogisticRegression(max_iter = 10000)),
('DecisionTreeClassifier', DecisionTreeClassifier()),
('XGBoost', XGBClassifier(eval_metric='mlogloss')),
('RandomForest', RandomForestClassifier())]
df_result = roda_modelo_cv(classifiers, dados_limpos)
df_result
classifiers = [('LogisticRegression', LogisticRegression(max_iter = 10000)),
('DecisionTreeClassifier', DecisionTreeClassifier()),
('XGBoost', XGBClassifier(eval_metric='mlogloss')),
('RandomForest', RandomForestClassifier())]
df_result = roda_modelo_cv(classifiers, dados_limpos_sem_corr)
df_result
dados_limpos_sem_corr
dados_limpos
dados_limpos_sem_corr
```
|
github_jupyter
|
import pandas as pd
import numpy as np
dados = pd.read_excel("https://github.com/gustavolq/Bootcamp-DataScience-Alura/blob/main/Modulo_05/Dataset/Kaggle_Sirio_Libanes_ICU_Prediction.xlsx?raw=true")
dados.head()
def preenche_tabela(dados) :
features_continuas_colunas = dados.iloc[:, 13:-2].columns
features_continuas = dados.groupby("PATIENT_VISIT_IDENTIFIER", as_index = False)[features_continuas_colunas].fillna(method = 'bfill').fillna(method = 'ffill')
features_categoricas = dados.iloc[:, :13]
saida = dados.iloc[:, -2:]
dados_finais = pd.concat([features_categoricas, features_continuas, saida], ignore_index = True, axis = 1)
dados_finais.columns = dados.columns
return dados_finais
dados_limpos = preenche_tabela(dados)
a_remover = dados_limpos.query("WINDOW == '0-2' and ICU == 1")['PATIENT_VISIT_IDENTIFIER'].values
dados_limpos = dados_limpos.query("PATIENT_VISIT_IDENTIFIER not in @a_remover")
dados_limpos = dados_limpos.dropna()
dados_limpos.describe()
def prepare_window(rows) : # rows = linhas de 1 paciente
if(np.any(rows['ICU'])) :
rows.loc[rows["WINDOW"] == "0-2", "ICU"] = 1
return rows.loc[rows["WINDOW"] == "0-2"]
dados_limpos = dados_limpos.groupby("PATIENT_VISIT_IDENTIFIER").apply(prepare_window)
#---------------- Transformação em Categoria ----------------
#dados_limpos['AGE_PERCENTIL'] = dados_limpos['AGE_PERCENTIL'].astype('category').cat.codes
#dados_limpos['AGE_PERCENTIL'] = pd.Series(pd.Categorical('dados_limpos['AGE_PERCENTIL']).cat.codes
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
le.fit(dados_limpos['AGE_PERCENTIL'])
dados_limpos['AGE_PERCENTIL'] = le.transform(dados_limpos['AGE_PERCENTIL'])
#-----------------------------------------------------------
dados_limpos.head()
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score
np.random.seed(73246)
#x_columns = dados.describe().columns
x_columns = dados.columns # ---> Inserimos a coluna AGE_PERCENTIL (não estava sendo utilizada no dados.describe().columns)
y = dados_limpos['ICU']
# x = dados_limpos[x_columns].drop(["ICU"], axis = 1)
x = dados_limpos[x_columns].drop(["ICU", 'WINDOW'], axis = 1)
x_train, x_test, y_train, y_test = train_test_split(x, y, stratify = y)
modelo = DummyClassifier()
modelo.fit(x_train, y_train)
y_prediction = modelo.predict(x_test)
accuracy_score(y_test, y_prediction)
modelo = LogisticRegression(max_iter = 10000)
modelo.fit(x_train, y_train)
y_prediction = modelo.predict(x_test)
accuracy_score(y_test, y_prediction)
# Criação de um modelo de árvore de decisão
from sklearn.tree import DecisionTreeClassifier
modelo_tree = DecisionTreeClassifier()
modelo_tree.fit(x_train, y_train)
y_prediction = modelo_tree.predict(x_test)
from sklearn.tree import plot_tree
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize = (25, 20))
plot_tree(modelo_tree, filled = True)
plt.show()
# Accuracy = Tudo que eu acertei
accuracy_score(y_test, y_prediction)
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay, plot_confusion_matrix
print(confusion_matrix(y_test, y_prediction))
plot_confusion_matrix(modelo_tree, x_test, y_test)
plt.show()
#cm = confusion_matrix(y_test, y_prediction, labels=modelo.classes_)
#ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=modelo.classes_).plot()
from sklearn.metrics import classification_report
# precision : Tudo que estou falando como positivo (TP e FP) -> TP / TP + FP
# recall : Todas as classes positivas, quantas eu realmente classifiquei corretamente
# f1-score : 2 * ((precision * recall) / (precision + recall))
print(classification_report(y_test, y_prediction))
from sklearn.metrics import roc_auc_score
prob_tree = modelo_tree.predict_proba(x_test)
roc_auc_score(y_test, prob_tree[:,1])
prob_tree
# Criação de uma função
def roda_modelo(modelo, dados) :
np.random.seed(73246)
x_columns = dados.columns
y = dados['ICU']
x = dados[x_columns].drop(["ICU", 'WINDOW'], axis = 1)
x_train, x_test, y_train, y_test = train_test_split(x, y, stratify = y, test_size = 0.15)
modelo.fit(x_train, y_train)
y_pred = modelo.predict(x_test)
prob_predict = modelo.predict_proba(x_test)
auc = roc_auc_score(y_test, prob_predict[:,1])
print(f"AUC : {auc}")
print("\nClassification Report")
print(classification_report(y_test, y_pred))
roda_modelo(modelo, dados_limpos)
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import plot_roc_curve
from sklearn.model_selection import train_test_split
np.random.seed(73246)
x_columns = dados_limpos.columns
y = dados_limpos['ICU']
x = dados_limpos[x_columns].drop(["ICU", 'WINDOW'], axis = 1)
X_train, X_test, y_train, y_test = train_test_split(x, y, stratify = y, test_size = 0.15)
classifiers = [('LogisticRegression', LogisticRegression(max_iter = 10000, random_state = 73246)),
('DecisionTreeClassifier', DecisionTreeClassifier(random_state = 73246))]
fig, ax = plt.subplots(figsize = (8,6))
for name, cls in classifiers:
modelo = cls.fit(X_train, y_train)
plot_roc_curve(modelo, X_test, y_test, ax=ax)
plt.plot([0,1], [0,1], color='orange', linestyle='--')
ax.set_yticks(np.arange(0, 1.1, step = 0.1))
ax.set_xticks(np.arange(0, 1.1, step = 0.1))
def roda_modelo(modelos, dados) :
X = dados[x_columns].drop(["ICU", 'WINDOW'], axis = 1)
y = dados['ICU']
resultados_dataframe = pd.DataFrame(columns = ['modelo', 'roc_auc_mean', 'roc_auc_std'])
resultados_cv_results = {}
for name, modelo in modelos :
kfold = KFold(10, True, random_state = 73246)
cv_results = cross_val_score(modelo, X, y, cv = kfold, scoring = 'roc_auc')
resultados_cv_results[name] = cv_results * 100
resultados_dataframe = resultados_dataframe.append({'modelo' : name, 'roc_auc_mean' : cv_results.mean(), 'roc_auc_std' : cv_results.std()}, ignore_index = True)
return resultados_dataframe, resultados_cv_results
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
modelos = []
modelos.append(('LR', LogisticRegression(max_iter = 10000)))
modelos.append(('LDA', LinearDiscriminantAnalysis()))
modelos.append(('NB', GaussianNB()))
modelos.append(('KNN', KNeighborsClassifier()))
modelos.append(('CART', DecisionTreeClassifier()))
modelos.append(('SVM', SVC()))
resultados_df, resultados_cv = roda_modelo(modelos, dados_limpos)
resultados_df['min_interval'] = resultados_df['roc_auc_mean'] - 2 * resultados_df['roc_auc_std']
resultados_df['max_interval'] = resultados_df['roc_auc_mean'] + 2 * resultados_df['roc_auc_std']
resultados_df.sort_values(by = 'roc_auc_mean', ascending = False)
modelos
import seaborn as sns
sns.set()
fig, ax = plt.subplots(figsize = (12,8))
plt.boxplot(resultados_cv.values())
ax.set_xticklabels(resultados_cv.keys())
plt.show()
from sklearn.model_selection import cross_validate
from sklearn.model_selection import StratifiedKFold
# Stratified = Proporção de casos positivos para quem precisa de UTI terão a mesma proporção
cv = StratifiedKFold(n_splits = 5, shuffle = True)
cross_validate(modelo, x, y, cv = cv)
from sklearn.model_selection import RepeatedStratifiedKFold
np.random.seed(84612)
cv = RepeatedStratifiedKFold(n_splits = 5, n_repeats = 10)
cross_validate(modelo, x, y, cv = cv)
np.random.seed(84612)
cross_val_score(modelo, x, y, cv = cv)
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier
from sklearn.ensemble import RandomForestClassifier
import numpy as np
def roda_modelo_cv(modelos, dados) :
x_columns = dados.columns
X = dados[x_columns].drop(["PATIENT_VISIT_IDENTIFIER","ICU", 'WINDOW'], axis = 1)
y = dados['ICU']
resultados_dataframe = pd.DataFrame(columns = ['modelo', 'roc_auc_mean', 'roc_auc_std', 'min_interval', 'max_interval'])
cv = RepeatedStratifiedKFold(n_splits = 5, n_repeats = 10)
for name, cls in modelos :
np.random.seed(89461)
# results = cross_validate(cls, X, y, cv = cv)['train_score']
results = cross_val_score(cls, X, y, cv = cv, scoring = 'roc_auc')
resultados_dataframe = resultados_dataframe.append({'modelo' : name, 'roc_auc_mean' : np.mean(results), 'roc_auc_std' : np.std(results), 'min_interval' : (np.mean(results) - 2 * np.std(results)), 'max_interval' : (np.mean(results) + 2 * np.std(results))},
ignore_index = True)
return resultados_dataframe
classifiers = [('LogisticRegression', LogisticRegression(max_iter = 10000)),
('DecisionTreeClassifier', DecisionTreeClassifier()),
('XGBoost', XGBClassifier(eval_metric='mlogloss')),
('RandomForest', RandomForestClassifier())]
df_result = roda_modelo_cv(classifiers, dados_limpos)
df_result
def outro_roda_modelo_cv(modelos, dados) :
x_columns = dados.columns
X = dados[x_columns].drop(["PATIENT_VISIT_IDENTIFIER","ICU", 'WINDOW'], axis = 1)
y = dados['ICU']
resultados_dataframe = pd.DataFrame(columns = ['modelo', 'roc_auc_mean_train', 'roc_auc_mean_test', 'roc_auc_std', 'min_interval', 'max_interval'])
cv = RepeatedStratifiedKFold(n_splits = 5, n_repeats = 10)
for name,cls in modelos :
np.random.seed(89461)
results = cross_validate(cls, X, y, cv = cv, scoring='roc_auc', return_train_score=True)
train_mean = np.mean(results['train_score'])
test_mean = np.mean(results['test_score'])
roc_std = np.std(results['test_score'])
min_interval = test_mean - 2 * roc_std
max_interval = test_mean + 2 * roc_std
resultados_dataframe = resultados_dataframe.append({'modelo' : name,
'roc_auc_mean_train' : train_mean,
'roc_auc_mean_test' : test_mean,
'roc_auc_std' : roc_std,
'min_interval' : min_interval,
'max_interval' : max_interval},
ignore_index = True)
return resultados_dataframe
classifiers = [('LogisticRegression', LogisticRegression(max_iter = 10000)),
('DecisionTreeClassifier', DecisionTreeClassifier()),
('XGBoost', XGBClassifier(eval_metric='mlogloss')),
('RandomForest', RandomForestClassifier())]
df_result = outro_roda_modelo_cv(classifiers, dados_limpos)
df_result
classifiers = []
for i in np.arange(1, 16) :
classifiers.append((f'DecisionTreeClassifier_{i}', DecisionTreeClassifier(max_depth=i)))
df_result = outro_roda_modelo_cv(classifiers, dados_limpos)
df_result
x = np.arange(1, 16)
plt.plot(x, df_result['roc_auc_mean_train'])
plt.plot(x, df_result['roc_auc_mean_test'])
dados_limpos.info()
cor_cols = [coluna for coluna in dados_limpos.columns if coluna not in dados_limpos.select_dtypes(exclude = 'float64').columns]
alta_corr = 0.95
matrix_corr = dados_limpos.loc[:, cor_cols].corr().abs()
exclude_columns = [coluna for coluna in matrix_corr.columns if any(matrix_corr[coluna] > alta_corr)]
exclude_columns
matrix_upper = matrix_corr.where(np.triu(np.ones(matrix_corr.shape), k = 1).astype(bool))
exclude_columns = [coluna for coluna in matrix_upper.columns if any(matrix_upper[coluna] > alta_corr)]
exclude_columns
def remove_corr_var(dados, valor_corte) :
matrix_corr = dados.loc[:, cor_cols].corr().abs()
  matrix_upper = matrix_corr.where(np.triu(np.ones(matrix_corr.shape), k = 1).astype(bool))
exclude_columns = [coluna for coluna in matrix_upper.columns if any(matrix_upper[coluna] > valor_corte)]
return dados.drop(exclude_columns, axis = 1)
dados_limpos_sem_corr = remove_corr_var(dados_limpos, 0.95)
dados_limpos_sem_corr
classifiers = [('LogisticRegression', LogisticRegression(max_iter = 10000)),
('DecisionTreeClassifier', DecisionTreeClassifier()),
('XGBoost', XGBClassifier(eval_metric='mlogloss')),
('RandomForest', RandomForestClassifier())]
df_result = roda_modelo_cv(classifiers, dados_limpos)
df_result
classifiers = [('LogisticRegression', LogisticRegression(max_iter = 10000)),
('DecisionTreeClassifier', DecisionTreeClassifier()),
('XGBoost', XGBClassifier(eval_metric='mlogloss')),
('RandomForest', RandomForestClassifier())]
df_result = roda_modelo_cv(classifiers, dados_limpos_sem_corr)
df_result
dados_limpos_sem_corr
dados_limpos
dados_limpos_sem_corr
```
import pandas as pd
import os
from rdkit import Chem
pattern_triazolopyrazine = Chem.MolFromSmarts("*-c1nnc2cncc(-*)n12")
def is_triazolopyrazine(mol):
if mol.HasSubstructMatch(pattern_triazolopyrazine):
return 1
else:
return 0
DATAPATH = "../data"
# Take most real active molecules and best generated for a round of mollib
osm = pd.read_csv(os.path.join(DATAPATH, "training_all.csv"))
eos = pd.read_csv(os.path.join(DATAPATH, "eosi_s4_candidates_90.csv"))
osm.sort_values("activity", ascending=True, inplace =True)
osm_sel = osm.head(90)
osm_sel.rename(columns={"osm":"id"}, inplace=True)
eos.rename(columns={"EosId":"id"}, inplace=True)
eos.drop(columns="InchiKey", inplace=True)
osm_sel.drop(columns="activity", inplace=True)
actives = pd.concat([osm_sel, eos], ignore_index=True)
smi = actives["smiles"].tolist()
with open(os.path.join(DATAPATH, "high_actives.txt"), "w") as f:
for s in smi:
f.write(s+"\n")
#load already synthesized molecules to compare with mollib new predicted
gen_mols = pd.read_csv(os.path.join(DATAPATH, "data_0.csv"))
real_mols = pd.read_csv(os.path.join(DATAPATH, "training_all.csv"))
gen_smi = gen_mols["Smiles"].tolist()
real_smi = real_mols["smiles"].tolist()
#read mollib output
mollib_40e_dict = {
"ep10": pd.read_csv(os.path.join(DATAPATH, "mollib_40epochs", "molecules_10_0.7.txt"), header = None),
"ep20": pd.read_csv(os.path.join(DATAPATH, "mollib_40epochs", "molecules_20_0.7.txt"), header = None),
"ep30": pd.read_csv(os.path.join(DATAPATH, "mollib_40epochs", "molecules_30_0.7.txt"), header = None),
"ep40": pd.read_csv(os.path.join(DATAPATH, "mollib_40epochs", "molecules_40_0.7.txt"), header = None),
}
for k,v in mollib_40e_dict.items():
v.columns=["smiles"]
for k,v in mollib_40e_dict.items():
print(len(v))
mollib_40e = pd.concat([v for k,v in mollib_40e_dict.items()], ignore_index=True)
smiles = mollib_40e["smiles"].tolist()
mols = [Chem.MolFromSmiles(smi) for smi in smiles]
mollib_40e["IsTriazoloPyrazine"] = [is_triazolopyrazine(mol) for mol in (mols)]
idx = mollib_40e.index[(mollib_40e["IsTriazoloPyrazine"]==0)].tolist()
mollib_40e.drop(idx, inplace=True)
print(mollib_40e.shape)
mollib_40e_smi = mollib_40e["smiles"].tolist()
dup_smi = list(set(gen_smi).intersection(mollib_40e_smi))
print(len(dup_smi))
mollib_new = list(set(mollib_40e_smi).difference(set(gen_smi)))
print(len(mollib_new))
red_smi = list(set(real_smi).intersection(set(mollib_40e_smi)))
print(len(red_smi))
mollib_40e_new = list(set(mollib_new).difference(set(real_smi)))
print(len(mollib_40e_new))
# NOTE: mollib_10e_new is assumed to have been defined in an earlier (10-epoch) analysis run
mollib_40e_new = list(set(mollib_40e_new).difference(set(mollib_10e_new)))
print(len(mollib_40e_new))
mollib_data = pd.DataFrame(mollib_40e_new, columns=["smiles"])
mollib_data.to_csv(os.path.join(DATAPATH, "mollib_40e.csv"), index=False)
```
<a href="https://colab.research.google.com/github/316141725/daa_2021_1/blob/master/7deOctubre.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Linear Search
Given an unordered collection of data, linear search consists of traversing the collection from start to end, moving one element at a time, until the target element is found or the end of the collection is reached.
datos = [4,18,47,2,34,14,78,12,48,21,31,19,1,3,5]
# Binary Search
Binary search works on an ordered, linear collection of data.
It consists of dividing the collection into halves and searching in one half. If the element being searched for is not at the midpoint, it is either to its right or to its left; you make the list equal to the corresponding half and repeat the process.
L=[4,18,47,2,34,14,78,12,48,21,31,19,1,3,5]
DER = length( L ) - 1
IZQ = 0
MID will point to the middle of the search segment
buscado: the value to search for
1. Set DER = length( L ) - 1
2. Set IZQ = 0
3. If IZQ > DER there is an error in the data.
4. Compute MID = int((IZQ + DER)/2)
5. While L[MID] != buscado do
6. Check whether L[MID] > buscado:
set DER = MID
otherwise
set IZQ = MID
check whether (DER-IZQ) % 2 == 0:
MID = (IZQ + ((DER-IZQ)/2)) + 1
otherwise
MID = IZQ + ((DER - IZQ)/2)
7. Return MID
```
"""
Busqeuda lineal
regresa la posisicon del elemento 'buscado' si se encuentra dentro de la lista.
regresa -1 si elemento buscado no existe dentro de la lista.
"""
def busqueda_lineal(L , buscado):
indice = -1
contador = 0
for idx in range(len(L)):
contador += 1
if L[idx] == buscado:
indice = idx
break
print(contador)
return indice
"""
Busqueda binaria
"""
def busqueda_binaria( L , buscado ):
IZQ=0
DER = len(L)-1
MID = int((IZQ + DER)/2)
if len(L) % 2 == 0:
MID= (DER//2)+1
else:
MID = DER//2
while (L[MID] != buscado):
if L[MID] > buscado:
DER = MID
else:
IZQ = MID
if (DER - IZQ) % 2 == 0:
MID = (IZQ+ ((DER - IZQ)//2))+1
else:
MID = IZQ+ ((DER - IZQ)//2)
return MID
"""
Busqueda lineal
"""
def main():
datos = [ 4,18,47,2,34,14,78,12,48,21,31,19,1,3,5]
dato = int(input("Que valor quieres buscar: "))
resultado = busqueda_lineal(datos, dato)
print("Resultado: ",resultado)
print("Busqueda linal en una lista ordenada")
datos.sort()
print(datos)
resultado = busqueda_lineal(datos, dato)
print("Resultado: ",resultado)
print("Busqueda binaria")
posicion = busqueda_binaria( datos , dato)
print(f"")
main()
```
<a href="https://colab.research.google.com/github/shainaelevado/Linear-Algebra_CHE-2nd-Sem-2022/blob/main/Assignment3_Elevado_Nebres.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Linear Algebra for CHE
## Assignment 3: Matrices
Now that you have a fundamental knowledge of Python, we'll try to look into greater dimensions
### Objectives
1. Be familiar with matrices and their relation to linear equations.
2. Perform basic matrix operations
3. Program and translate matrix equations and operations using Python.
# Discussion
```
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as la
%matplotlib inline
```
### Matrices
The notation and use of matrices is probably one of the fundamentals of modern computing. Matrices are also handy representations of complex equations or multiple inter-related equations, from 2-dimensional equations to even hundreds and thousands of them.
Let's say, for example, you have *A*, *B*, and *C* as systems of equations
$$
A = \left\{
\begin{array}\
x + y\\
4x - 10y
\end{array}
\right.\\
B = \left\{
\begin{array}\
x+y+z \\
3x -2y -z \\
-x + 4y +2z
\end{array}
\right. \\
C = \left\{
\begin{array}\
w-2x+3y-4z \\
3w- x -2y +z \\
2w -x + 3y - 2z
\end{array}
\right.
$$
We can see that *A* is a system of 2 equations with 2 parameters, *B* is a system of 3 equations with 3 parameters, and *C* is a system of 3 equations with 4 parameters. We can represent them as matrices:
:$$
A=\begin{bmatrix} 1 & 1 \\ 4 & -10\end{bmatrix} \\
B=\begin{bmatrix} 1 & 1 & 1 \\ 3 & -2 & -1 \\ -1 & 4 & 2\end{bmatrix}\\
C=\begin{bmatrix} 1 & -2 & 3 & -4 \\ 3 & -1 & -2 & 1 \\ 2 & -1 & 3 & -2\end{bmatrix} $$
So, assuming you have already discussed the fundamental representation of matrices, their types, and operations, we'll proceed to work with them here in Python (see the sketch below).
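As a quick check that the mapping from systems of equations to matrices is clear, here is a minimal sketch (using numpy, imported above) that encodes the coefficient matrices of *A*, *B*, and *C* shown above:
```
# Coefficient matrices of the systems A, B, and C written above
A = np.array([[1, 1],
              [4, -10]])
B = np.array([[1, 1, 1],
              [3, -2, -1],
              [-1, 4, 2]])
C = np.array([[1, -2, 3, -4],
              [3, -1, -2, 1],
              [2, -1, 3, -2]])
print(A.shape, B.shape, C.shape)   # (2, 2) (3, 3) (3, 4)
```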
### Declaring Matrices
Just like in our previous laboratory activity, we'll represent systems of linear equations as matrices. The entities or numbers in matrices are called the elements of a matrix. These elements are arranged and ordered in rows and columns, which form the list/array-like structure of matrices. And just like arrays, these elements are indexed according to their position with respect to their rows and columns. This can be represented by the equation below, where *A* is a matrix consisting of elements denoted by $a_{(i,j)}$; here $i$ is the row index and $j$ is the column index.
$$
A=\begin{bmatrix}
a_{(0,0)}&a_{(0,1)}&\dots&a_{(0,j-1)}\\
a_{(1,0)}&a_{(1,1)}&\dots&a_{(1,j-1)}\\
\vdots&\vdots&\ddots&\vdots&\\
a_{(i-1,0)}&a_{(i-1,1)}&\dots&a_{(i-1,j-1)}
\end{bmatrix}
$$
We have already gone over some of the types of matrices as vectors, but we'll discuss them further in this laboratory activity. Since you already know how to describe vectors using the **shape**, **dimensions**, and **size** attributes, we'll use them to analyze these matrices.
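Before moving on, here is a small sketch of how the $a_{(i,j)}$ indexing above maps onto numpy indexing (the example array is only illustrative):
```
M = np.array([
    [10, 20, 30],
    [40, 50, 60]
])
print(M[0, 1])   # element a_(0,1) -> 20
print(M[1, 2])   # element a_(1,2) -> 60
print(M.shape)   # (2, 3): i runs over 2 rows, j over 3 columns
```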
```
## Since we'll keep on describing matrices. Let's make a function
def describe_mat(matrix):
print(f'Matrix:\n{matrix}\n\nShape:\t{matrix.shape}\nRank:\t{matrix.ndim}\n')
## Declaring a 2 x 2 matrix
A = np.array([
[1, 2],
[3, 1]
])
describe_mat(A)
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as la
%matplotlib inline
G = np.array([
[1,1,3],
[2,2,4]
])
describe_mat(G)
B = np.array([
[8, 2],
[5, 4],
[1, 1]
])
describe_mat(B)
H = np.array([1,2,3,4])
describe_mat(H)
# Expected output:
# Matrix:
# [1 2 3 4]
# Shape: (4,)
# Rank:  1
```
## Categorizing Matrices
There are several ways of classifying matrices. One could be according to their **shape** and another is according to their **element values**. We'll try to go through them.
### Row and Column Matrices
```
## Declaring a Row Matrix
rowmatrix1D = np.array([
1, 3, 2, -4
]) ## this is a 1-D array with a shape of (4,); it's not really considered a row matrix.
row_mat_2D = np.array([
[1,2,3, -4]
]) ## this is a 2-D Matrix with a shape of (1,4)
describe_mat(rowmatrix1D)
describe_mat(row_mat_2D)
col_mat = np.array([
[2],
[6],
[10]
]) ## this is a 2-D Matrix with a shape of (3,1)
describe_mat(col_mat)
```
### Square Matrices
Square matrices have the same row and column sizes. We could say a matrix is square if *i = j*. We can tweak our matrix descriptor function to determine square matrices.
```
def describe_mat(matrix):
is_square = True if matrix.shape[0] == matrix.shape[1] else False
print(f'Matrix:\n{matrix}\n\nShape:\t{matrix.shape}\nRank:\t{matrix.ndim}\nIs Square: {is_square}\n')
square_mat = np.array([
[1,2,5],
[3,7,8],
[6,1,2]
])
non_square_mat = np.array([
[1,2,6],
[3,3,8]
])
describe_mat(square_mat)
describe_mat(non_square_mat)
def describe_mat(matrix):
if matrix.size > 0:
is_square = True if matrix.shape[0] == matrix.shape[1] else False
print(f'Matrix:\n{matrix}\n\nShape:\t{matrix.shape}\nRank:\t{matrix.ndim}\nIs Square: {is_square}\n')
else:
print('Matrix is Null')
null_mat = np.array([])
describe_mat(null_mat)
```
## Zero Matrix
A zero matrix can be any rectangular matrix but with all elements having a value of 0
```
zero_mat_row = np.zeros((1,2))
zero_mat_sqr = np.zeros((2,2))
zero_mat_rct = np.zeros((3,2))
print(f'Zero Row Matrix: \n{zero_mat_row}')
print(f'Zero Square Matrix: \n{zero_mat_sqr}')
print(f'Zero Rectangular Matrix: \n{zero_mat_rct}')
```
## Ones Matrix
A ones matrix, just like the zero matrix, can be any rectangular matrix but all of its elements are 1s instead of 0s
```
ones_mat_row = np.ones((1,2))
ones_mat_sqr = np.ones((2,2))
ones_mat_rct = np.ones((3,2))
print(f'Ones Row Matrix: \n{ones_mat_row}')
print(f'Ones Square Matrix: \n{ones_mat_sqr}')
print(f'Ones Rectangular Matrix: \n{ones_mat_rct}')
```
## Diagonal Matrix
A diagonal matrix is a square matrix that has values only at the diagonal of the matrix.
```
np.array([
[2,0,0],
[0,3,0],
[0,0,5]
])
d = np.diag([2,3,5,7])
#d.shape[0] == d.shape[1]
d
```
## Identity Matrix
An identity matrix is a special diagonal matrix in which the values at the diagonal are ones.
```
np.eye(2)
np.identity(9)
```
## Upper Triangular Matrix
An upper triangular matrix is a matrix that has no values below the diagonal
```
np.array([
[1,2,3,5],
[0,3,1,-2],
[0,0,5,3],
[0,0,0,3]
])
F = np.array([
[1, -3, 4, -5, 6],
[2, -3, 4, -5, 6],
[-2, -3, 5, -5, 6],
[-6, -3, 4, -5, 6],
[2, -3, 4, -5, 6],
])
np.triu(F)
```
## Lower Triangular Matrix
A lower triangular matrix is a matrix that has no values above the diagonal
```
np.tril(F)
```
## Practice
1. Given the linear combination below, try to create a corresponding matrix representing it.
:$$\theta = 5x + 3y - z$$
2. Given the system of linear combinations below, try to encode it as a matrix. Also describe the matrix.
$$
A = \left\{\begin{array}
5x_1 + 2x_2 +x_3\\
4x_2 - x_3\\
10x_3
\end{array}\right.
$$
3. Given the matrix below, express it as a linear combination in a markdown and a LaTeX markdown
```
G = np.array([
[1,7,8],
[2,2,2],
[4,6,7]
])
```
4. Given the matrix below, display the output as a LaTeX markdown also express it as a system of linear combinations.
```
H = np.tril(G)
H
def create_user (userid):
print("Successfully created user: {}".format(userid))
userid = 2021_100001   # note: underscores in numeric literals are digit separators, so this is the integer 2021100001
create_user(2021-100100)   # note: this argument is evaluated as subtraction (2021 - 100100 = -98079), not as an ID
```
## Matrix Algebra
### Addition
```
A = np.array([
[1,2],
[2,3],
[4,1],
])
B = np.array([
[2,2],
[0,0],
[1,1],
])
A+B
3+A ##Broadcasting
# 2*np;.ones(A.shape)+A
```
### Subtraction
```
A-B
3-B
```
### Element-wise Multiplication
```
A*B
np.multiply(A,B)
2*A
```
## Task 1
Create a function named mat_desc() that thoroughly describes a matrix. It should:
1. Displays the shape, size and rank of the matrix
2. Displays whether the matrix is square or non-square.
3. Displays whether the matrix is an empty matrix.
4. Displays if the matrix is an identity, ones, or zeros matrix.
Use 3 sample matrices whose shapes are not smaller than (3,3). In your methodology, create a flowchart and discuss the functions and methods you have used. Present your results in the results section, showing the description of each matrix you have declared.
```
## Function Area
import numpy as np
## Matrix Declarations
def mat_desc(mat):
sq = False
mat = np.array(mat)
print(mat)
print('Shape:', mat.shape)
print('Size:', mat.size)
print('Rank:', np.linalg.matrix_rank(mat))
if(mat.shape[0] == mat.shape[1]):
sq = True
print('The matrix is square')
else:
print('The matrix is non-square')
if(mat.shape[0] == 0 and mat.shape[1] == 0):
print('The matrix is empty')
else:
print('The matrix is not empty')
iden = np.identity(mat.shape[0])
if(sq and (iden == mat).all()):
print('The matrix is an identity matrix')
else:
print('The matrix is not an identity matrix')
one = np.ones((mat.shape[0], mat.shape[1]))
if((one == mat).all()):
print('The matrix is an ones matrix')
else:
print('The matrix is not an ones matrix')
zero = np.zeros((mat.shape[0], mat.shape[1]))
if((zero == mat).all()):
print('The matrix is an zeros matrix')
else:
print('The matrix is not a zeros matrix')
## Sample Matrices
print('Matrix 1:')
mat_desc([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
print('Matrix 2:')
mat_desc([[2, 0, 0], [0, 2, 0], [0, 0, 2]])
print('Matrix 3:')
mat_desc([[1, 2, 3], [4, 5, 6], [5, 6, 8]])
```
## Task 2
Create a function named mat_operations() that takes in two matrices as input parameters. It should:
1. Determines if the matrices are viable for operation and returns your own error message if they are not viable.
2. Returns the sum of the matrices.
3. Returns the difference of the matrices.
4. Returns the element-wise multiplication of the matrices.
5. Returns the element-wise division of the matrices.
Use 3 sample matrices whose shapes are not smaller than (3,3). In your methodology, create a flowchart and discuss the functions and methods you have used. Present your results in the results section, showing the description of each matrix you have declared.
```
import numpy as np
def mat_operations(mat1, mat2):
mat1 = np.array(mat1)
mat2 = np.array(mat2)
print('Matrix 1:', mat1)
print('Matrix 2:', mat2)
if(mat1.shape != mat2.shape):
print('The shape of both matrices are not same. Could not perform operations.')
return
print('Sum of the given matrices:')
msum = mat1 + mat2
print(msum)
print('Difference of the given matrices:')
mdiff = mat1 - mat2
print(mdiff)
print('Element-wise multiplication of the given matrices:')
mmul = np.multiply(mat1, mat2)
print(mmul)
print('Element-wise division of the given matrices:')
mmul = np.divide(mat1, mat2)
print(mmul)
print('Sample 1:')
mat_operations([[1, 0, 0], [0, 1, 0], [0, 0, 1]], [[1, 2, 4], [2, 3, 4], [4, 5, 6]])
print('Sample 2:')
mat_operations([[2, 0, 0], [0, 2, 0], [0, 0, 2]], [[1, 2, 4], [2, 3, 4], [4, 5, 6]])
print('Sample 3:')
mat_operations([[1, 2, 3], [4, 5, 6], [5, 6, 8]], [[1, 2, 4], [2, 3, 4], [4, 5, 6]])
```
### Description:
This notebook works with a Creative Commons-licensed dataset provided by Olist, a Brazilian e-commerce marketplace integrator.
The e-commerce website enables independent sellers to sell their products through the e-commerce store and ship them directly to the customers.
After a customer purchases a product, a seller gets notified to fulfill that order. Once the customer receives the product, or the estimated delivery date is due, the customer gets a satisfaction survey by email where they can rate and review the purchase experience.
### Dataset description:
The dataset has information on 100k orders from 2016 to 2018. It contains order status, price, payment and delivery time to the customer location, product attributes, and reviews written by customers. The geolocation dataset relates Brazilian zip codes to lat/lng coordinates.
```
# pip install spacy, nltk, google-trans-new
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from glob import glob
from hops import hdfs
# hide warnings
import warnings
warnings.filterwarnings('ignore')
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
import time
start = time.time()
customers_df = pd.read_csv(hdfs.project_path() + "ecommerce/customer.csv")
geolocation_df = pd.read_csv(hdfs.project_path() + "ecommerce/geolocation.csv")
orders_df = pd.read_csv(hdfs.project_path() + "ecommerce/order.csv")
order_items_df = pd.read_csv(hdfs.project_path() + "ecommerce/order_item.csv")
order_payments_df = pd.read_csv(hdfs.project_path() + "ecommerce/order_payment.csv")
order_reviews_df = pd.read_csv(hdfs.project_path() + "ecommerce/order_review.csv")
products_df = pd.read_csv(hdfs.project_path() + "ecommerce/product.csv")
sellers_df = pd.read_csv(hdfs.project_path() + "ecommerce/seller.csv")
category_transalations_df = pd.read_csv(hdfs.project_path() + "ecommerce/product_category_name_translation.csv")
end = time.time()
print(end - start)
# Lets check the size of each df:
df_names = ['customers_df','geolocation_df', 'orders_df', 'order_items_df','order_payments_df',
'order_reviews_df','products_df','sellers_df', 'category_transalations_df' ]
for df in df_names:
print("Dataset {} has shape {}".format(df, eval(df).shape))
```
### Data Schema

### Description of various columns in different csv files


```
df = pd.merge(orders_df,order_payments_df, on="order_id")
df = pd.merge(df,customers_df, on="customer_id")
df = pd.merge(df,order_items_df, on="order_id")
df = pd.merge(df,sellers_df, on="seller_id")
df = pd.merge(df,order_reviews_df, on="order_id")
df = pd.merge(df,products_df, on="product_id")
#df = pd.merge(df,geolocation_df, left_on="" right_on="geolocation_zip_code_prefix")
df = pd.merge(df,category_transalations_df, on="product_category_name")
df.shape
df.head()
df.isnull().sum()
df.info()
```
## Products Analysis
#### Top 25 Product catgeories
```
print("Number of unique categories: ", len(products_df.product_category_name.unique()))
plt.figure(figsize=(10,6))
top_25_prod_categories = products_df.groupby('product_category_name')['product_id'].count().sort_values(ascending=False).head(25)
sns.barplot(x=top_25_prod_categories.index, y=top_25_prod_categories.values)
plt.xticks(rotation=80)
plt.xlabel('Product Category')
plt.title('Top 25 Most Common Categories');
plt.show()
```
Most products fall under these top 25 categories
### Let's now do an RFM (Recency, Frequency, Monetary) Analysis for Behavioural Segmentation of Customers
```
from datetime import datetime
import sklearn
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from scipy.cluster.hierarchy import linkage
from scipy.cluster.hierarchy import dendrogram
from scipy.cluster.hierarchy import cut_tree
```
## Monetary
```
df.head()
df.count()
# Remove duplicate entries
df= df.drop_duplicates(subset={'order_id','customer_id','order_purchase_timestamp','order_delivered_customer_date'}, keep='first')
df=df.reindex()
df['total_payment'] = df['payment_value'] * df['payment_installments']
# monetary
grouped_df = df.groupby('customer_unique_id')['total_payment'].sum()
grouped_df = grouped_df.reset_index()
grouped_df.columns = ['customer_unique_id', 'monetary']
grouped_df.head()
# frequency
frequency = df.groupby('customer_unique_id')['order_id'].count()
frequency = frequency.reset_index()
frequency.columns = ['customer_unique_id', 'frequency']
frequency.sort_values("frequency",ascending=False).head()
# merge the two dfs
grouped_df = pd.merge(grouped_df, frequency, on='customer_unique_id', how='inner')
grouped_df.sort_values("monetary",ascending=False).head()
df['order_purchase_timestamp'] = pd.to_datetime(df['order_purchase_timestamp'], infer_datetime_format=True, errors='ignore')
max_date = max(df['order_purchase_timestamp'])
df['diff_days'] = (max_date-df['order_purchase_timestamp']).dt.days
# Recency
recency = df.groupby('customer_unique_id')['diff_days'].min()
recency = recency.reset_index()
recency.columns = ['customer_unique_id', 'recency']
recency.head()
# merge the grouped_df to recency df
rfm_df = pd.merge(grouped_df, recency, on='customer_unique_id', how='inner')
rfm_df.sort_values("monetary",ascending=False).head()
# Plot RFM distributions
plt.figure(figsize=(12,10))
# Plot distribution of R
plt.subplot(3, 1, 1); sns.distplot(rfm_df['recency'])
# Plot distribution of F
plt.subplot(3, 1, 2); sns.distplot(rfm_df['frequency'])
# Plot distribution of M
plt.subplot(3, 1, 3); sns.distplot(rfm_df['monetary'])
# Show the plot
plt.show()
```
### Let's check for Outliers
```
sns.boxplot(rfm_df['recency'])
sns.boxplot(rfm_df['frequency'])
sns.boxplot(rfm_df['monetary'])
```
### Monetary and Frequency have outliers
```
# removing (statistical) outliers for monetary
Q1 = rfm_df.monetary.quantile(0.05)
Q3 = rfm_df.monetary.quantile(0.95)
IQR = Q3 - Q1
rfm_df = rfm_df[(rfm_df.monetary >= Q1 - 1.5*IQR) & (rfm_df.monetary <= Q3 + 1.5*IQR)]
# outlier treatment for frequency
Q1 = rfm_df.frequency.quantile(0.05)
Q3 = rfm_df.frequency.quantile(0.95)
IQR = Q3 - Q1
rfm_df = rfm_df[(rfm_df.frequency >= Q1 - 1.5*IQR) & (rfm_df.frequency <= Q3 + 1.5*IQR)]
sns.boxplot(rfm_df['monetary'])
sns.boxplot(rfm_df['frequency'])
```
#### Scaling
```
rfm_df_scaled = rfm_df[['monetary', 'frequency', 'recency']]
# instantiate
scaler = StandardScaler()
# fit_transform
rfm_df_scaled = scaler.fit_transform(rfm_df_scaled)
rfm_df_scaled.shape
rfm_df_scaled = pd.DataFrame(rfm_df_scaled)
rfm_df_scaled.columns = ['monetary', 'frequency', 'recency']
rfm_df_scaled.head()
```
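The `KMeans` and `silhouette_score` imports above are never used in the cells shown; a minimal sketch of how the scaled RFM table could be segmented with k-means (the choice of 4 clusters, the random seed, and the `cluster_id` column name are assumptions for illustration only):
```
# Hypothetical follow-up: cluster customers on the scaled RFM features
kmeans = KMeans(n_clusters=4, random_state=42)
kmeans.fit(rfm_df_scaled)
rfm_df['cluster_id'] = kmeans.labels_
# silhouette score closer to 1 means better-separated clusters
print(silhouette_score(rfm_df_scaled, kmeans.labels_))
# average R, F, M per cluster helps interpret the segments
print(rfm_df.groupby('cluster_id')[['recency', 'frequency', 'monetary']].mean())
```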
## Let's do the RFM Analysis for the Sellers
We can use the same code as above, just replacing the groupby column
```
df.head()
# frequency
frequency = df.groupby('seller_id')['order_item_id'].count()
frequency = frequency.reset_index()
frequency.columns = ['seller_id', 'frequency']
frequency.head()
# monetary
monetary = df.groupby('seller_id')['total_payment'].sum()
monetary = monetary.reset_index()
monetary.columns = ['seller_id', 'monetary']
monetary.head()
# recency
recency = df.groupby('seller_id')['diff_days'].min()
recency = recency.reset_index()
recency.columns = ['seller_id', 'recency']
recency.head()
rfm_seller_df = pd.merge(frequency, monetary, on='seller_id', how='inner')
rfm_seller_df = pd.merge(rfm_seller_df, recency, on='seller_id', how='inner')
rfm_seller_df.head()
# Plot RFM distributions
plt.figure(figsize=(12,10))
# Plot distribution of R
plt.subplot(3, 1, 1); sns.distplot(rfm_seller_df['recency'])
# Plot distribution of F
plt.subplot(3, 1, 2); sns.distplot(rfm_seller_df['frequency'])
# Plot distribution of M
plt.subplot(3, 1, 3); sns.distplot(rfm_seller_df['monetary'])
# Show the plot
plt.show()
```
#### Outlier detection
```
sns.boxplot(rfm_seller_df['recency'])
sns.boxplot(rfm_seller_df['frequency'])
sns.boxplot(rfm_seller_df['monetary'])
```
### Outlier treatment
```
# removing (statistical) outliers for monetary
Q1 = rfm_seller_df.monetary.quantile(0.05)
Q3 = rfm_seller_df.monetary.quantile(0.95)
IQR = Q3 - Q1
rfm_seller_df = rfm_seller_df[(rfm_seller_df.monetary >= Q1 - 1.5*IQR) & (rfm_seller_df.monetary <= Q3 + 1.5*IQR)]
# outlier treatment for frequency
Q1 = rfm_seller_df.frequency.quantile(0.05)
Q3 = rfm_seller_df.frequency.quantile(0.95)
IQR = Q3 - Q1
rfm_seller_df = rfm_seller_df[(rfm_seller_df.frequency >= Q1 - 1.5*IQR) & (rfm_seller_df.frequency <= Q3 + 1.5*IQR)]
# outlier treatment for recency
Q1 = rfm_seller_df.recency.quantile(0.05)
Q3 = rfm_seller_df.recency.quantile(0.95)
IQR = Q3 - Q1
rfm_seller_df = rfm_seller_df[(rfm_seller_df.recency >= Q1 - 1.5*IQR) & (rfm_seller_df.recency <= Q3 + 1.5*IQR)]
sns.boxplot(rfm_seller_df['recency'])
sns.boxplot(rfm_seller_df['frequency'])
sns.boxplot(rfm_seller_df['monetary'])
```
## Scaling
```
rfm_seller_df_scaled = rfm_seller_df[['monetary', 'frequency', 'recency']]
# instantiate
scaler = StandardScaler()
# fit_transform
rfm_seller_df_scaled = scaler.fit_transform(rfm_seller_df_scaled)
rfm_seller_df_scaled.shape
rfm_seller_df_scaled = pd.DataFrame(rfm_seller_df_scaled)
rfm_seller_df_scaled.columns = ['monetary', 'frequency', 'recency']
rfm_seller_df_scaled.head()
```
### Let's do some Customer Sentiment Analysis using the review comments
```
order_reviews_df.head()
len(order_reviews_df)
```
We need real review messages, so let's drop the NONE values
```
reviews_df = order_reviews_df[~(order_reviews_df['review_comment_message'] == 'NONE')]
reviews_df.head()
```
Let's only keep the review messages and the review score to see what the customers are liking and disliking
```
reviews_df = reviews_df[['review_score','review_comment_message']]
reviews_df.head()
```
Let's do some cleaning of the review messages
```
!python -m spacy download pt_core_news_sm
import re, nltk, spacy, string
import pt_core_news_sm
nlp = pt_core_news_sm.load()
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
```
### Cleaning the messages
```
def clean_text(text):
text = text.lower() # Make the text lowercase
text = re.sub('\[.*\]','', text).strip() # Remove text in square brackets if any
text = text.translate(str.maketrans('', '', string.punctuation)) # Remove punctuation
text = re.sub('\S*\d\S*\s*','', text).strip() # Remove words containing numbers
return text.strip()
reviews_df.review_comment_message = reviews_df.review_comment_message.apply(lambda x: clean_text(x))
```
### Lemmatizing the words
```
# Portuguese stopwords
stopwords = nlp.Defaults.stop_words
# lemmatizer function
def lemmatizer(text):
doc = nlp(text)
sent = [token.lemma_ for token in doc if not token.text in set(stopwords)]
return ' '.join(sent)
reviews_df['lemma'] = reviews_df.review_comment_message.apply(lambda x: lemmatizer(x))
reviews_df.head(10)
```
### Unigram, Bigram, and Trigram Frequency Analysis
```
# top-n n-gram frequencies among the reviews (ngram=1 for unigrams, 2 for bigrams, 3 for trigrams)
def get_top_n_bigram(text, ngram=1, top=None):
vec = CountVectorizer(ngram_range=(ngram, ngram), stop_words=stopwords).fit(text)
bag_of_words = vec.transform(text)
sum_words = bag_of_words.sum(axis=0)
words_freq = [(word, sum_words[0, idx]) for word, idx in vec.vocabulary_.items()]
words_freq =sorted(words_freq, key = lambda x: x[1], reverse=True)
return words_freq[:top]
top_30_unigrams = get_top_n_bigram(reviews_df.lemma,ngram=1, top=30)
top_30_bigrams = get_top_n_bigram(reviews_df.lemma,ngram=2, top=30)
top_30_trigrams = get_top_n_bigram(reviews_df.lemma,ngram=3, top=30)
df1 = pd.DataFrame(top_30_unigrams, columns = ['unigram' , 'count'])
plt.figure(figsize=(12,6))
fig = sns.barplot(x=df1['unigram'], y=df1['count'])
plt.xticks(rotation = 80)
plt.show()
df2 = pd.DataFrame(top_30_bigrams, columns = ['bigram' , 'count'])
plt.figure(figsize=(12,6))
fig = sns.barplot(x=df2['bigram'], y=df2['count'])
plt.xticks(rotation = 80)
plt.show()
df3 = pd.DataFrame(top_30_trigrams, columns = ['trigram' , 'count'])
plt.figure(figsize=(12,6))
fig = sns.barplot(x=df3['trigram'], y=df3['count'])
plt.xticks(rotation = 80)
plt.show()
```
## TfidfVectorizer
```
tfidf = TfidfVectorizer(min_df=2, max_df=0.95, stop_words=stopwords)
dtm = tfidf.fit_transform(reviews_df.lemma)
tfidf.get_feature_names()[:10]
len(tfidf.get_feature_names())
reviews_df.review_score.value_counts()
```
## Using NMF for Topic modelling for the review messages
Let's pick 5 as the n_components since the review_score is also on a 5-point scale
```
from sklearn.decomposition import NMF
#Load nmf_model with the n_components
num_topics = 5
#keep the random_state =40
nmf_model = NMF(n_components=num_topics, random_state=40)
W1 = nmf_model.fit_transform(dtm)
H1 = nmf_model.components_
colnames = ["Topic" + str(i) for i in range(nmf_model.n_components)]
docnames = ["Doc" + str(i) for i in range(len(reviews_df.lemma))]
df_doc_topic = pd.DataFrame(np.round(W1, 2), columns=colnames, index=docnames)
significant_topic = np.argmax(df_doc_topic.values, axis=1)
df_doc_topic['dominant_topic'] = significant_topic
reviews_df['topic'] = significant_topic
pd.set_option('display.max_colwidth', None)
reviews_df[['review_comment_message','lemma','review_score','topic']][reviews_df.topic==0].head(20)
temp = reviews_df[['review_comment_message','lemma','review_score','topic']].groupby('topic').head(20)
temp.sort_values('topic')
```
## Translating Portuguese to English using Google Translate
### Attempting to provide better names to the topic indexes obtained above
```
!pip install google_trans_new
# google translate from portuguese to english
from google_trans_new import google_translator
translator = google_translator()
def translate_pt_to_eng(sent):
translated_sent = translator.translate(sent,lang_tgt='en',lang_src='pt')
return translated_sent
!pip install --user googletrans
from googletrans import Translator
translator = Translator()
translator.translate('veritas lux mea', src='la', dest='en')
reviews_df['lemma'].head()
print(translate_pt_to_eng('receber prazo estipular'))
reviews_df['lemma'] = reviews_df['lemma'].apply(lambda x : translate_pt_to_eng(x))
```
# Testing
## Introduction
### A few reasons not to do testing
Sensibility | Sense
------------------------------------ | -------------------------------------
**It's boring** | *Maybe*
**Code is just a one off throwaway** | *As with most research codes*
**No time for it** | *A bit more code, a lot less debugging*
**Tests can be buggy too** | *See above*
**Not a professional programmer** | *See above*
**Will do it later** | *See above*
### A few reasons to do testing
* **lazyness** *testing saves time*
* **peace of mind** *tests (should) ensure code is correct*
* **runnable specification** *best way to let others know what a function should do and
not do*
* **reproducible debugging** *debugging that happened and is saved for later reuse*
* code structure / **modularity** *since the code is designed for at least two situations*
* easier to modify *since results can be tested*
### Not a panacea
> Trying to improve the quality of software by doing more testing is like trying to lose weight by
> weighing yourself more often.
- Steve McConnell
* Testing won't correct buggy code
* Testing will tell you where the bugs are...
* ... if the test cases *cover* the bugs
### Tests at different scales
Level of test | Area covered by test
----------------------- | ---------------------------------------------------------
**Unit testing** | smallest logical block of work (often < 10 lines of code)
**Component testing** | several logical blocks of work together
**Integration testing** | all components together / whole program
* Always start at the smallest scale!
* If a unit test is too complicated, go smaller.
### Legacy code hardening
* Very difficult to create unit-tests for existing code
* Instead we make a **regression test**
* Run program as a black box:
```
setup input
run program
read output
check output against expected result
```
* Does not test correctness of code
* Checks the code is wrong in the same way on day N as it was on day 0
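A minimal sketch of such a black-box regression test in Python (the program name, input file, and expected-output file here are hypothetical):
```python
import subprocess

def test_program_regression():
    # setup input: a known input file and its previously recorded output (hypothetical names)
    with open("expected_output.txt") as f:
        expected = f.read()
    # run program as a black box
    result = subprocess.run(
        ["python", "my_program.py", "input.txt"],
        capture_output=True, text=True, check=True,
    )
    # check output against the expected result stored from an earlier run
    assert result.stdout == expected
```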
### Testing vocabulary
* **fixture**: input data
* **action**: function that is being tested
* **expected result**: the output that should be obtained
* **actual result**: the output that is obtained
* **coverage**: proportion of all possible paths in the code that the tests take
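To make these terms concrete, here is a tiny sketch (the `add` function is only an illustrative stand-in for real code under test):
```python
def add(a, b):                      # code under test
    return a + b

def test_add():
    fixture = (2, 3)                # fixture: the input data
    actual = add(*fixture)          # action: call the function being tested
    expected = 5                    # expected result
    assert actual == expected       # compare actual result against expected result
```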
### Branch coverage:
```python
if energy > 0:
    # Do this
else:
    # Do that
```
Is there a test for both `energy > 0` and `energy <= 0`?
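Covering both branches needs at least one test on each side of the condition; a hedged sketch, assuming the branch lives in a hypothetical function called `update_energy`:
```python
def update_energy(energy):
    if energy > 0:
        return "this"               # stands in for "Do this"
    else:
        return "that"               # stands in for "Do that"

def test_positive_energy():
    assert update_energy(1.0) == "this"

def test_non_positive_energy():
    assert update_energy(0.0) == "that"
```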
## KF Basics - Part I
### Introduction
#### What is the need to describe belief in terms of PDF's?
Because robot environments are stochastic: a robot and its environment cannot be modelled deterministically (e.g. as a simple function of time t), and the environment can contain almost anything, a cow standing next to a Tesla included. Real-world sensors are also error prone, so each measurement is better described by a range of values it can take, with an associated mean and variance. Hence we always have to model the robot's belief around a mean and an associated variance.
#### What is Expectation of a Random Variables?
Expectation is the probability-weighted average of the values a random variable can take:
$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$
In the continuous form,
$$\mathbb E[X] = \int_{-\infty}^\infty x\, f(x) \,dx$$
```
import numpy as np
import random
x=[3,1,2]
p=[0.1,0.3,0.6]  # probabilities must sum to 1
E_x=np.sum(np.multiply(x,p))
print(E_x)
```
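The continuous form can be checked numerically in the same spirit; a small sketch using `scipy` integration for a standard normal distribution, whose expectation is 0:
```
from scipy import integrate, stats

# E[X] = integral of x * f(x) over the real line, truncated here to [-10, 10]
E_x, _ = integrate.quad(lambda x: x * stats.norm.pdf(x, loc=0, scale=1), -10, 10)
print(E_x)  # approximately 0
```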
#### What is the advantage of representing the belief as a unimodal as opposed to multimodal?
A unimodal belief is simply easier to act on: we cannot, for example, commit a car to two different locations at once. A multimodal belief spreads the probability over several hypotheses, which is confusing and not directly useful for decision making.
### Variance, Covariance and Correlation
#### Variance
Variance measures the spread of the data. The mean alone does not tell us much **about** the data; the variance tells the rest of the **story**, namely how widely the data are spread around the mean.
$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$
```
x=np.random.randn(10)
np.var(x)
```
#### Covariance
Covariance arises with multivariate distributions. For example, a robot in 2-D space has both an x and a y coordinate; to describe it, a normal distribution with a mean in both x and y is needed.
For a multivariate distribution, the mean $\mu$ can be represented as a column vector,
$$
\mu = \begin{bmatrix}\mu_1\\\mu_2\\ \vdots \\\mu_n\end{bmatrix}
$$
Similarly, variance can also be represented.
Just as each variable or dimension has a variance of its own, pairs of variables can also **vary together**. This joint variation, the covariance, measures how two datasets are related to each other and underlies the notion of **correlation**.
For example, as height increases weight also generally increases. These variables are correlated. They are positively correlated because as one variable gets larger so does the other.
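A quick numerical illustration of positive correlation, using small synthetic height/weight-like numbers chosen purely for this example:
```
heights = np.array([150, 160, 165, 170, 180, 185])
weights = np.array([50, 56, 63, 65, 72, 80])
# off-diagonal entries close to +1 indicate strong positive correlation
np.corrcoef(heights, weights)
```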
We use a **covariance matrix** to denote covariances of a multivariate normal distribution:
$$
\Sigma = \begin{bmatrix}
\sigma_1^2 & \sigma_{12} & \cdots & \sigma_{1n} \\
\sigma_{21} &\sigma_2^2 & \cdots & \sigma_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
\sigma_{n1} & \sigma_{n2} & \cdots & \sigma_n^2
\end{bmatrix}
$$
**Diagonal** - variance of each individual variable.
**Off-diagonal** - covariance between the $i$-th and $j$-th variables.
$$\begin{aligned}\mathit{VAR}(X) = \sigma_x^2 &= \frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2\\
\mathit{COV}(X, Y) = \sigma_{xy} &= \frac{1}{n}\sum_{i=1}^n(x_i-\mu_x)(y_i-\mu_y)\end{aligned}$$
```
x=np.random.random((3,3))
np.cov(x)
```
Covariance treating the data as a **sample**, normalised by $\frac{1}{N-1}$:
```
x_cor=np.random.rand(1,10)
y_cor=np.random.rand(1,10)
np.cov(x_cor,y_cor)
```
Covariance treating the data as the full **population**, normalised by $\frac{1}{N}$ (`bias=1`):
```
np.cov(x_cor,y_cor,bias=1)
```
### Gaussians
#### Central Limit Theorem
According to this theorem, the average of n samples of independent and identically distributed random variables tends to follow a normal distribution as we increase the sample size (generally, for n >= 30).
```
import matplotlib.pyplot as plt
import random
a=np.zeros((100,))
for i in range(100):
x=[random.uniform(1,10) for _ in range(1000)]
a[i]=np.sum(x,axis=0)/1000
plt.hist(a)
```
#### Gaussian Distribution
A Gaussian is a *continuous probability distribution* that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). It is defined as:
$$
f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]
$$
The range is $(-\infty, \infty)$.
This is just a function of the mean ($\mu$) and standard deviation ($\sigma$), and it is what gives the normal distribution its characteristic **bell curve**.
```
import matplotlib.mlab as mlab
import math
import scipy.stats
mu = 0
variance = 5
sigma = math.sqrt(variance)
x = np.linspace(mu - 5*sigma, mu + 5*sigma, 100)
plt.plot(x,scipy.stats.norm.pdf(x, mu, sigma))
plt.show()
```
#### Why do we need Gaussian distributions?
In the real world it is difficult to work with a multimodal distribution, because we cannot place the robot's belief in two separate locations at once; such a belief is confusing and, in practice, impossible to act on.
A Gaussian distribution lets us drive the robot with a single mode: one peak at the mean, with some variance around it.
### Gaussian Properties
**Multiplication**
For the measurement update in a Bayes filter, the algorithm tells us to multiply the prior $P(X_t)$ by the measurement likelihood $P(Z_t \mid X_t)$ to calculate the posterior:
$$P(X \mid Z) = \frac{P(Z \mid X)P(X)}{P(Z)}$$
Here, both factors in the numerator, $P(Z \mid X)$ and $P(X)$, are Gaussian,
with distributions $N(z, \sigma_z^2)$ (measurement) and $N(\bar\mu, \bar\sigma^2)$ (prior) respectively.
New mean is
$$\mu_\mathtt{new} = \frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2}$$
New variance is
$$
\sigma^2_\mathtt{new} = \frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}
$$
```
import matplotlib.mlab as mlab
import math
mu1 = 0
variance1 = 2
sigma = math.sqrt(variance1)
x1 = np.linspace(mu1 - 3*sigma, mu1 + 3*sigma, 100)
plt.plot(x1,scipy.stats.norm.pdf(x1, mu1, sigma),label='prior')
mu2 = 10
variance2 = 2
sigma = math.sqrt(variance2)
x2 = np.linspace(mu2 - 3*sigma, mu2 + 3*sigma, 100)
plt.plot(x2,scipy.stats.norm.pdf(x2, mu2, sigma),"g-",label='measurement')
mu_new=(mu1*variance2+mu2*variance1)/(variance1+variance2)
print("New mean is at: ",mu_new)
var_new=(variance1*variance2)/(variance1+variance2)
print("New variance is: ",var_new)
sigma = math.sqrt(var_new)
x3 = np.linspace(mu_new - 3*sigma, mu_new + 3*sigma, 100)
plt.plot(x3,scipy.stats.norm.pdf(x3, mu_new, sigma),label="posterior")  # norm.pdf expects the standard deviation, not the variance
plt.legend(loc='upper left')
plt.xlim(-10,20)
plt.show()
```
**Addition**
In the motion (prediction) step, beliefs are combined additively (as required by the law of total probability): adding two independent Gaussians gives another Gaussian whose mean and variance are simply the arithmetic sums of the two.
$$\begin{gathered}\mu_x = \mu_p + \mu_z \\
\sigma_x^2 = \sigma_z^2+\sigma_p^2\end{gathered}$$
```
import matplotlib.mlab as mlab
import math
mu1 = 5
variance1 = 1
sigma = math.sqrt(variance1)
x1 = np.linspace(mu1 - 3*sigma, mu1 + 3*sigma, 100)
plt.plot(x1,scipy.stats.norm.pdf(x1, mu1, sigma),label='prior')
mu2 = 10
variance2 = 1
sigma = math.sqrt(variance2)
x2 = np.linspace(mu2 - 3*sigma, mu2 + 3*sigma, 100)
plt.plot(x2,scipy.stats.norm.pdf(x2, mu2, sigma),"g-",label='measurement')
mu_new=mu1+mu2
print("New mean is at: ",mu_new)
var_new=(variance1+variance2)
print("New variance is: ",var_new)
sigma = math.sqrt(var_new)
x3 = np.linspace(mu_new - 3*sigma, mu_new + 3*sigma, 100)
plt.plot(x3,scipy.stats.norm.pdf(x3, mu_new, sigma),label="posterior")  # norm.pdf expects the standard deviation, not the variance
plt.legend(loc='upper left')
plt.xlim(-10,20)
plt.show()
#Example from:
#https://scipython.com/blog/visualizing-the-bivariate-gaussian-distribution/
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
# Our 2-dimensional distribution will be over variables X and Y
N = 60
X = np.linspace(-3, 3, N)
Y = np.linspace(-3, 4, N)
X, Y = np.meshgrid(X, Y)
# Mean vector and covariance matrix
mu = np.array([0., 1.])
Sigma = np.array([[ 1. , -0.5], [-0.5, 1.5]])
# Pack X and Y into a single 3-dimensional array
pos = np.empty(X.shape + (2,))
pos[:, :, 0] = X
pos[:, :, 1] = Y
def multivariate_gaussian(pos, mu, Sigma):
"""Return the multivariate Gaussian distribution on array pos.
pos is an array constructed by packing the meshed arrays of variables
x_1, x_2, x_3, ..., x_k into its _last_ dimension.
"""
n = mu.shape[0]
Sigma_det = np.linalg.det(Sigma)
Sigma_inv = np.linalg.inv(Sigma)
N = np.sqrt((2*np.pi)**n * Sigma_det)
# This einsum call calculates (x-mu)T.Sigma-1.(x-mu) in a vectorized
# way across all the input variables.
fac = np.einsum('...k,kl,...l->...', pos-mu, Sigma_inv, pos-mu)
return np.exp(-fac / 2) / N
# The distribution on the variables X, Y packed into pos.
Z = multivariate_gaussian(pos, mu, Sigma)
# Create a surface plot and projected filled contour plot under it.
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) is removed in newer matplotlib
ax.plot_surface(X, Y, Z, rstride=3, cstride=3, linewidth=1, antialiased=True,
cmap=cm.viridis)
cset = ax.contourf(X, Y, Z, zdir='z', offset=-0.15, cmap=cm.viridis)
# Adjust the limits, ticks and view angle
ax.set_zlim(-0.15,0.2)
ax.set_zticks(np.linspace(0,0.2,5))
ax.view_init(27, -21)
plt.show()
```
This is a 3-D surface plot of the bivariate Gaussian, with the lower surface showing its 2-D contour projection. The innermost ellipse corresponds to the top of the peak, i.e. the maximum probability density for a given (X, Y) value.
**numpy einsum examples**
```
a = np.arange(25).reshape(5,5)
b = np.arange(5)
c = np.arange(6).reshape(2,3)
print(a)
print(b)
print(c)
# 'ij' with no output specification just returns the array unchanged (no summation)
np.einsum('ij', a)
# 'ii->i' extracts the diagonal of a
np.einsum('ii->i',a)
# 'ij,j' multiplies each row of a element-wise by b and sums over j,
# i.e. the matrix-vector product a @ b
np.einsum('ij,j',a, b)
A = np.arange(3).reshape(3,1)
B = np.array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
C=np.multiply(A,B)
np.sum(C,axis=1)
D = np.array([0,1,2])
E = np.array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
np.einsum('i,ij->i',D,E)
from scipy.stats import multivariate_normal
x, y = np.mgrid[-5:5:.1, -5:5:.1]
pos = np.empty(x.shape + (2,))
pos[:, :, 0] = x; pos[:, :, 1] = y
rv = multivariate_normal([0.5, -0.2], [[2.0, 0.9], [0.9, 0.5]])
plt.contourf(x, y, rv.pdf(pos))
```
### References:
1. Roger Labbe's [repo](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python) on Kalman Filters. (Majority of the examples in the notes are from this)
2. Probabilistic Robotics by Sebastian Thrun, Wolfram Burgard and Dieter Fox, MIT Press.
3. Scipy [Documentation](https://scipython.com/blog/visualizing-the-bivariate-gaussian-distribution/)
```
from collections import Counter
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from nltk.tokenize import word_tokenize
from nltk.corpus import gutenberg
data = "person nationalities religious political groups facbuildings airports highways bridges orggpe countries cities statesloc product event work_of_art law language date time percent money quantity ordinal person proper location date numbers organization person location organization person location organization miscellaneous person location organization designation abbreviation number measure terms time person location organization miscellaneous date time number percentages monetary expressions measurement expressions person location organization country other month week party artifact entertainment facilities location locomotive materials organisms organization person plants count distance money quantity date day period time year person location organization person designation organization abbreviation title person title-object location time number measure terms person location organization person location organization miscellaneous date time number percentages monetary expressions measurement expressions personlocation organization miscellaneous date time number percentages monetary expressions measurement expressions person location organization miscellaneous date time number percentages monetary expressions measurement expressions person location organization person organization location expressions of times quantitiesnumber person location organization ex absolute temporal terms timex monetary person location organization time currency percentage person location organization person location organization person location organization time"
wordlist = data.split()
wordfreq = []
for w in wordlist:
wordfreq.append(wordlist.count(w))
zipped_list = str(zip(wordlist, wordfreq))
#print([i for i in zip(wordlist, wordfreq)])
indexes = np.arange(len(wordlist))
width = .5
#print(indexes)
plt.bar(indexes, wordfreq, width, color="orange", label="word frequency")
plt.xticks(indexes, wordlist, rotation=90)
plt.show()
from wordcloud import WordCloud, STOPWORDS
import matplotlib.pyplot as plt
import cv2
import urllib.request
# In[2]:
def freq(text):
    # word frequency: print how often each unique word occurs in the text
    words = text.split()
    unique_words = []
    for w in words:
        if w not in unique_words:
            unique_words.append(w)
    for w in unique_words:
        print(w, ':', words.count(w))
# In[3]:
str = "person nationalities religious political groups facbuildings airports highways bridges orggpe countries cities statesloc product event work_of_art law language date time percent money quantity ordinal person proper location date numbers organization person location organization person location organization miscellaneous person location organization designation abbreviation number measure terms time person location organization miscellaneous date time number percentages monetary expressions measurement expressions person location organization country other month week party artifact entertainment facilities location locomotive materials organisms organization person plants count distance money quantity date day period time year person location organization person designation organization abbreviation title person title-object location time number measure terms person location organization person location organization miscellaneous date time number percentages monetary expressions measurement expressions personlocation organization miscellaneous date time number percentages monetary expressions measurement expressions person location organization miscellaneous date time number percentages monetary expressions measurement expressions person location organization person organization location expressions of times quantitiesnumber person location organization ex absolute temporal terms timex monetary person location organization time currency percentage person location organization person location organization person location organization time"
freq(str)
# In[4]:
wordcloud = WordCloud(width=1480, height=1480, max_words=10).generate(str)
plt.imshow(wordcloud)
plt.show()
from collections import Counter
import pandas as pd
import nltk
import numpy as np
import matplotlib.pyplot as plt
sno = nltk.stem.SnowballStemmer('english')
s = "person nationalities religious political groups facbuildings airports highways bridges orggpe countries cities states location product event work_of_art law language date time percent money quantity ordinal person proper location date numbers organization person location organization person location organization miscellaneous person location organization designation abbreviation number measure terms time person location organization miscellaneous date time number percentage monetary expressions measurement expressions person location organization country other month week party artifact entertainment facilities location location materials organization person plants count distance money quantity date day period time year person location organization person designation organization abbreviation title person person location time number measure terms person location organization person location organization miscellaneous date time number percentage monetary expressions measurement expressions person location organization miscellaneous date time number percentage monetary expressions measurement expressions person location organization miscellaneous date time number percentage monetary expressions measurement expressions person location organization person organization location expressions of times quantities number person locationmorganization absolute temporal terms timex monetary person location organization time currency percentage person location organization person location organization person location organization time"
s1 = s.split(' ')
d = pd.DataFrame(s1)
s2 = d[0].apply(lambda x: sno.stem(x))
counter = Counter(s2)
#print(counter)
author_names = counter.keys()
#print(author_names)
author_counts = counter.values()
#print(author_counts)
# Plot histogram using matplotlib bar().
indexes = np.arange(len(author_names))
#print(indexes)
width = .80
fig, ax = plt.subplots()
rects1 = ax.bar(indexes, author_counts, width, color='r')
ax.set_ylim(0,25)
ax.set_ylabel('Frequency')
ax.set_title('Insert Title Here')
ax.set_xticks(np.add(indexes,(width/2))) # set the position of the x ticks
ax.set_xticklabels(author_names)
def autolabel(rects):
for rect in rects:
height = rect.get_height()
ax.text(rect.get_x() + rect.get_width()/2.0, 1.05*height,'%d' % int(height),ha='center', va='bottom')
autolabel(rects1)
plt.show()
```
```
%matplotlib inline
from pyvista import set_plot_theme
set_plot_theme('document')
```
Parametric Geometric Objects {#ref_parametric_example}
============================
Creating parametric objects
```
from math import pi
import pyvista as pv
```
This example demonstrates how to plot parametric objects using PyVista.
Supertoroid
===========
```
supertoroid = pv.ParametricSuperToroid(n1=0.5)
supertoroid.plot(color="tan", smooth_shading=True)
```
Parametric Ellipsoid
====================
```
# Ellipsoid with a long x axis
ellipsoid = pv.ParametricEllipsoid(10, 5, 5)
ellipsoid.plot(color="tan")
```
Partial Parametric Ellipsoid
============================
```
# cool plotting direction
cpos = [
(21.9930, 21.1810, -30.3780),
(-1.1640, -1.3098, -0.1061),
(0.8498, -0.2515, 0.4631),
]
# half ellipsoid
part_ellipsoid = pv.ParametricEllipsoid(10, 5, 5, max_v=pi / 2)
part_ellipsoid.plot(color="tan", smooth_shading=True, cpos=cpos)
```
Pseudosphere
============
```
pseudosphere = pv.ParametricPseudosphere()
pseudosphere.plot(color="tan", smooth_shading=True)
```
Bohemian Dome
=============
```
bohemiandome = pv.ParametricBohemianDome()
bohemiandome.plot(color="tan")
```
Bour
====
```
bour = pv.ParametricBour()
bour.plot(color="tan")
```
Boy's Surface
==============
```
boy = pv.ParametricBoy()
boy.plot(color="tan")
```
Catalan Minimal
===============
```
catalanminimal = pv.ParametricCatalanMinimal()
catalanminimal.plot(color="tan")
```
Conic Spiral
============
```
conicspiral = pv.ParametricConicSpiral()
conicspiral.plot(color="tan")
```
Cross Cap
=========
```
crosscap = pv.ParametricCrossCap()
crosscap.plot(color="tan")
```
Dini
====
```
dini = pv.ParametricDini()
dini.plot(color="tan")
```
Enneper
=======
```
enneper = pv.ParametricEnneper()
enneper.plot(cpos="yz")
```
Figure-8 Klein
==============
```
figure8klein = pv.ParametricFigure8Klein()
figure8klein.plot()
```
Henneberg
=========
```
henneberg = pv.ParametricHenneberg()
henneberg.plot(color="tan")
```
Klein
=====
```
klein = pv.ParametricKlein()
klein.plot(color="tan")
```
Kuen
====
```
kuen = pv.ParametricKuen()
kuen.plot(color="tan")
```
Mobius
======
```
mobius = pv.ParametricMobius()
mobius.plot(color="tan")
```
Plucker Conoid
==============
```
pluckerconoid = pv.ParametricPluckerConoid()
pluckerconoid.plot(color="tan")
```
Random Hills
============
```
randomhills = pv.ParametricRandomHills()
randomhills.plot(color="tan")
```
Roman
=====
```
roman = pv.ParametricRoman()
roman.plot(color="tan")
```
Super Ellipsoid
===============
```
superellipsoid = pv.ParametricSuperEllipsoid(n1=0.1, n2=2)
superellipsoid.plot(color="tan")
```
Torus
=====
```
torus = pv.ParametricTorus()
torus.plot(color="tan")
```
Circular Arc
============
```
pointa = [-1, 0, 0]
pointb = [0, 1, 0]
center = [0, 0, 0]
resolution = 100
arc = pv.CircularArc(pointa, pointb, center, resolution)
pl = pv.Plotter()
pl.add_mesh(arc, color='k', line_width=4)
pl.show_bounds()
pl.view_xy()
pl.show()
```
Extruded Half Arc
=================
```
pointa = [-1, 0, 0]
pointb = [1, 0, 0]
center = [0, 0, 0]
resolution = 100
arc = pv.CircularArc(pointa, pointb, center, resolution)
poly = arc.extrude([0, 0, 1])
poly.plot(color="tan", cpos='iso', show_edges=True)
```
# Install libraries
Install the required libraries through pip.
```
!pip install google-cloud-language spacy
!pip install --upgrade networkx
```
Download the [required model](https://spacy.io/usage/models).
```
!python -m spacy download en_core_web_sm
```
# Import libraries
Libraries for making use of [Google NLP](https://cloud.google.com/natural-language/). This API has very strong entity extraction.
```
from google.cloud import language
from google.cloud.language import enums
from google.cloud.language import types
```
Library for regular expressions, used in basic data loading.
```
import re
```
We use [spaCy](https://spacy.io/) for linguistic analysis of the statements, in particular to extract their sentence structure.
```
import spacy
```
We use [Pandas](https://pandas.pydata.org/) for dealing with tabular data.
```
import pandas as pd
```
Standard python libraries we use to deal with:
- processes that are IO heavy;
- combining iterators;
- exporting data.
```
import concurrent.futures
import itertools
import json
```
# Include the sole sample document
Only _one_ sample document was given. We include it here for convenience's sake.
```
document = """WITNESS B
Date of birth: December 17, 1991
Date statement taken: September 14, 2006
A. Circumstances of the enlistment
1. My name is “Marcus BRODY.” I am 15 years old.
2. In late 2002, I can’t remember exactly when, there was a lot of fighting in Goma, where I was living with my cousins. My parents and sisters were already killed by the Rebels. They had attacked our village one night with guns and grenades.
3. I went to a rally in Goma where several Government commanders spoke and said that the President and the DCP were defending Congo. I remember one of the commanders who spoke was Chief KOBONO, because everyone cheered when he got up to speak. There were many soldiers at the rally, about the same as the number of children in my school . Each family had to contribute, a cow or a goat or a child. I was around ten years old then. My aunt said I should go, since my family was dead because of the Rebels. I followed the DCP soldiers.
B. Training at Kalemie
4. The DCP soldiers took me to a training camp called Kalemie, which was a few miles outside of my village. A number of us, men and children, were transported there in a green “stout” (open back truck). The camp was very big, with many soldiers and lots of guns. I think that some of the trainers were from another country because sometimes they spoke in a language I did not really understand. The training was hard at first, but I was good at running and shooting, so the commanders knew I would make a good soldier. They did not give us guns to keep at first, but we learnt how to clean and use them properly.
5. There were lots of new recruits all the time, children as well as men. There were girls as well. Most of the children were around my age, but some were younger, as young as seven years old. Many of the girls were wives of the commanders. The commanders called them their wives, but the girls did not talk about it much. A good wife spends the entire night with her man. The commanders laughed and said that if we boys learned to be good fighters, we would have many wives too. I did not really take them seriously as I was just a boy.
6. The commanders also offered us marijuana to smoke. They said it would help us to relax. I did not smoke, but many of the other boys and girls smoked with the commanders. When the boys smoked, they seemed to go calm.
7. The commander of the camp was BAGOR. We all knew him by name.
8. I remember he or maybe another commander spoke to the soldiers and recruits one time after the evening meal. He said that we should kill all Rebels, that they were the enemy. He said that the purpose of all our hard training was to prepare us for fighting the Rebels, who were taking our land and trying to kill us. He said the enemy had already killed many of our family members, and we were entitled to revenge. He kept saying that they were the “enemy.” We should kill all of them, men, women and children, and destroy their villages. It was our duty, what we were meant to do.
9. The President, Ule MATOBO GOBO, visited the camp one time. He arrived in a green jeep with several other commanders. We received special instructions in preparation for his visit. Other members of the militia taught us what to do. If the President came to the camp, you had to lift your gun, holding the base in your hand and putting the barrel on your shoulder, march in front of him with your legs good and straight. I had practiced this salute many times before the President arrived, and my commander told me afterwards that my salute was one of the best in my group.
10. The President spoke to the regular soldiers, who were in uniform, as well as the children and new adult recruits. We were all assembled in a big hut in the middle of the camp. There were many of us who were brought in to hear the President speak, the whole camp. There were many other children in the crowd, boys and girls, most of whom were my age.
11. The President spent all morning talking to the soldiers and recruits. He told us that we were here to become a trained army that would bring peace to Congo. He said our enemies were all those who were opposed to peace. He said that after the fighting was done we would be able to go back to school, get other training. This fighting, he said, was for the good of Congo in the end. The soldiers in the crowd cheered loudly at the end of the speech. I too was moved by the President’s speech and wanted to fight to protect my people.
12. The President said that when we were done with our training, we would each receive a gun. At that time, I had not yet received my own gun.
13. We talked about the President’s visit afterwards with the soldiers and commanders. I was excited to get my personal gun and uniform, to be like the soldiers.
C. Participation in Attacks
14. Finally the fighting came. In February 2003, we were told there was a big attack coming, on Bankana. I knew it was February because it was the beginning of the rainy season, and it rained heavily every day for a long time. I had my gun, and because the commanders knew I was a good soldier, I was sent to the front line of the fighting. There were many other boys my age on the front line, and some girls too, though just a few.
15. The commanders reminded us that we were brave and that we must use our training to destroy the enemy. He offered us marijuana to help us relax. I did not take any, but several of the boys smoked and it seemed to help them relax. Commander BAGOR, told us that the fighting was for the good of Congo in the end.
16. The attack lasted the whole day. We were met with fierce resistance from the armed soldiers of the village. I shot many rounds, and killed several people. I saw the soldiers from my platoon take a mother and her daughter from a house. The soldiers pulled the mother away from her daughter. I saw the soldiers kill the child in front of her mother with a machete.
17. We also captured some prisoners, both men and women. During the attack, the commander told us to burn the houses and destroy the crops. We set many fires to the buildings. There was a lot of confusion.
18. The day after the attack, the DCP troops under the command of BAGOR were inspected by Commander AL-ZARIAN and Commander IKE DUBAKU at the militia headquarters in Mongbwalu. That was the first time I saw Commander AL-ZARIAN. He was a very high-ranking commander, a very important man. He said his name to us, but I had heard it before, since he was such an important commander. IKE DUBAKU ordered our platoons to re-attack Bankana and told us to follow our commanders to the frontline.
19. Several days later, we attacked Bankana again with another platoon that was bigger than ours. I do not know how many men and children were in the platoon, maybe as many pupils as there were in my school. Again, my commander sent me to the frontline, along with several other boys my age. The men threw hand grenades and launched missiles to start the attack. I used my gun and shot and killed many enemies. We came under attack from armed men in the village, but I managed to make it out without being injured. Two of my friends, however, boys my age, both died in the attack, along with a few others from the platoon that accompanied us. One of the girls in the other platoon was shot in the foot.
D. Role as bodyguard
20. Afterward, I was posted to guard Commander TCHAZA’s house in a neighbourhood of Kinshasa. I wore a “tâches-tâche” uniform and carried a gun, a fusil. It was an honor to be told to guard the commander. I patrolled his compound and searched all those who visited him. Sometimes I would also accompany him to rallies in the Government villages and meetings with his commanders. I would have shot anyone who tried to harm him.
E. Demobilization
21. After the President left, IKE called me on the phone and told me to go home. IKE’s phone number is 08984948494. I kept my gun but went back to my village to try to find my cousins. Some had been killed while I was away. I stayed there but I was not able to find any work and I did not want to go to school.
22. After wandering a little from village to village, where I did some field-work in exchange for food and lodging, I ended up at the center of Gbadolite for child soldiers in Kinshasa.
23. At the center, I was interviewed several times by a white woman named SANARA who asked me about my experience in the militia. She would take notes of our conversations. There were many other children in the center, enough to fill my entire school. In the room where I slept, there were about 7 girls/boys and we were all between 14 and 17 years of age. I made friends with many of them and we used to talk about our life as soldiers.
24. I had been staying at the center until I was transferred to a safe-house to await the trial.
"""
```
# Extract _events_ and _statements_
The document is structured into _events_ and _statements_. An analyst transforms a recording of an interview between the witness and an investigator into a _witness report_. This report is grouped into events, and each event is supported by numbered statements. We assume all witness statements are of this form.
```
def extract_events(document: str) -> pd.DataFrame:
# Extract events from document
extracted_events = [
(event.span()[0], event.span()[1], event.group('index'), event.group('event'))
for event in re.finditer(r'\n(?P<index>[A-Z]+)\.\s+(?P<event>.*)', document)
]
# Convert into Pandas dataframe
df_events = pd.DataFrame([
(event[2], event[3], document[event[1]:next_event[0]].strip())
for event, next_event in zip(extracted_events, extracted_events[1:])
], columns= [ 'letter','header', 'content']
)
# Extract statements into Pandas dataframe
df_statements \
= df_events\
.content.str.extractall(r'(?:^|\n)(?P<nr_statement>\d+)\.\s*(?P<statement>.*)')\
.assign(nr_statement = lambda df: df.nr_statement.astype(int))\
.reset_index()\
.merge(df_events[['letter','header']], left_on='level_0', right_index=True)\
.drop(['match','level_0'], axis=1)\
.set_index('nr_statement')
return df_statements[['letter', 'header', 'statement']]
```
Actually extract _events_ and _statements_.
```
df_statements = extract_events(document)
```
Output the data to JSON, to be used in the visualisation.
```
print(json.dumps(df_statements.to_dict(orient='index')))
```
# Analyze the entities per statement
Use Google NLP to analyze the entities per statement. If more training data were available, a custom `spacy` model would also be possible (a spaCy-based sketch follows the next cell).
```
def analyze_document(client:object, doc:str):
# Convert into Google NLP Doc
document = types.Document(
content= doc,
type= enums.Document.Type.PLAIN_TEXT
)
# Analyze entities
entities = client.analyze_entities(document).entities
return [{
'name' : entity.name,
'salience': entity.salience,
'mentioned_as' : mention.text.content,
'mentioned_type': mention.type
}\
for entity in entities
for mention in entity.mentions
]
```
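As mentioned above, spaCy can also serve as an entity extractor. A minimal sketch using the built-in named-entity recogniser of `en_core_web_sm` instead of the Google API; `analyze_document_spacy` is a made-up name here, and spaCy's labels (PERSON, GPE, DATE, ...) differ from Google's entity types:
```
def analyze_document_spacy(nlp, doc:str):
    """Rough spaCy-based stand-in for analyze_document, using built-in NER only."""
    parsed = nlp(doc)
    return [{
        'name' : ent.text,
        'label': ent.label_,        # e.g. PERSON, GPE, DATE
        'start': ent.start_char,
        'end'  : ent.end_char,
    } for ent in parsed.ents]

# Example usage (spacy_en is loaded further down in this notebook):
# analyze_document_spacy(spacy_en, df_statements.statement.loc[1])
```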
Method to query Google NLP in parallel to reduce wait-time.
```
def analyze_entities(docs: pd.Series) -> pd.DataFrame:
# Create NLP client
client = language.LanguageServiceClient()
def entity_iterator():
# Work with a threadpool
with concurrent.futures.ThreadPoolExecutor(max_workers=8) as executor:
# Query for entities
task_dict = {
executor.submit(analyze_document, client, statement) : index\
for index, statement in docs.items()
}
for future in concurrent.futures.as_completed(task_dict):
nr_statement = task_dict[future]
try:
data = future.result()
except Exception as exc:
print('Statement {:} generated an exception: {:}.'.format(nr_statement, exc))
else:
for entity in data:
entity['nr_statement']= nr_statement
yield data
return pd.DataFrame(list(itertools.chain(*entity_iterator())))
```
Get an overview of all entities.
```
df_entities = analyze_entities(df_statements.statement)
```
## Analyze the occurrence of entities per statement
The main entities are those that are referenced as `PROPER`; see [the documentation](https://cloud.google.com/natural-language/docs/reference/rest/v1/Entity).
```
main_entities = df_entities.query('mentioned_type == 1').name.unique()
df_entities_per_statement \
= df_entities[lambda df: df.name.isin(main_entities)]\
.groupby(by=['name','nr_statement'])\
.size()\
.unstack('nr_statement')\
.fillna(0)
```
## Analyze the mentions of entities in text
Load the necessary model.
```
spacy_en = spacy.load('en_core_web_sm')
```
Analyze each statement in the dataframe.
```
def extract_relations_in_statements(df: pd.DataFrame):
    def relation_iterator():
        # Iterate over the statements of the dataframe passed in (not the global)
        for statement_nr, statement in df.statement.items():
            # Parse the statement
            parsed_statement = spacy_en(statement)
            for chunk in parsed_statement.noun_chunks:
                # Only consider verb relationships
                if chunk.root.head.pos_ == 'VERB':
                    yield {
                        'from_raw' : chunk.text,
                        'from' : chunk.lemma_,
                        'verb' : chunk.root.head.lemma_,
                        'verb_raw': chunk.root.head.text,
                        'statement': statement_nr
                    }
    return pd.DataFrame(list(relation_iterator()))
df_statement_relations = extract_relations_in_statements(df_statements)
```
## Analyze relations between events and entities based on statements
Main entities can be found in any noun phrase by matching to a regex. Indirect references are missed in this way.
```
main_entity_regex = re.compile('(?P<entity>{:})'.format('|'.join(main_entities)), re.IGNORECASE)
```
Store a dict with all relationships between verbs and entities.
```
entity_statement_relations = df_statement_relations\
.assign(entity = lambda df: df.from_raw.str.extract(main_entity_regex, expand= False))\
[lambda df: ~df.entity.isnull()]\
.to_dict(orient='index')
```
Store a dict with all relationships between verbs and general noun phrases.
```
all_relations = df_statement_relations.to_dict(orient='index')
```
Output the data to command line.
```
print(json.dumps(list(entity_statement_relations.values()), indent=2, sort_keys=True))
print(json.dumps(list(all_relations.values()), indent=2, sort_keys=True))
```
## Analyze verbs per statement
Create overview of key verbs per statement.
```
df_verbs_per_statement = df_statement_relations.groupby(by=['statement','verb']).size().rename('occurrence').reset_index()
print(json.dumps(list(df_verbs_per_statement.to_dict(orient='index').values()), indent=2, sort_keys=True))
```
|
github_jupyter
|
!pip install google-cloud-language spacy
!pip install --upgrade networkx
!python -m spacy download en_core_web_sm
from google.cloud import language
from google.cloud.language import enums
from google.cloud.language import types
import re
import spacy
import pandas as pd
import concurrent.futures
import itertools
import json
document = """WITNESS B
Date of birth: December 17, 1991
Date statement taken: September 14, 2006
A. Circumstances of the enlistment
1. My name is “Marcus BRODY.” I am 15 years old.
2. In late 2002, I can’t remember exactly when, there was a lot of fighting in Goma, where I was living with my cousins. My parents and sisters were already killed by the Rebels. They had attacked our village one night with guns and grenades.
3. I went to a rally in Goma where several Government commanders spoke and said that the President and the DCP were defending Congo. I remember one of the commanders who spoke was Chief KOBONO, because everyone cheered when he got up to speak. There were many soldiers at the rally, about the same as the number of children in my school . Each family had to contribute, a cow or a goat or a child. I was around ten years old then. My aunt said I should go, since my family was dead because of the Rebels. I followed the DCP soldiers.
B. Training at Kalemie
4. The DCP soldiers took me to a training camp called Kalemie, which was a few miles outside of my village. A number of us, men and children, were transported there in a green “stout” (open back truck). The camp was very big, with many soldiers and lots of guns. I think that some of the trainers were from another country because sometimes they spoke in a language I did not really understand. The training was hard at first, but I was good at running and shooting, so the commanders knew I would make a good soldier. They did not give us guns to keep at first, but we learnt how to clean and use them properly.
5. There were lots of new recruits all the time, children as well as men. There were girls as well. Most of the children were around my age, but some were younger, as young as seven years old. Many of the girls were wives of the commanders. The commanders called them their wives, but the girls did not talk about it much. A good wife spends the entire night with her man. The commanders laughed and said that if we boys learned to be good fighters, we would have many wives too. I did not really take them seriously as I was just a boy.
6. The commanders also offered us marijuana to smoke. They said it would help us to relax. I did not smoke, but many of the other boys and girls smoked with the commanders. When the boys smoked, they seemed to go calm.
7. The commander of the camp was BAGOR. We all knew him by name.
8. I remember he or maybe another commander spoke to the soldiers and recruits one time after the evening meal. He said that we should kill all Rebels, that they were the enemy. He said that the purpose of all our hard training was to prepare us for fighting the Rebels, who were taking our land and trying to kill us. He said the enemy had already killed many of our family members, and we were entitled to revenge. He kept saying that they were the “enemy.” We should kill all of them, men, women and children, and destroy their villages. It was our duty, what we were meant to do.
9. The President, Ule MATOBO GOBO, visited the camp one time. He arrived in a green jeep with several other commanders. We received special instructions in preparation for his visit. Other members of the militia taught us what to do. If the President came to the camp, you had to lift your gun, holding the base in your hand and putting the barrel on your shoulder, march in front of him with your legs good and straight. I had practiced this salute many times before the President arrived, and my commander told me afterwards that my salute was one of the best in my group.
10. The President spoke to the regular soldiers, who were in uniform, as well as the children and new adult recruits. We were all assembled in a big hut in the middle of the camp. There were many of us who were brought in to hear the President speak, the whole camp. There were many other children in the crowd, boys and girls, most of whom were my age.
11. The President spent all morning talking to the soldiers and recruits. He told us that we were here to become a trained army that would bring peace to Congo. He said our enemies were all those who were opposed to peace. He said that after the fighting was done we would be able to go back to school, get other training. This fighting, he said, was for the good of Congo in the end. The soldiers in the crowd cheered loudly at the end of the speech. I too was moved by the President’s speech and wanted to fight to protect my people.
12. The President said that when we were done with our training, we would each receive a gun. At that time, I had not yet received my own gun.
13. We talked about the President’s visit afterwards with the soldiers and commanders. I was excited to get my personal gun and uniform, to be like the soldiers.
C. Participation in Attacks
14. Finally the fighting came. In February 2003, we were told there was a big attack coming, on Bankana. I knew it was February because it was the beginning of the rainy season, and it rained heavily every day for a long time. I had my gun, and because the commanders knew I was a good soldier, I was sent to the front line of the fighting. There were many other boys my age on the front line, and some girls too, though just a few.
15. The commanders reminded us that we were brave and that we must use our training to destroy the enemy. He offered us marijuana to help us relax. I did not take any, but several of the boys smoked and it seemed to help them relax. Commander BAGOR, told us that the fighting was for the good of Congo in the end.
16. The attack lasted the whole day. We were met with fierce resistance from the armed soldiers of the village. I shot many rounds, and killed several people. I saw the soldiers from my platoon take a mother and her daughter from a house. The soldiers pulled the mother away from her daughter. I saw the soldiers kill the child in front of her mother with a machete.
17. We also captured some prisoners, both men and women. During the attack, the commander told us to burn the houses and destroy the crops. We set many fires to the buildings. There was a lot of confusion.
18. The day after the attack, the DCP troops under the command of BAGOR were inspected by Commander AL-ZARIAN and Commander IKE DUBAKU at the militia headquarters in Mongbwalu. That was the first time I saw Commander AL-ZARIAN. He was a very high-ranking commander, a very important man. He said his name to us, but I had heard it before, since he was such an important commander. IKE DUBAKU ordered our platoons to re-attack Bankana and told us to follow our commanders to the frontline.
19. Several days later, we attacked Bankana again with another platoon that was bigger than ours. I do not know how many men and children were in the platoon, maybe as many pupils as there were in my school. Again, my commander sent me to the frontline, along with several other boys my age. The men threw hand grenades and launched missiles to start the attack. I used my gun and shot and killed many enemies. We came under attack from armed men in the village, but I managed to make it out without being injured. Two of my friends, however, boys my age, both died in the attack, along with a few others from the platoon that accompanied us. One of the girls in the other platoon was shot in the foot.
D. Role as bodyguard
20. Afterward, I was posted to guard Commander TCHAZA’s house in a neighbourhood of Kinshasa. I wore a “tâches-tâche” uniform and carried a gun, a fusil. It was an honor to be told to guard the commander. I patrolled his compound and searched all those who visited him. Sometimes I would also accompany him to rallies in the Government villages and meetings with his commanders. I would have shot anyone who tried to harm him.
E. Demobilization
21. After the President left, IKE called me on the phone and told me to go home. IKE’s phone number is 08984948494. I kept my gun but went back to my village to try to find my cousins. Some had been killed while I was away. I stayed there but I was not able to find any work and I did not want to go to school.
22. After wandering a little from village to village, where I did some field-work in exchange for food and lodging, I ended up at the center of Gbadolite for child soldiers in Kinshasa.
23. At the center, I was interviewed several times by a white woman named SANARA who asked me about my experience in the militia. She would take notes of our conversations. There were many other children in the center, enough to fill my entire school. In the room where I slept, there were about 7 girls/boys and we were all between 14 and 17 years of age. I made friends with many of them and we used to talk about our life as soldiers.
24. I had been staying at the center until I was transferred to a safe-house to await the trial.
"""
def extract_events(document: str) -> pd.DataFrame:
# Extract events from document
extracted_events = [
(event.span()[0], event.span()[1], event.group('index'), event.group('event'))
for event in re.finditer(r'\n(?P<index>[A-Z]+)\.\s+(?P<event>.*)', document)
]
# Convert into Pandas dataframe
df_events = pd.DataFrame([
(event[2], event[3], document[event[1]:next_event[0]].strip())
for event, next_event in zip(extracted_events, extracted_events[1:])
], columns= [ 'letter','header', 'content']
)
# Extract statements into Pandas dataframe
df_statements \
= df_events\
.content.str.extractall(r'(?:^|\n)(?P<nr_statement>\d+)\.\s*(?P<statement>.*)')\
.assign(nr_statement = lambda df: df.nr_statement.astype(int))\
.reset_index()\
.merge(df_events[['letter','header']], left_on='level_0', right_index=True)\
.drop(['match','level_0'], axis=1)\
.set_index('nr_statement')
return df_statements[['letter', 'header', 'statement']]
df_statements = extract_events(document)
print(json.dumps(df_statements.to_dict(orient='index')))
def analyze_document(client:object, doc:str):
# Convert into Google NLP Doc
document = types.Document(
content= doc,
type= enums.Document.Type.PLAIN_TEXT
)
# Analyze entities
entities = client.analyze_entities(document).entities
return [{
'name' : entity.name,
'salience': entity.salience,
'mentioned_as' : mention.text.content,
'mentioned_type': mention.type
}\
for entity in entities
for mention in entity.mentions
]
def analyze_entities(docs: pd.Series) -> pd.DataFrame:
# Create NLP client
client = language.LanguageServiceClient()
def entity_iterator():
# Work with a threadpool
with concurrent.futures.ThreadPoolExecutor(max_workers=8) as executor:
# Query for entities
task_dict = {
executor.submit(analyze_document, client, statement) : index\
for index, statement in docs.items()
}
for future in concurrent.futures.as_completed(task_dict):
nr_statement = task_dict[future]
try:
data = future.result()
except Exception as exc:
print('Statement {:} generated an exception: {:}.'.format(nr_statement, exc))
else:
for entity in data:
entity['nr_statement']= nr_statement
yield data
return pd.DataFrame(list(itertools.chain(*entity_iterator())))
df_entities = analyze_entities(df_statements.statement)
main_entities = df_entities.query('mentioned_type == 1').name.unique()
df_entities_per_statement \
= df_entities[lambda df: df.name.isin(main_entities)]\
.groupby(by=['name','nr_statement'])\
.size()\
.unstack('nr_statement')\
.fillna(0)
spacy_en = spacy.load('en_core_web_sm')
def extract_relations_in_statements(df:pd.DataFrame):
def relation_iterator():
for statement_nr, statement in df_statements.statement.iteritems():
# Parse the statement
parsed_statement = spacy_en(statement)
for chunk in parsed_statement.noun_chunks:
# Only consider verb relationships
if chunk.root.head.pos_ == 'VERB':
yield {
'from_raw' : chunk.text,
'from' : chunk.lemma_,
'verb' : chunk.root.head.lemma_,
'verb_raw': chunk.root.head.text,
'statement': statement_nr
}
return pd.DataFrame(list(relation_iterator()))
df_statement_relations = extract_relations_in_statements(df_statements)
main_entity_regex = re.compile('(?P<entity>{:})'.format('|'.join(main_entities)), re.IGNORECASE)
entity_statement_relations = df_statement_relations\
.assign(entity = lambda df: df.from_raw.str.extract(main_entity_regex, expand= False))\
[lambda df: ~df.entity.isnull()]\
.to_dict(orient='index')
all_relations = df_statement_relations.to_dict(orient='index')
print(json.dumps(list(entity_statement_relations.values()), indent=2, sort_keys=True))
print(json.dumps(list(all_relations.values()), indent=2, sort_keys=True))
df_verbs_per_statement = df_statement_relations.groupby(by=['statement','verb']).size().rename('occurrence').reset_index()
print(json.dumps(list(df_verbs_per_statement.to_dict(orient='index').values()), indent=2, sort_keys=True))
```
import gym
import itertools
import matplotlib
import matplotlib.style
import numpy as np
import pandas as pd
import sys
if "../" not in sys.path:
sys.path.append("../")
from lib.envs.windy_gridworld import WindyGridworldEnv
from collections import defaultdict
from lib.envs import plotting
matplotlib.style.use('ggplot')
env = WindyGridworldEnv()
#3 : Make the $\epsilon$-greedy policy.
def createEpsilonGreedyPolicy(Q, epsilon, num_actions):
"""
Creates an epsilon-greedy policy based
on a given Q-function and epsilon.
Returns a function that takes the state
as an input and returns the probabilities
for each action in the form of a numpy array
of length of the action space(set of possible actions).
"""
def policyFunction(state):
Action_probabilities = np.ones(num_actions,
dtype = float) * epsilon / num_actions
best_action = np.argmax(Q[state])
Action_probabilities[best_action] += (1.0 - epsilon)
return Action_probabilities
return policyFunction
def qLearning(env, num_episodes, discount_factor = 1.0,
alpha = 0.6, epsilon = 0.1):
"""
Q-Learning algorithm: Off-policy TD control.
Finds the optimal greedy policy while improving
following an epsilon-greedy policy"""
# Action value function
# A nested dictionary that maps
# state -> (action -> action-value).
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# Keeps track of useful statistics
stats = plotting.EpisodeStats( episode_lengths = np.zeros(num_episodes), episode_rewards = np.zeros(num_episodes), )
# Create an epsilon greedy policy function
# appropriately for environment action space
policy = createEpsilonGreedyPolicy(Q, epsilon, env.action_space.n)
# For every episode
for ith_episode in range(num_episodes):
# Reset the environment and pick the first action
state = env.reset()
for t in itertools.count():
# get probabilities of all actions from current state
action_probabilities = policy(state)
# choose action according to
# the probability distribution
action = np.random.choice(np.arange(
len(action_probabilities)),
p = action_probabilities)
# take action and get reward, transit to next state
next_state, reward, done, _ = env.step(action)
# Update statistics
stats.episode_rewards[ith_episode] += reward
stats.episode_lengths[ith_episode] = t
# TD Update
best_next_action = np.argmax(Q[next_state])
td_target = reward + discount_factor * Q[next_state][best_next_action]
td_delta = td_target - Q[state][action]
Q[state][action] += alpha * td_delta
# done is True if episode terminated
if done:
break
state = next_state
return Q, stats
Q, stats = qLearning(env, 1000)
plotting.plot_episode_stats(stats)
```
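The `Q` table returned above maps each state to an array of action values, so the learned greedy policy is simply the arg-max action per state. A minimal sketch of how to extract it (assuming the `Q` and the imports from the cell above):
```
# Derive the deterministic greedy policy from the learned action values
greedy_policy = {state: int(np.argmax(action_values)) for state, action_values in Q.items()}
# Peek at a few state -> action decisions
print(list(greedy_policy.items())[:5])
```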
# Image Classification with Fashion MNIST
> In this post, we will implement image classification (specifically on Fashion MNIST) with a neural network using Tensorflow.
- toc: true
- badges: true
- comments: true
- author: Chanseok Kang
- categories: [Python, Deep_Learning, Tensorflow-Keras]
- image: images/FashionMNIST.png
```
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
plt.rcParams['figure.figsize'] = (16, 10)
plt.rc('font', size=15)
```
## Fashion MNIST

Yann LeCun introduced the Convolutional Neural Network (CNN for short), namely **LeNet-5**, in [his paper](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf) and showed its effectiveness on hand-written digit recognition. The dataset used in his paper is called ["Modified National Institute of Standards and Technology"](http://yann.lecun.com/exdb/mnist/) (or MNIST for short), and it is widely used for validating neural network performance.

Each image has a 28x28 shape and is grayscale (meaning each pixel value ranges from 0 to 255). But as you can notice from the original images, the features of each digit are quite clear, so most modern neural networks can learn this dataset easily, and the task is no longer a challenging benchmark. So there have been several attempts to provide a harder baseline dataset. One of these is [Fashion-MNIST](https://www.kaggle.com/zalando-research/fashionmnist), presented by Zalando Research. Its images also have 28x28 pixels and there are 10 labels to classify, so the main properties are the same as the original MNIST, but it is harder to classify.

In this post, we will perform Fashion MNIST classification with tensorflow 2.x. For the prerequisites of the implementation, please check the previous posts.
### Data Preprocessing
tensorflow-keras includes several baseline datasets, including Fashion MNIST. It contains 60,000 training images, 10,000 test images for validation, and 10 labels, and each image is grayscale. First, we load the dataset into variables and see what it looks like.
```
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
print(X_train[0])
print(y_train[0])
print(X_train.shape)
print(y_train.shape)
```
As you can see, each pixel value ranges from 0 to 255, and each image is a 2D array. So we need to normalize it and reshape it into a 1D array before training the neural network (since we cover the MLP here, we need the 1D reshape; if we used a CNN, we would not need this conversion).
```
X_train = X_train / 255.
X_train = X_train.reshape([-1, 28*28])
X_train = X_train.astype(np.float32)
y_train = y_train.astype(np.int32)
X_test = X_test / 255.
X_test = X_test.reshape([-1, 28*28])
X_test = X_test.astype(np.float32)
y_test = y_test.astype(np.int32)
```
### Input Pipeline
As you saw in the previous post, the raw dataset needs to be converted into a tensorflow input pipeline. While building the input pipeline, we can chain methods such as shuffle, prefetch, and repeat. Note that the purpose of the test dataset is to measure performance, so we don't need to shuffle it.
```
# Train_dataset
train_ds = tf.data.Dataset.from_tensor_slices((X_train, y_train))\
.shuffle(buffer_size=len(X_train))\
.batch(batch_size=128)\
.prefetch(buffer_size=128)\
.repeat()
# Test dataset
test_ds = tf.data.Dataset.from_tensor_slices((X_test, y_test))\
.batch(batch_size=128)\
.prefetch(buffer_size=128)\
.repeat()
```
### Sample data visualization
It is important to inspect the dataset manually, and visualizing sample data helps with that. In this section, we'll visualize randomly sampled labeled images in a 5x5 grid.
```
labels_map = {0: 'T-Shirt', 1: 'Trouser', 2: 'Pullover', 3: 'Dress', 4: 'Coat',
5: 'Sandal', 6: 'Shirt', 7: 'Sneaker', 8: 'Bag', 9: 'Ankle Boot'}
columns = 5
rows = 5
fig = plt.figure(figsize=(8, 8))
for i in range(1, columns * rows+1):
data_idx = np.random.randint(len(X_train))
img = X_train[data_idx].reshape([28, 28])
label = labels_map[y_train[data_idx]]
fig.add_subplot(rows, columns, i)
plt.title(label)
plt.imshow(img, cmap='gray')
plt.axis('off')
plt.tight_layout()
plt.show()
```
### Building the Neural Network
In this section, we'll build a Multi Layer Perceptron (MLP for short) with 2 Dense layers. An MLP, also called an Artificial Neural Network, consists of several fully-connected layers. We can add an activation function (sigmoid, ReLU or softmax) to each layer, and we can also apply advanced techniques like weight initialization, Dropout or Batch Normalization. Here, we will build 2 Dense layers in a Sequential model.
```
# Build Sequential Model
model = tf.keras.Sequential(name='nn')
model.add(tf.keras.layers.Dense(256, input_shape=(28*28, )))
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.ReLU())
model.add(tf.keras.layers.Dense(10, activation='softmax'))
```
A few points to note:
- In the input layer, we implemented a Dense layer with 256 nodes that accepts a 28x28-shaped (i.e. 784-dimensional) input. Since the image shape is 28x28 and the flattened 1D array enters here, we need to define `input_shape`. Note that the `input_shape` argument has to be a tuple.
- We added Batch Normalization. Batch Normalization can reduce the effect of internal covariate shift and keeps the activation distribution well normalized.
- Here, we added the ReLU activation function. It could also be passed as an argument of the layer.
- Since this task is a multi-class classification, a softmax activation function is added at the output layer.
We can get a summary of this model. From the summary, we can check how many layers the model has, how many parameters it contains, etc.
```
model.summary()
```
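As a rough cross-check of the summary output, the parameter counts can be computed by hand (a quick sketch of the arithmetic, based on the layer sizes above):
- First Dense layer: 784 × 256 weights + 256 biases = 200,960 parameters
- BatchNormalization: 4 × 256 = 1,024 parameters (gamma, beta, and the two moving statistics)
- Output Dense layer: 256 × 10 weights + 10 biases = 2,570 parameters

That gives 204,554 parameters in total.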
### Model compile
We're almost at the end. Here, we need to compile the model before training. Before compiling, we have to define the loss function and the optimizer. As you can see in the [documentation](https://www.tensorflow.org/api_docs/python/tf/keras/losses), there are many predefined loss functions. In this task, we need to classify labels, so our loss function is a form of categorical crossentropy. Keep in mind that if your labels are one-hot encoded, you need to use `categorical_crossentropy`; since our labels are plain integers (label indices), we use `SparseCategoricalCrossentropy`.
The optimizer used here is Adam with a 0.01 learning rate.
```
# Model compile
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy'])
```
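For comparison, if the labels had been one-hot encoded instead of kept as integer indices, the compile step would use the non-sparse loss. A minimal sketch of this hypothetical variant (not used in the rest of the post; `alt_model` is just a throwaway clone):
```
# Hypothetical variant: one-hot labels + categorical crossentropy (not used below)
y_train_onehot = tf.keras.utils.to_categorical(y_train, num_classes=10)
alt_model = tf.keras.models.clone_model(model)
alt_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
                  loss=tf.keras.losses.CategoricalCrossentropy(),
                  metrics=['accuracy'])
```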
### Model fit
Finally, we can train the model with `model.fit`. Here, we also define the number of epochs and the steps per epoch (the batching itself is handled by the dataset pipeline).
```
model.fit(train_ds, steps_per_epoch=len(X_train) // 128, epochs=10)
```
### Evaluation
We now have the model with trained weights, so we can evaluate its performance on the test data. Since the test dataset is unseen by the model, its accuracy may be lower (and its loss higher) than on the training set.
```
loss, acc = model.evaluate(test_ds, steps=len(X_test) // 128)
print('test loss is {}'.format(loss))
print('test accuracy is {}'.format(acc))
```
Also, we can visualize sample predictions, marking whether each label classification is correct or not.
```
test_batch_size = 25
batch_index = np.random.choice(len(X_test), size=test_batch_size, replace=False)
batch_xs = X_test[batch_index]
batch_ys = y_test[batch_index]
y_pred_ = model(batch_xs, training=False)
fig = plt.figure(figsize=(10, 10))
for i, (px, py, y_pred) in enumerate(zip(batch_xs, batch_ys, y_pred_)):
p = fig.add_subplot(5, 5, i+1)
if np.argmax(y_pred) == py:
p.set_title("{}".format(labels_map[py]), color='blue')
else:
p.set_title("{}/{}".format(labels_map[np.argmax(y_pred)],
labels_map[py]), color='red')
p.imshow(px.reshape(28, 28))
p.axis('off')
plt.tight_layout()
```
At last, we have implemented the Multi Layer Perceptron for image classification. There are some incorrect predictions, but we can improve the model with hyperparameter tuning (the number of epochs, the number of layers, input nodes, learning rate, etc.).
## Summary
In this post, we implemented a neural network for Fashion-MNIST. Along the way, we preprocessed the dataset, generated the input pipeline, added the layers in a Sequential model, and then defined the loss function and optimizer for training.
Thanks to tensorflow-keras, we can easily train the model and evaluate its performance.
# Emergency Room
Everyone is enjoying the summer, Covid19 restrictions have been lifted, and we all get back to regular exercise and outdoor activities. But once in a while, the inevitable happens: an ill-considered step, a brief second of inattention, and injuries of all types occur that require immediate treatment. Luckily our city hosts a modern hospital with an efficient emergency room where the wounded are being taken care of.
To save more lives, the mayor has asked us to review and potentially improve process efficiency in the ER. To do so, we need to carry out the following steps:
1. Understand the current process and model it as a simulation
2. Formulate key objectives to be optimized
3. Assess process statistics and metrics to unravel potential improvements that help more patients
4. Explore more optimized decision policies to increase process efficiency
So let's dive right into it without further ado.
## Process Model
Patients are classified in two ways:
1. By **Severity**. The ER uses the well-known [Emergency Severity Index](https://en.wikipedia.org/wiki/Emergency_Severity_Index) to triage patients based on the acuity of their health care problems and the number of resources their care is anticipated to require.
2. By **Type of injury**, as defined [here](https://medlineplus.gov/woundsandinjuries.html)
Resources
* Surgery **rooms** that must be equipped by considering the type (i.e., the family) of surgery to be performed. It will take time to prepare a room for a certain type of injury. These setup times are listed in an Excel sheet.
* **Doctors** that are qualified for a subset of all possible injuries
Process dynamics
* **PD-A** Depending on the severity, patients might die if not being treated. Also, if not being treated their severity will increase rather quickly
* **PD-B** The busier the waiting room is, the less efficient surgeries tend to be. This is because of stress (over-allocation of supporting personnel and material). It is a phenomenon that is often observed in complex queuing processes such as manufacturing or customer services.
* **PD-C** Depending on the severity, patients will die during surgery
* **PD-D** The surgery time correlates with the severity of the injury
* **PD-E** During nights fewer new patients arrive compared to the day
Clearly, more resources are required in the ER, and many supporting processes are needed to run it. However, we leave these out here, as they are not considered to have a major impact on the overall process efficiency. Choosing a correct level of abstraction, with a focus on key actors and resources, is the first _key to success_ when optimizing a complex process.
## Key Objectives & Observations
The head nurse, who is governing the process based on her long-term experience, is scheduling patients based on the following principle
> Most urgent injuries first
[comment]: <> ( https://www.merriam-webster.com/dictionary/first%20come%2C%20first%20served)
Clearly if possible it would be great to also
* Minimize waiting times
* Reduce number of surgery room setups
## Analysis
Because of the great variety of surgery types, we observe a lot of setup steps to prepare surgery rooms, often even if patients with the same type of injury are already waiting.
## Process Optimization
The idea for the model above was originally formulated by [Kramer et al. in 2019](todo reference):
> Other relevant applications arise in the context of health-care, where, for example, patients have to be assigned to surgery rooms that must be equipped by considering the type (i.e., the family) of surgery to be performed. In such cases, the weight usually models a level of urgency for the patient.
## Implementation
The tick-unit of the simulation is hours.
```
@file:Repository("*mavenLocal")
@file:DependsOn("com.github.holgerbrandl:kalasim:0.6.97-SNAPSHOT")
//@file:DependsOn("com.github.holgerbrandl:kalasim:0.6.92")
@file:DependsOn("com.github.holgerbrandl:kravis:0.8.1")
@file:DependsOn("org.jetbrains.kotlinx:kotlinx-coroutines-core:1.5.2")
import krangl.*
import org.kalasim.*
import PatientStatus.*
import Severity.*
import org.kalasim.*
import org.kalasim.monitors.MetricTimeline
import kotlin.math.pow
import kotlin.math.sqrt
import kotlin.random.Random
import org.kalasim.examples.er.*
```
## Simulation
```
val er = EmergencyRoom(RefittingAvoidanceNurse)
```
Now run it for some days
```
er.run(24.0*7)
er.get<MetricTimeline>(named(TREATED_MONITOR)).display("Treated Patients")
er.get<MetricTimeline>(named(DECEASED_MONITOR)).display("Deceased Patients")
er.get<HeadNurse>().waitingLine.queueLengthMonitor.display()
er.get<HeadNurse>().waitingLine.lengthOfStayMonitor.display("Length")
```
## Analysis
```
data class RequestRecord(val requester: String, val timestamp: Double, val resource: String, val quantity: Double)
val tc = er.get<TraceCollector>()
val requests = tc.filterIsInstance<ResourceEvent>().map {
val amountDirected = (if(it.type == ResourceEventType.RELEASED) -1 else 1) * it.amount
RequestRecord(it.requester.name, it.time, it.resource.name, amountDirected)
}
val requestsDf = requests.asDataFrame()
.groupBy("requester")
.sortedBy("requester", "timestamp")
.addColumn("end_time") { it["timestamp"].lead() }
.filter { it["quantity"] gt 0 }
.addColumn("state") { rowNumber.map { if(it.rem(2) == 0) "hungry" else "eating" } }
.ungroup()
requestsDf.schema()
```
Inspect the table with resource request data
```
requestsDf.head(10)
```
Let's try to visualize these data similar to https://r-simmer.org/articles/simmer-08-philosophers.html
```
requestsDf.plot(x = "timestamp", xend = "end_time", y = "requester", yend = "requester", color = "state")
.geomSegment(size = 15.0)
```
It is with great relief that all 4 philosophers get a firm handle on 2 forks to enjoy the tasty spaghetti!
## Conclusion & Summary
In this article we have shown how a complex process with partially non-intuitive process dynamics can be modelled with kalasim and optimized using insights from operations research.
Disclaimer: The author is not a medical doctor, so please excuse possible imprecision in wording and a lack of ER process understanding. Feel welcome to suggest corrections or improvements.
[comment]: <> (// **TODO** use https://github.com/DiUS/java-faker)
```
%load_ext autoreload
%autoreload 2
```
# MWE of error in xgcm and ECCO
```
import xarray as xr
from xgcm import Grid
rootdir = '/Users/graemem/Documents/research/data/ECCO/v4r4/'
# shortwave
localdir = 'nctiles_monthly/MXLDEPTH/*/'
filename = 'MXLDEPTH_*.nc'
ds = xr.open_mfdataset(rootdir+localdir+filename)
ds = ds.rename({'tile':'face'})
localdir = 'nctiles_grid/'
filename = 'ECCO-GRID.nc'
grid = xr.open_dataset(rootdir+localdir+filename)
grid = grid.rename({'tile':'face'})
grid
ds = xr.merge([ds,grid])
ds
# define the connectivity between faces
face_connections = {'face':
{0: {'X': ((12, 'Y', False), (3, 'X', False)),
'Y': (None, (1, 'Y', False))},
1: {'X': ((11, 'Y', False), (4, 'X', False)),
'Y': ((0, 'Y', False), (2, 'Y', False))},
2: {'X': ((10, 'Y', False), (5, 'X', False)),
'Y': ((1, 'Y', False), (6, 'X', False))},
3: {'X': ((0, 'X', False), (9, 'Y', False)),
'Y': (None, (4, 'Y', False))},
4: {'X': ((1, 'X', False), (8, 'Y', False)),
'Y': ((3, 'Y', False), (5, 'Y', False))},
5: {'X': ((2, 'X', False), (7, 'Y', False)),
'Y': ((4, 'Y', False), (6, 'Y', False))},
6: {'X': ((2, 'Y', False), (7, 'X', False)),
'Y': ((5, 'Y', False), (10, 'X', False))},
7: {'X': ((6, 'X', False), (8, 'X', False)),
'Y': ((5, 'X', False), (10, 'Y', False))},
8: {'X': ((7, 'X', False), (9, 'X', False)),
'Y': ((4, 'X', False), (11, 'Y', False))},
9: {'X': ((8, 'X', False), None),
'Y': ((3, 'X', False), (12, 'Y', False))},
10: {'X': ((6, 'Y', False), (11, 'X', False)),
'Y': ((7, 'Y', False), (2, 'X', False))},
11: {'X': ((10, 'X', False), (12, 'X', False)),
'Y': ((8, 'Y', False), (1, 'X', False))},
12: {'X': ((11, 'X', False), None),
'Y': ((9, 'Y', False), (0, 'X', False))}}}
ds['drW'] = ds.hFacW * ds.drF #vertical cell size at u point
ds['drS'] = ds.hFacS * ds.drF #vertical cell size at v point
ds['drC'] = ds.hFacC * ds.drF #vertical cell size at tracer point
metrics = {
('X',): ['dxC', 'dxG'], # X distances
('Y',): ['dyC', 'dyG'], # Y distances
('Z',): ['drW', 'drS', 'drC'], # Z distances
('X', 'Y'): ['rA', 'rAz', 'rAs', 'rAw'] # Areas
}
# create the grid object
xgrid = Grid(ds, periodic=False, face_connections=face_connections, metrics=metrics)
xgrid
# Calculate gradients in field
gx = xgrid.interp(ds['MXLDEPTH'], 'X')
gy = xgrid.interp(ds['MXLDEPTH'], 'Y', boundary='fill')
dg = xgrid.diff_2d_vector({'X':gx,'Y':gy}, boundary='fill')
dg
xgrid.interp(ds['dxG'],'Y')
dgx = xgrid.diff(ds['MXLDEPTH'], 'X')
dgdx = dgx/xgrid.get_metric(dgx,'X')
dgy = xgrid.diff(ds['MXLDEPTH'], 'Y',boundary='fill')
dgdy = dgy/xgrid.get_metric(dgy,'Y')
xgrid.interp(dgdy,'Y')
dg['Y'].isel(time=0).plot(col='face', col_wrap=5, robust = True)
dg.isel(time=0).plot(col='face', col_wrap=5, robust = True)
gxg = xgrid.interp(dg ,'Y', boundary = 'fill')
```
<a href="https://colab.research.google.com/github/ravi-prakash1907/Machine-Learning-for-Cyber-Security/blob/main/Labs/rnn_lab8.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Spam detection using RNN and Keras
```
# -*- coding: utf-8 -*-
"""
Origin of code:
---------------
Created on Wed Jan 23 11:27:12 2019
@author: Teenu
"""
```
This code was also tested on a 70:30 split of the dataset.
The final results are presented below.
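Before the full pipeline below, here is a minimal, self-contained sketch of what the `Tokenizer` plus `pad_sequences` preprocessing does to raw text (the two sentences are made up for illustration only):
```
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

toy_texts = ["free entry in a weekly competition", "are we still meeting for lunch"]
toy_tokenizer = Tokenizer()
toy_tokenizer.fit_on_texts(toy_texts)                         # build the word -> integer index
toy_sequences = toy_tokenizer.texts_to_sequences(toy_texts)   # texts become lists of integers
print(toy_sequences)
print(pad_sequences(toy_sequences, maxlen=8))                 # pad/truncate to a fixed length
```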
```
from keras.layers import SimpleRNN, LSTM, Embedding, Dense
# In an embedding, words are represented by dense vectors, where a vector is the projection of a word
# into a continuous vector space. The position of a word within that space is learned from the words
# that surround it when it is used; this learned position is referred to as its embedding.
# Keras offers an Embedding layer that can be used for neural networks on text data. It requires the
# input data to be integer encoded, so that each word is represented by a unique integer. This data
# preparation step can be performed using the Tokenizer API also provided with Keras:
# https://machinelearningmastery.com/use-word-embedding-layers-deep-learning-keras/
from keras.models import Sequential
import pandas as pd#pandas is a Python package providing fast, flexible, and expressive data structures designed to make working with “relational” or “labeled” data both easy and intuitive
import numpy as np
import matplotlib.pyplot as plt
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
#%matplotlib inline
import seaborn as sns
sns.set()
data=pd.read_csv ("https://raw.githubusercontent.com/ravi-prakash1907/Machine-Learning-for-Cyber-Security/main/Labs/dataset/spam.csv")
texts = []
labels = []
for i, item in enumerate(data['Category']):
texts.append(data['Message'][i])
#print(texts)
# # print(labels)
#print(i)
if item == 'ham':
labels.append(0)
else:
labels.append(1)
texts = np.asarray(texts)
labels = np.asarray(labels)
print("number of texts :" , len(texts))
print("number of labels: ", len(labels))
max_features =10000# number of words used as features
maxlen=500# cut off the words after seeing 500 words in each document(email)
training_samples= int(5572*.8)
print(training_samples)
validation_samples= int(5572-training_samples)
print(len(texts) == (training_samples + validation_samples))
print("The number of training {0}, validation {1} ".format(training_samples, validation_samples))#Syntax : { } .format(value)
tokenizer=Tokenizer()#Class for vectorizing texts, or/and turning texts into sequences (=list of word indexes, where the word of rank i in the dataset (starting at 1) has index i)
#print(tokenizer)
tokenizer.fit_on_texts(texts)
sequences=tokenizer.texts_to_sequences(texts)
#print(sequences)
word_index=tokenizer.word_index
print(word_index)
print("Found {0} unique words:".format(len(word_index)))
#print("Found {0} unique words: ".format(len(word_index)))
data= pad_sequences(sequences, maxlen=maxlen)
#print(data)
print("data shape:", data.shape)
np.random.seed(42)
indices=np.arange(data.shape[0])# shuffle data
np.random.shuffle(indices)
data=data[indices]
labels=labels[indices]
texts_train=data[:training_samples]
print(texts_train)
y_train=labels[:training_samples]
texts_test=data[training_samples:]
y_test=labels[training_samples:]#https://www.kaggle.com/kentata/rnn-for-spam-detection/data
model=Sequential()
model.add(Embedding(max_features,32))
model.add(SimpleRNN(32))
model.add(Dense(1,activation='sigmoid'))
model.compile(optimizer='rmsprop',loss='binary_crossentropy', metrics=['acc'])
history_rnn = model.fit(texts_train, y_train, epochs=10, batch_size=60, validation_split=0.2)
acc = history_rnn.history['acc']
val_acc = history_rnn.history['val_acc']
loss = history_rnn.history['loss']
val_loss = history_rnn.history['val_loss']
# Encode a new message the same way as the training data before predicting
new = ["FreeMsg Hey there darling it's 'S BEeN 3 now and no word back"]
new_seq = pad_sequences(tokenizer.texts_to_sequences(new), maxlen=maxlen)
yhat = model.predict_classes(new_seq)
epochs=range(len(acc))
plt.plot(epochs,acc,'-', color='orange',label='training accuracy')
plt.plot(epochs,val_acc,'-',color='blue', label='validation accuracy')
plt.legend()
plt.show()
print("Validity accuracy:",val_acc[-1])
```
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import string
import numpy as np
import pandas as pd
from tqdm import tqdm
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# Any results you write to the current directory are saved as output.
```
#### Importing data
```
data = pd.read_csv('/kaggle/input/frenchenglish-translation/fra.tsv', delimiter='\t')
data.head()
```
#### The dataset has around 150,000 training examples, but we will use only the first 55,000 rows to keep it simple.
```
data = data.iloc[:55000, :]
english = data.english.values
french = data.french.values
```
#### Exploring dataset
```
print("Length of english sentence:", len(english))
print("Length of french sentence:", len(french))
print('-'*20)
print(english[100])
print('-'*20)
print(french[100])
```
#### Remove all punctuations from text
```
english = [s.translate(str.maketrans('', '', string.punctuation)) for s in english]
french = [s.translate(str.maketrans('', '', string.punctuation)) for s in french]
print(english[100])
print('-'*20)
print(french[100])
```
#### Convert all examples to lowercase
```
english = [s.lower() if isinstance(s, str) else s for s in english]
french = [s.lower() if isinstance(s, str) else s for s in french]
print(english[100])
print('-'*20)
print(french[100])
```
#### Visualise the length of examples
```
eng_l = [len(s.split()) for s in english]
fre_l = [len(s.split()) for s in french]
length_df = pd.DataFrame({'english': eng_l, 'french': fre_l})
length_df.hist(bins=30)
plt.show()
from keras import optimizers
from keras.models import Sequential
from keras.preprocessing.text import Tokenizer
from keras.utils.vis_utils import plot_model
from keras.preprocessing.sequence import pad_sequences
from keras.callbacks import ModelCheckpoint
from keras.layers import Dense, Embedding, LSTM, RepeatVector, Dropout, Bidirectional, Flatten
def tokenizer(corpus):
tokenizer = Tokenizer()
tokenizer.fit_on_texts(corpus)
return tokenizer
english_tokenizer = tokenizer(english)
french_tokenizer = tokenizer(french)
word_index_english = english_tokenizer.word_index
word_index_french = french_tokenizer.word_index
eng_vocab_size = len(word_index_english) + 1
fre_vocab_size = len(word_index_french) + 1
print("Size of english vocab:", len(word_index_english))
print("Size of french vocab:", len(word_index_french))
max_len_eng = max(eng_l)
max_len_fre = max(fre_l)
print("Max length of english sentence:", max_len_eng)
print("Max length of french sentence:", max_len_fre)
english = pd.Series(english).to_frame('english')
french = pd.Series(french).to_frame('french')
dummy_df = pd.concat([english, french], axis=1)
train, test = train_test_split(dummy_df, test_size=0.1, random_state=42)
train_english = train.english.values
train_french = train.french.values
test_english = test.english.values
test_french = test.french.values
def encode_sequences(tokenizer, length, text):
sequences = tokenizer.texts_to_sequences(text)
sequences = pad_sequences(sequences, maxlen=length, padding='post')
return sequences
eng_seq = encode_sequences(english_tokenizer, max_len_eng, train_english)
fre_seq = encode_sequences(french_tokenizer, max_len_fre, train_french)
# test_english = encode_sequences(english_tokenizer, max_len_eng, test_english)
test_french = encode_sequences(french_tokenizer, max_len_fre, test_french)
print(eng_seq[10])
print(fre_seq[10])
def nmt_model(in_vocab_size, out_vocab_size, in_timestep, out_timestep, units):
model = Sequential()
model.add(Embedding(in_vocab_size, units, input_length=in_timestep, mask_zero=True))
model.add(Bidirectional(LSTM(units, dropout=0.5, recurrent_dropout=0.4)))
model.add(Dropout(0.5))
model.add(RepeatVector(out_timestep))
model.add(Bidirectional(LSTM(units, dropout=0.5, recurrent_dropout=0.4, return_sequences=True)))
model.add(Dropout(0.5))
model.add(Dense(out_vocab_size, activation="softmax"))
return model
model = nmt_model(fre_vocab_size, eng_vocab_size, max_len_fre, max_len_eng, 256)
rms = optimizers.RMSprop(lr=0.01)
model.compile(loss="sparse_categorical_crossentropy", optimizer=rms, metrics=['accuracy'])
model.summary()
plot_model(model, show_shapes=True)
filepath="weights-improvement.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_accuracy', verbose=1, save_best_only=True, mode='max')
eng_seq = eng_seq.reshape(eng_seq.shape[0], eng_seq.shape[1], 1)
history = model.fit(fre_seq, eng_seq, batch_size=1024, epochs=300, verbose=1, validation_split=0.05, shuffle=True, callbacks=[checkpoint])
# Plot training & validation accuracy values
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
# Plot training & validation loss values
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
prediction = model.predict_classes(test_french.reshape(test_french.shape[0], test_french.shape[1]))
def get_word(n, tokenizer):
for word, index in tokenizer.word_index.items():
if index == n:
return word
return None
preds_text = []
for i in tqdm(prediction):
temp = []
for j in range(len(i)):
t = get_word(i[j], english_tokenizer)
if j > 0:
if (t == get_word(i[j-1], english_tokenizer)) or (t == None):
temp.append('')
else:
temp.append(t)
else:
if(t == None):
temp.append('')
else:
temp.append(t)
preds_text.append(' '.join(temp))
pred_df = pd.DataFrame({'actual' : test_english, 'predicted' : preds_text})
pred_df.head(7)
pred_df.tail(7)
```
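One rough way to summarise the table above is the share of exact string matches between the actual and predicted sentences. A quick sketch (this is a crude sanity metric; something like BLEU would be more standard):
```
# Fraction of test sentences reproduced exactly (crude sanity metric)
exact_match = (pred_df['actual'].str.strip() == pred_df['predicted'].str.strip()).mean()
print("Exact-match rate: {:.2%}".format(exact_match))
```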
# **Dataset: HR Analytics: Job Change of Data Scientists**
## **1.** **Introduction**
## **1.1 Team**
* Tran Bao Nguyen
* Truong Hoang Pham
* Tung Thanh Vu
## **1.2 Main question:**
A company which is active in Big Data and Data Science wants to hire data scientists among people who successfully pass courses conducted by the company. Many people sign up for this training. The company wants to know which of these candidates really want to work for the company after training and which are only looking for new employment, because knowing this helps reduce cost and time and improves the quality of training, the planning of courses, and the categorization of candidates. Information related to demographics, education, and experience is available from the candidates' signup and enrollment.
Which factors lead a person to leave their current job and change to a data science role?
* Audience: job seekers who want to change jobs to become data scientists, and recruiters at the company who are looking for ideal candidates
## **Sub questions - Part 1:**
__Data Cleaning:__
- Is there any duplication in your data?
- Is there missing data?
- Is there any mislabeled data/errors?
- Is there any column that needs reformatting for better analysis?
__Exploratory Data Analysis:__
- For numerical data: How is the data distributed? How are the variables correlated? What are their summary statistics? Are there outliers, and are they errors or simply abnormalities in the data?
- For categorical data: How many categories are there? Are there any differences between those categories?
## **Sub questions - Part 2:**
How many training hours does a job seeker need to be ready for a job change into data science?
Do any of the following factors affect someone's intention to change their job to data science?
* Gender
* Relevant experience
* Enrolled university
* Education level
* Major
* Recent job
* Company size
* Company type
* City development index
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import scipy
hr = pd.read_csv('https://raw.githubusercontent.com/trannguyen135/trannguyen135/main/aug_train.csv')
```
### **1.3** **Data overview**
```
hr.head()
hr.info()
hr.describe()
hr.describe(include=object)
```
### **Comments:**
Problems in the dataset:
* Missing values in columns: gender, enrolled_university, education_level, major_discipline, experience, company_size, company_type, last_new_job
* Mislabeled data/errors: relevent_experience, city, company_size
```
numbers = [f for f in hr.columns if hr.dtypes[f] != 'object']
list_nums = ', '.join(numbers)
list_nums
objects = [f for f in hr.columns if hr.dtypes[f] =='object']
list_objects = ', '.join(objects)
list_objects
# Categorical:
i = 1
for obj in objects:
print(i, "/", obj, "\t", len(hr[obj].unique()), ":", hr[obj].unique())
i = i+1
i = 1
for obj in numbers:
print(i, "/", obj, len(hr[obj].unique()), ":", hr[obj].unique() if len(hr[obj].unique())<10 else '')
i = i+1
```
## **2.1** **Data Cleaning - mislabeled data/errors**
```
hr.head()
#Rename some columns
hr = hr.rename(columns = ({'city':'city_code', 'experience':'work_experience', 'last_new_job':'most_recent_job'}))
#Replace some values in the work_experience, relevent_experience and most_recent_job column
hr['work_experience'].replace({np.NaN:0,'>20':21,'<1':0},inplace=True)
hr['most_recent_job'].replace({np.NaN:0,'>4':5,'never':0},inplace=True)
hr["relevent_experience"].replace({"Has relevent experience":"yes","No relevent experience":"no"},inplace=True)
#change type of columns
hr['work_experience'] = hr['work_experience'].astype(int)
hr['most_recent_job'] = hr['most_recent_job'].astype(int)
#Get code city to fix data in city column
def get_code_city(city):
return city.split('_')[1]
#Fix data in the company_size column
def fix_company_size(x):
if x == "<10":
return "Local"
elif x == "50-99" or x == "10/49":
return "Small"
elif x == "100-500":
return "Medium"
elif x == "500-999":
return "Upper"
elif x == "1000-4999" or x == "5000-9999":
return "Extended"
elif x == "10000+":
return "Large"
#Fix data in the most_recent_job column
def fix_most_recent_job(x):
if 0 < x <= 1:
return "0-1"
elif 1 < x <= 3:
return "2-3"
elif 3 < x <= 5:
return "4-5"
elif x > 5:
return "5+"
else:
pass
#Fix data in the work_experience column
def fix_work_experience(x):
if x == 0:
return "0"
elif 0 < x <= 3:
return "1-3"
elif 3 < x <= 7:
return "4-7"
elif 7 < x <= 15:
return "7-15"
elif x > 15:
return "15+"
#Create city_code and apply to get_code_city
hr['city_code'] = hr['city_code'].apply(get_code_city)
#Fix name in company_size column
hr["company_size"] = hr["company_size"].apply(lambda x: fix_company_size(x))
#Fix values in work_experience column
hr["work_experience"] = hr["work_experience"].apply(lambda x: fix_work_experience(x))
#Fix values in most_recent_job column
hr["most_recent_job"] = hr["most_recent_job"].apply(lambda x: fix_most_recent_job(x))
hr.head()
hr.duplicated().sum()
```
## **2.1** **Data Cleaning - Duplication**
```
# Display dataframe to look for columns that cannot be duplicated
hr
```
**Comment:** enrollee_id cannot be duplicated
```
# Check the overall duplication
hr.duplicated().sum()
# Look for enrollee_id duplicated values
hr['enrollee_id'].duplicated().sum()
```
**Comment:** No duplication
## **2.2** **Data Cleaning - Missing values**
```
# Check for NULL values
hr.isnull().sum()
```
**Comment:** Null values in columns: 'gender', 'enrolled_university', 'education_level', 'major_discipline', 'comapny_size', 'company_type', 'most_recent_job'.
1. **Gender**
```
# check for all unique variables
hr['gender'].unique()
# Display the dataframe to look for gender indications
hr
```
**Comment:** No clear indication, but assume 'major_discipline' provides one
```
# Check relationship with major discipline
hr.groupby('gender')['major_discipline'].value_counts()
```
**Notice:** a significant portion of 'Male' records are 'STEM' -> fill the NULL gender values whose 'major_discipline' is 'STEM' with 'Male'
```
mask = (hr['major_discipline'] == 'STEM') & (hr['gender'].isnull())
hr.loc[mask, 'gender'] = hr.loc[mask, 'gender'].fillna('Male')
# Check for update
hr.groupby('gender')['major_discipline'].value_counts()
# Check the percentage of remaining NULL values
hr['gender'].isnull().sum() / hr['gender'].count() * 100
```
**Comment:** 6.7% is a small percentage -> replace the remaining null values with the mode value of 'gender'
```
hr['gender'] = hr['gender'].fillna(hr.gender.mode()[0])
# Final check for gender
hr['gender'].unique()
```
2. **enrolled_university**
```
# Check all unique variables
hr['enrolled_university'].unique()
# Display the dataframe to look for indications
hr
```
**Comment:** Education level looks like a potential indication
```
# Check relationship
hr.groupby('enrolled_university')['education_level'].value_counts()
```
**Comment:** 'education_level' actually shows no indication for 'enrolled_university', and vice versa.
```
# Check the percentage of NULL values
hr['enrolled_university'].isnull().sum() / hr['enrolled_university'].count() * 100
```
**Comment:** 2.06% is a small percentage -> replace NULL values with mode value of 'enrolled_university'.
```
hr['enrolled_university'] = hr['enrolled_university'].fillna(hr.enrolled_university.mode()[0])
# Final check for enrolled_university
hr['enrolled_university'].unique()
```
3. **education_level**
```
# Check all unique variables
hr['education_level'].unique()
# Display the dataframe to look for indications
hr
```
**Comment:** Potentially related to 'enrolled_university', but it shows no indication, as proved above.
```
# Check the percentage of NULL values
hr['education_level'].isnull().sum() / hr['education_level'].count() * 100
```
**Comment:** 2.46% is a small percentage -> replace NULL values with mode value of 'education_level'.
```
hr['education_level'] = hr['education_level'].fillna(hr.education_level.mode()[0])
# Final check for education_level
hr['education_level'].unique()
```
4. **major_discipline**
```
# Check for all unique variables
hr['major_discipline'].unique()
# Display dataframe to look for indications
hr
```
**Comment:** No clear indication, but as proved above, 'Male' records tend to be 'STEM' -> fill the NULL values whose gender is 'Male' with 'STEM'
```
mask = (hr['major_discipline'].isnull()) & (hr['gender'] == 'Male')
# Assign via .loc so the original dataframe is updated (fillna with inplace=True on hr.loc[mask, ...] only modifies a copy)
hr.loc[mask, 'major_discipline'] = 'STEM'
# Check the percentage of the remaining NULL values
hr['major_discipline'].isnull().sum() / hr['major_discipline'].count() * 100
```
**Comment:** The remaining NULL values (records whose gender is not 'Male') still have no clear indication -> replace them with 'Unknown'
```
hr['major_discipline'] = hr['major_discipline'].fillna('Unknown')
# Final check for major_discipline
hr['major_discipline'].unique()
```
5. **company_size**
```
# Check for all unique variables
hr['company_size'].unique()
# Display dataframe to look for indications
hr
```
**Comment:** No clear indication, but one assumption is that all startups are 'Small'
```
# Check assumption
hr.groupby('company_type')['company_size'].value_counts()
```
**Comment:** No clear indication from 'company_type' for 'company_size', and vice versa
```
# Calculate the percentage of NULL values
hr['company_size'].isnull().sum() / hr['company_size'].count() * 100
```
**Comment:** Around 50% is a very large percentage -> replace NULL values with 'Unknown'
```
hr['company_size'] = hr['company_size'].fillna('Unknown')
# Final check for company_size
hr['company_size'].unique()
```
6. Company_type
```
# Check all unique variables
hr['company_type'].unique()
```
**Comment:** It is only related to 'company_size', but as proved above there is no clear indication -> replace with 'Unknown'
```
# Check the percentage of NULL values
hr['company_type'].isnull().sum() / hr['company_type'].count() * 100
```
**Comment:** 47.17% is a huge percentage -> replace NULL values with 'Unknown'
```
hr['company_type'] = hr['company_type'].fillna('Unknown')
# Final check for company_type
hr['company_type'].unique()
```
* **Among columns, 'company_size' and 'company_type' have a significant portion of 'Unknown' values**
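To quantify that observation, here is a minimal sketch that computes the 'Unknown' share of each of those columns (assuming the cleaning steps above have been run):
```
# Share of 'Unknown' values in the two columns flagged above
for col in ['company_size', 'company_type']:
    unknown_share = (hr[col] == 'Unknown').mean()
    print(col, ':', '{:.1%}'.format(unknown_share), 'Unknown')
```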
## **3.1** **EDA - For continuous variables**
*city_development_index*
```
hr['city_development_index'].describe()
hr_median = hr['city_development_index'].median()
hr_median
hr_mode = hr['city_development_index'].mode()
hr_mode
# Visualize data
plt.figure(figsize=(10,5))
plt.subplot(1,2,1)
sns.distplot(hr['city_development_index'])
plt.subplot(1,2,2)
plt.hist(hr['city_development_index'])
plt.show()
```
**Comments:**
1. Not uniformly distributed
2. Concentrated around 0.92
3. Negative skewness
4. Has an odd local peak at around 0.6
*training_hours*
```
hr['training_hours'].describe()
hr_mode = hr['training_hours'].mode()
hr_mode
plt.figure(figsize=(10,5))
plt.subplot(1,2,1)
sns.distplot(hr['training_hours'])
plt.subplot(1,2,2)
plt.hist(hr['training_hours'])
plt.show()
```
**Comments:**
1. The range is from 0 to 336
2. Not uniformly distributed
3. Most candidates have fewer than 100 training hours
4. Positive skewness.
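The skewness claims above can be quantified directly; here is a minimal sketch using `scipy.stats.skew` (it assumes the cleaned `hr` dataframe from the steps above):
```
from scipy.stats import skew
# A negative value is expected for city_development_index, a positive one for training_hours
print('Skewness of city_development_index:', skew(hr['city_development_index']))
print('Skewness of training_hours:', skew(hr['training_hours']))
```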
```
hr[['city_development_index','training_hours','target']].corr()
# Continuous & continuous
sns.pairplot(hr[['city_development_index','training_hours','target']])
```
**Comment:**
* No correlation between 'city_development_index' and 'training_hours'
* 'city_development_index' and 'target' have negative correlation
* No correlation between 'training_hours' and 'target'
```
# boxplot of city_development_index & training_hours
plt.figure(figsize=(8,6))
plt.subplot(121)
plt.title('city_development_index')
plt.boxplot(hr['city_development_index'])
plt.subplot(122)
plt.title('training_hours')
plt.boxplot(hr['training_hours'])
plt.show()
```
### **Comments:**
* For variable 'city_development_index': only one outlier at the lower whisker
* For variable 'training_hours': many outliers at the upper whisker
*Checking outliers of 'city_development_index'*
```
Q1 = np.percentile(hr.city_development_index, 25)
Q1
Q3 = np.percentile(hr.city_development_index, 75)
Q3
city_development_index_iqr = Q3 - Q1
city_development_index_iqr
outliner_ratio_1 = len(hr[(hr['city_development_index'] < (Q1 - 1.5*city_development_index_iqr))])/len(hr['city_development_index'])
outliner_percentage_1 = "{:.2%}".format(outliner_ratio_1)
print('Percentage of outliers:',outliner_percentage_1)
hr_new_1 = hr[(hr['city_development_index'] >= (Q1 - 1.5*city_development_index_iqr))]
plt.boxplot(hr_new_1['city_development_index'])
plt.show()
hr_new_1.city_development_index.describe()
hr.city_development_index.describe()
```
### **Comments:**
* The percentage of outliers is not significant
* The outliers do not create a statistically significant difference
**-> Consider not removing the outliers**
*Checking outliers of 'training_hours'*
```
Q1 = np.percentile(hr.training_hours, 25)
Q1
Q3 = np.percentile(hr.training_hours, 75)
Q3
training_hours_iqr = Q3 - Q1
training_hours_iqr
outliner_ratio = len(hr[(hr['training_hours'] > (Q3 + 1.5*training_hours_iqr ))])/len(hr['training_hours'])
outliner_percentage = "{:.2%}".format(outliner_ratio)
print('Percentage of outliers:',outliner_percentage)
hr_new = hr[(hr['training_hours'] <= (Q3 + 1.5*training_hours_iqr ))]
plt.boxplot(hr_new['training_hours'])
plt.show()
hr_new.training_hours.describe()
hr.training_hours.describe()
```
### **Comments:**
* The percentage of outliers is reasonable
* The outliers do not create a statistically significant difference
* Removing them would lose insights about abnormal groups in the dataset
**-> Consider not removing the outliers**
*Checking abnormal groups of 'training_hours'*
```
training_hours_abnormal = hr[(hr['training_hours'] > (Q3 + 1.5*training_hours_iqr ))]
training_hours_abnormal.head(10)
training_hours_abnormal.describe()
training_hours_abnormal.describe(include=object)
```
### **Comments:**
* The mean of this abnormal group is 248 training hours
* Compared with the rest of the dataset, the only variable that clearly differs is 'work_experience' (work experience in the abnormal group is concentrated in the 7 - 15 year range)
**New question: Is there correlation between work experience and training hours?**
```
work_train = hr.groupby('work_experience')['training_hours'].mean().sort_values(ascending=True)
work_train
```
### **Comments:**
* Freshers and Juniors (0 - 3 years) have the least training hours
* Seniors (4 - 15 years) have the most training hours
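Group means alone do not show whether these differences are meaningful; a quick non-parametric check is sketched below (a minimal sketch using `scipy.stats.kruskal`, assuming the cleaned `hr` dataframe from above):
```
from scipy.stats import kruskal
# Test whether the training_hours distributions differ across work_experience groups
groups = [g['training_hours'].values for _, g in hr.groupby('work_experience')]
stat, p_value = kruskal(*groups)
print('Kruskal-Wallis H =', stat, ', p-value =', p_value)
```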
## **3.2** **EDA - For categorical variables**
```
city_count = (hr.groupby('city_code').count())['enrollee_id'].sort_values(ascending=False)
top_10_city = city_count.head(10)
gender_count = (hr.groupby('gender').count())['enrollee_id'].sort_values(ascending=False)
relevant_experience_count = (hr.groupby('relevent_experience').count())['enrollee_id'].sort_values(ascending=False)
university_count = (hr.groupby('enrolled_university').count())['enrollee_id'].sort_values(ascending=False)
education_count = (hr.groupby('education_level').count())['enrollee_id'].sort_values(ascending=False)
major_count = (hr.groupby('major_discipline').count())['enrollee_id'].sort_values(ascending=False)
experience_count = (hr.groupby('work_experience').count())['enrollee_id'].sort_values(ascending=False)
company_size_count = (hr.groupby('company_size').count())['enrollee_id'].sort_values(ascending=False)
company_type_count = (hr.groupby('company_type').count())['enrollee_id'].sort_values(ascending=False)
last_new_job_count = (hr.groupby('most_recent_job').count())['enrollee_id'].sort_values(ascending=False)
print(top_10_city)
print('-'*50)
print(gender_count)
print('-'*50)
print(relevant_experience_count)
print('-'*50)
print(university_count)
print('-'*50)
print(education_count)
print('-'*50)
print(major_count)
print('-'*50)
print(experience_count)
print('-'*50)
print(company_size_count)
print('-'*50)
print(company_type_count)
print('-'*50)
print(last_new_job_count)
plt.figure(figsize=(50,40))
plt.subplots_adjust(bottom=0.3, top=0.7, hspace=0.6)
plt.rc('xtick',labelsize=20)
plt.rc('ytick',labelsize=20)
plt.subplot(251)
sns.set(style="darkgrid")
top_10_city.plot.bar()
plt.xlabel('City code',fontsize=20)
plt.ylabel('Count',fontsize=20)
plt.rc('xtick',labelsize=20)
plt.rc('ytick',labelsize=20)
plt.subplot(252)
sns.set()
gender_count.plot.bar()
plt.xlabel('Gender',fontsize=20)
plt.ylabel('Count',fontsize=20)
plt.rc('xtick',labelsize=20)
plt.rc('ytick',labelsize=20)
plt.subplot(253)
sns.set()
relevant_experience_count.plot.bar()
plt.xlabel('Relevant Experience',fontsize=20)
plt.ylabel('Count',fontsize=20)
plt.rc('xtick',labelsize=20)
plt.rc('ytick',labelsize=20)
plt.subplot(254)
sns.set()
university_count.plot.bar()
plt.xlabel('Enrolled University',fontsize=20)
plt.ylabel('Count',fontsize=20)
plt.rc('xtick',labelsize=20)
plt.rc('ytick',labelsize=20)
plt.subplot(255)
sns.set()
education_count.plot.bar()
plt.xlabel('Education Level',fontsize=20)
plt.ylabel('Count',fontsize=20)
plt.rc('xtick',labelsize=20)
plt.rc('ytick',labelsize=20)
plt.subplot(256)
sns.set()
major_count.plot.bar()
plt.xlabel('Major discipline',fontsize=20)
plt.ylabel('Count',fontsize=20)
plt.rc('xtick',labelsize=20)
plt.rc('ytick',labelsize=20)
plt.subplot(257)
sns.set()
experience_count.plot.bar()
plt.xlabel('Work Experience',fontsize=20)
plt.ylabel('Count',fontsize=20)
plt.rc('xtick',labelsize=20)
plt.rc('ytick',labelsize=20)
plt.subplot(258)
sns.set()
company_size_count.plot.bar()
plt.xlabel('Company Size',fontsize=20)
plt.ylabel('Count',fontsize=20)
plt.rc('xtick',labelsize=20)
plt.rc('ytick',labelsize=20)
plt.subplot(259)
sns.set()
company_type_count.plot.bar()
plt.xlabel('Company type',fontsize=20)
plt.ylabel('Count',fontsize=20)
plt.rc('xtick',labelsize=20)
plt.rc('ytick',labelsize=20)
plt.subplot(2,5,10)
sns.set()
last_new_job_count.plot.bar()
plt.xlabel('Most recent job',fontsize=20)
plt.ylabel('Count',fontsize=20)
plt.show()
```
*Analyze `target` variable*
```
target_count = hr.groupby('target').count()['enrollee_id']
target_count
sns.set(style="darkgrid")
plt.figure(figsize=(8,5))
ax = sns.countplot(x="target", data=hr)
for p in ax.patches:
percentage = '{:.1f}%'.format(100 * p.get_height()/len(hr.target))
x = p.get_x() + p.get_width()/2
y = p.get_height()*1.05
ax.annotate(percentage, (x, y),ha='center')
plt.ylim(0,16000)
plt.show()
```
**Comments:** Imbalanced dataset; more than 75% of the data is 'target 0' (people who are not looking for a job change to data science)
---
*Analyze categorical variables and 'target'*
*Analyze 'gender' and 'target'*
```
plt.figure(figsize=(8,6))
sns.countplot(hr["gender"], hue = "target", data = hr)
plt.show()
target_gender = hr.groupby(['gender'])['target'].value_counts(normalize=True).unstack()
target_gender
plt.rcParams["figure.figsize"] = [8, 6]
target_gender.plot.bar(stacked=True)
plt.show()
```
### **Comments:**
* The percentage of females who want to change their job is higher than the percentage of males
*Analyze 'relevent_experience' and 'target'*
```
plt.figure(figsize=(8,6))
sns.countplot(hr["relevent_experience"], hue = "target", data = hr)
plt.show()
relevant_experience_target = hr.groupby(['relevent_experience'])['target'].value_counts(normalize=True).unstack()
relevant_experience_target
plt.rcParams["figure.figsize"] = [8, 6]
relevant_experience_target.plot.bar(stacked=True)
plt.show()
```
### **Comments:**
* People who have no relevant experience want to change jobs to data science more than people who have relevant experience
*Analyze 'enrolled_university' and 'target'*
```
plt.figure(figsize=(8,6))
sns.countplot(hr["enrolled_university"], hue = "target", data = hr)
plt.show()
enrolled_university_target = hr.groupby(['enrolled_university'])['target'].value_counts(normalize=True).unstack()
enrolled_university_target
plt.rcParams["figure.figsize"] = [8, 6]
enrolled_university_target.plot.bar(stacked=True)
plt.show()
```
### **Comments:**
* People enrolled in a full-time university course are more motivated to change their job to data science than people who are not enrolled in any course or who take only a part-time course
*Analyze 'education_level' and 'target'*
```
plt.figure(figsize=(8,6))
sns.countplot(hr["education_level"], hue = "target", data = hr)
plt.show()
education_level_target = hr.groupby(['education_level'])['target'].value_counts(normalize=True).unstack()
education_level_target
plt.rcParams["figure.figsize"] = [8, 6]
education_level_target.plot.bar(stacked=True)
plt.show()
```
### **Comments:**
* People who finished a bachelor's degree are looking for a job change to data science more than others
*Analyze 'major_discipline' and 'target'*
```
plt.figure(figsize=(10,6))
sns.countplot(hr["major_discipline"], hue = "target", data = hr)
plt.show()
major_discipline_target = hr.groupby(['major_discipline'])['target'].value_counts(normalize=True).unstack()
major_discipline_target
plt.rcParams["figure.figsize"] = [8, 6]
major_discipline_target.plot.bar(stacked=True)
plt.show()
```
### **Comments:**
* People who studied STEM and Business are looking for a job change to data science more than others
*Analyze 'work_experience' and 'target'*
```
plt.figure(figsize=(10,6))
sns.countplot(hr["work_experience"], hue = "target", data = hr)
plt.show()
work_experience_target = hr.groupby(['work_experience'])['target'].value_counts(normalize=True).unstack()
work_experience_target
plt.rcParams["figure.figsize"] = [8, 6]
work_experience_target.plot.bar(stacked=True)
plt.show()
```
### **Comments:**
* People who have limited working experience (0 - 3 years) are looking for a job change to data science more than others
*Analyze 'company_size' and 'target'*
```
plt.figure(figsize=(10,6))
sns.countplot(hr["company_size"], hue = "target", data = hr)
plt.show()
company_size_target = hr.groupby(['company_size'])['target'].value_counts(normalize=True).unstack()
company_size_target
plt.rcParams["figure.figsize"] = [8, 6]
company_size_target.plot.bar(stacked=True)
plt.show()
```
*Analyze 'company_type' and 'target'*
```
plt.figure(figsize=(15,6))
sns.countplot(hr["company_type"], hue = "target", data = hr)
plt.show()
company_type_target = hr.groupby(['company_type'])['target'].value_counts(normalize=True).unstack()
company_type_target
plt.rcParams["figure.figsize"] = [8, 6]
company_type_target.plot.bar(stacked=True)
plt.show()
```
### **Comments:**
* People who worked in the public sector or at an early-stage startup are looking for a job change to data science more than others
*Analyze 'last_new_job' and 'target'*
```
plt.figure(figsize=(10,6))
sns.countplot(hr["most_recent_job"], hue = "target", data = hr)
plt.show()
most_recent_job_target = hr.groupby(['most_recent_job'])['target'].value_counts(normalize=True).unstack()
most_recent_job_target
plt.rcParams["figure.figsize"] = [8, 6]
most_recent_job_target.plot.bar(stacked=True)
plt.show()
```
### **Comments:**
* People who spent a short time (0 - 1 year) in their most recent job are looking for a job change to data science more than others
*Analyze 'training_hours' and 'target'*
```
training_hours_target = hr.groupby('target')['training_hours'].mean()
training_hours_target
plt.rcParams["figure.figsize"] = [8, 6]
training_hours_target.plot.bar()
plt.show()
```
### **Comments:**
* People who are looking for a job change to data science spent an average of 63 training hours
*Analyze 'city_development_index' and 'target'*
```
city_development_target = hr.groupby('target')['city_development_index'].mean()
city_development_target
plt.rcParams["figure.figsize"] = [8, 6]
city_development_target.plot.bar()
plt.show()
```
### **Comments:**
* People who are looking for a job change to data science come from cities with a lower development index (about 0.75)
### **EXTRA** **Data Standardization**
```
hr['training_hours_log'] = np.log(hr['training_hours'])
hr.head()
plt.figure(figsize=(10,5))
plt.subplot(1,2,1)
sns.distplot(hr['training_hours_log'])
plt.subplot(1,2,2)
plt.hist(hr['training_hours_log'])
plt.show()
# Use RobustScaler for variable "city_development_index"
# CODE HERE
# Import sklearn
```
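A minimal sketch for the `RobustScaler` step outlined above (assuming scikit-learn is available; the name of the scaled column is chosen here only for illustration):
```
from sklearn.preprocessing import RobustScaler
# RobustScaler centers on the median and scales by the IQR, so it is less sensitive to the outliers noted earlier
scaler = RobustScaler()
hr['city_development_index_scaled'] = scaler.fit_transform(hr[['city_development_index']]).ravel()
hr[['city_development_index', 'city_development_index_scaled']].describe()
```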
# Convert your Raspberry Pi into an Alexa device
> Install Alexa Voice Services SDK on Raspberry Pi and Provision the device under an Amazon account
- toc: false
- badges: false
- comments: false
- categories: [Alexa Voice Services, Alexa, Raspberry Pi, nodejs]
- image: images/avs-small.png
## Converting your Raspberry Pi into an Alexa device using Alexa Voice Service SDK
Back in 2017, my company was working on a product that embedded Amazon Alexa Voice Service (AVS) on the device, and my team was involved in the onboarding solution for the device.
In order to better understand the use cases, I installed the Amazon Alexa Voice Service SDK on a Raspberry Pi.
Here is a video of a short interaction with the Alexa Voice Service installed on a Raspberry Pi:
<iframe width="853" height="480" src="https://www.youtube.com/embed/bajws_5RN8M" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
If you are interested in creating an Alexa device on a Raspberry Pi, please follow the latest instructions at this link:
https://developer.amazon.com/en-US/docs/alexa/avs-device-sdk/raspberry-pi.html
**Provisioning an AVS device under your Amazon Account**
If you are distributing such custom devices, you need to provide a way for the user to onboard Alexa under their Amazon account. I built a POC for this provisioning flow using Node.js (and also a Java version). Here are the steps involved.
> 1. Create a developer account under Amazon if you don’t already have one.
> 2. Browse to https://developer.amazon.com/alexa/console/avs/products
> 3. Click on Add New Product
> 4. Fill up the information regarding your device. Give it a Product Name and Product ID
>> Here are the other options I chose for my device


> 5. Save the product details
> 6. Choose Security Profile and select create new profile.
> 7. Fill up the security profile name and description and click Next.
> 8. The rest of the information, like Profile ID, Client ID and Client secret, is now automatically filled in for you.
> 9. I kept the options to just the Web-based flow, but you may choose to add Android and iOS app integration (as we did in the final product).
> 10. Fill up the allowed origin and callback URL details. The callback URL is the address of the page to which the Amazon authorization service will redirect with the result of the user authorization. This is the service or page that needs to handle the authorization response (a sketch of that token exchange is shown below).
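To make the callback handling concrete, here is a minimal Python sketch of the token exchange a callback handler performs (the POC linked below is written in Node.js; this sketch follows the standard Login with Amazon authorization-code flow, and the client ID, secret, and redirect URI shown are placeholders):

```python
import requests

# Placeholder credentials -- use the Client ID/secret and callback URL from your AVS security profile
CLIENT_ID = "amzn1.application-oa2-client.XXXXXXXX"
CLIENT_SECRET = "your-client-secret"
REDIRECT_URI = "https://your-service.example.com/authresponse"

def exchange_code_for_tokens(auth_code):
    """Exchange the authorization code returned to the callback URL for access/refresh tokens."""
    resp = requests.post(
        "https://api.amazon.com/auth/o2/token",
        data={
            "grant_type": "authorization_code",
            "code": auth_code,
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "redirect_uri": REDIRECT_URI,
        },
    )
    resp.raise_for_status()
    return resp.json()  # contains access_token, refresh_token, expires_in
```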
**AVS Device Provisioning App flow**
This is what the flow looks like:

**Project code**
The code for this App flow can be found under this repository https://github.com/ravindrabharathi/AVS-provisioning-nodejs .
Although I have not updated the code for recent versions of the dependency libraries, it still works, as the screenshots above are recent. I'll follow up with another post explaining the code structure and the important parts that handle authorization, device provisioning, etc.
# Time variability
In this tutorial, we will cover how to instantiate a time-variable `StarryProcess`, useful for modeling stars with spots that evolve over time. We will show how to sample from the process and use it to do basic inference.
```
%matplotlib inline
%config InlineBackend.figure_format = "retina"
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
# Disable annoying font warnings
matplotlib.font_manager._log.setLevel(50)
# Disable theano deprecation warnings
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
warnings.filterwarnings("ignore", category=matplotlib.MatplotlibDeprecationWarning)
warnings.filterwarnings("ignore", category=FutureWarning)
warnings.filterwarnings("ignore", category=UserWarning, module="theano")
# Style
plt.style.use("default")
plt.rcParams["savefig.dpi"] = 100
plt.rcParams["figure.dpi"] = 100
plt.rcParams["figure.figsize"] = (12, 4)
plt.rcParams["font.size"] = 14
plt.rcParams["text.usetex"] = False
plt.rcParams["font.family"] = "sans-serif"
plt.rcParams["font.sans-serif"] = ["Liberation Sans"]
plt.rcParams["font.cursive"] = ["Liberation Sans"]
try:
plt.rcParams["mathtext.fallback"] = "cm"
except KeyError:
plt.rcParams["mathtext.fallback_to_cm"] = True
plt.rcParams["mathtext.fallback_to_cm"] = True
# Short arrays when printing
np.set_printoptions(threshold=0)
del matplotlib
del plt
del warnings
```
## Setup
```
from starry_process import StarryProcess
import numpy as np
import matplotlib.pyplot as plt
from tqdm.auto import tqdm
import theano
import theano.tensor as tt
```
To instantiate a time-variable `StarryProcess`, we simply pass a nonzero value for the `tau` parameter:
```
sp = StarryProcess(tau=25.0)
```
This is the timescale of the surface evolution in arbitrary units (i.e., this will have the same units as the rotation period and the input time arrays; units of days are the common choice). We can also provide a GP kernel to model the time variability. By default a Matern-3/2 kernel is used, but that can be changed by supplying any of the kernels defined in the `starry_process.temporal` module with the `kernel` keyword. If you wish, you can even provide your own callable tensor-valued function of the form
```python
def kernel(t1, t2, tau):
(...)
return K
```
where `t1` and `t2` are the input times (scalars or vectors), `tau` is the timescale, and `K` is a covariance matrix of shape ``(len(t1), len(t2))``.
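For example, a user-supplied squared-exponential kernel could look like the following minimal sketch (a sketch only, assuming `t1` and `t2` are 1D tensors; the commented line shows how it would be passed in):
```python
import theano.tensor as tt

def expsq_kernel(t1, t2, tau):
    # Pairwise time differences, shape (len(t1), len(t2))
    dt = tt.reshape(t1, (-1, 1)) - tt.reshape(t2, (1, -1))
    return tt.exp(-0.5 * (dt / tau) ** 2)

# sp = StarryProcess(tau=25.0, kernel=expsq_kernel)
```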
Let's stick with the `Matern32` kernel for now, and specify a time array over which we'll evaluate the process:
```
t = np.linspace(0, 50, 1000)
```
## Sampling
### Sampling in spherical harmonics
The easiest thing we can do is sample maps. For time-variable processes, we can pass a time `t` argument to `sample_ylm` to get map samples evaluated at different points in time:
```
y = sp.sample_ylm(t).eval()
y
```
Note the shape of `y`, which is `(number of samples, number of times, number of ylms)`:
```
y.shape
```
At every point in time, the spherical harmonic representation of the surface is different. We can visualize this as a movie by simply calling
```python
sp.visualize(y)
```
```
# We actually tweak the contrast a little,
# and downsample to make this run quicker
sp.visualize(y[:, ::10], vmin=0.6, vmax=1.3)
```
Computing the corresponding light curve is easy:
```
flux = sp.flux(y, t).eval()
flux
```
where the shape of `flux` is `(number of samples, number of times)`:
```
flux.shape
```
We could also pass explicit values for the following parameters (otherwise they assume their default values):
```
from IPython.display import display, Markdown
from starry_process.defaults import defaults
defaults["u"] = defaults["u"][: defaults["udeg"]]
display(
Markdown(
"""
| attribute | description | default value |
| - | :- | :-:
| `i` | stellar inclination in degrees | `{i}` |
| `p` | stellar rotation period in days | `{p}`|
| `u` | limb darkening coefficient vector | `{u}` |
""".format(
**defaults
)
)
)
```
Here's the light curve in parts per thousand:
```
plt.plot(t, 1e3 * flux[0])
plt.xlabel("rotations")
plt.ylabel("relative flux [ppt]")
plt.show()
```
### Sampling in flux
We can also sample in flux directly:
```
flux = sp.sample(t, nsamples=50).eval()
flux
```
where again it's useful to note the shape of the returned quantity, `(number of samples, number of time points)`:
```
flux.shape
```
Here are all 50 light curves plotted on the same scale:
```
fig, ax = plt.subplots(10, 5, figsize=(12, 8), sharex=True, sharey=True)
ax = ax.flatten()
for k in range(50):
ax[k].plot(t, 1e3 * flux[k], lw=0.5)
ax[k].axis("off")
```
## Doing inference
We can also do inference using time-variable `StarryProcess` models. Let's do a mock ensemble analysis on the 50 light curves we generated above. First, let's add some observation noise. Here's what the first "observed" light curve looks like:
```
ferr = 1e-3
np.random.seed(0)
f = flux + ferr * np.random.randn(50, len(t))
plt.plot(t, flux[0], "C0-", lw=0.75, alpha=0.5)
plt.plot(t, f[0], "C0.", ms=3)
plt.xlabel("time [days]")
plt.ylabel("relative flux [ppt]")
plt.show()
```
Now, let's try to infer the timescale of the generating process. For simplicity, we'll keep all other parameters fixed at their default (and in this case, true) values. As in the [Quickstart](Quickstart.ipynb) tutorial, we compile the likelihood function using `theano`. It will accept two inputs, a light curve and a timescale, and will return the corresponding log likelihood. To make this example run a little faster, we'll also downsample the light curves by a factor of 5 (not recommended in practice! We should never throw out information!)
```
f_tensor = tt.dvector()
tau_tensor = tt.dscalar()
log_likelihood = theano.function(
[f_tensor, tau_tensor],
StarryProcess(tau=tau_tensor).log_likelihood(t[::5], f_tensor[::5], ferr ** 2),
)
```
Compute the joint likelihood of all datasets:
```
tau = np.linspace(0, 50, 100)
ll = np.zeros_like(tau)
for k in tqdm(range(len(tau))):
ll[k] = np.sum([log_likelihood(f[n], tau[k]) for n in range(50)])
```
Following the same steps as in the [Quickstart](Quickstart.ipynb) tutorial, we can convert this into a posterior distribution by normalizing it (and implicitly assuming a uniform prior over `tau`):
```
likelihood = np.exp(ll - np.max(ll))
prob = likelihood / np.trapz(likelihood, tau)
plt.plot(tau, prob, label="posterior")
plt.axvline(25, color="C1", label="truth")
plt.legend()
plt.ylabel("probability density")
plt.xlabel("variability timescale [days]")
plt.show()
```
As expected, we correctly infer the timescale of variability.
## Importing Necessary Libraries
```
import cv2
import matplotlib.pyplot as plt
import seaborn as sns
import os
from PIL import Image
from keras.preprocessing.image import img_to_array
from keras.preprocessing.image import load_img
from keras.utils import np_utils
infected_data = os.listdir('cell_images/Parasitized/')
print(infected_data[:10])
uninfected_data = os.listdir('cell_images/Uninfected')
print('\n')
print(uninfected_data[:10])
```
## Visualization of Infected and Uninfected Cells
1. Infected Data
```
plt.figure(figsize = (12,12))
for i in range(4):
plt.subplot(1, 4, i+1)
img = cv2.imread('/content/Extracting_folder/cell_images/Parasitized' + "/" + infected_data[i])
plt.imshow(img)
plt.title('INFECTED : 1')
plt.tight_layout()
plt.show()
```
2. Uninfected Data
```
plt.figure(figsize = (12,12))
for i in range(4):
plt.subplot(1, 4, i+1)
img = cv2.imread('/content/Extracting_folder/cell_images/Uninfected' + "/" + uninfected_data[i])
plt.imshow(img)
    plt.title('UNINFECTED : 0')
plt.tight_layout()
plt.show()
data = []
labels = []
for img in infected_data:
try:
img_read = plt.imread('/content/Extracting_folder/cell_images/Parasitized' + "/" + img)
img_resize = cv2.resize(img_read, (50, 50))
img_array = img_to_array(img_resize)
        img_array = img_array/255  # normalize pixel values to [0, 1]
data.append(img_array)
labels.append(1)
except:
None
for img in uninfected_data:
try:
img_read = plt.imread('/content/Extracting_folder/cell_images/Uninfected' + "/" + img)
img_resize = cv2.resize(img_read, (50, 50))
img_array = img_to_array(img_resize)
img_array= img_array/255
data.append(img_array)
labels.append(0)
except:
None
plt.imshow(data[0])
plt.show()
import numpy as np
image_data = np.array(data)
labels = np.array(labels)
idx = np.arange(image_data.shape[0])
np.random.shuffle(idx)
image_data = image_data[idx]
labels = labels[idx]
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(image_data, labels, test_size = 0.2, random_state = 42)
y_train = np_utils.to_categorical(y_train, 2)
y_test = np_utils.to_categorical(y_test, 2)
print(f'Shape of training image : {x_train.shape}')
print(f'Shape of testing image : {x_test.shape}')
print(f'Shape of training labels : {y_train.shape}')
print(f'Shape of testing labels : {y_test.shape}')
```
## The Architecture of the CNN model
```
import keras
from keras.layers import Dense, Conv2D
from keras.layers import Flatten
from keras.layers import MaxPooling2D, GlobalAveragePooling2D
from keras.layers import Activation
from keras.layers import BatchNormalization
from keras.layers import Dropout
from keras.models import Sequential
from keras import backend as K
from keras import optimizers
inputShape= (50,50,3)
model=Sequential()
model.add(Conv2D(32, (3,3), activation = 'relu', input_shape = inputShape))
model.add(MaxPooling2D(2,2))
model.add(BatchNormalization(axis =-1))
model.add(Dropout(0.2))
model.add(Conv2D(32, (3,3), activation = 'relu'))
model.add(MaxPooling2D(2,2))
model.add(BatchNormalization(axis = -1))
model.add(Dropout(0.2))
model.add(Conv2D(32, (3,3), activation = 'relu'))
model.add(MaxPooling2D(2,2))
model.add(BatchNormalization(axis = -1))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(512, activation = 'relu'))
model.add(BatchNormalization(axis = -1))
model.add(Dropout(0.5))
model.add(Dense(2, activation = 'softmax'))
model.summary()
#compile the model
model.compile(loss = 'categorical_crossentropy', optimizer = 'Adam', metrics = ['accuracy'])
H = model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=25)
print(H.history.keys())
# summarize history for accuracy
plt.plot(H.history['accuracy'])
plt.plot(H.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train','test'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(H.history['loss'])
plt.plot(H.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train','test'], loc='upper right')
plt.show()
# make predictions on the test set
preds = model.predict(x_test)
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test.argmax(axis=1), preds.argmax(axis=1)))
from sklearn.metrics import classification_report
print(classification_report(y_test.argmax(axis=1), preds.argmax(axis=1)))
def plot_confusion_matrix(y_true, y_pred, classes,
normalize=False,
title=None,
cmap=plt.cm.Blues):
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels
if not title:
if normalize:
title = 'Normalized confusion matrix'
else:
title = 'Confusion matrix, without normalization'
# Compute confusion matrix
cm = confusion_matrix(y_true, y_pred)
# Only use the labels that appear in the data
classes = classes[unique_labels(y_true, y_pred)]
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
fig, ax = plt.subplots()
im = ax.imshow(cm, interpolation='nearest', cmap=cmap)
ax.figure.colorbar(im, ax=ax)
ax.set(xticks=np.arange(cm.shape[1]),
yticks=np.arange(cm.shape[0]),
xticklabels=classes, yticklabels=classes,
title=title,
ylabel='True label',
xlabel='Predicted label')
plt.setp(ax.get_xticklabels(), rotation=45, ha="right",
rotation_mode="anchor")
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i in range(cm.shape[0]):
for j in range(cm.shape[1]):
ax.text(j, i, format(cm[i, j], fmt),
ha="center", va="center",
color="white" if cm[i, j] > thresh else "black")
fig.tight_layout()
return ax
class_names=np.array((0,1))
plot_confusion_matrix(y_test.argmax(axis=1), preds.argmax(axis=1), classes=class_names, title='Confusion Matrix')
```
# Predict house prices: regression
In a regression problem, we aim to predict the output of a continuous value, like a price or a probability. Contrast this with a classification problem, where we aim to predict a discrete label (for example, whether a picture contains an apple or an orange).
This notebook builds a model to predict the median price of homes in a Boston suburb during the mid-1970s. To do this, we'll provide the model with some data points about the suburb, such as the crime rate and the local property tax rate.
```
from __future__ import absolute_import, division, print_function
import tensorflow as tf
from tensorflow import keras
import numpy as np
print(tf.__version__)
```
## The Boston Housing Prices dataset
This dataset is accessible directly in TensorFlow. Download and shuffle the training set:
```
boston_housing = keras.datasets.boston_housing
(train_data, train_labels), (test_data, test_labels) = boston_housing.load_data()
# Shuffle the training set
order = np.argsort(np.random.random(train_labels.shape))
train_data = train_data[order]
train_labels = train_labels[order]
```
### Examples and features
This dataset is much smaller than the others we've worked with so far: its 506 total examples are split between 404 training examples and 102 test examples:
```
print("Training set: {}".format(train_data.shape)) # 404 examples, 13 features
print("Testing set: {}".format(test_data.shape)) # 102 examples, 13 features
```
The dataset contains 13 different features:
1. Per capita crime rate.
2. The proportion of residential land zoned for lots over 25,000 square feet.
3. The proportion of non-retail business acres per town.
4. Charles River dummy variable (= 1 if tract bounds river; 0 otherwise).
5. Nitric oxides concentration (parts per 10 million).
6. The average number of rooms per dwelling.
7. The proportion of owner-occupied units built before 1940.
8. Weighted distances to five Boston employment centers.
9. Index of accessibility to radial highways.
10. Full-value property-tax rate per $10,000.
11. Pupil-teacher ratio by town.
12. 1000 * (Bk - 0.63) ** 2 where Bk is the proportion of Black people by town.
13. Percentage lower status of the population.
Each one of these input data features is stored using a different scale. Some features are represented by a proportion between 0 and 1, other features are ranges between 1 and 12, some are ranges between 0 and 100, and so on. This is often the case with real-world data, and understanding how to explore and clean such data is an important skill to develop.
```
print(train_data[0]) # Display sample features, notice the different scales
```
Use the pandas library to display the first few rows of the dataset in a nicely formatted table:
```
import pandas as pd
column_names = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD',
'TAX', 'PTRATIO', 'B', 'LSTAT']
df = pd.DataFrame(train_data, columns=column_names)
df.head()
```
### Labels
The labels are the house prices in thousands of dollars. (You may notice the mid-1970s prices.)
```
print(train_labels[0:10]) # Display first 10 entries
```
## Normalize features
It's recommended to normalize features that use different scales and ranges. For each feature, subtract the mean of the feature and divide by the standard deviation:
```
# Test data is *not* used when calculating the mean and std
mean = train_data.mean(axis=0)
std = train_data.std(axis=0)
train_data = (train_data - mean) / std
test_data = (test_data - mean) / std
print(train_data[0]) # First training sample, normalized
```
Although the model might converge without feature normalization, it makes training more difficult, and it makes the resulting model more dependent on the choice of units used in the input.
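For reference, the standardization applied in the cell above can be written per feature $j$ (with $\mu_j$ and $\sigma_j$ computed from the training data only, exactly as in the code):

$$\tilde{x}_j = \frac{x_j - \mu_j}{\sigma_j}$$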
## Create the model
Let's build our model. Here, we'll use a Sequential model with two densely connected hidden layers, and an output layer that returns a single, continuous value. The model-building steps are wrapped in a function, build_model, since we'll create a second model later on.
```
def build_model():
model = keras.Sequential([
keras.layers.Dense(64, activation=tf.nn.relu,
input_shape=(train_data.shape[1],)),
keras.layers.Dense(64, activation=tf.nn.relu),
keras.layers.Dense(1)
])
optimizer = tf.train.RMSPropOptimizer(0.001)
model.compile(loss='mse',
optimizer=optimizer,
metrics=['mae'])
return model
model = build_model()
model.summary()
```
## Train the model
The model is trained for 500 epochs, and the training and validation accuracy are recorded in the history object.
```
# Display training progress by printing a single dot for each completed epoch
class PrintDot(keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs):
if epoch % 100 == 0: print('')
print('.', end='')
EPOCHS = 500
# Store training stats
history = model.fit(train_data, train_labels, epochs=EPOCHS,
validation_split=0.2, verbose=0,
callbacks=[PrintDot()])
history_dict = history.history
history_dict.keys()
```
Visualize the model's training progress using the stats stored in the history object. We want to use this data to determine how long to train before the model stops making progress.
```
import matplotlib.pyplot as plt
%matplotlib inline
def plot_history(history):
plt.figure()
plt.xlabel('Epoch')
plt.ylabel('Mean Abs Error [1000$]')
plt.plot(history.epoch, np.array(history.history['mean_absolute_error']),
label='Train Loss')
plt.plot(history.epoch, np.array(history.history['val_mean_absolute_error']),
label = 'Val loss')
plt.legend()
plt.ylim([0, 5])
plot_history(history)
```
This graph shows little improvement in the model after about 200 epochs. Let's update the `model.fit` call to automatically stop training when the validation score doesn't improve. We'll use a callback that tests a training condition every epoch: if a set number of epochs elapses without improvement, training stops automatically.
```
model = build_model()
# The patience parameter is the amount of epochs to check for improvement
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=20)
history = model.fit(train_data, train_labels, epochs=EPOCHS,
validation_split=0.2, verbose=0,
callbacks=[early_stop, PrintDot()])
plot_history(history)
```
The graph shows the average error is about $2,500. Is this good? Well, $2,500 is not an insignificant amount when some of the labels are only $15,000.
Let's see how the model performs on the test set:
```
[loss, mae] = model.evaluate(test_data, test_labels, verbose=0)
print("Testing set Mean Abs Error: ${:7.2f}".format(mae * 1000))
```
## Predict
Finally, predict some housing prices using data in the testing set:
```
test_predictions = model.predict(test_data).flatten()
plt.scatter(test_labels, test_predictions)
plt.xlabel('True Values [1000$]')
plt.ylabel('Predictions [1000$]')
plt.axis('equal')
plt.xlim(plt.xlim())
plt.ylim(plt.ylim())
_ = plt.plot([-100, 100], [-100, 100])
error = test_predictions - test_labels
plt.hist(error, bins = 50)
plt.xlabel("Prediction Error [1000$]")
_ = plt.ylabel("Count")
```
## Conclusion
This notebook introduced a few techniques to handle a regression problem.
Mean Squared Error (MSE) is a common loss function used for regression problems (different than classification problems).
Similarly, evaluation metrics used for regression differ from classification. A common regression metric is Mean Absolute Error (MAE).
When input data features have values with different ranges, each feature should be scaled independently.
If there is not much training data, prefer a small network with few hidden layers to avoid overfitting.
Early stopping is a useful technique to prevent overfitting.
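For concreteness, the two error measures mentioned above are defined, for predictions $\hat{y}_i$ and true labels $y_i$ over $n$ examples, as

$$\text{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2, \qquad \text{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right|$$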
Authentication is a difficult topic fraught with potential pitfalls and complicated configuration options. Panel aims to be a "batteries-included" package for building applications and dashboards and therefore ships with a number of inbuilt providers for authentication in an application.
The primary mechanism by which Panel performs authentication is [OAuth 2.0](https://oauth.net/2/). The official specification for OAuth 2.0 describes the protocol as follows:
The OAuth 2.0 authorization framework enables a third-party
application to obtain limited access to an HTTP service, either on
behalf of a resource owner by orchestrating an approval interaction
between the resource owner and the HTTP service, or by allowing the
third-party application to obtain access on its own behalf.
In other words OAuth outsources authentication to a third party provider, e.g. GitHub, Google or Azure AD, to authenticate the user credentials and give limited access to the APIs of that service.
Note that since Panel is built on Bokeh server and Tornado it is also possible to implement your own authentication independent of the OAuth components shipped with Panel, [see the Bokeh documentation](https://docs.bokeh.org/en/latest/docs/user_guide/server.html#authentication) for further information.
## Configuring OAuth
The OAuth component will stop any user from accessing the application before first logging into the selected provider. The configuration to set up OAuth is all handled via the global `pn.config` object, which has a number of OAuth-related parameters. When launching the application via the `panel serve` CLI command these config options can be set as CLI arguments or environment variables; when using the `pn.serve` function, on the other hand, these variables can be passed in as arguments.
### `oauth_provider`
The first step in configuring OAuth is to specify an OAuth provider. Panel ships with a number of providers by default:
* `azure`: Azure Active Directory
* `bitbucket`: Bitbucket
* `github`: GitHub
* `gitlab`: GitLab
* `google`: Google
* `okta`: Okta
We will go through the process of configuring each of these individually later, but for now all we need to know is that the `oauth_provider` can be set on the command line using the `--oauth-provider` CLI argument to `panel serve` or the `PANEL_OAUTH_PROVIDER` environment variable.
Examples:
```
panel serve oauth_example.py --oauth-provider=...
PANEL_OAUTH_PROVIDER=... panel serve oauth_example.py
```
### `oauth_key` and `oauth_secret`
To authenticate with an OAuth provider we generally require two pieces of information (although some providers will require more customization):
1. The Client ID is a public identifier for apps.
2. The Client Secret is a secret known only to the application and the authorization server.
These can be configured in a number of ways: the client ID and client secret can be supplied to the `panel serve` command as the `--oauth-key` and `--oauth-secret` CLI arguments or the `PANEL_OAUTH_KEY` and `PANEL_OAUTH_SECRET` environment variables respectively.
Examples:
```
panel serve oauth_example.py --oauth-key=... --oauth-secret=...
PANEL_OAUTH_KEY=... PANEL_OAUTH_SECRET=... panel serve oauth_example.py ...
```
### `oauth_extra_params`
Some OAuth providers will require some additional configuration options which will become part of the OAuth URLs. The `oauth_extra_params` configuration variable allows providing this additional information and can be set using the `--oauth-extra-params` CLI argument or `PANEL_OAUTH_EXTRA_PARAMS`.
Examples:
```
panel serve oauth_example.py --oauth-extra-params={'tenant_id': ...}
PANEL_OAUTH_EXTRA_PARAMS={'tenant_id': ...} panel serve oauth_example.py ...
```
### `cookie_secret`
Once authenticated the user information and authorization token will be set as secure cookies. Cookies are not secure and can easily be modified by clients. A secure cookie ensures that the user information cannot be interfered with or forged by the client by signing it with a secret key. Note that secure cookies guarantee integrity but not confidentiality. That is, the cookie cannot be modified but its contents can be seen by the user. To generate a `cookie_secret` use the `panel secret` CLI argument or generate some other random non-guessable string, ideally with at least 256-bits of entropy.
To set the `cookie_secret` supply `--cookie-secret` as a CLI argument or set the `PANEL_COOKIE_SECRET` environment variable.
Examples:
```
panel serve oauth_example.py --cookie-secret=...
PANEL_COOKIE_SECRET=... panel serve oauth_example.py ...
```
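If you would rather generate the secret in Python than via the `panel secret` command, any cryptographically random string of sufficient entropy will do; a minimal sketch using the standard-library `secrets` module:
```python
import secrets

# 32 random bytes ~ 256 bits of entropy, URL-safe encoded
print(secrets.token_urlsafe(32))
```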
### `oauth_expiry`
The OAuth expiry configuration value determines how long an OAuth token will remain valid once it has been issued. By default it is valid for 1 day, but this may be overridden by providing the duration in days (decimal values are allowed).
To set the `oauth_expiry` supply `--oauth-expiry-days` as a CLI argument or set the `PANEL_OAUTH_EXPIRY` environment variable.
Examples:
```
panel serve oauth_example.py --oauth-expiry-days=...
PANEL_OAUTH_EXPIRY=... panel serve oauth_example.py ...
```
### Encryption
The architecture of the Bokeh/Panel server means that credentials stored as cookies can leak in a number of ways. On the initial HTTP(S) request the server will respond with the HTML document that renders the application, and this will include an unencrypted token containing the OAuth information. To ensure that the user information and access token are properly encrypted we rely on the Fernet encryption in the `cryptography` library. You can install it with `pip install cryptography` or `conda install cryptography`.
Once installed you will be able to generate an encryption key with `panel oauth-secret`. This will generate a secret you can pass to the `panel serve` CLI command using the ``--oauth-encryption-key`` argument or the `PANEL_OAUTH_ENCRYPTION` environment variable.
Examples:
```
panel serve oauth_example.py --oauth-encryption-key=...
PANEL_OAUTH_ENCRYPTION=... panel serve oauth_example.py ...
```
### Redirect URI
Once the OAuth provider has authenticated a user it has to redirect them back to the application; this is what is known as the redirect URI. For security reasons this has to match the URL registered with the OAuth provider exactly. By default Panel will redirect the user straight back to the original URL of your app, e.g. when you're hosting your app at `https://myapp.myprovider.com` Panel will use that as the redirect URI. However, in certain scenarios you may override this to provide a specific redirect URI. This can be achieved with the `--oauth-redirect-uri` CLI argument or the `PANEL_OAUTH_REDIRECT_URI` environment variable.
Examples:
```
panel serve oauth_example.py --oauth-redirect-uri=...
PANEL_OAUTH_REDIRECT_URI=... panel serve oauth_example.py
```
### Summary
A fully configured OAuth configuration may look like this:
```
panel serve oauth_example.py --oauth-provider=github --oauth-key=... --oauth-secret=... --cookie-secret=... --oauth-encryption-key=...
PANEL_OAUTH_PROVIDER=... PANEL_OAUTH_KEY=... PANEL_OAUTH_SECRET=... PANEL_COOKIE_SECRET=... PANEL_OAUTH_ENCRYPTION=... panel serve oauth_example.py ...
```
## Accessing OAuth information
Once a user is authorized with the chosen OAuth provider, certain user information and an `access_token` will be available in the application to customize the user experience. Like all other global state, this may be accessed on the `pn.state` object; specifically, it makes three attributes available (a short usage sketch follows the list):
* **`pn.state.user`**: A unique name, email or ID that identifies the user.
* **`pn.state.access_token`**: The access token issued by the OAuth provider to authorize requests to its APIs.
* **`pn.state.user_info`**: Additional user information provided by the OAuth provider. This may include names, email, APIs to request further user information, IDs and more.
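As a rough illustration, a served app could greet the authenticated user like this; the layout and widgets here are arbitrary choices, only the `pn.state.user` / `pn.state.user_info` access is the point:

```python
import panel as pn

pn.extension()

# Fall back to sensible defaults when the app is run without authentication
user = pn.state.user or "guest"
info = pn.state.user_info or {}

app = pn.Column(
    f"## Welcome, {user}!",
    pn.pane.JSON(info, name="User info"),
)

app.servable()
```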
## OAuth Providers
Panel provides a number of inbuilt OAuth providers; the list is below:
### **Azure Active Directory**
To set up OAuth 2.0 authentication for Azure Active Directory follow [these instructions](https://docs.microsoft.com/en-us/azure/api-management/api-management-howto-protect-backend-with-aad). In addition to the `oauth_key` and `oauth_secret`, ensure that you also supply the tenant ID using `oauth_extra_params`, e.g.:
```
panel serve oauth_test.py --oauth-extra-params="{'tenant': '...'}"
PANEL_OAUTH_EXTRA_PARAMS="{'tenant': '...'}" panel serve oauth_example.py ...
```
### **Bitbucket**
Bitbucket provides instructions on [setting up an OAuth consumer](https://support.atlassian.com/bitbucket-cloud/docs/use-oauth-on-bitbucket-cloud/). Follow these and then supply the `oauth_key` and `oauth_secret` to Panel as described above.
### **GitHub**
GitHub provides detailed instructions on [creating an OAuth app](https://developer.github.com/apps/building-oauth-apps/creating-an-oauth-app/). Follow these and then supply the `oauth_key` and `oauth_secret` to Panel as described above.
### **GitLab**
GitLab provides a detailed guide on [configuring an OAuth](https://docs.gitlab.com/ee/api/oauth2.html) application. In addition to the `oauth_key` and `oauth_secret` you will also have to supply a custom url using the `oauth_extra_params` if you have a custom GitLab instance (the default `oauth_extra_params={'url': 'gitlab.com'}`).
### **Google**
Google provides a guide about [configuring a OAuth application](https://developers.google.com/identity/protocols/oauth2/native-app). By default nothing except the `oauth_key` and `oauth_secret` are required but to access Google services you may also want to override the default `scope` via the `oauth_extra_params`.
### **Okta**
Okta provides a guide about [configuring OAuth2](https://developer.okta.com/docs/concepts/oauth-openid/). You must provide an `oauth_key` and `oauth_secret`, but in most ordinary setups you will also have to provide a `url` via the `oauth_extra_params`, and if you have set up a custom authentication server (i.e. not 'default') with Okta you must also provide 'server'. The `oauth_extra_params` should then look something like this: `{'server': 'custom', 'url': 'dev-***.okta.com'}`
### Plugins
The Panel OAuth providers are pluggable; in other words, downstream libraries may define their own Tornado `RequestHandler` to be used with Panel. To register such a component, the `setup.py` of the downstream package should register an entry_point that Panel can discover. To read more about entry points see the [Python documentation](https://packaging.python.org/specifications/entry-points/). A custom OAuth request handler in your library may be registered as follows:
```python
entry_points={
'panel.auth': [
"custom = my_library.auth:MyCustomOAuthRequestHandler"
]
}
```
# HW 7: Design a Controller
We are going to control the motion of a rectangular "paddle" by implementing a position-velocity controller.
The pose of the paddle can be described as $(x,y,\theta)$, where $x,y$ are the coordinates of the center of mass.
Thus, the state of the control problem is ${\bf x} = (x, y, \theta, \dot x, \dot y, \dot \theta)$
The control inputs are $(u_x, u_y, u_{th}) = (F_{app_x}, F_{app_y},\tau_{app})$.
* The forces $(F_{app_x}, F_{app_y})$ are *applied to the center of mass* and do not cause any angular acceleration.
* Similarly, the torque $(\tau_{app})$ is an external moment *applied to the center of mass*.
The measured output fed back to the controller is simply the full state.
-------------------------------------
As discussed in class, this system is *decoupled*. The control problem can be separated into the horizontal control problem, the vertical control problem, and the rotational control problem.
Hit the play button to run a simulation. A screen should pop up where the position of the paddle is relatively stable.
If you look at the code in the `closedLoopController` function, you will see that the only control input is $u_y=K_{p_y} \cdot g = m\cdot g$. Contrary to the function name, this is a feed forward controller that applies a force upwards to counteract gravity.
Your job is to modify the code in this function to implement a position-velocity controller.
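As a reminder of the general form (this is the standard textbook expression, not a prescribed solution), a position-velocity (PD-style) controller for one decoupled axis computes its input from the position and velocity errors; for the horizontal axis, for example:

$$u_x = K_{p_x}\,(r_x - x) + K_{d_x}\,(\dot r_x - \dot x)$$

with analogous expressions for $u_y$ (on top of the gravity feed-forward term already in the code) and $u_{th}$; the gains are yours to choose and tune.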
```
import tutorial; reload(tutorial); from tutorial import *
# starting position of the paddle at t=0
initial_pose = (15, 12, 0.0)
# desired position of the paddle (reference signal)
desired_pose = (15, 12, 0.0)
# desired velocity of the paddle (reference signal).
# when this is set to 0 we want the paddle to stop at our desired_pose.
desired_vel = (0, 0, 0)
# our desired state specifies a pose and velocity of the paddle (x,y,th,dx,dy,dth)
desired_state = desired_pose + desired_vel
# system parameters (do not change)
m = bodies['robot'].mass
I = bodies['robot'].inertia
g = 9.81
# example gain parameter for vertical proportional control
K_py = m*1
def closedLoopController (time, robot_state):
# the output signal
x, y, th, xdot, ydot, thdot = robot_state
# the reference signal
rx, ry, rth, rxdot, rydot, rthdot = desired_state
# the controller output
u_x = 0 #F_app_x
u_y = K_py*g #F_app_y currently set to adjust for gravity
u_th = 0 #\tau_app
return u_x, u_y, u_th
result = run_pd_control(initial_pose, closedLoopController)
plot(result, "Robot")
```
# Analyzing KDF Robustness to Label Noise
This tutorial seeks to expand on `bayeserrorestimate_gaussianparity.ipynb` by exploring a property of the Kernel Density Forest (KDF). Here, the goal is to explore the KDF's robustness to label contamination in the Gaussian XOR distribution compared to the Random Forest (RF) algorithm.
Recall from `bayeserrorestimate_gaussianparity.ipynb` that the **estimated Bayes Error is 0.267.**
```
# Created by: Jacob Desman
# Date: 2021-11-22
# Contact at: jake.m.desman@gmail.com
import numpy as np
import matplotlib.pyplot as plt
import random
import pandas as pd
import seaborn as sns
from kdg import kdf
from sklearn.ensemble import RandomForestClassifier as rf
from kdg.utils import generate_gaussian_parity
from functions.kdf_gaussian_xor_label_noise import plot_gaussians
from functions.kdf_gaussian_xor_label_noise import label_noise_trial
```
## Distributions of Interest
As an example, we show the Gaussian parity problem of interest below. The left figure shows the original distribution while the right figure illustrates 20% contamination in the labels.
```
# Show the sample blobs / Gaussians
fig, ax = plt.subplots(1, 2, figsize=(16, 8))
n_samples = 5000
# Generate original distribution
X, y = generate_gaussian_parity(n_samples, cluster_std=0.5)
plot_gaussians(X, y, ax=ax[0])
plt.gca().set_title('Uncontaminated Labels', fontsize=30)
# Randomly flip labels
p = 0.20
n_noise = np.int32(np.round(len(X) * p))
noise_indices = random.sample(range(len(X)), n_noise)
y[noise_indices] = 1 - y[noise_indices]
plot_gaussians(X, y, ax=ax[1])
plt.gca().set_title('20% Flipped Labels', fontsize=30)
```
## Experiment: Accuracy in Contaminated Environment
This will compare the error rate of the Kernel Density Forest algorithm to the Random Forest algorithm at different label contamination levels: 0%, 10%, 20%, 30%, and 40%. Multiple trials will be conducted at each contamination level.
```
df = pd.DataFrame()
reps = 10
n_estimators = 500
n_samples = 5000
err_kdf = []
err_rf = []
proportions = [0.0, 0.1, 0.2, 0.3, 0.4]
proportion_list = []
reps_list = []
for p in proportions:
for ii in range(reps):
err_kdf_i, err_rf_i = label_noise_trial(
n_samples=n_samples, p=p, n_estimators=n_estimators
)
err_kdf.append(err_kdf_i)
err_rf.append(err_rf_i)
reps_list.append(ii)
proportion_list.append(p)
# Construct DataFrame
df["reps"] = reps_list
df["proportion"] = proportion_list
df["error_kdf"] = err_kdf
df["error_rf"] = err_rf
err_kdf_med = []
err_kdf_25_quantile = []
err_kdf_75_quantile = []
err_rf_med = []
err_rf_25_quantile = []
err_rf_75_quantile = []
for p in proportions:
curr_kdf = df["error_kdf"][df["proportion"] == p]
curr_rf = df["error_rf"][df["proportion"] == p]
err_kdf_med.append(np.median(curr_kdf))
err_kdf_25_quantile.append(np.quantile(curr_kdf, [0.25])[0])
err_kdf_75_quantile.append(np.quantile(curr_kdf, [0.75])[0])
err_rf_med.append(np.median(curr_rf))
err_rf_25_quantile.append(np.quantile(curr_rf, [0.25])[0])
err_rf_75_quantile.append(np.quantile(curr_rf, [0.75])[0])
# Plotting
sns.set_context("talk")
fig, ax = plt.subplots(1, 1, figsize=(8, 8))
ax.plot(proportions, err_kdf_med, c="r", label="KDF")
ax.fill_between(
proportions, err_kdf_25_quantile, err_kdf_75_quantile, facecolor="r", alpha=0.3
)
ax.plot(proportions, err_rf_med, c="k", label="RF")
ax.fill_between(
proportions, err_rf_25_quantile, err_rf_75_quantile, facecolor="k", alpha=0.3
)
right_side = ax.spines["right"]
right_side.set_visible(False)
top_side = ax.spines["top"]
top_side.set_visible(False)
ax.set_xlabel("Label Noise Proportion")
ax.set_ylabel("Error")
plt.title("Gaussian Parity Label Noise")
ax.legend(frameon=False)
plt.show()
```
# Spark DataFrame - Basics
Let's start off with the fundamentals of Spark DataFrame.
Objective: In this exercise, you'll find out how to start a Spark session, read in data, explore the data and manipulate the data (using DataFrame syntax as well as SQL syntax). Let's get started!
```
# Must be included at the beginning of each new notebook. Remember to change the app name.
import findspark
findspark.init('/home/ubuntu/spark-2.1.1-bin-hadoop2.7')
import pyspark
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('basics').getOrCreate()
# Let's read in the data. Note that it's in the format of JSON.
df = spark.read.json('Datasets/people.json')
```
## Data Exploration
```
# The show method allows you to visualise DataFrames. We can see that there are two columns.
df.show()
# You could also try this.
df.columns
# We can use the describe method to get some general statistics on our data too. Remember to show the DataFrame!
# But what about data type?
df.describe().show()
# For type, we can use print schema.
# But wait! What if you want to change the format of the data? Maybe change age to an integer instead of long?
df.printSchema()
```
## Data Manipulation
```
# Let's import in the relevant types.
from pyspark.sql.types import (StructField,StringType,IntegerType,StructType)
# Then create a variable with the correct structure.
data_schema = [StructField('age',IntegerType(),True),
StructField('name',StringType(),True)]
final_struct = StructType(fields=data_schema)
# And now we can read in the data using that schema. If we print the schema, we can see that age is now an integer.
df = spark.read.json('Datasets/people.json', schema=final_struct)
df.printSchema()
# We can also select various columns from a DataFrame.
df.select('age').show()
# We could split up these steps, first assigning the output to a variable, then showing that variable. As you see, the output is the same.
ageColumn = df.select('age')
ageColumn.show()
# We can also add columns, manipulating the DataFrame.
df.withColumn('double_age',df['age']*2).show()
# But note that this doesn't alter the original DataFrame. You need to assign the output to a new variable in order to do so.
df.show()
# We can rename columns too!
df.withColumnRenamed('age', 'my_new_age').show()
```
## Introducing SQL
We can query a DataFrame as if it were a table! Let's see a few examples of that below:
```
# First, we have to register the DataFrame as a SQL temporary view.
df.createOrReplaceTempView('people')
# After that, we can use the SQL programming language for queries.
results = spark.sql("SELECT * FROM people")
results.show()
# Here's another example:
results = spark.sql("SELECT age FROM people WHERE age >= 19")
results.show()
```
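For comparison, the same age query can be expressed with DataFrame syntax instead of SQL (a small illustrative snippet, not part of the original notebook):
```
# Equivalent to: SELECT age FROM people WHERE age >= 19
df.filter(df['age'] >= 19).select('age').show()
```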
Now that we're done with this tutorial, let's move on to Spark DataFrame Operations!
```
import numpy as np
import pandas as pd
```
# Loading the Raw Data
```
pd.read_excel?
macro_df = pd.read_excel("../data/Residential_data_for_trainer.xlsx",
sheet_name="Macro",
na_values=[0])
macro_df.info()
macro_df.head()
macro_df.shape
households_df = pd.read_excel("../data/Residential_data_for_trainer.xlsx",
sheet_name="Households")
households_df.info()
households_df.head()
households_df.shape
_economies = (macro_df.loc[:, "Economies"]
.unique())
_economy_dfs = []
for _economy in _economies:
_df = pd.read_excel("../data/Residential_data_for_trainer.xlsx",
sheet_name=_economy,
converters={"Activity": lambda s: s.strip(), "Technology": lambda s: s.strip()},
na_values=[0])
# filter out the rows with actual data (don't need to use the egeda data)
_filter = (_df.loc[:, ["Activity", "Technology"]]
.notna()
.any(axis=1))
_clean_df = _df.loc[_filter, :]
_economy_dfs.append(_clean_df)
economy_df = pd.concat(_economy_dfs, ignore_index=True)
economy_df.info()
economy_df.head()
economy_df.tail()
```
# Make the data "tidy"
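Each sheet is reshaped the same way: drop the metadata columns, `melt` the year columns from wide to long format, then `pivot_table` back so that each variable becomes a column indexed by `(economy, year)`. A tiny illustrative example of that melt/pivot round trip (toy data, not from the workbook):
```
toy = pd.DataFrame({"economy": ["AUS", "JPN"], "variable": ["GDP", "GDP"],
                    2000: [1.0, 4.0], 2001: [1.1, 4.2]})
toy_long = pd.melt(toy, id_vars=["economy", "variable"], var_name="year")
toy_tidy = toy_long.pivot_table(index=["economy", "year"], columns="variable", values="value")
print(toy_tidy)
```
The same pattern is now applied to the real sheets below.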
```
macro_df.head()
_wide_df = (macro_df.drop(["Parameter", "Unit", "Unnamed: 75"], axis=1)
                    .rename(columns={"Code_Name": "variable", "Economies": "economy"}))
_long_df = pd.melt(_wide_df, id_vars=["economy", "variable"], var_name="year")
# inspect the intermediate wide and long frames now that they exist
_wide_df.head()
_long_df.head()
tidy_macro_df = _long_df.pivot_table(index=["economy", "year"], columns="variable", values="value")
tidy_macro_df.info()
tidy_macro_df.head()
tidy_macro_df.tail()
tidy_macro_df.to_csv("../data/interim/tidy-macro.csv")
_wide_df = (households_df.drop(["Parameter"], axis=1)
.rename(columns={"Code_Name": "variable", "Economies": "economy"}))
_long_df = pd.melt(_wide_df,
id_vars=["economy", "variable"],
var_name="year")
tidy_households_df = _long_df.pivot_table(index=["economy", "year"],
columns="variable",
values="value")
tidy_households_df.info()
tidy_households_df.head()
tidy_households_df.tail()
tidy_households_df.to_csv("../data/interim/tidy-households.csv")
_wide_df = (economy_df.drop(["Unit"], axis=1)
.rename(columns={"Economy": "economy", "Fuel": "fuel", "Activity": "activity", "Technology": "technology"})
.groupby(["economy", "activity"])
.sum()
.replace(0.0, np.nan)
.reset_index())
_long_df = pd.melt(_wide_df,
id_vars=["economy", "activity"],
var_name="year")
tidy_economy_df = _long_df.pivot_table(index=["economy", "year"],
columns=["activity"],
values="value")
tidy_economy_df.info()
tidy_economy_df.head()
tidy_economy_df.tail()
tidy_economy_df.to_csv("../data/interim/tidy-economies.csv")
```
```
import torch
import os, sys
import numpy as np
HOME_DIRECTORY=os.path.abspath(os.path.join(os.getcwd(), os.pardir))
os.chdir(HOME_DIRECTORY)
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" # sync ids with nvidia-smi
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
os.environ["MKL_SERVICE_FORCE_INTEL"]="1"
# script params
port=5015
sampling_fn="uncertainty"
lSet_partition=1
base_seed=1
num_GPU=1
al_iterations=4
num_aml_trials=5 #50
budget_size=2500
dataset="CIFAR10"
init_partition=10
step_partition=10
clf_epochs=5 #150
num_classes=10
swa_lr=5e-4
swa_freq=50
swa_epochs=5 #50
log_iter=40
#Data arguments
train_dir=f"{HOME_DIRECTORY}/data/{dataset}/train-{dataset}/"
test_dir=f"{HOME_DIRECTORY}/data/{dataset}/test-{dataset}/"
lSetPath=f"{HOME_DIRECTORY}/data/{dataset}/partition_{lSet_partition}/lSet_{dataset}.npy"
uSetPath=f"{HOME_DIRECTORY}/data/{dataset}/partition_{lSet_partition}/uSet_{dataset}.npy"
valSetPath=f"{HOME_DIRECTORY}/data/{dataset}/partition_{lSet_partition}/valSet_{dataset}.npy"
out_dir=f"{HOME_DIRECTORY}/sample_budgetsize_results"
model_style="vgg_style"
model_type="vgg"
model_depth=16
# It is important that the results directory is specific to this budget-size experiment.
# For example, if we don't take care of the save path and it still points to the 10% budget-size
# experiment, running AL for 15% causes no issues, but once we go to 20% there are earlier results
# in that directory which should not be reused. So for such experiments we simply copy the base
# classifier (trained on the initial labeled data) to a new directory and then run the experiments.
!mkdir -p $out_dir/best_automl_results/lSet_1/start_1/CIFAR10/10.0/vgg_depth_16
print("Made best_automl_results directory")
print("Copying base classifier started....")
!scp -r sample_results_aml/best_automl_results/lSet_1/start_1/CIFAR10/10.0/vgg_depth_16/vanilla $out_dir/best_automl_results/lSet_1/start_1/CIFAR10/10.0/vgg_depth_16/
print("Copying base classifier finished!")
# DO the copy again but this time for automl_results
!mkdir -p $out_dir/auto_ml_results/lSet_1/start_1/CIFAR10/10.0/vgg_depth_16/vanilla/trial-0
print("Made auto_ml_results directory")
print("Copying base classifier checkpoints and config started....")
!scp -r sample_results_aml/best_automl_results/lSet_1/start_1/CIFAR10/10.0/vgg_depth_16/vanilla/config.yaml $out_dir/auto_ml_results/lSet_1/start_1/CIFAR10/10.0/vgg_depth_16/vanilla/trial-0/
!scp -r sample_results_aml/best_automl_results/lSet_1/start_1/CIFAR10/10.0/vgg_depth_16/vanilla/checkpoints $out_dir/auto_ml_results/lSet_1/start_1/CIFAR10/10.0/vgg_depth_16/vanilla/trial-0/
print("Copying finished!")
print("""
Please remember to change paths in config file.
For example do replace each "sample_results_aml" occurences in paths to "sample_budgetsize_results" # old directory name to new directory name
""")
# Please remember to change paths in config file
# For example do replace each "sample_results_aml" occurences in paths to "sample_budgetsize_results" # old directory name to new directory name
# Please also modify budget size to 2500 in config.py
!python3 $HOME_DIRECTORY/tools/main_aml.py --n_GPU $num_GPU \
--port $port --sampling_fn $sampling_fn --lSet_partition $lSet_partition \
--seed_id $base_seed \
--init_partition $init_partition --step_partition $step_partition \
--dataset $dataset --budget_size $budget_size \
--out_dir $out_dir \
--num_aml_trials $num_aml_trials --num_classes $num_classes \
--al_max_iter $al_iterations \
--model_type $model_type --model_depth $model_depth \
--clf_epochs $clf_epochs \
--eval_period 1 --checkpoint_period 1 \
--lSetPath $lSetPath --uSetPath $uSetPath --valSetPath $valSetPath \
--train_dir $train_dir --test_dir $test_dir \
--dropout_iterations 25 \
--cfg configs/$dataset/$model_style/$model_type/R-18_4gpu_unreg.yaml \
--vaal_z_dim 32 --vaal_vae_bs 64 --vaal_epochs 2 \
--vaal_vae_lr 5e-4 --vaal_disc_lr 5e-4 --vaal_beta 1.0 --vaal_adv_param 1.0
```
|
github_jupyter
|
import torch
import os, sys
import numpy as np
HOME_DIRECTORY=os.path.abspath(os.path.join(os.getcwd(), os.pardir))
os.chdir(HOME_DIRECTORY)
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" # sync ids with nvidia-smi
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
os.environ["MKL_SERVICE_FORCE_INTEL"]="1"
# script params
port=5015
sampling_fn="uncertainty"
lSet_partition=1
base_seed=1
num_GPU=1
al_iterations=4
num_aml_trials=5 #50
budget_size=2500
dataset="CIFAR10"
init_partition=10
step_partition=10
clf_epochs=5 #150
num_classes=10
swa_lr=5e-4
swa_freq=50
swa_epochs=5 #50
log_iter=40
#Data arguments
train_dir=f"{HOME_DIRECTORY}/data/{dataset}/train-{dataset}/"
test_dir=f"{HOME_DIRECTORY}/data/{dataset}/test-{dataset}/"
lSetPath=f"{HOME_DIRECTORY}/data/{dataset}/partition_{lSet_partition}/lSet_{dataset}.npy"
uSetPath=f"{HOME_DIRECTORY}/data/{dataset}/partition_{lSet_partition}/uSet_{dataset}.npy"
valSetPath=f"{HOME_DIRECTORY}/data/{dataset}/partition_{lSet_partition}/valSet_{dataset}.npy"
out_dir=f"{HOME_DIRECTORY}/sample_budgetsize_results"
model_style="vgg_style"
model_type="vgg"
model_depth=16
# It is important to note that we should point results for budget size experiment.
# For example: If we don't take care of savepath & assume it points to 10% budget size experiment
# then running AL for 15% will have no issues but once we go to 20% - we have earlier results which
# should not be used. So for such experiments we just copy the base (trained on initial labeled data)
# classifier to new directory and then run any experiments.
!mkdir -p $out_dir/best_automl_results/lSet_1/start_1/CIFAR10/10.0/vgg_depth_16
print("Made best_automl_results directory")
print("Copying base classifier started....")
!scp -r sample_results_aml/best_automl_results/lSet_1/start_1/CIFAR10/10.0/vgg_depth_16/vanilla $out_dir/best_automl_results/lSet_1/start_1/CIFAR10/10.0/vgg_depth_16/
print("Copying base classifier finished!")
# Do the copy again, but this time for automl_results
!mkdir -p $out_dir/auto_ml_results/lSet_1/start_1/CIFAR10/10.0/vgg_depth_16/vanilla/trial-0
print("Made auto_ml_results directory")
print("Copying base classifier checkpoints and config started....")
!scp -r sample_results_aml/best_automl_results/lSet_1/start_1/CIFAR10/10.0/vgg_depth_16/vanilla/config.yaml $out_dir/auto_ml_results/lSet_1/start_1/CIFAR10/10.0/vgg_depth_16/vanilla/trial-0/
!scp -r sample_results_aml/best_automl_results/lSet_1/start_1/CIFAR10/10.0/vgg_depth_16/vanilla/checkpoints $out_dir/auto_ml_results/lSet_1/start_1/CIFAR10/10.0/vgg_depth_16/vanilla/trial-0/
print("Copying finished!")
print("""
Please remember to change paths in config file.
For example do replace each "sample_results_aml" occurences in paths to "sample_budgetsize_results" # old directory name to new directory name
""")
# Please remember to change paths in config file
# For example do replace each "sample_results_aml" occurences in paths to "sample_budgetsize_results" # old directory name to new directory name
# Please also modify budget size to 2500 in config.py
!python3 $HOME_DIRECTORY/tools/main_aml.py --n_GPU $num_GPU \
--port $port --sampling_fn $sampling_fn --lSet_partition $lSet_partition \
--seed_id $base_seed \
--init_partition $init_partition --step_partition $step_partition \
--dataset $dataset --budget_size $budget_size \
--out_dir $out_dir \
--num_aml_trials $num_aml_trials --num_classes $num_classes \
--al_max_iter $al_iterations \
--model_type $model_type --model_depth $model_depth \
--clf_epochs $clf_epochs \
--eval_period 1 --checkpoint_period 1 \
--lSetPath $lSetPath --uSetPath $uSetPath --valSetPath $valSetPath \
--train_dir $train_dir --test_dir $test_dir \
--dropout_iterations 25 \
--cfg configs/$dataset/$model_style/$model_type/R-18_4gpu_unreg.yaml \
--vaal_z_dim 32 --vaal_vae_bs 64 --vaal_epochs 2 \
--vaal_vae_lr 5e-4 --vaal_disc_lr 5e-4 --vaal_beta 1.0 --vaal_adv_param 1.0
| 0.297164 | 0.09277 |
# Data Obfuscation Library
Sharing data, creating documents and doing public demonstrations often require that data containing
PII or other sensitive material be obfuscated.
MSTICPy contains a simple library to obfuscate data using hashing and random mapping of values.
You can use these functions on single data items or entire DataFrames.
## Contents
- [Import the module](#Import-the-module)
- [Individual Obfuscation Functions](#Individual-Obfuscation-Functions)
- [Obfuscating DataFrames](#Obfuscating-DataFrames)
- [Creating custom column mappings](#Creating-custom-mappings)
- [Using hash_item with delimiters](#Using-hash_item-with-delimiters-to-preserve-the-structure/look-of-the-hashed-input)
- [Checking Your Obfuscation](#Checking-Your-Obfuscation)
## Import the module
```
import pandas as pd
from msticpy.common.utility import md
from msticpy.data import data_obfus
```
### Read in some data for the examples
```
netflow_df = pd.read_csv("data/az_net_flows.csv")
# list is imported as string from csv - convert back to list with eval
def str_to_list(val):
    if isinstance(val, str):
        return eval(val)
    return val  # leave non-string values (e.g. NaN) unchanged
netflow_df["PublicIPs"] = netflow_df["PublicIPs"].apply(str_to_list)
# Define subset of output columns
out_cols = [
'TenantId', 'TimeGenerated', 'FlowStartTime',
'ResourceGroup', 'VMName', 'VMIPAddress', 'PublicIPs',
'SrcIP', 'DestIP', 'L4Protocol', 'AllExtIPs'
]
netflow_df = netflow_df[out_cols]
```
## Individual Obfuscation Functions
Here we're importing individual functions but you can access them with the single
import statement above as:
```
data_obfus.hash_string(...)
```
etc.
> **Note** In the next cell we're using a function to output documentation and examples.<br>
> You can ignore this. The usage of each function is shown in the output of<br>
> the subsequent cells.
```
from msticpy.data.data_obfus import (
hash_dict,
hash_ip,
hash_item,
hash_list,
hash_sid,
hash_string,
replace_guid
)
# Function to automate/format the examples below. You can ignore this
def show_func(func, examples):
func_name = func.__name__
if func.__name__.startswith("_"):
func_name = func_name[1:]
md(func_name, "bold")
print(func.__doc__)
md("Examples", "bold")
for example in examples:
if isinstance(example, tuple):
arg, delim = example
print(
f"{func_name}('{arg}', delim='{delim}') =>", func(*example)
)
else:
print(
f"{func_name}('{example}') =>", func(example)
)
md("<br><hr><br>")
md("hash_string", "large, bold")
md("hash_string does a simple hash of the input. If the input is a numeric string it will output a numeric")
show_func(hash_string, ["sensitive data", "12345"])
md("hash_item", "large, bold")
md("hash_item allows specification of delimiters. Useful for preserving the look of domains, emails, etc.")
show_func(hash_item, [("sensitive data", " "), ("most-sensitive-data/here", " /-")])
md("hash_ip", "large, bold")
md("hash_ip will output random mappings of input IP V4 and V6 addresses.")
md("Within a Python session the mapping will remain constant.")
show_func(hash_ip, [
"192.168.3.1",
"2001:0db8:85a3:0000:0000:8a2e:0370:7334",
["192.168.3.1", "192.168.5.2", "192.168.10.2"],
])
md("hash_sid", "large, bold")
md("hash_sid will randomize the domain-specific parts of a SID. It preserves built-in SIDs and well known RIDs (e.g. Admins -500)")
show_func(hash_sid, ["S-1-5-21-1180699209-877415012-3182924384-1004", "S-1-5-18"])
md("hash_list", "large, bold")
md("hash_list will randomize a list of items preserving the list structure.")
show_func(hash_list, [["S-1-5-21-1180699209-877415012-3182924384-1004", "S-1-5-18"]])
md("hash_dict", "large, bold")
md("hash_dict will randomize a dict of items preserving the structure and the dict keys.")
show_func(hash_dict, [{"SID1": "S-1-5-21-1180699209-877415012-3182924384-1004", "SID2": "S-1-5-18"}])
md("replace_guid", "large, bold")
md("replace_guid will output a random UUID mapped to the input.")
md("An input GUID will be mapped to the same newly-generated output UUID")
md("You can see that UUID #4 is the same as #1 and mapped to the same output UUID.")
show_func(replace_guid, [
"cf1b0b29-08ae-4528-839a-5f66eca2cce9",
"ed63d29e-6288-4d66-b10d-8847096fc586",
"ac561203-99b2-4067-a525-60d45ea0d7ff",
"cf1b0b29-08ae-4528-839a-5f66eca2cce9",
])
```
## Obfuscating DataFrames
We can use the msticpy pandas extension to obfuscate an entire DataFrame.
The obfuscation library contains a mapping for a number of common field names.
You can view this list by displaying the attribute:
```
data_obfus.OBFUS_COL_MAP
```
In the first example, the TenantId, ResourceGroup, VMName have been obfuscated.
```
display(netflow_df.head(3))
netflow_df.head(3).mp_obf.obfuscate()
```
### Adding custom column mappings
Note in the previous example that the VMIPAddress, PublicIPs and AllExtIPs columns were unchanged.
We can add these columns to a custom mapping dictionary and re-run the obfuscation.
See the later section on [Creating Custom Mappings](#Creating-custom-mappings).
```
col_map = {
"VMName": ".",
"VMIPAddress": "ip",
"PublicIPs": "ip",
"AllExtIPs": "ip"
}
netflow_df.head(3).mp_obf.obfuscate(column_map=col_map)
```
### obfuscate_df function
You can also call the standard function `obfuscate_df` to perform the same operation
on the dataframe passed as the `data` parameter.
```
data_obfus.obfuscate_df(data=netflow_df.head(3), column_map=col_map)
```
## Creating custom mappings
A custom mapping dictionary has entries in the following form:
```
"ColumnName": "operation"
```
The `operation` defines the type of obfuscation method used for that column. Both the column
and the operation code must be quoted.
|operation code | obfuscation function |
|---------------|----------------------|
| "uuid" | replace_guid |
| "ip" | hash_ip |
| "str" | hash_string |
| "dict" | hash_dict |
| "list" | hash_list |
| "sid" | hash_sid |
| "null" | "null"\* |
| None | hash_string\* |
| delims_str | hash_item\* |
\*The last three items require some explanation:
- null - the `null` operation code means set the value to empty - i.e. delete the value
in the output frame.
- None (i.e. the dictionary value is `None`) defaults to hash_string.
- delims_str - any string other than those named above is assumed to be a string of delimiters.
See next section for a discussion of use of delimiters.
---
> **NOTE** If you want to *only* use custom mappings and ignore the builtin<br>
> mapping table, specify `use_default=False` as a parameter to either<br>
> `mp_obf.obfuscate()` or `obfuscate_df`
---
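For example, a minimal sketch that re-uses the earlier `col_map` and ignores the built-in mapping entirely:
```
netflow_df.head(3).mp_obf.obfuscate(column_map=col_map, use_default=False)
```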
## Using `hash_item` with delimiters to preserve the structure/look of the hashed input
Using hash_item with a delimiters string lets you create output that somewhat resembles the input
type. The delimiters string is specified as a simple string of delimiter characters, e.g. `"@\,-"`
The input string is broken into substrings using each of the delimiters in the delims_str. The substrings
are individually hashed and the resulting substrings joined together using the original delimiters.
The string is split in the order of the characters in the delims string.
This allows you to create hashed values that bear some resemblance to the original structure of the string.
This might be useful for email addresses, qualified domain names and other structured text.
For example :
ian@mydomain.com
Using the simple `hash_string` function the output bears no resemblance to an email address
```
hash_string("ian@mydomain.com")
```
Using `hash_item` and specifying the expected delimiters we get something like an email address in the output.
```
hash_item("ian@mydomain.com", "@.")
```
You use `hash_item` in your Custom Mapping dictionary by specifying a delimiters string as the `operation`.
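For example (a sketch: the `Email` column name is hypothetical and not part of the sample data), an entry whose operation is the delimiter string `"@."` will hash such a column with `hash_item`, preserving the user@domain.tld shape:
```
# "Email" is a hypothetical column name used only for illustration
email_col_map = {"Email": "@."}  # any string that is not a named operation code is treated as delimiters
```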
## Checking Your Obfuscation
You should check that you have correctly masked all of the columns needed.
There is a function `check_obfuscation` to do this.
Use `silent=False` to print out the results.
If you use `silent=True` (the default) it will return two lists of `unchanged` and `obfuscated` columns.
```
data_obfus.check_obfuscation(
data: pandas.core.frame.DataFrame,
orig_data: pandas.core.frame.DataFrame,
index: int = 0,
silent=True,
) -> Union[Tuple[List[str], List[str]], NoneType]
Check the obfuscation results for a row.
Parameters
----------
data : pd.DataFrame
Obfuscated DataFrame
orig_data : pd.DataFrame
Original DataFrame
index : int, optional
The row to check, by default 0
silent: bool
If False the function returns no output and
returns lists of changed and unchanged columns.
By default, True
Returns
-------
Optional[Tuple[List[str], List[str]]] :
If silent is True returns a tuple of unchanged, changed
items. If False, returns None.
```
> **Note** by default this will check only the first row of the data.
> You can check other rows using the index parameter.
> **Warning** The two DataFrames should have a matching index and ordering because
> the check works by comparing the values in each column, judging that
> column values that do not match have been obfuscated.
**We first test the partially-obfuscated DataFrame from earlier.**
```
partly_obfus_df = netflow_df.head(3).mp_obf.obfuscate()
fully_obfus_df = netflow_df.head(3).mp_obf.obfuscate(column_map=col_map)
data_obfus.check_obfuscation(partly_obfus_df, netflow_df.head(3), silent=False)
```
**Checking the fully-obfuscated data set**
```
data_obfus.check_obfuscation(fully_obfus_df, netflow_df.head(3), silent=False)
```
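With the default `silent=True` the same check returns the two lists instead of printing them (a minimal sketch based on the signature shown above):
```
unchanged, obfuscated = data_obfus.check_obfuscation(fully_obfus_df, netflow_df.head(3))
print(f"{len(unchanged)} columns unchanged, {len(obfuscated)} columns obfuscated")
```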
---
## Appendix
```
import tabulate
print(tabulate.tabulate(netflow_df.head(3), tablefmt="rst", showindex=False, headers="keys"))
```
|
github_jupyter
|
import pandas as pd
from msticpy.common.utility import md
from msticpy.data import data_obfus
netflow_df = pd.read_csv("data/az_net_flows.csv")
# list is imported as string from csv - convert back to list with eval
def str_to_list(val):
    if isinstance(val, str):
        return eval(val)
    return val  # leave non-string values (e.g. NaN) unchanged
netflow_df["PublicIPs"] = netflow_df["PublicIPs"].apply(str_to_list)
# Define subset of output columns
out_cols = [
'TenantId', 'TimeGenerated', 'FlowStartTime',
'ResourceGroup', 'VMName', 'VMIPAddress', 'PublicIPs',
'SrcIP', 'DestIP', 'L4Protocol', 'AllExtIPs'
]
netflow_df = netflow_df[out_cols]
data_obfus.hash_string(...)
from msticpy.data.data_obfus import (
hash_dict,
hash_ip,
hash_item,
hash_list,
hash_sid,
hash_string,
replace_guid
)
# Function to automate/format the examples below. You can ignore this
def show_func(func, examples):
func_name = func.__name__
if func.__name__.startswith("_"):
func_name = func_name[1:]
md(func_name, "bold")
print(func.__doc__)
md("Examples", "bold")
for example in examples:
if isinstance(example, tuple):
arg, delim = example
print(
f"{func_name}('{arg}', delim='{delim}') =>", func(*example)
)
else:
print(
f"{func_name}('{example}') =>", func(example)
)
md("<br><hr><br>")
md("hash_string", "large, bold")
md("hash_string does a simple hash of the input. If the input is a numeric string it will output a numeric")
show_func(hash_string, ["sensitive data", "12345"])
md("hash_item", "large, bold")
md("hash_item allows specification of delimiters. Useful for preserving the look of domains, emails, etc.")
show_func(hash_item, [("sensitive data", " "), ("most-sensitive-data/here", " /-")])
md("hash_ip", "large, bold")
md("hash_ip will output random mappings of input IP V4 and V6 addresses.")
md("Within a Python session the mapping will remain constant.")
show_func(hash_ip, [
"192.168.3.1",
"2001:0db8:85a3:0000:0000:8a2e:0370:7334",
["192.168.3.1", "192.168.5.2", "192.168.10.2"],
])
md("hash_sid", "large, bold")
md("hash_sid will randomize the domain-specific parts of a SID. It preserves built-in SIDs and well known RIDs (e.g. Admins -500)")
show_func(hash_sid, ["S-1-5-21-1180699209-877415012-3182924384-1004", "S-1-5-18"])
md("hash_list", "large, bold")
md("hash_list will randomize a list of items preserving the list structure.")
show_func(hash_list, [["S-1-5-21-1180699209-877415012-3182924384-1004", "S-1-5-18"]])
md("hash_dict", "large, bold")
md("hash_dict will randomize a dict of items preserving the structure and the dict keys.")
show_func(hash_dict, [{"SID1": "S-1-5-21-1180699209-877415012-3182924384-1004", "SID2": "S-1-5-18"}])
md("replace_guid", "large, bold")
md("replace_guid will output a random UUID mapped to the input.")
md("An input GUID will be mapped to the same newly-generated output UUID")
md("You can see that UUID #4 is the same as #1 and mapped to the same output UUID.")
show_func(replace_guid, [
"cf1b0b29-08ae-4528-839a-5f66eca2cce9",
"ed63d29e-6288-4d66-b10d-8847096fc586",
"ac561203-99b2-4067-a525-60d45ea0d7ff",
"cf1b0b29-08ae-4528-839a-5f66eca2cce9",
])
data_obfus.OBFUS_COL_MAP
display(netflow_df.head(3))
netflow_df.head(3).mp_obf.obfuscate()
col_map = {
"VMName": ".",
"VMIPAddress": "ip",
"PublicIPs": "ip",
"AllExtIPs": "ip"
}
netflow_df.head(3).mp_obf.obfuscate(column_map=col_map)
data_obfus.obfuscate_df(data=netflow_df.head(3), column_map=col_map)
"ColumnName": "operation"
hash_string("ian@mydomain.com")
hash_item("ian@mydomain.com", "@.")
data_obfus.check_obfuscation(
data: pandas.core.frame.DataFrame,
orig_data: pandas.core.frame.DataFrame,
index: int = 0,
silent=True,
) -> Union[Tuple[List[str], List[str]], NoneType]
Check the obfuscation results for a row.
Parameters
----------
data : pd.DataFrame
Obfuscated DataFrame
orig_data : pd.DataFrame
Original DataFrame
index : int, optional
The row to check, by default 0
silent: bool
If False the function returns no output and
returns lists of changed and unchanged columns.
By default, True
Returns
-------
Optional[Tuple[List[str], List[str]]] :
If silent is True returns a tuple of unchanged, changed
items. If False, returns None.
partly_obfus_df = netflow_df.head(3).mp_obf.obfuscate()
fully_obfus_df = netflow_df.head(3).mp_obf.obfuscate(column_map=col_map)
data_obfus.check_obfuscation(partly_obfus_df, netflow_df.head(3), silent=False)
data_obfus.check_obfuscation(fully_obfus_df, netflow_df.head(3), silent=False)
import tabulate
print(tabulate.tabulate(netflow_df.head(3), tablefmt="rst", showindex=False, headers="keys"))
| 0.669961 | 0.939081 |
# EP2 - Fourier Descriptors
**MAC0317 - Introdução ao Processamento de Sinais Digitais** (Introduction to Digital Signal Processing)
**DCC-IME-USP** Department of Computer Science
**Prof.** Marcel Prolin Jackowski
**Student** Vitor Santa Rosa Gomes, 10258862, [vitorssrg@usp.br](mailto:vitorssrg@usp.br)
## Open images
```
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
from ipywidgets import interactive
from matplotlib.colors import Normalize
from main import main, read_image, contour_image, save_image
mpl.rcParams['axes.titlesize'] = 10
mpl.rcParams['axes.labelsize'] = 8
mpl.rcParams['xtick.labelsize'] = 8
mpl.rcParams['ytick.labelsize'] = 8
def grid_plot(shape, factor=6):
"""create a grid plot"""
shape = np.array(shape)
fig, ax = plt.subplots(*shape, figsize=(factor*shape)[::-1])
if np.all(shape == (1, 1)):
ax = np.array([[ax]], dtype=object)
ax = ax.reshape(shape)
fig.set_tight_layout(True)
return ax
def draw_image(ax, title, img, cmap=None, norm=None):
"""draw a image to an axis"""
ax.set_title(title)
kwargs = dict()
if cmap is not None:
kwargs['cmap'] = cmap
if norm is not None:
kwargs['norm'] = norm
ax.imshow(img, **kwargs)
add_colorbar(ax, cmap=cmap, norm=norm)
def add_colorbar(
ax, pos='right', size=0.1, pad=0.05,
cmap=None, norm=None, off=False,
orientation='vertical', sharex=None
):
"""add a colorbar to an axis"""
import matplotlib.cm
from matplotlib import colorbar
from mpl_toolkits.axes_grid1 import make_axes_locatable
divider = make_axes_locatable(ax)
bar = divider.append_axes(pos, size, pad=pad, sharex=sharex)
if cmap is None:
return
if isinstance(cmap, str):
cmap = matplotlib.cm.cmap_d[cmap]
if off:
bar.axis('off')
else:
colorbar.ColorbarBase(bar, cmap=cmap, norm=norm,
orientation=orientation)
return bar
axes = grid_plot((2, 3), factor=4)
norml8 = Normalize(0, 2**8-1)
draw_image(axes[0, 0], 'Original image', read_image('./fox_binary.png', 'RGB'))
draw_image(axes[0, 1], 'Red channel', read_image('./fox_binary.png', 'R' ), 'Reds', norml8)
draw_image(axes[0, 2], 'Green channel', read_image('./fox_binary.png', 'G' ), 'Greens', norml8)
draw_image(axes[1, 0], 'Blue channel', read_image('./fox_binary.png', 'B' ), 'Blues', norml8)
draw_image(axes[1, 1], 'Alpha channel', read_image('./fox_binary.png', 'A' ), 'Greys', norml8)
draw_image(axes[1, 2], 'Luminance channel', read_image('./fox_binary.png', 'L' ), 'gray', norml8)
```
## Trace contours
```
axes = grid_plot((1, 2), factor=4)
img = read_image('./fox_binary.png', 'L')
poly = contour_image(img, 255)
draw_image(axes[0, 0], 'Luminance channel', img, 'gray', norml8)
draw_image(axes[0, 1], 'Countour at $I=255$', 0*img, 'gray', norml8)
axes[0, 1].plot(poly[:, 1], poly[:, 0], linewidth=2, color='magenta')
```
## Project curves into Fourier space
```
axes = grid_plot((2, 2), factor=4)
img = read_image('./fox_binary.png', 'L')
poly = contour_image(img, 255)
fpoly = np.fft.fftshift(np.fft.fft(poly.astype(float), axis=0), axes=0)
H, W = img.shape
L = poly.shape[0]
draw_image(axes[0, 0], 'Luminance channel', img, 'gray', norml8)
draw_image(axes[0, 1], 'Countour at $I=255$', 0*img, 'gray', norml8)
axes[0, 1].plot(poly[:, 1], poly[:, 0], linewidth=2, color='magenta')
axes[1, 0].set_title('Marginal contours')
axes[1, 0].plot(np.mgrid[0:1:L*1j], poly[:, 0], linewidth=2, color='red')
axes[1, 0].plot(np.mgrid[0:1:L*1j], poly[:, 1], linewidth=2, color='green')
axes[1, 1].set_title('FFT Marginal contours')
axes[1, 1].plot(np.mgrid[-0.5:0.5:L*1j], np.log1p(np.abs(fpoly[:, 0].flatten())), linewidth=2, color='red')
axes[1, 1].plot(np.mgrid[-0.5:0.5:L*1j], np.log1p(np.abs(fpoly[:, 1].flatten())), linewidth=2, color='green')
```
## Filter frequencies of the projected curves
```
def update(p=0.5):
axes = grid_plot((3, 3), factor=4)
img = read_image('./fox_binary.png', 'L')
poly = contour_image(img, 255)
fpoly = np.fft.fftshift(np.fft.fft(poly.astype(float), axis=0), axes=0)
ipoly = np.real(np.fft.ifft(np.fft.ifftshift(fpoly, axes=0), axis=0))
H, W = img.shape
L = poly.shape[0]
draw_image(axes[0, 0], 'Countour at $I=255$', 0*img, 'gray', norml8)
axes[0, 0].plot(poly[:, 1], poly[:, 0], linewidth=2, color='magenta')
axes[0, 1].set_title('Marginal contours')
axes[0, 1].plot(np.mgrid[0:1:L*1j], poly[:, 0], linewidth=2, color='red')
axes[0, 1].plot(np.mgrid[0:1:L*1j], poly[:, 1], linewidth=2, color='green')
axes[0, 2].set_title('FFT Marginal contours')
axes[0, 2].plot(np.mgrid[-0.5:0.5:L*1j], np.log1p(np.abs(fpoly[:, 0].flatten())), linewidth=2, color='red')
axes[0, 2].plot(np.mgrid[-0.5:0.5:L*1j], np.log1p(np.abs(fpoly[:, 1].flatten())), linewidth=2, color='green')
lfpoly = fpoly*((np.mgrid[0:1:L*1j, 0:2][0]>=1/2-p/2)&(np.mgrid[0:1:L*1j, 0:2][0]<=1/2+p/2))
lipoly = np.real(np.fft.ifft(np.fft.ifftshift(lfpoly, axes=0), axis=0)).astype(int) # % (H, W)
draw_image(axes[1, 0], 'Low-filtered countour', 0*img, 'gray', norml8)
axes[1, 0].plot(lipoly[:, 1], lipoly[:, 0], linewidth=2, color='magenta')
axes[1, 1].set_title('Low-filtered marginal contours')
axes[1, 1].plot(np.mgrid[0:1:L*1j], lipoly[:, 0], linewidth=2, color='red')
axes[1, 1].plot(np.mgrid[0:1:L*1j], lipoly[:, 1], linewidth=2, color='green')
axes[1, 2].set_title('Low-filtered FFT marginal contours')
axes[1, 2].plot(np.mgrid[-0.5:0.5:L*1j], np.log1p(np.abs(lfpoly[:, 0].flatten())), linewidth=2, color='red')
axes[1, 2].plot(np.mgrid[-0.5:0.5:L*1j], np.log1p(np.abs(lfpoly[:, 1].flatten())), linewidth=2, color='green')
hfpoly = fpoly*((np.mgrid[0:1:L*1j, 0:2][0]<=p/2)|(np.mgrid[0:1:L*1j, 0:2][0]>=1-p/2))
hipoly = np.real(np.fft.ifft(np.fft.ifftshift(hfpoly, axes=0), axis=0)).astype(int) # % (H, W)
draw_image(axes[2, 0], 'High-filtered countour', 0*img, 'gray', norml8)
axes[2, 0].plot(hipoly[:, 1], hipoly[:, 0], linewidth=2, color='magenta')
axes[2, 1].set_title('High-filtered marginal contours')
axes[2, 1].plot(np.mgrid[0:1:L*1j], hipoly[:, 0], linewidth=2, color='red')
axes[2, 1].plot(np.mgrid[0:1:L*1j], hipoly[:, 1], linewidth=2, color='green')
axes[2, 2].set_title('High-filtered FFT marginal contours')
axes[2, 2].plot(np.mgrid[-0.5:0.5:L*1j], np.log1p(np.abs(hfpoly[:, 0].flatten())), linewidth=2, color='red')
axes[2, 2].plot(np.mgrid[-0.5:0.5:L*1j], np.log1p(np.abs(hfpoly[:, 1].flatten())), linewidth=2, color='green')
interactive(update, p=(0, 1, 0.01))
```
## Questionnaire
After the implementation, use the file `fox.png` to answer the following questions, saving the output files that correspond to your answers.
```
import pathlib
pathlib.Path('./samples').mkdir(parents=True, exist_ok=True)
```
### If you keep only the fundamental (DC) frequency after filtering, what happens to the digital contour after reconstruction?
```
axes = grid_plot((1, 6), factor=4)
img = read_image('./fox_binary.png', 'L')
poly = contour_image(img, 255)
fpoly = np.fft.fftshift(np.fft.fft(poly.astype(float), axis=0), axes=0)
ipoly = np.real(np.fft.ifft(np.fft.ifftshift(fpoly, axes=0), axis=0))
H, W = img.shape
L = poly.shape[0]
draw_image(axes[0, 0], 'Countour at $I=255$', 0*img, 'gray', norml8)
axes[0, 0].plot(poly[:, 1], poly[:, 0], linewidth=2, color='magenta')
axes[0, 1].set_title('Marginal contours')
axes[0, 1].plot(np.mgrid[0:1:L*1j], poly[:, 0], linewidth=2, color='red')
axes[0, 1].plot(np.mgrid[0:1:L*1j], poly[:, 1], linewidth=2, color='green')
axes[0, 2].set_title('FFT Marginal contours')
axes[0, 2].plot(np.mgrid[-0.5:0.5:L*1j], np.log1p(np.abs(fpoly[:, 0].flatten())), linewidth=2, color='red')
axes[0, 2].plot(np.mgrid[-0.5:0.5:L*1j], np.log1p(np.abs(fpoly[:, 1].flatten())), linewidth=2, color='green')
lfpoly = np.fft.fftshift(np.fft.ifftshift(fpoly, axes=0)*(np.mgrid[0:L, 0:2][0]==0), axes=0)
lipoly = np.real(np.fft.ifft(np.fft.ifftshift(lfpoly, axes=0), axis=0)).astype(int)
print(lipoly)
draw_image(axes[0, 5], 'Low-filtered countour', 0*img, 'gray', norml8)
axes[0, 5].plot(lipoly[:, 1], lipoly[:, 0], linewidth=2, color='magenta')
axes[0, 4].set_title('Low-filtered marginal contours')
axes[0, 4].plot(np.mgrid[0:1:L*1j], lipoly[:, 0], linewidth=2, color='red')
axes[0, 4].plot(np.mgrid[0:1:L*1j], lipoly[:, 1], linewidth=2, color='green')
axes[0, 3].set_title('Low-filtered FFT marginal contours')
axes[0, 3].plot(np.mgrid[-0.5:0.5:L*1j], np.log1p(np.abs(lfpoly[:, 0].flatten())), linewidth=2, color='red')
axes[0, 3].plot(np.mgrid[-0.5:0.5:L*1j], np.log1p(np.abs(lfpoly[:, 1].flatten())), linewidth=2, color='green')
```
By the definition of the discrete Fourier transform, we know that the fundamental frequency `S[0]` is such that:
$$S[0] = \frac{1}{N}\sum^{N-1}_{n=0} s(n) e^{-\frac{i2\pi}{N}0n} = \frac{1}{N}\sum^{N-1}_{n=0} s(n) e^{0} = \frac{1}{N}\sum^{N-1}_{n=0} s(n)$$
As we can verify both from the algorithm and from the definition, keeping only the fundamental frequency produces a constant signal (a single repeated point) equal to the mean of the positions of the curve's points.
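A quick numerical sanity check of this claim (a sketch with synthetic points rather than the fox contour):
```
import numpy as np
pts = np.array([[0.0, 1.0], [2.0, 3.0], [4.0, 8.0]])  # synthetic contour points
F = np.fft.fft(pts, axis=0)                           # numpy's unnormalised forward DFT
F_dc = np.zeros_like(F)
F_dc[0] = F[0]                                        # keep only the fundamental (DC) coefficient
rec = np.real(np.fft.ifft(F_dc, axis=0))
print(rec)  # every row equals pts.mean(axis=0) == [2., 4.]
```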
```
fcimg = 0*img
fcimg[lipoly[:, 0].flatten(), lipoly[:, 1].flatten()] = 255
save_image(fcimg, './samples/q1_fltctr.png')
```

### If you keep all frequencies except the fundamental (DC), what happens to the contour after reconstruction?
```
axes = grid_plot((1, 6), factor=4)
img = read_image('./fox_binary.png', 'L')
poly = contour_image(img, 255)
fpoly = np.fft.fftshift(np.fft.fft(poly.astype(float), axis=0), axes=0)
ipoly = np.real(np.fft.ifft(np.fft.ifftshift(fpoly, axes=0), axis=0))
H, W = img.shape
L = poly.shape[0]
draw_image(axes[0, 0], 'Countour at $I=255$', 0*img, 'gray', norml8)
axes[0, 0].plot(poly[:, 1], poly[:, 0], linewidth=2, color='magenta')
axes[0, 1].set_title('Marginal contours')
axes[0, 1].plot(np.mgrid[0:1:L*1j], poly[:, 0], linewidth=2, color='red')
axes[0, 1].plot(np.mgrid[0:1:L*1j], poly[:, 1], linewidth=2, color='green')
axes[0, 2].set_title('FFT Marginal contours')
axes[0, 2].plot(np.mgrid[-0.5:0.5:L*1j], np.log1p(np.abs(fpoly[:, 0].flatten())), linewidth=2, color='red')
axes[0, 2].plot(np.mgrid[-0.5:0.5:L*1j], np.log1p(np.abs(fpoly[:, 1].flatten())), linewidth=2, color='green')
hfpoly = np.fft.fftshift(np.fft.ifftshift(fpoly, axes=0)*(np.mgrid[0:L, 0:2][0]>0), axes=0)
hipoly = np.real(np.fft.ifft(np.fft.ifftshift(hfpoly, axes=0), axis=0)).astype(int)
print(hipoly)
draw_image(axes[0, 5], 'Low-filtered countour', 0*img, 'gray', norml8)
axes[0, 5].plot(hipoly[:, 1], hipoly[:, 0], linewidth=2, color='magenta')
axes[0, 4].set_title('Low-filtered marginal contours')
axes[0, 4].plot(np.mgrid[0:1:L*1j], hipoly[:, 0], linewidth=2, color='red')
axes[0, 4].plot(np.mgrid[0:1:L*1j], hipoly[:, 1], linewidth=2, color='green')
axes[0, 3].set_title('Low-filtered FFT marginal contours')
axes[0, 3].plot(np.mgrid[-0.5:0.5:L*1j], np.log1p(np.abs(hfpoly[:, 0].flatten())), linewidth=2, color='red')
axes[0, 3].plot(np.mgrid[-0.5:0.5:L*1j], np.log1p(np.abs(hfpoly[:, 1].flatten())), linewidth=2, color='green')
```
As seen in the previous question, the fundamental frequency `S[0]` stores the mean of all points, so when we keep every frequency except the fundamental we lose the information about the curve's translation relative to the origin.
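The same synthetic sketch illustrates the effect: zeroing the fundamental keeps the shape but centres the reconstruction at the origin.
```
import numpy as np
pts = np.array([[0.0, 1.0], [2.0, 3.0], [4.0, 8.0]])
F = np.fft.fft(pts, axis=0)
F[0] = 0                                   # drop the fundamental (DC) coefficient
rec = np.real(np.fft.ifft(F, axis=0))
print(rec)  # equals pts - pts.mean(axis=0): shape preserved, translation lost
```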
```
fcimg = 0*img
fcimg[(hipoly%img.shape)[:, 0].flatten(), (hipoly%img.shape)[:, 1].flatten()] = 255
save_image(fcimg, './samples/q2_fltctr.png')
```

### For the input file `fox.png`, what is the smallest value of $p$, with $p > 0$, for which the contour keeps its original geometry? For this item you will need to compute the sum of Euclidean distances between pairs of points of successive filterings and set a tolerance value for declaring that two reconstructions are equal (e.g. $\epsilon = 10^{-3}$).
```
axes = grid_plot((1, 1), factor=(6, 16))
img = read_image('./fox_binary.png', 'L')
poly = contour_image(img, 255)
fpoly = np.fft.fftshift(np.fft.fft(poly.astype(float), axis=0), axes=0)
ipoly = np.real(np.fft.ifft(np.fft.ifftshift(fpoly, axes=0), axis=0))
H, W = img.shape
L = poly.shape[0]
P = np.mgrid[0:L]
E1 = np.mgrid[0:L]
E2 = np.mgrid[0:L]
for i, p in enumerate(P/L):
lfpoly = fpoly*((np.mgrid[0:1:L*1j, 0:2][0]>=1/2-p/2)&(np.mgrid[0:1:L*1j, 0:2][0]<=1/2+p/2))
lipoly = np.real(np.fft.ifft(np.fft.ifftshift(lfpoly, axes=0), axis=0))
E1[i] = np.sum(np.abs(poly - lipoly + 0.0))/L
E2[i] = np.sum(np.sqrt(np.sum((poly - lipoly + 0.0)**2, axis=1)))/L
axes[0, 0].set_title("Distance error by log frequency threshold")
axes[0, 0].plot(np.log1p(P), E1, linewidth=2, color='red' , label='L1 Distance')
axes[0, 0].plot(np.log1p(P), E2, linewidth=2, color='blue', label='L2 Distance')
axes[0, 0].legend(loc='upper right')
```
The first values of $p$ for which the errors fall below $10^{-3}$:
```
print(f"min {{p}}, subj {{L1 Distance < 10**-3.0}}: p={np.argmax(E1 < 10**-3.0)*100/L:.2f}%")
print(f"min {{p}}, subj {{L2 Distance < 10**-3.0}}: p={np.argmax(E2 < 10**-3.0)*100/L:.2f}%")
main('./fox_binary.png', 0, 255, np.ceil(4.39), './samples/q3_ctr.png', './samples/q3_fltctr.png')
```

|
github_jupyter
|
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
from ipywidgets import interactive
from matplotlib.colors import Normalize
from main import main, read_image, contour_image, save_image
mpl.rcParams['axes.titlesize'] = 10
mpl.rcParams['axes.labelsize'] = 8
mpl.rcParams['xtick.labelsize'] = 8
mpl.rcParams['ytick.labelsize'] = 8
def grid_plot(shape, factor=6):
"""create a grid plot"""
shape = np.array(shape)
fig, ax = plt.subplots(*shape, figsize=(factor*shape)[::-1])
if np.all(shape == (1, 1)):
ax = np.array([[ax]], dtype=object)
ax = ax.reshape(shape)
fig.set_tight_layout(True)
return ax
def draw_image(ax, title, img, cmap=None, norm=None):
"""draw a image to an axis"""
ax.set_title(title)
kwargs = dict()
if cmap is not None:
kwargs['cmap'] = cmap
if norm is not None:
kwargs['norm'] = norm
ax.imshow(img, **kwargs)
add_colorbar(ax, cmap=cmap, norm=norm)
def add_colorbar(
ax, pos='right', size=0.1, pad=0.05,
cmap=None, norm=None, off=False,
orientation='vertical', sharex=None
):
"""add a colorbar to an axis"""
import matplotlib.cm
from matplotlib import colorbar
from mpl_toolkits.axes_grid1 import make_axes_locatable
divider = make_axes_locatable(ax)
bar = divider.append_axes(pos, size, pad=pad, sharex=sharex)
if cmap is None:
return
if isinstance(cmap, str):
cmap = matplotlib.cm.cmap_d[cmap]
if off:
bar.axis('off')
else:
colorbar.ColorbarBase(bar, cmap=cmap, norm=norm,
orientation=orientation)
return bar
axes = grid_plot((2, 3), factor=4)
norml8 = Normalize(0, 2**8-1)
draw_image(axes[0, 0], 'Original image', read_image('./fox_binary.png', 'RGB'))
draw_image(axes[0, 1], 'Red channel', read_image('./fox_binary.png', 'R' ), 'Reds', norml8)
draw_image(axes[0, 2], 'Green channel', read_image('./fox_binary.png', 'G' ), 'Greens', norml8)
draw_image(axes[1, 0], 'Blue channel', read_image('./fox_binary.png', 'B' ), 'Blues', norml8)
draw_image(axes[1, 1], 'Alpha channel', read_image('./fox_binary.png', 'A' ), 'Greys', norml8)
draw_image(axes[1, 2], 'Luminance channel', read_image('./fox_binary.png', 'L' ), 'gray', norml8)
axes = grid_plot((1, 2), factor=4)
img = read_image('./fox_binary.png', 'L')
poly = contour_image(img, 255)
draw_image(axes[0, 0], 'Luminance channel', img, 'gray', norml8)
draw_image(axes[0, 1], 'Countour at $I=255$', 0*img, 'gray', norml8)
axes[0, 1].plot(poly[:, 1], poly[:, 0], linewidth=2, color='magenta')
axes = grid_plot((2, 2), factor=4)
img = read_image('./fox_binary.png', 'L')
poly = contour_image(img, 255)
fpoly = np.fft.fftshift(np.fft.fft(poly.astype(float), axis=0), axes=0)
H, W = img.shape
L = poly.shape[0]
draw_image(axes[0, 0], 'Luminance channel', img, 'gray', norml8)
draw_image(axes[0, 1], 'Countour at $I=255$', 0*img, 'gray', norml8)
axes[0, 1].plot(poly[:, 1], poly[:, 0], linewidth=2, color='magenta')
axes[1, 0].set_title('Marginal contours')
axes[1, 0].plot(np.mgrid[0:1:L*1j], poly[:, 0], linewidth=2, color='red')
axes[1, 0].plot(np.mgrid[0:1:L*1j], poly[:, 1], linewidth=2, color='green')
axes[1, 1].set_title('FFT Marginal contours')
axes[1, 1].plot(np.mgrid[-0.5:0.5:L*1j], np.log1p(np.abs(fpoly[:, 0].flatten())), linewidth=2, color='red')
axes[1, 1].plot(np.mgrid[-0.5:0.5:L*1j], np.log1p(np.abs(fpoly[:, 1].flatten())), linewidth=2, color='green')
def update(p=0.5):
axes = grid_plot((3, 3), factor=4)
img = read_image('./fox_binary.png', 'L')
poly = contour_image(img, 255)
fpoly = np.fft.fftshift(np.fft.fft(poly.astype(float), axis=0), axes=0)
ipoly = np.real(np.fft.ifft(np.fft.ifftshift(fpoly, axes=0), axis=0))
H, W = img.shape
L = poly.shape[0]
draw_image(axes[0, 0], 'Countour at $I=255$', 0*img, 'gray', norml8)
axes[0, 0].plot(poly[:, 1], poly[:, 0], linewidth=2, color='magenta')
axes[0, 1].set_title('Marginal contours')
axes[0, 1].plot(np.mgrid[0:1:L*1j], poly[:, 0], linewidth=2, color='red')
axes[0, 1].plot(np.mgrid[0:1:L*1j], poly[:, 1], linewidth=2, color='green')
axes[0, 2].set_title('FFT Marginal contours')
axes[0, 2].plot(np.mgrid[-0.5:0.5:L*1j], np.log1p(np.abs(fpoly[:, 0].flatten())), linewidth=2, color='red')
axes[0, 2].plot(np.mgrid[-0.5:0.5:L*1j], np.log1p(np.abs(fpoly[:, 1].flatten())), linewidth=2, color='green')
lfpoly = fpoly*((np.mgrid[0:1:L*1j, 0:2][0]>=1/2-p/2)&(np.mgrid[0:1:L*1j, 0:2][0]<=1/2+p/2))
lipoly = np.real(np.fft.ifft(np.fft.ifftshift(lfpoly, axes=0), axis=0)).astype(int) # % (H, W)
draw_image(axes[1, 0], 'Low-filtered countour', 0*img, 'gray', norml8)
axes[1, 0].plot(lipoly[:, 1], lipoly[:, 0], linewidth=2, color='magenta')
axes[1, 1].set_title('Low-filtered marginal contours')
axes[1, 1].plot(np.mgrid[0:1:L*1j], lipoly[:, 0], linewidth=2, color='red')
axes[1, 1].plot(np.mgrid[0:1:L*1j], lipoly[:, 1], linewidth=2, color='green')
axes[1, 2].set_title('Low-filtered FFT marginal contours')
axes[1, 2].plot(np.mgrid[-0.5:0.5:L*1j], np.log1p(np.abs(lfpoly[:, 0].flatten())), linewidth=2, color='red')
axes[1, 2].plot(np.mgrid[-0.5:0.5:L*1j], np.log1p(np.abs(lfpoly[:, 1].flatten())), linewidth=2, color='green')
hfpoly = fpoly*((np.mgrid[0:1:L*1j, 0:2][0]<=p/2)|(np.mgrid[0:1:L*1j, 0:2][0]>=1-p/2))
hipoly = np.real(np.fft.ifft(np.fft.ifftshift(hfpoly, axes=0), axis=0)).astype(int) # % (H, W)
draw_image(axes[2, 0], 'High-filtered countour', 0*img, 'gray', norml8)
axes[2, 0].plot(hipoly[:, 1], hipoly[:, 0], linewidth=2, color='magenta')
axes[2, 1].set_title('High-filtered marginal contours')
axes[2, 1].plot(np.mgrid[0:1:L*1j], hipoly[:, 0], linewidth=2, color='red')
axes[2, 1].plot(np.mgrid[0:1:L*1j], hipoly[:, 1], linewidth=2, color='green')
axes[2, 2].set_title('High-filtered FFT marginal contours')
axes[2, 2].plot(np.mgrid[-0.5:0.5:L*1j], np.log1p(np.abs(hfpoly[:, 0].flatten())), linewidth=2, color='red')
axes[2, 2].plot(np.mgrid[-0.5:0.5:L*1j], np.log1p(np.abs(hfpoly[:, 1].flatten())), linewidth=2, color='green')
interactive(update, p=(0, 1, 0.01))
import pathlib
pathlib.Path('./samples').mkdir(parents=True, exist_ok=True)
axes = grid_plot((1, 6), factor=4)
img = read_image('./fox_binary.png', 'L')
poly = contour_image(img, 255)
fpoly = np.fft.fftshift(np.fft.fft(poly.astype(float), axis=0), axes=0)
ipoly = np.real(np.fft.ifft(np.fft.ifftshift(fpoly, axes=0), axis=0))
H, W = img.shape
L = poly.shape[0]
draw_image(axes[0, 0], 'Countour at $I=255$', 0*img, 'gray', norml8)
axes[0, 0].plot(poly[:, 1], poly[:, 0], linewidth=2, color='magenta')
axes[0, 1].set_title('Marginal contours')
axes[0, 1].plot(np.mgrid[0:1:L*1j], poly[:, 0], linewidth=2, color='red')
axes[0, 1].plot(np.mgrid[0:1:L*1j], poly[:, 1], linewidth=2, color='green')
axes[0, 2].set_title('FFT Marginal contours')
axes[0, 2].plot(np.mgrid[-0.5:0.5:L*1j], np.log1p(np.abs(fpoly[:, 0].flatten())), linewidth=2, color='red')
axes[0, 2].plot(np.mgrid[-0.5:0.5:L*1j], np.log1p(np.abs(fpoly[:, 1].flatten())), linewidth=2, color='green')
lfpoly = np.fft.fftshift(np.fft.ifftshift(fpoly, axes=0)*(np.mgrid[0:L, 0:2][0]==0), axes=0)
lipoly = np.real(np.fft.ifft(np.fft.ifftshift(lfpoly, axes=0), axis=0)).astype(int)
print(lipoly)
draw_image(axes[0, 5], 'Low-filtered countour', 0*img, 'gray', norml8)
axes[0, 5].plot(lipoly[:, 1], lipoly[:, 0], linewidth=2, color='magenta')
axes[0, 4].set_title('Low-filtered marginal contours')
axes[0, 4].plot(np.mgrid[0:1:L*1j], lipoly[:, 0], linewidth=2, color='red')
axes[0, 4].plot(np.mgrid[0:1:L*1j], lipoly[:, 1], linewidth=2, color='green')
axes[0, 3].set_title('Low-filtered FFT marginal contours')
axes[0, 3].plot(np.mgrid[-0.5:0.5:L*1j], np.log1p(np.abs(lfpoly[:, 0].flatten())), linewidth=2, color='red')
axes[0, 3].plot(np.mgrid[-0.5:0.5:L*1j], np.log1p(np.abs(lfpoly[:, 1].flatten())), linewidth=2, color='green')
fcimg = 0*img
fcimg[lipoly[:, 0].flatten(), lipoly[:, 1].flatten()] = 255
save_image(fcimg, './samples/q1_fltctr.png')
axes = grid_plot((1, 6), factor=4)
img = read_image('./fox_binary.png', 'L')
poly = contour_image(img, 255)
fpoly = np.fft.fftshift(np.fft.fft(poly.astype(float), axis=0), axes=0)
ipoly = np.real(np.fft.ifft(np.fft.ifftshift(fpoly, axes=0), axis=0))
H, W = img.shape
L = poly.shape[0]
draw_image(axes[0, 0], 'Countour at $I=255$', 0*img, 'gray', norml8)
axes[0, 0].plot(poly[:, 1], poly[:, 0], linewidth=2, color='magenta')
axes[0, 1].set_title('Marginal contours')
axes[0, 1].plot(np.mgrid[0:1:L*1j], poly[:, 0], linewidth=2, color='red')
axes[0, 1].plot(np.mgrid[0:1:L*1j], poly[:, 1], linewidth=2, color='green')
axes[0, 2].set_title('FFT Marginal contours')
axes[0, 2].plot(np.mgrid[-0.5:0.5:L*1j], np.log1p(np.abs(fpoly[:, 0].flatten())), linewidth=2, color='red')
axes[0, 2].plot(np.mgrid[-0.5:0.5:L*1j], np.log1p(np.abs(fpoly[:, 1].flatten())), linewidth=2, color='green')
hfpoly = np.fft.fftshift(np.fft.ifftshift(fpoly, axes=0)*(np.mgrid[0:L, 0:2][0]>0), axes=0)
hipoly = np.real(np.fft.ifft(np.fft.ifftshift(hfpoly, axes=0), axis=0)).astype(int)
print(hipoly)
draw_image(axes[0, 5], 'Low-filtered countour', 0*img, 'gray', norml8)
axes[0, 5].plot(hipoly[:, 1], hipoly[:, 0], linewidth=2, color='magenta')
axes[0, 4].set_title('Low-filtered marginal contours')
axes[0, 4].plot(np.mgrid[0:1:L*1j], hipoly[:, 0], linewidth=2, color='red')
axes[0, 4].plot(np.mgrid[0:1:L*1j], hipoly[:, 1], linewidth=2, color='green')
axes[0, 3].set_title('Low-filtered FFT marginal contours')
axes[0, 3].plot(np.mgrid[-0.5:0.5:L*1j], np.log1p(np.abs(hfpoly[:, 0].flatten())), linewidth=2, color='red')
axes[0, 3].plot(np.mgrid[-0.5:0.5:L*1j], np.log1p(np.abs(hfpoly[:, 1].flatten())), linewidth=2, color='green')
fcimg = 0*img
fcimg[(hipoly%img.shape)[:, 0].flatten(), (hipoly%img.shape)[:, 1].flatten()] = 255
save_image(fcimg, './samples/q2_fltctr.png')
axes = grid_plot((1, 1), factor=(6, 16))
img = read_image('./fox_binary.png', 'L')
poly = contour_image(img, 255)
fpoly = np.fft.fftshift(np.fft.fft(poly.astype(float), axis=0), axes=0)
ipoly = np.real(np.fft.ifft(np.fft.ifftshift(fpoly, axes=0), axis=0))
H, W = img.shape
L = poly.shape[0]
P = np.mgrid[0:L]
E1 = np.mgrid[0:L]
E2 = np.mgrid[0:L]
for i, p in enumerate(P/L):
lfpoly = fpoly*((np.mgrid[0:1:L*1j, 0:2][0]>=1/2-p/2)&(np.mgrid[0:1:L*1j, 0:2][0]<=1/2+p/2))
lipoly = np.real(np.fft.ifft(np.fft.ifftshift(lfpoly, axes=0), axis=0))
E1[i] = np.sum(np.abs(poly - lipoly + 0.0))/L
E2[i] = np.sum(np.sqrt(np.sum((poly - lipoly + 0.0)**2, axis=1)))/L
axes[0, 0].set_title("Distance error by log frequency threshold")
axes[0, 0].plot(np.log1p(P), E1, linewidth=2, color='red' , label='L1 Distance')
axes[0, 0].plot(np.log1p(P), E2, linewidth=2, color='blue', label='L2 Distance')
axes[0, 0].legend(loc='upper right')
print(f"min {{p}}, subj {{L1 Distance < 10**-3.0}}: p={np.argmax(E1 < 10**-3.0)*100/L:.2f}%")
print(f"min {{p}}, subj {{L2 Distance < 10**-3.0}}: p={np.argmax(E2 < 10**-3.0)*100/L:.2f}%")
main('./fox_binary.png', 0, 255, np.ceil(4.39), './samples/q3_ctr.png', './samples/q3_fltctr.png')
| 0.585812 | 0.865395 |
## Download latest Jars
```
dbutils.fs.mkdirs("dbfs:/FileStore/jars/")
%sh
cd ../../dbfs/FileStore/jars/
wget -O cudf-0.15.jar https://search.maven.org/remotecontent?filepath=ai/rapids/cudf/0.15/cudf-0.15.jar
wget -O rapids-4-spark_2.12-0.2.0-databricks.jar https://search.maven.org/remotecontent?filepath=com/nvidia/rapids-4-spark_2.12/0.2.0-databricks/rapids-4-spark_2.12-0.2.0-databricks.jar
wget -O xgboost4j_3.0-1.0.0-0.2.0.jar https://search.maven.org/remotecontent?filepath=com/nvidia/xgboost4j_3.0/1.0.0-0.2.0/xgboost4j_3.0-1.0.0-0.2.0.jar
wget -O xgboost4j-spark_3.0-1.0.0-0.2.0.jar https://search.maven.org/remotecontent?filepath=com/nvidia/xgboost4j-spark_3.0/1.0.0-0.2.0/xgboost4j-spark_3.0-1.0.0-0.2.0.jar
ls -ltr
# Your Jars are downloaded in dbfs:/FileStore/jars directory
```
### Create a Directory for your init script
```
dbutils.fs.mkdirs("dbfs:/databricks/init_scripts/")
dbutils.fs.put("/databricks/init_scripts/init.sh","""
#!/bin/bash
sudo cp /dbfs/FileStore/jars/xgboost4j_3.0-1.0.0-0.2.0.jar /databricks/jars/spark--maven-trees--ml--7.x--xgboost--ml.dmlc--xgboost4j_2.12--ml.dmlc__xgboost4j_2.12__1.0.0.jar
sudo cp /dbfs/FileStore/jars/cudf-0.15.jar /databricks/jars/
sudo cp /dbfs/FileStore/jars/rapids-4-spark_2.12-0.2.0-databricks.jar /databricks/jars/
sudo cp /dbfs/FileStore/jars/xgboost4j-spark_3.0-1.0.0-0.2.0.jar /databricks/jars/spark--maven-trees--ml--7.x--xgboost--ml.dmlc--xgboost4j-spark_2.12--ml.dmlc__xgboost4j-spark_2.12__1.0.0.jar""", True)
```
### Confirm your init script is in the new directory
```
%sh
cd ../../dbfs/databricks/init_scripts
pwd
ls -ltr
```
### Download the Mortgage dataset to your local machine and upload the data using Import Data
```
dbutils.fs.mkdirs("dbfs:/FileStore/tables/")
%sh
cd /dbfs/FileStore/tables/
wget -O mortgage.zip https://rapidsai-data.s3.us-east-2.amazonaws.com/spark/mortgage.zip
ls
unzip mortgage.zip
%sh
pwd
cd ../../dbfs/FileStore/tables
ls -ltr mortgage/csv/*
```
### Next steps
1. Edit your cluster, adding an initialization script from `dbfs:/databricks/init_scripts/init.sh` in the "Advanced Options" under "Init Scripts" tab
2. Reboot the cluster
3. Go to "Libraries" tab under your cluster and install `dbfs:/FileStore/jars/xgboost4j-spark_3.0-1.0.0-0.2.0.jar` in your cluster by selecting the "DBFS" option for installing jars
4. Import the mortgage example notebook from `https://github.com/NVIDIA/spark-xgboost-examples/blob/spark-3/examples/notebooks/python/mortgage-gpu.ipynb`
5. Inside the mortgage example notebook, update the data paths
`train_data = reader.schema(schema).option('header', True).csv('/data/mortgage/csv/small-train.csv')`
`trans_data = reader.schema(schema).option('header', True).csv('/data/mortgage/csv/small-trans.csv')`
|
github_jupyter
|
dbutils.fs.mkdirs("dbfs:/FileStore/jars/")
%sh
cd ../../dbfs/FileStore/jars/
wget -O cudf-0.15.jar https://search.maven.org/remotecontent?filepath=ai/rapids/cudf/0.15/cudf-0.15.jar
wget -O rapids-4-spark_2.12-0.2.0-databricks.jar https://search.maven.org/remotecontent?filepath=com/nvidia/rapids-4-spark_2.12/0.2.0-databricks/rapids-4-spark_2.12-0.2.0-databricks.jar
wget -O xgboost4j_3.0-1.0.0-0.2.0.jar https://search.maven.org/remotecontent?filepath=com/nvidia/xgboost4j_3.0/1.0.0-0.2.0/xgboost4j_3.0-1.0.0-0.2.0.jar
wget -O xgboost4j-spark_3.0-1.0.0-0.2.0.jar https://search.maven.org/remotecontent?filepath=com/nvidia/xgboost4j-spark_3.0/1.0.0-0.2.0/xgboost4j-spark_3.0-1.0.0-0.2.0.jar
ls -ltr
# Your Jars are downloaded in dbfs:/FileStore/jars directory
dbutils.fs.mkdirs("dbfs:/databricks/init_scripts/")
dbutils.fs.put("/databricks/init_scripts/init.sh","""
#!/bin/bash
sudo cp /dbfs/FileStore/jars/xgboost4j_3.0-1.0.0-0.2.0.jar /databricks/jars/spark--maven-trees--ml--7.x--xgboost--ml.dmlc--xgboost4j_2.12--ml.dmlc__xgboost4j_2.12__1.0.0.jar
sudo cp /dbfs/FileStore/jars/cudf-0.15.jar /databricks/jars/
sudo cp /dbfs/FileStore/jars/rapids-4-spark_2.12-0.2.0-databricks.jar /databricks/jars/
sudo cp /dbfs/FileStore/jars/xgboost4j-spark_3.0-1.0.0-0.2.0.jar /databricks/jars/spark--maven-trees--ml--7.x--xgboost--ml.dmlc--xgboost4j-spark_2.12--ml.dmlc__xgboost4j-spark_2.12__1.0.0.jar""", True)
%sh
cd ../../dbfs/databricks/init_scripts
pwd
ls -ltr
dbutils.fs.mkdirs("dbfs:/FileStore/tables/")
%sh
cd /dbfs/FileStore/tables/
wget -O mortgage.zip https://rapidsai-data.s3.us-east-2.amazonaws.com/spark/mortgage.zip
ls
unzip mortgage.zip
%sh
pwd
cd ../../dbfs/FileStore/tables
ls -ltr mortgage/csv/*
| 0.177276 | 0.323834 |
## Content-Based Recommender System
### Item-based recommendation using KNN
Source: https://towardsdatascience.com
```
##Dataset url: https://grouplens.org/datasets/movielens/latest/
import pandas as pd
import numpy as np
movies_df = pd.read_csv('movies.csv',usecols=['movieId','title'],dtype={'movieId': 'int32', 'title': 'str'})
rating_df=pd.read_csv('ratings.csv',usecols=['userId', 'movieId', 'rating'],
dtype={'userId': 'int32', 'movieId': 'int32', 'rating': 'float32'})
movies_df.head()
rating_df.head()
df = pd.merge(rating_df,movies_df,on='movieId')
df.head()
combine_movie_rating = df.dropna(axis = 0, subset = ['title'])
movie_ratingCount = (combine_movie_rating.
groupby(by = ['title'])['rating'].
count().
reset_index().
rename(columns = {'rating': 'totalRatingCount'})
[['title', 'totalRatingCount']]
)
movie_ratingCount.head()
rating_with_totalRatingCount = combine_movie_rating.merge(movie_ratingCount, on = 'title', how = 'left')
rating_with_totalRatingCount.head()
pd.set_option('display.float_format', lambda x: '%.3f' % x)
print(movie_ratingCount['totalRatingCount'].describe())
popularity_threshold = 50
rating_popular_movie= rating_with_totalRatingCount.query('totalRatingCount >= @popularity_threshold')
rating_popular_movie.head()
rating_popular_movie.shape
## First let's create a pivot matrix
movie_features_df=rating_popular_movie.pivot_table(index='title',columns='userId',values='rating').fillna(0)
movie_features_df.head()
movie_features_df
from scipy.sparse import csr_matrix
movie_features_df_matrix = csr_matrix(movie_features_df.values)
from sklearn.neighbors import NearestNeighbors
model_knn = NearestNeighbors(metric = 'cosine', algorithm = 'brute')
model_knn.fit(movie_features_df_matrix)
movie_features_df.shape
query_index = np.random.choice(movie_features_df.shape[0])
print(query_index)
distances, indices = model_knn.kneighbors(movie_features_df.iloc[query_index,:].values.reshape(1, -1), n_neighbors = 6)
indices
distances
movie_features_df.head()
for i in range(0, len(distances.flatten())):
if i == 0:
print('Recommendations for {0}:\n'.format(movie_features_df.index[query_index]))
else:
print('{0}: {1}, with distance of {2}:'.format(i, movie_features_df.index[indices.flatten()[i]], distances.flatten()[i]))
```
# Collaborative Filtering
### (User Based)
```
movie_features_df.transpose().head()
movie_features_df = movie_features_df.transpose()
movie_features_df_matrix = csr_matrix(movie_features_df)
from sklearn.neighbors import NearestNeighbors
model_knn = NearestNeighbors(metric = 'cosine', algorithm = 'brute')
model_knn.fit(movie_features_df_matrix)
# query_index = np.random.choice(movie_features_df.shape[0])
#query_index = 375
print(query_index)
distances, indices = model_knn.kneighbors(movie_features_df.iloc[query_index,:].values.reshape(1, -1), n_neighbors = 6)
for i in range(0, len(distances.flatten())):
if i == 0:
print('Recommendations for {0}:\n'.format(movie_features_df.index[query_index]))
else:
print('{0}: {1}, with distance of {2}:'.format(i, movie_features_df.index[indices.flatten()[i]], distances.flatten()[i]))
movie_features_df.iloc[indices.flatten(), :]
not_watched_movie_list = movie_features_df.iloc[indices.flatten()[0], :][movie_features_df.iloc[indices.flatten()[0], :] == 0.0].index
not_watched_movie_list
movie_features_df.iloc[indices.flatten()][not_watched_movie_list]
movie_features_df[not_watched_movie_list].columns
movie_features_df[not_watched_movie_list].index
# recommended_movie_index = np.sort(movie_features_df[not_watched_movie_list].apply(lambda x: (x!=0).sum(), axis=0).values)[::-1][:5]
recommended_movie_index = np.sort(movie_features_df.iloc[indices.flatten()][not_watched_movie_list].apply(lambda x: x.sum(), axis=0))
mdf = pd.DataFrame(
{"movie_count" : recommended_movie_index,
"title" : movie_features_df[not_watched_movie_list].columns})
mdf
mdf.sort_values(by="movie_count", ascending=False).head()
```
|
github_jupyter
|
##Dataset url: https://grouplens.org/datasets/movielens/latest/
import pandas as pd
import numpy as np
movies_df = pd.read_csv('movies.csv',usecols=['movieId','title'],dtype={'movieId': 'int32', 'title': 'str'})
rating_df=pd.read_csv('ratings.csv',usecols=['userId', 'movieId', 'rating'],
dtype={'userId': 'int32', 'movieId': 'int32', 'rating': 'float32'})
movies_df.head()
rating_df.head()
df = pd.merge(rating_df,movies_df,on='movieId')
df.head()
combine_movie_rating = df.dropna(axis = 0, subset = ['title'])
movie_ratingCount = (combine_movie_rating.
groupby(by = ['title'])['rating'].
count().
reset_index().
rename(columns = {'rating': 'totalRatingCount'})
[['title', 'totalRatingCount']]
)
movie_ratingCount.head()
rating_with_totalRatingCount = combine_movie_rating.merge(movie_ratingCount, on = 'title', how = 'left')
rating_with_totalRatingCount.head()
pd.set_option('display.float_format', lambda x: '%.3f' % x)
print(movie_ratingCount['totalRatingCount'].describe())
popularity_threshold = 50
rating_popular_movie= rating_with_totalRatingCount.query('totalRatingCount >= @popularity_threshold')
rating_popular_movie.head()
rating_popular_movie.shape
## First let's create a pivot matrix
movie_features_df=rating_popular_movie.pivot_table(index='title',columns='userId',values='rating').fillna(0)
movie_features_df.head()
movie_features_df
from scipy.sparse import csr_matrix
movie_features_df_matrix = csr_matrix(movie_features_df.values)
from sklearn.neighbors import NearestNeighbors
model_knn = NearestNeighbors(metric = 'cosine', algorithm = 'brute')
model_knn.fit(movie_features_df_matrix)
movie_features_df.shape
query_index = np.random.choice(movie_features_df.shape[0])
print(query_index)
distances, indices = model_knn.kneighbors(movie_features_df.iloc[query_index,:].values.reshape(1, -1), n_neighbors = 6)
indices
distances
movie_features_df.head()
for i in range(0, len(distances.flatten())):
if i == 0:
print('Recommendations for {0}:\n'.format(movie_features_df.index[query_index]))
else:
print('{0}: {1}, with distance of {2}:'.format(i, movie_features_df.index[indices.flatten()[i]], distances.flatten()[i]))
movie_features_df.transpose().head()
movie_features_df = movie_features_df.transpose()
movie_features_df_matrix = csr_matrix(movie_features_df)
from sklearn.neighbors import NearestNeighbors
model_knn = NearestNeighbors(metric = 'cosine', algorithm = 'brute')
model_knn.fit(movie_features_df_matrix)
# query_index = np.random.choice(movie_features_df.shape[0])
#query_index = 375
print(query_index)
distances, indices = model_knn.kneighbors(movie_features_df.iloc[query_index,:].values.reshape(1, -1), n_neighbors = 6)
for i in range(0, len(distances.flatten())):
if i == 0:
print('Recommendations for {0}:\n'.format(movie_features_df.index[query_index]))
else:
print('{0}: {1}, with distance of {2}:'.format(i, movie_features_df.index[indices.flatten()[i]], distances.flatten()[i]))
movie_features_df.iloc[indices.flatten(), :]
not_watched_movie_list = movie_features_df.iloc[indices.flatten()[0], :][movie_features_df.iloc[indices.flatten()[0], :] == 0.0].index
not_watched_movie_list
movie_features_df.iloc[indices.flatten()][not_watched_movie_list]
movie_features_df[not_watched_movie_list].columns
movie_features_df[not_watched_movie_list].index
# recommended_movie_index = np.sort(movie_features_df[not_watched_movie_list].apply(lambda x: (x!=0).sum(), axis=0).values)[::-1][:5]
recommended_movie_index = np.sort(movie_features_df.iloc[indices.flatten()][not_watched_movie_list].apply(lambda x: x.sum(), axis=0))
mdf = pd.DataFrame(
{"movie_count" : recommended_movie_index,
"title" : movie_features_df[not_watched_movie_list].columns})
mdf
mdf.sort_values(by="movie_count", ascending=False).head()
| 0.589126 | 0.864939 |
# Finite Volume Discretisation
In this notebook, we explain the discretisation process that converts an expression tree, representing a model, to a linear algebra tree that can be evaluated by the solvers.
We use Finite Volumes as an example of a spatial method, since it is the default spatial method for most PyBaMM models. This is a good spatial method for battery problems as it is conservative: for lithium-ion battery models, we can be sure that the total amount of lithium in the system is constant. For more details on the Finite Volume method, see [Randall Leveque's book](https://books.google.co.uk/books/about/Finite_Volume_Methods_for_Hyperbolic_Pro.html?id=QazcnD7GUoUC&printsec=frontcover&source=kp_read_button&redir_esc=y#v=onepage&q&f=false).
This notebook is structured as follows:
1. **Setting up a discretisation**. Overview of the parameters that are passed to the discretisation
2. **Discretisations and spatial methods**. Operations that are common to most spatial methods:
- Discretising a spatial variable (e.g. $x$)
- Discretising a variable (e.g. concentration)
3. **Example: Finite Volume operators**. Finite Volume implementation of some useful operators:
- Gradient operator
- Divergence operator
- Integral operator
4. **Example: Discretising a simple model**. Setting up and solving a simple model, using Finite Volumes as the spatial method
To find out how to implement a new spatial method, see the [tutorial](https://pybamm.readthedocs.io/en/latest/tutorials/add-spatial-method.html) in the API docs.
## Setting up a Discretisation
We first import `pybamm` and some useful other modules, and change our working directory to the root of the `PyBaMM` folder:
```
import pybamm
import numpy as np
import os
import matplotlib.pyplot as plt
from pprint import pprint
os.chdir(pybamm.__path__[0]+'/..')
```
To set up a discretisation, we must create a geometry, mesh this geometry, and then create the discretisation with the appropriate spatial method(s). The easiest way to create a geometry is to use the inbuilt battery geometry:
```
parameter_values = pybamm.ParameterValues(
values={
"Negative electrode thickness [m]": 0.3,
"Separator thickness [m]": 0.2,
"Positive electrode thickness [m]": 0.3,
}
)
geometry = pybamm.battery_geometry()
parameter_values.process_geometry(geometry)
```
We then use this geometry to create a mesh, which for this example consists of uniform 1D submeshes
```
submesh_types = {
"negative electrode": pybamm.Uniform1DSubMesh,
"separator": pybamm.Uniform1DSubMesh,
"positive electrode": pybamm.Uniform1DSubMesh,
"negative particle": pybamm.Uniform1DSubMesh,
"positive particle": pybamm.Uniform1DSubMesh,
"current collector": pybamm.SubMesh0D,
}
var = pybamm.standard_spatial_vars
var_pts = {var.x_n: 15, var.x_s: 10, var.x_p: 15, var.r_n: 10, var.r_p: 10}
mesh = pybamm.Mesh(geometry, submesh_types, var_pts)
```
Finally, we can use the mesh to create a discretisation, using Finite Volumes as the spatial method for this example
```
spatial_methods = {
"macroscale": pybamm.FiniteVolume(),
"negative particle": pybamm.FiniteVolume(),
"positive particle": pybamm.FiniteVolume(),
}
disc = pybamm.Discretisation(mesh, spatial_methods)
```
## Discretisations and Spatial Methods
### Spatial Variables
Spatial variables, such as $x$ and $r$, are converted to `pybamm.Vector` nodes
```
# Set up
macroscale = ["negative electrode", "separator", "positive electrode"]
x_var = pybamm.SpatialVariable("x", domain=macroscale)
r_var = pybamm.SpatialVariable("r", domain=["negative particle"])
# Discretise
x_disc = disc.process_symbol(x_var)
r_disc = disc.process_symbol(r_var)
print("x_disc is a {}".format(type(x_disc)))
print("r_disc is a {}".format(type(r_disc)))
# Evaluate
x = x_disc.evaluate()
r = r_disc.evaluate()
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(13,4))
ax1.plot(x, "*")
ax1.set_xlabel("index")
ax1.set_ylabel(r"$x$")
ax2.plot(r, "*")
ax2.set_xlabel("index")
ax2.set_ylabel(r"$r$")
plt.tight_layout()
plt.show()
```
We define `y_macroscale`, `y_microscale` and `y_scalar` for evaluation and visualisation of results below
```
y_macroscale = x ** 3 / 3
y_microscale = np.cos(r)
y_scalar = np.array([[5]])
y = np.concatenate([y_macroscale, y_microscale, y_scalar])
```
### Variables
In this notebook, we will work with three variables `u`, `v`, `w`.
```
u = pybamm.Variable("u", domain=macroscale) # u is a variable in the macroscale (e.g. electrolyte potential)
v = pybamm.Variable("v", domain=["negative particle"]) # v is a variable in the negative particle (e.g. particle concentration)
w = pybamm.Variable("w") # w is a variable without a domain (e.g. time, average concentration)
variables = [u,v,w]
```
Before discretising, trying to evaluate the variables raises a `NotImplementedError`:
```
try:
u.evaluate()
except NotImplementedError as e:
print(e)
```
For any spatial method, a `pybamm.Variable` gets converted to a `pybamm.StateVector` which, when evaluated, takes the appropriate slice of the input vector `y`.
```
# Pass the list of variables to the discretisation to calculate the slices to be used (order matters here!)
disc.set_variable_slices(variables)
# Discretise the variables
u_disc = disc.process_symbol(u)
v_disc = disc.process_symbol(v)
w_disc = disc.process_symbol(w)
# Print the outcome
print("Discretised u is the StateVector {}".format(u_disc))
print("Discretised v is the StateVector {}".format(v_disc))
print("Discretised w is the StateVector {}".format(w_disc))
```
Since the variables have been passed to `disc` in the order `[u,v,w]`, they each read the appropriate part of `y` when evaluated:
```
x_fine = np.linspace(x[0], x[-1], 1000)
r_fine = np.linspace(r[0], r[-1], 1000)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(13,4))
ax1.plot(x_fine, x_fine**3/3, x, u_disc.evaluate(y=y), "o")
ax1.set_xlabel("x")
ax1.legend(["x^3/3", "u"], loc="best")
ax2.plot(r_fine, np.cos(r_fine), r, v_disc.evaluate(y=y), "o")
ax2.set_xlabel("r")
ax2.legend(["cos(r)", "v"], loc="best")
plt.tight_layout()
plt.show()
print("w = {}".format(w_disc.evaluate(y=y)))
```
## Finite Volume Operators
### Gradient operator
The gradient operator is converted to a Matrix-StateVector multiplication. In 1D, the gradient operator is equivalent to $\partial/\partial x$ on the macroscale and $\partial/\partial r$ on the microscale. In Finite Volumes, we take the gradient of an object on nodes (shape (n,)), which returns an object on the edges (shape (n-1,)).
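Written out for a 1D mesh with nodes $x_i$, the Finite Volume gradient on the edge between nodes $i$ and $i+1$ is the difference quotient
$$(\nabla u)_{i+1/2} = \frac{u_{i+1} - u_{i}}{x_{i+1} - x_{i}},$$
which is exactly the `[-1,1]` matrix divided by the node spacings that we inspect below.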
```
grad_u = pybamm.grad(u)
grad_u_disc = disc.process_symbol(grad_u)
grad_u_disc.render()
```
The Matrix in `grad_u_disc` is the standard `[-1,1]` sparse matrix, divided by the step sizes `dx`:
```
macro_mesh = mesh.combine_submeshes(*macroscale)
print("gradient matrix is:\n")
print("1/dx *\n{}".format(macro_mesh.d_nodes[:,np.newaxis] * grad_u_disc.children[0].entries.toarray()))
```
When evaluated with `y_macroscale=x**3/3`, `grad_u_disc` is equal to `x**2` as expected:
```
x_edge = macro_mesh.edges[1:-1] # note that grad_u_disc is evaluated on the node edges
fig, ax = plt.subplots()
ax.plot(x_fine, x_fine**2, x_edge, grad_u_disc.evaluate(y=y), "o")
ax.set_xlabel("x")
legend = ax.legend(["x^2", "grad(u).evaluate(y=x**3/3)"], loc="best")
plt.show()
```
Similarly, we can create, discretise and evaluate the gradient of `v`, which is a variable in the negative particles. Note that the syntax for doing this is identical: we do not need to explicitly specify that we want the gradient in `r`, since this is inferred from the `domain` of `v`.
```
v.domain
grad_v = pybamm.grad(v)
grad_v_disc = disc.process_symbol(grad_v)
print("grad(v) tree is:\n")
grad_v_disc.render()
micro_mesh = mesh["negative particle"]
print("\n gradient matrix is:\n")
print("1/dr *\n{}".format(micro_mesh.d_nodes[:,np.newaxis] * grad_v_disc.children[0].entries.toarray()))
r_edge = micro_mesh.edges[1:-1] # note that grad_v_disc is evaluated on the node edges
fig, ax = plt.subplots()
ax.plot(r_fine, -np.sin(r_fine), r_edge, grad_v_disc.evaluate(y=y), "o")
ax.set_xlabel("r")
legend = ax.legend(["-sin(r)", "grad(v).evaluate(y=cos(r))"], loc="best")
plt.show()
```
#### Boundary conditions
If the discretisation is provided with boundary conditions, appropriate ghost nodes are concatenated onto the variable, and a larger gradient matrix is used. The ghost nodes are chosen based on the value of the first/last node in the variable and the boundary condition.
For a Dirichlet boundary condition $u=a$ on the left-hand boundary, we set the value of the left ghost node to be equal to
$$2a - u[0],$$
where $u[0]$ is the value of $u$ in the left-most cell in the domain. Similarly, for a Dirichlet condition $u=b$ on the right-hand boundary, we set the right ghost node to be
$$2b - u[-1].$$
Note also that the size of the gradient matrix is now (41,42) instead of (39,40), to account for the presence of boundary conditions in the State Vector.
```
disc.bcs = {u.id: {"left": (pybamm.Scalar(1), "Dirichlet"), "right": (pybamm.Scalar(2), "Dirichlet")}}
grad_u_disc = disc.process_symbol(grad_u)
print("The gradient object is:")
(grad_u_disc.render())
u_eval = grad_u_disc.children[1].evaluate(y=y)
print("The value of u on the left-hand boundary is {}".format((u_eval[0] + u_eval[1]) / 2))
print("The value of u on the right-hand boundary is {}".format((u_eval[-2] + u_eval[-1]) / 2))
```
For a Neumann boundary condition $\partial u/\partial x=c$ on the left-hand boundary, we set the value of the left ghost node to be
$$u[0] - c\,dx,$$
where $dx$ is the step size at the left-hand boundary. For a Neumann boundary condition $\partial u/\partial x=d$ on the right-hand boundary, we set the value of the right ghost node to be
$$u[-1] + d\,dx.$$
```
disc.bcs = {u.id: {"left": (pybamm.Scalar(3), "Neumann"), "right": (pybamm.Scalar(4), "Neumann")}}
grad_u_disc = disc.process_symbol(grad_u)
print("The gradient object is:")
(grad_u_disc.render())
grad_u_eval = grad_u_disc.evaluate(y=y)
print("The gradient on the left-hand boundary is {}".format(grad_u_eval[0]))
print("The gradient of u on the right-hand boundary is {}".format(grad_u_eval[-1]))
```
We can mix the types of the boundary conditions:
```
disc.bcs = {u.id: {"left": (pybamm.Scalar(5), "Dirichlet"), "right": (pybamm.Scalar(6), "Neumann")}}
grad_u_disc = disc.process_symbol(grad_u)
print("The gradient object is:")
(grad_u_disc.render())
grad_u_eval = grad_u_disc.evaluate(y=y)
u_eval = grad_u_disc.children[1].evaluate(y=y)
print("The value of u on the left-hand boundary is {}".format((u_eval[0] + u_eval[1])/2))
print("The gradient on the right-hand boundary is {}".format(grad_u_eval[-1]))
```
Robin boundary conditions can be implemented by specifying a Neumann condition where the flux depends on the variable.
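As a concrete illustration, here is a minimal sketch (not part of the original notebook) of a Robin condition $-\partial c/\partial x = h(c - c_{\infty})$ imposed at the right boundary of a diffusion model. It is written as a Neumann condition whose value depends on the variable through `pybamm.boundary_value`; `h` and `c_inf` are made-up constants, and the exact API may differ slightly between PyBaMM versions.
```
# Hedged sketch: a Robin condition expressed as a variable-dependent Neumann condition
c = pybamm.Variable("c", domain=macroscale)
h, c_inf = pybamm.Scalar(0.1), pybamm.Scalar(0.5)
robin_model = pybamm.BaseModel()
robin_model.rhs = {c: pybamm.div(pybamm.grad(c))}
robin_model.boundary_conditions = {
    c: {
        "left": (pybamm.Scalar(0), "Neumann"),
        # the flux at the right boundary depends on the boundary value of c itself
        "right": (-h * (pybamm.boundary_value(c, "right") - c_inf), "Neumann"),
    }
}
robin_model.initial_conditions = {c: pybamm.Scalar(1)}
disc_robin = pybamm.Discretisation(mesh, spatial_methods)
disc_robin.process_model(robin_model);
```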
### Divergence operator
Before computing the Divergence operator, we set up Neumann boundary conditions. The behaviour with Dirichlet boundary conditions is very similar.
```
disc.bcs = {u.id: {"left": (pybamm.Scalar(-1), "Neumann"), "right": (pybamm.Scalar(1), "Neumann")}}
```
Now we can process `div(grad(u))`, converting it to a Matrix-Vector multiplication, plus a vector for the boundary conditions. Since we have Neumann boundary conditions, the gradient of an object of size (n,) lives on all the edges, including the boundary edges, and so has size (n+1,); taking its divergence then returns an object of size (n,).
```
div_grad_u = pybamm.div(grad_u)
div_grad_u_disc = disc.process_symbol(div_grad_u)
div_grad_u_disc.render()
```
Once again, in 1D, the divergence matrix is a `[-1,1]` matrix (divided by the distance between the edges)
```
print("divergence matrix is:\n")
print("1/dx * \n{}".format(
macro_mesh.d_edges[:,np.newaxis] * div_grad_u_disc.children[0].entries.toarray()
))
```
We can simplify `div_grad_u_disc`, to collapse the two `[-1,1]` matrices into a single `[1,-2,1]` matrix. The vector of boundary conditions is also simplified.
```
div_grad_u_disc_simp = div_grad_u_disc.simplify()
div_grad_u_disc_simp.render()
print("laplacian matrix is:\n")
print("1/dx^2 *\n{}".format(
macro_mesh.d_edges[:,np.newaxis] ** 2 * div_grad_u_disc_simp.children[0].entries.toarray()
))
```
Simplifying the tree reduces the time taken to evaluate it:
```
import timeit
timeit.timeit('div_grad_u_disc.evaluate(y=y)', setup="from __main__ import div_grad_u_disc, y", number=10000)
timeit.timeit('div_grad_u_disc_simp.evaluate(y=y)', setup="from __main__ import div_grad_u_disc_simp, y", number=10000)
```
### Integral operator
Finally, we can define an integral operator, which integrates the variable across the domain specified by the integration variable.
```
int_u = pybamm.Integral(u, x_var)
int_u_disc = disc.process_symbol(int_u)
print("int(u) = {} is approximately equal to 1/12, {}".format(int_u_disc.evaluate(y=y), 1/12))
# We divide v by r to evaluate the integral more easily
int_v_over_r = pybamm.Integral(v/r_var, r_var)
int_v_over_r_disc = disc.process_symbol(int_v_over_r)
print("int(v/r) = {} is approximately equal to 4 * pi**2 * sin(1), {}".format(
int_v_over_r_disc.evaluate(y=y), 4 * np.pi**2 * np.sin(1))
)
```
The integral operators are also Matrix-Vector multiplications
```
print("int(u):\n")
int_u_disc.render()
print("\nint(v):\n")
int_v_over_r_disc.render()
int_u_disc.children[0].evaluate() / macro_mesh.d_edges
int_v_over_r_disc.children[0].evaluate() / micro_mesh.d_edges
```
## Discretising a model
We can now discretise a whole model. We create a simple model for the concentration in the electrolyte and the concentration in the particles, and then discretise it with a single command:
```
disc.process_model(model)
```
```
model = pybamm.BaseModel()
c_e = pybamm.Variable("electrolyte concentration", domain=macroscale)
N_e = pybamm.grad(c_e)
c_s = pybamm.Variable("particle concentration", domain=["negative particle"])
N_s = pybamm.grad(c_s)
model.rhs = {c_e: pybamm.div(N_e) - 5, c_s: pybamm.div(N_s)}
model.boundary_conditions = {
c_e: {"left": (np.cos(0), "Neumann"), "right": (np.cos(10), "Neumann")},
c_s: {"left": (0, "Neumann"), "right": (-1, "Neumann")},
}
model.initial_conditions = {c_e: 1 + 0.1 * pybamm.sin(10*x_var), c_s: 1}
# Create a new discretisation and process model
disc2 = pybamm.Discretisation(mesh, spatial_methods)
disc2.process_model(model);
```
The initial conditions are discretised to vectors, and an array of concatenated initial conditions is created.
```
c_e_0 = model.initial_conditions[c_e].evaluate()
c_s_0 = model.initial_conditions[c_s].evaluate()
y0 = model.concatenated_initial_conditions.evaluate()
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(13,4))
ax1.plot(x_fine, 1 + 0.1*np.sin(10*x_fine), x, c_e_0, "o")
ax1.set_xlabel("x")
ax1.legend(["1+0.1*sin(10*x)", "c_e_0"], loc="best")
ax2.plot(x_fine, np.ones_like(r_fine), r, c_s_0, "o")
ax2.set_xlabel("r")
ax2.legend(["1", "c_s_0"], loc="best")
ax3.plot(y0,"*")
ax3.set_xlabel("index")
ax3.set_ylabel("y0")
plt.tight_layout()
plt.show()
```
The discretised rhs can be evaluated, for example at `t=0`, `y=y0`:
```
rhs_c_e = model.rhs[c_e].evaluate(0, y0)
rhs_c_s = model.rhs[c_s].evaluate(0, y0)
rhs = model.concatenated_rhs.evaluate(0, y0)
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(13,4))
# ax1.plot(x_fine, -10*np.sin(10*x_fine) - 5, x, rhs_c_e, "o")
ax1.plot(x, rhs_c_e, "o")
ax1.set_xlabel("x")
ax1.set_ylabel("rhs_c_e")
# ax1.legend(["1+0.1*sin(10*x)", "c_e_0"], loc="best")
ax2.plot(r, rhs_c_s, "o")
ax2.set_xlabel("r")
ax2.set_ylabel("rhs_c_s")
ax3.plot(rhs,"*")
ax3.set_xlabel("index")
ax3.set_ylabel("rhs")
plt.tight_layout()
plt.show()
```
The function `model.concatenated_rhs` is then passed to the solver to solve the model, with initial conditions `model.concatenated_initial_conditions`.
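For completeness, here is a minimal sketch of that last step (assuming the default `pybamm.ScipySolver` is available in your PyBaMM installation):
```
# Hedged sketch: hand the discretised model to a time-stepping solver
solver = pybamm.ScipySolver()
solution = solver.solve(model, t_eval=np.linspace(0, 1, 100))
print(solution.t.shape, solution.y.shape)
```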
## More advanced concepts
Since this notebook is only an introduction to the discretisation, we have not covered everything. More advanced concepts, such as the ones below, can be explored by looking into the [API docs](https://pybamm.readthedocs.io/en/latest/source/spatial_methods/finite_volume.html).
- Gradient and divergence of microscale variables in the P2D model
- Indefinite integral
If you would like detailed examples of these operations, please [create an issue](https://github.com/pybamm-team/PyBaMM/blob/master/CONTRIBUTING.md#a-before-you-begin) and we will be happy to help.
# OCR model for reading captcha
**Author:** [A_K_Nain](https://twitter.com/A_K_Nain)<br>
**Date created:** 2020/06/14<br>
**Last modified:** 2020/06/14<br>
**Description:** How to implement an OCR model using CNNs, RNNs and CTC loss.
## Introduction
This example demonstrates a simple OCR model built with the Functional API. Apart from
combining CNN and RNN, it also illustrates how you can instantiate a new layer
and use it as an `Endpoint` layer for implementing CTC loss. For a detailed
description on layer subclassing, please check out this
[example](https://keras.io/guides/making_new_layers_and_models_via_subclassing/#the-addmetric-method)
in the developer guides.
## Setup
```
import os
import numpy as np
import matplotlib.pyplot as plt
from pathlib import Path
from collections import Counter
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
```
## Load the data: [Captcha Images](https://www.kaggle.com/fournierp/captcha-version-2-images)
Let's download the data.
```
!curl -LO https://github.com/AakashKumarNain/CaptchaCracker/raw/master/captcha_images_v2.zip
!unzip -qq captcha_images_v2.zip
```
The dataset contains 1040 captcha files as png images. The label for each sample is the
name of the file (excluding the '.png' extension) and is a string. We will map each character
in the string to a number for training the model and, similarly, map the model's predictions
back to strings. For this purpose we maintain two mappings, one from characters to numbers
and one from numbers back to characters (a quick round-trip check is sketched after the
preprocessing code below).
```
# Path to the data directory
data_dir = Path("./captcha_images_v2/")
# Get list of all the images
images = sorted(list(map(str, list(data_dir.glob("*.png")))))
labels = [img.split(os.path.sep)[-1].split(".png")[0] for img in images]
characters = set(char for label in labels for char in label)
print("Number of images found: ", len(images))
print("Number of labels found: ", len(labels))
print("Number of unique characters: ", len(characters))
print("Characters present: ", characters)
# Batch size for training and validation
batch_size = 16
# Desired image dimensions
img_width = 200
img_height = 50
# Factor by which the image is going to be downsampled
# by the convolutional blocks. We will be using two
# convolution blocks and each convolution block will have
# a pooling layer which downsamples the features by a factor of 2.
# Hence the total downsampling factor is 4.
downsample_factor = 4
# Maximum length of any captcha in the dataset
max_length = max([len(label) for label in labels])
```
## Preprocessing
```
# Mapping characters to numbers
char_to_num = layers.experimental.preprocessing.StringLookup(
vocabulary=list(characters), num_oov_indices=0, mask_token=None
)
# Mapping numbers back to original characters
num_to_char = layers.experimental.preprocessing.StringLookup(
vocabulary=char_to_num.get_vocabulary(), mask_token=None, invert=True
)
def split_data(images, labels, train_size=0.9, shuffle=True):
# 1. Get the total size of the dataset
size = len(images)
# 2. Make an indices array and shuffle it, if required
indices = np.arange(size)
if shuffle:
np.random.shuffle(indices)
# 3. Get the size of training samples
train_samples = int(size * train_size)
# 4. Split data into training and validation sets
x_train, y_train = images[indices[:train_samples]], labels[indices[:train_samples]]
x_valid, y_valid = images[indices[train_samples:]], labels[indices[train_samples:]]
return x_train, x_valid, y_train, y_valid
# Splitting data into training and validation sets
x_train, x_valid, y_train, y_valid = split_data(np.array(images), np.array(labels))
def encode_single_sample(img_path, label):
# 1. Read image
img = tf.io.read_file(img_path)
# 2. Decode and convert to grayscale
img = tf.io.decode_png(img, channels=1)
# 3. Convert to float32 in [0, 1] range
img = tf.image.convert_image_dtype(img, tf.float32)
# 4. Resize to the desired size
img = tf.image.resize(img, [img_height, img_width])
# 5. Transpose the image because we want the time
# dimension to correspond to the width of the image.
img = tf.transpose(img, perm=[1, 0, 2])
# 6. Map the characters in label to numbers
label = char_to_num(tf.strings.unicode_split(label, input_encoding="UTF-8"))
# 7. Return a dict as our model is expecting two inputs
return {"image": img, "label": label}
```
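As a quick sanity check (not part of the original example), we can encode one of the labels and decode it back to confirm that `char_to_num` and `num_to_char` are inverses of each other:
```
# Round-trip a real label through the two lookup layers
sample_label = labels[0]
encoded = char_to_num(tf.strings.unicode_split(sample_label, input_encoding="UTF-8"))
decoded = tf.strings.reduce_join(num_to_char(encoded)).numpy().decode("utf-8")
print(sample_label, "->", encoded.numpy(), "->", decoded)
```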
## Data Generators
```
train_data_generator = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_data_generator = (
train_data_generator.map(
encode_single_sample, num_parallel_calls=tf.data.experimental.AUTOTUNE
)
.batch(batch_size)
.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
)
valid_data_generator = tf.data.Dataset.from_tensor_slices((x_valid, y_valid))
valid_data_generator = (
valid_data_generator.map(
encode_single_sample, num_parallel_calls=tf.data.experimental.AUTOTUNE
)
.batch(batch_size)
.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
)
```
## Visualize the data
```
_, ax = plt.subplots(4, 4, figsize=(10, 5))
for batch in train_data_generator.take(1):
images = batch["image"]
labels = batch["label"]
for i in range(16):
img = (images[i] * 255).numpy().astype("uint8")
label = tf.strings.reduce_join(num_to_char(labels[i])).numpy().decode("utf-8")
ax[i // 4, i % 4].imshow(img[:, :, 0].T, cmap="gray")
ax[i // 4, i % 4].set_title(label)
ax[i // 4, i % 4].axis("off")
plt.show()
```
## Model
```
class CTCLayer(layers.Layer):
def __init__(self, name=None):
super().__init__(name=name)
self.loss_fn = keras.backend.ctc_batch_cost
def call(self, y_true, y_pred):
# Compute the training-time loss value and add it
# to the layer using `self.add_loss()`.
batch_len = tf.cast(tf.shape(y_true)[0], dtype="int64")
input_length = tf.cast(tf.shape(y_pred)[1], dtype="int64")
label_length = tf.cast(tf.shape(y_true)[1], dtype="int64")
input_length = input_length * tf.ones(shape=(batch_len, 1), dtype="int64")
label_length = label_length * tf.ones(shape=(batch_len, 1), dtype="int64")
loss = self.loss_fn(y_true, y_pred, input_length, label_length)
self.add_loss(loss)
        # At test time, just return the computed predictions
return y_pred
def build_model():
# Inputs to the model
input_img = layers.Input(
shape=(img_width, img_height, 1), name="image", dtype="float32"
)
labels = layers.Input(name="label", shape=(None,), dtype="float32")
# First conv block
x = layers.Conv2D(
32,
(3, 3),
activation="relu",
kernel_initializer="he_normal",
padding="same",
name="Conv1",
)(input_img)
x = layers.MaxPooling2D((2, 2), name="pool1")(x)
# Second conv block
x = layers.Conv2D(
64,
(3, 3),
activation="relu",
kernel_initializer="he_normal",
padding="same",
name="Conv2",
)(x)
x = layers.MaxPooling2D((2, 2), name="pool2")(x)
    # We have used two max-pooling layers with pool size and strides of 2.
# Hence, downsampled feature maps are 4x smaller. The number of
# filters in the last layer is 64. Reshape accordingly before
# passing it to RNNs
new_shape = ((img_width // 4), (img_height // 4) * 64)
x = layers.Reshape(target_shape=new_shape, name="reshape")(x)
x = layers.Dense(64, activation="relu", name="dense1")(x)
x = layers.Dropout(0.2)(x)
# RNNs
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True, dropout=0.2))(x)
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True, dropout=0.25))(x)
# Output layer
x = layers.Dense(len(characters) + 1, activation="softmax", name="dense2")(x)
# Add CTC layer for calculating CTC loss at each step
output = CTCLayer(name="ctc_loss")(labels, x)
# Define the model
model = keras.models.Model(
inputs=[input_img, labels], outputs=output, name="ocr_model_v1"
)
# Optimizer
opt = keras.optimizers.Adam()
# Compile the model and return
model.compile(optimizer=opt)
return model
# Get the model
model = build_model()
model.summary()
```
## Training
```
epochs = 100
es_patience = 10
# Add early stopping
es = keras.callbacks.EarlyStopping(
monitor="val_loss", patience=es_patience, restore_best_weights=True
)
# Train the model
history = model.fit(
train_data_generator,
validation_data=valid_data_generator,
epochs=epochs,
callbacks=[es],
)
```
## Let's test-drive it
```
# Get the prediction model by extracting layers till the output layer
prediction_model = keras.models.Model(
model.get_layer(name="image").input, model.get_layer(name="dense2").output
)
prediction_model.summary()
# A utility function to decode the output of the network
def decode_batch_predictions(pred):
input_len = np.ones(pred.shape[0]) * pred.shape[1]
# Use greedy search. For complex tasks, you can use beam search
results = keras.backend.ctc_decode(pred, input_length=input_len, greedy=True)[0][0][
:, :max_length
]
# Iterate over the results and get back the text
output_text = []
for res in results:
res = tf.strings.reduce_join(num_to_char(res)).numpy().decode("utf-8")
output_text.append(res)
return output_text
# Let's check results on some validation samples
for batch in valid_data_generator.take(1):
batch_images = batch["image"]
batch_labels = batch["label"]
preds = prediction_model.predict(batch_images)
pred_texts = decode_batch_predictions(preds)
orig_texts = []
for label in batch_labels:
label = tf.strings.reduce_join(num_to_char(label)).numpy().decode("utf-8")
orig_texts.append(label)
_, ax = plt.subplots(4, 4, figsize=(15, 5))
for i in range(len(pred_texts)):
img = (batch_images[i, :, :, 0] * 255).numpy().astype(np.uint8)
img = img.T
title = f"Prediction: {pred_texts[i]}"
ax[i // 4, i % 4].imshow(img, cmap="gray")
ax[i // 4, i % 4].set_title(title)
ax[i // 4, i % 4].axis("off")
plt.show()
```
# Sklearn
## Bike Sharing Demand
Kaggle competition: https://www.kaggle.com/c/bike-sharing-demand
Using historical data on bike rentals and the accompanying weather conditions, we need to estimate the demand for bike rentals.
In the original problem statement, 11 features are available: https://www.kaggle.com/c/prudential-life-insurance-assessment/data
The feature set contains real-valued, categorical, and binary data.
For this demonstration we use the training set from the original data (train.csv); the files needed to follow along are provided.
### Libraries
```
from sklearn.model_selection import train_test_split, StratifiedShuffleSplit, ShuffleSplit, GridSearchCV, RandomizedSearchCV
from sklearn import linear_model, metrics
import numpy as np
import pandas as pd
%pylab inline
```
### Loading the data
```
raw_data = pd.read_csv('bike_sharing_demand.csv', header = 0, sep = ',')
raw_data.head()
```
***datetime*** - hourly date + timestamp
***season*** - 1 = spring, 2 = summer, 3 = fall, 4 = winter
***holiday*** - whether the day is considered a holiday
***workingday*** - whether the day is neither a weekend nor holiday
***weather*** - 1: Clear, Few clouds, Partly cloudy, Partly cloudy
2: Mist + Cloudy, Mist + Broken clouds, Mist + Few clouds, Mist
3: Light Snow, Light Rain + Thunderstorm + Scattered clouds, Light Rain + Scattered clouds
4: Heavy Rain + Ice Pallets + Thunderstorm + Mist, Snow + Fog
***temp*** - temperature in Celsius
***atemp*** - "feels like" temperature in Celsius
***humidity*** - relative humidity
***windspeed*** - wind speed
***casual*** - number of non-registered user rentals initiated
***registered*** - number of registered user rentals initiated
***count*** - number of total rentals
```
print(raw_data.shape)
raw_data.isnull().values.any()
```
### Data preprocessing
#### Feature types
```
raw_data.info()
raw_data.datetime = raw_data.datetime.apply(pd.to_datetime)
raw_data['month'] = raw_data.datetime.apply(lambda x : x.month)
raw_data['hour'] = raw_data.datetime.apply(lambda x : x.hour)
raw_data.head()
```
#### Training set and hold-out test
```
train_data = raw_data.iloc[:-1000, :]
hold_out_test_data = raw_data.iloc[-1000:, :]
print(raw_data.shape, train_data.shape, hold_out_test_data.shape)
print('train period from {} to {}'.format(train_data.datetime.min(), train_data.datetime.max()))
print('evaluation period from {} to {}'.format(hold_out_test_data.datetime.min(), hold_out_test_data.datetime.max()))
```
#### Data and target
```
# training
train_labels = train_data['count'].values
train_data = train_data.drop(['datetime', 'count'], axis = 1)
# test
test_labels = hold_out_test_data['count'].values
test_data = hold_out_test_data.drop(['datetime', 'count'], axis = 1)
```
#### Target on the training set and on the hold-out test
```
pylab.figure(figsize = (16, 6))
pylab.subplot(1,2,1)
pylab.hist(train_labels)
pylab.title('train data')
pylab.subplot(1,2,2)
pylab.hist(test_labels)
pylab.title('test data')
```
#### Numerical features
```
numeric_columns = ['temp', 'atemp', 'humidity', 'windspeed', 'casual', 'registered', 'month', 'hour']
train_data = train_data[numeric_columns]
test_data = test_data[numeric_columns]
train_data.head()
test_data.head()
```
### Model
```
regressor = linear_model.SGDRegressor(random_state = 0, max_iter=1000, tol=1e-3)
regressor.fit(train_data, train_labels)
metrics.mean_absolute_error(test_labels, regressor.predict(test_data))
print(test_labels[:10])
print(regressor.predict(test_data)[:10])
regressor.coef_
```
### Scaling
```
from sklearn.preprocessing import StandardScaler
# create a standard scaler
scaler = StandardScaler()
scaler.fit(train_data, train_labels)
scaled_train_data = scaler.transform(train_data)
scaled_test_data = scaler.transform(test_data)
regressor.fit(scaled_train_data, train_labels)
metrics.mean_absolute_error(test_labels, regressor.predict(scaled_test_data))
print(test_labels[:10])
print(regressor.predict(scaled_test_data)[:10])
```
### Suspiciously good?
The error is suspiciously low: as the check below shows, `casual + registered` add up exactly to `count`, so these two columns leak the target and have to be dropped.
```
print(regressor.coef_)
print(list(map(lambda x : round(x, 2), regressor.coef_)))
train_data.head()
train_labels[:10]
np.all(train_data.registered + train_data.casual == train_labels)
train_data.drop(['casual', 'registered'], axis = 1, inplace = True)
test_data.drop(['casual', 'registered'], axis = 1, inplace = True)
scaler.fit(train_data, train_labels)
scaled_train_data = scaler.transform(train_data)
scaled_test_data = scaler.transform(test_data)
regressor.fit(scaled_train_data, train_labels)
metrics.mean_absolute_error(test_labels, regressor.predict(scaled_test_data))
print(list(map(lambda x : round(x, 2), regressor.coef_)))
```
### Pipeline
```
from sklearn.pipeline import Pipeline
# create a pipeline with two steps: scaling and regression
pipeline = Pipeline(steps = [('scaling', scaler), ('regression', regressor)])
pipeline.fit(train_data, train_labels)
metrics.mean_absolute_error(test_labels, pipeline.predict(test_data))
```
### Parameter tuning
```
pipeline.get_params().keys()
parameters_grid = {
'regression__loss' : ['huber', 'epsilon_insensitive', 'squared_loss', ],
'regression__max_iter' : [3, 5, 10, 50],
'regression__penalty' : ['l1', 'l2', 'none'],
'regression__alpha' : [0.0001, 0.01],
'scaling__with_mean' : [0., 0.5],
}
grid_cv = GridSearchCV(pipeline, parameters_grid, scoring = 'neg_mean_absolute_error', cv = 4)
%%time
grid_cv.fit(train_data, train_labels)
print(grid_cv.best_score_)
print(grid_cv.best_params_)
```
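`RandomizedSearchCV` is imported above but never used; as a hedged sketch (not in the original notebook), the same parameter grid can be sampled randomly, which is often much cheaper than an exhaustive grid search:
```
# Sample 10 random parameter combinations from the same grid
random_cv = RandomizedSearchCV(pipeline, parameters_grid, n_iter=10, cv=4,
                               scoring='neg_mean_absolute_error', random_state=0)
random_cv.fit(train_data, train_labels)
print(random_cv.best_score_)
print(random_cv.best_params_)
```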
### Evaluation on the hold-out test
```
metrics.mean_absolute_error(test_labels, grid_cv.best_estimator_.predict(test_data))
np.mean(test_labels)
test_predictions = grid_cv.best_estimator_.predict(test_data)
print(test_labels[:10])
print(test_predictions[:10])
pylab.figure(figsize=(16, 6))
pylab.subplot(1,2,1)
pylab.grid(True)
pylab.scatter(train_labels, pipeline.predict(train_data), alpha=0.5, color = 'red')
pylab.scatter(test_labels, pipeline.predict(test_data), alpha=0.5, color = 'blue')
pylab.title('no parameters setting')
pylab.xlim(-100,1100)
pylab.ylim(-100,1100)
pylab.subplot(1,2,2)
pylab.grid(True)
pylab.scatter(train_labels, grid_cv.best_estimator_.predict(train_data), alpha=0.5, color = 'red')
pylab.scatter(test_labels, grid_cv.best_estimator_.predict(test_data), alpha=0.5, color = 'blue')
pylab.title('grid search')
pylab.xlim(-100,1100)
pylab.ylim(-100,1100)
```
# FloPy
## MODPATH 7 structured models example
This notebook demonstrates how to create and run example 1a from the MODPATH 7 documentation for MODFLOW-2005 and MODFLOW 6. The notebook also shows how to create subsets of endpoint output and plot MODPATH results on PlotMapView objects.
```
import sys
import os
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
# run installed version of flopy or add local path
try:
import flopy
except:
fpth = os.path.abspath(os.path.join("..", ".."))
sys.path.append(fpth)
import flopy
print(sys.version)
print("numpy version: {}".format(np.__version__))
print("matplotlib version: {}".format(mpl.__version__))
print("flopy version: {}".format(flopy.__version__))
if not os.path.isdir("data"):
os.makedirs("data", exist_ok=True)
```
### Flow model data
```
nper, nstp, perlen, tsmult = 1, 1, 1.0, 1.0
nlay, nrow, ncol = 3, 21, 20
delr = delc = 500.0
top = 400.0
botm = [220.0, 200.0, 0.0]
laytyp = [1, 0, 0]
kh = [50.0, 0.01, 200.0]
kv = [10.0, 0.01, 20.0]
wel_loc = (2, 10, 9)
wel_q = -150000.0
rch = 0.005
riv_h = 320.0
riv_z = 317.0
riv_c = 1.0e5
```
### MODPATH 7 data
```
# MODPATH zones
zone3 = np.ones((nrow, ncol), dtype=np.int32)
zone3[wel_loc[1:]] = 2
zones = [1, 1, zone3]
# create particles
# particle group 1
plocs = []
pids = []
for idx in range(nrow):
plocs.append((0, idx, 2))
pids.append(idx)
part0 = flopy.modpath.ParticleData(
plocs, drape=0, structured=True, particleids=pids
)
pg0 = flopy.modpath.ParticleGroup(
particlegroupname="PG1", particledata=part0, filename="ex01a.pg1.sloc"
)
# particle group 2
v = [(2, 0, 0), (0, 20, 0)]
part1 = flopy.modpath.ParticleData(
v, drape=1, structured=True, particleids=[1000, 1001]
)
pg1 = flopy.modpath.ParticleGroup(
particlegroupname="PG2", particledata=part1, filename="ex01a.pg2.sloc"
)
locsa = [[0, 0, 0, 0, nrow - 1, ncol - 1], [1, 0, 0, 1, nrow - 1, ncol - 1]]
locsb = [[2, 0, 0, 2, nrow - 1, ncol - 1]]
sd = flopy.modpath.CellDataType(
drape=0, columncelldivisions=1, rowcelldivisions=1, layercelldivisions=1
)
p = flopy.modpath.LRCParticleData(
subdivisiondata=[sd, sd], lrcregions=[locsa, locsb]
)
pg2 = flopy.modpath.ParticleGroupLRCTemplate(
particlegroupname="PG3", particledata=p, filename="ex01a.pg3.sloc"
)
particlegroups = [pg2]
# default iface for MODFLOW-2005 and MODFLOW 6
defaultiface = {"RECHARGE": 6, "ET": 6}
defaultiface6 = {"RCH": 6, "EVT": 6}
```
### MODPATH 7 using MODFLOW-2005
#### Create and run MODFLOW-2005
```
ws = os.path.join("data", "mp7_ex1_mf2005_dis")
nm = "ex01_mf2005"
exe_name = "mf2005"
iu_cbc = 130
m = flopy.modflow.Modflow(nm, model_ws=ws, exe_name=exe_name)
flopy.modflow.ModflowDis(
m,
nlay=nlay,
nrow=nrow,
ncol=ncol,
nper=nper,
itmuni=4,
lenuni=2,
perlen=perlen,
nstp=nstp,
tsmult=tsmult,
steady=True,
delr=delr,
delc=delc,
top=top,
botm=botm,
)
flopy.modflow.ModflowLpf(
m, ipakcb=iu_cbc, laytyp=laytyp, hk=kh, vka=kv, constantcv=True
)
flopy.modflow.ModflowBas(m, ibound=1, strt=top)
# recharge
flopy.modflow.ModflowRch(m, ipakcb=iu_cbc, rech=rch)
# wel
wd = [i for i in wel_loc] + [wel_q]
flopy.modflow.ModflowWel(m, ipakcb=iu_cbc, stress_period_data={0: wd})
# river
rd = []
for i in range(nrow):
rd.append([0, i, ncol - 1, riv_h, riv_c, riv_z])
flopy.modflow.ModflowRiv(m, ipakcb=iu_cbc, stress_period_data={0: rd})
# output control
flopy.modflow.ModflowOc(
m, stress_period_data={(0, 0): ["save head", "save budget", "print head"]}
)
flopy.modflow.ModflowPcg(m, hclose=1e-6, rclose=1e-6)
m.write_input()
success, buff = m.run_model()
assert success, "mf2005 model did not run"
```
#### Create and run MODPATH 7
```
# create modpath files
exe_name = "mp7"
mp = flopy.modpath.Modpath7(
modelname=nm + "_mp", flowmodel=m, exe_name=exe_name, model_ws=ws
)
mpbas = flopy.modpath.Modpath7Bas(mp, porosity=0.1, defaultiface=defaultiface)
mpsim = flopy.modpath.Modpath7Sim(
mp,
simulationtype="combined",
trackingdirection="forward",
weaksinkoption="pass_through",
weaksourceoption="pass_through",
budgetoutputoption="summary",
budgetcellnumbers=[1049, 1259],
traceparticledata=[1, 1000],
referencetime=[0, 0, 0.0],
stoptimeoption="extend",
timepointdata=[500, 1000.0],
zonedataoption="on",
zones=zones,
particlegroups=particlegroups,
)
# write modpath datasets
mp.write_input()
# run modpath
mp.run_model()
```
#### Load MODPATH 7 output
Get locations to extract pathline data
```
nodew = m.dis.get_node([wel_loc])
riv_locs = flopy.utils.ra_slice(m.riv.stress_period_data[0], ["k", "i", "j"])
nodesr = m.dis.get_node(riv_locs.tolist())
```
Pathline data
```
fpth = os.path.join(ws, nm + "_mp.mppth")
p = flopy.utils.PathlineFile(fpth)
pw0 = p.get_destination_pathline_data(nodew, to_recarray=True)
pr0 = p.get_destination_pathline_data(nodesr, to_recarray=True)
```
Endpoint data
Get particles that terminate in the well
```
fpth = os.path.join(ws, nm + "_mp.mpend")
e = flopy.utils.EndpointFile(fpth)
well_epd = e.get_destination_endpoint_data(dest_cells=nodew)
well_epd.shape
```
Get particles that terminate in the river boundaries
```
riv_epd = e.get_destination_endpoint_data(dest_cells=nodesr)
riv_epd.shape
```
Merge the particles that end in the well and the river boundaries.
```
epd0 = np.concatenate((well_epd, riv_epd))
epd0.shape
```
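As a small extra summary (not in the original notebook), the endpoint subsets can be used to report what fraction of the released particles each sink captures:
```
# Fraction of particles captured by the well versus the river boundary cells
n_well, n_riv = well_epd.shape[0], riv_epd.shape[0]
print("captured by well:  {:.1%}".format(n_well / (n_well + n_riv)))
print("captured by river: {:.1%}".format(n_riv / (n_well + n_riv)))
```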
#### Plot MODPATH 7 output
```
mm = flopy.plot.PlotMapView(model=m)
mm.plot_grid(lw=0.5)
mm.plot_pathline(pw0, layer="all", color="blue", label="captured by wells")
mm.plot_pathline(pr0, layer="all", color="green", label="captured by rivers")
mm.plot_endpoint(epd0, direction="starting", colorbar=True)
mm.ax.legend();
```
### MODPATH 7 using MODFLOW 6
#### Create and run MODFLOW 6
```
ws = os.path.join("data", "mp7_ex1_mf6_dis")
nm = "ex01_mf6"
exe_name = "mf6"
# Create the Flopy simulation object
sim = flopy.mf6.MFSimulation(
sim_name=nm, exe_name="mf6", version="mf6", sim_ws=ws
)
# Create the Flopy temporal discretization object
pd = (perlen, nstp, tsmult)
tdis = flopy.mf6.modflow.mftdis.ModflowTdis(
sim, pname="tdis", time_units="DAYS", nper=nper, perioddata=[pd]
)
# Create the Flopy groundwater flow (gwf) model object
model_nam_file = "{}.nam".format(nm)
gwf = flopy.mf6.ModflowGwf(
sim, modelname=nm, model_nam_file=model_nam_file, save_flows=True
)
# Create the Flopy iterative model solver (ims) Package object
ims = flopy.mf6.modflow.mfims.ModflowIms(
sim,
pname="ims",
complexity="SIMPLE",
outer_dvclose=1e-6,
inner_dvclose=1e-6,
rcloserecord=1e-6,
)
# create gwf file
dis = flopy.mf6.modflow.mfgwfdis.ModflowGwfdis(
gwf,
pname="dis",
nlay=nlay,
nrow=nrow,
ncol=ncol,
length_units="FEET",
delr=delr,
delc=delc,
top=top,
botm=botm,
)
# Create the initial conditions package
ic = flopy.mf6.modflow.mfgwfic.ModflowGwfic(gwf, pname="ic", strt=top)
# Create the node property flow package
npf = flopy.mf6.modflow.mfgwfnpf.ModflowGwfnpf(
gwf, pname="npf", icelltype=laytyp, k=kh, k33=kv
)
# recharge
flopy.mf6.modflow.mfgwfrcha.ModflowGwfrcha(gwf, recharge=rch)
# wel
wd = [(wel_loc, wel_q)]
flopy.mf6.modflow.mfgwfwel.ModflowGwfwel(
gwf, maxbound=1, stress_period_data={0: wd}
)
# river
rd = []
for i in range(nrow):
rd.append([(0, i, ncol - 1), riv_h, riv_c, riv_z])
flopy.mf6.modflow.mfgwfriv.ModflowGwfriv(gwf, stress_period_data={0: rd})
# Create the output control package
headfile = "{}.hds".format(nm)
head_record = [headfile]
budgetfile = "{}.cbb".format(nm)
budget_record = [budgetfile]
saverecord = [("HEAD", "ALL"), ("BUDGET", "ALL")]
oc = flopy.mf6.modflow.mfgwfoc.ModflowGwfoc(
gwf,
pname="oc",
saverecord=saverecord,
head_filerecord=head_record,
budget_filerecord=budget_record,
)
# Write the datasets
sim.write_simulation()
# Run the simulation
success, buff = sim.run_simulation()
assert success, "mf6 model did not run"
```
#### Create and run MODPATH 7
```
# create modpath files
exe_name = "mp7"
mp = flopy.modpath.Modpath7(
modelname=nm + "_mp", flowmodel=gwf, exe_name=exe_name, model_ws=ws
)
mpbas = flopy.modpath.Modpath7Bas(mp, porosity=0.1, defaultiface=defaultiface6)
mpsim = flopy.modpath.Modpath7Sim(
mp,
simulationtype="combined",
trackingdirection="forward",
weaksinkoption="pass_through",
weaksourceoption="pass_through",
budgetoutputoption="summary",
budgetcellnumbers=[1049, 1259],
traceparticledata=[1, 1000],
referencetime=[0, 0, 0.0],
stoptimeoption="extend",
timepointdata=[500, 1000.0],
zonedataoption="on",
zones=zones,
particlegroups=particlegroups,
)
# write modpath datasets
mp.write_input()
# run modpath
mp.run_model()
```
#### Load MODPATH 7 output
Pathline data
```
fpth = os.path.join(ws, nm + "_mp.mppth")
p = flopy.utils.PathlineFile(fpth)
pw1 = p.get_destination_pathline_data(nodew, to_recarray=True)
pr1 = p.get_destination_pathline_data(nodesr, to_recarray=True)
```
Endpoint data
Get particles that terminate in the well
```
fpth = os.path.join(ws, nm + "_mp.mpend")
e = flopy.utils.EndpointFile(fpth)
well_epd = e.get_destination_endpoint_data(dest_cells=nodew)
```
Get particles that terminate in the river boundaries
```
riv_epd = e.get_destination_endpoint_data(dest_cells=nodesr)
```
Merge the particles that end in the well and the river boundaries.
```
epd1 = np.concatenate((well_epd, riv_epd))
```
#### Plot MODPATH 7 output
```
mm = flopy.plot.PlotMapView(model=gwf)
mm.plot_grid(lw=0.5)
mm.plot_pathline(pw1, layer="all", color="blue", label="captured by wells")
mm.plot_pathline(pr1, layer="all", color="green", label="captured by rivers")
mm.plot_endpoint(epd1, direction="starting", colorbar=True)
mm.ax.legend();
```
### Compare MODPATH results
Compare MODPATH results for MODFLOW-2005 and MODFLOW 6. Also show a marker at every 5th pathline point.
```
f, axes = plt.subplots(ncols=3, nrows=1, sharey=True, figsize=(15, 10))
axes = axes.flatten()
ax = axes[0]
ax.set_aspect("equal")
mm = flopy.plot.PlotMapView(model=m, ax=ax)
mm.plot_grid(lw=0.5)
mm.plot_pathline(
pw0,
layer="all",
color="blue",
lw=1,
marker="o",
markercolor="black",
markersize=3,
markerevery=5,
)
mm.plot_pathline(
pr0,
layer="all",
color="green",
lw=1,
marker="o",
markercolor="black",
markersize=3,
markerevery=5,
)
ax.set_title("MODFLOW-2005")
ax = axes[1]
ax.set_aspect("equal")
mm = flopy.plot.PlotMapView(model=gwf, ax=ax)
mm.plot_grid(lw=0.5)
mm.plot_pathline(
pw1,
layer="all",
color="blue",
lw=1,
marker="o",
markercolor="black",
markersize=3,
markerevery=5,
)
mm.plot_pathline(
pr1,
layer="all",
color="green",
lw=1,
marker="o",
markercolor="black",
markersize=3,
markerevery=5,
)
ax.set_title("MODFLOW 6")
ax = axes[2]
ax.set_aspect("equal")
mm = flopy.plot.PlotMapView(model=m, ax=ax)
mm.plot_grid(lw=0.5)
mm.plot_pathline(pw1, layer="all", color="blue", lw=1, label="MODFLOW 6")
mm.plot_pathline(
pw0, layer="all", color="blue", lw=1, linestyle=":", label="MODFLOW-2005"
)
mm.plot_pathline(pr1, layer="all", color="green", lw=1, label="_none")
mm.plot_pathline(
pr0, layer="all", color="green", lw=1, linestyle=":", label="_none"
)
ax.legend()
ax.set_title("MODFLOW 2005 and MODFLOW 6");
```

# Pinocchio: rigid-body algorithms
```
import magic_donotload
```
## Set up
We will need Pinocchio, the robot models stored in the package `example-robot-data`, a viewer (either GepettoViewer or MeshCat), some basic linear-algebra operators and the SciPy optimizers.
```
import pinocchio as pin
import example_robot_data as robex
import numpy as np
from numpy.linalg import inv, pinv, eig, norm, svd, det
from scipy.optimize import fmin_bfgs
import time
import copy
```
## 1. Load and display the robot
Pinocchio is a library to compute different quantities related to the robot model, like body positions, inertias, gravity or dynamic effects, joint jacobians, etc. For that, we first need to define the robot model. The easiest solution is to load it from a [URDF model](https://ocw.tudelft.nl/course-lectures/2-2-1-introduction-to-urdf/). This can be done with the function `buildModelFromUrdf`.
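As a quick illustration, loading a model straight from a URDF file would look roughly like the sketch below; the file path is a placeholder, not part of this tutorial.
```
# Minimal sketch of loading a model directly from a URDF file.
# The path is a placeholder: adapt it to your own robot description.
urdf_path = '/path/to/my_robot.urdf'
model = pin.buildModelFromUrdf(urdf_path)
data = model.createData()
print('nq =', model.nq, ', nv =', model.nv)
```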
The package `example-robot-data` proposes to directly load several URDF models with a single line of code. This is what we are going to use for the tutorial. Look at the code inside the package if you want to adapt it to the URDF model of your favorite robot.
```
robot = robex.loadTalosArm() # Load a 6-dof manipulator arm
```
The robot can be visualized in a viewer. Two viewers are proposed in this tutorial: Gepetto-Viewer and MeshCat. Both can display the robot model in an external window managed by another process.
- MeshCat is browser based, which makes it very convenient in the Jupyter context, as it can also be embedded inside the notebook.
- Gepetto-Viewer is the historical viewer for Gepetto, and is more powerful. But you need to first run the command `gepetto-gui` in a terminal, before starting your Python command.
```
#Viewer = pin.visualize.GepettoVisualizer
Viewer = pin.visualize.MeshcatVisualizer
viz = Viewer(robot.model, robot.collision_model, robot.visual_model)
viz.initViewer(loadModel=True)
viz.display(robot.q0)
```
If choosing MeshCat, you can open the display directly inside the notebook.
```
hasattr(viz.viewer, 'jupyter_cell') and viz.viewer.jupyter_cell()
```
In both viewers, additional simple geometries can be added to enhance the visualization, although the syntax of the two viewers is not unified. For this tutorial, we provide a simple wrapper with 3 methods to display a sphere, display a box, and place them in the scene. Check out the `vizutils.py` script if you are curious about the syntax or would like to add fancier primitives to your scene.
```
import vizutils
vizutils.addViewerBox(viz, 'world/box', .05, .1, .2, [1., .2, .2, .5])
vizutils.addViewerSphere(viz,'world/ball', .05, [.2, .2, 1., .5])
vizutils.applyViewerConfiguration(viz, 'world/box', [0.5, -.2, .2, 1, 0, 0, 0])
vizutils.applyViewerConfiguration(viz, 'world/ball', [0.5, .2, .2, 1, 0, 0, 0])
```
<div class="alert alert-block alert-info">
<img src="recap.png" title="Recap"/>
<h3>Recap of the main syntax elements exposed in this section</h3>
<ul>
<li><code>robex.loadXXX</code> loads the model of the robot XXX
<li><code>vizutils.applyViewerConfiguration(name, xyzquat_placement)</code> change the placement of the geometry named <code>name</code> in the viewer.
</ul>
</div>
## 2. Pinocchio's philosophy (model, data and algorithms)
### Model vs data
Pinocchio does not make extensive use of the object-oriented design pattern. Instead, the information and buffers are grouped in a few important data structures, which are then accessed by algorithms written as static functions.
The two main data structures are `robot.model` and `robot.data`.
```
rmodel = robot.model
rdata = rmodel.createData()
```
The `robot.model` structure contains all the *constant* information that the algorithms need to process. It is typically loaded from a file describing the model (URDF). The `robot.model` is typically constant and is not modified by algorithms.
The `robot.data` structure contains the memory buffers that the algorithms need to store intermediary values or the final results returned to the user.
### Joints names and indexes
You can get the list of the joint names with the following:
```
for n in rmodel.names:
print(n)
```
In what follows, we will specifically use the joint named `gripper_left_joint` (nothing specific, you can change it if you like). Its index can be obtained with:
```
jointIndex = rmodel.getJointId("gripper_left_joint")
```
### A first algorithm: random configuration
Let's take a first example of an algorithm. You can pick a random configuration by calling the algorithm `randomConfiguration`. This algorithm just needs the `robot.model` (as no intermediary buffers are needed).
```
q = pin.randomConfiguration(rmodel)
```
### A second algorithm: forward kinematics
Another example is the algorithm to compute the forward kinematics. It recursively computes the placement of all the joint frames of the kinematic tree, and stores the results in `robot.data.oMi`, which is an array indexed by the joint indexes.
```
pin.forwardKinematics(rmodel, rdata, q)
rdata.oMi[jointIndex]
```
<div class="alert alert-block alert-info">
<img src="recap.png" title="Recap"/>
<h3>Recap of the main syntax elements exposed in this section</h3>
<ul>
<li><code>robot.model</code> and <code>robot.data</code> structures
<li><code>pin.randomConfiguration(robot.model)</code> to sample a random configuration
<li><code>pin.forwardKinematics(robot.model, robot.data, q)</code> to compute joint frame placements.
<li><code>robot.data.oMi[jointIndex]</code> to access a particular joint frame.
</ul>
</div>
## 3. 3d cost: optimizing the end effector position
We will now define a first cost function, that penalizes the distance between the robot gripper and a target.
For that, let's define a target position `ptarget` of dimension 3.
```
ptarget = np.array([.5, .1, .3])
```
### Joints and frames
As explained above, the position of the joint `jointIndex` is stored in `robot.data.oMi[jointIndex].translation`, and is recomputed by `pin.forwardKinematics`.
In Pinocchio, each joint is defined by its joint frame, whose **placement** (*i.e.* position & orientation) is stored in `robot.data.oMi`.
In addition, other *operational* frames are defined. They are defined by a fixed placement with respect to their parent joint frame. Denoting by `oMi` the placement of the parent joint frame (function of `q`), and by `iMf` the fixed placement of the operational frame with respect to the parent joint frame, the placement of the operational frame with respect to the world is easily computed by `oMf(q) = oMi(q) * iMf`.
A complete list of available frames is stored in `robot.frames`.
```
for f in robot.model.frames:
print(f.name)
```
All frame placements are computed directly by calling `pin.framesForwardKinematics` (or alternatively by calling `pin.updateFramePlacement` or `pin.updateFramePlacements` after a call to `pin.forwardKinematics`).
```
pin.framesForwardKinematics(rmodel, rdata, q)
```
For the tutorial, we will use the frame `gripper_left_fingertip_1_link`:
```
frameIndex = rmodel.getFrameId('gripper_left_fingertip_1_link')
```
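To make the relation `oMf(q) = oMi(q) * iMf` concrete, here is a small sanity check. It is only a sketch: the frame attributes `parent` (parent joint index) and `placement` (the fixed offset `iMf`) are assumed, and their exact names may vary slightly between Pinocchio versions.
```
# Sketch: rebuild the operational frame placement from its parent joint placement.
frame = rmodel.frames[frameIndex]   # frame description: parent joint index + fixed offset iMf
oMf_manual = rdata.oMi[frame.parent] * frame.placement
print(np.allclose(oMf_manual.homogeneous, rdata.oMf[frameIndex].homogeneous))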
### Cost-function template
In this tutorial, the cost functions will all be defined following the same template:
```
class Cost:
    def __init__(self, rmodel, rdata, viz=None):  # add any other arguments you like
        self.rmodel = rmodel
        self.rdata = rdata
        self.viz = viz
    def calc(self, q):
        ### Add the code to recompute your cost here
        cost = 0
        return cost
    def callback(self, q):
        if self.viz is None:
            return
        # Display something in viz ...
```
We will see later that the callback can be used to display or print data during the optimization. For example, we may want to display the robot configuration, and visualize the target position with a simple sphere added to the viewer.

Implement a `Cost` class computing the quadratic cost `p(q) - ptarget`.
```
%do_not_load -r 11-36 costs.py
```
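In case you want to compare with your own attempt, one possible `Cost3d` is sketched below, following the template above. It is not necessarily the reference solution hidden in `costs.py`; the default frame name and target are assumptions taken from the previous cells.
```
# One possible Cost3d (a sketch; the reference solution in costs.py may differ).
class Cost3d:
    def __init__(self, rmodel, rdata, frameIndex=None, ptarget=None, viz=None):
        self.rmodel = rmodel
        self.rdata = rdata
        self.frameIndex = frameIndex if frameIndex is not None else rmodel.getFrameId('gripper_left_fingertip_1_link')
        self.ptarget = ptarget if ptarget is not None else np.array([.5, .1, .3])
        self.viz = viz
    def calc(self, q):
        # Quadratic cost on the frame position p(q) with respect to ptarget.
        pin.framesForwardKinematics(self.rmodel, self.rdata, q)
        p = self.rdata.oMf[self.frameIndex].translation
        self.residual = p - self.ptarget
        return sum(self.residual ** 2)
    def callback(self, q):
        if self.viz is None:
            return
        # Show the target with the ball already added to the viewer, then the robot.
        vizutils.applyViewerConfiguration(self.viz, 'world/ball', self.ptarget.tolist() + [1, 0, 0, 0])
        self.viz.display(q)
        time.sleep(1e-2)
```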
<div class="alert alert-block alert-info">
<img src="recap.png" title="Recap"/>
<h3>Recap of the main syntax elements exposed in this section</h3>
<ul>
<li> Frame placements are computed by <code>pin.framesForwardKinematics(rmodel, rdata, q)</code>, and accessed by <code>rdata.oMf[frameIndex]</code>.
<li> Frame translation is a 3d array stored in <code>rdata.oMf[frameIndex].translation</code>. </ul>
</div>
## 4. 6d cost: optimizing the end effector placement (position+orientation)
We will now define a cost function that penalizes the placement, i.e. rotation+translation between an operational frame and a target frame.
### Spatial algebra in Pinocchio
Most of the physical quantities stored in Pinocchio are 6D quantities, which are elements of the so-called *Spatial Algebra*, following Roy Featherstone's definitions and naming. Featherstone's work, and in particular [his excellent 2008 book](https://link.springer.com/content/pdf/10.1007%2F978-1-4899-7560-7.pdf), is the basis for all the algorithms in Pinocchio.
Frame placements, formally elements of the Lie group SE(3), are represented in Pinocchio by an object of class `pin.SE3` containing a 3x3 rotation matrix and a 3D translation vector. Placements can be multiplied and inverted.
```
M1 = pin.SE3.Identity()
M2 = pin.SE3.Random()
print(M2, M2.rotation, M2.translation)
M3 = M1 * M2
M4 = M2.inverse()
```
SE(3) comes with `log` and `exp` operations. The `log` maps an SE3 placement to the 6D velocity that should be applied during 1 second, starting from the identity, to reach this placement. In Pinocchio, 6D velocities are represented as `pin.Motion` objects, and can be mapped to 6D arrays.
```
nu = pin.log(M1)
print(nu.vector)
```
We will not need much of spatial algebra in this tutorial. See the classes `pin.Motion`, `pin.Force`, and `pin.Inertia` for more details.
### Distances between frames
The `log` operator can be used to define a distance between placements. The norm of the log is positive, and null only if the placement is the identity. Consequently, the norm of `log(M1.inverse() * M2)` is a positive scalar that is null only if `M1 == M2`.
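As a quick illustration, this distance can be evaluated on two random placements (a small sketch using the operations introduced above):
```
# Distance between two placements as the norm of log(Ma^-1 * Mb).
Ma = pin.SE3.Random()
Mb = pin.SE3.Random()
print(norm(pin.log(Ma.inverse() * Mb).vector))   # strictly positive for distinct placements
print(norm(pin.log(Ma.inverse() * Ma).vector))   # ~0 when the two placements coincide
```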

Following the same pattern as before, define a `Cost6d` class penalizing the distance between a frame attached to the robot and a fixed reference frame.
```
%do_not_load -r 47-72 costs.py
```
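Again for comparison, one possible `Cost6d` is sketched below (not necessarily the reference in `costs.py`); the default frame and the target placement `Mtarget` are assumptions.
```
# One possible Cost6d (a sketch; the reference solution in costs.py may differ).
class Cost6d:
    def __init__(self, rmodel, rdata, frameIndex=None, Mtarget=None, viz=None):
        self.rmodel = rmodel
        self.rdata = rdata
        self.frameIndex = frameIndex if frameIndex is not None else rmodel.getFrameId('gripper_left_fingertip_1_link')
        self.Mtarget = Mtarget if Mtarget is not None else pin.SE3(np.eye(3), np.array([.5, .1, .3]))
        self.viz = viz
    def calc(self, q):
        # 6D residual: log of the relative placement between the target and the frame.
        pin.framesForwardKinematics(self.rmodel, self.rdata, q)
        oMf = self.rdata.oMf[self.frameIndex]
        self.residual = pin.log(self.Mtarget.inverse() * oMf).vector
        return sum(self.residual ** 2)
    def callback(self, q):
        if self.viz is None:
            return
        self.viz.display(q)
        time.sleep(1e-2)
```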
<div class="alert alert-block alert-info">
<img src="recap.png" title="Recap"/>
<h3>Recap of the main syntax elements exposed in this section</h3>
<ul>
<li>SE3 logarithm is implemented by <code>pin.log</code>, returns a <code>pin.Motion</code> class that can be converted into a vector with <code>pin.log(M).vector</code>.
</ul>
</div>
## 5. Redundancy and introduction of a posture cost
We will now run the optimizer with the two cost functions defined above, and add a posture cost to regularize the rank-deficient hessian.
### Running the optimizer
We will use the BFGS optimizer from SciPy. It is a quasi-Newton algorithm, which means that it only needs first-order derivatives (while achieving super-linear convergence). Even better, it automatically approximates the derivatives by finite differences if you don't provide them.
```
viz.viewer.jupyter_cell()
cost = Cost3d(rmodel, rdata, viz=viz)
qguess = robot.q0.copy()
qopt = fmin_bfgs(cost.calc, qguess, callback=cost.callback)
viz.display(qopt)
```
### Redundancy
The arm of the robot Talos, which we are using by default in this notebook, has 6 degrees of freedom (plus one for the gripper). When using the 3D cost function, there is a continuum of solutions, as the kinematics are redundant for achieving a pointing task. You can obtain different optima by changing the initial guess. Each new run with a random initial guess gives you a new optimum.
```
qguess = pin.randomConfiguration(rmodel)
qopt = fmin_bfgs(cost.calc, qguess, callback=cost.callback)
viz.display(qopt)
```
We will now add a small regularization to the cost, by optimizing a full-rank term on the robot posture, to make the solution unique independently of the robot kinematics and the considered task cost.

Introduce a new cost function for penalizing the distance of the robot posture to a reference posture, for example `robot.q0`.
```
%do_not_load -r 82-95 costs.py
```
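One possible posture cost is sketched below (not necessarily the reference in `costs.py`); the default reference posture `pin.neutral(rmodel)` is an assumption, and you can pass `robot.q0` instead.
```
# One possible CostPosture (a sketch; the reference solution in costs.py may differ).
class CostPosture:
    def __init__(self, rmodel, rdata, qref=None, viz=None):
        self.rmodel = rmodel
        self.rdata = rdata
        self.qref = qref if qref is not None else pin.neutral(rmodel)
        self.viz = viz
    def calc(self, q):
        # Full-rank quadratic penalty on the distance to the reference posture.
        self.residual = q - self.qref
        return sum(self.residual ** 2)
    def callback(self, q):
        pass
```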
### Optimize a sum of cost
Now we can define an ad-hoc cost that sums both costs.
```
class SumOfCost:
def __init__(self, costs, weights):
self.costs = costs
self.weights = np.array(weights)
def calc(self, q):
return sum(self.weights * [cost.calc(q) for cost in self.costs])
mycost = SumOfCost([Cost3d(rmodel, rdata), CostPosture(rmodel, rdata)], [1, 1e-3])
```
And we optimize this new cost.
```
fmin_bfgs(mycost.calc, qguess)
```
<div class="alert alert-block alert-info">
<img src="recap.png" title="Recap"/>
<h3>Recap of the main syntax elements exposed in this section</h3>
<ul>
<li>The BFGS solver is called by <code>fmin_bfgs(cost.calc, qguess, callback=cost.callback)</code>.
</ul>
</div>
## 6. Gravity cost: introducing the dynamic model
The posture cost is a nice regularization but requires a reference posture that may be hard to select. We will now define a cost to minimize the gravity torque, as an alternative regularization.
### Dynamics in Pinocchio
The whole-body dynamics equation can be written as:
$$M(q) a_q + b(q,v_q) = \tau_q $$
where $q,v_q,a_q$ are the position, velocity and acceleration in the configuration space, $M(q)$ is the configuration-space inertia matrix, of size `robot.model.nv`x`robot.model.nv`, $b(q,v_q)$ gathers all the drift terms (Coriolis, centrifugal, gravity) and $\tau_q$ are the joint torques. We write $v_q$ and $a_q$ because in general $q$ (of size `robot.model.nq`) does not have the same size as its derivatives, although they correspond to $\dot{q}$ and $\ddot{q}$ for now.
### Computing the gravity term
This equation corresponds to the inverse dynamics. We can evaluate parts of it or the entire equation, as we will see next. Let's start with a simple case.
The gravity term corresponds to the torques when the robot has no velocity nor acceleration:
$$g(q) = b(q,v_q=0) = invdyn(q,v_q=0,a_q=0)$$
In Pinocchio, it can be directly computed with:
```
g = pin.computeGeneralizedGravity(rmodel, rdata, q)
```

Define a new cost function that computes the squared norm of the gravity.
```
%do_not_load -r 107-117 costs.py
```
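A possible gravity cost is sketched below (the reference in `costs.py` may differ).
```
# One possible gravity cost (a sketch; the reference solution in costs.py may differ).
class CostGravity:
    def __init__(self, rmodel, rdata, viz=None):
        self.rmodel = rmodel
        self.rdata = rdata
        self.viz = viz
    def calc(self, q):
        # Squared norm of the generalized gravity torque g(q).
        g = pin.computeGeneralizedGravity(self.rmodel, self.rdata, q)
        return sum(g ** 2)
    def callback(self, q):
        pass
```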
<div class="alert alert-block alert-info">
<img src="recap.png" title="Recap"/>
<h3>Recap of the main syntax elements exposed in this section</h3>
<ul>
<li>The gravity torque can be computed by <code>pin.computeGeneralizedGravity(rmodel, rdata, q)</code>.
</ul>
</div>
## 7. Weighted gravity cost: RNEA and ABA recursive algorithms
Minimizing the gravity cost often brings the robot to singular configurations. A better behavior might be obtained by a variation of this cost involving the generalized inertia matrix.
### Recursive algorithms in Pinocchio
The 3 most efficient algorithms to evaluate the dynamic equations are implemented in Pinocchio:
- the recursive Newton-Euler algorithm (RNEA) computes the inverse dynamics $\tau_q = invdyn(q,v_q,a_q)$
- the articulated rigid body algorithm (ABA) computes the direct dynamics $a_q = dirdyn(q,v_q,\tau_q)$
- the composite rigid body algorithm (CRBA) computes the mass matrix.
The 3 algorithms can be directly called by their names in Pinocchio.
```
vq = np.random.rand(rmodel.nv) * 2 - 1
aq = np.random.rand(rmodel.nv) * 2 - 1
tauq = pin.rnea(rmodel, rdata, q, vq, aq)
aq2 = pin.aba(rmodel, rdata, q, vq, tauq)
assert norm(aq - aq2) < 1e-6
M = pin.crba(rmodel, rdata, q)
```
The gravity torque $g(q)=b(q,0)$ and the dynamic drift $b(q,v_q)$ can also be computed by a truncated RNEA implementation.
```
b = pin.nle(rmodel, rdata, q, vq)
assert norm(M @ aq + b - tauq) < 1e-6
```
### Weighted gravity norm
A better posture regularization can be obtained by taking the norm of the gravity weighted by the inertia:
$$ l(q) = g(q)^T M(q)^{-1} g(q)$$
We can directly evaluate this expression by computing and inverting $M$. Yet it is more efficient to recognize that $M(q)^{-1} g(q) = -dirdyn(q,0,0)$, which can be evaluated by:
```
aq0 = -pin.aba(rmodel, rdata, q, np.zeros(rmodel.nv), np.zeros(rmodel.nv))
```

Implement a new cost function for this weighted gravity, and run it as the regularization of a reaching task.
```
%do_not_load -r 125-138 costs.py
```
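A possible weighted-gravity cost is sketched below (the reference in `costs.py` may differ); it reuses the ABA trick above to avoid inverting $M$.
```
# One possible weighted-gravity cost (a sketch; the reference in costs.py may differ).
class CostWeightedGravity:
    def __init__(self, rmodel, rdata, viz=None):
        self.rmodel = rmodel
        self.rdata = rdata
        self.viz = viz
    def calc(self, q):
        # l(q) = g(q)^T M(q)^-1 g(q), with M^-1 g obtained from ABA at zero velocity and torque.
        g = pin.computeGeneralizedGravity(self.rmodel, self.rdata, q)
        aq0 = -pin.aba(self.rmodel, self.rdata, q, np.zeros(self.rmodel.nv), np.zeros(self.rmodel.nv))
        return g @ aq0
    def callback(self, q):
        pass
```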
<div class="alert alert-block alert-info">
<img src="recap.png" title="Recap"/>
<h3>Recap of the main syntax elements exposed in this section</h3>
<ul>
<li> Inverse dynamics: <code>tauq = pin.rnea(rmodel, rdata, q, vq, aq)</code>
<li>Direct dynamics: <code>aq = pin.aba(rmodel, rdata, q, vq, tauq)</code>
<li> Inertia matrix: <code>M = pin.crba(rmodel, rdata, q)</code>
<li> Dynamics drift: <code>b = pin.nle(rmodel, rdata, q, vq)</code>
</ul>
</div>
## 8. Working with a free floating robot
We now apply the previous mathematical program to a more complex robot with a floating basis.
### Loading Solo or Talos
You might want to use either the quadruped robot Solo or the humanoid robot Talos for this one.
```
# robot = robex.loadTalos()
robot = robex.loadSolo()
viz = Viewer(robot.model, robot.collision_model, robot.visual_model)
viz.initViewer(loadModel=True)
hasattr(viz.viewer, 'jupyter_cell') and viz.viewer.jupyter_cell()
viz.display(robot.q0)
vizutils.addViewerBox(viz, 'world/box', .02, .0324, .0648, [1., .2, .2, .5])
vizutils.addViewerSphere(viz, 'world/ball', .04, [.2, .2, 1., .5])
vizutils.applyViewerConfiguration(viz, 'world/box', [0.5, -.2, .2, 1, 0, 0, 0])
vizutils.applyViewerConfiguration(viz, 'world/ball', [0.5, .2, .2, 1, 0, 0, 0])
rmodel = robot.model
rdata = rmodel.createData()
```
### Oh! no, my configuration space is now a Lie group!
In Pinocchio, the floating basis is represented by default as a free-flyer joint. Free-flyer joints have 6 degrees of freedom, but are represented by 7 scalars: the position of the basis center in the world frame, and the orientation of the basis in the world frame stored as a quaternion. This implies that the size of the configuration vector $q$ is one more than the size of the configuration velocity, acceleration or torque: `robot.model.nq == robot.model.nv + 1`.
```
print(rmodel.joints[1])
assert rmodel.nq == rmodel.nv + 1
```
Formally, we say that the configuration $q$ now lives in a Lie group $\mathcal{Q}$, while $v_q$ lives in the tangent space of this group, the Lie algebra $T_q\mathcal{Q}$.
In practice, it means that sampling a random $q$ as if it were a plain vector would not work: you must either use the `pin.randomConfiguration` introduced earlier, or normalize the configuration using `pin.normalize`.
```
q = np.random.rand(rmodel.nq) # Not a valid configuration, because the quaternion is not normalized.
q = pin.normalize(rmodel, q) # Now, it is normalized
# Or better, call directly randomConfiguration
q = pin.randomConfiguration(rmodel) # q is normalized
viz.display(q)
```
Similarly, you cannot directly sum a configuration with a velocity: they don't have the same size. Rather, you should integrate the velocity using `pin.integrate`.
```
vq = np.random.rand(rmodel.nv) * 2 - 1 # sample a random velocity between -1 and 1.
try:
q += vq
except:
print('!!! ERROR')
print('As expected, this raises an error because q and vq do not have the same dimension.')
q = pin.integrate(rmodel, q, vq)
```
The reciprocal operation is `pin.difference(rmodel, q1, q2)`: it returns the velocity `vq` so that $q_1 \oplus v_q = q_2$ (with $\oplus$ the integration in $\mathcal{Q}$).
```
q1 = pin.randomConfiguration(rmodel)
q2 = pin.randomConfiguration(rmodel)
vq = pin.difference(rmodel, q1, q2)
print(q2 - pin.integrate(rmodel, q1, vq))
```
Depending on the random samples `q1`,`q2`, the print might be 0, or something different on the quaternion part ... again, because the vector operation `q2 - q1` is not valid in $\mathcal{Q}$.
### Consequence on optimization
So, can we directly use the mathematical program that we defined above with our free-basis robot? Yes and no.
Let's see that in practice before answering in detail. Just optimize your latest mathematical program with the new robot. If you want the normalization to significantly impact the display, choose a target that is far away from the robot.
**Before running the next cell**, try to guess how the result will be displayed.

```
mycost = SumOfCost([Cost3d(rmodel, rdata, ptarget = np.array([2, 2, -1]), viz=viz), CostPosture(rmodel, rdata)], [1, 1e-3])
qopt = fmin_bfgs(mycost.calc, robot.q0, callback=mycost.costs[0].callback)
```
So, what happened?
All the cost functions that we defined are valid when `q` is a properly normalized configuration. **BUT** `fmin_bfgs` is not aware of the normalization constraint. The solver modifies the initial `q` guess into a new `q` vector that is not normalized, hence not a valid configuration.
To prevent that, two solutions are possible:
1. a cost can be added to penalize the solver when choosing a new `q` that is not normalized: `cost(q) = (1 - sum(q[3:7] ** 2)) ** 2` (or, more generically, `cost(q) = sum((q - pin.normalize(rmodel, q)) ** 2)`).
2. or the vector `q` chosen by the solver should first be normalized when entering `cost.calc`, so that the cost is evaluated on a proper configuration.

Redefine your sum-of-cost class by either adding an extra term to the sum to penalize the normalization, or by first normalizing the decision vector `q`.
```
%do_not_load -r 2-23 solutions.py
```
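For comparison, here is a sketch of the second option, normalizing `q` before evaluating the costs (the reference in `solutions.py` may implement either option and may differ); the class name is ours.
```
# Sketch of option 2: normalize q before evaluating the costs
# (the reference in solutions.py may differ).
class SumOfCostNormalized:
    def __init__(self, costs, weights, rmodel):
        self.costs = costs
        self.weights = np.array(weights)
        self.rmodel = rmodel
    def calc(self, q):
        q = pin.normalize(self.rmodel, q)   # project onto a valid configuration
        return sum(self.weights * [cost.calc(q) for cost in self.costs])
```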
<div class="alert alert-block alert-info">
<img src="recap.png" title="Recap"/>
<h3>Recap of the main syntax elements exposed in this section</h3>
<ul>
<li><code>pin.integrate</code> adds a configuration with a velocity.</li>
<li><code>pin.difference</code> makes the difference between two configurations.</li>
<li><code>pin.normalize</code> projects any vector of size NQ onto a properly normalized configuration.</li>
</ul>
</div>
```
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
base_df = pd.read_stata('cross_border_replication_files/cross_border_main.dta')
base_df
```
`keep if border==1`
`collapse (sum) homicide homdguns nongunhom suicdguns , by(NCAseg18 year)`
```
df = (base_df[base_df['border']==1] #filter
.groupby(['NCAseg18', 'year']).sum()[['homicide', 'homdguns', 'nongunhom', 'suicdguns']] #collapsing by NCA and year
.reset_index())
df
df.to_csv('nca_deaths.csv', index=False)
```
Note: Segment NCA is an indicator that equals 1 if a municipio lies adjacent to TX, AZ, and NM (the “non-CA segment”), as opposed to the “CA segment” of the border.
`twoway `
`line homicide year if NCAseg18==0, lcolor(blue) lpattern(-.-) lwidth(thick) ||`
`line homicide year if NCAseg18==1, lcolor(cranberry) lpattern(.) lwidth(thick)`
`graphregion(color(white)) legend(region(color(white)) col(3) order(1 "CA" 2 "AZ, NM, TX"))`
`xline(2004, lcolor(gray) lpattern(-)) ylabel(, angle(0)) scale(1) ylabel(0(100)700)`
`xtitle("Year") ytitle("Count") aspect(1.0) title("All Homicides", margin(medsmall)) name(homds, replace);`
```
def formatting(title):
plt.xticks(rotation=90)
plt.ylim(0,700)
plt.title(title)
plt.legend()
plt.axvline('2004-01-01', color='black', linestyle='--')
# NCAseg18 == 1 marks the segment adjacent to TX, AZ, and NM (see the note above)
sns.lineplot('year', 'homicide', data=df[df['NCAseg18']==0], label='Adjacent to CA')
sns.lineplot('year', 'homicide', data=df[df['NCAseg18']==1], label='Adjacent to TX, AZ, or NM')
formatting('All Homicides')
```
`twoway`
`line homdguns year if NCAseg18==0, lcolor(blue) lpattern(-.-) lwidth(thick) ||`
`line homdguns year if NCAseg18==1, lcolor(cranberry) lpattern(.) lwidth(thick)`
`graphregion(color(white)) legend(region(color(white)) col(3) order(1 "CA" 2 "AZ, NM, TX"))`
`xline(2004, lcolor(gray) lpattern(-)) ylabel(, angle(0)) scale(1) ylabel(0(100)700)`
`xtitle("Year") ytitle("Count") aspect(1.0) title("Gun-related Homicides", margin(medsmall)) name(homdguns, replace);`
```
sns.lineplot('year', 'homdguns', data=df[df['NCAseg18']==0], label='Adjacent to CA')
sns.lineplot('year', 'homdguns', data=df[df['NCAseg18']==1], label='Adjacent to TX, AZ, or NM')
formatting('Gun-related Homicides')
```
`twoway`
`line nongunhom year if NCAseg18==0, lcolor(blue) lpattern(-.-) lwidth(thick) ||`
`line nongunhom year if NCAseg18==1, lcolor(cranberry) lpattern(.) lwidth(thick)`
`graphregion(color(white)) legend(region(color(white)) col(3) order(1 "CA" 2 "AZ, NM, TX"))`
`xline(2004, lcolor(gray) lpattern(-)) ylabel(, angle(0)) scale(1) ylabel(0(100)700)`
`xtitle("Year") ytitle("Count") aspect(1.0) title("Non-gun Homicides", margin(medsmall))name(nongunhom, replace);`
```
sns.lineplot('year', 'nongunhom', data=df[df['NCAseg18']==0], label='Adjacent to CA')
sns.lineplot('year', 'nongunhom', data=df[df['NCAseg18']==1], label='Adjacent to TX, AZ, or NM')
formatting('Non-gun Homicides')
```
`twoway`
`line suicdguns year if NCAseg18==0, lcolor(blue) lpattern(-.-) lwidth(thick) ||`
`line suicdguns year if NCAseg18==1, lcolor(cranberry) lpattern(.) lwidth(thick)`
`graphregion(color(white)) legend(region(color(white)) col(3) order(1 "CA" 2 "AZ, NM, TX"))`
`xline(2004, lcolor(gray) lpattern(-)) ylabel(, angle(0)) scale(1) ylabel(0(100)700)`
`xtitle("Year") ytitle("Count") aspect(1.0) title("Gun-related Suicides", margin(medsmall)) name(suicdguns, replace);`
```
sns.lineplot('year', 'suicdguns', data=df[df['NCAseg18']==0], label='Adjacent to CA')
sns.lineplot('year', 'suicdguns', data=df[df['NCAseg18']==1], label='Adjacent to TX, AZ, or NM')
formatting('Gun-related Suicides')
```
# Classify different data sets
### Basic includes
```
# Using pandas to load the csv file
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from keras import models
from keras import layers
from keras import callbacks
from keras.utils import to_categorical
# reuters and fashion mnist data sets from keras
from keras.datasets import reuters
from keras.datasets import fashion_mnist
# needed to preprocess text
from keras.preprocessing.text import Tokenizer
```
### Classify the Fashion Mnist
---
```
(fashion_train_data, fashion_train_labels), (fashion_test_data, fashion_test_labels) = fashion_mnist.load_data()
#keep image 1 without modifying
test_index = 1
test_image = fashion_train_data[test_index]
#classify 60000 images of 28*28
print("Size of train inputs: ",fashion_train_data.shape)
print("Size of test inputs: ",fashion_test_data.shape)
#number of labels to clasify = 10
print("Classify into ",len(set(fashion_train_labels))," labels : ",set(fashion_train_labels))
print("""Labels:
0 T-shirt/top
1 Trouser
2 Pullover
3 Dress
4 Coat
5 Sandal
6 Shirt
7 Sneaker
8 Bag
9 Ankle boot """)
#normalize
#flatten the images from 60000*28*28 to 60000*784
#standardize values: divide by 255 to get values between 0 and 1
fashion_train_data = fashion_train_data.reshape((60000, 28 * 28))
fashion_train_data = fashion_train_data.astype('float32') / 255
fashion_test_data = fashion_test_data.reshape((10000, 28 * 28))
fashion_test_data = fashion_test_data.astype('float32') / 255
#one-hot encode the labels
fashion_train_labels = to_categorical(fashion_train_labels)
fashion_test_labels = to_categorical(fashion_test_labels)
#generate a validation set with 20% of the data
# fashion_validation_data = fashion_train_data[:int(60000*0.2)] #(without int() the result is 12000.0)
# fashion_validation_labels = fashion_train_labels[:int(60000*0.2)]
#remove the validation set from the train set
# fashion_train_data = fashion_train_data[int(60000*0.8):]
# fashion_train_labels = fashion_train_labels[int(60000*0.8):]
#show image 1
test_index = 1
plt.title("Label: " + str(fashion_train_labels[test_index]))
plt.imshow(test_image, cmap="gray")
# The keras.models.Sequential class is a wrapper for the neural network model that treats
# the network as a sequence of layers
network = models.Sequential()
# Dense layers: fully connected layers
network.add(layers.Dense(360, activation='relu', input_shape=(28 * 28,)))
network.add(layers.Dropout(0.1))
network.add(layers.Dense(192, activation='relu'))
network.add(layers.Dropout(0.1))
network.add(layers.Dense(96, activation='relu'))
network.add(layers.Dropout(0.1))
network.add(layers.Dense(32, activation='relu'))
network.add(layers.Dropout(0.1))
network.add(layers.Dense(10, activation='softmax'))
early_stop = callbacks.EarlyStopping(monitor='val_loss',patience=5)
network.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['categorical_accuracy'])
#sgd = stochastic gradient descent
#loss = error function
network.summary()
#two ways to do the validation
#validation_data=(fashion_validation_data,fashion_validation_labels)
#validation_split=0.2,
history = network.fit(fashion_train_data, fashion_train_labels, batch_size = 128, validation_split=0.2, epochs=50, verbose=2)
test_loss, test_acc = network.evaluate(fashion_test_data, fashion_test_labels)
print("test loss: ", test_loss, "test accuracy: ", test_acc)
# Plot of the validation and training loss
# This dictionary stores the validation and accuracy of the model throughout the epochs
history_dict = history.history
# The history values are split in different lists for ease of plotting
print(history_dict.keys())
acc = history_dict['categorical_accuracy']
val_acc = history_dict['val_categorical_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
```
#### TO DO: Preprocess the data
1. Normalize the input data set
2. Perform one hot encoding
3. Create a train, test, and validation set
#### TO DO: Define and train a network, then plot the accuracy of the training, validation, and testing
1. Use a validation set
2. Propose and train a network
3. Print the history of the training
4. Evaluate with a test set
## Classifying newswires
---
Build a network to classify Reuters newswires into 46 different mutually-exclusive topics.
### Load and review the data
```
(reuters_train_data, reuters_train_labels), (reuters_test_data, reuters_test_labels) = reuters.load_data(num_words=10000)
print(reuters_train_data.shape)
print(reuters_train_labels.shape)
print(reuters_train_data[1])
print(reuters_train_labels[5])
print(set(reuters_train_labels))
```
Load the word index to decode the train data.
```
word_index = reuters.get_word_index()
reverse_index = dict([(value+3, key) for (key, value) in word_index.items()])
reverse_index[0] = "<PAD>"
reverse_index[1] = "<START>"
reverse_index[2] = "<UNKNOWN>" # unknown
reverse_index[3] = "<UNUSED>"
decoded_review = ' '.join([reverse_index.get(i,'?') for i in reuters_train_data[0]])
print(decoded_review)
#normalize
print(reuters_test_data[0])
# Turning the output into vector mode, each of length 10000
tokenizer = Tokenizer(num_words=10000)
reuters_train_data_token = tokenizer.sequences_to_matrix(reuters_train_data, mode='binary')
reuters_test_data_token = tokenizer.sequences_to_matrix(reuters_test_data, mode='binary')
# print("-----")
# print(reuters_test_data_token[0])
# print("-----")
print(reuters_train_data_token.shape)
print(reuters_test_data_token.shape)
# One-hot encoding of the labels
num_classes = 46
reuters_one_hot_train_labels = to_categorical(reuters_train_labels, num_classes)
reuters_one_hot_test_labels = to_categorical(reuters_test_labels, num_classes)
print(reuters_one_hot_train_labels.shape)
print(reuters_one_hot_test_labels.shape)
# the network as a sequence of layers
network2 = models.Sequential()
# Dense layers: fully connected layers
network2.add(layers.Dense(3600, activation='relu', input_dim=(10000)))
network2.add(layers.Dropout(0.1))
network2.add(layers.Dense(1600, activation='relu'))
network2.add(layers.Dropout(0.1))
network2.add(layers.Dense(720, activation='relu'))
network2.add(layers.Dropout(0.1))
network2.add(layers.Dense(128, activation='relu'))
network2.add(layers.Dropout(0.1))
network2.add(layers.Dense(46, activation='softmax'))
early_stop = callbacks.EarlyStopping(monitor='val_loss',patience=5)
network2.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['categorical_accuracy'])
network2.summary()
#validation_split=0.2,
history = network2.fit(reuters_train_data_token, reuters_one_hot_train_labels, batch_size = 128, validation_split=0.15, epochs=50, verbose=2)
test_loss, test_acc = network2.evaluate(reuters_test_data_token, reuters_one_hot_test_labels)
print("test loss: ", test_loss, "test accuracy: ", test_acc)
history_dict = history.history
# The history values are split in different lists for ease of plotting
acc = history_dict['categorical_accuracy']
val_acc = history_dict['val_categorical_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
```
#### TO DO: Preprocess the data
1. Normalize the input data set
2. Perform one hot encoding
3. Create a train, test, and validation set
#### TO DO: Define and train a network, then plot the accuracy of the training, validation, and testing
1. Use a validation set
2. Propose and train a network
3. Print the history of the training
4. Evaluate with a test set
## Predicting Student Admissions
---
Predict student admissions based on three pieces of data:
- GRE Scores
- GPA Scores
- Class rank
### Load and visualize the data
```
student_data = pd.read_csv("student_data.csv")
print(student_data)
```
Plot of the GRE and the GPA from the data.
```
X = np.array(student_data[["gre","gpa"]])
y = np.array(student_data["admit"])
admitted = X[np.argwhere(y==1)]
rejected = X[np.argwhere(y==0)]
plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'red', edgecolor = 'k')
plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'cyan', edgecolor = 'k')
plt.xlabel('Test (GRE)')
plt.ylabel('Grades (GPA)')
plt.show()
```
Plot of the data by class rank.
```
f, plots = plt.subplots(2, 2, figsize=(20,10))
plots = [plot for sublist in plots for plot in sublist]
for idx, plot in enumerate(plots):
data_rank = student_data[student_data["rank"]==idx+1]
plot.set_title("Rank " + str(idx+1))
X = np.array(data_rank[["gre","gpa"]])
y = np.array(data_rank["admit"])
admitted = X[np.argwhere(y==1)]
rejected = X[np.argwhere(y==0)]
plot.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'red', edgecolor = 'k')
plot.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'cyan', edgecolor = 'k')
plot.set_xlabel('Test (GRE)')
plot.set_ylabel('Grades (GPA)')
#convert the data to np arrays
gpa = np.array(student_data["gpa"])
gre = np.array(student_data["gre"])
#get the mean and std deviation of gpa and gre
gpa_mean = np.average(gpa)
gre_mean = np.average(gre)
gpa_std = np.std(gpa)
gre_std = np.std(gre)
#check the values
print("GPA mean: ",gpa_mean, " GRE mean: ",gre_mean)
print("GPA std: ",gpa_std, " GRE std: ",gre_std)
#standarize the input with the mean and std
student_data["gpa"]-=gpa_mean
student_data["gpa"]/=gpa_std
student_data["gre"]-=gre_mean
student_data["gre"]/=gpa_mean
#make the rank categorical
rank = np.array(student_data["rank"]-1) #-1 makes them 0-3 instead of 1-4
rank = to_categorical(rank)
#create array for the data
data = np.zeros((student_data.shape[0],6))
data[:,0] = np.array(student_data["gre"])
data[:,1] = np.array(student_data["gpa"])
data[:,2:6] = rank
#get the labels from the admitted value
student_labels = np.array(student_data["admit"])
#create a training and test set
#there are 397 complete students,
#test = 80, training = 317
student_train_data = data [:317]
student_train_labels = to_categorical(student_labels[:317])
student_test_data = data[317:]
student_test_labels = to_categorical(student_labels[317:])
print (student_train_data.shape)
print (student_train_labels.shape)
print (student_test_data.shape)
print (student_test_labels.shape)
#create the model
# the network as a sequence of layers
network3 = models.Sequential()
# Dense layers: fully connected layers
network3.add(layers.Dense(200, activation='relu', input_dim=(6)))
network3.add(layers.Dropout(0.1))
network3.add(layers.Dense(90, activation='relu'))
network3.add(layers.Dropout(0.1))
network3.add(layers.Dense(30, activation='relu'))
network3.add(layers.Dropout(0.1))
network3.add(layers.Dense(2, activation='softmax')) #softmax output for 2-class classification with categorical_crossentropy
early_stop = callbacks.EarlyStopping(monitor='val_loss',patience=5)
network3.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['categorical_accuracy'])
network3.summary()
#validation_split=0.2,
history = network3.fit(student_train_data ,student_train_labels, validation_split=0.15, epochs=50, verbose=2)
test_loss, test_acc = network3.evaluate(student_test_data, student_test_labels)
print("test loss: ", test_loss, "test accuracy: ", test_acc)
```
#### TO DO: Preprocess the data
1. Normalize the input data set
2. Perform one hot encoding
3. Create a train, test, and validation set
#### TO DO: Define and train a network, then plot the accuracy of the training, validation, and testing
1. Use a validation set
2. Propose and train a network
3. Print the history of the training
4. Evaluate with a test set
|
github_jupyter
|
# Using pandas to load the csv file
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from keras import models
from keras import layers
from keras import callbacks
from keras.utils import to_categorical
# reuters and fashin mnist data set from keras
from keras.datasets import reuters
from keras.datasets import fashion_mnist
# needed to preprocess text
from keras.preprocessing.text import Tokenizer
(fashion_train_data, fashion_train_labels), (fashion_test_data, fashion_test_labels) = fashion_mnist.load_data()
#keep image 1 without modifying
test_index = 1
test_image = fashion_train_data[test_index]
#classify 6000 images of 28*28
print("Size of train inputs: ",fashion_train_data.shape)
print("Size of test inputs: ",fashion_test_data.shape)
#number of labels to clasify = 10
print("Classify into ",len(set(fashion_train_labels))," labels : ",set(fashion_train_labels))
print("""Labels:
0 T-shirt/top
1 Trouser
2 Pullover
3 Dress
4 Coat
5 Sandal
6 Shirt
7 Sneaker
8 Bag
9 Ankle boot """)
#normalize
#aplastar la imagen de 6000*28*28 a 60000*728
#estandarizar valores dividir entre 255 para obtener valores entre 0 y 1
fashion_train_data = fashion_train_data.reshape((60000, 28 * 28))
fashion_train_data = fashion_train_data.astype('float32') / 255
fashion_test_data = fashion_test_data.reshape((10000, 28 * 28))
fashion_test_data = fashion_test_data.astype('float32') / 255
#one-hot encode the labels
fashion_train_labels = to_categorical(fashion_train_labels)
fashion_test_labels = to_categorical(fashion_test_labels)
#generate a validation set with 20% of the data (alternative to validation_split below)
# fashion_validation_data = fashion_train_data[:int(60000*0.2)] #(without int() the result is 12000.0)
# fashion_validation_labels = fashion_train_labels[:int(60000*0.2)]
#remove the validation set from the train set
# fashion_train_data = fashion_train_data[int(60000*0.2):]
# fashion_train_labels = fashion_train_labels[int(60000*0.2):]
#show image 1
test_index = 1
plt.title("Label: " + str(fashion_train_labels[test_index]))
plt.imshow(test_image, cmap="gray")
# The keras.models.Sequential class is a wrapper for the neural network model that treats
# the network as a sequence of layers
network = models.Sequential()
# Dense layers: fully connected layers
network.add(layers.Dense(360, activation='relu', input_shape=(28 * 28,)))
network.add(layers.Dropout(0.1))
network.add(layers.Dense(192, activation='relu'))
network.add(layers.Dropout(0.1))
network.add(layers.Dense(96, activation='relu'))
network.add(layers.Dropout(0.1))
network.add(layers.Dense(32, activation='relu'))
network.add(layers.Dropout(0.1))
network.add(layers.Dense(10, activation='softmax'))
early_stop = callbacks.EarlyStopping(monitor='val_loss',patience=5)
network.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['categorical_accuracy'])
#sgd = stochastic gradient descent
#loss = the error (cost) function
network.summary()
#two ways to do the validation
#validation_data=(fashion_validation_data,fashion_validation_labels)
#validation_split=0.2,
history = network.fit(fashion_train_data, fashion_train_labels, batch_size = 128, validation_split=0.2, epochs=50, verbose=2)
test_loss, test_acc = network.evaluate(fashion_test_data, fashion_test_labels)
print("test loss: ", test_loss, "test accuracy: ", test_acc)
# Plot of the validation and training loss
# This dictionary stores the validation and accuracy of the model throughout the epochs
history_dict = history.history
# The history values are split in different lists for ease of plotting
print(history_dict.keys())
acc = history_dict['categorical_accuracy']
val_acc = history_dict['val_categorical_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
(reuters_train_data, reuters_train_labels), (reuters_test_data, reuters_test_labels) = reuters.load_data(num_words=10000)
print(reuters_train_data.shape)
print(reuters_train_labels.shape)
print(reuters_train_data[1])
print(reuters_train_labels[5])
print(set(reuters_train_labels))
word_index = reuters.get_word_index()
reverse_index = dict([(value+3, key) for (key, value) in word_index.items()])
reverse_index[0] = "<PAD>"
reverse_index[1] = "<START>"
reverse_index[2] = "<UNKNOWN>" # unknown
reverse_index[3] = "<UNUSED>"
decoded_review = ' '.join([reverse_index.get(i,'?') for i in reuters_train_data[0]])
print(decoded_review)
#normalize
print(reuters_test_data[0])
# Turning the output into vector mode, each of length 10000
tokenizer = Tokenizer(num_words=10000)
reuters_train_data_token = tokenizer.sequences_to_matrix(reuters_train_data, mode='binary')
reuters_test_data_token = tokenizer.sequences_to_matrix(reuters_test_data, mode='binary')
# print("-----")
# print(reuters_test_data_token[0])
# print("-----")
print(reuters_train_data_token.shape)
print(reuters_test_data_token.shape)
# One-hot encoding of the labels
num_classes = 46
reuters_one_hot_train_labels = to_categorical(reuters_train_labels, num_classes)
reuters_one_hot_test_labels = to_categorical(reuters_test_labels, num_classes)
print(reuters_one_hot_train_labels.shape)
print(reuters_one_hot_test_labels.shape)
# the network as a sequence of layers
network2 = models.Sequential()
# Dense layers: fully connected layers
network2.add(layers.Dense(3600, activation='relu', input_dim=(10000)))
network2.add(layers.Dropout(0.1))
network2.add(layers.Dense(1600, activation='relu'))
network2.add(layers.Dropout(0.1))
network2.add(layers.Dense(720, activation='relu'))
network2.add(layers.Dropout(0.1))
network2.add(layers.Dense(128, activation='relu'))
network2.add(layers.Dropout(0.1))
network2.add(layers.Dense(46, activation='softmax'))
early_stop = callbacks.EarlyStopping(monitor='val_loss',patience=5)
network2.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['categorical_accuracy'])
network2.summary()
#validation_split=0.2,
history = network2.fit(reuters_train_data_token, reuters_one_hot_train_labels, batch_size = 128, validation_split=0.15, epochs=50, verbose=2)
test_loss, test_acc = network2.evaluate(reuters_test_data_token, reuters_one_hot_test_labels)
print("test loss: ", test_loss, "test accuracy: ", test_acc)
history_dict = history.history
# The history values are split in different lists for ease of plotting
acc = history_dict['categorical_accuracy']
val_acc = history_dict['val_categorical_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
student_data = pd.read_csv("student_data.csv")
print(student_data)
X = np.array(student_data[["gre","gpa"]])
y = np.array(student_data["admit"])
admitted = X[np.argwhere(y==1)]
rejected = X[np.argwhere(y==0)]
plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'red', edgecolor = 'k')
plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'cyan', edgecolor = 'k')
plt.xlabel('Test (GRE)')
plt.ylabel('Grades (GPA)')
plt.show()
f, plots = plt.subplots(2, 2, figsize=(20,10))
plots = [plot for sublist in plots for plot in sublist]
for idx, plot in enumerate(plots):
data_rank = student_data[student_data["rank"]==idx+1]
plot.set_title("Rank " + str(idx+1))
X = np.array(data_rank[["gre","gpa"]])
y = np.array(data_rank["admit"])
admitted = X[np.argwhere(y==1)]
rejected = X[np.argwhere(y==0)]
plot.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'red', edgecolor = 'k')
plot.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'cyan', edgecolor = 'k')
plot.set_xlabel('Test (GRE)')
plot.set_ylabel('Grades (GPA)')
#convert the data to np arrays
gpa = np.array(student_data["gpa"])
gre = np.array(student_data["gre"])
#get the mean and std deviation of gpa and gre
gpa_mean = np.average(gpa)
gre_mean = np.average(gre)
gpa_std = np.std(gpa)
gre_std = np.std(gre)
#check the values
print("GPA mean: ",gpa_mean, " GRE mean: ",gre_mean)
print("GPA std: ",gpa_std, " GRE std: ",gre_std)
#standardize the input with the mean and std
student_data["gpa"]-=gpa_mean
student_data["gpa"]/=gpa_std
student_data["gre"]-=gre_mean
student_data["gre"]/=gre_std
#make the rank categorical
rank = np.array(student_data["rank"]-1) #-1 makes them 0-3 instead of 1-4
rank = to_categorical(rank)
#create array for the data
data = np.zeros((student_data.shape[0],6))
data[:,0] = np.array(student_data["gre"])
data[:,1] = np.array(student_data["gpa"])
data[:,2:6] = rank
#get the labels from the admitted value
student_labels = np.array(student_data["admit"])
#create a training and validation set
#there are 397 complete students,
#validation = 80, training =317
student_train_data = data [:317]
student_train_labels = to_categorical(student_labels[:317])
student_test_data = data[317:]
student_test_labels = to_categorical(student_labels[317:])
print (student_train_data.shape)
print (student_train_labels.shape)
print (student_test_data.shape)
print (student_test_labels.shape)
#create the model
# the network as a sequence of layers
network3 = models.Sequential()
# Dense layers: fully connected layers
network3.add(layers.Dense(200, activation='relu', input_dim=(6)))
network3.add(layers.Dropout(0.1))
network3.add(layers.Dense(90, activation='relu'))
network3.add(layers.Dropout(0.1))
network3.add(layers.Dense(30, activation='relu'))
network3.add(layers.Dropout(0.1))
network3.add(layers.Dense(2, activation='softmax')) # softmax (not relu) so the output is a probability distribution for categorical_crossentropy
early_stop = callbacks.EarlyStopping(monitor='val_loss',patience=5)
network3.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['categorical_accuracy'])
network3.summary()
#validation_split=0.2,
history = network3.fit(student_train_data ,student_train_labels, validation_split=0.15, epochs=50, verbose=2)
test_loss, test_acc = network3.evaluate(student_test_data, student_test_labels)
print("test loss: ", test_loss, "test accuracy: ", test_acc)
| 0.767429 | 0.901314 |
<a href="https://colab.research.google.com/github/mtoce/DS-Unit-2-Linear-Models/blob/master/module4-logistic-regression/214_assig_LS_DS.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Lambda School Data Science
*Unit 2, Sprint 1, Module 4*
---
# Logistic Regression
## Assignment 🌯
You'll use a [**dataset of 400+ burrito reviews**](https://srcole.github.io/100burritos/). How accurately can you predict whether a burrito is rated 'Great'?
> We have developed a 10-dimensional system for rating the burritos in San Diego. ... Generate models for what makes a burrito great and investigate correlations in its dimensions.
- [ ] Do train/validate/test split. Train on reviews from 2016 & earlier. Validate on 2017. Test on 2018 & later.
- [ ] Begin with baselines for classification.
- [ ] Use scikit-learn for logistic regression.
- [ ] Get your model's validation accuracy. (Multiple times if you try multiple iterations.)
- [ ] Get your model's test accuracy. (One time, at the end.)
- [ ] Commit your notebook to your fork of the GitHub repo.
## Stretch Goals
- [ ] Add your own stretch goal(s) !
- [ ] Make exploratory visualizations.
- [ ] Do one-hot encoding.
- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).
- [ ] Get and plot your coefficients.
- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html).
# Load Data and Pre-Cleaning
```
%%capture
import sys
import category_encoders as cat
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Linear-Models/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
# Load data downloaded from https://srcole.github.io/100burritos/
import pandas as pd
df = pd.read_csv(DATA_PATH+'burritos/burritos.csv')
# Derive binary classification target:
# We define a 'Great' burrito as having an
# overall rating of 4 or higher, on a 5 point scale.
# Drop unrated burritos.
df = df.dropna(subset=['overall'])
df['Great'] = df['overall'] >= 4
# Clean/combine the Burrito categories
df['Burrito'] = df['Burrito'].str.lower()
california = df['Burrito'].str.contains('california')
asada = df['Burrito'].str.contains('asada')
surf = df['Burrito'].str.contains('surf')
carnitas = df['Burrito'].str.contains('carnitas')
df.loc[california, 'Burrito'] = 'California'
df.loc[asada, 'Burrito'] = 'Asada'
df.loc[surf, 'Burrito'] = 'Surf & Turf'
df.loc[carnitas, 'Burrito'] = 'Carnitas'
df.loc[~california & ~asada & ~surf & ~carnitas, 'Burrito'] = 'Other'
# Drop some high cardinality categoricals
df = df.drop(columns=['Notes', 'Location', 'Reviewer', 'Address', 'URL', 'Neighborhood'])
# Drop some columns to prevent "leakage"
df = df.drop(columns=['Rec', 'overall'])
df_copy = df.copy()
df.head()
df.shape
#df.isnull().sum()
#df.dtypes
# let's figure out what columns to drop
# queso is a fully nan column. drop it.
df = df.drop(['Queso'], axis=1)
import numpy as np
df.shape
df.head()
```
# More Data Cleaning before Split
```
# look at mass and density columns that arent null
df[df['Mass (g)'].notnull()].head(10)
df.columns
# create lists to make data cleaning easier
mean_nan_replace = ['Mass (g)', 'Density (g/mL)', 'Length', 'Circum', 'Volume', 'Yelp', 'Google', 'Hunger', 'Temp', 'Meat', 'Fillings', 'Meat:filling', 'Uniformity', 'Salsa',
'Synergy', 'Wrap', 'Cost']
zero_nan_replace = ['Chips', 'Tortilla', 'Beef', 'Pico', 'Guac',
'Cheese', 'Fries', 'Sour cream', 'Pork', 'Chicken', 'Shrimp', 'Fish',
'Rice', 'Beans', 'Lettuce', 'Tomato', 'Bell peper', 'Carrots',
'Cabbage', 'Sauce', 'Salsa.1', 'Cilantro', 'Onion', 'Taquito',
'Pineapple', 'Ham', 'Chile relleno', 'Nopales', 'Lobster', 'Egg',
'Mushroom', 'Bacon', 'Sushi', 'Avocado', 'Corn', 'Zucchini', 'NonSD', 'Unreliable']
# replace all NaNs in measurement of burrito with means
df[mean_nan_replace] = df[mean_nan_replace].fillna(value=df.mean())
# Fill all NaNs in the burrito ingredient, Unreliable, and NonSD columns with 0
df[zero_nan_replace] = df[zero_nan_replace].fillna(value = 0)
df2 = pd.DataFrame()
# change all x's and X's in DF to 1's
df = df.replace({'x': 1, 'X': 1, 'yes': 1, 'Yes': 1, 'No': 0, 'no': 0, True: 1, False: 0, 'Other': 0})
```
# Train / Test / Validate Split
```
# we need to change date to a datetime object
df['Date'] = pd.to_datetime(df['Date'], infer_datetime_format=True)
# 3way split data
df['Date'] = df['Date'].dt.year  # extract the year so the comparisons below work
train = df[df['Date'] <= 2016]   # train on 2016 & earlier, per the assignment
test = df[(df['Date'] == 2018) | (df['Date'] == 2019)]
validate = df[df['Date'] == 2017]
print('train shape: ', train.shape)
print('test shape: ', test.shape)
print('validate shape: ', validate.shape)
```
# Baselines for Classification
```
# Define our target variable.
# We want to predict if a burrito is great so that column is our target
target = 'Great'
# lets get the baseline
baseline = df[target].value_counts(normalize=True)
burrito_list = ['Burrito_California', 'Burrito_Carnitas', 'Burrito_Asada', 'Burrito_0',
'Burrito_Surf & Turf']
features = ['Burrito', 'Date', 'Yelp', 'Google', 'Chips', 'Cost', 'Hunger',
'Mass (g)', 'Density (g/mL)', 'Length', 'Circum', 'Volume', 'Tortilla',
'Temp', 'Meat', 'Fillings', 'Meat:filling', 'Uniformity', 'Salsa',
'Wrap', 'Unreliable', 'NonSD', 'Beef', 'Pico', 'Guac',
'Cheese', 'Fries', 'Sour cream', 'Pork', 'Chicken', 'Shrimp', 'Fish',
'Rice', 'Beans', 'Lettuce', 'Tomato', 'Bell peper', 'Carrots',
'Cabbage', 'Sauce', 'Salsa.1', 'Cilantro', 'Onion', 'Taquito',
'Pineapple', 'Ham', 'Chile relleno', 'Nopales', 'Lobster', 'Egg',
'Mushroom', 'Bacon', 'Sushi', 'Avocado', 'Corn', 'Zucchini']
# define the variables we're using for the regression
X_train = train[features]
X_test = test[features]
X_val = validate[features]
y_train = train[target]
y_test = test[target]
y_val = validate[target]
```
# Logistic Regression
```
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Ridge
from sklearn.metrics import accuracy_score
import category_encoders as cat
# One-hot encode features and then run logistic regression
encoder = cat.OneHotEncoder(use_cat_names=True)
X_train = encoder.fit_transform(X_train)
X_val = encoder.transform(X_val)
X_test = encoder.transform(X_test)
# logistic regression
log_reg = LogisticRegression(solver='lbfgs')
log_reg.fit(X_train, y_train)
print('Validation Accuracy: ', log_reg.score(X_val, y_val))
# logistic regression on test data
y_pred = log_reg.predict(X_test)
X_test.shape
y_pred
print('Test Accuracy: ', log_reg.score(X_test, y_test))
accuracy_score(y_test, y_pred)
```
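One of the stretch goals asks for plotting the coefficients. A minimal sketch, reusing the fitted `log_reg` and the encoded `X_train` from above:
```
# Sketch: inspect and plot the logistic regression coefficients (stretch goal)
import matplotlib.pyplot as plt

coefficients = pd.Series(log_reg.coef_[0], index=X_train.columns)
coefficients.sort_values().plot.barh(figsize=(8, 12))
plt.title('Logistic regression coefficients')
plt.show()
```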
# SelectKBest Features
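This section was left empty; as a starting point, here is a hedged sketch of how `SelectKBest` could pick the strongest features before refitting the logistic regression. It reuses the encoded `X_train`/`X_val` and targets from above; `k=15` is an arbitrary choice.
```
# Sketch: univariate feature selection with SelectKBest (k=15 is arbitrary)
from sklearn.feature_selection import SelectKBest, f_classif

selector = SelectKBest(score_func=f_classif, k=15)
X_train_selected = selector.fit_transform(X_train, y_train)
X_val_selected = selector.transform(X_val)

# Which encoded columns were kept?
print(list(X_train.columns[selector.get_support()]))

# Refit logistic regression on the reduced feature set
log_reg_kbest = LogisticRegression(solver='lbfgs')
log_reg_kbest.fit(X_train_selected, y_train)
print('Validation Accuracy (KBest features): ', log_reg_kbest.score(X_val_selected, y_val))
```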
|
github_jupyter
|
%%capture
import sys
import category_encoders as cat
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Linear-Models/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
# Load data downloaded from https://srcole.github.io/100burritos/
import pandas as pd
df = pd.read_csv(DATA_PATH+'burritos/burritos.csv')
# Derive binary classification target:
# We define a 'Great' burrito as having an
# overall rating of 4 or higher, on a 5 point scale.
# Drop unrated burritos.
df = df.dropna(subset=['overall'])
df['Great'] = df['overall'] >= 4
# Clean/combine the Burrito categories
df['Burrito'] = df['Burrito'].str.lower()
california = df['Burrito'].str.contains('california')
asada = df['Burrito'].str.contains('asada')
surf = df['Burrito'].str.contains('surf')
carnitas = df['Burrito'].str.contains('carnitas')
df.loc[california, 'Burrito'] = 'California'
df.loc[asada, 'Burrito'] = 'Asada'
df.loc[surf, 'Burrito'] = 'Surf & Turf'
df.loc[carnitas, 'Burrito'] = 'Carnitas'
df.loc[~california & ~asada & ~surf & ~carnitas, 'Burrito'] = 'Other'
# Drop some high cardinality categoricals
df = df.drop(columns=['Notes', 'Location', 'Reviewer', 'Address', 'URL', 'Neighborhood'])
# Drop some columns to prevent "leakage"
df = df.drop(columns=['Rec', 'overall'])
df_copy = df.copy()
df.head()
df.shape
#df.isnull().sum()
#df.dtypes
# let's figure out what columns to drop
# queso is a fully nan column. drop it.
df = df.drop(['Queso'], axis=1)
import numpy as np
df.shape
df.head()
# look at mass and density columns that arent null
df[df['Mass (g)'].notnull()].head(10)
df.columns
# create lists to make data cleaning easier
mean_nan_replace = ['Mass (g)', 'Density (g/mL)', 'Length', 'Circum', 'Volume', 'Yelp', 'Google', 'Hunger', 'Temp', 'Meat', 'Fillings', 'Meat:filling', 'Uniformity', 'Salsa',
'Synergy', 'Wrap', 'Cost']
zero_nan_replace = ['Chips', 'Tortilla', 'Beef', 'Pico', 'Guac',
'Cheese', 'Fries', 'Sour cream', 'Pork', 'Chicken', 'Shrimp', 'Fish',
'Rice', 'Beans', 'Lettuce', 'Tomato', 'Bell peper', 'Carrots',
'Cabbage', 'Sauce', 'Salsa.1', 'Cilantro', 'Onion', 'Taquito',
'Pineapple', 'Ham', 'Chile relleno', 'Nopales', 'Lobster', 'Egg',
'Mushroom', 'Bacon', 'Sushi', 'Avocado', 'Corn', 'Zucchini', 'NonSD', 'Unreliable']
# replace all NaNs in measurement of burrito with means
df[mean_nan_replace] = df[mean_nan_replace].fillna(value=df.mean())
# Fill all NaNs in the burrito ingredient, Unreliable, and NonSD columns with 0
df[zero_nan_replace] = df[zero_nan_replace].fillna(value = 0)
df2 = pd.DataFrame()
# change all x's and X's in DF to 1's
df = df.replace({'x': 1, 'X': 1, 'yes': 1, 'Yes': 1, 'No': 0, 'no': 0, True: 1, False: 0, 'Other': 0})
# we need to change date to a datetime object
df['Date'] = pd.to_datetime(df['Date'], infer_datetime_format=True)
# 3way split data
df['Date'] = df['Date'].dt.year  # extract the year so the comparisons below work
train = df[df['Date'] <= 2016]   # train on 2016 & earlier, per the assignment
test = df[(df['Date'] == 2018) | (df['Date'] == 2019)]
validate = df[df['Date'] == 2017]
print('train shape: ', train.shape)
print('test shape: ', test.shape)
print('validate shape: ', validate.shape)
# Define our target variable.
# We want to predict if a burrito is great so that column is our target
target = 'Great'
# lets get the baseline
baseline = df[target].value_counts(normalize=True)
burrito_list = ['Burrito_California', 'Burrito_Carnitas', 'Burrito_Asada', 'Burrito_0',
'Burrito_Surf & Turf']
features = ['Burrito', 'Date', 'Yelp', 'Google', 'Chips', 'Cost', 'Hunger',
'Mass (g)', 'Density (g/mL)', 'Length', 'Circum', 'Volume', 'Tortilla',
'Temp', 'Meat', 'Fillings', 'Meat:filling', 'Uniformity', 'Salsa',
'Wrap', 'Unreliable', 'NonSD', 'Beef', 'Pico', 'Guac',
'Cheese', 'Fries', 'Sour cream', 'Pork', 'Chicken', 'Shrimp', 'Fish',
'Rice', 'Beans', 'Lettuce', 'Tomato', 'Bell peper', 'Carrots',
'Cabbage', 'Sauce', 'Salsa.1', 'Cilantro', 'Onion', 'Taquito',
'Pineapple', 'Ham', 'Chile relleno', 'Nopales', 'Lobster', 'Egg',
'Mushroom', 'Bacon', 'Sushi', 'Avocado', 'Corn', 'Zucchini']
# define the variables we're using for the regression
X_train = train[features]
X_test = test[features]
X_val = validate[features]
y_train = train[target]
y_test = test[target]
y_val = validate[target]
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Ridge
from sklearn.metrics import accuracy_score
import category_encoders as cat
# One-hot encode features and then run logistic regression
encoder = cat.OneHotEncoder(use_cat_names=True)
X_train = encoder.fit_transform(X_train)
X_val = encoder.transform(X_val)
X_test = encoder.transform(X_test)
# logistic regression
log_reg = LogisticRegression(solver='lbfgs')
log_reg.fit(X_train, y_train)
print('Validation Accuracy: ', log_reg.score(X_val, y_val))
# logistic regression on test data
y_pred = log_reg.predict(X_test)
X_test.shape
y_pred
print('Test Accuracy: ', log_reg.score(X_test, y_test))
accuracy_score(y_test, y_pred)
| 0.526343 | 0.973844 |
# Serving a TensorFlow Model as a REST Endpoint with TensorFlow Serving and SageMaker
We need to understand the application and business context to choose between real-time and batch predictions. Are we trying to optimize for latency or throughput? Does the application require our models to scale automatically throughout the day to handle cyclic traffic requirements? Do we plan to compare models in production through A/B tests?
If our application requires low latency, then we should deploy the model as a real-time API to provide super-fast predictions on single prediction requests over HTTPS. We can deploy, scale, and compare our model prediction servers with SageMaker Endpoints.
```
import boto3
import sagemaker
import pandas as pd
sess = sagemaker.Session()
bucket = sess.default_bucket()
role = sagemaker.get_execution_role()
region = boto3.Session().region_name
sm = boto3.Session().client(service_name="sagemaker", region_name=region)
%store -r training_job_name
try:
training_job_name
print("[OK]")
except NameError:
print("+++++++++++++++++++++++++++++++")
print("[ERROR] Please run the notebooks in the previous TRAIN section before you continue.")
print("+++++++++++++++++++++++++++++++")
```
# Copy the Model to the Notebook
```
!aws s3 cp s3://$bucket/$training_job_name/output/model.tar.gz ./model.tar.gz
!rm -rf ./model/
!mkdir -p ./model/
!tar -xvzf ./model.tar.gz -C ./model/
!saved_model_cli show --all --dir './model/tensorflow/saved_model/0/'
!saved_model_cli run --dir './model/tensorflow/saved_model/0/' --tag_set serve --signature_def serving_default \
--input_exprs 'input_ids=np.zeros((1,64));input_mask=np.zeros((1,64))'
```
# Show `inference.py`
```
!pygmentize ./code/inference.py
```
# Deploy the Model
This will create a default `EndpointConfig` with a single model.
The next notebook will demonstrate how to perform more advanced `EndpointConfig` strategies to support canary rollouts and A/B testing.
_Note: If not using a US-based region, you may need to adapt the container image to your current region using the following table:_
https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/deep-learning-containers-images.html
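Purely as a preview of those strategies (nothing below is created in this notebook — the config and model names are placeholders), a multi-variant `EndpointConfig` for an A/B test could be sketched with boto3 like this:
```
# Sketch only: an EndpointConfig with two weighted production variants (A/B style).
# "model-a", "model-b", and the config name are placeholders, not real resources here.
response = sm.create_endpoint_config(
    EndpointConfigName="my-ab-endpoint-config",
    ProductionVariants=[
        {
            "VariantName": "VariantA",
            "ModelName": "model-a",
            "InitialInstanceCount": 1,
            "InstanceType": "ml.m4.xlarge",
            "InitialVariantWeight": 0.5,
        },
        {
            "VariantName": "VariantB",
            "ModelName": "model-b",
            "InitialInstanceCount": 1,
            "InstanceType": "ml.m4.xlarge",
            "InitialVariantWeight": 0.5,
        },
    ],
)
```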
```
import time
timestamp = int(time.time())
tensorflow_model_name = "{}-{}-{}".format(training_job_name, "tf", timestamp)
print(tensorflow_model_name)
from sagemaker.tensorflow.estimator import TensorFlow
estimator = TensorFlow.attach(training_job_name=training_job_name)
# requires enough disk space for tensorflow, transformers, and bert downloads
instance_type = "ml.m4.xlarge"
from sagemaker.tensorflow.model import TensorFlowModel
tensorflow_model = TensorFlowModel(
name=tensorflow_model_name,
source_dir="code",
entry_point="inference.py",
model_data="s3://{}/{}/output/model.tar.gz".format(bucket, training_job_name),
role=role,
framework_version="2.3.1",
)
tensorflow_endpoint_name = "{}-{}-{}".format(training_job_name, "tf", timestamp)
print(tensorflow_endpoint_name)
tensorflow_model.deploy(
endpoint_name=tensorflow_endpoint_name,
initial_instance_count=1, # Should use >=2 for high(er) availability
instance_type=instance_type,
wait=False,
)
from IPython.core.display import display, HTML
display(
HTML(
'<b>Review <a target="blank" href="https://console.aws.amazon.com/sagemaker/home?region={}#/endpoints/{}">SageMaker REST Endpoint</a></b>'.format(
region, tensorflow_endpoint_name
)
)
)
```
# _Wait Until the Endpoint is Deployed_
```
%%time
waiter = sm.get_waiter("endpoint_in_service")
waiter.wait(EndpointName=tensorflow_endpoint_name)
```
# _Wait Until the ^^ Endpoint ^^ is Deployed_
```
tensorflow_endpoint_arn = sm.describe_endpoint(EndpointName=tensorflow_endpoint_name)["EndpointArn"]
print(tensorflow_endpoint_arn)
```
# Show the Experiment Tracking Lineage
```
from sagemaker.lineage.visualizer import LineageTableVisualizer
lineage_table_viz = LineageTableVisualizer(sess)
lineage_table_viz_df = lineage_table_viz.show(endpoint_arn=tensorflow_endpoint_arn)
lineage_table_viz_df
```
# Test the Deployed Model
```
import json
from sagemaker.tensorflow.model import TensorFlowPredictor
from sagemaker.serializers import JSONLinesSerializer
from sagemaker.deserializers import JSONLinesDeserializer
predictor = TensorFlowPredictor(
endpoint_name=tensorflow_endpoint_name,
sagemaker_session=sess,
model_name="saved_model",
model_version=0,
content_type="application/jsonlines",
accept_type="application/jsonlines",
serializer=JSONLinesSerializer(),
deserializer=JSONLinesDeserializer(),
)
```
# Wait for the Endpoint to Settle Down
```
import time
time.sleep(30)
```
# Predict the `star_rating` with Ad Hoc `review_body` Samples
```
inputs = [{"features": ["This is great!"]}, {"features": ["This is bad."]}]
predicted_classes = predictor.predict(inputs)
for predicted_class in predicted_classes:
print("Predicted star_rating: {}".format(predicted_class))
```
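The `TensorFlowPredictor` above wraps SageMaker's low-level runtime API. For reference, roughly the same request could be made directly with the boto3 `sagemaker-runtime` client; this is only a sketch of the equivalent call, sending a single JSON Lines record:
```
import json
import boto3

# Sketch: calling the deployed endpoint with the low-level runtime client
runtime = boto3.client("sagemaker-runtime", region_name=region)
response = runtime.invoke_endpoint(
    EndpointName=tensorflow_endpoint_name,
    ContentType="application/jsonlines",
    Accept="application/jsonlines",
    Body=json.dumps({"features": ["This is great!"]}),
)
print(response["Body"].read().decode("utf-8"))
```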
|
github_jupyter
|
import boto3
import sagemaker
import pandas as pd
sess = sagemaker.Session()
bucket = sess.default_bucket()
role = sagemaker.get_execution_role()
region = boto3.Session().region_name
sm = boto3.Session().client(service_name="sagemaker", region_name=region)
%store -r training_job_name
try:
training_job_name
print("[OK]")
except NameError:
print("+++++++++++++++++++++++++++++++")
print("[ERROR] Please run the notebooks in the previous TRAIN section before you continue.")
print("+++++++++++++++++++++++++++++++")
!aws s3 cp s3://$bucket/$training_job_name/output/model.tar.gz ./model.tar.gz
!rm -rf ./model/
!mkdir -p ./model/
!tar -xvzf ./model.tar.gz -C ./model/
!saved_model_cli show --all --dir './model/tensorflow/saved_model/0/'
!saved_model_cli run --dir './model/tensorflow/saved_model/0/' --tag_set serve --signature_def serving_default \
--input_exprs 'input_ids=np.zeros((1,64));input_mask=np.zeros((1,64))'
!pygmentize ./code/inference.py
import time
timestamp = int(time.time())
tensorflow_model_name = "{}-{}-{}".format(training_job_name, "tf", timestamp)
print(tensorflow_model_name)
from sagemaker.tensorflow.estimator import TensorFlow
estimator = TensorFlow.attach(training_job_name=training_job_name)
# requires enough disk space for tensorflow, transformers, and bert downloads
instance_type = "ml.m4.xlarge"
from sagemaker.tensorflow.model import TensorFlowModel
tensorflow_model = TensorFlowModel(
name=tensorflow_model_name,
source_dir="code",
entry_point="inference.py",
model_data="s3://{}/{}/output/model.tar.gz".format(bucket, training_job_name),
role=role,
framework_version="2.3.1",
)
tensorflow_endpoint_name = "{}-{}-{}".format(training_job_name, "tf", timestamp)
print(tensorflow_endpoint_name)
tensorflow_model.deploy(
endpoint_name=tensorflow_endpoint_name,
initial_instance_count=1, # Should use >=2 for high(er) availability
instance_type=instance_type,
wait=False,
)
from IPython.core.display import display, HTML
display(
HTML(
'<b>Review <a target="blank" href="https://console.aws.amazon.com/sagemaker/home?region={}#/endpoints/{}">SageMaker REST Endpoint</a></b>'.format(
region, tensorflow_endpoint_name
)
)
)
%%time
waiter = sm.get_waiter("endpoint_in_service")
waiter.wait(EndpointName=tensorflow_endpoint_name)
tensorflow_endpoint_arn = sm.describe_endpoint(EndpointName=tensorflow_endpoint_name)["EndpointArn"]
print(tensorflow_endpoint_arn)
from sagemaker.lineage.visualizer import LineageTableVisualizer
lineage_table_viz = LineageTableVisualizer(sess)
lineage_table_viz_df = lineage_table_viz.show(endpoint_arn=tensorflow_endpoint_arn)
lineage_table_viz_df
import json
from sagemaker.tensorflow.model import TensorFlowPredictor
from sagemaker.serializers import JSONLinesSerializer
from sagemaker.deserializers import JSONLinesDeserializer
predictor = TensorFlowPredictor(
endpoint_name=tensorflow_endpoint_name,
sagemaker_session=sess,
model_name="saved_model",
model_version=0,
content_type="application/jsonlines",
accept_type="application/jsonlines",
serializer=JSONLinesSerializer(),
deserializer=JSONLinesDeserializer(),
)
import time
time.sleep(30)
inputs = [{"features": ["This is great!"]}, {"features": ["This is bad."]}]
predicted_classes = predictor.predict(inputs)
for predicted_class in predicted_classes:
print("Predicted star_rating: {}".format(predicted_class))
| 0.303835 | 0.896704 |
# Partial Dependence Plots
While feature importance shows what variables most affect predictions, partial dependence plots show *how* a feature affects predictions.
This is useful to answer questions like:
* Controlling for all other house features, what impact do longitude and latitude have on home prices? To restate this, how would similarly sized houses be priced in different areas?
* Are predicted health differences between two groups due to differences in their diets, or due to some other factor?
If you are familiar with linear or logistic regression models, partial dependence plots can be interpreted similarly to the coefficients in those models. Though, partial dependence plots on sophisticated models can capture more complex patterns than coefficients from simple models. If you aren't familiar with linear or logistic regressions, don't worry about this comparison.
We will show a couple examples, explain the interpretation of these plots, and then review the code to create these plots.
# How it Works
Like permutation importance, **partial dependence plots are calculated after a model has been fit.** The model is fit on real data that has not been artificially manipulated in any way.
In our soccer example, teams may differ in many ways. How many passes they made, shots they took, goals they scored, etc. At first glance, it seems difficult to disentangle the effect of these features.
To see how partial plots separate out the effect of each feature, we start by considering a single row of data. For example, that row of data might represent a team that had the ball 50% of the time, made 100 passes, took 10 shots and scored 1 goal.
We will use the fitted model to predict our outcome (probability their player won "man of the match"). But we **repeatedly alter the value for one variable** to make a series of predictions. We could predict the outcome if the team had the ball only 40% of the time. We then predict with them having the ball 50% of the time. Then predict again for 60%. And so on. We trace out predicted outcomes (on the vertical axis) as we move from small values of ball possession to large values (on the horizontal axis).
In this description, we used only a single row of data. Interactions between features may cause the plot for a single row to be atypical. So, we repeat that mental experiment with multiple rows from the original dataset, and we plot the average predicted outcome on the vertical axis.
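To make this procedure concrete, here is a minimal hand-rolled sketch of the "alter one feature, re-predict, and average" loop (illustrative only; the PDPBox library used below handles this for us). The names `model`, `X`, and `feature` are placeholders for any fitted scikit-learn classifier, its feature DataFrame, and a column name.
```
import numpy as np

# Manual partial dependence for a single feature (sketch, not PDPBox)
def manual_partial_dependence(model, X, feature, grid_points=20):
    grid = np.linspace(X[feature].min(), X[feature].max(), grid_points)
    averaged_predictions = []
    for value in grid:
        X_mod = X.copy()
        X_mod[feature] = value                    # force every row to this value
        probs = model.predict_proba(X_mod)[:, 1]  # predicted probability of the positive class
        averaged_predictions.append(probs.mean()) # average over all rows
    return grid, np.array(averaged_predictions)
```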
# Code Example
Model building isn't our focus, so we won't focus on the data exploration or model building code.
```
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
data = pd.read_csv('../input/fifa-2018-match-statistics/FIFA 2018 Statistics.csv')
y = (data['Man of the Match'] == "Yes") # Convert from string "Yes"/"No" to binary
feature_names = [i for i in data.columns if data[i].dtype in [np.int64]]
X = data[feature_names]
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)
tree_model = DecisionTreeClassifier(random_state=0, max_depth=5, min_samples_split=5).fit(train_X, train_y)
```
Our first example uses a decision tree, which you can see below. In practice, you'll use more sophisticated models for real-world applications.
```
from sklearn import tree
import graphviz
tree_graph = tree.export_graphviz(tree_model, out_file=None, feature_names=feature_names)
graphviz.Source(tree_graph)
```
As guidance to read the tree:
- Nodes with children show their splitting criterion at the top
- The pair of values at the bottom shows the count of False and True target values, respectively, for the data points in that node of the tree.
Here is the code to create the Partial Dependence Plot using the [PDPBox library](https://pdpbox.readthedocs.io/en/latest/).
```
from matplotlib import pyplot as plt
from pdpbox import pdp, get_dataset, info_plots
# Create the data that we will plot
pdp_goals = pdp.pdp_isolate(model=tree_model, dataset=val_X, model_features=feature_names, feature='Goal Scored')
# plot it
pdp.pdp_plot(pdp_goals, 'Goal Scored')
plt.show()
```
A few items are worth pointing out as you interpret this plot:
- The y axis is interpreted as **change in the prediction** from what would be predicted at the baseline or leftmost value.
- A blue shaded area indicates the level of confidence
From this particular graph, we see that scoring a goal substantially increases your chances of winning "Man of The Match." But extra goals beyond that appear to have little impact on predictions.
Here is another example plot:
```
feature_to_plot = 'Distance Covered (Kms)'
pdp_dist = pdp.pdp_isolate(model=tree_model, dataset=val_X, model_features=feature_names, feature=feature_to_plot)
pdp.pdp_plot(pdp_dist, feature_to_plot)
plt.show()
```
This graph seems too simple to represent reality. But that's because the model is so simple. You should be able to see from the decision tree above that this is representing exactly the model's structure.
You can easily compare the structure or implications of different models. Here is the same plot with a Random Forest model.
```
# Build Random Forest model
rf_model = RandomForestClassifier(random_state=0).fit(train_X, train_y)
pdp_dist = pdp.pdp_isolate(model=rf_model, dataset=val_X, model_features=feature_names, feature=feature_to_plot)
pdp.pdp_plot(pdp_dist, feature_to_plot)
plt.show()
```
This model thinks you are more likely to win *Man of the Match* if your players run a total of 100km over the course of the game. Though running much more causes lower predictions.
In general, the smooth shape of this curve seems more plausible than the step function from the Decision Tree model. Though this dataset is small enough that we would be careful in how we interpret any model.
# 2D Partial Dependence Plots
If you are curious about interactions between features, 2D partial dependence plots are also useful. An example may clarify this.
We will again use the Decision Tree model for this graph. It will create an extremely simple plot, but you should be able to match what you see in the plot to the tree itself.
```
# Similar to previous PDP plot except we use pdp_interact instead of pdp_isolate and pdp_interact_plot instead of pdp_isolate_plot
features_to_plot = ['Goal Scored', 'Distance Covered (Kms)']
inter1 = pdp.pdp_interact(model=tree_model, dataset=val_X, model_features=feature_names, features=features_to_plot)
pdp.pdp_interact_plot(pdp_interact_out=inter1, feature_names=features_to_plot, plot_type='contour')
plt.show()
```
This graph shows predictions for any combination of Goals Scored and Distance covered.
For example, we see the highest predictions when a team scores at least 1 goal and they run a total distance close to 100km. If they score 0 goals, distance covered doesn't matter. Can you see this by tracing through the decision tree with 0 goals?
But distance can impact predictions if they score goals. Make sure you can see this from the 2D partial dependence plot. Can you see this pattern in the decision tree too?
# Your Turn
**[Test your understanding](#$NEXT_NOTEBOOK_URL$)** on conceptual questions and a short coding challenge.
|
github_jupyter
|
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
data = pd.read_csv('../input/fifa-2018-match-statistics/FIFA 2018 Statistics.csv')
y = (data['Man of the Match'] == "Yes") # Convert from string "Yes"/"No" to binary
feature_names = [i for i in data.columns if data[i].dtype in [np.int64]]
X = data[feature_names]
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)
tree_model = DecisionTreeClassifier(random_state=0, max_depth=5, min_samples_split=5).fit(train_X, train_y)
from sklearn import tree
import graphviz
tree_graph = tree.export_graphviz(tree_model, out_file=None, feature_names=feature_names)
graphviz.Source(tree_graph)
from matplotlib import pyplot as plt
from pdpbox import pdp, get_dataset, info_plots
# Create the data that we will plot
pdp_goals = pdp.pdp_isolate(model=tree_model, dataset=val_X, model_features=feature_names, feature='Goal Scored')
# plot it
pdp.pdp_plot(pdp_goals, 'Goal Scored')
plt.show()
feature_to_plot = 'Distance Covered (Kms)'
pdp_dist = pdp.pdp_isolate(model=tree_model, dataset=val_X, model_features=feature_names, feature=feature_to_plot)
pdp.pdp_plot(pdp_dist, feature_to_plot)
plt.show()
# Build Random Forest model
rf_model = RandomForestClassifier(random_state=0).fit(train_X, train_y)
pdp_dist = pdp.pdp_isolate(model=rf_model, dataset=val_X, model_features=feature_names, feature=feature_to_plot)
pdp.pdp_plot(pdp_dist, feature_to_plot)
plt.show()
# Similar to previous PDP plot except we use pdp_interact instead of pdp_isolate and pdp_interact_plot instead of pdp_isolate_plot
features_to_plot = ['Goal Scored', 'Distance Covered (Kms)']
inter1 = pdp.pdp_interact(model=tree_model, dataset=val_X, model_features=feature_names, features=features_to_plot)
pdp.pdp_interact_plot(pdp_interact_out=inter1, feature_names=features_to_plot, plot_type='contour')
plt.show()
| 0.639173 | 0.990188 |
```
import os
import json
import random
import csv
import math
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
file_loc = 'mprobc_chr1_100kb.csv'
with open(file_loc) as input_:
stripped = [line.strip() for line in input_]
lines = [s.split('\t')[1:] for s in stripped if s]
header = []
final_list=[]
for counter,line in enumerate(lines[1:]):
abc = line[0]
header.append(abc)
new_list = [abc] + [int(math.fabs(math.floor(float(x)))) for x in line[1:]]
final_list.append(new_list)
with open('mprobc_chr1_100kb.csv', 'wt') as out_file:
tsv_writer = csv.writer(out_file, delimiter=',')
for i in final_list:
tsv_writer.writerow(i)
df = pd.read_csv('mprobc_chr1_100kb.csv',delimiter=',',header=None,index_col=None)
df.head()
df.describe()
df.info()
header = df.pop(0)
df.head()
```
### Normalization
Scaling the values between zero and one with `MinMaxScaler`.
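For each column, `MinMaxScaler` applies $x' = \dfrac{x - x_{\min}}{x_{\max} - x_{\min}}$, so every interaction frequency ends up in $[0, 1]$.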
```
scaler = MinMaxScaler()
scaled_values = scaler.fit_transform(df)
df.loc[:,:] = scaled_values
df.describe()
frequency_threshold = 0.00000
df2 = df
thres_inter_tar = []
thres_inter_sou = []
frequency = []
size = []
for chr_loc, i in enumerate(df2.iterrows()):
list_ = list(i[1])
# print(i[1])
for counter,j in enumerate(list_):
if j >= frequency_threshold:
# print(j)
if header[counter] == header[chr_loc]:
continue
else:
thres_inter_tar.append(header[counter])
frequency.append(j)
thres_inter_sou.append(header[chr_loc])
size.append(random.randint(1,9))
else:
continue
df_write = pd.DataFrame({
'source':thres_inter_tar,
'target':thres_inter_sou,
'weight':frequency,
'size':size,
})
df_write.to_csv('cis.csv',index=False)
df_write
```
### Complete script to generate the complete interaction csv and the threshold interaction csv
#### Changes:
Change the file_loc and output file name
<br>
<b><u>Note:</u></b> the threshold csv file has values scaled between 0 and 1
```
import os
import json
import random
import csv
import math
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
file_loc = 'mprobc_100kb.txt'
with open(file_loc) as input_:
stripped = [line.strip() for line in input_]
lines = [s.split('\t')[1:] for s in stripped if s]
header = []
final_list=[]
for counter,line in enumerate(lines[1:]):
abc = line[0]
header.append(abc)
new_list = [abc] + [int(math.fabs(math.floor(float(x)))) for x in line[1:]]
final_list.append(new_list)
with open('mprobc_100kb.csv', 'wt') as out_file:
tsv_writer = csv.writer(out_file, delimiter=',')
for i in final_list:
tsv_writer.writerow(i)
df = pd.read_csv('mprobc_100kb.csv',delimiter=',',header=None,index_col=None)
header = df.pop(0)
scaler = MinMaxScaler()
scaled_values = scaler.fit_transform(df)
df.loc[:,:] = scaled_values
frequency_threshold = 0.050
thres_inter_tar = []
thres_inter_sou = []
frequency = []
for chr_loc, i in enumerate(df.iterrows()):
list_ = list(i[1])
# print(i[1])
for counter,j in enumerate(list_):
if j >= frequency_threshold:
# print(j)
if header[counter] == header[chr_loc]:
continue
else:
thres_inter_tar.append(header[counter])
frequency.append(j)
thres_inter_sou.append(header[chr_loc])
else:
continue
df_write = pd.DataFrame({
'source':thres_inter_tar,
'target':thres_inter_sou,
'weight':frequency,
})
df_write.to_csv('interactions.csv',index=False)
```
|
github_jupyter
|
import os
import json
import random
import csv
import math
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
file_loc = 'mprobc_chr1_100kb.csv'
with open(file_loc) as input_:
stripped = [line.strip() for line in input_]
lines = [s.split('\t')[1:] for s in stripped if s]
header = []
final_list=[]
for counter,line in enumerate(lines[1:]):
abc = line[0]
header.append(abc)
new_list = [abc] + [int(math.fabs(math.floor(float(x)))) for x in line[1:]]
final_list.append(new_list)
with open('mprobc_chr1_100kb.csv', 'wt') as out_file:
tsv_writer = csv.writer(out_file, delimiter=',')
for i in final_list:
tsv_writer.writerow(i)
df = pd.read_csv('mprobc_chr1_100kb.csv',delimiter=',',header=None,index_col=None)
df.head()
df.describe()
df.info()
header = df.pop(0)
df.head()
scaler = MinMaxScaler()
scaled_values = scaler.fit_transform(df)
df.loc[:,:] = scaled_values
df.describe()
frequency_threshold = 0.00000
df2 = df
thres_inter_tar = []
thres_inter_sou = []
frequency = []
size = []
for chr_loc, i in enumerate(df2.iterrows()):
list_ = list(i[1])
# print(i[1])
for counter,j in enumerate(list_):
if j >= frequency_threshold:
# print(j)
if header[counter] == header[chr_loc]:
continue
else:
thres_inter_tar.append(header[counter])
frequency.append(j)
thres_inter_sou.append(header[chr_loc])
size.append(random.randint(1,9))
else:
continue
df_write = pd.DataFrame({
'source':thres_inter_tar,
'target':thres_inter_sou,
'weight':frequency,
'size':size,
})
df_write.to_csv('cis.csv',index=False)
df_write
import os
import json
import random
import csv
import math
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
file_loc = 'mprobc_100kb.txt'
with open(file_loc) as input_:
stripped = [line.strip() for line in input_]
lines = [s.split('\t')[1:] for s in stripped if s]
header = []
final_list=[]
for counter,line in enumerate(lines[1:]):
abc = line[0]
header.append(abc)
new_list = [abc] + [int(math.fabs(math.floor(float(x)))) for x in line[1:]]
final_list.append(new_list)
with open('mprobc_100kb.csv', 'wt') as out_file:
tsv_writer = csv.writer(out_file, delimiter=',')
for i in final_list:
tsv_writer.writerow(i)
df = pd.read_csv('mprobc_100kb.csv',delimiter=',',header=None,index_col=None)
header = df.pop(0)
scaler = MinMaxScaler()
scaled_values = scaler.fit_transform(df)
df.loc[:,:] = scaled_values
frequency_threshold = 0.050
thres_inter_tar = []
thres_inter_sou = []
frequency = []
for chr_loc, i in enumerate(df.iterrows()):
list_ = list(i[1])
# print(i[1])
for counter,j in enumerate(list_):
if j >= frequency_threshold:
# print(j)
if header[counter] == header[chr_loc]:
continue
else:
thres_inter_tar.append(header[counter])
frequency.append(j)
thres_inter_sou.append(header[chr_loc])
else:
continue
df_write = pd.DataFrame({
'source':thres_inter_tar,
'target':thres_inter_sou,
'weight':frequency,
})
df_write.to_csv('interactions.csv',index=False)
| 0.079524 | 0.340006 |
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import math
import matplotlib.pyplot as plt
import seaborn as sns
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
```
# Generating synthetic data for clustering
```
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler
#-------------------------------------------------------------------------------
# Generate sample data
centers = [[1, 1], [-1, -1], [1, -1]]
X, labels_true = make_blobs(n_samples=750, centers=centers, cluster_std=0.4,
random_state=0)
ads_arr = StandardScaler().fit_transform(X)
sns.scatterplot(x=ads_arr[:,0],y=ads_arr[:,1])
```
# K-Means from scratch
### UDF for calculation of distances (Minkowski)
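For reference, the Minkowski distance between points $a$ and $b$ is $D_p(a, b) = \left(\sum_i |a_i - b_i|^p\right)^{1/p}$; $p=1$ gives the Manhattan distance and $p=2$ the Euclidean distance used by default below.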
```
#distance calculation udf
def minkowski_(point_a,point_b,p=2):
if p==1:
#print('----> Manhattan')
dist = np.sum(abs(point_a-point_b))
#print('Manual Distance :',dist)
elif p==2:
#print('----> Euclidean')
dist = np.sqrt(np.sum(np.square(point_a-point_b)))
#print('Manual Distance :',dist)
return dist
#------------------------------------------------------------------
#UDF for calculation of distance between a point and all other points (including itself)
def distance_to_all(curr_vec,data,p_=2):
#curr_vec = X_arr[0] #example
distance_list = []
#data = X_arr[0:5]
for vec_idx in range(len(data)):
dist = minkowski_(point_a=curr_vec,point_b=data[vec_idx],p=p_)
distance_list.append(dist)
return distance_list
```
### UDF for one iteration of K-Means algorithm
```
def kmeans_unit_process(k,data,centroids,k_dist_arr,p,inertia_tray,inertia,total_dims):
#--------------------------------------------------------------------------------
#print('i/p (Old) Centroid :\n',centroids)
#Calculating distance of each point from each centre (k value)
for k in range(k):
#print(k)
centroid_k = centroids[k]
dist = distance_to_all(curr_vec=centroid_k,data=data,p_=2)
k_dist_arr[:,k] = dist
#---------------------------------------------------------
#Extracting cluster having least distance to each point
clusters = np.argmin(k_dist_arr,axis=-1)
clusters = clusters.reshape(k_dist_arr.shape[0],1)
#Extracting the least distance of each point to corresponding cluster
min_list = np.amin(k_dist_arr,axis=1)
min_list = min_list.reshape(k_dist_arr.shape[0],1)
#-------------------------------------------------------
#Appending together
appended = np.append(k_dist_arr,clusters,axis=-1)
appended = np.append(appended,min_list,axis=-1)
#-------------------------------------------------------
#Appending to original data
data = np.append(data,appended,axis=-1)
#-------------------------------------------------------
#Unique clusters
unique_clusters = np.unique(data[:,-2]).astype(int)
#print(unique_clusters)
#Extracting new clusters based on previous clusters and distances
for elem in unique_clusters:
#print('------------------- Cluster :',elem,'-------------------')
filtered = data[data[:,-2]==elem]
c_k = len(filtered)
k_inertia = np.sum(np.square(filtered[:,-1])) #As per sklearn formula of inertia (contradicting theory)
inertia += k_inertia
new_centroid = np.mean(filtered[:,0:total_dims],axis=0)
#print(total_dims)
#print('o/p (New) Centroid :',new_centroid)
centroids[elem] = new_centroid
#----------------------------------------------------------------------------------------------------------
#print('o/p (New) Centroid :\n',centroids)
#print('------- Done for all clusters (x1 iter) -------')
inertia_tray.append(inertia)
#print(np.mean(filtered[:,0:total_dims],axis=0))
#filtered[:,0:total_dims]
print('Inertia vals over iters :\n',inertia_tray)
return centroids,inertia_tray,clusters
```
### Overall UDF for K-Means
```
def Kmeans_manual(k,data,total_dims,seed=None,n_iters=3,tol_val=1e-4):
#--------------------------------------------------------------------------------
if seed != None:
np.random.seed(seed)
#Random centroids generated at beginning
initial_centroids = data[np.random.randint(low=1,high=data.shape[0],size=(k)),:]
print(initial_centroids.shape)
#--------------------------------------------------------------------------------
#Internal variable declarations
k_dist_arr = np.zeros((data.shape[0],k))
centroids = initial_centroids
inertia_tray = []
#Iterative loop based on user-specified iterations
for n in range(n_iters):
print('#-------------------- Iter :',n,' -------------------------#')
inertia = 0
centroids,inertia_tray,clusters = kmeans_unit_process(k=k,
data=data,
centroids=centroids,
k_dist_arr=k_dist_arr,
p=2,
inertia_tray=inertia_tray,
inertia=inertia,
total_dims=total_dims)
        #Early stopping of the process if the inertia is not improving beyond the specified tolerance value
if n>1:
diff = inertia_tray[-2] - inertia_tray[-1]
print('Diff of inertia :',diff)
if diff<=tol_val:
print('Interrupting loop at iter :',n)
break
return clusters,centroids
```
## Invoking UDF for K-Means Clustering
```
clusters_,centers = Kmeans_manual(k=3,data=ads_arr,
total_dims=ads_arr.shape[1],
seed=50,n_iters=50,tol_val=1e-4)
clusters_ = clusters_.ravel()
len(clusters_)
```
## Final Cluster Centres
```
centers
```
# Plotting the Clusters derived in manual K-Means
```
sns.scatterplot(x=ads_arr[:,0],y=ads_arr[:,1],hue=clusters_)
```
# Sklearn implementation for benchmarking
```
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.metrics import silhouette_samples
#---------------------------------------------------------------------------
kmeans = KMeans(n_clusters=3,n_init=1,max_iter=50,tol=1e-4,random_state=50)
kmeans.fit(ads_arr)
kmeans_labels = kmeans.labels_
#------------------------------------
#Generate Summary
print(kmeans.cluster_centers_)
print(kmeans.inertia_)
print(kmeans.n_iter_)
print(kmeans_labels.shape)
sns.scatterplot(x=ads_arr[:,0],y=ads_arr[:,1],hue=kmeans_labels)
```
### Insights: The manual results match the sklearn implementation
# Elbow plotting & Silhouette Curve for k-means
```
k_list = [2,3,4,5,6,7,8,9,10]
inertia_list = []
silhouette_list = []
silhouette_samples_list = []
#--------------------------------------------------------------------------------------------------------------
for k_ in k_list:
kmeans = KMeans(n_clusters=k_,n_init=1,max_iter=50,tol=1e-4,random_state=50)
kmeans.fit(ads_arr)
#---------------------------------------------------------------------------
inertia_list.append(kmeans.inertia_)
silhouette_list.append(silhouette_score(X=ads_arr,labels=kmeans.labels_))
#--------------------------------------------------------------------------------------------------------------
sns.set_style('darkgrid')
#-----------------------------------------------------------------------
fig, axes = plt.subplots(1, 2, sharex=False, figsize=(10,5))
#-----------------------------------------------------------------------
fig.suptitle('Finding k-value for K-Means Clustering')
axes[0].set_title('Elbow Curve inertia-vs-k')
axes[1].set_title('Mean Silhouette Value Curve Mean_Sil_Value-vs-k')
axes[0].set(xlabel='K values',ylabel='Inertia')
axes[1].set(xlabel='K values',ylabel='Mean Sil Score')
#-----------------------------------------------------------------------
sns.lineplot(ax=axes[0],x=k_list, y=inertia_list,color='g')
sns.lineplot(ax=axes[1],x=k_list,y=silhouette_list)
```
## Insights :
1. The inertia elbow curve suggests the best k value is 3 (which is the exact number of clusters present in the simulated data)
2. The silhouette curve shows its highest silhouette score at k=3, in agreement with the inertia elbow curve
# END
|
github_jupyter
|
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import math
import matplotlib.pyplot as plt
import seaborn as sns
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler
#-------------------------------------------------------------------------------
# Generate sample data
centers = [[1, 1], [-1, -1], [1, -1]]
X, labels_true = make_blobs(n_samples=750, centers=centers, cluster_std=0.4,
random_state=0)
ads_arr = StandardScaler().fit_transform(X)
sns.scatterplot(x=ads_arr[:,0],y=ads_arr[:,1])
#distance calculation udf
def minkowski_(point_a,point_b,p=2):
if p==1:
#print('----> Manhattan')
dist = np.sum(abs(point_a-point_b))
#print('Manual Distance :',dist)
elif p==2:
#print('----> Euclidean')
dist = np.sqrt(np.sum(np.square(point_a-point_b)))
#print('Manual Distance :',dist)
return dist
#------------------------------------------------------------------
#UDF for calculation of distance between a point and all other points (including itself)
def distance_to_all(curr_vec,data,p_=2):
#curr_vec = X_arr[0] #example
distance_list = []
#data = X_arr[0:5]
for vec_idx in range(len(data)):
dist = minkowski_(point_a=curr_vec,point_b=data[vec_idx],p=p_)
distance_list.append(dist)
return distance_list
def kmeans_unit_process(k,data,centroids,k_dist_arr,p,inertia_tray,inertia,total_dims):
#--------------------------------------------------------------------------------
#print('i/p (Old) Centroid :\n',centroids)
#Calculating distance of each point from each centre (k value)
for k in range(k):
#print(k)
centroid_k = centroids[k]
dist = distance_to_all(curr_vec=centroid_k,data=data,p_=2)
k_dist_arr[:,k] = dist
#---------------------------------------------------------
#Extracting cluster having least distance to each point
clusters = np.argmin(k_dist_arr,axis=-1)
clusters = clusters.reshape(k_dist_arr.shape[0],1)
#Extracting the least distance of each point to corresponding cluster
min_list = np.amin(k_dist_arr,axis=1)
min_list = min_list.reshape(k_dist_arr.shape[0],1)
#-------------------------------------------------------
#Appending together
appended = np.append(k_dist_arr,clusters,axis=-1)
appended = np.append(appended,min_list,axis=-1)
#-------------------------------------------------------
#Appending to original data
data = np.append(data,appended,axis=-1)
#-------------------------------------------------------
#Unique clusters
unique_clusters = np.unique(data[:,-2]).astype(int)
#print(unique_clusters)
#Extracting new clusters based on previous clusters and distances
for elem in unique_clusters:
#print('------------------- Cluster :',elem,'-------------------')
filtered = data[data[:,-2]==elem]
c_k = len(filtered)
k_inertia = np.sum(np.square(filtered[:,-1])) #As per sklearn formula of inertia (contradicting theory)
inertia += k_inertia
new_centroid = np.mean(filtered[:,0:total_dims],axis=0)
#print(total_dims)
#print('o/p (New) Centroid :',new_centroid)
centroids[elem] = new_centroid
#----------------------------------------------------------------------------------------------------------
#print('o/p (New) Centroid :\n',centroids)
#print('------- Done for all clusters (x1 iter) -------')
inertia_tray.append(inertia)
#print(np.mean(filtered[:,0:total_dims],axis=0))
#filtered[:,0:total_dims]
print('Inertia vals over iters :\n',inertia_tray)
return centroids,inertia_tray,clusters
def Kmeans_manual(k,data,total_dims,seed=None,n_iters=3,tol_val=1e-4):
#--------------------------------------------------------------------------------
    if seed is not None:
np.random.seed(seed)
#Random centroids generated at beginning
initial_centroids = data[np.random.randint(low=1,high=data.shape[0],size=(k)),:]
print(initial_centroids.shape)
#--------------------------------------------------------------------------------
#Internal variable declarations
k_dist_arr = np.zeros((data.shape[0],k))
centroids = initial_centroids
inertia_tray = []
#Iterative loop based on user-specified iterations
for n in range(n_iters):
print('#-------------------- Iter :',n,' -------------------------#')
inertia = 0
centroids,inertia_tray,clusters = kmeans_unit_process(k=k,
data=data,
centroids=centroids,
k_dist_arr=k_dist_arr,
p=2,
inertia_tray=inertia_tray,
inertia=inertia,
total_dims=total_dims)
        #Early stopping if the inertia is not improving by more than the specified tolerance
if n>1:
diff = inertia_tray[-2] - inertia_tray[-1]
print('Diff of inertia :',diff)
if diff<=tol_val:
print('Interrupting loop at iter :',n)
break
return clusters,centroids
clusters_,centers = Kmeans_manual(k=3,data=ads_arr,
total_dims=ads_arr.shape[1],
seed=50,n_iters=50,tol_val=1e-4)
clusters_ = clusters_.ravel()
len(clusters_)
centers
sns.scatterplot(x=ads_arr[:,0],y=ads_arr[:,1],hue=clusters_)
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.metrics import silhouette_samples
#---------------------------------------------------------------------------
kmeans = KMeans(n_clusters=3,n_init=1,max_iter=50,tol=1e-4,random_state=50)
kmeans.fit(ads_arr)
kmeans_labels = kmeans.labels_
#------------------------------------
#Generate Summary
print(kmeans.cluster_centers_)
print(kmeans.inertia_)
print(kmeans.n_iter_)
print(kmeans_labels.shape)
sns.scatterplot(x=ads_arr[:,0],y=ads_arr[:,1],hue=kmeans_labels)
k_list = [2,3,4,5,6,7,8,9,10]
inertia_list = []
silhouette_list = []
silhouette_samples_list = []
#--------------------------------------------------------------------------------------------------------------
for k_ in k_list:
kmeans = KMeans(n_clusters=k_,n_init=1,max_iter=50,tol=1e-4,random_state=50)
kmeans.fit(ads_arr)
#---------------------------------------------------------------------------
inertia_list.append(kmeans.inertia_)
silhouette_list.append(silhouette_score(X=ads_arr,labels=kmeans.labels_))
#--------------------------------------------------------------------------------------------------------------
sns.set_style('darkgrid')
#-----------------------------------------------------------------------
fig, axes = plt.subplots(1, 2, sharex=False, figsize=(10,5))
#-----------------------------------------------------------------------
fig.suptitle('Finding the k-value for K-Means Clustering')
axes[0].set_title('Elbow Curve (Inertia vs k)')
axes[1].set_title('Mean Silhouette Score vs k')
axes[0].set(xlabel='K values',ylabel='Inertia')
axes[1].set(xlabel='K values',ylabel='Mean Sil Score')
#-----------------------------------------------------------------------
sns.lineplot(ax=axes[0],x=k_list, y=inertia_list,color='g')
sns.lineplot(ax=axes[1],x=k_list,y=silhouette_list)
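#Reading the two plots: choose k near the "elbow" of the inertia curve and/or at the
#peak of the mean silhouette score; for these three synthetic blobs both heuristics
#should point to k = 3.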
| 0.446977 | 0.716297 |
```
# project: p6
# submitter: zchen697
# partner: none
import csv
from math import *
from numpy import *
# copied from https://automatetheboringstuff.com/chapter14/
def process_csv(filename):
exampleFile = open(filename, encoding="utf-8")
exampleReader = csv.reader(exampleFile)
exampleData = list(exampleReader)
exampleFile.close()
return exampleData
# use process_csv to pull out the header and data rows
csv_rows = process_csv("airbnb.csv")
csv_header = csv_rows[0]
csv_data = csv_rows[1:]
#q1
neigroup = [x[csv_header.index("neighborhood_group")] for x in csv_rows]
#csv_header.index("neighborhood_group")
neigroupfinal = []
for i in neigroup:
if not i in neigroupfinal:
neigroupfinal.append(i)
print(i)
neigroupfinal.remove("neighborhood_group")
neigroupfinal
#q2
price_sum = 0
number_count = 0
pricecheck = [x[csv_header.index("price")] for x in csv_rows]
pricecheck.pop(0)
for i in pricecheck:
if i != "NULL":
price_sum += int(i)
number_count += 1
elif i == "NULL":
continue
ave = price_sum / number_count
ave
#q3
location_check = [x[csv_header.index("neighborhood")] for x in csv_rows]
location_check.pop(0)
china_count = 0
for i in location_check:
if i == "Chinatown":
china_count += 1
china_count
#q4
name_check = [x[csv_header.index("name")] for x in csv_rows]
name_check1 = [x[csv_header.index("name")] for x in csv_rows]
name_check.pop(0)
name_check = [x.lower() for x in name_check if isinstance(x,str)]
super_name = []
number1 = []
#number1.append(name_check.index("superbowl") for)
#print(name_check)
for index, i in enumerate(name_check):
if "superbowl" in i:
super_name.append(i)
number1.append(index)
final_name = []
for i in number1:
final_name.append(name_check1[i+1])
final_name
#q5
name_check = [x[csv_header.index("name")] for x in csv_rows]
name_check1 = [x[csv_header.index("name")] for x in csv_rows]
name_check.pop(0)
name_check = [x.lower() for x in name_check if isinstance(x,str)]
super_name = []
number1 = []
for index, i in enumerate(name_check):
if "dream room" in i:
super_name.append(i)
number1.append(index)
final_name = []
for i in number1:
final_name.append(name_check1[i+1])
final_name
#q6
name_check = [x[csv_header.index("name")] for x in csv_rows]
name_check1 = [x[csv_header.index("name")] for x in csv_rows]
name_check.pop(0)
name_check = [x.lower() for x in name_check if isinstance(x,str)]
super_name = []
number1 = []
for index, i in enumerate(name_check):
if "free wifi" in i:
super_name.append(i)
number1.append(index)
final_name = []
for i in number1:
final_name.append(name_check1[i+1])
final_name
def check_anagrams(a,b):
l1 = list(a)
l2 = list(b)
if len(a) != len(b):
return False
else:
for i in l1:
if i in l2:
l2.remove(i)
if l2:
return False
else:
return True
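# Note: check_anagrams removes matched letters one at a time; an equivalent,
# shorter test would be sorted(a) == sorted(b).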
#q7
host_name = [x[csv_header.index("host_name")] for x in csv_rows]
host_name1 = [x[csv_header.index("host_name")] for x in csv_rows]
host_name.pop(0)
host_name = [x.lower() for x in host_name if isinstance(x,str)]
a = "landeyo"
host_result = []
position_host = []
for index,i in enumerate(host_name):
if check_anagrams(i,a):
position_host.append(index)
for i in position_host:
host_result.append(host_name1[i+1])
host_result
#q8
room_night = [x[csv_header.index("minimum_nights")] for x in csv_rows]
room_ids = [x[csv_header.index("room_id")] for x in csv_rows]
room_night.pop(0)
room_ids.pop(0)
room_index = []
result_id = []
for index,i in enumerate(room_night):
if int(i) > 365:
room_index.append(index)
for i in room_index:
result_id.append(room_ids[i])
result_id
#q9
room_count = [x[csv_header.index("calculated_host_listings_count")] for x in csv_rows]
host_ids = [x[csv_header.index("host_id")] for x in csv_rows]
room_count.pop(0)
host_ids.pop(0)
room_index = []
result_id = []
for index,i in enumerate(room_count):
if int(i) > 50:
room_index.append(index)
for i in room_index:
result_id.append(host_ids[i])
result_id = list(set(result_id))
result_id
#q10
cheap_name = []
price_compare = 999999999
for i in range(len(csv_data)):
if csv_data[i][csv_header.index("neighborhood_group")] == "Brooklyn":
room_price = csv_data[i][csv_header.index("price")]
if price_compare >= int(room_price):
price_compare = int(room_price)
for i in range(len(csv_data)):
if int(csv_data[i][csv_header.index("price")]) == price_compare and csv_data[i][csv_header.index("neighborhood_group")] == "Brooklyn":
cheap_name.append(csv_data[i][csv_header.index("name")])
cheap_name
#q11
cheap_name = []
price_compare = 999999999
for i in range(len(csv_data)):
if csv_data[i][csv_header.index("neighborhood_group")] == "Manhattan":
room_price = csv_data[i][csv_header.index("price")]
if price_compare >= int(room_price):
price_compare = int(room_price)
for i in range(len(csv_data)):
if int(csv_data[i][csv_header.index("price")]) == price_compare and csv_data[i][csv_header.index("neighborhood_group")] == "Manhattan":
cheap_name.append(csv_data[i][csv_header.index("name")])
cheap_name
def review_ratio(a,b):
a = float(a)
b = float(b)
c = b/a
return c
#q12
# Clarification: for this question I googled which library provides the "mean" function
ratio_result = []
for i in range(len(csv_data)):
x = csv_data[i][csv_header.index("availability_365")]
y = csv_data[i][csv_header.index("number_of_reviews")]
if int(x) == 0:
continue
else:
ratio_result.append(review_ratio(x,y))
ave_ratio = mean(ratio_result)
ave_ratio
def smaller_price(a,b):
return min(a,b)
#q13
cheap_id = []
price_compare = 999999999
for i in range(len(csv_data)):
a = csv_data[i][csv_header.index("latitude")]
b = csv_data[i][csv_header.index("longitude")]
if float(a) >= 40.50 and float(a) <= 40.75:
if float(b) >= -74.00 and float(b) <= -73.95:
room_price = csv_data[i][csv_header.index("price")]
if price_compare >= float(room_price):
price_compare = float(room_price)
for i in range(len(csv_data)):
a = csv_data[i][csv_header.index("latitude")]
b = csv_data[i][csv_header.index("longitude")]
if float(csv_data[i][csv_header.index("price")]) == price_compare and ((float(a) >= 40.50 and float(a) <= 40.75) and (float(b) >= -74.00 and float(b) <= -73.95)):
cheap_id.append(csv_data[i][csv_header.index("room_id")])
cheap_id
#q14
cheap_id = []
price_compare = 999999999
for i in range(len(csv_data)):
a = csv_data[i][csv_header.index("latitude")]
b = csv_data[i][csv_header.index("longitude")]
if float(a) >= 40.75 and float(a) <= 41.00:
if float(b) >= -73.95 and float(b) <= -73.85:
room_price = csv_data[i][csv_header.index("price")]
if price_compare >= float(room_price):
price_compare = float(room_price)
for i in range(len(csv_data)):
a = csv_data[i][csv_header.index("latitude")]
b = csv_data[i][csv_header.index("longitude")]
if float(csv_data[i][csv_header.index("price")]) == price_compare and ((float(a) >= 40.75 and float(a) <= 41.00) and (float(b) >= -73.95 and float(b) <= -73.85)):
cheap_id.append(csv_data[i][csv_header.index("room_id")])
cheap_id
#q15
sum_price = 0
count_number = 0
for i in range(len(csv_data)):
a = csv_data[i][csv_header.index("number_of_reviews")]
if float(a) > 300:
sum_price += int(csv_data[i][csv_header.index("price")])
count_number += 1
ave_price = sum_price/count_number
ave_price
#q16
sum_review = 0
count_number = 0
for i in range(len(csv_data)):
a = csv_data[i][csv_header.index("price")]
if float(a) > 1000:
sum_review += int(csv_data[i][csv_header.index("number_of_reviews")])
count_number += 1
ave_review = sum_review/count_number
ave_review
#q17
a = 0
b = 0
c = "sweet"
d = "home"
for i in range(len(csv_data)):
if c in csv_data[i][csv_header.index("name")].lower():
a += 1
if (c in csv_data[i][csv_header.index("name")].lower()) and (d in csv_data[i][csv_header.index("name")].lower()):
b += 1
special_ratio = review_ratio(a,b) * 100
special_ratio
#q18
a = 0
b = 0
c = "pool"
d = "gym"
for i in range(len(csv_data)):
if c in csv_data[i][csv_header.index("name")].lower():
a += 1
if (c in csv_data[i][csv_header.index("name")].lower()) and (d in csv_data[i][csv_header.index("name")].lower()):
b += 1
special_ratio = review_ratio(a,b) * 100
special_ratio
#q19
a = 0
b = 0
c = "five"
d = "stars"
for i in range(len(csv_data)):
if c in csv_data[i][csv_header.index("name")].lower():
a += 1
if (c in csv_data[i][csv_header.index("name")].lower()) and (d in csv_data[i][csv_header.index("name")].lower()):
b += 1
special_ratio = review_ratio(a,b) * 100
special_ratio
#q20
cheap_name = []
price_compare = 0
m_night = 0
price_compare1 = 0
b_night = 0
for i in range(len(csv_data)):
if csv_data[i][csv_header.index("neighborhood_group")] == "Manhattan":
room_price = csv_data[i][csv_header.index("price")]
if price_compare <= int(room_price):
price_compare = int(room_price)
for i in range(len(csv_data)):
if int(csv_data[i][csv_header.index("price")]) == price_compare and csv_data[i][csv_header.index("neighborhood_group")] == "Manhattan":
m_night = int(csv_data[i][csv_header.index("minimum_nights")])
for i in range(len(csv_data)):
if csv_data[i][csv_header.index("neighborhood_group")] == "Brooklyn":
room_price = csv_data[i][csv_header.index("price")]
if price_compare1 <= int(room_price):
price_compare1 = int(room_price)
for i in range(len(csv_data)):
    # compare against the Brooklyn maximum (price_compare1), not the Manhattan one
    if int(csv_data[i][csv_header.index("price")]) == price_compare1 and csv_data[i][csv_header.index("neighborhood_group")] == "Brooklyn":
b_night = int(csv_data[i][csv_header.index("minimum_nights")])
total_cost = m_night * price_compare + b_night * price_compare1
total_cost
```
|
github_jupyter
|
# project: p6
# submitter: zchen697
# partner: none
import csv
from math import *
from numpy import *
# copied from https://automatetheboringstuff.com/chapter14/
def process_csv(filename):
exampleFile = open(filename, encoding="utf-8")
exampleReader = csv.reader(exampleFile)
exampleData = list(exampleReader)
exampleFile.close()
return exampleData
# use process_csv to pull out the header and data rows
csv_rows = process_csv("airbnb.csv")
csv_header = csv_rows[0]
csv_data = csv_rows[1:]
#q1
neigroup = [x[csv_header.index("neighborhood_group")] for x in csv_rows]
#csv_header.index("neighborhood_group")
neigroupfinal = []
for i in neigroup:
if not i in neigroupfinal:
neigroupfinal.append(i)
print(i)
neigroupfinal.remove("neighborhood_group")
neigroupfinal
#q2
price_sum = 0
number_count = 0
pricecheck = [x[csv_header.index("price")] for x in csv_rows]
pricecheck.pop(0)
for i in pricecheck:
if i != "NULL":
price_sum += int(i)
number_count += 1
elif i == "NULL":
continue
ave = price_sum / number_count
ave
#q3
location_check = [x[csv_header.index("neighborhood")] for x in csv_rows]
location_check.pop(0)
china_count = 0
for i in location_check:
if i == "Chinatown":
china_count += 1
china_count
#q4
name_check = [x[csv_header.index("name")] for x in csv_rows]
name_check1 = [x[csv_header.index("name")] for x in csv_rows]
name_check.pop(0)
name_check = [x.lower() for x in name_check if isinstance(x,str)]
super_name = []
number1 = []
#number1.append(name_check.index("superbowl") for)
#print(name_check)
for index, i in enumerate(name_check):
if "superbowl" in i:
super_name.append(i)
number1.append(index)
final_name = []
for i in number1:
final_name.append(name_check1[i+1])
final_name
#q5
name_check = [x[csv_header.index("name")] for x in csv_rows]
name_check1 = [x[csv_header.index("name")] for x in csv_rows]
name_check.pop(0)
name_check = [x.lower() for x in name_check if isinstance(x,str)]
super_name = []
number1 = []
for index, i in enumerate(name_check):
if "dream room" in i:
super_name.append(i)
number1.append(index)
final_name = []
for i in number1:
final_name.append(name_check1[i+1])
final_name
#q6
name_check = [x[csv_header.index("name")] for x in csv_rows]
name_check1 = [x[csv_header.index("name")] for x in csv_rows]
name_check.pop(0)
name_check = [x.lower() for x in name_check if isinstance(x,str)]
super_name = []
number1 = []
for index, i in enumerate(name_check):
if "free wifi" in i:
super_name.append(i)
number1.append(index)
final_name = []
for i in number1:
final_name.append(name_check1[i+1])
final_name
def check_anagrams(a,b):
l1 = list(a)
l2 = list(b)
if len(a) != len(b):
return False
else:
for i in l1:
if i in l2:
l2.remove(i)
if l2:
return False
else:
return True
#q7
host_name = [x[csv_header.index("host_name")] for x in csv_rows]
host_name1 = [x[csv_header.index("host_name")] for x in csv_rows]
host_name.pop(0)
host_name = [x.lower() for x in host_name if isinstance(x,str)]
a = "landeyo"
host_result = []
position_host = []
for index,i in enumerate(host_name):
if check_anagrams(i,a):
position_host.append(index)
for i in position_host:
host_result.append(host_name1[i+1])
host_result
#q8
room_night = [x[csv_header.index("minimum_nights")] for x in csv_rows]
room_ids = [x[csv_header.index("room_id")] for x in csv_rows]
room_night.pop(0)
room_ids.pop(0)
room_index = []
result_id = []
for index,i in enumerate(room_night):
if int(i) > 365:
room_index.append(index)
for i in room_index:
result_id.append(room_ids[i])
result_id
#q9
room_count = [x[csv_header.index("calculated_host_listings_count")] for x in csv_rows]
host_ids = [x[csv_header.index("host_id")] for x in csv_rows]
room_count.pop(0)
host_ids.pop(0)
room_index = []
result_id = []
for index,i in enumerate(room_count):
if int(i) > 50:
room_index.append(index)
for i in room_index:
result_id.append(host_ids[i])
result_id = list(set(result_id))
result_id
#q10
cheap_name = []
price_compare = 999999999
for i in range(len(csv_data)):
if csv_data[i][csv_header.index("neighborhood_group")] == "Brooklyn":
room_price = csv_data[i][csv_header.index("price")]
if price_compare >= int(room_price):
price_compare = int(room_price)
for i in range(len(csv_data)):
if int(csv_data[i][csv_header.index("price")]) == price_compare and csv_data[i][csv_header.index("neighborhood_group")] == "Brooklyn":
cheap_name.append(csv_data[i][csv_header.index("name")])
cheap_name
#q11
cheap_name = []
price_compare = 999999999
for i in range(len(csv_data)):
if csv_data[i][csv_header.index("neighborhood_group")] == "Manhattan":
room_price = csv_data[i][csv_header.index("price")]
if price_compare >= int(room_price):
price_compare = int(room_price)
for i in range(len(csv_data)):
if int(csv_data[i][csv_header.index("price")]) == price_compare and csv_data[i][csv_header.index("neighborhood_group")] == "Manhattan":
cheap_name.append(csv_data[i][csv_header.index("name")])
cheap_name
def review_ratio(a,b):
a = float(a)
b = float(b)
c = b/a
return c
#q12
# Clarification: for this question I googled which library provides the "mean" function
ratio_result = []
for i in range(len(csv_data)):
x = csv_data[i][csv_header.index("availability_365")]
y = csv_data[i][csv_header.index("number_of_reviews")]
if int(x) == 0:
continue
else:
ratio_result.append(review_ratio(x,y))
ave_ratio = mean(ratio_result)
ave_ratio
def smaller_price(a,b):
return min(a,b)
#q13
cheap_id = []
price_compare = 999999999
for i in range(len(csv_data)):
a = csv_data[i][csv_header.index("latitude")]
b = csv_data[i][csv_header.index("longitude")]
if float(a) >= 40.50 and float(a) <= 40.75:
if float(b) >= -74.00 and float(b) <= -73.95:
room_price = csv_data[i][csv_header.index("price")]
if price_compare >= float(room_price):
price_compare = float(room_price)
for i in range(len(csv_data)):
a = csv_data[i][csv_header.index("latitude")]
b = csv_data[i][csv_header.index("longitude")]
if float(csv_data[i][csv_header.index("price")]) == price_compare and ((float(a) >= 40.50 and float(a) <= 40.75) and (float(b) >= -74.00 and float(b) <= -73.95)):
cheap_id.append(csv_data[i][csv_header.index("room_id")])
cheap_id
#q14
cheap_id = []
price_compare = 999999999
for i in range(len(csv_data)):
a = csv_data[i][csv_header.index("latitude")]
b = csv_data[i][csv_header.index("longitude")]
if float(a) >= 40.75 and float(a) <= 41.00:
if float(b) >= -73.95 and float(b) <= -73.85:
room_price = csv_data[i][csv_header.index("price")]
if price_compare >= float(room_price):
price_compare = float(room_price)
for i in range(len(csv_data)):
a = csv_data[i][csv_header.index("latitude")]
b = csv_data[i][csv_header.index("longitude")]
if float(csv_data[i][csv_header.index("price")]) == price_compare and ((float(a) >= 40.75 and float(a) <= 41.00) and (float(b) >= -73.95 and float(b) <= -73.85)):
cheap_id.append(csv_data[i][csv_header.index("room_id")])
cheap_id
#q15
sum_price = 0
count_number = 0
for i in range(len(csv_data)):
a = csv_data[i][csv_header.index("number_of_reviews")]
if float(a) > 300:
sum_price += int(csv_data[i][csv_header.index("price")])
count_number += 1
ave_price = sum_price/count_number
ave_price
#q16
sum_review = 0
count_number = 0
for i in range(len(csv_data)):
a = csv_data[i][csv_header.index("price")]
if float(a) > 1000:
sum_review += int(csv_data[i][csv_header.index("number_of_reviews")])
count_number += 1
ave_review = sum_review/count_number
ave_review
#q17
a = 0
b = 0
c = "sweet"
d = "home"
for i in range(len(csv_data)):
if c in csv_data[i][csv_header.index("name")].lower():
a += 1
if (c in csv_data[i][csv_header.index("name")].lower()) and (d in csv_data[i][csv_header.index("name")].lower()):
b += 1
special_ratio = review_ratio(a,b) * 100
special_ratio
#q18
a = 0
b = 0
c = "pool"
d = "gym"
for i in range(len(csv_data)):
if c in csv_data[i][csv_header.index("name")].lower():
a += 1
if (c in csv_data[i][csv_header.index("name")].lower()) and (d in csv_data[i][csv_header.index("name")].lower()):
b += 1
special_ratio = review_ratio(a,b) * 100
special_ratio
#q19
a = 0
b = 0
c = "five"
d = "stars"
for i in range(len(csv_data)):
if c in csv_data[i][csv_header.index("name")].lower():
a += 1
if (c in csv_data[i][csv_header.index("name")].lower()) and (d in csv_data[i][csv_header.index("name")].lower()):
b += 1
special_ratio = review_ratio(a,b) * 100
special_ratio
#q20
cheap_name = []
price_compare = 0
m_night = 0
price_compare1 = 0
b_night = 0
for i in range(len(csv_data)):
if csv_data[i][csv_header.index("neighborhood_group")] == "Manhattan":
room_price = csv_data[i][csv_header.index("price")]
if price_compare <= int(room_price):
price_compare = int(room_price)
for i in range(len(csv_data)):
if int(csv_data[i][csv_header.index("price")]) == price_compare and csv_data[i][csv_header.index("neighborhood_group")] == "Manhattan":
m_night = int(csv_data[i][csv_header.index("minimum_nights")])
for i in range(len(csv_data)):
if csv_data[i][csv_header.index("neighborhood_group")] == "Brooklyn":
room_price = csv_data[i][csv_header.index("price")]
if price_compare1 <= int(room_price):
price_compare1 = int(room_price)
for i in range(len(csv_data)):
    # compare against the Brooklyn maximum (price_compare1), not the Manhattan one
    if int(csv_data[i][csv_header.index("price")]) == price_compare1 and csv_data[i][csv_header.index("neighborhood_group")] == "Brooklyn":
b_night = int(csv_data[i][csv_header.index("minimum_nights")])
total_cost = m_night * price_compare + b_night * price_compare1
total_cost
| 0.141252 | 0.350866 |
# Noise2Self for Neural Nets
This is a simple notebook demonstrating the principle of using self-supervision to train denoising networks.
For didactic purposes, we use a simple dataset (Gaussian noise on MNIST), a simple model (a small UNet), and a short training (100 iterations on a CPU). This notebook runs on a MacBook Pro in under one minute.
```
%load_ext autoreload
%autoreload 2
import sys
sys.path.append("..")
from util import show, plot_images, plot_tensors
```
# Data
We demonstrate the use of a self-supervised denoising objective on a synthetically noised version of MNIST.
```
from torchvision.datasets import MNIST
from torchvision import transforms
from torch.utils.data import Dataset
mnist_train = MNIST('../data/MNIST', download = True,
transform = transforms.Compose([
transforms.ToTensor(),
]), train = True)
mnist_test = MNIST('../data/MNIST', download = True,
transform = transforms.Compose([
transforms.ToTensor(),
]), train = False)
from torch import randn
def add_noise(img):
return img + randn(img.size())*0.4
class SyntheticNoiseDataset(Dataset):
def __init__(self, data, mode='train'):
self.mode = mode
self.data = data
def __len__(self):
return len(self.data)
def __getitem__(self, index):
img = self.data[index][0]
return add_noise(img), img
noisy_mnist_train = SyntheticNoiseDataset(mnist_train, 'train')
noisy_mnist_test = SyntheticNoiseDataset(mnist_test, 'test')
```
We will try to learn to predict the clean image on the right from the noisy image on the left.
```
noisy, clean = noisy_mnist_train[0]
plot_tensors([noisy[0], clean[0]], ['Noisy Image', 'Clean Image'])
```
# Masking
The strategy is to train a $J$-invariant version of a neural net by replacing a grid of pixels with the average of their neighbors, then only evaluating the model on the masked pixels.
```
from mask import Masker
masker = Masker(width = 4, mode='interpolate')
net_input, mask = masker.mask(noisy.unsqueeze(0), 0)
```
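Conceptually, the interpolating masker hides every pixel on a coarse grid and fills it with an average of its neighbors, so the network input no longer depends on the true values at those positions. A minimal NumPy sketch of that idea (illustrative only; these function and variable names are not the ones used by the `Masker` class):
```
import numpy as np
from scipy.ndimage import convolve

def grid_mask(shape, width=4, phase=0):
    # Boolean mask selecting every `width`-th pixel, offset by `phase`
    mask = np.zeros(shape, dtype=bool)
    row_offset, col_offset = divmod(phase, width)
    mask[row_offset::width, col_offset::width] = True
    return mask

def interpolate_masked(img, mask):
    # Replace masked pixels by the average of their 8 neighbours
    kernel = np.array([[1., 1., 1.], [1., 0., 1.], [1., 1., 1.]]) / 8.0
    neighbour_avg = convolve(img, kernel, mode='mirror')
    out = img.copy()
    out[mask] = neighbour_avg[mask]
    return out

img = np.random.rand(28, 28)
net_in = interpolate_masked(img, grid_mask(img.shape, width=4, phase=0))
```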
Below we plot: a mask; the noisy data; the input to the neural net, which doesn't depend on the values of $x$ inside the mask; and the difference between the neural net input and $x$.
```
plot_tensors([mask, noisy[0], net_input[0], net_input[0] - noisy[0]],
["Mask", "Noisy Image", "Neural Net Input", "Difference"])
```
# Model
For our model, we use a small UNet with two levels of up- and down-sampling.
```
from models.babyunet import BabyUnet
model = BabyUnet()
```
# Training
```
from torch.nn import MSELoss
from torch.optim import Adam
from torch.utils.data import DataLoader
loss_function = MSELoss()
optimizer = Adam(model.parameters(), lr=0.001)
data_loader = DataLoader(noisy_mnist_train, batch_size=32, shuffle=True)
for i, batch in enumerate(data_loader):
noisy_images, clean_images = batch
net_input, mask = masker.mask(noisy_images, i)
net_output = model(net_input)
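    # Self-supervised loss: compare the prediction with the noisy target only on the
    # masked pixels, which the network never saw in its input.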
loss = loss_function(net_output*mask, noisy_images*mask)
optimizer.zero_grad()
loss.backward()
optimizer.step()
if i % 10 == 0:
print("Loss (", i, "): \t", round(loss.item(), 4))
if i == 100:
break
test_data_loader = DataLoader(noisy_mnist_test,
batch_size=32,
shuffle=False,
num_workers=3)
i, test_batch = next(enumerate(test_data_loader))
noisy, clean = test_batch
```
With our trained model, we have a choice. We may do a full $J$-invariant reconstruction, or we may just run the noisy data through the network unaltered.
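Roughly, a full $J$-invariant reconstruction runs one forward pass per mask phase and keeps, from each pass, only the pixels that were masked, so every output pixel is predicted without seeing its own noisy value. A sketch of that idea (the actual `infer_full_image` implementation may differ; the number of phases, `n_masks`, is an assumption corresponding to `width**2`):
```
import torch

def j_invariant_inference(noisy, model, masker, n_masks=16):
    # Accumulate, phase by phase, the predictions at the masked positions
    output = torch.zeros_like(noisy)
    with torch.no_grad():
        for phase in range(n_masks):
            net_input, mask = masker.mask(noisy, phase)
            output += model(net_input) * mask
    return output
```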
```
simple_output = model(noisy)
invariant_output = masker.infer_full_image(noisy, model)
idx = 3
plot_tensors([clean[idx], noisy[idx], simple_output[idx], invariant_output[idx]],
["Ground Truth", "Noisy Image", "Single Pass Inference", "J-Invariant Inference"])
print("Test loss, single pass: ", round(loss_function(clean, simple_output).item(), 3))
print("Test loss, J-invariant: ", round(loss_function(clean, invariant_output).item(), 3))
```
While both the single-pass and the J-invariant outputs are significantly denoised, the J-invariant output shows mild pixelation.
This is because neighboring pixels are denoised using different information, which leads to small discontinuities in the reconstructed output.
|
github_jupyter
|
%load_ext autoreload
%autoreload 2
import sys
sys.path.append("..")
from util import show, plot_images, plot_tensors
from torchvision.datasets import MNIST
from torchvision import transforms
from torch.utils.data import Dataset
mnist_train = MNIST('../data/MNIST', download = True,
transform = transforms.Compose([
transforms.ToTensor(),
]), train = True)
mnist_test = MNIST('../data/MNIST', download = True,
transform = transforms.Compose([
transforms.ToTensor(),
]), train = False)
from torch import randn
def add_noise(img):
return img + randn(img.size())*0.4
class SyntheticNoiseDataset(Dataset):
def __init__(self, data, mode='train'):
self.mode = mode
self.data = data
def __len__(self):
return len(self.data)
def __getitem__(self, index):
img = self.data[index][0]
return add_noise(img), img
noisy_mnist_train = SyntheticNoiseDataset(mnist_train, 'train')
noisy_mnist_test = SyntheticNoiseDataset(mnist_test, 'test')
noisy, clean = noisy_mnist_train[0]
plot_tensors([noisy[0], clean[0]], ['Noisy Image', 'Clean Image'])
from mask import Masker
masker = Masker(width = 4, mode='interpolate')
net_input, mask = masker.mask(noisy.unsqueeze(0), 0)
plot_tensors([mask, noisy[0], net_input[0], net_input[0] - noisy[0]],
["Mask", "Noisy Image", "Neural Net Input", "Difference"])
from models.babyunet import BabyUnet
model = BabyUnet()
from torch.nn import MSELoss
from torch.optim import Adam
from torch.utils.data import DataLoader
loss_function = MSELoss()
optimizer = Adam(model.parameters(), lr=0.001)
data_loader = DataLoader(noisy_mnist_train, batch_size=32, shuffle=True)
for i, batch in enumerate(data_loader):
noisy_images, clean_images = batch
net_input, mask = masker.mask(noisy_images, i)
net_output = model(net_input)
loss = loss_function(net_output*mask, noisy_images*mask)
optimizer.zero_grad()
loss.backward()
optimizer.step()
if i % 10 == 0:
print("Loss (", i, "): \t", round(loss.item(), 4))
if i == 100:
break
test_data_loader = DataLoader(noisy_mnist_test,
batch_size=32,
shuffle=False,
num_workers=3)
i, test_batch = next(enumerate(test_data_loader))
noisy, clean = test_batch
simple_output = model(noisy)
invariant_output = masker.infer_full_image(noisy, model)
idx = 3
plot_tensors([clean[idx], noisy[idx], simple_output[idx], invariant_output[idx]],
["Ground Truth", "Noisy Image", "Single Pass Inference", "J-Invariant Inference"])
print("Test loss, single pass: ", round(loss_function(clean, simple_output).item(), 3))
print("Test loss, J-invariant: ", round(loss_function(clean, invariant_output).item(), 3))
| 0.592549 | 0.987568 |
<center>
<img src="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DV0101EN-SkillsNetwork/labs/Module%204/logo.png" width="300" alt="cognitiveclass.ai logo" />
</center>
# Basic Plotly Charts
Estimated time needed: 30 minutes
## Objectives
In this lab, you will learn about creating plotly charts using plotly.graph_objects and plotly.express.
Learn more about:
* [Plotly python](https://plotly.com/python/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDV0101ENSkillsNetwork20297740-2021-01-01)
* [Plotly Graph Objects](https://plotly.com/python/graph-objects/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDV0101ENSkillsNetwork20297740-2021-01-01)
* [Plotly Express](https://plotly.com/python/plotly-express/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDV0101ENSkillsNetwork20297740-2021-01-01)
* Handling data using [Pandas](https://pandas.pydata.org/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDV0101ENSkillsNetwork20297740-2021-01-01)
We will be using the [airline dataset](https://developer.ibm.com/exchanges/data/all/airline/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDV0101ENSkillsNetwork20297740-2021-01-01) from [Data Asset eXchange](https://developer.ibm.com/exchanges/data/).
#### Airline Reporting Carrier On-Time Performance Dataset
The Reporting Carrier On-Time Performance Dataset contains information on approximately 200 million domestic US flights reported to the United States Bureau of Transportation Statistics. The dataset contains basic information about each flight (such as date, time, departure airport, arrival airport) and, if applicable, the amount of time the flight was delayed and information about the reason for the delay. This dataset can be used to predict the likelihood of a flight arriving on time.
Preview data, dataset metadata, and data glossary [here.](https://dax-cdn.cdn.appdomain.cloud/dax-airline/1.0.1/data-preview/index.html)
```
# Import required libraries
import pandas as pd
import plotly.express as px
import plotly.graph_objects as go
```
# Read Data
```
# Read the airline data into pandas dataframe
airline_data = pd.read_csv('https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DV0101EN-SkillsNetwork/Data%20Files/airline_data.csv',
encoding = "ISO-8859-1",
dtype={'Div1Airport': str, 'Div1TailNum': str,
'Div2Airport': str, 'Div2TailNum': str})
# Preview the first 5 lines of the loaded data
airline_data.head()
# Shape of the data
airline_data.shape
# Randomly sample 500 data points. Setting the random state to be 42 so that we get same result.
data = airline_data.sample(n=500, random_state=42)
# Get the shape of the trimmed data
data.shape
```
### Lab structure
#### plotly.graph_objects
1. Review scatter plot creation
Theme: How departure time changes with respect to airport distance
2. **To do** - Create line plot
Theme: Extract average monthly delay time and see how it changes over the year
#### plotly.express
1. Review bar chart creation
Theme: Extract number of flights from a specific airline that goes to a destination
2. **To do** - Create bubble chart
Theme: Get number of flights as per reporting airline
3. **To do** - Create histogram
Theme: Get distribution of arrival delay
4. Review pie chart
Theme: Proportion of distance group by month (month indicated by numbers)
5. **To do** - Create sunburst chart
Theme: Hierarchical view in the order of month and destination state, holding the number of flights as values
# plotly.graph_objects¶
## 1. Scatter Plot
Learn more about usage of scatter plot [here](https://plotly.com/python/line-and-scatter/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDV0101ENSkillsNetwork20297740-2021-01-01)
#### Idea: How departure time changes with respect to airport distance
```
# First we create a figure using go.Figure and adding trace to it through go.scatter
fig = go.Figure(data=go.Scatter(x=data['Distance'], y=data['DepTime'], mode='markers', marker=dict(color='red')))
# Updating layout through `update_layout`. Here we are adding title to the plot and providing title to x and y axis.
fig.update_layout(title='Distance vs Departure Time', xaxis_title='Distance', yaxis_title='DepTime')
# Display the figure
fig.show()
```
## 2. Line Plot
Learn more about line plot [here](https://plotly.com/python/line-charts/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDV0101ENSkillsNetwork20297740-2021-01-01)
#### Idea: Extract average monthly arrival delay time and see how it changes over the year.
```
# Group the data by Month and compute average over arrival delay time.
line_data = data.groupby('Month')['ArrDelay'].mean().reset_index()
# Display the data
line_data
```
#### To do:
* Create a line plot with the month on the x-axis and the computed average delay time on the y-axis. Update the plot title and the x-axis and y-axis titles.
* Hint: Scatter and line plot vary by updating mode parameter.
```
fig = go.Figure(data=go.Scatter(x=line_data['Month'], y=line_data['ArrDelay'], mode='lines', marker=dict(color='red')))
fig.update_layout(title='Month vs Average Flight Delay Time', xaxis_title='Month', yaxis_title='ArrDelay')
fig.show()
```
Double-click **here** for the solution.
<!-- The answer is below:
fig = go.Figure(data=go.Scatter(x=line_data['Month'], y=line_data['ArrDelay'], mode='lines', marker=dict(color='green')))
fig.update_layout(title='Month vs Average Flight Delay Time', xaxis_title='Month', yaxis_title='ArrDelay')
fig.show()
-->
# plotly.express¶
## 1. Bar Chart
Learn more about bar chart [here](https://plotly.com/python/bar-charts/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDV0101ENSkillsNetwork20297740-2021-01-01)
#### Idea: Extract number of flights from a specific airline that goes to a destination
```
# Group the data by destination state and reporting airline. Compute total number of flights in each combination
bar_data = data.groupby(['DestState'])['Flights'].sum().reset_index()
# Display the data
bar_data
# Use plotly express bar chart function px.bar. Provide input data, x and y axis variable, and title of the chart.
# This will give total number of flights to the destination state.
fig = px.bar(bar_data, x="DestState", y="Flights", title='Total number of flights to the destination state split by reporting airline')
fig.show()
```
## 2. Bubble Chart
Learn more about bubble chart [here](https://plotly.com/python/bubble-charts/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDV0101ENSkillsNetwork20297740-2021-01-01)
#### Idea: Get number of flights as per reporting airline
```
# Group the data by reporting airline and get number of flights
bub_data = data.groupby('Reporting_Airline')['Flights'].sum().reset_index()
bub_data
```
**To do**
* Create a bubble chart using the `bub_data` with x-axis being reporting airline and y-axis being flights.
* Provide title to the chart
* Update size of the bubble based on the number of flights. Use `size` parameter.
* Update name of the hover tooltip to `reporting_airline` using `hover_name` parameter.
```
# Create bubble chart here
fig = px.scatter(bub_data, x="Reporting_Airline", y="Flights", size="Flights",
hover_name="Reporting_Airline", title='Reporting Airline vs Number of Flights', size_max=60)
fig.show()
```
Double-click **here** for the solution.
<!-- The answer is below:
fig = px.scatter(bub_data, x="Reporting_Airline", y="Flights", size="Flights",
hover_name="Reporting_Airline", title='Reporting Airline vs Number of Flights', size_max=60)
fig.show()
-->
# Histogram
Learn more about histogram [here](https://plotly.com/python/histograms/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDV0101ENSkillsNetwork20297740-2021-01-01)
#### Idea: Get distribution of arrival delay
```
# Set missing values to 0
data['ArrDelay'] = data['ArrDelay'].fillna(0)
```
**To do**
* Use px.histogram and pass the dataset.
* Pass `ArrDelay` to x parameter.
```
# Create histogram here
fig = px.histogram(data, x="ArrDelay")
fig.show()
```
Double-click **here** for the solution.
<!-- The answer is below:
fig = px.histogram(data, x="ArrDelay")
fig.show()
-->
# Pie Chart
Learn more about pie chart [here](https://plotly.com/python/pie-charts/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDV0101ENSkillsNetwork20297740-2021-01-01)
#### Idea: Proportion of distance group by month (month indicated by numbers)
```
# Use px.pie function to create the chart. Input dataset.
# Values parameter will set values associated to the sector. 'Month' feature is passed to it.
# labels for the sector are passed to the `names` parameter.
fig = px.pie(data, values='Month', names='DistanceGroup', title='Distance group proportion by month')
fig.show()
```
# Sunburst Charts
Learn more about sunburst chart [here](https://plotly.com/python/sunburst-charts/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDV0101ENSkillsNetwork20297740-2021-01-01)
#### Idea: Hierarchical view in the order of month and destination state, holding the number of flights as values
**To do**
* Create sunburst chart using `px.sunburst`.
* Define hierarchy of sectors from root to leaves in `path` parameter. Here, we go from `Month` to `DestStateName` feature.
* Set sector values with the `values` parameter. Here, we can pass in the `Flights` feature.
* Show the figure.
```
# Create sunburst chart here
fig = px.sunburst(data, path=['Month', 'DestStateName'], values='Flights')
fig.show()
```
Double-click **here** for the solution.
<!-- The answer is below:
fig = px.sunburst(data, path=['Month', 'DestStateName'], values='Flights')
fig.show()
-->
## Summary
Congratulations on completing your first lab.
In this lab, you have learnt how to use `plotly.graph_objects` and `plotly.express` for creating plots and charts.
## Author
[Saishruthi Swaminathan](https://www.linkedin.com/in/saishruthi-swaminathan/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDV0101ENSkillsNetwork20297740-2021-01-01)
## Changelog
| Date | Version | Changed by | Change Description |
| ---------- | ------- | ---------- | ------------------------------------ |
| 12-18-2020 | 1.0 | Nayef | Added dataset link and upload to Git |
## <h3 align="center"> © IBM Corporation 2020. All rights reserved. </h3>
|
github_jupyter
|
# Import required libraries
import pandas as pd
import plotly.express as px
import plotly.graph_objects as go
# Read the airline data into pandas dataframe
airline_data = pd.read_csv('https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DV0101EN-SkillsNetwork/Data%20Files/airline_data.csv',
encoding = "ISO-8859-1",
dtype={'Div1Airport': str, 'Div1TailNum': str,
'Div2Airport': str, 'Div2TailNum': str})
# Preview the first 5 lines of the loaded data
airline_data.head()
# Shape of the data
airline_data.shape
# Randomly sample 500 data points. Setting the random state to be 42 so that we get same result.
data = airline_data.sample(n=500, random_state=42)
# Get the shape of the trimmed data
data.shape
# First we create a figure using go.Figure and adding trace to it through go.scatter
fig = go.Figure(data=go.Scatter(x=data['Distance'], y=data['DepTime'], mode='markers', marker=dict(color='red')))
# Updating layout through `update_layout`. Here we are adding title to the plot and providing title to x and y axis.
fig.update_layout(title='Distance vs Departure Time', xaxis_title='Distance', yaxis_title='DepTime')
# Display the figure
fig.show()
# Group the data by Month and compute average over arrival delay time.
line_data = data.groupby('Month')['ArrDelay'].mean().reset_index()
# Display the data
line_data
fig = go.Figure(data=go.Scatter(x=line_data['Month'], y=line_data['ArrDelay'], mode='lines', marker=dict(color='red')))
fig.update_layout(title='Month vs Average Flight Delay Time', xaxis_title='Month', yaxis_title='ArrDelay')
fig.show()
# Group the data by destination state and reporting airline. Compute total number of flights in each combination
bar_data = data.groupby(['DestState'])['Flights'].sum().reset_index()
# Display the data
bar_data
# Use plotly express bar chart function px.bar. Provide input data, x and y axis variable, and title of the chart.
# This will give total number of flights to the destination state.
fig = px.bar(bar_data, x="DestState", y="Flights", title='Total number of flights to the destination state split by reporting airline')
fig.show()
# Group the data by reporting airline and get number of flights
bub_data = data.groupby('Reporting_Airline')['Flights'].sum().reset_index()
bub_data
# Create bubble chart here
fig = px.scatter(bub_data, x="Reporting_Airline", y="Flights", size="Flights",
hover_name="Reporting_Airline", title='Reporting Airline vs Number of Flights', size_max=60)
fig.show()
# Set missing values to 0
data['ArrDelay'] = data['ArrDelay'].fillna(0)
# Create histogram here
fig = px.histogram(data, x="ArrDelay")
fig.show()
# Use px.pie function to create the chart. Input dataset.
# Values parameter will set values associated to the sector. 'Month' feature is passed to it.
# labels for the sector are passed to the `names` parameter.
fig = px.pie(data, values='Month', names='DistanceGroup', title='Distance group proportion by month')
fig.show()
# Create sunburst chart here
fig = px.sunburst(data, path=['Month', 'DestStateName'], values='Flights')
fig.show()
| 0.804098 | 0.925365 |
# Session 4 – 03/11/2020: OOP <small>classes, objects, thingies</small>
Shamelessly lifted from https://loicgrobol.github.io/python-im/m2-2018/ (Loïc Grobol)
# In the beginning
In the beginning there were variables
```
x = 27
```
They sometimes represented sophisticated concepts
```
import math
point_1 = (27, 13)
point_2 = (19, 84)
def distance(p1, p2):
return math.sqrt((p2[0]-p1[0])**2+(p2[1]-p1[1])**2)
distance(point_1, point_2)
```
And that was painful to write and to understand
To simplify, we can name the data held in variables, for example with a `dict`
```
point_1 = {'x': 27, 'y': 13}
point_2 = {'x': 19, 'y': 84}
def distance(p1, p2):
return math.sqrt((p2['x']-p1['x'])**2+(p2['y']-p1['y'])**2)
distance(point_1, point_2)
```
And it is still just as painful to write, but a little less painful to read
We can get a nicer syntax by using named tuples
```
from collections import namedtuple
Point = namedtuple('Point', ('x', 'y'))
point_1 = Point(27, 13)
point_2 = Point(19, 84)
def distance(p1, p2):
return math.sqrt((p2.x-p1.x)**2+(p2.y-p1.y)**2)
distance(point_1, point_2)
```
There you go, the course is over, have a good holiday.
## Could do better
- The things created via `namedtuple` are what we call *records* (*struct*s in C)
- They make it possible to group together, in a readable way, data that belong together
  - The x and y coordinates of a point
  - The year, month and day of a date
  - ~~Signifier and signified~~ A person's first and last name
- Using them (like everything else, for that matter) is **optional**: people get by just fine in assembly
- But they make the code much more readable (and writable)
- And they are backward compatible with regular tuples
```
def mon_max(lst):
"""Renvoie le maximum d'une liste et son indice."""
res, arg_res = lst[0], 0
for i, e in enumerate(lst[1:], start=1):
if e > res:
res = e
arg_res = i
return res, arg_res
def bidule(lst1, lst2):
return lst2[mon_max(lst1)[1]]
bidule([2,7,1,3], [1,2,4,8])
```
If we convert `mon_max` to return a named tuple, we can keep using `bidule`
```
MaxRet = namedtuple('MaxRet', ('value', 'indice'))
def mon_max(lst):
"""Renvoie le maximum d'une liste et son indice."""
res, arg_res = lst[0], 0
for i, e in enumerate(lst[1:], start=1):
if e > res:
res = e
arg_res = i
return MaxRet(res, arg_res)
def bidule(lst1, lst2):
"""Renvoie la valeur de lst2 à l'indice où lst1 atteint son max"""
return lst2[mon_max(lst1)[1]]
bidule([2,7,1,3], [1,2,4,8])
```
You are **strongly** encouraged to use named tuples when you write a function that returns several values.
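As a small aside (not part of the original course), the same record can also be declared with `typing.NamedTuple`, which adds type annotations:
```
from typing import NamedTuple

class MaxRet(NamedTuple):
    value: int
    indice: int

def mon_max(lst):
    # Same behaviour as above, but the return type is now annotated
    res, arg_res = lst[0], 0
    for i, e in enumerate(lst[1:], start=1):
        if e > res:
            res, arg_res = e, i
    return MaxRet(res, arg_res)

print(mon_max([2, 7, 1, 3]))  # MaxRet(value=7, indice=1)
```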
```
Vecteur = namedtuple('Vecteur', ('x', 'y'))
v1 = Vecteur(27, 13)
v2 = Vecteur(1, 0)
def norm(v):
return math.sqrt(v.x**2 + v.y**2)
def is_unit(v):
return norm(v) == 1
print(is_unit(v1))
print(is_unit(v2))
```
That is fairly readable
But what if I also want to handle 3D
```
Vecteur3D = namedtuple('Vecteur3D', ('x', 'y', 'z'))
u1 = Vecteur3D(27, 13, 6)
u2 = Vecteur3D(1, 0, 0)
def norm3d(v):
return math.sqrt(v.x**2 + v.y**2 + v.z**2)
def is_unit3d(v):
return norm3d(v) == 1
print(is_unit3d(u1))
print(is_unit3d(u2))
```
Rewriting the same code like this is dreadfully tedious.
Another solution
```
def norm(v):
if isinstance(v, Vecteur3D):
return math.sqrt(v.x**2 + v.y**2 + v.z**2)
elif isinstance(v, Vecteur):
return math.sqrt(v.x**2 + v.y**2)
else:
raise ValueError('Type non supporté')
def is_unit(v):
return norm(v) == 1
print(is_unit(v1))
print(is_unit(u2))
```
It is a little better, but still not great. (Even though we could have found a smarter solution)
## Those famous objects
One of the ways to do better is to move up a gear: objects.
It will be a little more unpleasant at first, and then much more pleasant.
```
import math
class Vecteur:
def __init__(self, x, y):
self.x = x
self.y = y
def norm(self):
return math.sqrt(self.x**2 + self.y**2)
v1 = Vecteur(27, 13)
v2 = Vecteur(1, 0)
v1.x
#print(v2.norm())
class Vecteur3D:
def __init__(self, x, y, z):
self.x = x
self.y = y
self.z = z
def norm(self):
return math.sqrt(self.x**2 + self.y**2 + self.z**2)
u1 = Vecteur3D(27, 13, 6)
u2 = Vecteur3D(1, 0, 0)
print(u1.norm())
print(u2.norm())
def is_unit(v):
return v.norm() == 1
print(is_unit(v1))
print(is_unit(u2))
```
The right `norm` function is chosen automagically
Let's recap
- An object is a thing that groups together
  - Data (called *attributes* or *properties*)
  - Functions (called *methods*)
- This lets you organise your code in a more readable and more easily reusable way (take my word for it)
And you have already met plenty of them
```
print(type('abc'))
print('abc'.islower())
```
Because in Python, everything is an object. Which does not mean you are forced to pay attention to it…
## OOP
Object-oriented programming (OOP) is a way of programming that differs from the procedural programming seen so far.
- The basic tools are objects and classes
- One concept → one class, one concrete realisation → one object
It is a particular way of solving problems, called a *paradigm*, and there are others
- Functional: the basic tools are functions
- Imperative: the basic tools are control structures (loops, tests…)
Python is one of the multi-paradigm languages: we use whatever is most convenient, which does not exactly please the purists, but
« *We are all consenting adults here* »
## Classes
* A class is defined using the `class` keyword
* By convention, class names are written with capital letters (CapWords convention)
```
class Word:
""" Classe Word : définit un mot de la langue """
pass
```
To create an object, we simply call its class like a function
```
word1 = Word()
print(type(word1)) # renvoie la classe qu'instancie l'objet
```
We say that `word1` is an *instance* of the class `Word`
And it already has attributes and methods
```
word1.__doc__
print(dir(word1))
```
And also a unique identifier
```
id(word1)
word2 = Word()
id(word2)
```
## Constructor and attributes
* There is a special method `__init__()` that is automatically called when an object is created. This is the constructor
* The constructor is used to give the object an initial state, for example by giving it attributes
* The attributes in the example below are variables specific to an object, an instance
```
class Word:
""" Classe Word : définit un mot de la langue """
def __init__(self, form, lemma, pos):
self.form = form
self.lemma = lemma
self.pos = pos
word = Word('été', 'être', 'V')
word.lemma
word2 = Word('été', 'été', 'NOM')
word2.lemma
```
## Methods
* The methods of a class are functions. They describe which actions an object can carry out; they can give information about the object or modify it.
* By convention, their first parameter is named `self`; it refers to the object itself.
```
class Word:
""" Classe Word : définit un mot simple de la langue """
def __init__(self, form, lemma, pos):
self.form = form
self.lemma = lemma
self.pos = pos
def __repr__(self):
return f"{self.form}"
def brown_string(self):
return f"{self.form}/{self.lemma}/{self.pos}"
def is_inflected(self):
"""
        Returns True if the word is inflected
False otherwise
"""
if self.form != self.lemma:
return True
else:
return False
w = Word('orientales', 'oriental', 'adj')
print(w)
```
Why `self`? Because writing `w.is_inflected()` is syntactic sugar for
```
Word.is_inflected(w)
```
# Inheritance
```
class Cake:
""" un beau gâteau """
def __init__(self, farine, oeuf, beurre):
self.farine = farine
self.oeuf = oeuf
self.beurre = beurre
self.poids = self.farine + self.oeuf*50 + self.beurre
def is_trop_gras(self):
if self.farine + self.beurre > 500:
return True
else:
return False
def cuire(self):
return self.beurre / self.oeuf
gateau = Cake(200, 3, 800)
gateau.poids
```
Cake is the parent class.
The child classes inherit its methods and its attributes.
This makes it possible to factor out code and to avoid repetition and the errors that come with it.
```
class CarrotCake(Cake):
""" pas seulement pour les lapins
hérite de Cake """
carotte = 3
def cuire(self):
return self.carotte * self.oeuf
class ChocolateCake(Cake):
""" LE gâteau
hérite de Cake """
def is_trop_gras(self):
return False
gato_carotte = CarrotCake(200, 3, 150)
gato_carotte.cuire()
gato_2 = ChocolateCake(200, 6, 600)
gato_2.is_trop_gras()
```
Inheritance should be used sparingly. On the other hand, composition, i.e. using objects of other classes as attributes, is something you should happily use. See https://python-patterns.guide/gang-of-four/composition-over-inheritance/
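As a tiny illustration of composition with the classes above (a hypothetical `Bakery` class, not part of the course material): the bakery *has* cakes instead of *being* one, so it reuses `Cake` without inheriting from it.
```
class Bakery:
    """Owns Cake objects instead of inheriting from Cake (composition)."""
    def __init__(self):
        self.cakes = []

    def bake(self, farine, oeuf, beurre):
        cake = Cake(farine, oeuf, beurre)  # reuse Cake through composition
        self.cakes.append(cake)
        return cake

    def total_weight(self):
        return sum(cake.poids for cake in self.cakes)

fournil = Bakery()
fournil.bake(200, 3, 150)   # poids = 200 + 150 + 150 = 500
fournil.bake(250, 4, 100)   # poids = 250 + 200 + 100 = 550
print(fournil.total_weight())  # 1050
```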
### ☕ Exercises ☕
Write a `Sentence` class and a `Word` class that will represent the data of a UD file (https://universaldependencies.org/).
You will also write a program that takes a .conll file as an argument and instantiates as many Sentence and Word objects as needed (a possible skeleton is sketched below).
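One possible starting skeleton for this first exercise (only a hint, with illustrative attribute choices; it is not the expected solution):
```
class Word:
    """One token line of a CoNLL-U file (10 tab-separated fields)."""
    def __init__(self, fields):
        self.id, self.form, self.lemma, self.upos = fields[0], fields[1], fields[2], fields[3]

class Sentence:
    """The word lines found between two blank lines."""
    def __init__(self, words):
        self.words = words

def read_conllu(path):
    sentences, words = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:                    # a blank line ends the sentence
                if words:
                    sentences.append(Sentence(words))
                    words = []
            elif not line.startswith("#"):  # skip comment lines
                words.append(Word(line.split("\t")))
    if words:
        sentences.append(Sentence(words))
    return sentences
```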
### ☕ Exercises ☕
Write a script that takes three arguments: a directory of conllu files, a string giving the mode (form or pos) and an integer (n between 2 and 4).
Your script will compute the frequencies of the n-grams (where the value of n is passed as an argument) in the files of the directory. Two computation modes: by form or by pos.
I only want your script, not the data or the results.
|
github_jupyter
|
x = 27
import math
point_1 = (27, 13)
point_2 = (19, 84)
def distance(p1, p2):
return math.sqrt((p2[0]-p1[0])**2+(p2[1]-p1[1])**2)
distance(point_1, point_2)
point_1 = {'x': 27, 'y': 13}
point_2 = {'x': 19, 'y': 84}
def distance(p1, p2):
return math.sqrt((p2['x']-p1['x'])**2+(p2['y']-p1['y'])**2)
distance(point_1, point_2)
from collections import namedtuple
Point = namedtuple('Point', ('x', 'y'))
point_1 = Point(27, 13)
point_2 = Point(19, 84)
def distance(p1, p2):
return math.sqrt((p2.x-p1.x)**2+(p2.y-p1.y)**2)
distance(point_1, point_2)
def mon_max(lst):
"""Renvoie le maximum d'une liste et son indice."""
res, arg_res = lst[0], 0
for i, e in enumerate(lst[1:], start=1):
if e > res:
res = e
arg_res = i
return res, arg_res
def bidule(lst1, lst2):
return lst2[mon_max(lst1)[1]]
bidule([2,7,1,3], [1,2,4,8])
MaxRet = namedtuple('MaxRet', ('value', 'indice'))
def mon_max(lst):
"""Renvoie le maximum d'une liste et son indice."""
res, arg_res = lst[0], 0
for i, e in enumerate(lst[1:], start=1):
if e > res:
res = e
arg_res = i
return MaxRet(res, arg_res)
def bidule(lst1, lst2):
"""Renvoie la valeur de lst2 à l'indice où lst1 atteint son max"""
return lst2[mon_max(lst1)[1]]
bidule([2,7,1,3], [1,2,4,8])
Vecteur = namedtuple('Vecteur', ('x', 'y'))
v1 = Vecteur(27, 13)
v2 = Vecteur(1, 0)
def norm(v):
return math.sqrt(v.x**2 + v.y**2)
def is_unit(v):
return norm(v) == 1
print(is_unit(v1))
print(is_unit(v2))
Vecteur3D = namedtuple('Vecteur3D', ('x', 'y', 'z'))
u1 = Vecteur3D(27, 13, 6)
u2 = Vecteur3D(1, 0, 0)
def norm3d(v):
return math.sqrt(v.x**2 + v.y**2 + v.z**2)
def is_unit3d(v):
return norm3d(v) == 1
print(is_unit3d(u1))
print(is_unit3d(u2))
def norm(v):
if isinstance(v, Vecteur3D):
return math.sqrt(v.x**2 + v.y**2 + v.z**2)
elif isinstance(v, Vecteur):
return math.sqrt(v.x**2 + v.y**2)
else:
raise ValueError('Type non supporté')
def is_unit(v):
return norm(v) == 1
print(is_unit(v1))
print(is_unit(u2))
import math
class Vecteur:
def __init__(self, x, y):
self.x = x
self.y = y
def norm(self):
return math.sqrt(self.x**2 + self.y**2)
v1 = Vecteur(27, 13)
v2 = Vecteur(1, 0)
v1.x
#print(v2.norm())
class Vecteur3D:
def __init__(self, x, y, z):
self.x = x
self.y = y
self.z = z
def norm(self):
return math.sqrt(self.x**2 + self.y**2 + self.z**2)
u1 = Vecteur3D(27, 13, 6)
u2 = Vecteur3D(1, 0, 0)
print(u1.norm())
print(u2.norm())
def is_unit(v):
return v.norm() == 1
print(is_unit(v1))
print(is_unit(u2))
print(type('abc'))
print('abc'.islower())
class Word:
""" Classe Word : définit un mot de la langue """
pass
word1 = Word()
print(type(word1)) # renvoie la classe qu'instancie l'objet
word1.__doc__
print(dir(word1))
id(word1)
word2 = Word()
id(word2)
class Word:
""" Classe Word : définit un mot de la langue """
def __init__(self, form, lemma, pos):
self.form = form
self.lemma = lemma
self.pos = pos
word = Word('été', 'être', 'V')
word.lemma
word2 = Word('été', 'été', 'NOM')
word2.lemma
class Word:
""" Classe Word : définit un mot simple de la langue """
def __init__(self, form, lemma, pos):
self.form = form
self.lemma = lemma
self.pos = pos
def __repr__(self):
return f"{self.form}"
def brown_string(self):
return f"{self.form}/{self.lemma}/{self.pos}"
def is_inflected(self):
"""
        Returns True if the word is inflected
False otherwise
"""
if self.form != self.lemma:
return True
else:
return False
w = Word('orientales', 'oriental', 'adj')
print(w)
Word.is_inflected(w)
class Cake:
""" un beau gâteau """
def __init__(self, farine, oeuf, beurre):
self.farine = farine
self.oeuf = oeuf
self.beurre = beurre
self.poids = self.farine + self.oeuf*50 + self.beurre
def is_trop_gras(self):
if self.farine + self.beurre > 500:
return True
else:
return False
def cuire(self):
return self.beurre / self.oeuf
gateau = Cake(200, 3, 800)
gateau.poids
class CarrotCake(Cake):
""" pas seulement pour les lapins
hérite de Cake """
carotte = 3
def cuire(self):
return self.carotte * self.oeuf
class ChocolateCake(Cake):
""" LE gâteau
hérite de Cake """
def is_trop_gras(self):
return False
gato_carotte = CarrotCake(200, 3, 150)
gato_carotte.cuire()
gateau.cuire()
gato_2 = ChocolateCake(200, 6, 600)
gato_2.is_trop_gras()
```
# Python packages
from collections import namedtuple
import json
# Scipy stack packages installable from conda
import pandas
import numpy as np
import matplotlib.pyplot as plt
import scipy.optimize
# Pip installable packages
import CoolProp.CoolProp as CP
import cCOSMO
db = cCOSMO.VirginiaTechProfileDatabase(
"profiles/VT2005/Sigma_Profile_Database_Index_v2.txt",
"profiles/VT2005/Sigma_Profiles_v2/")
identifiers = [ "0438", "0087" ]
for iden in identifiers:
db.add_profile(iden)
prof = db.get_profile(iden)
print(prof.name)
COSMO = cCOSMO.COSMO1(identifiers, db)
T = 623.15;
z = np.array([0.235, 1-0.235])
%timeit COSMO.get_lngamma(T, z)
print(COSMO.get_lngamma(T, z))
# Print out what fluids are available in the database as a pandas DataFrame
j = json.loads(db.to_JSON()) # This is a dict, from numstring to info
j = [v for k,v in j.items()] # Remove the keys
pandas.DataFrame(j).set_index('VTIndex')
import cCOSMO
import numpy as np
db = cCOSMO.VirginiaTechProfileDatabase(
"profiles/VT2005/Sigma_Profile_Database_Index_v2.txt",
"profiles/VT2005/Sigma_Profiles_v2/")
identifiers = ["0006", "0438"]
for iden in identifiers:
db.add_profile(iden)
prof = db.get_profile(iden)
print(prof.name)
COSMO = cCOSMO.COSMO1(identifiers, db)
T = 298.15;
z = np.array([0, 1])
COSMO.get_lngamma_comb(T,z,0)
# %timeit COSMO.get_psigma_mix(z)
# psigma_mix = COSMO.get_psigma_mix(z)
# %timeit COSMO.get_Gamma(T, psigma_mix)
# %timeit COSMO.get_lngamma_resid(0, T, z)
# %timeit COSMO.get_lngamma(T, z)
# print(COSMO.get_lngamma(T, z))
%matplotlib inline
class _fac(object):
def __init__(self, j):
assert(j['using_tau_r'])
self.T_r = j['T_r']
self.T_max = j['Tmax']
self.T_min = j['Tmin']
self.n = np.array(j['n'])
self.t = np.array(j['t'])
self.reducing_value = j['reducing_value']
def psat(self, T):
theta = 1-T/self.T_r
RHS = np.dot(self.n,theta**self.t)
return self.reducing_value*np.exp(self.T_r/T*np.sum(RHS))
def dpsat_dT(self, T):
im = 0+1j; h = 1e-10
return (self.psat(T+im*h)/h).imag # Complex step derivative
def psat_factory(fluid):
# Get the JSON structure from CoolProp
pS = json.loads(CP.get_fluid_param_string(fluid,'JSON'))[0]['ANCILLARIES']['pS']
return _fac(pS)
COSMOIsoline = namedtuple('COSMOIsoline', ['T','p','x0L','x0V'])
def get_isotherm(fluids, T):
COSMO = cCOSMO.COSMO1(fluids, db)
assert(len(fluids)==2)
psats = [psat_factory(name) for name in fluids]
TT,PP,X0L,X0V = [],[],[],[]
for x0L in np.linspace(1e-6,1-1e-6):
xL = [x0L, 1-x0L]
gamma = np.exp(COSMO.get_lngamma(T, xL))
p = gamma[0]*xL[0]*psats[0].psat(T) + gamma[1]*xL[1]*psats[1].psat(T)
x0V = gamma[0]*xL[0]*psats[0].psat(T)/p
TT.append(T); PP.append(p); X0L.append(x0L); X0V.append(x0V)
return COSMOIsoline(TT, np.array(PP), np.array(X0L), X0V)
def get_isobar(fluids, p_Pa, Tguess):
psats = [psat_factory(name) for name in fluids]
assert(len(fluids)==2)
TT,PP,X0L,X0V = [],[],[],[]
for x0L in np.linspace(1e-6,1-1e-6):
xL = [x0L, 1-x0L]
def resid(T):
gamma = np.exp(COSMO.get_lngamma(T, xL))
pcalc = gamma[0]*xL[0]*psats[0].psat(T) + gamma[1]*xL[1]*psats[1].psat(T)
return np.abs(pcalc-p_Pa)/p_Pa
T = scipy.optimize.fsolve(resid, Tguess)
gamma = np.exp(COSMO.get_lngamma(T, xL))
p = p_Pa
x0V = gamma[0]*xL[0]*psats[0].psat(T)/p
TT.append(T); PP.append(p); X0L.append(x0L); X0V.append(x0V)
return COSMOIsoline(np.array(TT),np.array(PP),np.array(X0L),X0V)
fluids = ['ETHANOL','WATER']
for iden in fluids:
n = db.normalize_identifier(iden)
db.add_profile(n)
prof = db.get_profile(n)
print(prof.name)
for T in [423.15, 473.15]:
isoT = get_isotherm(fluids, T)
plt.plot(isoT.x0L, isoT.p/1e6, label='bubble')
plt.plot(isoT.x0V, isoT.p/1e6, label='dew')
for T, group in pandas.read_csv('t_p_x_y_isoth_red.dat',sep='\t').groupby('T / K'):
plt.plot(group['x1_L'], group['p / MPa'], 'o')
plt.plot(group['x1_V'], group['p / MPa'], 'o')
plt.xlabel(r'$x_{\rm ethanol}$')
plt.yscale('log')
plt.ylabel(r'$p$ / MPa');
%timeit get_isotherm(fluids, 423.15)
%timeit get_isobar(fluids, 101325.0, 373)
plt.figure()
for p in [101325.0]:
isoP = get_isobar(fluids, p, 373)
    plt.plot(isoP.x0L, isoP.T, label='bubble')
    plt.plot(isoP.x0V, isoP.T, label='dew')
plt.xlabel(r'$x_{\rm ethanol}$')
plt.yscale('log')
plt.ylabel(r'$T$ / K');
```
# Linear Shooting Method
To numerically approximate the Boundary Value Problem
$$
y^{''}=p(x)y^{'}+q(x)y+g(x) \ \ \ a < x < b $$
$$y(a)=\alpha$$
$$y(b) =\beta$$
The Boundary Value Problem is divided into two
Initial Value Problems:
1. The first 2nd order Initial Value Problem is the same as the original Boundary Value Problem with an extra initial condition $y_1^{'}(a)=0$.
\begin{equation}
y^{''}_1=p(x)y^{'}_1+q(x)y_1+g(x), \ \ y_1(a)=\alpha, \ \ \color{green}{y^{'}_1(a)=0},\\
\end{equation}
2. The second 2nd order Initial Value Problem is the homogeneous form of the original Boundary Value Problem with the initial conditions $y_2(a)=0$ and $y_2^{'}(a)=1$.
\begin{equation}
y^{''}_2=p(x)y^{'}_2+q(x)y_2, \ \ \color{green}{y_2(a)=0, \ \ y^{'}_2(a)=1}.
\end{equation}
Combining these results gives the unique solution
\begin{equation}
y(x)=y_1(x)+\frac{\beta-y_1(b)}{y_2(b)}y_2(x)
\end{equation}
provided that $y_2(b)\not=0$.
The truncation error for the shooting method is
$$ |y_i - y(x_i)| \leq K h^n\left|1+\frac{w_{1 i}}{u_{1 i}}\right| $$
where $O(h^n)$ is the order of the numerical method used to approximate the solutions of the Initial Value Problems.
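As a rough generic sketch of this construction (not the step-by-step implementation worked through below), assuming `p`, `q` and `g` are plain Python callables and using the explicit Euler method for both Initial Value Problems:
```
import numpy as np

def linear_shooting(p, q, g, a, b, alpha, beta, N=100):
    """Approximate y'' = p(x) y' + q(x) y + g(x), y(a)=alpha, y(b)=beta,
    by solving the two auxiliary IVPs with Euler's method and combining them."""
    h = (b - a) / N
    x = np.linspace(a, b, N + 1)
    # IVP 1: y1(a)=alpha, y1'(a)=0   (inhomogeneous problem)
    y1, dy1 = np.zeros(N + 1), np.zeros(N + 1)
    y1[0] = alpha
    # IVP 2: y2(a)=0, y2'(a)=1       (homogeneous problem)
    y2, dy2 = np.zeros(N + 1), np.zeros(N + 1)
    dy2[0] = 1.0
    for i in range(N):
        y1[i+1]  = y1[i]  + h * dy1[i]
        dy1[i+1] = dy1[i] + h * (p(x[i]) * dy1[i] + q(x[i]) * y1[i] + g(x[i]))
        y2[i+1]  = y2[i]  + h * dy2[i]
        dy2[i+1] = dy2[i] + h * (p(x[i]) * dy2[i] + q(x[i]) * y2[i])
    # Linear combination y = y1 + (beta - y1(b)) / y2(b) * y2, valid if y2(b) != 0
    y = y1 + (beta - y1[-1]) / y2[-1] * y2
    return x, y

# Example: the problem solved below, y'' = 2y' + 3y - 6, y(0)=3, y(1)=e^3+2
x, y = linear_shooting(lambda x: 2, lambda x: 3, lambda x: -6, 0, 1, 3, np.exp(3) + 2)
```
The notebook below carries out exactly these steps by hand for a specific example.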
```
import numpy as np
import math
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
class ListTable(list):
""" Overridden list class which takes a 2-dimensional list of
the form [[1,2,3],[4,5,6]], and renders an HTML Table in
IPython Notebook. """
def _repr_html_(self):
html = ["<table>"]
for row in self:
html.append("<tr>")
for col in row:
html.append("<td>{0}</td>".format(col))
html.append("</tr>")
html.append("</table>")
return ''.join(html)
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''')
```
## Example Boundary Value Problem
To illustrate the shooting method we shall apply it to the Boundary Value Problem:
$$ y^{''}=2y^{'}+3y-6, $$
with boundary conditions
$$y(0) = 3, $$
$$y(1) = e^3+2, $$
and the exact solution
$$y=e^{3x}+2. $$
The __boundary value problem__ is broken into two second order __Initial Value Problems:__
1. The first 2nd order Initial Value Problem is the same as the original Boundary Value Problem with an extra initial condition $u^{'}(0)=0$.
\begin{equation}
u^{''} =2u'+3u-6, \ \ \ \ u(0)=3, \ \ \ \color{green}{u^{'}(0)=0}
\end{equation}
2. The second 2nd order Initial Value Problem is the homogeneous form of the original Boundary Value Problem with the initial conditions $w(0)=0$ and $w^{'}(0)=1$.
\begin{equation}
w^{''} =2w^{'}+3w, \ \ \ \ \color{green}{w(0)=0}, \ \ \ \color{green}{w^{'}(0)=1}
\end{equation}
Combining the results of these two Initial Value Problems as the linear sum
\begin{equation}
y(x)=u(x)+\frac{e^{3}+2-u(1)}{w(1)}w(x)
\end{equation}
gives the solution of the Boundary Value Problem.
## Discrete Axis
The stepsize is defined as
$$h=\frac{b-a}{N}$$
here it is
$$h=\frac{1-0}{10}$$
giving
$$x_i=0+0.1 i$$
for $i=0,1,...10.$
```
## BVP
N=10
h=1/N
x=np.linspace(0,1,N+1)
fig = plt.figure(figsize=(10,4))
plt.plot(x,0*x,'o:',color='red')
plt.xlim((0,1))
plt.title('Illustration of discrete time points for h=%s'%(h))
```
## Initial conditions
The initial conditions for the discrete equations are:
$$ u_1[0]=3$$
$$ \color{green}{u_2[0]=0}$$
$$ \color{green}{w_1[0]=0}$$
$$ \color{green}{w_2[0]=1}$$
```
U1=np.zeros(N+1)
U2=np.zeros(N+1)
W1=np.zeros(N+1)
W2=np.zeros(N+1)
U1[0]=3
U2[0]=0
W1[0]=0
W2[0]=1
```
## Numerical method
To apply the Euler method, each of the two second order Initial Value Problems is converted into a pair of first order Initial Value Problems, which are then approximated numerically:
### 1. Inhomogeneous Approximation
The plot below shows the numerical approximation of the two first order Initial Value Problems
\begin{equation}
u_1^{'} =u_2, \ \ \ \ u_1(0)=3,
\end{equation}
\begin{equation}
u_2^{'} =2u_2+3u_1-6, \ \ \ \color{green}{u_2(0)=0},
\end{equation}
The Euler approximation of these two inhomogeneous Initial Value Problems is:
$$u_{1}[i+1]=u_{1}[i] + h u_{2}[i]$$
$$u_{2}[i+1]=u_{2}[i] + h (2u_{2}[i]+3u_{1}[i] -6)$$
with $u_1[0]=3$ and $\color{green}{u_2[0]=0}$.
```
for i in range (0,N):
U1[i+1]=U1[i]+h*(U2[i])
U2[i+1]=U2[i]+h*(2*U2[i]+3*U1[i]-6)
```
### Plots
The plot below shows the Euler approximation of the two Initial Value Problems: $u_1$ on the left and $u_2$ on the right.
```
fig = plt.figure(figsize=(12,4))
ax = fig.add_subplot(1,2,1)
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
plt.plot(x,U1,'^')
plt.title(r"$u_1'=u_2, \ \ u_1(0)=3$",fontsize=16)
plt.grid(True)
ax = fig.add_subplot(1,2,2)
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
plt.plot(x,U2,'v')
plt.title(r"$u_2'=2u_2+3u_1-6, \ \ u_2(0)=0$", fontsize=16)
plt.grid(True)
```
### 2. Homogeneous Approximation
The homogeneous Boundary Value Problem is divided into two first order Initial Value Problems
\begin{equation}
w_1^{'} =w_2, \ \ \ \ \color{green}{w_1(0)=0}
\end{equation}
\begin{equation}
w_2^{'} =2w_2+3w_1, \ \ \ \color{green}{w_2(0)=1}
\end{equation}
The Euler approximation of these two homogeneous Initial Value Problems is
$$w_{1}[i+1]=w_{1}[i] + h w_{2}[i]$$
$$w_{2}[i+1]=w_{2}[i] + h (2w_{2}[i]+3w_{1}[i])$$
with $\color{green}{w_1[0]=0}$ and $\color{green}{w_2[0]=1}$.
```
for i in range (0,N):
W1[i+1]=W1[i]+h*(W2[i])
W2[i+1]=W2[i]+h*(2*W2[i]+3*W1[i])
```
### Plots
The plot below shows the Euler approximation of the two Initial Value Problems: $w_1$ on the left and $w_2$ on the right.
```
fig = plt.figure(figsize=(12,4))
ax = fig.add_subplot(1,2,1)
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
plt.plot(x,W1,'^')
plt.grid(True)
plt.title(r"$w_1'=w_2, \ \ w_1(0)=0$",fontsize=16)
ax = fig.add_subplot(1,2,2)
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
plt.plot(x,W2,'v')
plt.grid(True)
plt.title(r"$w_2'=2w_2+3w_1, \ \ w_2(0)=1$",fontsize=16)
plt.tight_layout()
plt.subplots_adjust(top=0.85)
beta=math.exp(3)+2
y=U1+(beta-U1[N])/W1[N]*W1
```
## Approximate Solution
Combining the numerical approximations of $u_1$ and $w_1$ as the weighted sum
$$y(x[i])\approx u_{1}[i] + \frac{e^3+2-u_{1}[N]}{w_1[N]}w_{1}[i]$$
gives the approximate solution of the Boundary Value Problem.
The truncation error for the shooting method using the Euler method is
$$ |y_i - y(x[i])| \leq K h\left|1+\frac{w_{1}[i]}{u_{1}[i]}\right| $$
where $O(h)$ is the order of the Euler method.
The plot below shows the approximate solution of the Boundary Value Problem (left), the exact solution (middle) and the error (right)
```
Exact=np.exp(3*x)+2
fig = plt.figure(figsize=(12,4))
ax = fig.add_subplot(2,3,1)
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
plt.plot(x,y,'o')
plt.grid(True)
plt.title(r"Numerical: $u_1+\frac{e^3+2-u_1(N)}{w_1(N)}w_1$",
fontsize=16)
ax = fig.add_subplot(2,3,2)
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
plt.plot(x,Exact,'ks-')
plt.grid(True)
plt.title(r"Exact: $y=e^{3x}+2$",
fontsize=16)
ax = fig.add_subplot(2,3,3)
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
plt.plot(x,abs(y-Exact),'ro')
plt.grid(True)
plt.title(r"Error ",fontsize=16)
plt.tight_layout()
plt.subplots_adjust(top=0.85)
```
### Data
The table below shows the output for $x$, the Euler numerical approximations $U1$, $U2$, $W1$ and $W2$ of the system of four Initial Value Problems, the shooting method's approximate solution $y_i=u_{1 i} + \frac{e^3+2-u_{1}(x_N)}{w_1(x_N)}w_{1 i}$, and the exact solution of the Boundary Value Problem.
```
table = ListTable()
table.append(['x', 'U1','U2','W1','W2','Approx','Exact'])
for i in range (0,len(x)):
table.append([round(x[i],3), round(U1[i],3), round(U2[i],3),
round(W1[i],5),round(W2[i],3),
round(y[i],5),
round(Exact[i],5)])
table
```
# AI@University - Hacking Competition
For this hacking competition you have **2 hours** to solve the **Maternity Ward** case study in teams of **4-5 students**. Be aware, that during that time you have to implement your **technical solution** as well as prepare a short **5 minute presentation** of your results. It is recommended to split the work amongst your team members accordingly.
Before the workshop, please make sure that you have installed Python 3 and the Jupyter Notebook environment, preferably using the [Anaconda distribution](https://www.anaconda.com/download/#macos), as it already contains a set of useful Data Science libraries such as pandas, numpy and scikit-learn.
**Note**:
* When downloading this file to macOS, it automatically gets converted to a text file. To be able to open it as a Jupyter Notebook, select the file and press `command` + `i`. In the opening detail view, delete the file ending `.txt`.
# Case Study: Thomas J. Watson Hospital - Maternity Ward
After delivering a successful project for the oncology department of the **Thomas J. Watson Hospital** in **Berlin**, you have been hired as a consulting team of Data Scientists by the maternity ward to deliver another consulting project.
The hospital is facing some challenges in treating the increasing number of pregnant women in Berlin, as the hospital's
staff is either decreasing (paramedical staff) or only slightly increasing (physicians). The hospital's board of directors fears that the increasing birthrate and lack of personnel might impact the time and treatment their employees have for the young families. Their patients' health is their highest priority.
Therefore, the Thomas J. Watson hospital is looking for ways to support its personnel in ensuring that the best
possible care is given to the soon-to-be mothers and their children.
You have been hired to find a solution that assists the physicians and paramedics in determining the health condition
of the fetus and its mother. To accomplish your goal, you have been provided with cardiotocography data that enables
you to predict whether the fetal health condition is **normal** or **pathologic** (Hint: the dataset might contain more classes on the health condition, which the hospital is not interested in).
You should present your results to the hospital board coming Friday. Keep in mind to present your findings in a way
that both business and technical stakeholders feel addressed.
### Data Dictionary
The dataset consists of measurements of fetal heart rate (FHR) and uterine contraction (UC) features on cardiotocograms classified by expert obstetricians.
Attribute|Description
---|---
LB|FHR baseline (beats per minute)
AC|number of accelerations per second
FM|Number of fetal movements per second
UC|Number of uterine contractions per second
ASTV|Percentage of time with abnormal short term variability
mSTV|Mean value of short term variability
ALTV|Percentage of time with abnormal long term variability
mLTV|Mean value of long term variability
DL|Number of light decelerations per second
DS|Number of severe decelerations per second
DP|Number of prolonged decelerations per second
Width|Width Of Fetal Heart Rate Histogram
Min|Minimum Of Fetal Heart Rate Histogram
Max|Maximum of Fetal Heart Rate Histogram
Nmax|Number of Histogram peaks
Nzeros|Number of Histogram zeros
Mode|Mode of Fetal Heart Rate Histogram
Mean|Mean of Fetal Heart Rate Histogram
Median|Median of Fetal Heart Rate Histogram
Variance|Variance of Fetal Heart Rate Histogram
Tendency|Tendency of Fetal Heart Rate Histogram: -1=left asymmetric, 0=symmetric, 1=right asymmetric
NSP|Label: Normal=1, Suspect=2, Pathologic=3
### Medical Background
To ensure fetal and maternal health during pregnancy, **cardiotocography**, the measurement of **fetal heart rate**
(FHR) and **uterine contractions** (UC), is used to identify pathologic health conditions early on. Currently,
FHR and UC data is analyzed manually by a physician or paramedic, therefore leaving room for errors. FHR and UC are
strong indicators for certain critical health conditions, like a lack of oxygen, prematurity or growth restrictions,
that can lead to impairment and even death of the fetus. Cardiotocography is used both during pregnancy and during
delivery of the child.
# Evaluation
### Technical Solution (50%)
Your technical solution will be evaluated based on the degree of **accuracy** of your predictions on the hold-out validation set.

### Final Presentation (50%)
You will present your results to the key stakeholders of the hospital. Your presentation should cover
+ Understanding the business problem
+ Approach taken
+ Interpretation of the key results
# Submitting Your Results
Please store your predictions on the validation data as a comma-separated file, and name that file after your team (`YOUR_TEAM_NAME.csv`).
To create a CSV-File from an array of predictions (`y_pred`), you can use the following `pandas` function:
```
pandas.Series(y_pred).to_csv('YOUR_TEAM_NAME.csv', sep=',', index=False)
```
Subsequently, share the file directly via MS Teams with the IBM coaches.
**Author**: Daniel Jaeck, Data Scientist.
Edited: Marco Perini, Data Scientist at IBM (marco.perini@ibm.com)
Copyright © IBM Corp. 2021. This notebooks and its source code are released under the terms of the MIT License.
## Plain Python implementation of K-Means
K-Means is a very simple clustering algorithm (clustering belongs to unsupervised learning). Given a fixed number of clusters and an input dataset the algorithm tries to partition the data into clusters such that the clusters have high intra-class similarity and low inter-class similarity.
### Algorithm
1. Initialize the cluster centers, either randomly within the range of the input data or (recommended) with some of the existing training examples
2. Until convergence:
    1. Assign each datapoint to the closest cluster. The distance between a point and a cluster center is measured using the Euclidean distance.
    2. Update the current estimates of the cluster centers by setting them to the mean of all instances belonging to that cluster
### Objective function
The underlying objective function tries to find cluster centers such that, if the data are partitioned into the corresponding clusters, distances between data points and their closest cluster centers become as small as possible.
Given a set of datapoints ${x_1, ..., x_n}$ and a positive number $k$, find the clusters $C_1, ..., C_k$ that minimize
\begin{equation}
J = \sum_{i=1}^n \, \sum_{j=1}^k \, z_{ij} \, || x_i - \mu_j ||_2
\end{equation}
where:
- $z_{ij} \in \{0,1\}$ defines whether or not datapoint $x_i$ belongs to cluster $C_j$
- $\mu_j$ denotes the cluster center of cluster $C_j$
- $|| \, ||_2$ denotes the Euclidean distance
### Disadvantages of K-Means
- The number of clusters has to be set in the beginning
- The results depend on the initial cluster centers
- It's sensitive to outliers
- It's not suitable for finding non-convex clusters
- It's not guaranteed to find a global optimum, so it can get stuck in a local minimum
- ...
### How to solve major disadvantages?
However, the k-means algorithm has at least two major theoretic shortcomings:
- First, it has been shown that the worst case running time of the algorithm is super-polynomial in the input size.
- Second, the approximation found can be arbitrarily bad with respect to the objective function compared to the optimal clustering.
The k-means++ algorithm addresses the second of these obstacles by specifying a procedure to initialize the cluster centers before proceeding with the standard k-means optimization iterations. With the k-means++ initialization, the algorithm is guaranteed to find a solution that is O(log k) competitive to the optimal k-means solution.
Ref: https://datasciencelab.wordpress.com/2014/01/15/improved-seeding-for-clustering-with-k-means/
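As a rough sketch of the k-means++ seeding idea described above (each new center is sampled with probability proportional to its squared distance from the centers chosen so far); the helper below is purely illustrative and is not used by the classes defined later:
```
import numpy as np

def kmeans_pp_init(data, k, rng=None):
    """Pick k initial centers using the k-means++ seeding strategy."""
    rng = np.random.default_rng() if rng is None else rng
    n_samples = data.shape[0]
    # First center: chosen uniformly at random
    centers = [data[rng.integers(n_samples)]]
    for _ in range(k - 1):
        # Squared distance of every point to its closest already-chosen center
        d2 = np.min(((data[:, None, :] - np.array(centers)[None, :, :]) ** 2).sum(axis=2), axis=1)
        # Sample the next center with probability proportional to d2
        probs = d2 / d2.sum()
        centers.append(data[rng.choice(n_samples, p=probs)])
    return np.array(centers)
```
The returned centers could then replace the `random.sample` initialization used in the classes below.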
```
import numpy as np
import matplotlib.pyplot as plt
import random
from sklearn.datasets import make_blobs
np.random.seed(123)
# % matplotlib inline
```
## K-Means class
```
class MyKMeansCluster():
def __init__(self, n_cluster: int = 4):
self.k = n_cluster
def fit(self, data: np.ndarray):
"""
Fits the k-means model to the given dataset
"""
n_samples, _ = data.shape
# init the centers using python random.sample(population, k)
self.centers = np.array(random.sample(list(data), self.k))
self.init_centers = np.copy(self.centers)
# We will keep track of whether the assignment of data points to the clusters has changed.
# If it stops changing, we are done fitting the model
old_assign = None
n_iter = 0
while True:
# step 1: assign all data points based on the current centers
new_assign = [self.classify(data_point) for data_point in data]
if new_assign == old_assign:
print("Iteration is done: {}!".format(n_iter))
break
old_assign = new_assign
# step 2: re-calculate the centers
for cluster_id in range(self.k):
data_indexes = np.where(np.array(new_assign) == cluster_id)
data_pointers = data[data_indexes]
self.centers[cluster_id] = data_pointers.mean(axis = 0)
n_iter += 1
def classify(self, data_point: np.array) -> np.array:
"""
Given a datapoint, compute the cluster closest to the datapoint.
Return the cluster ID of that cluster.
"""
dists = self._l2_distance(data_point)
return np.argmin(dists)
def get_center(self):
return self.centers
def _l2_distance(self, data_point: np.array) -> np.ndarray:
""" Calculate the l2 distances between current centers and an given data point"""
dists = np.sqrt(np.sum((self.centers - data_point)**2, axis=1))
return dists
def _l1_distance(self, data_point: np.array) -> np.ndarray:
dists = np.sum(np.abs(self.centers - data_point), axis=1)
return dists
def plot_clusters(self, data: np.array):
plt.figure(figsize=(12,10))
plt.title("Initial centers in black, final centers in red")
plt.scatter(data[:, 0], data[:, 1], marker='.', c='y')
plt.scatter(self.centers[:, 0], self.centers[:,1], c='r')
plt.scatter(self.init_centers[:, 0], self.init_centers[:,1], c='k')
plt.show()
```
## Test case
```
data1 = np.random.normal(loc=(10, 4), scale=0.1, size=(10, 2))
data2 = np.random.normal(loc=(5, 5), scale=0.1, size=(10, 2))
# X = np.concatenate((data1, data2), axis=0)
X = np.vstack((data1, data2))
kmeans = MyKMeansCluster(n_cluster = 2)
# print("type: {}, shape: {}".format(type(X), X.shape))
kmeans.fit(X)
kmeans.plot_clusters(X)
print("centers: {}".format(kmeans.get_center()))
class KMeans():
def __init__(self, n_clusters=4):
self.k = n_clusters
def fit(self, data):
"""
Fits the k-means model to the given dataset
"""
n_samples, _ = data.shape
# initialize cluster centers
self.centers = np.array(random.sample(list(data), self.k))
self.initial_centers = np.copy(self.centers)
# We will keep track of whether the assignment of data points
# to the clusters has changed. If it stops changing, we are
# done fitting the model
old_assigns = None
n_iters = 0
while True:
new_assigns = [self.classify(datapoint) for datapoint in data]
if new_assigns == old_assigns:
print(f"Training finished after {n_iters} iterations!")
return
old_assigns = new_assigns
n_iters += 1
# recalculate centers
for id_ in range(self.k):
points_idx = np.where(np.array(new_assigns) == id_)
datapoints = data[points_idx]
self.centers[id_] = datapoints.mean(axis=0)
def l2_distance(self, datapoint):
dists = np.sqrt(np.sum((self.centers - datapoint)**2, axis=1))
return dists
def classify(self, datapoint):
"""
Given a datapoint, compute the cluster closest to the
datapoint. Return the cluster ID of that cluster.
"""
dists = self.l2_distance(datapoint)
return np.argmin(dists)
def plot_clusters(self, data):
plt.figure(figsize=(12,10))
plt.title("Initial centers in black, final centers in red")
plt.scatter(data[:, 0], data[:, 1], marker='.', c=y)
plt.scatter(self.centers[:, 0], self.centers[:,1], c='r')
plt.scatter(self.initial_centers[:, 0], self.initial_centers[:,1], c='k')
plt.show()
```
## Dataset
```
X, y = make_blobs(centers=4, n_samples=1000)
print(f'Shape of dataset: {X.shape}')
fig = plt.figure(figsize=(8,6))
plt.scatter(X[:,0], X[:,1], c=y)
plt.title("Dataset with 4 clusters")
plt.xlabel("First feature")
plt.ylabel("Second feature")
plt.show()
```
## Initializing and fitting the model
```
kmeans = KMeans(n_clusters=4)
kmeans.fit(X)
```
## Plot initial and final cluster centers
```
kmeans.plot_clusters(X)
class MyKMeansCluster:
def __init__(self, n_cluster: int):
assert(n_cluster > 0), "Invalid cluster number {}".format(n_cluster)
self.k = n_cluster
def fit(self, data: np.array):
n_samples, _ = data.shape
# init centers
self.centers = np.array(random.sample(list(data), self.k))
self.init_centers = np.copy(self.centers)
old_assign = None
n_iter = 0
while True:
# calculate new assigments
new_assign = [self.classify(data_point) for data_point in data]
if new_assign == old_assign:
print("fit is done after {} iterations".format(n_iter))
break
# re-calculate the centers
for cluster_id in range(self.k):
assign_indexes = np.where(np.array(new_assign) == cluster_id)
cluster_points = data[assign_indexes]
self.centers[cluster_id] = cluster_points.mean(axis=0)
old_assign = new_assign
n_iter += 1
def classify(self, data_point: np.array):
distances = self._l2_distance(data_point)
return np.argmin(distances)
def plot(self, data):
plt.figure(figsize=(12,10))
plt.title("Initial centers in black, final centers in red")
plt.scatter(data[:, 0], data[:, 1], marker='.', c=y)
plt.scatter(self.centers[:, 0], self.centers[:,1], c='r')
plt.scatter(self.init_centers[:, 0], self.init_centers[:,1], c='k')
plt.show()
def _l2_distance(self, data_point: np.array):
return np.sqrt(np.sum((self.centers - data_point)**2, axis=1))
knn_cluster = MyKMeansCluster(4)
knn_cluster.fit(X)
knn_cluster.plot(X)
```
# Getting Started with R
Now that we've gotten familiar with Python, let's take a look at R. R is another very common programming language used in the field of data science. There are a lot of similarities between the two, especially with the basics, so we'll jump right into coding and just learn the syntax differences.
Our first difference is that R has a built-in file selector. There are Python libraries that allow for the same thing, but it requires some extra imports and functions to get going. With R, we can just call the following function (please select the data from our Session 6 data folder when prompted):
```
infile <- file.choose() #select csv file
infile #view file name
```
Next we can read the file into a `DataFrame` -- note the `.` instead of the familiar `_`:
```
df <- read.csv(infile, header = TRUE, sep = ",") # load csv file
```
Here's some examples of how we can see different properties and pieces of the `DataFrame`:
```
colnames(df) # view column names
ncol(df) # column count
nrow(df) # row number
View(df) # view data in table format
str(df) # view column attributes
head(df) # first 6 rows
tail(df) # last 6 rows
```
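For comparison with the Python workflow used elsewhere in these sessions, rough pandas equivalents of these inspection calls might look like the following (the file name `data.csv` and the DataFrame `df` are just placeholders):
```
import pandas as pd

df = pd.read_csv('data.csv')   # hypothetical file name
list(df.columns)               # colnames(df)
len(df.columns)                # ncol(df)
len(df)                        # nrow(df)
df.dtypes                      # str(df), roughly
df.head()                      # head(df) -- first 5 rows in pandas vs 6 in R
df.tail()                      # tail(df)
```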
Next, we can get the file name from our `infile`, similar to `os.path.basename`:
```
filename <- basename(infile) # get file name
filename
```
And then we can assign variables with the arrow (or with an `=`, but `<-` seems to work better in some niche cases):
```
tool <- ("ET101") # create tool name object
tool
time <- Sys.time() # get current time
time
```
In order to insert our new user-defined data, we can use `cbind`, short for "column bind":
```
df0 <- cbind(tool,time, filename,df) #combine objects and data
colnames(df0)
```
We can rename the columns simply by assigning them.
```
colnames(df0) <- c("EqName","FileProcessTime","FileName","ProcessDateTime","Recipe","Step","Interval",
"Pressure","GasFlow","ElectricalPower","Temperature" )
```
The `c` before the list of columns is kind of a tricky concept to grasp, but you can think of it as combine, concatenate, or convert/coerce. This allows the data to be encapsulated in a single object, rather than a structure that is composed of several objects. There are also a few other purposes for it, but basically, when in doubt, put a `c` on there and that will probably fix it.
To create a new column, use this syntax:
```
df0$MyName <- ("John Solis") # add new additional column
colnames(df0)
View(df0)
```
We can reorder columns by explicitly calling the columns in the desired order:
```
df1 <- df0[c("MyName","EqName","FileProcessTime","FileName","ProcessDateTime","Recipe","Step","Interval",
"Pressure","GasFlow","ElectricalPower","Temperature")]
colnames(df1)
ncol(df1)
```
We can also re-order columns by their index -- for instance, if we wanted to swap columns 1 and 2, we could do the following:
```
df2 <- df1[c(2,1,3:12)]
colnames(df2)
```
We can delete a column using `-`:
```
df3 <- subset(df2,select = -c(MyName))
colnames(df3)
```
R also has some basic built in plotting:
```
plot(df3$Pressure)
```
Since R has a file selector, it's easy to set the working directory. This begins wherever we began to run R or Python. If you remember from Session 2 when we saved a file before building an output path, this is the location that a file would save to by default, and it would be where files are loaded from by default, meaning that if we wanted to load a file in our working directory, we would just have to call it by its filename rather than its full path, or if buried within folders in the working directory, we would only need the relative path instead of the full path. We can view the working directory with this command:
```
getwd() # this is the location where files are being saved or loaded from
```
And then we can set the working directory using this command:
```
setwd(choose.dir()) # change working folder
```
We can save our `DataFrame` to a file using `write.csv`:
```
write.csv(df3,"session1 file.csv",row.names = FALSE)
```
And to build a string using variables, we can use `paste` to dynamically build a file name:
```
outfile <- paste(tool,"session1 file.csv")
outfile
write.csv(df3,outfile,row.names = FALSE)
```
```
from sklearn import datasets
import numpy as np
iris = datasets.load_iris()
X = iris.data[:, [2,3]]
y = iris.target
print('Class labels:', np.unique(y))
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state=1, stratify=y)
# check that 'stratify' really distributes the labels
# in the same proportions
print('Label counts in y:', np.bincount(y))
print('Label counts in y_train:', np.bincount(y_train))
print('Label counts in y_test:', np.bincount(y_test))
# standardize the datasets (transform)
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
# sc.fit computes the sample mean
# and the standard deviation
sc.fit(X_train)
# transform - standardize the data
# using the values computed by fit
X_train_std = sc.transform(X_train)
X_test_std = sc.transform(X_test)
from sklearn.linear_model import Perceptron
ppn = Perceptron(max_iter=40, eta0=0.01, random_state=1)
ppn.fit(X_train_std, y_train)
y_pred = ppn.predict(X_test_std)
print("Nieprawidłowo sklasyfikowane próbki:", (y_test != y_pred).sum())
from sklearn.metrics import accuracy_score
print("Dokładność: %.2f" % accuracy_score(y_test, y_pred))
# inna metoda obliczania accuracy
print("Dokładność: %.2f" % ppn.score(X_test_std, y_test))
# zobaczymy jak wygląda granica przebiegająca między etykietami
from matplotlib.colors import ListedColormap
import matplotlib.pyplot as plt
def versiontuple(v):
return tuple(map(int, (v.split("."))))
def plot_decision_regions(X, y, classifier, test_idx = None, resolution=0.02):
    # set up the marker generator and color map
    markers = ('s', 'x', 'o', '^', 'v')
    colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
    cmap = ListedColormap(colors[:len(np.unique(y))])
    # plot the decision surface
x1_min, x1_max = X[:, 0].min() -1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() -1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.3, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
    # plot all the samples
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1],
alpha=0.8, c=colors[idx],
marker=markers[idx], label=cl,
edgecolor='black')
    # highlight the test samples
    if test_idx:
        # plot all the test samples
        X_test, y_test = X[list(test_idx), :], y[list(test_idx)]
        plt.scatter(X_test[:, 0], X_test[:, 1], facecolors='none',
                    alpha=1.0, linewidth=1, marker='o', edgecolors='black',
                    s=100, label='Test set')
X_combined_std = np.vstack((X_train_std, X_test_std))
y_combined = np.hstack((y_train, y_test))
plot_decision_regions(X=X_combined_std,
y=y_combined,
classifier=ppn,
test_idx=range(105, 150))
plt.xlabel('Petal length [standardized]')
plt.ylabel('Petal width [standardized]')
plt.legend(loc='upper left')
plt.show()
```
# Common Advanced Python Techniques
This part mainly draws on [python3-cookbook](https://python3-cookbook.readthedocs.io/zh_CN/latest/index.html) and records advanced techniques worth using when writing model algorithms and developing applications in practice.
It is currently organized into the following parts:
- Data structures and algorithms
- Strings and text
- Iterators and generators
- Files and IO
- Functions
- Classes and objects
- Metaprogramming
- Modules and packages
- Concurrent programming
- Scripting and system administration
- Testing, debugging and exceptions
More advanced usages will be added gradually as practice deepens.
## Data Structures and Algorithms
Python provides a large number of built-in data structures, including lists, sets and dictionaries. Most of the time they are simple to use, but when actually writing code you constantly run into common problems such as searching, sorting and filtering, so it is worth understanding these basic algorithms.
First up is sorting. Reference: [Sorting HOW TO](https://docs.python.org/zh-cn/3.7/howto/sorting.html#).
Python lists have a built-in list.sort() method that sorts the list in place. There is also a sorted() built-in function that builds a new sorted list from an iterable.
```
sorted([5, 2, 3, 1, 4])
a = [5, 2, 3, 1, 4]
a.sort()
a
```
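The Sorting HOW TO referenced above also covers sorting with a key function; here is a minimal illustrative sketch (the list of tuples is made up for the example):
```
students = [('Bob', 85), ('Alice', 92), ('Eve', 78)]
# Sort by the second element of each tuple (the score), highest first
print(sorted(students, key=lambda s: s[1], reverse=True))
```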
Given a number, find the element of a list that is closest to it:
```
# Python3 program to find Closest number in a list
def closest(lst, K):
return lst[min(range(len(lst)), key = lambda i: abs(lst[i]-K))]
# Driver code
lst = [3.64, 5.2, 9.42, 9.35, 8.5, 8]
K = 9.1
print(closest(lst, K))
```
Alternatively, numpy gives the result quickly. This is the first time numpy is used in this repo; it will be covered in detail later, so it is only mentioned briefly here. numpy can be installed as follows:
```Shell
conda install -c conda-forge numpy
conda env export > environment.yml
```
```
import numpy as np
x = np.arange(100)
print("Original array:")
print(x)
a = np.random.uniform(0,100)
print("Value to compare:")
print(a)
index = (np.abs(x-a)).argmin()
print(x[index])
```
Check whether a sequence is increasing:
```
l = range(10000)
print(all(x<y for x, y in zip(l, l[1:])))
x=1
y=[x]
type(y)
```
Next, an example of extracting a subset of a dictionary, i.e. constructing a dictionary that is a subset of another dictionary.
The simplest way is to use a dictionary comprehension.
```
prices = {
'ACME': 45.23,
'AAPL': 612.78,
'IBM': 205.55,
'HPQ': 37.20,
'FB': 10.75
}
# Make a dictionary of all prices over 200
p1 = {key: value for key, value in prices.items() if value > 200}
# Make a dictionary of tech stocks
tech_names = {'AAPL', 'IBM', 'HPQ', 'MSFT'}
p2 = {key: value for key, value in prices.items() if key in tech_names}
print(p1)
print(p2)
```
Split an integer into the two closest factors whose product is that integer. Main reference: https://blog.csdn.net/qq_36607894/article/details/103595912
```
import numpy as np
def crack(integer):
start = int(np.sqrt(integer))
factor = integer / start
while not is_integer(factor):
start += 1
factor = integer / start
return int(factor), start
def is_integer(number):
if int(number) == number:
return True
else:
return False
print(crack(3))
print(crack(7))
print(crack(64))
print(crack(100))
print(crack(640))
print(crack(64000))
```
## Strings and Text
The focus here is on text manipulation, such as extracting substrings, searching, replacing and parsing.
### Searching and Replacing in Strings
Search for and match a given text pattern in a string.
For simple literal patterns, the str.replace() method is all you need:
```
text = 'yeah, but no, but yeah, but no, but yeah'
text.replace('yeah', 'yep')
```
For more complicated patterns, use the sub() function from the re module. To illustrate, suppose you want to change date strings of the form 11/27/2012 into 2012-11-27:
```
text = 'Today is 11/27/2012. PyCon starts 3/13/2013.'
import re
re.sub(r'(\d+)/(\d+)/(\d+)', r'\3-\1-\2', text)
```
The first argument to sub() is the pattern to match and the second is the replacement pattern. Backslash digits such as \3 refer to the capture group numbers of the pattern.
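For readability the capture groups can also be named; a small sketch of the same date rewrite using named groups (standard `re` syntax, shown only as an illustration):
```
import re

text = 'Today is 11/27/2012. PyCon starts 3/13/2013.'
datepat = re.compile(r'(?P<month>\d+)/(?P<day>\d+)/(?P<year>\d+)')
# Refer to the groups by name in the replacement pattern
print(datepat.sub(r'\g<year>-\g<month>-\g<day>', text))
# Today is 2012-11-27. PyCon starts 2013-3-13.
```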
### Splitting and Joining Strings
Splitting and joining strings are very common operations:
```
text = 'geeks for geeks'
# Splits at space
print(text.split())
word = 'geeks, for, geeks'
# Splits at ','
print(word.split(', '))
word = 'geeks:for:geeks'
# Splitting at ':'
print(word.split(':'))
word = 'CatBatSatFatOr'
# Splitting at 3
print([word[i:i+3] for i in range(0, len(word), 3)])
```
To combine several small strings into one larger string, when the strings to combine are already in a sequence or iterable, the fastest way is the join() method.
```
parts = ['Is', 'Chicago', 'Not', 'Chicago?']
print(' '.join(parts))
print(','.join(parts))
```
Using the plus (+) operator to concatenate a large number of strings is very inefficient, because each concatenation causes memory copies and garbage collection. You should not write string concatenation code like this:
```
s = ''
for p in parts:
s += p
```
This runs more slowly than the join() method, because every += operation creates a new string object. It is better to first collect all the string fragments and then join them, so prefer join() over + whenever possible.
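join() also accepts a generator expression directly, which converts and joins the fragments without building an intermediate list; a small illustrative sketch:
```
data = ['ACME', 50, 91.1]
# Convert each element to str on the fly and join with commas
print(','.join(str(d) for d in data))
# ACME,50,91.1
```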
Finally, a small example of splitting a string and then joining back part of the result:
```
text = '/geeks/for/geeks'
temp_list = text.split('/')
prefix = '/'.join(temp_list[:-1])
prefix
```
### Interpolating Variables in Strings
You want to create a string with embedded variables, where each variable is replaced by the string representation of its value. This is needed in many situations, such as printing results or building parameterized URLs.
Python has no direct support for simple variable substitution in strings, but the problem can be solved with the string format() method. For example:
```
s = '{name} has {n} messages.'
s.format(name='Guido', n=37)
```
Alternatively, if the variables to substitute can be found in the enclosing scope, you can combine format_map() and vars(), like this:
```
name = 'Guido'
n = 37
s.format_map(vars())
```
An interesting feature of vars() is that it also works on object instances.
```
class Info:
def __init__(self, name, n):
self.name = name
self.n = n
a = Info('Guido',37)
s_out=s.format_map(vars(a))
print(s_out)
```
One drawback of format() and format_map() is that they do not handle missing variables gracefully. One way to avoid this error is to define a dictionary class with a __missing__() method, like this:
```
class safesub(dict):
# """防止key找不到"""
def __missing__(self, key):
return '{' + key + '}'
del n # Make sure n is undefined
s.format_map(safesub(vars()))
```
Over the years, Python's lack of built-in variable interpolation led to a variety of solutions, such as the familiar % placeholders. However, format() and format_map() are more advanced than those approaches and should be preferred. Another benefit of format() is that you get all of the string formatting features (alignment, padding, number formatting, and so on), which are not possible with solutions such as template strings.
```
s = '{name} has {n} messages,{name}.'
s.format(name='Guido', n=37)
```
## Iterators and Generators
Iteration is one of Python's most powerful features. At first glance it may look like nothing more than a way to process the elements of a sequence, but it is far more than that.
### Manually Consuming an Iterator
Suppose you want to traverse all the elements of an **iterable** without using a for loop.
To traverse an iterable manually, use the next() function and catch the StopIteration exception in your code. StopIteration signals the end of iteration. For example:
```
items = [1, 2, 3]
it = iter(items)
next(it)
next(it)
next(it)
next(it)
```
Here is a more complete example that manually reads all the lines of a file:
```
def manual_iter():
with open('test.txt') as f:
try:
while True:
line = next(f)
print(line, end='')
except StopIteration:
pass
```
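Instead of catching StopIteration, next() can also be given a default value that signals the end of iteration; a small sketch (the file name is only illustrative):
```
with open('test.txt') as f:
    while True:
        line = next(f, None)  # returns None instead of raising StopIteration
        if line is None:
            break
        print(line, end='')
```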
## Files and IO
### Creating Temporary Files
```
from tempfile import TemporaryFile
with TemporaryFile('w+t') as f:
# Read/write to the file
f.write('Hello World\n')
f.write('Testing\n')
# Seek back to beginning and read the data
f.seek(0)
data = f.read()
```
## Functions
Adding metadata to function arguments: once you have written a function, you can attach extra information to its parameters so that other users can see clearly how the function is meant to be used. Function argument annotations are a good way to do this; they hint to programmers how the function should be called correctly.
```
def add(x:int, y:int) -> int:
return x + y
```
The Python interpreter attaches no semantics to these annotations. They are not type checked, and the runtime behavior is exactly the same as without annotations. However, they are very helpful to anyone reading the source code, third-party tools and frameworks may attach semantics to them, and they also show up in documentation.
```
help(add)
```
Function annotations are stored only in the function's __annotations__ attribute. For example:
```
add.__annotations__
```
## Classes and Objects
### Encapsulating Attribute Names in a Class
Python programmers **do not rely on language features to encapsulate data**; instead they achieve the same effect by **following naming conventions for attributes and methods**. For example:
```
class A:
def __init__(self):
self._internal = 0 # An internal attribute
self.public = 1 # A public attribute
def public_method(self):
'''
A public method
'''
pass
def _internal_method(self):
pass
```
Note that Python does not actually prevent anyone from accessing internal names, but doing so is considered bad practice and can lead to fragile code. Also note that the leading-underscore convention applies to module names and module-level functions as well; code that is internal to an implementation should be used with care. As mentioned in basic-python, the various uses of "_" matter; to recap, you may sometimes see names in a class definition that start with two underscores (__). For example:
```
class B:
def __init__(self):
self.__private = 0
def __private_method(self):
pass
def public_method(self):
pass
self.__private_method()
```
这个时候双下划线的名称会在访问它时,被变成其他形式。比如,在前面的类B中,私有属性会被分别**重命名为 _B__private 和 _B__private_method**。 这时候你可能会问这样重命名的目的是什么,答案就是继承——这种属性通过继承是无法被覆盖的。比如:
```
class C(B):
def __init__(self):
super().__init__()
self.__private = 1 # Does not override B.__private
# Does not override B.__private_method()
def __private_method(self):
pass
```
私有名称 __private 和 __private_method 被重命名为 _C __private 和 _C __private_method ,这个跟父类B中的名称是完全不同的。
### 定义接口或抽象基类
定义一个接口或抽象类,并且通过执行类型检查来确保子类实现了某些特定的方法。使用 abc 模块可以很轻松的定义抽象基类。不过目前个人建议在实际编程中还是尽量先规避一些设计模式,看看从流程上能不能简化自己代码的pipeline,这样是更容易的方法,anyway,这里回到接口再看看。抽象类的目的就是让别的类继承它并实现特定的抽象方法:
```
from abc import ABCMeta, abstractmethod
class IStream(metaclass=ABCMeta):
@abstractmethod
def read(self, maxbytes=-1):
pass
@abstractmethod
def write(self, data):
pass
class SocketStream(IStream):
def read(self, maxbytes=-1):
pass
def write(self, data):
pass
```
抽象基类的一个主要用途是在代码中检查某些类是否为特定类型,实现了特定接口:
```
def serialize(obj, stream):
if not isinstance(stream, IStream):
raise TypeError('Expected an IStream')
pass
```
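A small follow-up sketch (assuming the IStream/SocketStream definitions above are in scope): an abstract base class cannot be instantiated directly, while a concrete subclass passes the isinstance() check.
```python
# Continues the IStream / SocketStream example above.
try:
    IStream()                      # abstract methods read/write are not implemented
except TypeError as e:
    print('cannot instantiate:', e)

s = SocketStream()                 # concrete subclass is fine
serialize(b'data', s)              # passes the isinstance(stream, IStream) check
```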
## 元编程
软件开发领域中最经典的口头禅就是 **“don’t repeat yourself”**。 也就是说,任何时候当你的程序中存在高度重复(或者是通过剪切复制)的代码时,都应该想想是否有更好的解决方案。在Python当中,通常都可以通过**元编程**来解决这类问题。简而言之,元编程就是关于**创建操作源代码(比如修改、生成或包装原来的代码)的函数和类**。 主要技术是使用**装饰器、类装饰器和元类**。不过还有一些其他技术, 包括**签名对象、使用 exec() 执行代码以及对内部函数和类的反射技术**等。
### 包装器
比如想在函数上添加一个包装器,增加额外的操作处理(比如日志、计时等)。这是很常用的功能。使用额外的代码包装一个函数,可以定义一个装饰器函数:
```
import time
from functools import wraps
def timethis(func):
'''
Decorator that reports the execution time.
'''
@wraps(func)
def wrapper(*args, **kwargs):
start = time.time()
result = func(*args, **kwargs)
end = time.time()
print(func.__name__, end-start)
return result
return wrapper
@timethis
def countdown(n):
# Counts down
while n > 0:
n -= 1
countdown(100000)
```
实际上一个装饰器就是一个函数,它接受一个函数作为参数并返回一个新的函数。下面两个函数是等价的:
```
@timethis
def countdown(n):
pass
def countdown(n):
pass
countdown = timethis(countdown)
```
另外,内置的装饰器比如 @staticmethod, @classmethod,@property 原理也是一样的。
不过在使用装饰器时,要注意复制元信息。即**任何时候**你定义装饰器的时候,都应该使用 functools 库中的 **@wraps 装饰器来注解底层包装函数**。如果你忘记了使用 @wraps , 那么你会发现被装饰函数丢失了所有有用的信息。
还可以定义带参数的装饰器,比如你想写一个装饰器,给函数添加日志功能,同时允许用户指定日志的级别和其他的选项。 下面是这个装饰器的定义和使用示例:
```
from functools import wraps
import logging
def logged(level, name=None, message=None):
"""
Add logging to a function. level is the logging
level, name is the logger name, and message is the
log message. If name and message aren't specified,
they default to the function's module and name.
"""
def decorate(func):
logname = name if name else func.__module__
log = logging.getLogger(logname)
logmsg = message if message else func.__name__
@wraps(func)
def wrapper(*args, **kwargs):
log.log(level, logmsg)
return func(*args, **kwargs)
return wrapper
return decorate
# Example use
@logged(logging.DEBUG)
def add(x, y):
return x + y
@logged(logging.CRITICAL, 'example')
def spam():
print('Spam!')
```
### 函数重载
python是不支持直接进行函数重载的。需要注意的是,仅靠参数注解也并不能实现重载:在下面的类中,第二个 bar 的定义会直接覆盖第一个,两次调用实际执行的都是第二个版本;要让“按类型分派”真正生效,需要额外的机制(例如《Python Cookbook》中借助元类实现的方案)。
```
class Spam:
    def bar(self, x:int, y:int):
        print('Bar 1:', x, y)
    def bar(self, s:str, n:int = 0):   # 注意:这个定义覆盖了上面的 bar
        print('Bar 2:', s, n)
s = Spam()
s.bar(2, 3)      # 实际输出 Bar 2: 2 3(注解不会被用来做分派)
s.bar('hello')   # 输出 Bar 2: hello 0
```
但是对于一般的函数重载会稍微麻烦一些,需要利用一些小技巧,这里给出一个参考资料:[Python 函数如何重载?](https://juejin.im/post/5cbcf38bf265da03af27d327),就暂时不赘述了。
个人认为既然python不支持,那就尽量避免吧,直接换个名,或者用下if else也不是多大的事情。
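If the goal is only to dispatch on the type of one argument, the standard library's functools.singledispatch is a reasonable alternative. This is a hedged sketch, not full overloading: it dispatches solely on the type of the first positional argument, and plain functions (rather than methods) are shown here.
```python
from functools import singledispatch

@singledispatch
def bar(x, y=0):
    print('default:', x, y)

@bar.register(int)
def _(x, y=0):
    print('int version:', x, y)

@bar.register(str)
def _(x, y=0):
    print('str version:', x, y)

bar(2, 3)        # int version: 2 3
bar('hello')     # str version: hello 0
bar(1.5)         # default: 1.5 0
```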
## 模块与包
模块与包是任何大型程序的核心,就连Python安装程序本身也是一个包。本节记录关于如何组织包、把大型模块分割成多个文件、创建命名空间包等内容。同时,也给出了自定义导入语句的秘籍。本节还参考了:[Importing `*` in Python](https://medium.com/@s16h/importing-star-in-python-88fe9e8bd4d2)
构建一个模块的层级包,需要将代码组织成由很多分层模块构成的包,然后在文件系统上组织你的代码,并确保每个目录都定义了一个__init__.py文件。这样就能执行各种import语句了。
如果希望对从模块或包导出的符号进行精确控制,可以在模块中定义一个变量 __all__ 来明确地列出需要导出的内容。比如:
``` python
# somemodule.py
def spam():
pass
def grok():
pass
blah = 42
# Only export 'spam' and 'grok'
__all__ = ['spam', 'grok']
```
下面给出一个例子:
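(The module `something` used in the example below is not shown in the original notes; the following contents are purely an assumption, inferred from the names accessed in the example.)
```python
# something.py -- hypothetical contents for the "from something import *" example
public_variable = 1
_private_variable = 2

def public_function():
    pass

def _private_function():
    pass

class PublicClass:
    pass

class _WeirdClass:
    pass
```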
```
from something import *
public_variable
_private_variable
public_function()
_private_function()
c = PublicClass()
c
c = _WeirdClass()
```
如上结果所示,以"_" 开头的都是私有。
关于__all__,它是一个str的list,定义了module中要导出的内容。如果不定义__all__,import * 默认会导入除了以_开头的所有names。一个例子:
```
from something_all import *
```
设置__all__的原因在于python的一个原则是explicit好于implicit。而from \<module\> import \* 并不explicit。最好显式地给出要导入的东西。即便需要很多名字,最好还是一一明确地导入,比如根据PEP328:
```python
from Tkinter import (Tk, Frame, Button, Entry, Canvas, Text,
LEFT, DISABLED, NORMAL, RIDGE, END)
```
## 并发编程
对于并发编程, Python有多种长期支持的方法, 包括**多线程**, **调用子进程**, 以及各种各样的关于**生成器函数**的技巧。这部分将会简单记录并发编程各种方面的技巧, 包括通用的**多线程技术**以及**并行计算的实现方法**。并发的程序有潜在的危险. 因此, 要注意能给出更加可信赖和易调试的代码。这里只是简单介绍,更多关于并行计算的内容会在7-parallel-programming文件夹中记录。
### 启动与停止线程
为需要并发执行的代码创建/销毁线程:**threading 库**可以**在单独的线程中执行任何的在 Python 中可以调用的对象**。可以创建一个 Thread 对象并将你要执行的对象以 target 参数的形式提供给该对象。
```
import time
def countdown(n):
while n > 0:
print('T-minus', n)
n -= 1
time.sleep(1)
# Create and launch a thread
from threading import Thread
t = Thread(target=countdown, args=(3,))
t.start()
```
当创建好一个线程对象后,该对象**并不会立即执行**,除非你调用它的 **start() 方法**(当你调用 start() 方法时,它会调用你传递进来的函数,并把你传递进来的参数传递给该函数)。Python中的线程会在一个单独的系统级线程中执行(比如说一个 POSIX 线程或者一个 Windows 线程),这些线程将由操作系统来全权管理。线程一旦启动,将独立执行直到目标函数返回。
可以查询一个线程对象的状态,看它是否还在执行:
```
if t.is_alive():
print('Still running')
else:
print('Completed')
```
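In addition (a supplementary sketch, not from the original notes), join() blocks the caller until the thread finishes:
```python
from threading import Thread
import time

def countdown(n):
    while n > 0:
        print('T-minus', n)
        n -= 1
        time.sleep(1)

t = Thread(target=countdown, args=(3,))
t.start()
t.join()                      # wait here until countdown() returns
print('countdown finished')
```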
Python解释器直到所有线程都终止前仍保持运行。对于需要长时间运行的线程或者需要一直运行的后台任务,你应当考虑使用后台线程。
```python
# 使用daemon
t = Thread(target=countdown, args=(10,), daemon=True)
t.start()
```
由于全局解释锁(GIL)的原因,Python 的线程被限制到同一时刻只允许一个线程执行这样一个执行模型。所以,Python 的线程更适用于处理I/O和其他需要并发执行的阻塞操作(比如等待I/O、等待从数据库获取数据等等),而**不是需要多处理器并行的计算密集型任务**。所以这块暂时不需要太多关注。
### 简单并行编程
有个程序要执行CPU密集型工作,想让他利用**多核CPU**的优势来运行的快一点。
concurrent.futures 库提供了一个 ProcessPoolExecutor 类, 可被用来在一个单独的Python解释器中执行计算密集型函数。 不过,要使用它,首先要有一些计算密集型的任务。下面以一批 Apache 访问日志(*.log.gz)为例:脚本在这些日志文件中查找出所有访问过robots.txt文件的主机(因为手头没有日志文件,所以没有运行结果)
```
# findrobots.py
import gzip
import io
import glob
def find_robots(filename):
'''
Find all of the hosts that access robots.txt in a single log file
'''
robots = set()
with gzip.open(filename) as f:
for line in io.TextIOWrapper(f,encoding='ascii'):
fields = line.split()
if fields[6] == '/robots.txt':
robots.add(fields[0])
return robots
def find_all_robots(logdir):
'''
Find all hosts across and entire sequence of files
'''
files = glob.glob(logdir+'/*.log.gz')
all_robots = set()
for robots in map(find_robots, files):
all_robots.update(robots)
return all_robots
if __name__ == '__main__':
robots = find_all_robots('logs')
for ipaddr in robots:
print(ipaddr)
```
前面的程序使用了通常的**map-reduce风格**来编写。 函数 find_robots() 在一个文件名集合上做map操作,并将结果汇总为一个单独的结果, 也就是 find_all_robots() 函数中的 all_robots 集合。 现在,假设你想要修改这个程序让它使用多核CPU。 很简单——只需要**将map()操作替换为一个 concurrent.futures 库中生成的类似操作即可**。
```
# findrobots.py
import gzip
import io
import glob
from concurrent import futures
def find_robots(filename):
'''
Find all of the hosts that access robots.txt in a single log file
'''
robots = set()
with gzip.open(filename) as f:
for line in io.TextIOWrapper(f,encoding='ascii'):
fields = line.split()
if fields[6] == '/robots.txt':
robots.add(fields[0])
return robots
def find_all_robots(logdir):
'''
Find all hosts across and entire sequence of files
'''
files = glob.glob(logdir+'/*.log.gz')
all_robots = set()
with futures.ProcessPoolExecutor() as pool:
for robots in pool.map(find_robots, files):
all_robots.update(robots)
return all_robots
if __name__ == '__main__':
robots = find_all_robots('logs')
for ipaddr in robots:
print(ipaddr)
```
## 脚本编程与系统管理
首先补充一些在python脚本中执行系统命令的基本代码。
```
import os
os.system('ls')
tmp = os.popen('ls *.py').readlines()
tmp
import subprocess
p = subprocess.Popen('ls', shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
for line in p.stdout.readlines():
print (line)
retval = p.wait()
```
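As a side note (not in the original notes): on Python 3.5+ the higher-level subprocess.run() is usually preferred over Popen for simple cases; the capture_output/text arguments shown here require Python 3.7+.
```python
import subprocess

# Run a command, capture stdout/stderr as text, and inspect the result.
result = subprocess.run(['ls', '-l'], capture_output=True, text=True)
print(result.returncode)
print(result.stdout)
```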
### 命令行解析器
Python 命令行与参数解析方法有很多工具,这里学习使用python 自带的argparse ,来说明python 如何进行命令行解析。主要参考了
- [HelloGitHub-Article](https://github.com/HelloGitHub-Team/Article)
- [argparse模块用法实例详解](https://zhuanlan.zhihu.com/p/56922793)
- python[官方文档](https://docs.python.org/zh-cn/3/library/argparse.html)
- [Python-argparse-命令行与参数解析](https://zhuanlan.zhihu.com/p/34395749)
通俗来说,命令行与参数解析就是当你输入cmd 打开dos 交互界面时候,启动程序要进行的参数给定。比如在dos 界面输入:
```code
python openPythonFile.py "a" -b "number"
```
其中,"a" -b等就是命令行与参数解析要做的事情。先不用深究参数的含义,这里就是个示例,简而言之,就是设计程序在**运行时必须给定某些额外参数**才能运行,也就是如果设置了命令行参数解析,那么各种编译器按F5 是无法直接运行程序的。这样的目的之一是不能随便就能运行脚本,可以达到一定程度上的安全功能。
那肯定就会好奇命令行中敲入一段命令后,是如何被解析执行的?自己如何实现一个命令行工具来帮助执行和处理任务?如何利用python库来帮助实现?
这一节就主要记录如何使用Python 内置的 argparse 标准库解析命令行。
argparse 作为 Python 内置的标准库,提供了较为简单的方式来编写命令行接口。当你在程序中定义需要哪些参数,argparse 便会从 sys.argv 中获取命令行输入进行解析,对正确或非法输入做出响应,也可以自动生成帮助信息和使用说明。
总体上分为三大步:
- 创建解析:设置解析器,后续对命令行的解析就**依赖于这个解析器**,它能够**将命令行字符串转换为Python对象**。通过实例化 argparse.ArgumentParser,给定一些选填参数,就可以设置一个解析器
- 添加参数:通过ArgumentParser.add_argument 方法来**为解析器设置参数信息**,以告诉解析器命令行字符串中的**哪些内容**应解析为**哪些类型的Python对象**。注意,每一个参数都要单独设置,需要两个参数就用两个add_argument
- 解析参数:定义好参数后,就可以使用 ArgumentParser.**parse_args 方法来解析一组命令行参数字符串**了。默认情况下,参数取自**sys.argv[1:]**,它就是我们在命令行敲入的**一段命令(不含文件名)所对应的一个字符串列表**,比如,若输入 python3 cmd.py --sum 1 2 3,那么sys.argv[1:]就是['--sum','1','2','3']。
基本的业务逻辑是这样的。解析好命令行后,我们就可以从解析结果中获取每个参数的值,进而根据自己的业务需求做进一步的处理。比如,对于上文中所定义的nums参数,我们可以通过解析后的结果中的accumulate方法对其进行求最大值或求和(取决于是否提供 --sum 参数)。
下面就给出一个较完整的代码示例。
```
import argparse
# 1. 设置解析器
parser = argparse.ArgumentParser(
description='My Cmd Line Program',
)
# 2. 定义参数
# 添加 nums 参数,在使用信息中显示为 num
# 其类型为 int,且支持输入多个,且至少需要提供一个
parser.add_argument('nums', metavar='num', type=int, nargs='+',
help='a num for the accumulator')
# 添加 --sum 参数,该参数被 parser 解析后所对应的属性名为 accumulate
# 若不提供 --sum,默认值为 max 函数,否则为 sum 函数
parser.add_argument('--sum', dest='accumulate', action='store_const',
const=sum, default=max,
help='sum the nums (default: find the max)')
# 3. 解析命令行
args = parser.parse_args(['--sum', '-1', '0', '1'])
print(args) # 结果:Namespace(accumulate=<built-in function sum>, nums=[-1, 0, 1])
# 4. 业务逻辑
result = args.accumulate(args.nums)
print(result)
```
若我们需要对一组数字求和,只需执行:
```Shell
$ python3 cmd.py --sum -1 0 1
0
```
若需要对一组数字求最大值,只需执行:
```Shell
$ python3 cmd.py -1 0 1
1
```
如果给定的参数不是数字,则会报错提示:
```Shell
$ python3 cmd.py a b c
usage: cmd.py [-h] [--sum] num [num ...]
cmd.py: error: argument num: invalid int value: 'a'
```
我们还可以通过 -h 或 --help 参数查看其自动生成的使用说明和帮助:
```Shell
usage: cmd.py [-h] [--sum] num [num ...]
My Cmd Line Program
positional arguments:
num a num for the accumulator
optional arguments:
-h, --help show this help message and exit
--sum sum the nums (default: find the max)
```
接下来进一步探讨关于argparse更多复杂的情况,比如各种类型参数、参数前缀、参数组、互斥选项、嵌套解析、自定义帮助等等。主要要认识的问题是:argparse支持哪些类型的参数?这些参数该如何配置?
```
import argparse
# 1. 设置解析器
parser = argparse.ArgumentParser(
description='My Cmd Line Program',
)
```
1. 参数动作
```
parser.add_argument('--sum', dest='accumulate', action='store_const',
const=sum, default=max,
help='sum the nums (default: find the max)')
```
这里面的 action,也就是 参数动作,究竟是用来做什么的呢?
想象一下,当我们在命令行输入**一串参数**后,对于**不同类型的参数是希望做不同的处理**的。 那么 **参数动作** 其实就是告诉解析器,我们希望**对应的参数该被如何处理**。比如,参数值是该被存成一个值呢,还是追加到一个列表中?是当成布尔的 True 呢,还是 False?
参数动作 被分成了如下 8 个类别:
- store —— 保存参数的值,这是**默认**的参数动作。它通常用于给一个参数指定值,如指定名字:
```
parser = argparse.ArgumentParser(
description='My Cmd Line Program',
)
parser.add_argument('--name')
parser.parse_args(['--name', 'Eric'])
```
- store_const —— 保存被 const 命名的固定值。当我们想通过**是否给定参数**来起到**标志**的作用,给定就取某个值,就可以使用该参数动作,如:
```
parser = argparse.ArgumentParser(
description='My Cmd Line Program',
)
parser.add_argument('--sum', action='store_const', const=sum)
parser.parse_args(['--sum'])
```
- store_true 和 store_false —— 是 store_const 的特殊情况,用来分别保存 True 和 False。如果未指定参数,则其默认值分别为 False 和 True,如:
```
parser = argparse.ArgumentParser(
description='My Cmd Line Program',
)
parser.add_argument('--use', action='store_true')
parser.add_argument('--nouse', action='store_false')
parser.parse_args(['--use', '--nouse'])
parser.parse_args([])
```
- append —— 将参数值追加保存到一个列表中。它常常用于命令行中允许多个相同选项,如:
```
parser = argparse.ArgumentParser(
description='My Cmd Line Program',
)
parser.add_argument('--file', action='append')
parser.parse_args(['--file', 'f1', '--file', 'f2'])
```
- append_const —— 将 const 命名的固定值追加保存到一个列表中(const 的默认值为 None)。它常常用于将多个参数所对应的固定值都保存在同一个列表中,相应的需要 dest 入参来配合,以放在同一个列表中,如:
不指定 dest 入参,则固定值保存在以参数名命名的变量中
```
parser = argparse.ArgumentParser(
description='My Cmd Line Program',
)
parser.add_argument('--int', action='append_const', const=int)
parser.add_argument('--str', action='append_const', const=str)
parser.parse_args(['--int', '--str'])
```
指定 dest 入参,则固定值保存在 dest 命名的变量中
```
parser = argparse.ArgumentParser(
description='My Cmd Line Program',
)
parser.add_argument('--int', dest='types', action='append_const', const=int)
parser.add_argument('--str', dest='types', action='append_const', const=str)
parser.parse_args(['--int', '--str'])
```
- count —— 计算参数出现次数,如:
```
parser = argparse.ArgumentParser(
description='My Cmd Line Program',
)
parser.add_argument('--increase', '-i', action='count')
parser.parse_args(['--increase', '--increase'])
parser.parse_args(['-iii'])
```
- help —— 打印解析器中所有选项和参数的完整帮助信息,然后退出。
- version —— 打印命令行版本,通过指定 version 入参来指定版本,调用后退出。如:
```
parser = argparse.ArgumentParser(prog='CMD')
parser.add_argument('--version', action='version', version='%(prog)s 1.0')
parser.parse_args(['--version'])
```
2. 参数类别
如果说 参数动作 定义了**解析器在接收到参数后该如何处理参数**,那么 参数类别 就是告诉解析器**这个参数的元信息**,也就是参数是什么样的。比如,参数是字符串呢?还是布尔类型呢?参数是在几个值中可选的呢?还是可以给定值,等等。
可选参数 顾名思义就是参数是可以加上,或不加上。**默认**情况下,通过 ArgumentParser.add_argument 添加的参数就是**可选参数**。
可以通过 - 来指定**短参数**,也就是名称短的参数;也可以通过 -- 来指定**长参数**,也就是名称长的参数。当然也可以两个都指定。
可选参数通常用于:用户提供一个参数以及对应值,则使用该值;若不提供,则使用默认值。如:
```
parser = argparse.ArgumentParser(prog='CMD')
parser.add_argument('--name', '-n')
parser.parse_args(['--name', 'Eric']) # 通过长参数指定名称
parser.parse_args(['-n', 'Eric']) # 通过短参数指定名称
parser.parse_args([]) # 不指定则默认为 None
```
参数类型 就是解析器**参数值是要作为什么类型去解析**,默认情况下是 str 类型。我们可以通过 type 入参来指定参数类型。
argparse 所支持的参数类型多种多样,可以是 int、float、bool等,比如:
```
parser = argparse.ArgumentParser(prog='CMD')
parser.add_argument('-i', type=int)
parser.add_argument('-f', type=float)
parser.add_argument('-b', type=bool)
parser.parse_args(['-i', '1', '-f', '2.1', '-b', '0'])
```
更厉害的是,type 入参还可以是**可调用(callable)对象**。这就给了我们很大的想象空间,可以指定 type=open 来把参数值作为文件进行处理,也可以指定自定义函数来进行类型检查和类型转换。
作为文件进行处理:
```
parser = argparse.ArgumentParser(prog='CMD')
parser.add_argument('--file', type=open)
parser.parse_args(['--file', 'test.txt'])
```
使用自定义函数进行处理,入参为参数值,需返回转换后的结果。 比如,对于参数 --num,我们希望当其值小于 1 时则返回 1,大于 10 时则返回 10:
```
def limit(string):
num = int(string)
if num < 1:
return 1
if num > 10:
return 10
return num
parser = argparse.ArgumentParser(prog='CMD')
parser.add_argument('--num', type=limit)
parser.parse_args(['--num', '-1']) # num 小于1,则取1
parser.parse_args(['--num', '15']) # num 大于10,则取10
parser.parse_args(['--num', '5']) # num 在1和10之间,则取原来的值
```
3. 参数默认值
参数默认值 用于在命令行中不传参数值的情况下的默认取值,可通过 default 来指定。如果不指定该值,则参数默认值为 None。
比如:
```
parser = argparse.ArgumentParser(prog='CMD')
parser.add_argument('-i', default=0, type=int)
parser.add_argument('-f', default=3.14, type=float)
parser.add_argument('-b', default=True, type=bool)
parser.parse_args([])
```
4. 位置参数
位置参数 就是通过位置而非是 - 或 -- 开头的参数来指定参数值。
比如,我们可以指定两个位置参数 x 和 y ,先添加的 x 位于第一个位置,后加入的 y 位于第二个位置。那么在命令行中输入 1 2的时候,分别对应到的就是 x 和 y:
```
parser = argparse.ArgumentParser(prog='CMD')
parser.add_argument('x')
parser.add_argument('y')
parser.parse_args(['1', '2'])
```
5. 可选值
可选值 就是**限定参数值的内容**,通过 choices 入参指定。
有些情况下,我们可能需要限制用户输入参数的内容,只能在预设的几个值中选一个,那么 可选值 就派上了用场。
比如,指定文件读取方式限制为 read-only 和 read-write:
```
parser = argparse.ArgumentParser(prog='CMD')
parser.add_argument('--mode', choices=('read-only', 'read-write'))
parser.parse_args(['--mode', 'read-only'])
parser.parse_args(['--mode', 'read'])
```
6. 互斥参数
互斥参数 就是多个参数之间彼此互斥,不能同时出现。使用互斥参数首先通过 ArgumentParser.add_mutually_exclusive_group 在解析器中添加一个互斥组,然后在这个组里添加参数,那么组内的所有参数都是互斥的。
比如,我们希望通过命令行来告知乘坐的交通工具,要么是汽车,要么是公交,要么是自行车,那么就可以这么写:
```
parser = argparse.ArgumentParser(prog='CMD')
group = parser.add_mutually_exclusive_group()
group.add_argument('--car', action='store_true')
group.add_argument('--bus', action='store_true')
group.add_argument('--bike', action='store_true')
parser.parse_args([]) # 什么都不乘坐
parser.parse_args(['--bus']) # 乘坐公交
parser.parse_args(['--bike']) # 骑自行车
parser.parse_args(['--bike', '--car']) # 又想骑车,又想坐车,那是不行的
```
7. 可变参数列表
可变参数列表 用来定义一个参数可以有多个值,且能通过 nargs 来定义值的个数。
若 nargs=N,N为一个数字,则要求该参数提供 N 个值,如:
```
parser = argparse.ArgumentParser(prog='CMD')
parser.add_argument('--foo', nargs=2)
print(parser.parse_args(['--foo', 'a', 'b']))
```
若 nargs='?',则要求该参数提供 0 或 1 个值,如:
```
parser = argparse.ArgumentParser(prog='CMD')
parser.add_argument('--foo', nargs='?')
parser.parse_args(['--foo'])
parser.parse_args(['--foo', 'a'])
```
若 nargs='*',则要求该参数提供 0 或多个值,如:
```
parser = argparse.ArgumentParser(prog='CMD')
parser.add_argument('--foo', nargs='*')
parser.parse_args(['--foo', 'a', 'b', 'c', 'd', 'e'])
```
若 nargs='+',则要求该参数至少提供 1 个值,如:
```
parser = argparse.ArgumentParser(prog='CMD')
parser.add_argument('--foo', nargs='+')
parser.parse_args(['--foo', 'a'])
```
小结下。
add_argument 方法定义**单个**的命令行参数应当**如何解析**。每个形参更多的描述:
- name or flags - 一个命名或者一个选项字符串的列表,例如 foo 或 -f, --foo。
- action - 当参数在命令行中出现时使用的动作基本类型。
- nargs - 命令行参数应当消耗的数目。
- const - 被一些 action 和 nargs 选择所需求的常数。
- default - 当参数未在命令行中出现时使用的值。
- type - 命令行参数应当被转换成的类型。
- choices - 可用的参数的容器。
- required - 此命令行选项是否可省略 (仅选项可用)。
- help - 一个此选项作用的简单描述。
- metavar - 在使用方法消息中使用的参数值示例。
- dest - 解析后的参数名称,默认情况下,对于可选参数选取最长的名称,中划线转换为下划线.
然后一个比较完整的,需要在命令行中执行的例子如下,对应的python文件是argv_argparse.py.
调用方式:
```Shell
python argv_argparse.py -h
python argv_argparse.py xiaoming 1991.11.11
python argv_argparse.py xiaoming 1991.11.11 -p xiaohong xiaohei -a 25 -r han -s female -o 1 2 3 4 5 6
```
-h表示调出help信息。
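The file argv_argparse.py itself is not included here; the sketch below is only a guess at a parser that would accept the invocations above, and every option name and meaning is an assumption.
```python
# argv_argparse.py -- hypothetical sketch matching the sample invocations above
import argparse

parser = argparse.ArgumentParser(description='A fuller argparse example')
parser.add_argument('name', help='name, e.g. xiaoming')
parser.add_argument('birthday', help='birthday, e.g. 1991.11.11')
parser.add_argument('-p', '--parents', nargs='+', help='parent names')
parser.add_argument('-a', '--age', type=int, help='age')
parser.add_argument('-r', '--family-name', dest='family_name', help='family name')
parser.add_argument('-s', '--sex', choices=('male', 'female'), help='sex')
parser.add_argument('-o', '--others', nargs='+', type=int, help='other numbers')

args = parser.parse_args()
print(args)
```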
以上是参数动作和参数类别相关内容,接下来继续深入了解 argparse 的功能,包括如何修改参数前缀,如何定义参数组,如何定义嵌套的解析器,如何编写自定义动作等。
1. 帮助
自动生成帮助
当你在命令行程序中指定 -h 或 --help 参数时,都会输出帮助信息。而 argparse 可通过指定 add_help 入参为 True 或不指定,以达到自动输出帮助信息的目的。
```
import argparse
parser = argparse.ArgumentParser(add_help=True)
parser.add_argument('--foo')
parser.parse_args(['-h'])
```
自定义帮助
ArgumentParser 使用 formatter_class 入参来控制所输出的帮助格式。 比如,通过指定 formatter_class=argparse.RawTextHelpFormatter,我们可以让帮助内容遵循原始格式:
```
import argparse
parser = argparse.ArgumentParser(
add_help=True,
formatter_class=argparse.RawTextHelpFormatter,
description="""description raw formatted"""
)
parser.add_argument(
'-a', action="store_true",
help="""argument raw formatted"""
)
parser.parse_args(['-h'])
```
2. 参数组
有时候,我们需要给参数分组,以使得在显示帮助信息时能够显示到一起。
比如某命令行支持三个参数选项 --user、--password和--push,前两者需要放在一个名为 authentication 的分组中以表示它们是身份认证信息。那么我们可以用 ArgumentParser.add_argument_group 来满足:
```
import argparse
parser = argparse.ArgumentParser()
group = parser.add_argument_group('authentication')
group.add_argument('--user', action="store")
group.add_argument('--password', action="store")
parser.add_argument('--push', action='store')
parser.parse_args(['-h'])
```
3. 选项参数前缀
不知你是否注意到,在不同平台上命令行程序的选项参数前缀可能是不同的。比如在 Unix 上,其前缀是 -;而在 Windows 上,大多数命令行程序(比如 findstr)的选项参数前缀是 /。
在 argparse 中,选项参数前缀默认采用 Unix 命令行约定,也就是 -。但它也支持自定义前缀,下面是一个例子:
```
import argparse
parser = argparse.ArgumentParser(
description='Option prefix',
prefix_chars='-+/',
)
parser.add_argument('-power', action="store_false",
default=None,
help='Set power off',
)
parser.add_argument('+power', action="store_true",
default=None,
help='Set power on',
)
parser.add_argument('/win',
action="store_true",
default=False)
parser.parse_args(['-power'])
parser.parse_args(['+power', '/win'])
```
在这个例子中,我们指定了三个选项参数前缀 -、+和/,从而:
- 通过指定选项参数 -power,使得 power=False
- 通过指定选项参数 +power,使得 power=True
- 通过指定选项参数 /win,使得 win=True
### 读取配置文件
很多情况下,我们需要通过配置文件来定义一些参数性质的数据,因为配置文件作为一种可读性很好的格式,非常适用于存储程序中的配置数据。
在每个配置文件中,配置数据会被分组(比如例子中的“installation”、 “debug” 和 “server”)。 每个分组在其中指定对应的各个变量值。那么如何读取普通.ini格式的配置文件?
在python中,configparser 模块能被用来读取配置文件。例如,假设有配置文件config.ini。下面给出读取代码:
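The config.ini file itself is not shown in the original notes; the hedged sketch below writes a file with the sections and keys that the read code expects (all values are made up).
```python
from configparser import ConfigParser

# Write a config.ini with the sections/keys used by the read example below.
cfg = ConfigParser()
cfg['installation'] = {'library': '/usr/local/lib'}
cfg['debug'] = {'log_errors': 'true'}
cfg['server'] = {'port': '8080',
                 'nworkers': '32',
                 'signature': 'Example server signature'}
with open('config.ini', 'w') as f:
    cfg.write(f)
```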
```
from configparser import ConfigParser
cfg = ConfigParser()
cfg.read('config.ini')
cfg.sections()
cfg.get('installation','library')
cfg.getboolean('debug','log_errors')
cfg.getint('server','port')
cfg.getint('server','nworkers')
print(cfg.get('server','signature'))
```
还可以读取一个section下的所有keys或所有键值对,参考:[Python 读取写入配置文件 —— ConfigParser](https://blog.csdn.net/jiede1/article/details/79064780)
```
cfg.options("installation")
cfg.items("installation")
```
如果需要,还能修改配置并使用 cfg.write() 方法将其写回到文件中。例如:
```
cfg.set('server','port','9000')
cfg.set('debug','log_errors','False')
import sys
cfg.write(sys.stdout)
```
### 日志功能
即在脚本和程序中将诊断信息写入日志文件。打印日志最简单方式是使用 logging 模块。
这小节的内容主要参考了:
- [python必掌握模块(四)logging模块用法](https://zhuanlan.zhihu.com/p/56968001)
- [Python模块学习之Logging日志模块](https://y4er.com/post/python-logging/)
日志(logging)是学习任何编程语言时都有必要掌握的核心内容。因为当把python代码放入**生产环境**中的时候,我们只能看到代码运行的结果,而看不到代码每一步执行过程中的状态。
如果代码中间过程出现了问题的话,logging库的引用得出的日志记录可以帮助我们排查程序运行错误步骤的。方便我们修复代码,快速排查问题。
logging模块是Python内置的标准模块,主要用于输出运行日志,可以设置输出日志的等级、日志保存路径、日志文件回滚等;相比print,具备如下优点:
- 可以通过设置不同的日志等级,在release版本中只输出重要信息,而不必显示大量的调试信息:print将所有信息都输出到标准输出中,严重影响开发者从标准输出中查看其它数据;logging则可以由开发者决定将信息输出到什么地方,以及怎么输出
- logging具有更灵活的格式化功能,比如运行时间、模块信息
- print输出都在控制台上,logging可以输出到任何位置,比如文件甚至是远程服务器
logging 的模块结构如下:
- Logger 记录日志时创建的对象,调用其方法来传入日志模板和信息生成日志记录
- Log Record Logger对象生成的一条条记录
- Handler 处理日志记录,输出或者存储日志记录
- Formatter 格式化日志记录
- Filter 日志过滤器
- Parent Handler Handler之间存在分层关系
```
import logging
import sys
logger = logging.getLogger("Your Logger")
logger.setLevel(logging.DEBUG)
# 标准输出流,输出到控制台,用sys.stdout的话,就是输出白色的
# handler = logging.StreamHandler(sys.stdout)
handler = logging.StreamHandler()
formatter = logging.Formatter(fmt='%(asctime)s - %(name)s - %(levelname)s - %(message)s', datefmt='%Y/%m/%d %H:%M:%S')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.info("this is info msg")
logger.debug("this is debug msg")
logger.warning("this is warn msg")
logger.error("this is error msg")
```
理解下这个例子。首先,创建了一个logger对象,来作为生成日志记录的对象,然后设置输出级别(所有级别低于此级别的日志消息都会被忽略掉)。
接着创建了一个 StreamHandler 对象来处理日志。
随后创建一个formatter对象来格式化输出日志记录。构造最终的日志消息的时候我们使用了%操作符来格式化消息字符串。
然后把formatter 赋给 handler。
最后handler处理器添加到logger对象,完成整个处理流程。
下面稍作补充解释。
前面说到,设置输出级别时候,低于此级别的消息就被忽略了,低于是怎么判断的?其实logging 的 这些 level 常数是对应特定的整数值的,所以设置logging level的时候,也可以直接使用对应的整数来赋值。
```
print("logging.DEBUG:",logging.DEBUG)
print("logging.INFO:",logging.INFO)
print("logging.WARNING:",logging.WARNING)
print("logging.ERROR:",logging.ERROR)
print("logging.CRITICAL:",logging.CRITICAL)
```
从值上就可以看出来 DEBUG最低,然后依次往上,级别越来越高。
logging 提供的Handler有很多,比如:
- StreamHandler logging.StreamHandler 日志输出到流,可以是 sys.stderr,sys.stdout 或者文件
- FileHandler logging.FileHandler 日志输出到文件
- SMTPHandler logging.handlers.SMTPHandler 远程输出日志到邮件地址
- SysLogHandler logging.handlers.SysLogHandler 日志输出到syslog
- HTTPHandler logging.handlers.HTTPHandler 通过”GET”或者”POST”远程输出到HTTP服务器
如果需要设置一个全局的logger以供使用,可以参考:
- https://blog.csdn.net/weixin_42526352/article/details/90242840
- https://blog.csdn.net/brucewong0516/article/details/82817008
这里也给出例子--globalLog.py。
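globalLog.py is not shown in the original notes; below is a minimal guess at what it might define, based on how hydro_logger is used in the next cell (an assumption, not the original file).
```python
# globalLog.py -- hypothetical sketch of a module-level, reusable logger
import logging

hydro_logger = logging.getLogger('hydro')
hydro_logger.setLevel(logging.DEBUG)

_handler = logging.StreamHandler()
_handler.setFormatter(logging.Formatter(
    fmt='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    datefmt='%Y/%m/%d %H:%M:%S'))
hydro_logger.addHandler(_handler)
```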
```
from globalLog import hydro_logger
hydro_logger.info("this is info msg")
hydro_logger.debug("this is debug msg")
hydro_logger.warning("this is warn msg")
hydro_logger.error("this is error msg")
```
## 测试、调试与异常
在Python测试代码之前没有编译器来分析代码,因此使得测试成为开发的一个重要部分。这里记录一些关于测试、调试和异常处理的常见问题。
### 关于单元测试
之前basic部分已经记录了一些unittest内容,这里补充一些诸如mock等概念的基本内容。主要参考了:[Python Mocking, You Are A Tricksy Beast](https://medium.com/python-pandemonium/python-mocking-you-are-a-tricksy-beast-6c4a1f8d19b2),[An Introduction to Mocking in Python](https://www.toptal.com/python/an-introduction-to-mocking-in-python)和[Understanding the Python Mock Object Library](https://realpython.com/python-mock-library/#patch-as-a-decorator)。
#### 为什么要使用mock
测试是验证逻辑是否正确的可靠高效的方式。不过由于一些复杂逻辑和依赖库,会使得测试变得困难。一个使用Python mock 对象的理由就是在测试过程中控制代码的行为。比如代码发送HTTP请求到外部服务,只有当服务的行为符合您的预期时,您的测试才会可预测地执行。有时,这些外部服务行为的临时更改可能导致测试套件中的间歇性故障。因此,我们想要使我们的代码在一个受控的环境下测试。而使用mock对象可以做到这一点。
有时,很难测试代码的某些环节,比如except代码块,if代码块,因为可能不出现这样的场景,这时候使用mock对象也可以帮助控制代码执行的路径来使程序能运行到这些地方,提升code coverage。
另一个原因是更好地理解如何使用代码的真实副本。一个python mock对象包含关于其用法的数据,您可以检查这些数据,比如:是否调用了某个方法,如何调用某个方法,多久一次调用某个方法。
此外,有时候我们会面临这样的情形,即我们想测试我们的代码,但是不想产生一些脏结果,比如:我们想要测试facebook的上传功能,但是并不想真的上传一个内容上去。再比如,写一个弹出一个CD drive的脚本,或者一个从/tmp文件夹清除缓存的服务,或者一个绑定到TCP端口的socket服务,这些在unittest下都会产生dirty结果。作为写代码的,更关心的是您的库成功地调用了系统函数来弹出CD,而不是每次运行测试时都还需要打开CD drive。保持单元测试的效率和性能意味着尽量避免运行自动化测试的缓慢代码。
还有,个人认为,在实际测试算法代码的过程中,后面函数会用到前面过程的数据结果,如果每次都从头测试,那么花费时间会很长,因此保存中间计算结果,然后使用mock来代替前面的函数过程,直接读取中间结果来供后面代码测试也是十分必要的。
而unittest.mock可以克服这些困难。接下来就看看mock究竟是什么。
#### What Is Mocking?
mock就是“看起来像真的”的意思,在**测试环境下**,一个mock对象**代替模拟**一个真实的对象。是一个灵活有力的提升测试质量的工具。
unittest.mock库提供了一个叫做Mock的类,可以使用它来模拟代码中的真实对象。该库还提供了一个函数patch(),它用Mock实例替换掉代码中的真实对象。可以将patch()用作decorator,也可以用作context manager,取决于想要控制mock对象作用的scope。一旦退出指定的scope,patch就会立刻用原来的真实对象换回mock对象。
首先先看看Mock。
```
from unittest.mock import Mock
mock = Mock()
mock
```
现在就可以使用 Mock来替代代码中的对象了。可以传递它为一个函数的参数或者重定义一个对象。形如:
```python
# Pass mock as an argument to do_something()
do_something(mock)
# Patch the json library
json = mock
```
注意,当你替换一个对象时,Mock必须要看起来真的像这个对象。比如要mock json库,那么程序调用dumps函数,你的mock对象里必须得有一个dumps函数。
```
mock.some_attribute
mock.do_something()
```
Mock可以创建任意属性,可以代替任意对象。用一下之前提到的json例子:
```
json = Mock()
json.dumps()
```
可以看到很容易地就mock了json库和其dumps函数,dumps可以接受任意参数,返回值也是一个mock对象,因此mock可以用到很复杂的环境下,很灵活。
接下来,看看如何用mock更好地理解代码。Mock实例存储着它们被如何使用的数据。
首先可以断言程序使用了你期望的一个对象。
```
from unittest.mock import Mock
json = Mock()
json.loads('{"key": "value"}')
json.loads.assert_called()
json.loads.assert_called_with('{"key": "value"}')
json.loads.assert_called_once_with('{"key": "value"}')
json.loads('{"key": "value"}')
json.loads.assert_called_once()
json.loads.assert_called_once_with('{"key": "value"}')
json.loads.assert_not_called()
```
.assert_called()函数确保了调用mocked函数。 .assert_called_once()可以检查调用的次数。
第二,可以查看特殊属性以理解应用是如何使用该对象的。
```
from unittest.mock import Mock
json = Mock()
json.loads('{"key": "value"}')
json.loads.call_count
json.loads.call_args
json.loads.call_args_list
json.method_calls
```
通过以上测试代码可以使用各类属性来保证对象行为是想要的。这是一些固有的方法,接下来看看如何定制mocked方法。
管理一个Mock的返回值。一个使用mocks的原因就是控制代码的行为。一种十分常用的方式就是指定一个函数的返回值。
首先,创建一个文件my_calendar.py,代码见文件。然后执行下列语句:
```
!python my_calendar.py
```
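The file my_calendar.py is referenced but not shown; the sketch below is a plausible guess based on the is_weekday() logic discussed next (an assumption, not the original file). Its assert is what makes the run above fail on weekends.
```python
# my_calendar.py -- hypothetical sketch
from datetime import datetime

def is_weekday():
    today = datetime.today()
    # Monday is 0, Sunday is 6
    return 0 <= today.weekday() < 5

# Running "python my_calendar.py" asserts that today is a weekday.
assert is_weekday()
```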
上述代码如果在周末的时候运行是会报错的。而平常是正确的。写测试代码时候,很重要的是确保结果是可预测的。可以使用Mock来去除代码中的不确定性。如下所示,通过Mock .today() 指定返回值来实现。
```
import datetime
from unittest.mock import Mock
# Save a couple of test days
tuesday = datetime.datetime(year=2019, month=1, day=1)
saturday = datetime.datetime(year=2019, month=1, day=5)
# Mock datetime to control today's date
datetime = Mock()
def is_weekday():
today = datetime.datetime.today()
# Python's datetime library treats Monday as 0 and Sunday as 6
return (0 <= today.weekday() < 5)
# Mock .today() to return Tuesday
datetime.datetime.today.return_value = tuesday
# Test Tuesday is a weekday
assert is_weekday()
# Mock .today() to return Saturday
datetime.datetime.today.return_value = saturday
# Test Saturday is not a weekday
assert not is_weekday()
```
#### patch()
前面已经提到,unittest.mock还有一个很好的机制:patch(), 装饰器,补丁。
接下来通过实例分析。
看下mock官方的说明:“mock is a library for testing in Python. It allows you to replace parts of your system under test with mock objects and make assertions about how they have been used.”
关键是如何理解“replace parts of your system”。这里的parts指的是什么。实际上可以包括:
- functions
- classes,objects
可以使用Mock 或 MagicMock 类的实例的“mock 对象”来替代它们。
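The file simple.py referred to below is not shown in the original notes; here is a minimal sketch of what it might contain, inferred from the calls that follow (simple_function() and SimpleClass().explode()). This is an assumption, not the original file.
```python
# simple.py -- hypothetical contents, inferred from how it is used below
def simple_function():
    return 'You have called simple_function'

class SimpleClass:
    def explode(self):
        return 'KABOOM!'
```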
比如mocking functions,以一个简单的函数为例,文件 simple.py。在测试中调用该函数有两种方式,其一是直接使用:
```
import simple
from unittest import mock
def use_simple_function():
result = simple.simple_function()
print(result)
use_simple_function()
```
其二就是利用mock进行测试,为了模仿 simple_function,可以使用 mock.patch decorator。该decorator可以使用户通过以‘package.module.FunctionName’形式传入字符串参数指定想要mock的内容。对于本例,即module simple和函数simple_function,decorator如下第一行,函数可以表达为如下所示。其中函数mock_simple_function的参数是MagicMock类对象,用来代替想要mock的函数。
```
@mock.patch('simple.simple_function')
def mock_simple_function(mock_simple_func):
print(mock_simple_func)
mock_simple_function()
```
通过语句“@mock.patch(‘simple.simple_function’)”表明想使用MagicMock对象表达来替代simple_function,这个对象放入了mock_simple_func这一函数形参中。如上代码执行结果所示,输出是一个MagicMock对象,就是它替代了simple_function被调用。可从以下代码中看出:
```
@mock.patch('simple.simple_function')
def mock_simple_function(mock_simple_func):
print(mock_simple_func)
print(simple.simple_function)
result = simple.simple_function()
print(result)
mock_simple_function()
```
不过现在还有个更重要的问题:为什么创建新的MagicMock对象,究竟怎么调用这个对象。所以要看看MagicMock这个类。MagicMock之所以叫magic,是因为它有大多数python的magic函数的默认实现,即那些名称前后有双下划线的函数,可以查看[这里](http://www.ironpythoninaction.com/magic-methods.html)。比如__call__,即可以让一个类对象可以像函数那样被调用。因此,MagicMock对象是可以直接像函数那样被调用的。
回到例子中,现在已经mock了simple_function,但是还没有使用它来做什么。那么现在想返回simple_function()的结果要怎么做呢?可以使用MagicMock的return_value属性来实现:
```
@mock.patch('simple.simple_function')
def mock_simple_function(mock_simple_func):
mock_simple_func.return_value = "You have mocked simple_function"
result = simple.simple_function()
print(result)
mock_simple_function()
```
从上面结果可以看出很好地模仿了simple_function函数结果。
如果除了返回值之外,还想其他功能,可以使用MagicMock.side_effect 。比如想要测试一个错误并抛出一个异常。
```
def side_effect_function():
raise FloatingPointError("A disastrous floating point error has occurred")
@mock.patch('simple.simple_function')
def mock_simple_function_with_side_effect(mock_simple_func):
mock_simple_func.side_effect = side_effect_function
result = simple.simple_function()
print(result)
mock_simple_function_with_side_effect()
```
接下来看一看如何mock类。在simple.py 文件中定义一个类。然后定义一个调用的函数,首先还是传统的调用方式:
```
import simple
def use_simple_class():
inst = simple.SimpleClass()
print(inst.explode())
use_simple_class()
```
然后接下来看看mock下如何操作。依然使用@mock.patch decorator
```
from unittest import mock
@mock.patch("simple.SimpleClass")
def mock_simple_class(mock_class):
print(mock_class)
mock_simple_class()
```
通过@mock.patch decorator ,参数mock_class使用了MagicMock对象来代替了SimpleClass对象。
```
@mock.patch("simple.SimpleClass")
def mock_simple_class(mock_class):
print(mock_class)
print(simple.SimpleClass)
mock_simple_class()
```
接下来创建一个SimpleClass实例,然后打印,看看会发生什么。
```
@mock.patch("simple.SimpleClass")
def mock_simple_class(mock_class):
print(mock_class)
print(simple.SimpleClass)
inst = simple.SimpleClass()
print(inst)
mock_simple_class()
```
可以看出,调用SimpleClass() 就调用了MagicMock对象作为函数来创建了MagicMock对象。
到这里,mock一个函数和mock一个类并没有什么区别。不过在类中,使用的更多是其对象。从下面的例子中可以看出,类的MagicMock对象返回值是类对象。
简单小结一下,就是mock一个class时创建了一个MagicMock对象。创建一个类对象时,新的MagicMock对象也被创建,另外类的MagicMock对象返回值也就是类对象的MagicMock对象。
```
@mock.patch("simple.SimpleClass")
def mock_simple_class(mock_class):
print(mock_class)
print(simple.SimpleClass)
inst = simple.SimpleClass()
print(inst)
print(mock_class.return_value)
mock_simple_class()
```
此外,可以在类对象中通过explode函数来设置return_value,以mock 类对象的返回值。
```
@mock.patch("simple.SimpleClass")
def mock_simple_class(mock_class):
mock_class.return_value.explode.return_value = "BOO!"
inst = simple.SimpleClass()
result = inst.explode()
print(result)
print(mock_class.return_value)
mock_simple_class()
```
### 给程序性能测试
测试程序运行所花费的时间并做性能测试。如果只是简单的想测试下程序整体花费的时间, 通常使用Unix时间函数就行了,比如:
```code
bash % time python3 someprogram.py
real 0m13.937s
user 0m12.162s
sys 0m0.098s
bash %
```
如果你还需要一个程序各个细节的详细报告,可以使用 cProfile 模块:
```code
bash % python3 -m cProfile someprogram.py
bash %
```
不过通常情况是介于这两个极端之间。比如已经知道代码运行时在少数几个函数中花费了绝大部分时间。 对于这些函数的性能测试,可以使用一个简单的装饰器。要使用这个装饰器,只需要将其放置在要进行性能测试的函数定义前即可。
```
# timethis.py
import time
from functools import wraps
def timethis(func):
@wraps(func)
def wrapper(*args, **kwargs):
start = time.perf_counter()
r = func(*args, **kwargs)
end = time.perf_counter()
print('{}.{} : {}'.format(func.__module__, func.__name__, end - start))
return r
return wrapper
@timethis
def countdown(n):
while n > 0:
n -= 1
countdown(10000000)
```
对于测试很小的代码片段运行性能,使用 timeit 模块会很方便
```
from timeit import timeit
timeit('math.sqrt(2)', 'import math')
timeit('sqrt(2)', 'from math import sqrt')
timeit('math.sqrt(2)', 'import math', number=10000000)
timeit('sqrt(2)', 'from math import sqrt', number=10000000)
```
当执行性能测试的时候,需要注意的是你获取的结果都是近似值。 time.perf_counter() 函数会在给定平台上获取最高精度的计时值。 不过,它仍然还是基于时钟时间,很多因素会影响到它的精确度,比如机器负载。 如果你对于执行时间更感兴趣,使用 time.process_time() 来代替它。
```
from functools import wraps
def timethis(func):
@wraps(func)
def wrapper(*args, **kwargs):
start = time.process_time()
r = func(*args, **kwargs)
end = time.process_time()
print('{}.{} : {}'.format(func.__module__, func.__name__, end - start))
return r
return wrapper
```
### 加速程序运行
程序运行太慢,想在不使用复杂技术比如C扩展或JIT编译器的情况下加快程序运行速度。
关于程序优化的第一个准则是“不要优化”,第二个准则是“不要优化那些无关紧要的部分”。 如果你的程序运行缓慢,首先得对它进行性能测试找到问题所在。
通常来讲会发现程序在少数几个热点地方花费了大量时间, 比如内存的数据处理循环。一旦定位到这些点,就可以使用下面这些实用技术来加速程序运行。
#### 使用函数
很多程序员刚开始会使用Python语言写一些简单脚本。 当编写脚本的时候,通常习惯了写毫无结构的代码。比如:
```python
# somescript.py
import sys
import csv
with open(sys.argv[1]) as f:
for row in csv.reader(f):
# Some kind of processing
pass
```
像这样定义在全局范围的代码运行起来要比定义在函数中运行慢的多。 这种速度差异是由于局部变量和全局变量的实现方式(**使用局部变量要更快些**)。 因此,如果想让程序运行更快些,只需要将脚本语句放入函数中即可:
```python
# somescript.py
import sys
import csv
def main(filename):
with open(filename) as f:
for row in csv.reader(f):
# Some kind of processing
pass
main(sys.argv[1])
```
根据经验,使用函数带来15-30%的性能提升是很常见的。
局部变量会比全局变量运行速度快。 对于频繁访问的名称,通过将这些名称变成局部变量可以加速程序运行。
对于类中的属性访问也同样适用于这个原理。 通常来讲,查找某个值比如 self.name 会比访问一个局部变量要慢一些。 在内部循环中,可以将某个需要频繁访问的属性放入到一个局部变量中。
#### 尽可能去掉属性访问
每一次**使用点(.)操作符来访问属性的时候会带来额外的开销**。 它会触发特定的方法,比如 __getattribute__() 和 __getattr__() ,这些方法会进行字典查找等操作。
通常你可以使用 from module import name 这样的导入形式,以及使用绑定的方法。比如下面的函数是耗时的:
```python
import math
def compute_roots(nums):
result = []
for n in nums:
result.append(math.sqrt(n))
return result
# Test
nums = range(1000000)
for n in range(100):
r = compute_roots(nums)
```
可以修改compute_roots函数如下:
```
from math import sqrt
def compute_roots(nums):
result = []
result_append = result.append
for n in nums:
result_append(sqrt(n))
return result
```
修改后的版本运行时间会减少一些。唯一不同之处就是消除了属性访问。 用 sqrt() 代替了 math.sqrt() 。 The result.append() 方法被赋给一个局部变量 result_append ,然后在内部循环中使用它。
这些改变只有在大量重复代码中才有意义,比如循环。 因此,这些优化也只是在某些特定地方才应该被使用。
#### 避免不必要的抽象
任何时候当你使用额外的处理层(比如装饰器、属性访问、描述器)去包装你的代码时,都会让程序运行变慢。
```
class A:
def __init__(self, x, y):
self.x = x
self.y = y
@property
def y(self):
return self._y
@y.setter
def y(self, value):
self._y = value
from timeit import timeit
a = A(1,2)
timeit('a.x', 'from __main__ import a')
timeit('a.y', 'from __main__ import a')
```
访问属性y相比属性x而言慢的不止一点点,大概慢了4.5倍。 如果你在意性能的话,那么就需要重新审视下对于y的属性访问器的定义是否真的有必要了。 如果没有必要,就使用简单属性吧。 如果仅仅是因为其他编程语言需要使用getter/setter函数就去修改代码风格,这个真的没有必要。
#### 使用内置的容器
内置的数据类型比如字符串、元组、列表、集合和字典都是使用C来实现的,运行起来非常快。 如果想自己实现新的数据结构(比如链接列表、平衡树等), 那么要想在性能上达到内置的速度几乎不可能,因此,还是乖乖的使用内置的吧。
另外,还要避免创建不必要的数据结构或复制。
#### 并行编程
这部分有参考:[Python性能优化的20条建议](https://segmentfault.com/a/1190000000666603)。
可以通过内置的模块multiprocessing实现下面几种并行模式:
多进程:对于CPU密集型的程序,可以使用multiprocessing的Process,Pool等封装好的类,通过多进程的方式实现并行计算。但是因为进程中的通信成本比较大,对于进程之间需要大量数据交互的程序效率未必有大的提高。
多线程:对于IO密集型的程序,multiprocessing.dummy模块使用multiprocessing的接口封装threading,使得多线程编程也变得非常轻松(比如可以使用Pool的map接口,简洁高效)。
分布式:multiprocessing中的Managers类提供了可以在不同进程之共享数据的方式,可以在此基础上开发出分布式的程序。
不同的业务场景可以选择其中的一种或几种的组合实现程序性能的优化。
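A minimal sketch (not from the original notes) of the first two modes, using the same Pool.map interface for processes and for threads:
```python
from multiprocessing import Pool                        # process pool: CPU-bound work
from multiprocessing.dummy import Pool as ThreadPool    # thread pool: IO-bound work

def cpu_task(n):
    return sum(i * i for i in range(n))

if __name__ == '__main__':
    with Pool(4) as pool:
        print(pool.map(cpu_task, [10 ** 5] * 8))
    with ThreadPool(4) as pool:
        print(pool.map(len, ['spam', 'ham', 'eggs']))
```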
#### 讨论
**在优化之前,有必要先研究下使用的算法**。 选择一个复杂度为 O(n log n) 的算法要比你去调整一个复杂度为 O(n**2) 的算法所带来的性能提升要大得多。
如果你觉得你还是得进行优化,那么请从整体考虑。 作为一般准则,不要对程序的每一个部分都去优化,因为这些修改会导致代码难以阅读和理解。 你应该**专注于优化产生性能瓶颈的地方,比如内部循环**。
对循环的优化所遵循的原则是尽量减少循环过程中的计算量,有多重循环的尽量将内层的计算提到上一层———[Python 代码性能优化技巧](https://www.ibm.com/developerworks/cn/linux/l-cn-python-optim/index.html)。
这里对循环做些补充,参考:[Python性能诀窍](http://pfmiles.github.io/blog/python-speed-performance-tips/).
Python支持好几种循环结构。for语句是最常用的。它遍历一个序列的每个元素,将每个元素赋值给循环变量。如果你的循环体很简单,for循环本身的解释成本将占据大部分的开销。这个时候**map函数**就能派上用场了。你可以将map函数看作是for循环采用C代码来实现。唯一的约束是“**循环体”必须是一个函数调用**。**list comprehension 列表生成式**除了语法上的便利性之外,他们常常和等价的map调用一样快甚至更快。比如:
```python
newlist = []
for word in oldlist:
newlist.append(word.upper())
```
可以使用map函数将这个循环由解释执行推到编译好的C代码中去执行:
```python
newlist = map(str.upper, oldlist)
```
List comprehension在python 2.0的时候被加入。它们提供了一种更紧凑的语法和更高效的方式来表达上面的for循环:
```python
newlist = [s.upper() for s in oldlist]
```
如果优化要求比较高,本节的这些简单技术满足不了,那么可以研究下基于即时编译(JIT)技术的一些工具。 例如,PyPy工程是Python解释器的另外一种实现,它会分析程序运行并对那些频繁执行的部分生成本机机器码。 它有时候能极大地提升性能,通常可以接近C代码的速度。 不过(在写作本文时)PyPy对Python 3的支持还不完整。
还可以考虑下Numba工程。Numba是一个动态编译器:在你用装饰器选中某些Python函数后,它会用LLVM把这些函数编译成本地机器码,同样可以极大地提升性能。但是,跟PyPy一样,(在写作本文时)它对Python 3的支持还停留在实验阶段。
最后引用John Ousterhout说过的话作为结尾:“最好的性能优化是从不工作到工作状态的迁移”。 直到你真的需要优化的时候再去考虑它。确保你程序正确的运行通常比让它运行更快要更重要一些(至少开始是这样的).
Analyse some recent GOLEM shots from 25000 to 26023.
# Getting data
The dataset has been created from the [GolSQL tool](http://golem.fjfi.cvut.cz/utils/miner), with the following URL used to generate the dataset:
http://golem.fjfi.cvut.cz/utils/miner?new_diagn=electron_density%3Areliability&action=Add&xaxis=ShotNo&start_shot=21000&end_shot=29162&diagn_0=breakdown_field&filter_0=none&subplot_0=&yrange0_0=&yrange1_0=&scale_0=linear&diagn_1=breakdown_probability&filter_1=none&subplot_1=&yrange0_1=&yrange1_1=&scale_1=linear&diagn_2=breakdown_rate&filter_2=none&subplot_2=&yrange0_2=&yrange1_2=&scale_2=linear&diagn_3=breakdown_rate_err&filter_3=none&subplot_3=&yrange0_3=&yrange1_3=&scale_3=linear&diagn_4=breakdown_time&filter_4=none&subplot_4=&yrange0_4=&yrange1_4=&scale_4=linear&diagn_5=breakdown_voltage&filter_5=none&subplot_5=&yrange0_5=&yrange1_5=&scale_5=linear&diagn_6=cb&filter_6=none&subplot_6=&yrange0_6=&yrange1_6=&scale_6=linear&diagn_7=cbd&filter_7=none&subplot_7=&yrange0_7=&yrange1_7=&scale_7=linear&diagn_8=ccd&filter_8=none&subplot_8=&yrange0_8=&yrange1_8=&scale_8=linear&diagn_9=cst&filter_9=none&subplot_9=&yrange0_9=&yrange1_9=&scale_9=linear&diagn_10=chamber_inductance&filter_10=none&subplot_10=&yrange0_10=&yrange1_10=&scale_10=linear&diagn_11=chamber_resistance&filter_11=none&subplot_11=&yrange0_11=&yrange1_11=&scale_11=linear&diagn_12=chamber_temperature&filter_12=none&subplot_12=&yrange0_12=&yrange1_12=&scale_12=linear&diagn_13=discharge_aborted&filter_13=none&subplot_13=&yrange0_13=&yrange1_13=&scale_13=linear&diagn_14=electron_confinement_t98&filter_14=none&subplot_14=&yrange0_14=&yrange1_14=&scale_14=linear&diagn_15=electron_confinement_time&filter_15=none&subplot_15=&yrange0_15=&yrange1_15=&scale_15=linear&diagn_16=electron_temperature_max&filter_16=none&subplot_16=&yrange0_16=&yrange1_16=&scale_16=linear&diagn_17=lb&filter_17=none&subplot_17=&yrange0_17=&yrange1_17=&scale_17=linear&diagn_18=loop_voltage_max&filter_18=none&subplot_18=&yrange0_18=&yrange1_18=&scale_18=linear&diagn_19=loop_voltage_mean&filter_19=none&subplot_19=&yrange0_19=&yrange1_19=&scale_19=linear&diagn_20=plasma&filter_20=none&subplot_20=&yrange0_20=&yrange1_20=&scale_20=linear&diagn_21=plasma_life&filter_21=none&subplot_21=&yrange0_21=&yrange1_21=&scale_21=linear&diagn_22=toroidal_field_mean&filter_22=none&subplot_22=&yrange0_22=&yrange1_22=&scale_22=linear&diagn_23=toroidal_field_max&filter_23=none&subplot_23=&yrange0_23=&yrange1_23=&scale_23=linear&diagn_24=ub&filter_24=none&subplot_24=&yrange0_24=&yrange1_24=&scale_24=linear&diagn_25=ubd&filter_25=none&subplot_25=&yrange0_25=&yrange1_25=&scale_25=linear&diagn_26=ucd&filter_26=none&subplot_26=&yrange0_26=&yrange1_26=&scale_26=linear&diagn_27=ust&filter_27=none&subplot_27=&yrange0_27=&yrange1_27=&scale_27=linear&diagn_28=tst&filter_28=none&subplot_28=&yrange0_28=&yrange1_28=&scale_28=linear&diagn_29=tcd&filter_29=none&subplot_29=&yrange0_29=&yrange1_29=&scale_29=linear&diagn_30=tb&filter_30=none&subplot_30=&yrange0_30=&yrange1_30=&scale_30=linear&diagn_31=tbd&filter_31=none&subplot_31=&yrange0_31=&yrange1_31=&scale_31=linear&diagn_32=pressure&filter_32=none&subplot_32=&yrange0_32=&yrange1_32=&scale_32=linear&diagn_33=pressure_chamber&filter_33=none&subplot_33=&yrange0_33=&yrange1_33=&scale_33=linear&diagn_34=pressure_initial&filter_34=none&subplot_34=&yrange0_34=&yrange1_34=&scale_34=linear&diagn_35=pressure_request&filter_35=none&subplot_35=&yrange0_35=&yrange1_35=&scale_35=linear&diagn_36=plasma_current_mean&filter_36=none&subplot_36=&yrange0_36=&yrange1_36=&scale_36=linear&diagn_37=plasma_current_decay&filter_37=none&subplot_37=&yrange0_37=&yrange1_37=&scale_37=linear&diagn_38=zeff&filter_38=none&subplot_38=&yrange0_38=&yrange1_38=&scale_38=linear&diagn_39=inp
ut_power_mean&filter_39=none&subplot_39=&yrange0_39=&yrange1_39=&scale_39=linear&diagn_40=input_power_plasma_mean&filter_40=none&subplot_40=&yrange0_40=&yrange1_40=&scale_40=linear&diagn_41=electron_density_mean&filter_41=none&subplot_41=&yrange0_41=&yrange1_41=&scale_41=linear&diagn_42=electron_density_equilibrium&filter_42=none&subplot_42=&yrange0_42=&yrange1_42=&scale_42=linear
```
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import seaborn
import numpy as np
dataset_orig = pd.read_csv('close_shots.txt', delimiter='\s+', index_col='shots')
dataset_orig.head()
```
# Cleaning data
Filter bad shots from the dataset, as advised in the [GOLEM DataMining Page](http://golem.fjfi.cvut.cz/wiki/Handling/DataMining/data_mining)
```
print(len(dataset_orig)) # number of shot before cleaning
# Drop failed plasma
dataset = dataset_orig.dropna(subset=['plasma'])
# Drop plasma lasting longer than 50 ms
dataset = dataset.drop(dataset[dataset['plasma_life'] > 50e-3].index)
# Drop loop voltage below 5 V
dataset = dataset.drop(dataset[dataset['loop_voltage_max'] < 5].index)
# Drop pressure larger than 100mPa
dataset = dataset.drop(dataset[dataset['pressure'] > 100].index)
# Drop negative pressure request
dataset = dataset.drop(dataset[dataset['pressure_request'] < 0].index)
# Drop non physical ucd values
dataset = dataset.drop(dataset[dataset['ucd'] < 200].index)
# Drop non physical pressure
dataset = dataset.drop(dataset[dataset['pressure'] < 0].index)
# number of shot after cleaning
print(len(dataset))
```
# Confinement Time Evolution
```
dataset.columns
te_med = dataset.electron_confinement_time.median()
print(f'Median Confinement Time {te_med*1e6} [µs]')
ax=dataset.plot(x='plasma_current_mean', y='electron_confinement_time', kind='scatter', logy=True)
ax.set_ylim(1e-6, 1e-3)
ax.axhline(te_med, color='k')
```
It is not clear whether the confinement time directly depends on the plasma current in GOLEM. Since the plasma is resistive, this scaling law probably does not apply in this case.
Now we test whether the confinement time depends on the density. A proxy for the density is the pressure, if we assume that most of the injected gas is ionized.
```
ax=dataset.plot(x='pressure', y='electron_confinement_time', kind='scatter', logy=True, grid=True)
ax.set_ylim(1e-6, 1e-3)
ax.set_xlim(0, 80)
```
Now let's see if, as expected, increasing the plasma current increases the electron temperature.
```
dataset.plot(x='plasma_current_mean', y='electron_temperature_max', kind='scatter', alpha=0.2)
```
## Which parameters to maximize the plasma current?
```
# select the parameters relevant to the plasma current
dataset_ip = dataset[['ub', 'ucd', 'tcd', 'pressure_request','plasma_current_mean', 'input_power_mean']].dropna()
# keep only the pressure request=20 (majority) to remove a dimension
dataset_ip = dataset_ip[dataset_ip['pressure_request'] == 20]
# keep only tcd = 6 ms (the majority) to remove a dimension
dataset_ip = dataset_ip[dataset_ip['tcd'] == 0.006]
dataset_ip = dataset_ip.drop(['tcd','pressure_request'], axis=1)
dataset_ip.sort_values('plasma_current_mean', ascending=False).head()
seaborn.pairplot(dataset_ip[['ub', 'ucd','plasma_current_mean']], diag_kind='kde')
# make the average of similar parameters
dataset_ip_avg = dataset_ip.groupby(['ucd','ub']).mean().reset_index()
dataset_ip_avg.head(10)
fig, ax = plt.subplots()
cax1=ax.scatter(x=dataset_ip_avg['ucd'], y=dataset_ip_avg['ub'], c=dataset_ip_avg['plasma_current_mean'])
cb1=plt.colorbar(cax1)
ax.set_xlabel('$U_{cd}$')
ax.set_ylabel('$U_B$')
ax.set_title('Plasma current (mean)')
```
# Which parameters to improve the plasma lifetime?
```
# get the longest shots
dataset_lt = dataset[['ub', 'ucd', 'tcd', 'pressure_request','plasma_life']].dropna()
dataset_lt.sort_values('plasma_life', ascending=False).head()
```
According to the GOLEM documentation, the following parameters can be tuned for each plasma shot:
- Toroidal Magnetic Field, set by $U_B$
- Current Drive, set by $U_{CD}$
- Time delay for Current Drive, $\tau_{CD}$
- Filling Pressure, $p_{WG}$ [mPa]
So let's look for the set of parameters which maximizes the plasma duration.
The question is: which set of parameters $\{u_B, u_{CD}, \tau_{CD}, p\}$ maximizes the plasma duration?
The plasma life distribution is
```
seaborn.distplot(dataset_lt['plasma_life']*1e3, axlabel='plasma duration [ms]')
```
So, how do we produce a plasma duration longer than 15 ms?
```
longest_shots = dataset_lt[dataset_lt['plasma_life'] > 15e-3]
fig, ax = plt.subplots(3,1)
seaborn.distplot(longest_shots['ub'], ax=ax[0])
seaborn.distplot(longest_shots['ucd'], ax=ax[1])
seaborn.distplot(longest_shots['tcd']*1e3, ax=ax[2], axlabel='$t_{CD}$ [ms]')
fig.tight_layout()
```
The longest shots are the ones using a high $U_B$ = 1100 V, $U_{cd}$ close to 400 V and $t_{cd}$ around 5-6 ms.
```
# make the average of similar parameters
dataset_pl_avg = dataset.groupby(['ucd','ub','tcd']).mean().reset_index()
dataset_pl_avg.head(10)
```
## GOLEM Most frequent set of parameters
```
# get the number of occurrences of unique rows,
# i.e. the most frequent sets of parameters used on GOLEM
dataset.groupby(['ucd', 'tcd','ub','pressure_request']).size().reset_index(name='count').sort_values('count', ascending=False).head()
```
The most frequent parameter set is $U_{cd}=400$ V, $t_{cd}=6$ ms, $U_B=800$ V and a pressure request of 20 mPa. Maybe this is the default set of parameters in the graphical interface?
# Hugill Diagram
The Hugill diagram is a convenient way to summarize the operating regimes. It consists in plotting the inverse of the edge safety factor, $1/q_a$ (which is proportional to the plasma current), against the Murakami parameter $\bar n R / B$.
```
from scipy.constants import pi, mu_0
R0 = 0.4 # m
a = 0.085 # m
dataset['q_a'] = 2*pi*a**2 * dataset.toroidal_field_mean/(mu_0 * dataset.plasma_current_mean * R0)
dataset['1/q_a'] = 1/dataset.q_a
dataset['Murakami'] = dataset.electron_density_equilibrium/1e19 * R0 / dataset.toroidal_field_mean
dataset.plot(kind='scatter', x='Murakami', y='1/q_a', xlim=(-1,10), ylim=(0,0.4), alpha=0.2)
```
# Paschen Curve
```
# breakdown_voltage: Loop voltage during breakdown
ax = dataset.plot(kind='scatter', x='pressure', y='breakdown_voltage',
alpha=0.2, xlim=(0,100), ylim=(0,20))
ax.set_xlabel('Measured pressure [mPa]')
ax.set_ylabel('Breakdown Voltage [V]')
# analytical model
p = np.linspace(0, 100e-3, num=101) # pressure from 0 to 100 mPa
d = 0.085 # gap distance
gamma_se = 2 # secondary emission coefficient
A = 7 # saturation ionization in the gas [Pa.m]^-1
B = 173 # related to the excitation and ionization energies [Pa.m]^-1
V_B = B*p*d / (np.log(A*p*d / np.log(1+1/gamma_se)))
fig, ax = plt.subplots()
ax.plot(p, V_B)
```
```
# Dependencies
from bs4 import BeautifulSoup
import requests
import os
from splinter import Browser
from splinter.exceptions import ElementDoesNotExist
import pandas as pd
executable_path = {'executable_path': 'chromedriver.exe'}
browser = Browser('chrome', **executable_path, headless=False)
url = 'https://mars.nasa.gov/news/'
browser.visit(url)
html = browser.html
soup = BeautifulSoup(html, 'html.parser')
news_titles = soup.find_all('div', class_="content_title")
latest_title = news_titles[0].find('a').text
print(latest_title)
news_p = soup.find_all('div', class_="article_teaser_body")
latest_news_p = news_p[0].text
print(latest_news_p)
#now to process the featured image at JPL:
JPL_url = 'https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars'
browser.visit(JPL_url)
html_JPL = browser.html
soup_jpl = BeautifulSoup(html_JPL, 'html.parser')
browser.find_by_id('full_image').click()
#print(image_test)
browser.find_link_by_partial_text('more info').click()
#feature_image_url = browser.find_by_tag('img')['src']
#print(feature_image_url)
feature_image_url = browser.find_link_by_partial_href('largesize')['href']
feature_image_url
#browser.find_by_id('firstheader')
#browser.find_by_tag('h1').click()
#after clicking to get to image on new page, click on "more info" button to get to largesize image.
#extract image path from this location.
```
## Mars Weather Twitter Section Below
```
#now to process the Mars weather:
weather_url = 'https://twitter.com/marswxreport?lang=en'
browser.visit(weather_url)
html_weather = browser.html
soup_weather = BeautifulSoup(html_weather, 'html.parser')
weather_p = soup_weather.find_all('p', class_="TweetTextSize TweetTextSize--normal js-tweet-text tweet-text")
mars_weather = weather_p[0].text
print(mars_weather)
```
## Mars facts section below (get the table)
```
facts_url = 'http://space-facts.com/mars'
mars_table = pd.read_html(facts_url, flavor = 'html5lib')
mars_table
mars_df = mars_table[0]
mars_df.columns = ['Description', 'Value']
mars_df
mars_html = mars_df.to_html(index=False)
mars_html
```
## Mars Hemispheres section below
```
hemisphere_url = 'https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars'
browser.visit(hemisphere_url)
html_hemis = browser.html
soup_hemis = BeautifulSoup(html_hemis, 'html.parser')
hemis_links = soup_hemis.find_all('a', class_="itemLink product-item")
print(hemis_links)
#hemisphere = {}
#test_output = browser.find_by_css('a.product-item')[1].click()
#testagain = browser.find_link_by_text('Sample').first
#print(testagain)
#hemisphere['img_url'] = testagain['href']
#hemisphere
#hemisphere['title'] = browser.find_by_css('h2.title').text
#hemisphere
#test_output2 = browser.find_by_css('a.itemLink')[3].click()
links = browser.find_by_css("a.product-item")
number = len(links)
number
hemisphere_image_urls = []
for i in range (number):
hemisphere = {}
i = i + 1
print(i)
try:
browser.find_by_css('a.product-item')[i].click()
except:
continue
hemi_href = browser.find_link_by_text('Sample').first
hemisphere['img_url'] = hemi_href['href']
hemisphere['title'] = browser.find_by_css('h2.title').text
hemisphere_image_urls.append(hemisphere)
print(i)
browser.back()
hemisphere_image_urls
```
# Attention Cues
:label:`sec_attention-cues`
Thank you for your attention
to this book.
Attention is a scarce resource:
at the moment
you are reading this book
and ignoring the rest.
Thus, similar to money,
your attention is being paid with an opportunity cost.
To ensure that your investment of attention
right now is worthwhile,
we have been highly motivated to pay our attention carefully
to produce a nice book.
Attention
is the keystone in the arch of life and
holds the key to any work's exceptionalism.
Since economics studies the allocation of scarce resources,
we are
in the era of the attention economy,
where human attention is treated as a limited, valuable, and scarce commodity
that can be exchanged.
Numerous business models have been
developed to capitalize on it.
On music or video streaming services,
we either pay attention to their ads
or pay money to hide them.
For growth in the world of online games,
we either pay attention to
participate in battles, which attract new gamers,
or pay money to instantly become powerful.
Nothing comes for free.
All in all,
information in our environment is not scarce,
attention is.
When inspecting a visual scene,
our optic nerve receives information
on the order of $10^8$ bits per second,
far exceeding what our brain can fully process.
Fortunately,
our ancestors had learned from experience (also known as data)
that *not all sensory inputs are created equal*.
Throughout human history,
the capability of directing attention
to only a fraction of information of interest
has enabled our brain
to allocate resources more smartly
to survive, to grow, and to socialize,
such as detecting predators, prey, and mates.
## Attention Cues in Biology
To explain how our attention is deployed in the visual world,
a two-component framework has emerged
and been pervasive.
This idea dates back to William James in the 1890s,
who is considered the "father of American psychology" :cite:`James.2007`.
In this framework,
subjects selectively direct the spotlight of attention
using both the *nonvolitional cue* and *volitional cue*.
The nonvolitional cue is based on
the saliency and conspicuity of objects in the environment.
Imagine there are five objects in front of you:
a newspaper, a research paper, a cup of coffee, a notebook, and a book such as in :numref:`fig_eye-coffee`.
While all the paper products are printed in black and white,
the coffee cup is red.
In other words,
this coffee is intrinsically salient and conspicuous in
this visual environment,
automatically and involuntarily drawing attention.
So you bring the fovea (the center of the macula where visual acuity is highest) onto the coffee as shown in :numref:`fig_eye-coffee`.

:width:`400px`
:label:`fig_eye-coffee`
After drinking coffee,
you become caffeinated and
want to read a book.
So you turn your head, refocus your eyes,
and look at the book as depicted in :numref:`fig_eye-book`.
Different from
the case in :numref:`fig_eye-coffee`
where the coffee biases you towards
selecting based on saliency,
in this task-dependent case you select the book under
cognitive and volitional control.
Using the volitional cue based on variable selection criteria,
this form of attention is more deliberate.
It is also more powerful with the subject's voluntary effort.

:width:`400px`
:label:`fig_eye-book`
## Queries, Keys, and Values
Inspired by the nonvolitional and volitional attention cues that explain the attentional deployment,
in the following we will
describe a framework for
designing attention mechanisms
by incorporating these two attention cues.
To begin with, consider the simpler case where only
nonvolitional cues are available.
To bias selection over sensory inputs,
we can simply use
a parameterized fully-connected layer
or even non-parameterized
max or average pooling.
Therefore,
what sets attention mechanisms
apart from those fully-connected layers
or pooling layers
is the inclusion of the volitional cues.
In the context of attention mechanisms,
we refer to volitional cues as *queries*.
Given any query,
attention mechanisms
bias selection over sensory inputs (e.g., intermediate feature representations)
via *attention pooling*.
These sensory inputs are called *values* in the context of attention mechanisms.
More generally,
every value is paired with a *key*,
which can be thought of as the nonvolitional cue of that sensory input.
As shown in :numref:`fig_qkv`,
we can design attention pooling
so that the given query (volitional cue) can interact with keys (nonvolitional cues),
which guides bias selection over values (sensory inputs).

:label:`fig_qkv`
Note that there are many alternatives for the design of attention mechanisms.
For instance,
we can design a non-differentiable attention model
that can be trained using reinforcement learning methods :cite:`Mnih.Heess.Graves.ea.2014`.
Given the dominance of the framework in :numref:`fig_qkv`,
models under this framework
will be the center of our attention in this chapter.
## Visualization of Attention
Average pooling
can be treated as a weighted average of inputs,
where weights are uniform.
In practice,
attention pooling aggregates values using weighted average, where weights are computed between the given query and different keys.
```
import torch
from d2l import torch as d2l
```
To visualize attention weights,
we define the `show_heatmaps` function.
Its input `matrices` has the shape (number of rows for display, number of columns for display, number of queries, number of keys).
```
#@save
def show_heatmaps(matrices, xlabel, ylabel, titles=None, figsize=(2.5, 2.5),
cmap='Reds'):
d2l.use_svg_display()
num_rows, num_cols = matrices.shape[0], matrices.shape[1]
fig, axes = d2l.plt.subplots(num_rows, num_cols, figsize=figsize,
sharex=True, sharey=True, squeeze=False)
for i, (row_axes, row_matrices) in enumerate(zip(axes, matrices)):
for j, (ax, matrix) in enumerate(zip(row_axes, row_matrices)):
pcm = ax.imshow(matrix.detach().numpy(), cmap=cmap)
if i == num_rows - 1:
ax.set_xlabel(xlabel)
if j == 0:
ax.set_ylabel(ylabel)
if titles:
ax.set_title(titles[j])
fig.colorbar(pcm, ax=axes, shrink=0.6);
```
For demonstration,
we consider a simple case where
the attention weight is one only when the query and the key are the same; otherwise it is zero.
```
attention_weights = torch.eye(10).reshape((1, 1, 10, 10))
show_heatmaps(attention_weights, xlabel='Keys', ylabel='Queries')
```
In the subsequent sections,
we will often invoke this function to visualize attention weights.
## Summary
* Human attention is a limited, valuable, and scarce resource.
* Subjects selectively direct attention using both the nonvolitional and volitional cues. The former is based on saliency and the latter is task-dependent.
* Attention mechanisms are different from fully-connected layers or pooling layers due to inclusion of the volitional cues.
* Attention mechanisms bias selection over values (sensory inputs) via attention pooling, which incorporates queries (volitional cues) and keys (nonvolitional cues). Keys and values are paired.
* We can visualize attention weights between queries and keys.
## Exercises
1. What can be the volitional cue when decoding a sequence token by token in machine translation? What are the nonvolitional cues and the sensory inputs?
1. Randomly generate a $10 \times 10$ matrix and use the softmax operation to ensure each row is a valid probability distribution. Visualize the output attention weights.
[Discussions](https://discuss.d2l.ai/t/1592)
<img align="right" src="images/tf.png" width="128"/>
<img align="right" src="images/huc.png"/>
<img align="right" src="images/huygenslogo.png"/>
<img align="right" src="images/logo.png"/>
# Tutorial
This notebook gets you started with using
[Text-Fabric](https://annotation.github.io/text-fabric/) for coding in
[Missieven Corpus](https://github.com/Dans-labs/clariah-gm).
Familiarity with the underlying
[data model](https://annotation.github.io/text-fabric/tf/about/datamodel.html)
is recommended.
## Installing Text-Fabric
See [here](https://annotation.github.io/text-fabric/tf/about/install.html)
## Tip
If you start computing with this tutorial, first copy its parent directory to somewhere else,
outside your repository.
If you pull changes from the repository later, your work will not be overwritten.
Where you put your tutorial directory is up to you.
It will work from any directory.
```
%load_ext autoreload
%autoreload 2
from tf.app import use
```
## Corpus data
Text-Fabric will fetch the Missieven corpus for you.
It will fetch the newest version by default, but you can get other versions as well.
The data will be stored under `text-fabric-data` in your home directory.
# Incantation
The simplest way to get going is by this *incantation*:
For the very last version, use `hot`.
For the latest release, use `latest`.
If you have cloned the repos (TF app and data), use `clone`.
If you do not want/need to upgrade, leave out the checkout specifiers.
**After downloading new data it will take several minutes to optimize the data**.
The optimized data will be stored in your system, and all subsequent use of this
corpus will find that optimized data.
```
A = use("clariah/wp6-missieven:latest", hoist=globals())
# A = use("clariah/wp6-missieven", hoist=globals())
```
There is a lot of information in the report above, we'll look at that in a later chapter:
[compute](compute.ipynb)
# Getting around
## Where am I?
All information in a Text-Fabric dataset is tied to nodes and edges.
Nodes are integers, from 1 upwards, and the basic textual objects (*slots*) come first, in the order of the text.
In this corpus, slots are words, and we have more than 5 million of them.
Here is how you can visualize a slot and see where you are, if you found an arbitrary word:
```
n = 1_504_875
A.plain(L.u(n, otype="line")[0])
A.plain(n)
```
This word is in volume 4, page 717, line 2.
You can click the passage specifier, and it will take you to the image of this page on the
Missieven site maintained by the Huygens institute.

## How to get to ...?
Suppose we want to move to volume 4, page 717.
How do we find the node that corresponds to that page?
```
p = A.nodeFromSectionStr("4 717")
p
```
This looks like a meaningless number, but like a barcode on a product, this is the key to all information
about a thing. What kind of thing?
```
F.otype.v(p)
```
We just asked for the value of the feature `otype` (object type) of node `p`, and it turned out to be a page.
In the same way we can get the page number:
```
F.n.v(p)
```
Which features are defined, and what they mean is dependent on the dataset.
The dataset designer has provided metadata and documentation about features that are
accessible wherever you work with Text-Fabric.
Just after the incantation you can expand the list of features and
inspect the metadata of each feature. You can also click on the feature name to go
to the documentation of the feature.

We can also navigate to a specific line:
```
ln = A.nodeFromSectionStr("4 717:2")
print(f"node {ln} is {F.otype.v(ln)} {F.n.v(ln)}")
```
We can also do this in a more structured way:
```
p = T.nodeFromSection((4, 717))
p
ln = T.nodeFromSection((4, 717, 2))
ln
```
At this point, have a look at the
[cheatsheet](https://annotation.github.io/text-fabric/tf/cheatsheet.html)
and find the documentation of these methods.
## Explore the neighbourhood
We show how to find the nodes of the lines in the page, how to print the text of those lines, and how to find the individual words.
Text-Fabric has an API called `Locality` (or simply `L`) to explore spatially related nodes.
From a node we can go `up`, `down`, `previous` and `next`. Here we go down.
```
lines = L.d(p, otype="line")
lines
```
# Display
Text-Fabric has a high-level display API to show textual material in various ways.
Here is a plain view.
```
for line in lines:
A.plain(line)
```
You do not (yet) see a clear distinction in text types.
There is a mixture of editorial text and original text, and there is even a footnote.
We can show the text in other text formats.
Formats have been defined by the dataset designer, they are not built in into Text-Fabric.
Let's see what the designer has provided in this regard:
```
T.formats
```
Some formats show all text, others only editorial text, others only the original letter content, and yet others just the footnotes.
Yet other formats show all text except a specific type.
The formats that start with `text-` yield plain Unicode text.
The formats that start with `layout-` deliver formatted HTML.
We have designed the layout in such a way that the text types (editorial, original) are distinguished.
The default format is `text-orig-full`.
Let's switch to `layout-full`, which will also show the footnotes in place.
```
for line in lines:
A.plain(line, fmt="layout-full")
```
If we want to skip the remarks we can choose `layout-noremarks`:
```
for line in lines:
A.plain(line, fmt="layout-noremarks")
```
Or, without the footnotes:
```
for line in lines:
A.plain(line, fmt="layout-remarks")
```
Just the original text:
```
for line in lines:
A.plain(line, fmt="layout-orig")
```
# Drilling down
Lets navigate to individual words, we pick a few lines from this page we have seen in various ways.
```
lineNum = 5
ln = A.nodeFromSectionStr(f"4 717:{lineNum}")
A.plain(ln)
words = L.d(ln, otype="word")
words
```
Let's make a table of the words around this line and the values of some features that they carry:
```
features = "trans transo transr transn punc isorig isremark isnote".split()
table = []
for lno in range(lineNum - 2, lineNum + 3):
    ln = T.nodeFromSection((4, 717, lno))
for w in L.d(ln, otype="word"):
row = tuple(Fs(feature).v(w) for feature in features)
table.append(row)
table
```
We can show that more prettily in a markdown table, but it is a bit of a hassle to compose
the markdown string.
Once we have that, we can pass it to a method in the Text-Fabric API that displays it as markdown.
```
NL = "\n"
mdHead = f"""
{" | ".join(features)}
{" | ".join("---" for _ in features)}
"""
mdData = "\n".join(
f"""{" | ".join(str(c or "").replace(NL, " ") for c in row)}""" for row in table
)
A.dm(f"""{mdHead}{mdData}""")
```
Note that the dataset designer has put the text strings of all words into the feature `trans`;
editorial words also go into `transr`, but not into `transo`;
original words go into `transo`, but not into `transr`.
The existence of these features is mainly to make it possible to define the selective text formats
we have seen above.
If constructing such a table at this low level is too much hassle for your taste,
we can just collect a bunch of nodes and feed it to a higher-level display function of Text-Fabric:
```
table = []
for lno in range(lineNum - 2, lineNum + 3):
    ln = T.nodeFromSection((4, 717, lno))
for w in L.d(ln, otype="word"):
table.append((w,))
table
```
Before we ask Text-Fabric to display this, we tell it the features we're interested in.
```
A.displaySetup(extraFeatures=features)
A.show(table, condensed=True, fmt="layout-full")
```
Where this machinery really shines is when it comes to displaying the results of queries.
See [search](search.ipynb).
---
# Next steps
By now you have an impression how to orient yourself in the Missieven dataset.
The next steps will show you how to get powerful: searching and computing.
After that it is time for collecting results, use them in new annotations and share them.
* **start** start computing with this corpus
* **[search](search.ipynb)** turbo charge your hand-coding with search templates
* **[compute](compute.ipynb)** sink down a level and compute it yourself
* **[exportExcel](exportExcel.ipynb)** make tailor-made spreadsheets out of your results
* **[annotate](annotate.ipynb)** export text, annotate with BRAT, import annotations
* **[share](share.ipynb)** draw in other people's data and let them use yours
* **[volumes](volumes.ipynb)** work with selected volumes only
CC-BY Dirk Roorda
# Tutorial 1: The Basic Tools of the Deep Life Sciences
Welcome to DeepChem's introductory tutorial for the deep life sciences. This series of notebooks is a step-by-step guide for you to get to know the new tools and techniques needed to do deep learning for the life sciences. We'll start from the basics, assuming that you're new to machine learning and the life sciences, and build up a repertoire of tools and techniques that you can use to do meaningful work in the life sciences.
**Scope:** This tutorial will encompass both the machine learning and data handling needed to build systems for the deep life sciences.
## Colab
This tutorial and the rest in the sequences are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
[](https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/01_The_Basic_Tools_of_the_Deep_Life_Sciences.ipynb)
## Why do the DeepChem Tutorial?
**1) Career Advancement:** Applying AI in the life sciences is a booming
industry at present. There are a host of newly funded startups and initiatives
at large pharmaceutical and biotech companies centered around AI. Learning and
mastering DeepChem will bring you to the forefront of this field and will
prepare you to enter a career in this field.
**2) Humanitarian Considerations:** Disease is the oldest cause of human
suffering. From the dawn of human civilization, humans have suffered from pathogens,
cancers, and neurological conditions. One of the greatest achievements of
the last few centuries has been the development of effective treatments for
many diseases. By mastering the skills in this tutorial, you will be able to
stand on the shoulders of the giants of the past to help develop new
medicine.
**3) Lowering the Cost of Medicine:** The art of developing new medicine is
currently an elite skill that can only be practiced by a small core of expert
practitioners. By enabling the growth of open source tools for drug discovery,
you can help democratize these skills and open up drug discovery to more
competition. Increased competition can help drive down the cost of medicine.
## Getting Extra Credit
If you're excited about DeepChem and want to get more involved, there are some things that you can do right now:
* Star DeepChem on GitHub! - https://github.com/deepchem/deepchem
* Join the DeepChem forums and introduce yourself! - https://forum.deepchem.io
* Say hi on the DeepChem gitter - https://gitter.im/deepchem/Lobby
* Make a YouTube video teaching the contents of this notebook.
## Prerequisites
This tutorial sequence will assume some basic familiarity with the Python data science ecosystem. We will assume that you have familiarity with libraries such as Numpy, Pandas, and TensorFlow. We'll provide some brief refreshers on basics through the tutorial so don't worry if you're not an expert.
## Setup
The first step is to get DeepChem up and running. We recommend using Google Colab to work through this tutorial series. You'll need to run the following commands to get DeepChem installed on your colab notebook. Note that this will take something like 5 minutes to run on your colab instance.
```
!curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py
import conda_installer
conda_installer.install()
!/root/miniconda/bin/conda info -e
!pip install --pre deepchem
```
You can of course run this tutorial locally if you prefer. In this case, don't run the above cell since it will download and install Anaconda on your local machine. In either case, we can now import the `deepchem` package to play with.
```
import deepchem as dc
dc.__version__
```
# Training a Model with DeepChem: A First Example
Deep learning can be used to solve many sorts of problems, but the basic workflow is usually the same. Here are the typical steps you follow.
1. Select the data set you will train your model on (or create a new data set if there isn't an existing suitable one).
2. Create the model.
3. Train the model on the data.
4. Evaluate the model on an independent test set to see how well it works.
5. Use the model to make predictions about new data.
With DeepChem, each of these steps can be as little as one or two lines of Python code. In this tutorial we will walk through a basic example showing the complete workflow to solve a real world scientific problem.
The problem we will solve is predicting the solubility of small molecules given their chemical formulas. This is a very important property in drug development: if a proposed drug isn't soluble enough, you probably won't be able to get enough into the patient's bloodstream to have a therapeutic effect. The first thing we need is a data set of measured solubilities for real molecules. One of the core components of DeepChem is MoleculeNet, a diverse collection of chemical and molecular data sets. For this tutorial, we can use the Delaney solubility data set.
```
tasks, datasets, transformers = dc.molnet.load_delaney(featurizer='GraphConv')
train_dataset, valid_dataset, test_dataset = datasets
```
I won't say too much about this code right now. We will see many similar examples in later tutorials. There are two details I do want to draw your attention to. First, notice the `featurizer` argument passed to the `load_delaney()` function. Molecules can be represented in many ways. We therefore tell it which representation we want to use, or in more technical language, how to "featurize" the data. Second, notice that we actually get three different data sets: a training set, a validation set, and a test set. Each of these serves a different function in the standard deep learning workflow.
Now that we have our data, the next step is to create a model. We will use a particular kind of model called a "graph convolutional network", or "graphconv" for short.
```
model = dc.models.GraphConvModel(n_tasks=1, mode='regression', dropout=0.2)
```
Here again I will not say much about the code. Later tutorials will give lots more information about `GraphConvModel`, as well as other types of models provided by DeepChem.
We now need to train the model on the data set. We simply give it the data set and tell it how many epochs of training to perform (that is, how many complete passes through the data to make).
```
model.fit(train_dataset, nb_epoch=100)
```
If everything has gone well, we should now have a fully trained model! But do we? To find out, we must evaluate the model on the test set. We do that by selecting an evaluation metric and calling `evaluate()` on the model. For this example, let's use the Pearson correlation, also known as r<sup>2</sup>, as our metric. We can evaluate it on both the training set and test set.
```
metric = dc.metrics.Metric(dc.metrics.pearson_r2_score)
print("Training set score:", model.evaluate(train_dataset, [metric], transformers))
print("Test set score:", model.evaluate(test_dataset, [metric], transformers))
```
Notice that it has a higher score on the training set than the test set. Models usually perform better on the particular data they were trained on than they do on similar but independent data. This is called "overfitting", and it is the reason it is essential to evaluate your model on an independent test set.
Our model still has quite respectable performance on the test set. For comparison, a model that produced totally random outputs would have a correlation of 0, while one that made perfect predictions would have a correlation of 1. Our model does quite well, so now we can use it to make predictions about other molecules we care about.
Since this is just a tutorial and we don't have any other molecules we specifically want to predict, let's just use the first ten molecules from the test set. For each one we print out the chemical structure (represented as a SMILES string) and the predicted solubility.
```
solubilities = model.predict_on_batch(test_dataset.X[:10])
for molecule, solubility in zip(test_dataset.ids, solubilities):
print(solubility, molecule)
```
# Congratulations! Time to join the Community!
Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with DeepChem, we encourage you to finish the rest of the tutorials in this series. You can also help the DeepChem community in the following ways:
## Star DeepChem on [GitHub](https://github.com/deepchem/deepchem)
This helps build awareness of the DeepChem project and the tools for open source drug discovery that we're trying to build.
## Join the DeepChem Gitter
The DeepChem [Gitter](https://gitter.im/deepchem/Lobby) hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!
```
%run utils/attention_graph.py
%run utils/mlflow_query.py
%run utils/loading.py
%run utils/comparison.py
%run utils/ranks.py
mlflow_helper = MlflowHelper(pkl_file=Path("mlflow_run_df.pkl"))
#mlflow_helper.query_all_runs(pkl_file=Path("mlflow_run_df.pkl"))
relevant_mimic_run_df = mlflow_helper.mimic_run_df(include_noise=True, include_refinements=False)
mimic_gram_false_00_run_id = relevant_mimic_run_df[
(relevant_mimic_run_df['data_tags_noise_type'].fillna('').apply(len) == 0) &
(relevant_mimic_run_df['data_tags_model_type'] == 'gram') &
(relevant_mimic_run_df['data_params_ModelConfigbase_hidden_embeddings_trainable'] == 'False')
].iloc[0].get('info_run_id')
mimic_gram_false_10_run_id = relevant_mimic_run_df[
(relevant_mimic_run_df['data_tags_noise_type'].fillna('') == 'added0.1_removed0.0_threshold0.0') &
(relevant_mimic_run_df['data_tags_model_type'] == 'gram') &
(relevant_mimic_run_df['data_params_ModelConfigbase_hidden_embeddings_trainable'] == 'False')
].iloc[0].get('info_run_id')
mimic_text_false_00_run_id = relevant_mimic_run_df[
(relevant_mimic_run_df['data_tags_noise_type'].fillna('').apply(len) == 0) &
(relevant_mimic_run_df['data_tags_model_type'] == 'text') &
(relevant_mimic_run_df['data_params_ModelConfigbase_hidden_embeddings_trainable'] == 'False')
].iloc[0].get('info_run_id')
mimic_text_false_10_run_id = relevant_mimic_run_df[
(relevant_mimic_run_df['data_tags_noise_type'].fillna('') == 'added0.1_removed0.0_threshold0.0') &
(relevant_mimic_run_df['data_tags_model_type'] == 'text') &
(relevant_mimic_run_df['data_params_ModelConfigbase_hidden_embeddings_trainable'] == 'False')
].iloc[0].get('info_run_id')
mimic_causal_false_00_run_id = relevant_mimic_run_df[
(relevant_mimic_run_df['data_tags_noise_type'].fillna('').apply(len) == 0) &
(relevant_mimic_run_df['data_tags_model_type'] == 'causal') &
(relevant_mimic_run_df['data_params_ModelConfigbase_hidden_embeddings_trainable'] == 'False')
].iloc[0].get('info_run_id')
mimic_causal_false_10_run_id = relevant_mimic_run_df[
(relevant_mimic_run_df['data_tags_noise_type'].fillna('') == 'added0.1_removed0.0_threshold0.0') &
(relevant_mimic_run_df['data_tags_model_type'] == 'causal') &
(relevant_mimic_run_df['data_params_ModelConfigbase_hidden_embeddings_trainable'] == 'False')
].iloc[0].get('info_run_id')
print('Gram', mimic_gram_false_00_run_id, 'Text', mimic_text_false_00_run_id, 'Causal', mimic_causal_false_00_run_id)
print('NOISE 10%: Gram', mimic_gram_false_10_run_id, 'Text', mimic_text_false_10_run_id, 'Causal', mimic_causal_false_10_run_id)
relevant_huawei_run_df = mlflow_helper.huawei_run_df(include_noise=False, include_refinements=False)
huawei_gram_false_00_run_id = relevant_huawei_run_df[
(relevant_huawei_run_df['data_tags_noise_type'].fillna('').apply(len) == 0) &
(relevant_huawei_run_df['data_tags_model_type'] == 'gram') &
(relevant_huawei_run_df['data_params_ModelConfigbase_hidden_embeddings_trainable'] == 'False')
].iloc[0].get('info_run_id')
create_graph_visualization(
run_id=huawei_gram_false_00_run_id,
local_mlflow_dir=mlflow_helper.local_mlflow_dir,
threshold=0.2,
run_name="huawe_gram",
use_node_mapping=False
)
```
# Attention Weights
```
def calculate_shared_attention_weights(attention_weights: Dict[str, Dict[str, float]]):
if attention_weights is None:
return [0.0]
attention_importances = calculate_attention_importances(attention_weights)
shared_weights = [
sum([
float(weight) for con_feature, weight in attention_weights[in_feature].items()
if len(attention_importances[con_feature]) > 1
])
for in_feature in attention_weights
]
return shared_weights
rel_runs = mlflow_helper.mimic_run_df()
shared_weights = []
for run_id in set(rel_runs["info_run_id"]):
attention_weights = load_attention_weights(run_id=run_id, local_mlflow_dir=mlflow_helper.local_mlflow_dir)
shared_weights.append({
"run_id": run_id,
"shared_weights": calculate_shared_attention_weights(attention_weights)
})
shared_df = pd.merge(rel_runs, pd.DataFrame.from_records(shared_weights), left_on="info_run_id", right_on="run_id")
shared_df["avg_shared_weights"] = shared_df["shared_weights"].apply(lambda x: np.mean(x))
shared_df["median_shared_weights"] = shared_df["shared_weights"].apply(lambda x: np.median(x))
shared_df
import seaborn as sns
import matplotlib.pyplot as plt
shared_df["data_tags_model_type"] = shared_df["data_tags_model_type"].apply(
lambda x: {
"gram": "hierarchy",
"causal": "causal_old",
"causal2": "causal",
}.get(x,x)
)
shared_df["Embeddings Trainable"] = shared_df["data_params_ModelConfigbase_hidden_embeddings_trainable"]
sns.catplot(
data=shared_df[
shared_df["data_tags_model_type"].apply(lambda x: x in ["hierarchy", "text", "causal"])
].explode("shared_weights"),
x="data_tags_model_type",
y="shared_weights",
hue="Embeddings Trainable",
kind="box",
order=["hierarchy", "causal", "text"],
palette="Set2",
).set_axis_labels("", "shared attention importance")
plt.savefig("sharedimportances_trainable_healthcare.png", dpi=100, bbox_inches="tight")
plt.show()
import json
texts = load_icd9_text()
unknowns = set([x for x,y in texts.items() if
(y["description"].lower().startswith("other")
or y["description"].lower().startswith("unspecified")
or y["description"].lower().endswith("unspecified")
or y["description"].lower().endswith("unspecified type")
or y["description"].lower().endswith("not elsewhere classified"))])
attentions = load_attention_weights(
mimic_gram_false_00_run_id,
mlflow_helper.local_mlflow_dir
)
print(sum([len(x) for x in attentions.values()]))
attentions_without_unknowns = {
x:[y for y in ys if y not in unknowns or x == y] for x,ys in attentions.items()
}
print(sum([len(x) for x in attentions_without_unknowns.values()]))
with open('gram_without_unknowns.json', 'w') as f:
json.dump(attentions_without_unknowns, f)
import string
def transform_to_words(description: str) -> Set[str]:
description = description.translate(
str.maketrans(string.punctuation, " " * len(string.punctuation))
)
words = [str(x).lower().strip() for x in description.split()]
return set([x for x in words if len(x) > 0])
input_descriptions = [(x, transform_to_words(y["description"])) for x,y in texts.items() if x in attentions]
word_overlaps = {}
for x, x_desc in tqdm(input_descriptions):
for y, y_desc in input_descriptions:
if x == y:
continue
word_overlap = x_desc.intersection(y_desc)
if len(word_overlap) == 0:
continue
overlap_string = " ".join([x for x in sorted(word_overlap)])
if overlap_string not in word_overlaps:
word_overlaps[overlap_string] = set()
word_overlaps[overlap_string].update([x,y])
print(len(word_overlaps))
print(sum([len(ws) for ws in word_overlaps.values()]))
max_size_diff = 0.2
max_intersection_diff = 0.25
cleaned_word_overlaps = {}
replacements = {}
for words, features in tqdm(word_overlaps.items()):
found_replacement = False
for other_words, other_features in cleaned_word_overlaps.items():
if (len(other_features) <= (1 + max_size_diff) * len(features) and
len(other_features) >= (1 - max_size_diff) * len(features) and
len(other_features.intersection(features)) >= max_intersection_diff * len(features)):
#print("Found replacement",
# words, len(features),
# other_words, len(other_features),
# len(other_features.intersection(features)))
if other_words not in replacements:
replacements[other_words] = set()
replacements[other_words].add(words)
found_replacement = True
break
if not found_replacement:
cleaned_word_overlaps[words] = features
print(len(cleaned_word_overlaps))
print(sum([len(ws) for ws in cleaned_word_overlaps.values()]))
cleaned_word_overlaps["10 any body burn degree involving less of or percent surface than third unspecified with"]
print(len(word_overlaps))
print(sum([len(ws) for ws in word_overlaps.values()]))
word_overlaps["(acute) asthma exacerbation with"]
feature_node_mapping = create_graph_visualization(
run_id=mimic_gram_false_00_run_id,
local_mlflow_dir=mlflow_helper.local_mlflow_dir,
threshold=0.2,
run_name='mimic_gram_false_00',
use_node_mapping=False)
colored_connections, feature_node_mapping = create_graph_visualization_reference(
run_id=mimic_gram_false_10_run_id,
reference_run_id=mimic_gram_false_00_run_id,
local_mlflow_dir=mlflow_helper.local_mlflow_dir,
threshold=0.2,
run_name='mimic_gram_false_00',
use_node_mapping=False)
```
## Drain Hierarchy
```
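# Select a Huawei 'gram' run that was trained with a non-empty Drain log-template hierarchy.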
huawei_run_df = mlflow_helper.huawei_run_df(include_drain_hierarchy=True)
drain_run_id = huawei_run_df[
(huawei_run_df["data_params_HuaweiPreprocessorConfigdrain_log_sts"].fillna("[]").astype(str).apply(len) > 2)
& (huawei_run_df["data_tags_model_type"] == "gram")
]["info_run_id"].iloc[0]
drain_run_id
aw = load_attention_weights(run_id=drain_run_id, local_mlflow_dir=mlflow_helper.local_mlflow_dir)
aimp = calculate_attention_importances(aw)
drain_clusters = [
(k,[a for a,b in w if float(b) > 0.9])
for k,w in aimp.items()
if "log_cluster_template" in k and k[0].isdigit()]
[x for x in drain_clusters if len(x[1]) > 1]
drain_clusters
drain_levels = [w for k,ws in aw.items() for w in ws if "log_cluster_template" in w]
drain_levels_ = {}
for i in range(3):
drain_levels_[i] = len(set([x for x in drain_levels if str(i) + "_log_cluster_template" in x]))
drain_levels_
feature_node_mapping = create_graph_visualization(
run_id=drain_run_id,
local_mlflow_dir=mlflow_helper.local_mlflow_dir,
threshold=0.2,
run_name='drain_hierarchy',
use_node_mapping=False)
mimic_df = mlflow_helper.mimic_run_df(include_noise=False, include_refinements=False, risk_prediction=False, valid_x_columns=["level_0", "level_1", "level_2"])
mimic_df
import numpy as np
mimic_df.groupby(by=["data_params_SequenceConfigx_sequence_column_name", "data_tags_model_type"]).agg({
"data_metrics_num_connections": np.mean,
"data_metrics_x_vocab_size": np.mean,
"data_metrics_y_vocab_size": np.mean,
})
icd9_hierarchy = pd.read_csv('data/hierarchy_icd9.csv')
icd9_hierarchy
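# Map each leaf (level_0) ICD9 code to its ancestor at the requested hierarchy level.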
def load_icd9_hierarchy_parents_for_level(
icd9_hierarchy: pd.DataFrame,
all_features: Set[str],
max_level: str) -> Dict[str, str]:
parent_infos = {}
for feature in tqdm(all_features, desc="Processing icd9 hierarchy clusters for level " + max_level):
parents = set(icd9_hierarchy[icd9_hierarchy["level_0"] == feature][max_level])
if len(parents) > 1:
print("Found more than one parent!", feature, parents)
parent = list(parents)[0]
        if feature in parent_infos and parent_infos[feature] != parent:
            print("Feature already in parent_infos, but with a different parent!", feature, parent, parent_infos[feature])
parent_infos[feature] = parent
return parent_infos
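# Reuse a feature's own attention weights if present, otherwise fall back to its parent's
# weights, and finally to a weight of 1.0 on the parent itself.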
def add_icd9_hierarchy_attention_weights_for_level(
feature_parents: Dict[str, str],
attention_weights: Dict[str, Dict[str, float]]) -> Dict[str, Dict[str, float]]:
new_attention_weights = {}
for feature, parent in feature_parents.items():
if feature in attention_weights:
new_attention_weights[feature] = attention_weights[feature]
elif parent in attention_weights:
new_attention_weights[feature] = attention_weights[parent]
else:
new_attention_weights[feature] = {
parent: 1.0,
}
return new_attention_weights
reference_run_id = list(
mimic_df[
(mimic_df["data_params_SequenceConfigx_sequence_column_name"] == "level_0") &
(mimic_df["data_tags_model_type"] != "simple")
]["info_run_id"]
)[0]
reference_attention = load_attention_weights(reference_run_id, mlflow_helper.local_mlflow_dir)
all_features = set(reference_attention.keys())
len(all_features)
cluster_infos = []
for level in set(mimic_df["data_params_SequenceConfigx_sequence_column_name"]):
icd9_parents = load_icd9_hierarchy_parents_for_level(
icd9_hierarchy=icd9_hierarchy,
all_features=all_features,
max_level=level)
for run_id in set(
mimic_df[
(mimic_df["data_params_SequenceConfigx_sequence_column_name"] == level)
]["info_run_id"]
):
original_attention = load_attention_weights(run_id, mlflow_helper.local_mlflow_dir)
if original_attention is None:
original_attention = {}
attention = add_icd9_hierarchy_attention_weights_for_level(
feature_parents=icd9_parents,
attention_weights=original_attention)
attention_importances = calculate_attention_importances(attention)
clusters_around = {
x:[y for y in ys if y[1] > 0.9] for x,ys in attention_importances.items()
}
clusters_around = {
x:ys for x,ys in clusters_around.items() if len(ys) > 0
}
shared_clusters = {
x:ys for x,ys in clusters_around.items() if len(ys) > 1
}
single_clusters = {
x:ys for x,ys in clusters_around.items() if len(ys) == 1
}
all_inputs = set(attention.keys())
clustered_inputs = {
y[0] for _,ys in clusters_around.items() for y in ys
}
shared_clustered_inputs = {
y[0] for _,ys in shared_clusters.items() for y in ys
}
single_clustered_inputs = {
y[0] for _,ys in single_clusters.items() for y in ys
}
non_clustered_inputs = all_inputs - clustered_inputs
if len(original_attention) == 0:
original_attention = {
x:{x:1.0}
for x in icd9_parents.values()
}
attention_importances_o = calculate_attention_importances(original_attention)
clusters_around_o = {
x:[y for y in ys if y[1] > 0.9] for x,ys in attention_importances_o.items()
}
clusters_around_o = {
x:ys for x,ys in clusters_around_o.items() if len(ys) > 0
}
shared_clusters_o = {
x:ys for x,ys in clusters_around_o.items() if len(ys) > 1
}
single_clusters_o = {
x:ys for x,ys in clusters_around_o.items() if len(ys) == 1
}
all_inputs_o = set(original_attention.keys())
clustered_inputs_o = {
y[0] for _,ys in clusters_around_o.items() for y in ys
}
shared_clustered_inputs_o = {
y[0] for _,ys in shared_clusters_o.items() for y in ys
}
single_clustered_inputs_o = {
y[0] for _,ys in single_clusters_o.items() for y in ys
}
non_clustered_inputs_o = all_inputs_o - clustered_inputs_o
cluster_infos.append({
'run_id': run_id,
'all_inputs': len(all_inputs),
'clustered_inputs': len(clustered_inputs),
'clustered_inputs_p': len(clustered_inputs) / len(all_inputs),
'shared_clustered_inputs': len(shared_clustered_inputs),
'shared_clustered_inputs_p': len(shared_clustered_inputs) / len(all_inputs),
'single_clustered_inputs': len(single_clustered_inputs),
'single_clustered_inputs_p': len(single_clustered_inputs) / len(all_inputs),
'non_clustered_inputs': len(non_clustered_inputs),
'non_clustered_inputs_p': len(non_clustered_inputs) / len(all_inputs),
'clusters': len(clusters_around),
'shared_clusters': len(shared_clusters),
'shared_clusters_p': len(shared_clusters) / len(clusters_around),
'single_clusters': len(single_clusters),
'single_clusters_p': len(single_clusters) / len(clusters_around),
'all_inputs_o': len(all_inputs_o),
'clustered_inputs_o': len(clustered_inputs_o),
'clustered_inputs_p_o': len(clustered_inputs_o) / len(all_inputs_o),
'shared_clustered_inputs_o': len(shared_clustered_inputs_o),
'shared_clustered_inputs_p_o': len(shared_clustered_inputs_o) / len(all_inputs_o),
'single_clustered_inputs_o': len(single_clustered_inputs_o),
'single_clustered_inputs_p_o': len(single_clustered_inputs_o) / len(all_inputs_o),
'non_clustered_inputs_o': len(non_clustered_inputs_o),
'non_clustered_inputs_p_o': len(non_clustered_inputs_o) / len(all_inputs_o),
'clusters_o': len(clusters_around_o),
'shared_clusters_o': len(shared_clusters_o),
'shared_clusters_p_o': len(shared_clusters_o) / len(clusters_around_o),
'single_clusters_o': len(single_clusters_o),
'single_clusters_p_o': len(single_clusters_o) / len(clusters_around_o),
})
pd.DataFrame.from_records(cluster_infos)
added_columns = cluster_infos[0].keys()
merged = pd.merge(
pd.melt(pd.DataFrame.from_records(cluster_infos), id_vars="run_id", value_vars=[x for x in added_columns if x != "run_id"]),
mimic_df,
left_on="run_id",
right_on="info_run_id",)
merged[["variable", "value", "data_tags_model_type"]]
import seaborn as sns
import matplotlib.pyplot as plt
f = sns.catplot(
data=merged,
x="data_params_SequenceConfigx_sequence_column_name",
order=["level_0", "level_1", "level_2"],
sharey=False,
y="value", col="variable", row="data_params_ModelConfigbase_hidden_embeddings_trainable",
kind="box", hue="data_tags_model_type")
f.set_titles("Trainable: {row_name}, Metric: {col_name}")
plt.show()
f = sns.catplot(
data=merged[merged["variable"].apply(lambda x: x in ["clustered_inputs_p", "shared_clustered_inputs_p", "single_clustered_inputs_p"])],
x="data_params_SequenceConfigx_sequence_column_name",
order=["level_0", "level_1", "level_2"],
sharey=False,
y="value", col="variable", row="data_params_ModelConfigbase_hidden_embeddings_trainable",
kind="box", hue="data_tags_model_type")
f.set_titles("Trainable: {row_name}, Metric: {col_name}")
plt.show()
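# Cluster inputs around each connected feature whose attention importance exceeds the
# threshold, and split the result into shared (>1 input) and single-input clusters.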
def calculate_clusters(run_id, local_mlflow_dir, icd9_parents, threshold=0.9):
original_attention = load_attention_weights(run_id, local_mlflow_dir)
if original_attention is None:
original_attention = {}
attention = add_icd9_hierarchy_attention_weights_for_level(
feature_parents=icd9_parents,
attention_weights=original_attention)
attention_importances = calculate_attention_importances(attention)
clusters_around = {
x:[y[0] for y in ys if y[1] > threshold] for x,ys in attention_importances.items()
}
clusters_around = {
x:ys for x,ys in clusters_around.items() if len(ys) > 0
}
shared_clusters = {
x:ys for x,ys in clusters_around.items() if len(ys) > 1
}
single_clusters = {
x:ys for x,ys in clusters_around.items() if len(ys) == 1
}
all_inputs = set(attention.keys())
clustered_inputs = {
y for _,ys in clusters_around.items() for y in ys
}
shared_clustered_inputs = {
y for _,ys in shared_clusters.items() for y in ys
}
single_clustered_inputs = {
y for _,ys in single_clusters.items() for y in ys
}
non_clustered_inputs = all_inputs - clustered_inputs
return {
"clusters_around": clusters_around,
"shared_clusters": shared_clusters,
"single_clusters": single_clusters,
"clustered_inputs": clustered_inputs,
"non_clustered_inputs": non_clustered_inputs,
"shared_clustered_inputs": shared_clustered_inputs,
"single_clustered_inputs": single_clustered_inputs,
}
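# Compare the attention clusters of two runs: inputs that are (non-)clustered in both, and
# clusters whose member sets agree up to a Jaccard-style similarity threshold.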
def compare_clusters(run_id_1, run_id_2, local_mlflow_dir, icd9_parents_1, icd9_parents_2, cluster_threshold=0.99):
clusters_1 = calculate_clusters(run_id_1, local_mlflow_dir, icd9_parents_1)
clusters_2 = calculate_clusters(run_id_2, local_mlflow_dir, icd9_parents_2)
return {
run_id_1: clusters_1,
run_id_2: clusters_2,
"same_clustered_inputs": clusters_1["clustered_inputs"].intersection(clusters_2["clustered_inputs"]),
"same_nonclustered_inputs": clusters_1["non_clustered_inputs"].intersection(clusters_2["non_clustered_inputs"]),
"same_shared_clustered_inputs": clusters_1["shared_clustered_inputs"].intersection(clusters_2["shared_clustered_inputs"]),
"same_single_clustered_inputs": clusters_1["single_clustered_inputs"].intersection(clusters_2["single_clustered_inputs"]),
"same_clusters": [
x for x in clusters_1["clusters_around"].values() if len([
y for y in clusters_2["clusters_around"].values() if len(set(y).intersection(set(x))) / len(set(x).union(set(y))) > cluster_threshold
]) > 0
],
"same_shared_clusters": [
x for x in clusters_1["shared_clusters"].values() if len([
y for y in clusters_2["shared_clusters"].values() if len(set(y).intersection(set(x))) / len(set(x).union(set(y))) > cluster_threshold
]) > 0
],
"same_single_clusters": [
x for x in clusters_1["single_clusters"].values() if len([
y for y in clusters_2["single_clusters"].values() if len(set(y).intersection(set(x))) / len(set(x).union(set(y))) > cluster_threshold
]) > 0
],
}
comparisons = []
level_parents = {}
for run_id_1 in set(mimic_df["info_run_id"]):
level_1 = mimic_df[mimic_df["info_run_id"] == run_id_1]["data_params_SequenceConfigx_sequence_column_name"].iloc[0]
if level_1 not in level_parents:
level_parents[level_1] = load_icd9_hierarchy_parents_for_level(
icd9_hierarchy=icd9_hierarchy,
all_features=all_features,
max_level=level_1)
icd9_parents_1 = level_parents[level_1]
for run_id_2 in set(mimic_df["info_run_id"]):
level_2 = mimic_df[mimic_df["info_run_id"] == run_id_2]["data_params_SequenceConfigx_sequence_column_name"].iloc[0]
if level_2 not in level_parents:
level_parents[level_2] = load_icd9_hierarchy_parents_for_level(
icd9_hierarchy=icd9_hierarchy,
all_features=all_features,
max_level=level_2)
icd9_parents_2 = level_parents[level_2]
comparison = compare_clusters(run_id_1, run_id_2, mlflow_helper.local_mlflow_dir, icd9_parents_1, icd9_parents_2, cluster_threshold=0.9)
comparisons.append({
"run_id_1": run_id_1,
"run_id_2": run_id_2,
"same_clusters": len(comparison["same_clusters"]),
"same_shared_clusters": len(comparison["same_shared_clusters"]),
"same_single_clusters": len(comparison["same_single_clusters"]),
"same_clustered_inputs": len(comparison["same_clustered_inputs"]),
"same_nonclustered_inputs": len(comparison["same_nonclustered_inputs"]),
"same_shared_clustered_inputs": len(comparison["same_shared_clustered_inputs"]),
"same_single_clustered_inputs": len(comparison["same_single_clustered_inputs"]),
})
pd.DataFrame.from_records(comparisons)
level_1="level_2"
level_2 = "level_0"
comp_1 = "simple"
comp_2 = "gram"
icd9_parents_1 = load_icd9_hierarchy_parents_for_level(
icd9_hierarchy=icd9_hierarchy,
all_features=all_features,
max_level=level_1)
icd9_parents_2 = load_icd9_hierarchy_parents_for_level(
icd9_hierarchy=icd9_hierarchy,
all_features=all_features,
max_level=level_2)
run_id_1 = mimic_df[
(mimic_df["data_params_SequenceConfigx_sequence_column_name"] == level_1) &
(mimic_df["data_tags_model_type"] == comp_1) &
(mimic_df["data_params_ModelConfigbase_feature_embeddings_trainable"] == "False")
]["info_run_id"].iloc[0]
run_id_2 = mimic_df[
(mimic_df["data_params_SequenceConfigx_sequence_column_name"] == level_2) &
(mimic_df["data_tags_model_type"] == comp_2) &
(mimic_df["data_params_ModelConfigbase_feature_embeddings_trainable"] == "False") &
(mimic_df["info_run_id"] != run_id_1)
]["info_run_id"].iloc[0]
ccomparison = compare_clusters(run_id_1, run_id_2, mlflow_helper.local_mlflow_dir, icd9_parents_1, icd9_parents_2, cluster_threshold=0.9)
len(ccomparison["same_clusters"])
len(ccomparison["same_clustered_inputs"])
comparison = Comparison(
run_id_1=run_id_1,
suffix_1="_" + comp_1 + level_1,
run_id_2=run_id_2,
suffix_2="_" + comp_2 + level_2,
local_mlflow_dir=mlflow_helper.local_mlflow_dir,
num_percentiles=10,
feature_replacements=icd9_parents_1)
plot_rank_comparison(comparison)
plot_outlier_distances(comparison)
analyse_best_worst_sequences(comparison, num_best_sequences=1, num_worst_sequences=1, descriptions=load_icd9_text())
plot_rank_comparison(comparison,
color="avg_input_frequencies_percentile" + comparison.suffix_1,
hover_data=[
"avg_input_frequencies_percentile" + comparison.suffix_1,
"avg_input_frequencies_percentile" + comparison.suffix_2,
])
plot_rank_comparison(comparison,
color="avg_input_frequencies_percentile" + comparison.suffix_2,
hover_data=[
"avg_input_frequencies_percentile" + comparison.suffix_1,
"avg_input_frequencies_percentile" + comparison.suffix_2,
])
plot_comparison(comparison,
plot_column="avg_input_frequencies")
index=max(comparison.comparison_df.index)
display(comparison.comparison_df.loc[index]["input" + comparison.suffix_1])
display(comparison.comparison_df.loc[index]["input" + comparison.suffix_2])
display(comparison.comparison_df.loc[index][[
"output_rank_noties" + comparison.suffix_1,
"output_rank_noties" + comparison.suffix_2,
"avg_input_frequencies" + comparison.suffix_1,
"avg_input_frequencies" + comparison.suffix_2,
"outlier_distance"]])
print(comparison.suffix_1)
for input in comparison.comparison_df.loc[index]["original_inputs" + comparison.suffix_1].split(','):
if input.strip() in comparison.attention_weights_for(comparison.suffix_1):
display(comparison.attention_weights_for(comparison.suffix_1).get(input.strip()))
print(comparison.suffix_2)
for input in comparison.comparison_df.loc[index]["original_inputs" + comparison.suffix_2].split(','):
if input.strip() in comparison.attention_weights_for(comparison.suffix_2):
display(comparison.attention_weights_for(comparison.suffix_2).get(input.strip()))
```
|
github_jupyter
|
| 0.403567 | 0.380615 |
```
import matplotlib.pyplot as plt
import numpy as np
from datetime import date
from collections import Counter
from helper import get_wsj, save_obj, load_obj, plot_training_history
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical
from keras.layers import Embedding
from keras.initializers import Constant
from models import model_1, model_2, model_3
plt.rcdefaults()
# Load the news article summaries and their corresponding news sections
articles = load_obj("articles.pkl")
sections = load_obj("sections.pkl")
# Checking how many different sections exist and how often they are used
section_counter = Counter()
for section in sections:
section_counter[section] += 1
plt.barh(np.arange(len(section_counter)), list(section_counter.values()))
plt.yticks(np.arange(len(section_counter)), list(section_counter.keys()))
plt.title("Frequency of News Sections")
plt.xlabel("Number of occurences")
plt.grid(axis="x")
plt.show()
# Filter out sections that occur only rarely
MINIMUM_OCCURRENCES = 100
cleaned_articles = []
cleaned_sections = []
for a, s in list(zip(articles, sections)):
    if (section_counter[s] >= MINIMUM_OCCURRENCES) and (s != "<ERROR>"):
cleaned_articles.append(a)
cleaned_sections.append(s)
print("Deleted {} articles. There are still {} articles left".format(len(articles) - len(cleaned_articles), len(cleaned_articles)))
# Plot number of words per article
plt.hist([len(a.split()) for a in cleaned_articles],25)
plt.title("Number of Words per Article")
plt.xlabel("Number of Words")
plt.ylabel("Number of Articles")
plt.grid(axis="y")
plt.show()
MAX_LEN = 60
less_or_equal_max_len = 0
for a in cleaned_articles:
if len(a.split()) <= MAX_LEN:
less_or_equal_max_len += 1
print("{0:.1f}% of the article summaries consist of less "
"or equal {1} words.".format(100 * less_or_equal_max_len / len(cleaned_articles), MAX_LEN))
```
Most of the article summaries contain no more than 60 words, so a sequence length of 60 covers the bulk of the data.
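A complementary check, sketched below, is to derive the cutoff from a target coverage instead of fixing it at 60 first. This sketch assumes `cleaned_articles` and `numpy` from the cells above; the 95% target is an arbitrary choice for illustration.
```
# Sequence length needed to fully cover 95% of the article summaries
word_counts = np.array([len(a.split()) for a in cleaned_articles])
print("95% of the summaries have at most {} words.".format(int(np.percentile(word_counts, 95))))
```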
```
# Dictionaries for translating sections into index and back
label_to_index = {}
for s in cleaned_sections:
if s not in label_to_index:
label_to_index[s] = len(label_to_index)
index_to_label = dict((i,l) for l, i in label_to_index.items())
# Label index for every article in the cleaned dataset
labels = [label_to_index[s] for s in cleaned_sections]
# Maximum number of unique words that should be considered
MAX_NUM_WORDS = 20000
# Number of words per article that are used. If an article has fewer words,
# it is padded; if it has more, it is truncated
SEQUENCE_LENGTH = MAX_LEN
# Tokenize words
tokenizer = Tokenizer(num_words=MAX_NUM_WORDS)
tokenizer.fit_on_texts(cleaned_articles)
# Transform articles into sequences of tokens
sequences = tokenizer.texts_to_sequences(cleaned_articles)
word_index = tokenizer.word_index
print('Found %s unique tokens.' % len(word_index))
# Pad or truncate articles whose length != SEQUENCE_LENGTH
tokenized_articles = pad_sequences(sequences, maxlen=SEQUENCE_LENGTH, padding='pre', truncating='post')
# One-hot encode the labels (sections)
labels = to_categorical(labels)
print('Shape of data tensor:', tokenized_articles.shape)
print('Shape of label tensor:', labels.shape)
# Load GloVe word vectors
embeddings_index = {}
with open("glove.6B.100d.txt", encoding='utf8') as f:
for line in f:
values = line.split()
word = values[0]
coefs = np.asarray(values[1:], dtype='float32')
embeddings_index[word] = coefs
print('Found %s GloVe word vectors.' % len(embeddings_index))
# Prepare embedding matrix
embedding_dim = embeddings_index["sunday"].shape[0]
print('Preparing embedding matrix.')
num_words = min(MAX_NUM_WORDS, len(word_index)) + 1
embedding_matrix = np.zeros((num_words, embedding_dim))
for word, i in word_index.items():
if i > MAX_NUM_WORDS:
continue
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
# Words not found in embedding index will be all-zeros.
embedding_matrix[i] = embedding_vector
# Load pre-trained word embeddings into an Embedding layer
embedding_layer = Embedding(num_words,
embedding_dim,
embeddings_initializer=Constant(embedding_matrix),
input_length=SEQUENCE_LENGTH,
trainable=False)
# Neural Network Model
models = []
models.append(model_1(SEQUENCE_LENGTH, embedding_layer, len(label_to_index)))
models.append(model_2(SEQUENCE_LENGTH, embedding_layer, len(label_to_index)))
models.append(model_3(SEQUENCE_LENGTH, embedding_layer, len(label_to_index)))
for n, model in enumerate(models):
print("Model #{}:".format(n))
model.summary()
history = []
for n, model in enumerate(models):
print("\nTraining model {}:".format(n))
print("----------------------")
history.append(model.fit(tokenized_articles, labels,
validation_split=0.25,
shuffle=True,
batch_size=1024,
epochs=50,
verbose=2))
# Plot training and validation accuracy values
for n, h in enumerate(history):
plot_training_history(h, "model {}".format(n))
def predict_news_section(article, model):
"""Predicts the belonging section for a given news article.
Ags:
article (str): Unprocessed text of a news article.
Returns:
section (str): The name of the predicted news section.
prob (numpy.ndarray): Predicted probability for each section.
"""
seq = tokenizer.texts_to_sequences([article])
padd_seq = pad_sequences(seq, maxlen=SEQUENCE_LENGTH, padding='pre', truncating='pre')
prob = model.predict(padd_seq).flatten()
return index_to_label[np.argmax(prob)], prob
# Predict the section for a randomly chosen article from the dataset
N = np.random.randint(0, len(cleaned_sections))
predicted_section, probability = predict_news_section(cleaned_articles[N], models[2])
plt.bar(np.arange(len(probability)), probability)
plt.show()
print("News section: ", cleaned_sections[N])
print("Predicted news section: ", predicted_section)
```
|
github_jupyter
|
| 0.759404 | 0.763219 |
```
import tensorflow as tf
print(tf.__version__)
# More imports
from tensorflow.keras.layers import Input, Dense, Flatten
from tensorflow.keras.applications.vgg16 import VGG16 as PretrainedModel, \
preprocess_input
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import SGD, Adam
from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from glob import glob
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sys, os
# Data from: https://mmspg.epfl.ch/downloads/food-image-datasets/
# !wget --passive-ftp --prefer-family=ipv4 --ftp-user FoodImage@grebvm2.epfl.ch \
# --ftp-password Cahc1moo -nc ftp://tremplin.epfl.ch/Food-5K.zip
!wget -nc https://lazyprogrammer.me/course_files/Food-5K.zip
!unzip -qq -o Food-5K.zip
!ls
!ls Food-5K/training
!mv Food-5K/* .
# look at an image for fun
plt.imshow(image.load_img('training/0_808.jpg'))
plt.show()
# Food images start with 1, non-food images start with 0
plt.imshow(image.load_img('training/1_616.jpg'))
plt.show()
!mkdir data
# Make directories to store the data Keras-style
!mkdir data/train
!mkdir data/test
!mkdir data/train/nonfood
!mkdir data/train/food
!mkdir data/test/nonfood
!mkdir data/test/food
# Move the images
# Note: we will consider 'training' to be the train set
# 'validation' folder will be the test set
# ignore the 'evaluation' set
!mv training/0*.jpg data/train/nonfood
!mv training/1*.jpg data/train/food
!mv validation/0*.jpg data/test/nonfood
!mv validation/1*.jpg data/test/food
train_path = 'data/train'
valid_path = 'data/test'
# These images are pretty big and of different sizes
# Let's load them all in as the same (smaller) size
IMAGE_SIZE = [200, 200]
# useful for getting number of files
image_files = glob(train_path + '/*/*.jpg')
valid_image_files = glob(valid_path + '/*/*.jpg')
# useful for getting number of classes
folders = glob(train_path + '/*')
folders
# look at an image for fun
plt.imshow(image.load_img(np.random.choice(image_files)))
plt.show()
ptm = PretrainedModel(
input_shape=IMAGE_SIZE + [3],
weights='imagenet',
include_top=False)
# freeze pretrained model weights
ptm.trainable = False
# map the data into feature vectors
# Keras image data generator returns classes one-hot encoded
K = len(folders) # number of classes
x = Flatten()(ptm.output)
x = Dense(K, activation='softmax')(x)
# create a model object
model = Model(inputs=ptm.input, outputs=x)
# view the structure of the model
model.summary()
# create an instance of ImageDataGenerator
gen_train = ImageDataGenerator(
rotation_range=20,
width_shift_range=0.1,
height_shift_range=0.1,
shear_range=0.1,
zoom_range=0.2,
horizontal_flip=True,
preprocessing_function=preprocess_input
)
gen_test = ImageDataGenerator(
preprocessing_function=preprocess_input
)
batch_size = 128
# create generators
train_generator = gen_train.flow_from_directory(
train_path,
shuffle=True,
target_size=IMAGE_SIZE,
batch_size=batch_size,
)
valid_generator = gen_test.flow_from_directory(
valid_path,
target_size=IMAGE_SIZE,
batch_size=batch_size,
)
model.compile(
loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy']
)
# fit the model
r = model.fit_generator(
train_generator,
validation_data=valid_generator,
epochs=10,
steps_per_epoch=int(np.ceil(len(image_files) / batch_size)),
validation_steps=int(np.ceil(len(valid_image_files) / batch_size)),
)
# create a 2nd train generator which does not use data augmentation
# to get the true train accuracy
train_generator2 = gen_test.flow_from_directory(
train_path,
target_size=IMAGE_SIZE,
batch_size=batch_size,
)
model.evaluate_generator(
train_generator2,
steps=int(np.ceil(len(image_files) / batch_size)))
# loss
plt.plot(r.history['loss'], label='train loss')
plt.plot(r.history['val_loss'], label='val loss')
plt.legend()
plt.show()
# accuracies
plt.plot(r.history['accuracy'], label='train acc')
plt.plot(r.history['val_accuracy'], label='val acc')
plt.legend()
plt.show()
```
|
github_jupyter
|
| 0.671578 | 0.496704 |
# GDP analysis for Indonesia
## Imports
```
import os
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
os.chdir('..')
from src.regressions import *
from src.helpers import *
from src.statistical_analysis import *
from src.evaluation_metrics import *
from src.feature_engineering import *
os.chdir('notebooks')
```
## Loading and visualizing the dataset
Load the dataset in a pandas dataframe.
```
PATH = os.path.join("..", "data", "indonesia.csv")
dataset = pd.read_csv(PATH)
```
Visualize the dataset.
```
dataset
```
Plot the GDP as a function of year.
```
plt.plot(dataset["YEAR"], dataset["REAL GROSS DOMESTIC PRODUCT PER CAPITA (CURRENT PRICES) (unit $ CURRENT)"])
plt.xlabel("Year")
plt.ylabel("GDP")
plt.title("The evolution of the Indonesian GDP from the 50's to the 00's")
plt.show()
```
Split the dataset into a matrix `X` for the features and a vector `y` for the labels.
```
ds = dataset.drop(columns=["YEAR"])
X, y = X_y_from_dataset(dataset.drop(columns=["YEAR"]))
print("We have", X.shape[0], "data points and", X.shape[1], "features")
ds.columns
```
Verify whether there are highly correlated features.
```
correlation_matrix = np.abs(np.corrcoef(X, rowvar=False))
sns.heatmap(correlation_matrix)
plt.title("correlation matrix")
plt.show()
mask = correlation_matrix > 0.8
np.fill_diagonal(mask, False)
for i, m in enumerate(mask):
if (sum(m) != 0):
print(ds.columns[i], "is highly correlated with: ",
", ".join(ds.columns[np.append(m, [False])]), "\n")
```
Plot the population evolution through time. It is clear that these features are highly correlated with one another.
```
sns.lineplot(x="YEAR", y="POPULATION (unit 000S)", data=dataset)
print("The condition number is", condition_number(X))
VIF_X = VIF(X)
print("The VIF is:", VIF_X, "\n The column with the highest VIF is", dataset.columns[np.argmax(VIF_X)])
```
The condition number, the VIF and the correlation matrix all indicate that our data is somewhat ill-conditioned. We therefore have to perform at least some model selection, or filter out some of the predictors.
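`condition_number` and `VIF` are helpers imported from the project's `src` package, so their implementation is not shown here; a minimal sketch of what such diagnostics typically compute (an assumption about the helpers, with `condition_number_sketch` and `vif_sketch` being hypothetical names) is:
```
import numpy as np

def condition_number_sketch(X):
    # ratio of the largest to the smallest singular value of the design matrix
    s = np.linalg.svd(X, compute_uv=False)
    return s.max() / s.min()

def vif_sketch(X):
    # VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing column j
    # on the remaining columns (with an intercept)
    n, p = X.shape
    vifs = []
    for j in range(p):
        y_j = X[:, j]
        X_j = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(X_j, y_j, rcond=None)
        resid = y_j - X_j @ beta
        ss_res = resid @ resid
        ss_tot = (y_j - y_j.mean()) @ (y_j - y_j.mean())
        r2 = 1.0 - ss_res / ss_tot
        vifs.append(1.0 / (1.0 - r2))
    return np.array(vifs)
```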
Now let us verify homoskedasticity with the Breusch–Pagan test.
```
_, p_value, s = breusch_pagan_test(X, y)
print(s, "because the p-value is: ", p_value)
```
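`breusch_pagan_test` is likewise a project helper; a rough sketch of the studentized (Koenker) form of the test, which is only an assumption about how the helper works, could look like this:
```
import numpy as np
from scipy import stats

def breusch_pagan_sketch(X, y):
    n = X.shape[0]
    Xc = np.column_stack([np.ones(n), X])
    # 1. fit OLS and keep the squared residuals
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    e2 = (y - Xc @ beta) ** 2
    # 2. regress the squared residuals on the same regressors
    gamma, *_ = np.linalg.lstsq(Xc, e2, rcond=None)
    fitted = Xc @ gamma
    r2 = 1 - ((e2 - fitted) ** 2).sum() / ((e2 - e2.mean()) ** 2).sum()
    # 3. the LM statistic n * R^2 is asymptotically chi-squared with
    #    (number of regressors) degrees of freedom under homoskedasticity
    lm = n * r2
    p_value = stats.chi2.sf(lm, df=X.shape[1])
    return lm, p_value
```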
## The models
Split the data into 80% training and 20% testing sets.
```
X_train, X_test, y_train, y_test = train_test_split(X, y)
year = dataset.dropna()["YEAR"]
```
### Least Squares
Train with the least squares estimator.
```
# A degree of 0 means we leave the dataset as it is, a degree of 1 simply adds a bias column,
# and for degree > 1 we compute the polynomial expansion associated with that degree.
d = degree_cross_val(X_train, y_train, 10)
d
```
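`degree_cross_val` and `build_poly` are project helpers; a minimal sketch of the polynomial expansion described by the comment above (degree 0 returns the data unchanged, degree 1 adds a bias column, higher degrees add element-wise powers; this is an assumption about the helper's behavior) is:
```
import numpy as np

def build_poly_sketch(X, degree):
    if degree == 0:
        # leave the dataset as it is
        return X
    n = X.shape[0]
    expanded = [np.ones((n, 1))]        # bias column
    for d in range(1, degree + 1):
        expanded.append(X ** d)         # element-wise powers of every feature
    return np.hstack(expanded)
```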
Augment the dataset.
```
# We split the dataset as instructed: the first 80% as train and the next 20% as test.
X_ls = build_poly(X, d)
X_train_ls, X_test_ls, _, _ = train_test_split(X_ls, y)
LS_w = least_squares(X_train_ls, y_train)
LS_prediction_data = predict(X_ls, LS_w)
LS_prediction_test = predict(X_test_ls, LS_w)
print("Testing R^2: ", R_squared(y_test, LS_prediction_test),
"\nFull data R^2:", R_squared(y, LS_prediction_data))
print("Full data adjusted R^2:", adjusted_R_squared(y, LS_prediction_data, X.shape[1]))
print("Testing RMSE: ", RMSE(y_test, LS_prediction_test),
"\nFull data RMSE:", RMSE(y, LS_prediction_data))
tot = np.mean(y_test)
print("This implies only", RMSE(y_test, LS_prediction_test)/tot, "error rate on the test and", RMSE(y, LS_prediction_data)/tot, "on the full dataset")
theil_U(y_test, LS_prediction_test)
```
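The evaluation helpers (`R_squared`, `adjusted_R_squared`, `RMSE`) also come from the project's `src` package; minimal sketches of the standard formulas they presumably implement (Theil's U is left out because its exact convention here is unknown) are:
```
import numpy as np

def r_squared_sketch(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1 - ss_res / ss_tot

def adjusted_r_squared_sketch(y_true, y_pred, n_features):
    n = len(y_true)
    r2 = r_squared_sketch(y_true, y_pred)
    return 1 - (1 - r2) * (n - 1) / (n - n_features - 1)

def rmse_sketch(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))
```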
Compute the CI for the coefficients.
```
X_for_var, w_for_var = (X_ls[:, 1:], LS_w[1:]) if d > 0 else (X, LS_w)
var = variance_least_squares_weights(X_for_var, y, LS_prediction_data)
lower_CI, upper_CI = confidence_interval(X_for_var.shape[0], X_for_var.shape[1], w_for_var, var)
```
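`confidence_interval(n, p, estimate, variance)` is another project helper; a sketch of a standard two-sided t interval (the degrees of freedom `n - p - 1` and the use of a t quantile are assumptions) is:
```
import numpy as np
from scipy import stats

def confidence_interval_sketch(n, p, estimate, variance, alpha=0.05):
    # two-sided interval: estimate +/- t_{1 - alpha/2, n - p - 1} * standard error
    t = stats.t.ppf(1 - alpha / 2, df=n - p - 1)
    half_width = t * np.sqrt(variance)
    return estimate - half_width, estimate + half_width
```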
Plot the coefficients with their CI intervals.
```
plt.figure(figsize=(15,8))
plt.errorbar(np.arange(X_for_var.shape[1]), w_for_var,
yerr=np.vstack([np.squeeze(w_for_var-lower_CI), np.squeeze(upper_CI-w_for_var)]),
fmt=".", ecolor='orange', lolims=True, uplims=True, label="Coefficients")
plt.xticks(np.arange(X_for_var.shape[1]), ["β"+str(i) for i in np.arange(X_for_var.shape[1])])
plt.title("95% CI around the regression coefficients")
plt.xlabel("The model coefficients")
plt.legend()
plt.grid(which='both', linestyle='-.', linewidth=0.5)
plt.show()
```
Compute the CI for the predictions.
```
var_ = variance_least_squares_line(X_for_var, y, LS_prediction_data)
lower_CI_line, upper_CI_line = confidence_interval(X_for_var.shape[0], X_for_var.shape[1], LS_prediction_data,
var_)
```
Plot the CI for the predictions.
```
plt.figure(figsize=(15,8))
plt.scatter(year, y, label="GDP")
plt.plot(year, LS_prediction_data, color="g", lw=1, ls='--', label="Prediction using least squares")
plt.gca().fill_between(year, np.squeeze(lower_CI_line), np.squeeze(upper_CI_line),
label="95% CI",
#color="#b9cfe7",
color="orange",
alpha=0.5,
edgecolor=None)
plt.xlabel("Year")
plt.ylabel("GDP")
plt.title("GDP prediction as a function of Year")
plt.legend()
plt.show()
```
### Ridge Regression
Train the ridge regression model.
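`ridge_regression` is a project helper as well; a minimal closed-form sketch (the exact scaling of the penalty term, here plain `lambda_`, is an assumption) is:
```
import numpy as np

def ridge_regression_sketch(X, y, lambda_):
    # closed-form ridge solution: w = (X^T X + lambda * I)^(-1) X^T y
    p = X.shape[1]
    A = X.T @ X + lambda_ * np.eye(p)
    return np.linalg.solve(A, X.T @ y)
```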
```
lambda_r, d_r = cross_val_ridge(X_train, y_train, plot=False)
X_rr = build_poly(X, d_r)
X_train_rr, X_test_rr, _, _ = train_test_split(X_rr, y)
print("The optimal hyper-parameters for the polynomial expansion and l2 regularization term are respectively:",
lambda_r, d_r)
Ridge_w = ridge_regression(X_train_rr, y_train, lambda_r)
Ridge_prediction_data = predict(X_rr, Ridge_w)
Ridge_prediction_test = predict(X_test_rr, Ridge_w)
print("Testing R^2: ", R_squared(y_test, Ridge_prediction_test),
"\nFull data R^2:", R_squared(y, Ridge_prediction_data))
print("Full data adjusted R^2:", adjusted_R_squared(y, Ridge_prediction_data, X_rr.shape[1]))
print("Testing RMSE: ", RMSE(y_test, Ridge_prediction_test),
"\nFull data RMSE:", RMSE(y, Ridge_prediction_data))
print("This implies only", RMSE(y_test, Ridge_prediction_test)/tot, "error rate on the test and", RMSE(y, Ridge_prediction_data)/tot, "on the full dataset")
print(theil_U(y_test, Ridge_prediction_test))
X_for_var_rr, w_for_var_rr = (X_rr[:, 1:], Ridge_w[1:]) if d_r > 0 else (X, Ridge_w)
var = variance_least_squares_weights(X_for_var_rr, y, Ridge_prediction_data)
lower_CI_r, upper_CI_r = confidence_interval(X_for_var_rr.shape[0], X_for_var_rr.shape[1], w_for_var_rr, var)
```
Plot the coefficients with their CI intervals.
```
plt.figure(figsize=(15,8))
plt.errorbar(np.arange(X_for_var_rr.shape[1]), w_for_var_rr,
yerr=np.vstack([np.squeeze(w_for_var_rr-lower_CI_r), np.squeeze(upper_CI_r-w_for_var_rr)]),
fmt=".", ecolor='orange', lolims=True, uplims=True, label="Coefficients")
plt.xticks(np.arange(X_for_var_rr.shape[1]), ["β"+str(i) for i in np.arange(X_for_var_rr.shape[1])])
plt.title("95% CI around the regression coefficients")
plt.xlabel("The model coefficients")
plt.legend()
plt.grid(which='both', linestyle='-.', linewidth=0.5)
plt.show()
```
Compute the CI for the predictions.
```
var_ = variance_least_squares_line(X_for_var_rr, y, Ridge_prediction_data)
lower_CI_line_r, upper_CI_line_r = confidence_interval(X_for_var_rr.shape[0], X_for_var_rr.shape[1], Ridge_prediction_data, var_)
```
Plot the CI for the predictions.
```
plt.figure(figsize=(15,8))
plt.scatter(year, y, label="GDP")
plt.plot(year, Ridge_prediction_data, color="g", lw=1, ls='--', label="Prediction using ridge regression")
plt.gca().fill_between(year, np.squeeze(lower_CI_line_r), np.squeeze(upper_CI_line_r),
label="95% CI",
#color="#b9cfe7",
color="orange",
alpha=0.5,
edgecolor=None)
plt.xlabel("Year")
plt.ylabel("GDP")
plt.title("GDP prediction as a function of Year")
plt.legend()
plt.show()
```
### Subset selection
Find the optimal combination of features in terms of $r^2$.
```
scores, subsets = best_subset_ls(X_train, y_train)
i = np.argmax(scores)
sub = subsets[i]
variables = "\n\t- ".join(dataset.columns[list(sub)])
print("Best performance on the test: ", scores[i], "the subset is: ", sub)
print("This corresponds to the following variables:\n\t- " + variables)
X_ss = X[:, sub]
X_train_ss, X_test_ss, _, _ = train_test_split(X_ss, y)
```
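`best_subset_ls` performs an exhaustive search over feature subsets; a small sketch of how such a search could be organised with `itertools.combinations` (the `score_fn` argument and the use of a validation R² are assumptions, and the search grows exponentially with the number of features) is:
```
from itertools import combinations
import numpy as np

def best_subset_sketch(X, y, score_fn, max_size=None):
    n_features = X.shape[1]
    max_size = max_size or n_features
    scores, subsets = [], []
    for k in range(1, max_size + 1):
        for sub in combinations(range(n_features), k):
            subsets.append(sub)
            scores.append(score_fn(X[:, sub], y))   # e.g. a validation R^2
    best = int(np.argmax(scores))
    return scores, subsets, best
```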
Compute the least squares estimator using the selected subset of features.
```
d_ss = degree_cross_val(X_train_ss, y_train, 10)
X_ls_ss = build_poly(X_ss, d_ss)
X_train_ls_ss, X_test_ls_ss, _, _ = train_test_split(X_ls_ss, y)
LS_w_ss = least_squares(X_train_ls_ss, y_train)
LS_ss_prediction_data = predict(X_ls_ss, LS_w_ss)
LS_ss_prediction_test = predict(X_test_ls_ss, LS_w_ss)
print("Testing R^2: ", R_squared(y_test, LS_ss_prediction_test),
"\nFull data R^2:", R_squared(y, LS_ss_prediction_data))
print("Full data adjusted R^2:", adjusted_R_squared(y, LS_ss_prediction_data, X.shape[1]))
print("Testing RMSE: ", RMSE(y_test, LS_ss_prediction_test),
"\nFull data RMSE:", RMSE(y, LS_ss_prediction_data))
print("This implies only", RMSE(y_test, LS_ss_prediction_test)/tot, "error rate on the test and", RMSE(y, LS_ss_prediction_data)/tot, "on the full dataset")
theil_U(y_test, LS_ss_prediction_test)
X_for_var_ls_ss, w_for_var_ls_ss = (X_ls_ss[:, 1:], LS_w_ss[1:]) if d_ss > 0 else (X_ls_ss, LS_w_ss)
var_ = variance_least_squares_line(X_for_var_ls_ss, y, LS_ss_prediction_data)
lower_CI_line_ls_ss, upper_CI_line_ls_ss = confidence_interval(X_for_var_ls_ss.shape[0], X_for_var_ls_ss.shape[1], LS_ss_prediction_data, var_)
plt.figure(figsize=(15,8))
plt.scatter(year, y, label="GDP")
plt.plot(year, LS_ss_prediction_data, color="g", lw=1, ls='--', label="Prediction using least squares (subset of features)")
plt.gca().fill_between(year, np.squeeze(lower_CI_line_ls_ss), np.squeeze(upper_CI_line_ls_ss),
label="95% CI",
#color="#b9cfe7",
color="orange",
alpha=0.5,
edgecolor=None)
plt.xlabel("Year")
plt.ylabel("GDP")
plt.title("GDP prediction as a function of Year")
plt.legend()
plt.show()
lambda_r_ss, d_r_ss = cross_val_ridge(X_train_ss, y_train, max_lambda=2, plot=False)
print("The optimal hyper-parameters for the polynomial expansion and l2 regularization term are respectively:",
lambda_r_ss, d_r_ss)
X_r_ss = build_poly(X_ss, d_ss)
X_train_r_ss, X_test_r_ss, _, _ = train_test_split(X_r_ss, y)
Ridge_w_lambda_ss = ridge_regression(X_train_r_ss, y_train, lambda_r_ss)
Ridge_prediction_lambda_ss_data = predict(X_r_ss, Ridge_w_lambda_ss)
Ridge_prediction_lambda_ss_test = predict(X_test_r_ss, Ridge_w_lambda_ss)
print("Testing R^2: ", R_squared(y_test, Ridge_prediction_lambda_ss_test),
"\nFull data R^2:", R_squared(y, Ridge_prediction_lambda_ss_data))
print("Full data adjusted R^2:", adjusted_R_squared(y, Ridge_prediction_lambda_ss_data, X.shape[1]))
print("Testing RMSE: ", RMSE(y_test, Ridge_prediction_lambda_ss_test),
"\nFull data RMSE:", RMSE(y, Ridge_prediction_lambda_ss_data))
print("This implies only", RMSE(y_test, Ridge_prediction_lambda_ss_test)/tot, "error rate on the test and", RMSE(y, Ridge_prediction_lambda_ss_data)/tot, "on the full dataset")
print(theil_U(y_test, Ridge_prediction_lambda_ss_test))
X_for_var_r_ss, w_for_var_r_ss = (X_r_ss[:, 1:], Ridge_w_lambda_ss[1:]) if d_r_ss > 0 else (X_r_ss, Ridge_w_lambda_ss)
var_ = variance_least_squares_line(X_for_var_r_ss, y, Ridge_prediction_lambda_ss_data)
lower_CI_line_r_ss, upper_CI_line_r_ss = confidence_interval(X_for_var_r_ss.shape[0], X_for_var_r_ss.shape[1], Ridge_prediction_lambda_ss_data, var_)
plt.figure(figsize=(15,8))
plt.scatter(year, y, label="GDP")
plt.plot(year, Ridge_prediction_lambda_ss_data, color="g", lw=1, ls='--', label="Prediction using ridge regression")
plt.gca().fill_between(year, np.squeeze(lower_CI_line_r_ss), np.squeeze(upper_CI_line_r_ss),
label="95% CI",
#color="#b9cfe7",
color="orange",
alpha=0.5,
edgecolor=None)
plt.xlabel("Year")
plt.ylabel("GDP")
plt.title("GDP prediction as a function of Year")
plt.legend()
plt.show()
```
### General to simple
```
idx = general_to_simple(X_train, y_train)
X_g2s = X[:, idx]
X_train_g2s, X_test_g2s, _, _ = train_test_split(X_g2s, y)
d_g2s = degree_cross_val(X_train_g2s, y_train, 10)
X_ls_g2s = build_poly(X_g2s, d_g2s)
X_train_ls_g2s, X_test_ls_g2s, _, _ = train_test_split(X_ls_g2s, y)
LS_w_g2s = least_squares(X_train_ls_g2s, y_train)
LS_g2s_prediction_data = predict(X_ls_g2s, LS_w_g2s)
LS_g2s_prediction_test = predict(X_test_ls_g2s, LS_w_g2s)
print("Testing R^2: ", R_squared(y_test, LS_g2s_prediction_test),
"\nFull data R^2:", R_squared(y, LS_g2s_prediction_data))
print("Full data adjusted R^2:", adjusted_R_squared(y, LS_g2s_prediction_data, X.shape[1]))
print("Testing RMSE: ", RMSE(y_test, LS_g2s_prediction_test),
"\nFull data RMSE:", RMSE(y, LS_g2s_prediction_data))
print("This implies only", RMSE(y_test, LS_g2s_prediction_test)/tot, "error rate on the test and", RMSE(y, LS_g2s_prediction_data)/tot, "on the full dataset")
theil_U(y_test, LS_g2s_prediction_test)
## AIC
e = y - LS_g2s_prediction_data
n = len(e)
n_features=int(np.size(idx))
aic=np.log(np.dot(e.T, e) / n) + 2 * n_features / n
## BIC
bic=np.log(np.dot(e.T, e) / n) + n_features * np.log(n) / n
print('aic is equal to',aic)
print('bic is equal to',bic)
from importlib import reload
print(os.getcwdb())
#os.chdir("econometrics/GDP/src")
#reload(statistical_analysis)
os.chdir("../notebooks")
idx = general_to_simple_ridge(X_train, y_train)
X_g2s = X[:, idx]
X_train_g2s, X_test_g2s, _, _ = train_test_split(X_g2s, y)
lambda_r_g2s, d_r_g2s = cross_val_ridge(X_train_g2s, y_train, plot=False)
print("The optimal hyper-parameters for the polynomial expansion and l2 regularization term are respectively:",
lambda_r_g2s, d_r_g2s)
X_r_g2s = build_poly(X_g2s, d_g2s)
X_train_r_g2s, X_test_r_g2s, _, _ = train_test_split(X_r_g2s, y)
Ridge_w_lambda_g2s = ridge_regression(X_train_r_g2s, y_train, lambda_r_g2s)
Ridge_prediction_lambda_g2s_data = predict(X_r_g2s, Ridge_w_lambda_g2s)
Ridge_prediction_lambda_g2s_test = predict(X_test_r_g2s, Ridge_w_lambda_g2s)
print("Testing R^2: ", R_squared(y_test, Ridge_prediction_lambda_g2s_test),
"\nFull data R^2:", R_squared(y, Ridge_prediction_lambda_g2s_data))
print("Full data adjusted R^2:", adjusted_R_squared(y, Ridge_prediction_lambda_g2s_data, X.shape[1]))
print("Testing RMSE: ", RMSE(y_test, Ridge_prediction_lambda_g2s_test),
"\nFull data RMSE:", RMSE(y, Ridge_prediction_lambda_g2s_data))
print("This implies only", RMSE(y_test, Ridge_prediction_lambda_g2s_test)/tot, "error rate on the test and", RMSE(y, Ridge_prediction_lambda_g2s_data)/tot, "on the full dataset")
print(theil_U(y_test, Ridge_prediction_lambda_g2s_test))
```
|
github_jupyter
|
import os
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
os.chdir('..')
from src.regressions import *
from src.helpers import *
from src.statistical_analysis import *
from src.evaluation_metrics import *
from src.feature_engineering import *
os.chdir('notebooks')
PATH = os.path.join("..", "data", "indonesia.csv")
dataset = pd.read_csv(PATH)
dataset
plt.plot(dataset["YEAR"], dataset["REAL GROSS DOMESTIC PRODUCT PER CAPITA (CURRENT PRICES) (unit $ CURRENT)"])
plt.xlabel("Year")
plt.ylabel("GDP")
plt.title("The evolution of the Indonesian GDP from the 50's to the 00's")
plt.show()
ds = dataset.drop(columns=["YEAR"])
X, y = X_y_from_dataset(dataset.drop(columns=["YEAR"]))
print("We have", X.shape[0], "data points and", X.shape[1], "features")
ds.columns
correlation_matrix = np.abs(np.corrcoef(X, rowvar=False))
sns.heatmap(correlation_matrix)
plt.title("correlation matrix")
plt.show()
mask = correlation_matrix > 0.8
np.fill_diagonal(mask, False)
for i, m in enumerate(mask):
if (sum(m) != 0):
print(ds.columns[i], "is highly correlated with: ",
", ".join(ds.columns[np.append(m, [False])]), "\n")
sns.lineplot(x="YEAR", y="POPULATION (unit 000S)", data=dataset)
print("The condition number is", condition_number(X))
VIF_X = VIF(X)
print("The VIF is:", VIF_X, "\n The column with the highest VIF is", dataset.columns[np.argmax(VIF_X)])
_, p_value, s = breusch_pagan_test(X, y)
print(s, "because the p-value is: ", p_value)
X_train, X_test, y_train, y_test = train_test_split(X, y)
year = dataset.dropna()["YEAR"]
# A degree of 0 means we leave the dataset as it is, a degree of 1 simply adds a bias column,
# and for degree > 1 we compute the polynomial expansion associated with that degree.
d = degree_cross_val(X_train, y_train, 10)
d
# We split the dataset as instructed: the first 80% as train and the next 20% as test.
X_ls = build_poly(X, d)
X_train_ls, X_test_ls, _, _ = train_test_split(X_ls, y)
LS_w = least_squares(X_train_ls, y_train)
LS_prediction_data = predict(X_ls, LS_w)
LS_prediction_test = predict(X_test_ls, LS_w)
print("Testing R^2: ", R_squared(y_test, LS_prediction_test),
"\nFull data R^2:", R_squared(y, LS_prediction_data))
print("Full data adjusted R^2:", adjusted_R_squared(y, LS_prediction_data, X.shape[1]))
print("Testing RMSE: ", RMSE(y_test, LS_prediction_test),
"\nFull data RMSE:", RMSE(y, LS_prediction_data))
tot = np.mean(y_test)
print("This implies only", RMSE(y_test, LS_prediction_test)/tot, "error rate on the test and", RMSE(y, LS_prediction_data)/tot, "on the full dataset")
theil_U(y_test, LS_prediction_test)
X_for_var, w_for_var = (X_ls[:, 1:], LS_w[1:]) if d > 0 else (X, LS_w)
var = variance_least_squares_weights(X_for_var, y, LS_prediction_data)
lower_CI, upper_CI = confidence_interval(X_for_var.shape[0], X_for_var.shape[1], w_for_var, var)
plt.figure(figsize=(15,8))
plt.errorbar(np.arange(X_for_var.shape[1]), w_for_var,
yerr=np.vstack([np.squeeze(w_for_var-lower_CI), np.squeeze(upper_CI-w_for_var)]),
fmt=".", ecolor='orange', lolims=True, uplims=True, label="Coefficients")
plt.xticks(np.arange(X_for_var.shape[1]), ["β"+str(i) for i in np.arange(X_for_var.shape[1])])
plt.title("95% CI around the regression coefficients")
plt.xlabel("The model coefficients")
plt.legend()
plt.grid(which='both', linestyle='-.', linewidth=0.5)
plt.show()
var_ = variance_least_squares_line(X_for_var, y, LS_prediction_data)
lower_CI_line, upper_CI_line = confidence_interval(X_for_var.shape[0], X_for_var.shape[1], LS_prediction_data,
var_)
plt.figure(figsize=(15,8))
plt.scatter(year, y, label="GDP")
plt.plot(year, LS_prediction_data, color="g", lw=1, ls='--', label="Prediction using least squares")
plt.gca().fill_between(year, np.squeeze(lower_CI_line), np.squeeze(upper_CI_line),
label="95% CI",
#color="#b9cfe7",
color="orange",
alpha=0.5,
edgecolor=None)
plt.xlabel("Year")
plt.ylabel("GDP")
plt.title("GDP prediction as a function of Year")
plt.legend()
plt.show()
lambda_r, d_r = cross_val_ridge(X_train, y_train, plot=False)
X_rr = build_poly(X, d_r)
X_train_rr, X_test_rr, _, _ = train_test_split(X_rr, y)
print("The optimal hyper-parameters for the polynomial expansion and l2 regularization term are respectively:",
lambda_r, d_r)
Ridge_w = ridge_regression(X_train_rr, y_train, lambda_r)
Ridge_prediction_data = predict(X_rr, Ridge_w)
Ridge_prediction_test = predict(X_test_rr, Ridge_w)
print("Testing R^2: ", R_squared(y_test, Ridge_prediction_test),
"\nFull data R^2:", R_squared(y, Ridge_prediction_data))
print("Full data adjusted R^2:", adjusted_R_squared(y, Ridge_prediction_data, X_rr.shape[1]))
print("Testing RMSE: ", RMSE(y_test, Ridge_prediction_test),
"\nFull data RMSE:", RMSE(y, Ridge_prediction_data))
print("This implies only", RMSE(y_test, Ridge_prediction_test)/tot, "error rate on the test and", RMSE(y, Ridge_prediction_data)/tot, "on the full dataset")
print(theil_U(y_test, Ridge_prediction_test))
X_for_var_rr, w_for_var_rr = (X_rr[:, 1:], Ridge_w[1:]) if d_r > 0 else (X, Ridge_w)
var = variance_least_squares_weights(X_for_var_rr, y, Ridge_prediction_data)
lower_CI_r, upper_CI_r = confidence_interval(X_for_var_rr.shape[0], X_for_var_rr.shape[1], w_for_var_rr, var)
plt.figure(figsize=(15,8))
plt.errorbar(np.arange(X_for_var_rr.shape[1]), w_for_var_rr,
yerr=np.vstack([np.squeeze(w_for_var_rr-lower_CI_r), np.squeeze(upper_CI_r-w_for_var_rr)]),
fmt=".", ecolor='orange', lolims=True, uplims=True, label="Coefficients")
plt.xticks(np.arange(X_for_var_rr.shape[1]), ["β"+str(i) for i in np.arange(X_for_var_rr.shape[1])])
plt.title("95% CI around the regression coefficients")
plt.xlabel("The model coefficients")
plt.legend()
plt.grid(which='both', linestyle='-.', linewidth=0.5)
plt.show()
var_ = variance_least_squares_line(X_for_var_rr, y, Ridge_prediction_data)
lower_CI_line_r, upper_CI_line_r = confidence_interval(X_for_var_rr.shape[0], X_for_var_rr.shape[1], Ridge_prediction_data, var_)
plt.figure(figsize=(15,8))
plt.scatter(year, y, label="GDP")
plt.plot(year, Ridge_prediction_data, color="g", lw=1, ls='--', label="Prediction using ridge regression")
plt.gca().fill_between(year, np.squeeze(lower_CI_line_r), np.squeeze(upper_CI_line_r),
label="95% CI",
#color="#b9cfe7",
color="orange",
alpha=0.5,
edgecolor=None)
plt.xlabel("Year")
plt.ylabel("GDP")
plt.title("GDP prediction as a function of Year")
plt.legend()
plt.show()
scores, subsets = best_subset_ls(X_train, y_train)
i = np.argmax(scores)
sub = subsets[i]
variables = "\n\t- ".join(dataset.columns[list(sub)])
print("Best performance on the test: ", scores[i], "the subset is: ", sub)
print("This corresponds to the following variables:\n\t- " + variables)
X_ss = X[:, sub]
X_train_ss, X_test_ss, _, _ = train_test_split(X_ss, y)
d_ss = degree_cross_val(X_train_ss, y_train, 10)
X_ls_ss = build_poly(X_ss, d_ss)
X_train_ls_ss, X_test_ls_ss, _, _ = train_test_split(X_ls_ss, y)
LS_w_ss = least_squares(X_train_ls_ss, y_train)
LS_ss_prediction_data = predict(X_ls_ss, LS_w_ss)
LS_ss_prediction_test = predict(X_test_ls_ss, LS_w_ss)
print("Testing R^2: ", R_squared(y_test, LS_ss_prediction_test),
"\nFull data R^2:", R_squared(y, LS_ss_prediction_data))
print("Full data adjusted R^2:", adjusted_R_squared(y, LS_ss_prediction_data, X.shape[1]))
print("Testing RMSE: ", RMSE(y_test, LS_ss_prediction_test),
"\nFull data RMSE:", RMSE(y, LS_ss_prediction_data))
print("This implies only", RMSE(y_test, LS_ss_prediction_test)/tot, "error rate on the test and", RMSE(y, LS_ss_prediction_data)/tot, "on the full dataset")
theil_U(y_test, LS_ss_prediction_test)
X_for_var_ls_ss, w_for_var_ls_ss = (X_ls_ss[:, 1:], LS_w_ss[1:]) if d_ss > 0 else (X_ls_ss, LS_w_ss)
var_ = variance_least_squares_line(X_for_var_ls_ss, y, LS_ss_prediction_data)
lower_CI_line_ls_ss, upper_CI_line_ls_ss = confidence_interval(X_for_var_ls_ss.shape[0], X_for_var_ls_ss.shape[1], LS_ss_prediction_data, var_)
plt.figure(figsize=(15,8))
plt.scatter(year, y, label="GDP")
plt.plot(year, LS_ss_prediction_data, color="g", lw=1, ls='--', label="Prediction using least squares (subset of features)")
plt.gca().fill_between(year, np.squeeze(lower_CI_line_ls_ss), np.squeeze(upper_CI_line_ls_ss),
label="95% CI",
#color="#b9cfe7",
color="orange",
alpha=0.5,
edgecolor=None)
plt.xlabel("Year")
plt.ylabel("GDP")
plt.title("GDP prediction as a function of Year")
plt.legend()
plt.show()
lambda_r_ss, d_r_ss = cross_val_ridge(X_train_ss, y_train, max_lambda=2, plot=False)
print("The optimal hyper-parameters for the polynomial expansion and l2 regularization term are respectively:",
lambda_r_ss, d_r_ss)
X_r_ss = build_poly(X_ss, d_ss)
X_train_r_ss, X_test_r_ss, _, _ = train_test_split(X_r_ss, y)
Ridge_w_lambda_ss = ridge_regression(X_train_r_ss, y_train, lambda_r_ss)
Ridge_prediction_lambda_ss_data = predict(X_r_ss, Ridge_w_lambda_ss)
Ridge_prediction_lambda_ss_test = predict(X_test_r_ss, Ridge_w_lambda_ss)
print("Testing R^2: ", R_squared(y_test, Ridge_prediction_lambda_ss_test),
"\nFull data R^2:", R_squared(y, Ridge_prediction_lambda_ss_data))
print("Full data adjusted R^2:", adjusted_R_squared(y, Ridge_prediction_lambda_ss_data, X.shape[1]))
print("Testing RMSE: ", RMSE(y_test, Ridge_prediction_lambda_ss_test),
"\nFull data RMSE:", RMSE(y, Ridge_prediction_lambda_ss_data))
print("This implies only", RMSE(y_test, Ridge_prediction_lambda_ss_test)/tot, "error rate on the test and", RMSE(y, Ridge_prediction_lambda_ss_data)/tot, "on the full dataset")
print(theil_U(y_test, Ridge_prediction_lambda_ss_test))
X_for_var_r_ss, w_for_var_r_ss = (X_r_ss[:, 1:], Ridge_w_lambda_ss[1:]) if d_r_ss > 0 else (X_r_ss, Ridge_w_lambda_ss)
var_ = variance_least_squares_line(X_for_var_r_ss, y, Ridge_prediction_lambda_ss_data)
lower_CI_line_r_ss, upper_CI_line_r_ss = confidence_interval(X_for_var_r_ss.shape[0], X_for_var_r_ss.shape[1], Ridge_prediction_lambda_ss_data, var_)
plt.figure(figsize=(15,8))
plt.scatter(year, y, label="GDP")
plt.plot(year, Ridge_prediction_lambda_ss_data, color="g", lw=1, ls='--', label="Prediction using ridge regression")
plt.gca().fill_between(year, np.squeeze(lower_CI_line_r_ss), np.squeeze(upper_CI_line_r_ss),
label="95% CI",
#color="#b9cfe7",
color="orange",
alpha=0.5,
edgecolor=None)
plt.xlabel("Year")
plt.ylabel("GDP")
plt.title("GDP prediction as a function of Year")
plt.legend()
plt.show()
idx = general_to_simple(X_train, y_train)
X_g2s = X[:, idx]
X_train_g2s, X_test_g2s, _, _ = train_test_split(X_g2s, y)
d_g2s = degree_cross_val(X_train_g2s, y_train, 10)
X_ls_g2s = build_poly(X_g2s, d_g2s)
X_train_ls_g2s, X_test_ls_g2s, _, _ = train_test_split(X_ls_g2s, y)
LS_w_g2s = least_squares(X_train_ls_g2s, y_train)
LS_g2s_prediction_data = predict(X_ls_g2s, LS_w_g2s)
LS_g2s_prediction_test = predict(X_test_ls_g2s, LS_w_g2s)
print("Testing R^2: ", R_squared(y_test, LS_g2s_prediction_test),
"\nFull data R^2:", R_squared(y, LS_g2s_prediction_data))
print("Full data adjusted R^2:", adjusted_R_squared(y, LS_g2s_prediction_data, X.shape[1]))
print("Testing RMSE: ", RMSE(y_test, LS_g2s_prediction_test),
"\nFull data RMSE:", RMSE(y, LS_g2s_prediction_data))
print("This implies only", RMSE(y_test, LS_g2s_prediction_test)/tot, "error rate on the test and", RMSE(y, LS_g2s_prediction_data)/tot, "on the full dataset")
theil_U(y_test, LS_g2s_prediction_test)
## AIC
e = y - LS_g2s_prediction_data
n = len(e)
n_features=int(np.size(idx))
aic=np.log(np.dot(e.T, e) / n) + 2 * n_features / n
## BIC
bic=np.log(np.dot(e.T, e) / n) + n_features * np.log(n) / n
print('aic is equal to',aic)
print('bic is equal to',bic)
from importlib import reload
print(os.getcwdb())
#os.chdir("econometrics/GDP/src")
#reload(statistical_analysis)
os.chdir("../notebooks")
idx = general_to_simple_ridge(X_train, y_train)
X_g2s = X[:, idx]
X_train_g2s, X_test_g2s, _, _ = train_test_split(X_g2s, y)
lambda_r_g2s, d_r_g2s = cross_val_ridge(X_train_g2s, y_train, plot=False)
print("The optimal hyper-parameters for the polynomial expansion and l2 regularization term are respectively:",
lambda_r_g2s, d_r_g2s)
X_r_g2s = build_poly(X_g2s, d_g2s)
X_train_r_g2s, X_test_r_g2s, _, _ = train_test_split(X_r_g2s, y)
Ridge_w_lambda_g2s = ridge_regression(X_train_r_g2s, y_train, lambda_r_g2s)
Ridge_prediction_lambda_g2s_data = predict(X_r_g2s, Ridge_w_lambda_g2s)
Ridge_prediction_lambda_g2s_test = predict(X_test_r_g2s, Ridge_w_lambda_g2s)
print("Testing R^2: ", R_squared(y_test, Ridge_prediction_lambda_g2s_test),
"\nFull data R^2:", R_squared(y, Ridge_prediction_lambda_g2s_data))
print("Full data adjusted R^2:", adjusted_R_squared(y, Ridge_prediction_lambda_g2s_data, X.shape[1]))
print("Testing RMSE: ", RMSE(y_test, Ridge_prediction_lambda_g2s_test),
"\nFull data RMSE:", RMSE(y, Ridge_prediction_lambda_g2s_data))
print("This implies only", RMSE(y_test, Ridge_prediction_lambda_g2s_test)/tot, "error rate on the test and", RMSE(y, Ridge_prediction_lambda_g2s_data)/tot, "on the full dataset")
print(theil_U(y_test, Ridge_prediction_lambda_g2s_test))
| 0.380529 | 0.910067 |
# Read the files
```
import pandas as pd
import os
import csv
filepath = '/Volumes/backup_128G/z_repository/TBIO_data/TEPC'
read_file_doc = 'keyword_by_author_cluster_%s.csv'
write_file_to = '{0}/{1}'.format(filepath, 'tepc_20190711_v3.xlsx')
clusters = {1:[], 2:[], 3:[], 4:[], 5:[]}
for n in range(1, 6):
fileToRead = read_file_doc % (n)
readFile = '{0}/{1}'.format(filepath, fileToRead)
# print(readFile)
with open(readFile) as csvfile:
readCSV = csv.reader(csvfile, delimiter=',')
for row in readCSV:
if row[1] == 'AUTHOR':
continue
clusters[n].append(row[1])
# print(clusters)
```
# Query the Organization and Societal sector
```
filepath = '/Volumes/backup_128G/z_repository/TBIO_data/RequestsFromTana/all_org_classes'
v10_tana = 'output_class_to_orgs_20190501_v10_tana.xlsx'
read_v10_tana = '{0}/{1}'.format(filepath, v10_tana)
workDf = pd.read_excel(read_v10_tana)
workDf = workDf.fillna(0)
workDf.shape, workDf.head()
```
# SPARQL
```
import stardog
import json
adminFile = '/Users/vincent/Projects/TBIO/tbio-conn-admin-local.json'
conn_details = {}
with open(adminFile, 'r') as readFile:
conn_details = json.loads(readFile.read())
def sparqlQryOrg(name):
return """
SELECT DISTINCT ?nameVal ?orgVal ?yearVal ?graphVal WHERE {
?name ?p ?orgEvt .
GRAPH ?graph {
?orgEvt ?orgp ?org .
}
?org a tbio:Organization .
FILTER(sameterm(?name, <http://tbio.orient.cas.cz#%s>)) .
OPTIONAL{?orgEvt <http://tbio.orient.cas.cz#occursInTime> ?year}
BIND(STR(?name) AS ?nameStr) .
BIND(REPLACE(?nameStr, "http://tbio.orient.cas.cz#", "") AS ?nameVal) .
BIND(STR(?org) AS ?orgStr) .
BIND(REPLACE(?orgStr, "http://tbio.orient.cas.cz#", "") AS ?orgVal) .
BIND(STR(?year) AS ?yearStr) .
BIND(REPLACE(?yearStr, "http://tbio.orient.cas.cz#", "") AS ?yearVal) .
BIND(STR(?graph) AS ?graphStr) .
BIND(REPLACE(?graphStr, "http://tbio.orient.cas.cz/", "") AS ?graphVal) .
} ORDER BY (?yearVal)
""" % (name)
orgList=[]
def getOrg(queryRes, name, n):
for nameVal in queryRes['results']['bindings']:
name = nameVal['nameVal']['value'] if 'nameVal' in nameVal else ""
org = nameVal['orgVal']['value'] if 'orgVal' in nameVal else ""
year = nameVal['yearVal']['value'] if 'yearVal' in nameVal else ""
graph = nameVal['graphVal']['value'] if 'graphVal' in nameVal else ""
if len(org) == 0:
continue
match = workDf.loc[workDf['Organization'] == org]
if match.empty == False:
SocietalSector = match.iloc[0]['SocietalSector']
else:
SocietalSector = 'None'
if SocietalSector == 0:
SocietalSector = 'None'
row = [n, name, org, SocietalSector, year, graph]
# print(row)
if row not in orgList:
orgList.append(row)
with stardog.Connection('tbio', **conn_details) as conn:
for n in range(1, 6):
for name in clusters[n]:
# print(name)
query = sparqlQryOrg(name)
results = conn.select(query)
getOrg(results, name, n)
orgList[0:1]
print(len(orgList))
```
# Output data
```
outDf = pd.DataFrame(orgList, columns=['Cluster', 'Name', 'Organization', 'Societal sector', 'Year', 'Graph'])
outDf.head()
```
# Write file
```
with pd.ExcelWriter(write_file_to) as writer:
    # write the dataframe to the "TEPC" sheet; the context manager saves the file on exit
    outDf.to_excel(writer, "TEPC", index=False)
```
|
github_jupyter
|
import pandas as pd
import os
import csv
filepath = '/Volumes/backup_128G/z_repository/TBIO_data/TEPC'
read_file_doc = 'keyword_by_author_cluster_%s.csv'
write_file_to = '{0}/{1}'.format(filepath, 'tepc_20190711_v3.xlsx')
clusters = {1:[], 2:[], 3:[], 4:[], 5:[]}
for n in range(1, 6):
fileToRead = read_file_doc % (n)
readFile = '{0}/{1}'.format(filepath, fileToRead)
# print(readFile)
with open(readFile) as csvfile:
readCSV = csv.reader(csvfile, delimiter=',')
for row in readCSV:
if row[1] == 'AUTHOR':
continue
clusters[n].append(row[1])
# print(clusters)
filepath = '/Volumes/backup_128G/z_repository/TBIO_data/RequestsFromTana/all_org_classes'
v10_tana = 'output_class_to_orgs_20190501_v10_tana.xlsx'
read_v10_tana = '{0}/{1}'.format(filepath, v10_tana)
workDf = pd.read_excel(read_v10_tana)
workDf = workDf.fillna(0)
workDf.shape, workDf.head()
import stardog
import json
adminFile = '/Users/vincent/Projects/TBIO/tbio-conn-admin-local.json'
conn_details = {}
with open(adminFile, 'r') as readFile:
conn_details = json.loads(readFile.read())
def sparqlQryOrg(name):
return """
SELECT DISTINCT ?nameVal ?orgVal ?yearVal ?graphVal WHERE {
?name ?p ?orgEvt .
GRAPH ?graph {
?orgEvt ?orgp ?org .
}
?org a tbio:Organization .
FILTER(sameterm(?name, <http://tbio.orient.cas.cz#%s>)) .
OPTIONAL{?orgEvt <http://tbio.orient.cas.cz#occursInTime> ?year}
BIND(STR(?name) AS ?nameStr) .
BIND(REPLACE(?nameStr, "http://tbio.orient.cas.cz#", "") AS ?nameVal) .
BIND(STR(?org) AS ?orgStr) .
BIND(REPLACE(?orgStr, "http://tbio.orient.cas.cz#", "") AS ?orgVal) .
BIND(STR(?year) AS ?yearStr) .
BIND(REPLACE(?yearStr, "http://tbio.orient.cas.cz#", "") AS ?yearVal) .
BIND(STR(?graph) AS ?graphStr) .
BIND(REPLACE(?graphStr, "http://tbio.orient.cas.cz/", "") AS ?graphVal) .
} ORDER BY (?yearVal)
""" % (name)
orgList=[]
def getOrg(queryRes, name, n):
for nameVal in queryRes['results']['bindings']:
name = nameVal['nameVal']['value'] if 'nameVal' in nameVal else ""
org = nameVal['orgVal']['value'] if 'orgVal' in nameVal else ""
year = nameVal['yearVal']['value'] if 'yearVal' in nameVal else ""
graph = nameVal['graphVal']['value'] if 'graphVal' in nameVal else ""
if len(org) == 0:
continue
match = workDf.loc[workDf['Organization'] == org]
if match.empty == False:
SocietalSector = match.iloc[0]['SocietalSector']
else:
SocietalSector = 'None'
if SocietalSector == 0:
SocietalSector = 'None'
row = [n, name, org, SocietalSector, year, graph]
# print(row)
if row not in orgList:
orgList.append(row)
with stardog.Connection('tbio', **conn_details) as conn:
for n in range(1, 6):
for name in clusters[n]:
# print(name)
query = sparqlQryOrg(name)
results = conn.select(query)
getOrg(results, name, n)
orgList[0:1]
print(len(orgList))
outDf = pd.DataFrame(orgList, columns=['Cluster', 'Name', 'Organization', 'Societal sector', 'Year', 'Graph'])
outDf.head()
with pd.ExcelWriter(write_file_to) as writer:
    # write the dataframe to the "TEPC" sheet; the context manager saves the file on exit
    outDf.to_excel(writer, "TEPC", index=False)
| 0.069672 | 0.345492 |
Lambda School Data Science, Unit 2: Predictive Modeling
# Regression & Classification, Module 4
## Assignment
- [ ] Watch Aaron Gallant's [video #1](https://www.youtube.com/watch?v=pREaWFli-5I) (12 minutes) & [video #2](https://www.youtube.com/watch?v=bDQgVt4hFgY) (9 minutes) to learn about the mathematics of Logistic Regression.
- [ ] Do train/validate/test split with the Tanzania Waterpumps data.
- [ ] Do one-hot encoding. (Remember it may not work with high cardinality categoricals.)
- [ ] Use scikit-learn for logistic regression.
- [ ] Get your validation accuracy score.
- [ ] Get and plot your coefficients.
- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)
- [ ] Commit your notebook to your fork of the GitHub repo.
> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.
## Stretch Goals
### Doing
- [ ] Add your own stretch goal(s) !
- [ ] Clean the data. For ideas, refer to [The Quartz guide to bad data](https://github.com/Quartz/bad-data-guide), a "reference to problems seen in real-world data along with suggestions on how to resolve them." One of the issues is ["Zeros replace missing values."](https://github.com/Quartz/bad-data-guide#zeros-replace-missing-values)
- [ ] Make exploratory visualizations.
- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).
- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html).
#### Exploratory visualizations
Visualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data.
For this problem, you may want to create a new column to represent the target as a number, 0 or 1. For example:
```python
train['functional'] = (train['status_group']=='functional').astype(int)
```
You can try [Seaborn "Categorical estimate" plots](https://seaborn.pydata.org/tutorial/categorical.html) for features with reasonably few unique values. (With too many unique values, the plot is unreadable.)
- Categorical features. (If there are too many unique values, you can replace less frequent values with "OTHER.")
- Numeric features. (If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcut#discretization-and-quantiling).)
You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features. For this problem, you may want to use the parameter `logistic=True`
You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty.
#### High-cardinality categoricals
This code from the previous assignment demonstrates how to replace less frequent values with 'OTHER'
```python
# Reduce cardinality for NEIGHBORHOOD feature ...
# Get a list of the top 10 neighborhoods
top10 = train['NEIGHBORHOOD'].value_counts()[:10].index
# At locations where the neighborhood is NOT in the top 10,
# replace the neighborhood with 'OTHER'
train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
```
#### Pipelines
[Scikit-Learn User Guide](https://scikit-learn.org/stable/modules/compose.html) explains why pipelines are useful, and demonstrates how to use them:
> Pipeline can be used to chain multiple estimators into one. This is useful as there is often a fixed sequence of steps in processing the data, for example feature selection, normalization and classification. Pipeline serves multiple purposes here:
> - **Convenience and encapsulation.** You only have to call fit and predict once on your data to fit a whole sequence of estimators.
> - **Joint parameter selection.** You can grid search over parameters of all estimators in the pipeline at once.
> - **Safety.** Pipelines help avoid leaking statistics from your test data into the trained model in cross-validation, by ensuring that the same samples are used to train the transformers and predictors.
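For example, the encoding, scaling, and logistic-regression steps used later in this notebook could be wrapped into a single pipeline (a sketch only, using the `X_train_subset` / `X_val_subset` frames created further down; it is not required for the assignment):
```python
import category_encoders as ce
from sklearn.linear_model import LogisticRegressionCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

pipeline = make_pipeline(
    ce.OneHotEncoder(use_cat_names=True),
    StandardScaler(),
    LogisticRegressionCV(multi_class='auto', n_jobs=-1)
)
# the pipeline applies each step in order when fitting and scoring
# pipeline.fit(X_train_subset, y_train)
# print(pipeline.score(X_val_subset, y_val))
```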
### Reading
- [ ] [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)
- [ ] [Always start with a stupid model, no exceptions](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)
- [ ] [Statistical Modeling: The Two Cultures](https://projecteuclid.org/download/pdf_1/euclid.ss/1009213726)
- [ ] [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way (without an excessive amount of formulas or academic pre-requisites).
```
# If you're in Colab...
import os, sys
in_colab = 'google.colab' in sys.modules
if in_colab:
# Install required python packages:
# category_encoders, version >= 2.0
# pandas-profiling, version >= 2.0
# plotly, version >= 4.0
!pip install --upgrade category_encoders pandas-profiling plotly
# Pull files from Github repo
os.chdir('/content')
!git init .
!git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git
!git pull origin master
# Change into directory for module
os.chdir('module4')
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
import pandas as pd
train_features = pd.read_csv('../data/tanzania/train_features.csv')
train_labels = pd.read_csv('../data/tanzania/train_labels.csv')
test_features = pd.read_csv('../data/tanzania/test_features.csv')
sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv')
assert train_features.shape == (59400, 40)
assert train_labels.shape == (59400, 2)
assert test_features.shape == (14358, 40)
assert sample_submission.shape == (14358, 2)
```
###Do train/validate/test split with the Tanzania Waterpumps data.
```
train_labels.head()
```
##Baseline Model
A baseline for classification can be the most common class in the training dataset. Logistic regression predicts the probability of an event occurring.
```
y_train= train_labels['status_group']
# determine the majority class
(y_train.value_counts(normalize= True)*100).round(2)
```
Baseline (majority-class) accuracy = 54.31%
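scikit-learn can also produce this majority-class baseline directly with `DummyClassifier`; a short optional sketch (the features are ignored by this strategy, so any column works):
```
from sklearn.dummy import DummyClassifier

baseline = DummyClassifier(strategy='most_frequent')
baseline.fit(train_features[['id']], y_train)           # X is ignored by 'most_frequent'
print(baseline.score(train_features[['id']], y_train))  # equals the majority-class frequency
```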
```
# accuracy for classification is the frequency of the most common label
# check how accurate the model would be if we guessed the majority class for every prediction
majority_class = y_train.mode()[0]
y_pred = [majority_class] * len(y_train)
print(len(y_pred))
#Accuracy of majority class baseline = frequency of the majority class
from sklearn.metrics import accuracy_score
accuracy_score(y_train, y_pred)
train_features.head()
X_train = train_features
X_train.shape , y_train.shape
X_test= test_features
sample_submission.head()
y_test = sample_submission['status_group']
X_test.shape, y_test.shape
# split the data into train and validation sets; standardization happens later
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, stratify= y_train, test_size=0.2, random_state= 44 )
X_train.shape , X_val.shape , y_train.shape, y_val.shape
```
###Use scikit-learn for logistic regression.
```
# drop the non numeric feature
X_train_numeric= X_train.select_dtypes('number')
print(X_train_numeric.shape)
print(y_train.shape)
X_train_numeric.head()
# Look for nan
X_train_numeric.isna().sum()
from sklearn.linear_model import LogisticRegressionCV
# Instantiate it
model= LogisticRegressionCV(solver= 'lbfgs', multi_class= 'auto', n_jobs= -1,max_iter= 1000)
# Fit it
model.fit(X_train_numeric, y_train)
import sklearn
sklearn.__version__
```
###Get your validation accuracy score.
```
# evaluate on validation data
X_val_numeric = X_val.select_dtypes('number')
y_pred= model.predict(X_val_numeric)
acc= accuracy_score(y_val, y_pred)
print(f'Accuracy score for just numeric feature: {acc: .2f}')
# didn't beat the baseline prediction
X_train.isna().sum()
```
###Simple and fast Baseline Model with subset of columns
```
# Keep only the numeric columns that have no missing values
train_subset = X_train.select_dtypes('number').dropna(axis=1)
```
###Do one-hot encoding. (Remember it may not work with high cardinality categoricals.)
```
# check the cardinality of data
X_train.describe(exclude='number').T.sort_values(by='unique')
X_train['quantity'].value_counts()
#combine X_train and y_train for exploratory data visualisation
train = X_train.copy()
train['status_group']= y_train
train.groupby('quantity')['status_group'].value_counts(dropna = True, normalize=True)
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
train['functional']= (train['status_group']== 'functional').astype(int)
train[['status_group' , 'functional']]
sns.catplot(x = 'quantity', y= 'functional', data = train, kind= 'bar', color= 'gray')
# a feature with only one unique value, or with too many, is not useful for the model
import category_encoders as ce
from sklearn.preprocessing import StandardScaler
# Use both the numeric features and the categorical feature 'quantity'
categorical_features= ['quantity']
numeric_features = X_train.select_dtypes('number').columns.drop('id').tolist()
# combine the feature
features = categorical_features + numeric_features
# create subset of numeric and categorical features
X_train_subset = X_train[features]
X_val_subset= X_val[features]
# do the one-hot encoding
encoder = ce.OneHotEncoder(use_cat_names = True)
X_train_encoded = encoder.fit_transform(X_train_subset)
X_val_encoded = encoder.transform(X_val_subset)
# Standardize the data
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train_encoded)
X_val_scaled = scaler.transform(X_val_encoded)
# Instantiate the model
model = LogisticRegressionCV(multi_class = 'auto', n_jobs = -1, )
# fit the model
model.fit(X_train_scaled, y_train)
# Print the accuracy output
print(f'validation score:{model.score(X_val_scaled, y_val):.2f}')
```
###Get and plot your coefficients.
```
# we have one coefficient vector per class; each entry corresponds to a variable (column)
# model.coef_[0] is for the 0th class ('functional')
# model.coef_[1] is for the 1st class ('functional needs repair')
coefficient= pd.Series(model.coef_[0], X_train_encoded.columns)
plt.figure(figsize= (10,7))
coefficient.sort_values().plot.barh();
```
|
github_jupyter
|
train['functional'] = (train['status_group']=='functional').astype(int)
# Reduce cardinality for NEIGHBORHOOD feature ...
# Get a list of the top 10 neighborhoods
top10 = train['NEIGHBORHOOD'].value_counts()[:10].index
# At locations where the neighborhood is NOT in the top 10,
# replace the neighborhood with 'OTHER'
train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
# If you're in Colab...
import os, sys
in_colab = 'google.colab' in sys.modules
if in_colab:
# Install required python packages:
# category_encoders, version >= 2.0
# pandas-profiling, version >= 2.0
# plotly, version >= 4.0
!pip install --upgrade category_encoders pandas-profiling plotly
# Pull files from Github repo
os.chdir('/content')
!git init .
!git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git
!git pull origin master
# Change into directory for module
os.chdir('module4')
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
import pandas as pd
train_features = pd.read_csv('../data/tanzania/train_features.csv')
train_labels = pd.read_csv('../data/tanzania/train_labels.csv')
test_features = pd.read_csv('../data/tanzania/test_features.csv')
sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv')
assert train_features.shape == (59400, 40)
assert train_labels.shape == (59400, 2)
assert test_features.shape == (14358, 40)
assert sample_submission.shape == (14358, 2)
train_labels.head()
y_train= train_labels['status_group']
# determine the majority class
(y_train.value_counts(normalize= True)*100).round(2)
# accuracy for classification is the frequency of the most common label
# check how accurate the model would be if we guessed the majority class for every prediction
majority_class = y_train.mode()[0]
y_pred = [majority_class] * len(y_train)
print(len(y_pred))
#Accuracy of majority class baseline = frequency of the majority class
from sklearn.metrics import accuracy_score
accuracy_score(y_train, y_pred)
train_features.head()
X_train = train_features
X_train.shape , y_train.shape
X_test= test_features
sample_submission.head()
y_test = sample_submission['status_group']
X_test.shape, y_test.shape
# split the data into train and validation sets; standardization happens later
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, stratify= y_train, test_size=0.2, random_state= 44 )
X_train.shape , X_val.shape , y_train.shape, y_val.shape
# drop the non numeric feature
X_train_numeric= X_train.select_dtypes('number')
print(X_train_numeric.shape)
print(y_train.shape)
X_train_numeric.head()
# Look for nan
X_train_numeric.isna().sum()
from sklearn.linear_model import LogisticRegressionCV
# Instantiate it
model= LogisticRegressionCV(solver= 'lbfgs', multi_class= 'auto', n_jobs= -1,max_iter= 1000)
# Fit it
model.fit(X_train_numeric, y_train)
import sklearn
sklearn.__version__
# evaluate on validation data
X_val_numeric = X_val.select_dtypes('number')
y_pred= model.predict(X_val_numeric)
acc= accuracy_score(y_val, y_pred)
print(f'Accuracy score for just numeric feature: {acc: .2f}')
# didn't beat the baseline prediction
X_train.isna().sum()
# Keep only the numeric columns that have no missing values
train_subset = X_train.select_dtypes('number').dropna(axis=1)
# check the cardinality of data
X_train.describe(exclude='number').T.sort_values(by='unique')
X_train['quantity'].value_counts()
#combine X_train and y_train for exploratory data visualisation
train = X_train.copy()
train['status_group']= y_train
train.groupby('quantity')['status_group'].value_counts(dropna = True, normalize=True)
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
train['functional']= (train['status_group']== 'functional').astype(int)
train[['status_group' , 'functional']]
sns.catplot(x = 'quantity', y= 'functional', data = train, kind= 'bar', color= 'gray')
# a feature with only one unique value, or with too many, is not useful for the model
import category_encoders as ce
from sklearn.preprocessing import StandardScaler
# Use both the numeric features and the categorical feature 'quantity'
categorical_features= ['quantity']
numeric_features = X_train.select_dtypes('number').columns.drop('id').tolist()
# combine the feature
features = categorical_features + numeric_features
# create subset of numeric and categorical features
X_train_subset = X_train[features]
X_val_subset= X_val[features]
# do the one-hot encoding
encoder = ce.OneHotEncoder(use_cat_names = True)
X_train_encoded = encoder.fit_transform(X_train_subset)
X_val_encoded = encoder.transform(X_val_subset)
# Standardize the data
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train_encoded)
X_val_scaled = scaler.transform(X_val_encoded)
# Instantiate the model
model = LogisticRegressionCV(multi_class = 'auto', n_jobs = -1, )
# fit the model
model.fit(X_train_scaled, y_train)
# Print the accuracy output
print(f'validation score:{model.score(X_val_scaled, y_val):.2f}')
# we have one coefficient vector per class; each entry corresponds to a variable (column)
# model.coef_[0] is for the 0th class ('functional')
# model.coef_[1] is for the 1st class ('functional needs repair')
coefficient= pd.Series(model.coef_[0], X_train_encoded.columns)
plt.figure(figsize= (10,7))
coefficient.sort_values().plot.barh();
| 0.505127 | 0.958148 |