```
def launch(spark_session, map_fun, args_dict):
    """ Run the wrapper function with each hyperparameter combination as specified by the dictionary
    Args:
        :spark_session: SparkSession object
        :map_fun: The TensorFlow function to run
        :args_dict: A dictionary containing hyperparameter values to insert as arguments for each TensorFlow job
    """
    sc = spark_session.sparkContext
    # The length of the first list of hyperparameter values determines the number of Spark tasks
    num_tasks = len(list(args_dict.values())[0])
    # Create a number of partitions (tasks)
    nodeRDD = sc.parallelize(range(num_tasks), num_tasks)
    # Execute each of the hyperparameter combinations as a task
    nodeRDD.foreachPartition(_do_search(map_fun, args_dict))

def _do_search(map_fun, args_dict):
    def _wrapper_fun(iter):
        for i in iter:
            executor_num = i
            # Inspect the function signature to find which hyperparameters it expects
            argcount = map_fun.__code__.co_argcount
            names = map_fun.__code__.co_varnames
            args = []
            argIndex = 0
            while argcount > 0:
                # Get the argument value for this hyperparameter combination
                param_name = names[argIndex]
                param_val = args_dict[param_name][executor_num]
                args.append(param_val)
                argcount -= 1
                argIndex += 1
            map_fun(*args)
    return _wrapper_fun
def mnist(num_steps):
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
mnist = input_data.read_data_sets('/tmp/tensorflow/mnist/input_data', one_hot=True)
# Create the model
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.matmul(x, W) + b
# Define loss and optimizer
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
for _ in range(num_steps):
batch_xs, batch_ys = mnist.train.next_batch(100)
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
# Test trained model
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
args_dict = {'num_steps': [1000, 10000]}
launch(spark, mnist, args_dict)
```
<h1>Branch and Rebase</h1>
In this notebook you will start with a repository containing names of cities in various US states. Following the distinction between the develop and issue branches, the names for each state will be added to the `cities` file in a different commit. However, the order of the commits will not match the order in which the states joined the United States. For example, New York joined the union before Texas, and Hawaii joined after Texas. You will use the `rebase` command in Git to reorder the commits to match the order in which the states became part of the United States.
The following initializes the repo and creates the commits in an arbitrary order. Feel free to modify the cell to use your user name and email.
```
%%bash
git init rebase_repo
cd rebase_repo
git config --global user.email "peter@initech.com"
git config --global user.name "Peter Gibbons"
git checkout -b develop
echo "This repo contains
lists of cities for New York, Hawaii, and Texas" > README
git add README
git commit -m 'initial commit'
git branch hawaii
git branch newyork
git branch texas
git checkout --force hawaii
echo "Honolulu
Hilo
Kailua" >> cities
git add cities
git commit -am 'added hawaii'
git checkout --force newyork
echo "New York
Albany
Buffalo" >> cities
git add cities
git commit -am 'added new york'
git checkout --force texas
echo "Austin
Dallas
Houston" >> cities
git add cities
git commit -am 'added texas'
%cd rebase_repo
```
Start by defining a useful alias for the `git log` command.
```
!git config --global alias.lol 'log --graph --decorate --oneline --all'
```
After you run the detailed `log`, your output should resemble the following:
<pre>
* b4e0... (hawaii) added hawaii
| * b52c... (newyork) added new york
|/
| * df45... (HEAD -> texas) added texas
|/
* d3a8... (develop) initial commit
</pre>
```
!git lol
```
Since New York was the first to join the union, ensure that your `HEAD` points to the `newyork` branch before doing the rebase.
```
!git checkout newyork
```
Use your detailed log to confirm the correct state of the `HEAD` reference.
```
!git lol
```
You are ready to start with the `rebase`. Ensure that the commit for `newyork` is rebased back to the `develop` branch.
```
!git rebase develop
```
Don't be surprised by the output of the rebase command. If there is a direct path from `develop` to `newyork`, then `newyork` is already based on `develop` and there is nothing to rebase.
```
!git lol
```
Next, rebase `texas` on top of the `newyork` commit.
```
!git checkout texas
!git rebase newyork
```
This time the command results in a conflict. Review the conflicting file and resolve the issue.
```
!cat cities
%%writefile cities
New York
Albany
Buffalo
Austin
Dallas
Houston
```
Remember that once the `cities` file has the right content you need to re-stage it and `--continue` the rebase.
```
!git add cities
!git rebase --continue
```
Confirm that the rebase completed successfully using your `git log` alias.
```
!git lol
```
Finally, complete the steps to rebase `hawaii`.
```
!git checkout hawaii
!git rebase texas
!cat cities
%%writefile cities
New York
Albany
Buffalo
Austin
Dallas
Houston
Honolulu
Hilo
Kailua
!git add cities
!git rebase --continue
!cat cities
```
Once the rebase is done, check the detailed log.
```
!git lol
```
Assuming the rebase completed as expected, the order of the commits in the log should resemble the following:
<pre>
* ebac... (HEAD -> hawaii) added hawaii
* 0e46... (texas) added texas
* b52c... (newyork) added new york
* d3a8... (develop) initial commit
</pre>
Finally, checkout the `develop` branch and "fast-forward" it to the `hawaii` branch so that future commits to develop happen based on the `hawaii` commit.
```
!git checkout develop
!git merge hawaii
```
At the conclusion of this exercise your log should resemble the following:
<pre>
* 538b... (HEAD -> develop, hawaii) added hawaii
* f7b1... (texas) added texas
* baca... (newyork) added new york
* 28fe... initial commit
</pre>
```
!git lol
```
Copyright 2019 CounterFactual.AI LLC. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
<a href="https://colab.research.google.com/github/SoIllEconomist/ds4b/blob/master/python_ds4b/07_machine_learning/scikit_learn_overview.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Scikit-learn
Scikit-learn is an open source Python library that implements a range of machine learning, preprocessing, cross-validation and visualization algorithms using a unified interface.
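Every estimator in scikit-learn exposes the same small set of methods (`fit`, then `predict` or `transform`), which is what makes the interface unified. The sketch below is a minimal illustration of that pattern; the toy iris data and the choice of `KNeighborsClassifier` here are arbitrary, not part of the original notebook:
```
# A minimal sketch of the unified estimator interface: construct, fit, predict.
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
clf = KNeighborsClassifier(n_neighbors=3)  # any estimator could be swapped in here
clf.fit(X, y)                              # every estimator implements fit()
print(clf.predict(X[:5]))                  # supervised estimators also implement predict()
```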
## Loading Data
Your data needs to be numeric and stored as NumPy arrays or SciPy sparse
matrices. Other types that are convertible to numeric arrays, such as Pandas
DataFrame, are also acceptable.
```
import numpy as np
X = np.random.random((11,5))
y = np.array(['M','M','F','F','M','F','M','M','F','F','F'])
X[X < 0.7] = 0
```
## Train-Test-Split
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y,random_state=42)
```
## Preprocessing Data
### Standardization
```
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X_train)
standardized_X = scaler.transform(X_train)
standardized_X_test = scaler.transform(X_test)
```
### Normalization
```
from sklearn.preprocessing import Normalizer
scaler = Normalizer().fit(X_train)
normalized_X = scaler.transform(X_train)
normalized_X_test = scaler.transform(X_test)
```
### Binarization
```
from sklearn.preprocessing import Binarizer
binarizer = Binarizer(threshold=0.0).fit(X)
binary_X = binarizer.transform(X)
```
### Encoding Categorical Features
```
from sklearn.preprocessing import LabelEncoder
enc = LabelEncoder()
y = enc.fit_transform(y)
```
### Imputing Missing Values
```
from sklearn.impute import SimpleImputer
imp = SimpleImputer(missing_values=0, strategy='mean')
imp.fit_transform(X_train)
```
### Generating Polynomial Features
```
from sklearn.preprocessing import PolynomialFeatures
poly = PolynomialFeatures(5)
poly.fit_transform(X)
```
## Model Creation
### Supervised Learning Estimators
#### Linear Regression
```
from sklearn.linear_model import LinearRegression
lr = LinearRegression(normalize=True)  # note: the normalize parameter was removed in newer scikit-learn versions; scale features separately (e.g. with StandardScaler) instead
```
#### Support Vector Machines (SVM)
```
from sklearn.svm import SVC
svc = SVC(kernel='linear')
```
#### Naive Bayes
```
from sklearn.naive_bayes import GaussianNB
gnb = GaussianNB()
```
#### KNN
```
from sklearn import neighbors
knn = neighbors.KNeighborsClassifier(n_neighbors=5)
```
### Unsupervised Learning Estimators
#### Principal Component Analysis (PCA)
```
from sklearn.decomposition import PCA
pca = PCA(n_components=0.95)
```
#### K Means
```
from sklearn.cluster import KMeans
k_means = KMeans(n_clusters=3, random_state=0)
```
## Model Fitting
### Supervised Learning
Fit the model to the data
```
lr.fit(X, y)
knn.fit(X_train, y_train)
svc.fit(X_train, y_train)
```
### Unsupervised Learning
Fit the model to the data
```
k_means.fit(X_train)
```
Fit to data, then transform it
```
pca_model = pca.fit_transform(X_train)
```
## Prediction
### Supervised Estimators
Predict Labels
```
y_pred = svc.predict(np.random.random((2,5)))
y_pred = lr.predict(X_test)
```
Estimate probability of a label
```
y_pred = knn.predict_proba(X_test)
```
### Unsupervised Estimators
Predict labels in clustering algorithms
```
y_pred = k_means.predict(X_test)
```
## Evaluate Model Performance
### Classification Metrics
#### Accuracy Score
Estimator score method
```
knn.score(X_test, y_test)
```
Metric scoring functions
```
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_pred)
```
#### Classification
Precision, recall, f1-score
and support
```
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))
```
#### Confusion Matrix
```
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y_test, y_pred))
```
### Regression Metrics
#### Mean Absolute Error
```
from sklearn.metrics import mean_absolute_error
y_true = [3, -0.5, 2]
mean_absolute_error(y_true, y_pred)
```
#### Mean Squared Error
```
from sklearn.metrics import mean_squared_error
mean_squared_error(y_test, y_pred)
```
#### $R^2$ Score
```
from sklearn.metrics import r2_score
r2_score(y_true, y_pred)
```
### Cluster Metrics
#### Adjusted Rand Index
```
from sklearn.metrics import adjusted_rand_score
adjusted_rand_score(y_true, y_pred)
```
#### Homogeneity
```
from sklearn.metrics import homogeneity_score
homogeneity_score(y_true, y_pred)
```
#### V-measure
```
from sklearn.metrics import v_measure_score
v_measure_score(y_true, y_pred)
```
### Cross-Validation
```
from sklearn.model_selection import cross_val_score
print(cross_val_score(knn, X_train, y_train, cv=4))
print(cross_val_score(lr, X, y, cv=2))
```
## Model Tuning
### Grid Search
```
from sklearn import svm, datasets
from sklearn.model_selection import GridSearchCV
iris = datasets.load_iris()
parameters = {'kernel':('linear', 'rbf'), 'C':[1, 10]}
svc = svm.SVC()
clf = GridSearchCV(svc, parameters)
clf.fit(iris.data, iris.target)
GridSearchCV(estimator=SVC(),
param_grid={'C': [1, 10], 'kernel': ('linear', 'rbf')})
sorted(clf.cv_results_.keys())
```
### Randomized Parameter Optimization
```
from sklearn.model_selection import RandomizedSearchCV
params = {"n_neighbors": range(1,5), "weights": ["uniform", "distance"]}
rsearch = RandomizedSearchCV(estimator=knn,
param_distributions=params,
cv=4,
n_iter=8,
random_state=5)
rsearch.fit(X_train, y_train)
print(rsearch.best_score_)
```
# Basic `Python`
We are going to learn the most basic commands of Python. The objective is not to teach you how to program like an expert, but to learn the syntax of the language and recognize it throughout the next notebooks.
## Basic operations
### +, -, \*, /, **
<div class="alert alert-block alert-info">
<b>Try it out!</b>
<br><br>
- Define two integer variables & print the result of basic operations
</div>
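For reference, one possible way to complete this exercise is sketched below (the variable names and values are arbitrary):
```
# Two integer variables and the basic operations on them
a, b = 7, 3
print(a + b, a - b, a * b, a / b, a ** b)
```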
### Operations on the same variable can be simplified by adding the operator before the `=`, like this:
x += 1
<div class="alert alert-block alert-info">
<b>Try it out!</b>
<br><br>- Execute the code in the cell below & print <b>x</b> & <b>y</b> final values
</div>
```
x, y = 3, 4
print(x,y)
x += 1
y /= 2
```
***
## `for` loops
Can iterate on any list.
<br>
#### Note the indentation in the line after the `for` command. This position indicates which lines belong to the `for` loop.
The final print statement is not part of the loop because it has no indent.
<div class="alert alert-block alert-info">
<b>Try it out!</b>
<br><br>Execute the code below
</div>
```
somelist = [10,15,25, '10ppm','5m/s']
for item in somelist:
print(item)
print(somelist)
```
### A list can also be just of numbers, and we can build a list using the function `range`:
#### range(start, end, step)
<br><i>Note: the end is non-inclusive</i>
<div class="alert alert-block alert-info">
<b>Try it out!</b>
<br><br>- Execute the code below
<br>
- Replace <b>somelist</b> in the <b>for</b> loop with: <b>range(0,41,5)</b>, and execute the code again
<br>
- Change the start, end or step and execute the code again
</div>
```
somelist = [0, 5, 10, 15, 20, 25, 30, 35, 40]
for item in somelist:
print(item)
```
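For reference, the loop above could be rewritten with `range` as suggested in the exercise (this is just one possible substitution):
```
# One way the cell above might look after substituting range(0, 41, 5)
for item in range(0, 41, 5):
    print(item)
```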
<div class="alert alert-block alert-info">
<b>Try it out!</b>
<br><br>- Try <b>range(10)</b> instead
</div>
```
somelist = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i','j']
for inx in range(10):
print(inx)
print('********\n')
for inx in range(10):
print(somelist[inx])
print('********\n')
for inx in range(0,10,2):
print(somelist[inx])
```
## Sometimes we want the index and the value of each element on a list
### The command `enumerate` gives you both
<div class="alert alert-block alert-info">
<b>Try it out!</b>
<br><br>
- Execute the code below
<br>
- Then try with <b>range(10)</b> instead
</div>
<br>Note the use of <b>{ }</b> and <b>.format</b> to print <b>inx</b> and <b>item</b>
```
for inx, item in enumerate(range(0, 41, 5)):
print('index {}, value = {}'.format(inx,item))
```
***
# conditionals: `if`, `elif`, `else`
### Conditional operators can be used to compare different types of variables or to test logical statements.
- The basic operators to compare numerical values are: ==, !=, <, >, >=, <=
- The logical operators are: and, or, not
- The use of these operators is exemplified in the next cell
#### Note the indentation again
<div class="alert alert-block alert-info">
<b>Try it out!</b>
<br><br>
- Execute the next cell
<br>
- Test different comparison operators and logical operators and execute
</div>
```
lat = 12
if (lat <= -23.5) or (lat >= 23.5):
print('extra-tropical')
elif lat == 0:
print('equator')
else:
print('tropical')
```
# Introduction
In October 2015, a data journalist named Walt Hickey analyzed movie ratings data and found strong evidence to suggest that [Fandango's](https://www.fandango.com/) rating system was biased and dishonest. He published his analysis in this [article](https://fivethirtyeight.com/features/fandango-movies-ratings/) — a great piece of data journalism that's totally worth reading.
Fandango displays a 5-star rating system on their website, where the minimum rating is 0 stars and the maximum is 5 stars.
Hickey found that there's a significant discrepancy between the number of stars displayed to users and the actual rating, which he was able to find in the HTML of the page. He was able to find that:
- The actual rating was almost always rounded up to the nearest half-star. For instance, a 4.1 movie would be rounded off to 4.5 stars, not to 4 stars, as you may expect (a short sketch of this rounding appears right after this list).
- In the case of 8% of the ratings analyzed, the rounding up was done to the nearest whole star. For instance, a 4.5 rating would be rounded off to 5 stars.
- For one movie rating, the rounding off was completely bizarre: from a rating of 4 in the HTML of the page to a displayed rating of 5 stars.
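As a rough illustration of the half-star rounding Hickey described, the sketch below shows how a 4.1 rating becomes 4.5 stars (this is only an illustration of the observed pattern, not Fandango's actual code):
```
import math

def round_up_to_half_star(actual_rating):
    # Round up to the nearest half star, e.g. 4.1 -> 4.5, 4.6 -> 5.0
    return math.ceil(actual_rating * 2) / 2

print(round_up_to_half_star(4.1))  # 4.5
print(round_up_to_half_star(4.6))  # 5.0
```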
The distribution of displayed ratings is clearly shifted to the right compared to the actual rating distribution, suggesting strongly that Fandango inflates the ratings under the hood.
In this project, we'll analyze more recent movie ratings data to determine whether there has been any change in Fandango's rating system after Hickey's analysis.
## The Data
One of the best ways to figure out whether there has been any change in Fandango's rating system after Hickey's analysis is to compare the system's characteristics previous and after the analysis. Fortunately, we have ready-made data for both these periods of time:
- Walt Hickey made the data he analyzed publicly available on [GitHub](https://github.com/fivethirtyeight/data/tree/master/fandango). We'll use the data he collected to analyze the characteristics of Fandango's rating system previous to his analysis.
- One of Dataquest's team members collected movie ratings data for movies released in 2016 and 2017. The data is publicly available on [GitHub](https://github.com/mircealex/Movie_ratings_2016_17) and we'll use it to analyze the rating system's characteristics _after_ Hickey's analysis.
Steps:
- Read in and explore briefly the two data sets (fandango_score_comparison.csv and movie_ratings_16_17.csv) to understand their structure. You can find the documentation of both data sets in the GitHub repositories we linked to above.
- For the data set with ratings previous to Hickey's analysis, select the following columns: 'FILM', 'Fandango_Stars', 'Fandango_Ratingvalue', 'Fandango_votes', 'Fandango_Difference'.
- For the other data set, select the following columns: 'movie', 'year', 'fandango'.
- Define the population of interest for our goal — remember that our goal is to determine whether there has been any change in Fandango's rating system after Hickey's analysis.
- By reading the README.md files of the two repositories, figure out whether the two samples are representative for the population we're trying to describe.
- Determine whether the sampling is random or not — did all the movies have an equal chance to be included in the two samples?
```
import pandas as pd
pd.options.display.max_columns = 100 # Avoid having displayed truncated output
previous = pd.read_csv('fandango_score_comparison.csv')
after = pd.read_csv('movie_ratings_16_17.csv')
previous.head(3)
after.head(3)
```
We'll now isolate only the columns that provide information about Fandango, to have the relevant data available for later use. We'll make copies to avoid any `SettingWithCopyWarning` later on.
```
fandango_previous = previous[['FILM', 'Fandango_Stars', 'Fandango_Ratingvalue', 'Fandango_votes',
'Fandango_Difference']].copy()
fandango_after = after[['movie', 'year', 'fandango']].copy()
fandango_previous.head(3)
fandango_after.head(3)
```
Our goal is to determine whether there has been any change in Fandango's rating system after Hickey's analysis. The population of interest for our analysis is made up of all the movie ratings stored on Fandango's website, regardless of the year it was released.
Because we want to find out whether the parameters of this population changed after Hickey's analysis, we're interested in sampling the population at two different periods in time — previous and after Hickey's analysis — so we can compare the two states.
The data we're working with was sampled at the moments we want: one sample was taken previous to the analysis, and the other after the analysis. We want to describe the population, so we need to make sure that the samples are representative, otherwise we should expect a large sampling error and, ultimately, wrong conclusions.
From Hickey's article and from the [README.md](https://github.com/fivethirtyeight/data/blob/master/fandango/README.md) of the data set's repository, we can see that he used the following sampling criteria:
- The movie must have had at least 30 fan ratings on Fandango's website at the time of sampling (Aug. 24, 2015).
- The movie must have had tickets on sale in 2015.
The sampling was clearly not random because not every movie had the same chance to be included in the sample — some movies didn't have a chance at all (like those having under 30 fan ratings or those without tickets on sale in 2015). It's questionable whether this sample is representative of the entire population we're interested to describe. It seems more likely that it isn't, mostly because this sample is subject to temporal trends — e.g. movies in 2015 might have been outstandingly good or bad compared to other years.
The sampling conditions for our other sample were (as it can be read in the [README.md](https://github.com/mircealex/Movie_ratings_2016_17/blob/master/README.md) of the data set's repository):
- The movie must have been released in 2016 or later.
- The movie must have had a considerable number of votes and reviews
This second sample is also subject to temporal trends and it's unlikely to be representative of our population of
interest. The number of votes and reviews for each movie is unclear from the README.md or from the data.
Both authors had certain research questions in mind when they sampled the data, and they used a set of criteria to get a sample that would fit their questions. Their sampling method is called [purposive sampling](https://youtu.be/CdK7N_kTzHI) (or judgmental/selective/subjective sampling). While these samples were good enough for their research, they don't seem too useful for us.
## Changing our goal
At this point, we have at least two alternatives: either we collect new data, either we change the goal of our analysis by placing some limitations on it.
Tweaking our goal seems a much faster choice compared to collecting new data. Also, it's quasi-impossible to collect a new sample previous to Hickey's analysis at this moment in time.
Our new goal is to determine whether there's any difference between Fandango's ratings for popular movies in 2015 and Fandango's ratings for popular movies in 2016. This new goal should also be a fairly good proxy for our initial goal.
## Isolating the necessary samples
With the new goal, we now have two populations that we want to describe and compare with each other:
- All of Fandango's ratings for popular movies released in 2015.
- All of Fandango's ratings for popular movies released in 2016.
The term "popular" is vague and we need to define it with precision before continuing. We'll use Hickey's benchmark of 30 fan ratings and consider a movie as "popular" only if it has 30 fan ratings or more on Fandango's website.
One quick way to check the representativity of this sample is to sample randomly 10 movies from it and then check the number of fan ratings ourselves on Fandango's website. Ideally, at least 8 out of the 10 movies have 30 fan ratings or more.
```
fandango_after.sample(10, random_state = 1)
```
90% of the movies in our sample are popular. This is enough and we move forward with a bit more confidence.
Let's also double-check the other data set for popular movies. The documentation states clearly that it contains only movies with at least 30 fan ratings, but it should take only a couple of seconds to double-check here.
```
sum(fandango_previous['Fandango_votes'] < 30)
```
We notice that there are movies with the year of release other than 2015 or 2016. For our purposes, we'll need to isolate only the movies released in 2015 and 2016.
Let's start with Hickey's data set and isolate only the movies released in 2015. There's no special column for the year a movie was released, but we should be able to extract it from the strings in the FILM column.
```
fandango_previous.head(2)
fandango_previous['Year'] = fandango_previous['FILM'].str[-5:-1]
fandango_previous.head(2)
```
Let's examine the frequency distribution for the Year column and then isolate the movies released in 2015.
```
fandango_previous['Year'].value_counts()
fandango_2015 = fandango_previous[fandango_previous['Year'] == '2015'].copy()
fandango_2015['Year'].value_counts()
```
We'll now do the same for our 2016 data set.
```
fandango_after.head(2)
fandango_after['year'].value_counts()
fandango_2016 = fandango_after[fandango_after['year'] == 2016].copy()
fandango_2016['year'].value_counts()
```
## Comparing Distribution Shapes for 2015 and 2016
There are many ways we can go about with our analysis, but let's start simple with making a high-level comparison between the shapes of the distributions of movie ratings for both samples.
Steps:
- Generate two kernel density plots on the same figure for the distribution of movie ratings of each sample.
- Customize the graph such that:
- It has a title with an increased font size.
- It has labels for both the x and y-axis.
- It has a legend which explains which distribution is for 2015 and which is for 2016.
- The x-axis starts at 0 and ends at 5 because movie ratings on Fandango start at 0 and end at 5.
- The tick labels of the x-axis are: [0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0].
- It has the fivethirtyeight style (this is optional). You can change to this style by using plt.style.use('fivethirtyeight'). This line of code must be placed before the code that generates the kernel density plots.
```
import matplotlib.pyplot as plt
from numpy import arange
%matplotlib inline
plt.style.use('fivethirtyeight')
fandango_2015['Fandango_Stars'].plot.kde(label = '2015', legend = True, figsize = (8,5.5))
fandango_2016['fandango'].plot.kde(label = '2016', legend = True)
plt.title("Comparing distribution shapes for Fandango's ratings\n(2015 vs 2016)",
y = 1) # the `y` parameter pads the title upward
plt.xlabel('Stars')
plt.xlim(0,5) # because ratings start at 0 and end at 5
plt.xticks(arange(0,5.1,.5))
plt.show()
```
### Observations
Two aspects are striking on the figure above:
- Both distributions are strongly left skewed.
- The 2016 distribution is slightly shifted to the left relative to the 2015 distribution.
The left skew suggests that movies on Fandango are given mostly high and very high fan ratings. Coupled with the fact that Fandango sells tickets, the high ratings are a bit dubious.
The slight left shift of the 2016 distribution is very interesting for our analysis.
- It shows that ratings were slightly lower in 2016 compared to 2015.
- This suggests that there was a difference indeed between Fandango's ratings for popular movies in 2015 and Fandango's ratings for popular movies in 2016.
- We can also see the direction of the difference: the ratings in 2016 were slightly lower compared to 2015.
## Comparing Relative Frequencies
We now need to analyze more granular information.
Steps:
Examine the frequency distribution tables of the two distributions.
- The samples have different numbers of movies. Does it make sense to compare the two tables using absolute frequencies?
- If absolute frequencies are not useful here, would relative frequencies be of more help? If so, what would be better for readability — proportions or percentages?
- Analyze the two tables and try to answer the following questions:
- Is it still clear that there is a difference between the two distributions?
- What can you tell about the direction of the difference just from the tables? Is the direction still clear?
Because the data sets have different numbers of movies, we normalize the tables and show percentages instead.
```
print('2015' + '\n' + '-' * 16) # To help us distinguish between the two tables
fandango_2015['Fandango_Stars'].value_counts(normalize = True).sort_index() * 100
print('2016' + '\n' + '-' * 16)
fandango_2016['fandango'].value_counts(normalize = True).sort_index() * 100
```
### Observations
In 2016, very high ratings (4.5 and 5 stars) had significantly lower percentages compared to 2015.
- In 2016, under 1% of the movies had a perfect rating of 5 stars, compared to 2015 when the percentage was close to 7%.
- Ratings of 4.5 were also more popular in 2015 — there were approximately 13% more movies rated with a 4.5 in 2015 compared to 2016.
- The minimum rating is also lower in 2016 — 2.5 instead of 3 stars, the minimum of 2015. There clearly is a difference between the two frequency distributions.
For some other ratings, the percentage went up in 2016. There was a greater percentage of movies in 2016 that received 3.5 and 4 stars, compared to 2015.
3.5 and 4.0 are high ratings and this challenges the direction of the change we saw on the kernel density plots.
## Determining the Direction of the Change
We'll now use the mean, the median, and the mode for both distributions and then use a bar graph to better understand the direction of the change.
```
mean_2015 = fandango_2015['Fandango_Stars'].mean()
mean_2016 = fandango_2016['fandango'].mean()
median_2015 = fandango_2015['Fandango_Stars'].median()
median_2016 = fandango_2016['fandango'].median()
mode_2015 = fandango_2015['Fandango_Stars'].mode()[0] # the output of Series.mode() is a bit uncommon
mode_2016 = fandango_2016['fandango'].mode()[0]
summary = pd.DataFrame()
summary['2015'] = [mean_2015, median_2015, mode_2015]
summary['2016'] = [mean_2016, median_2016, mode_2016]
summary.index = ['mean', 'median', 'mode']
summary
plt.style.use('fivethirtyeight')
summary['2015'].plot.bar(color = 'blue', align = 'center', label = '2015', width = .25)
summary['2016'].plot.bar(color = 'yellow', align = 'edge', label = '2016', width = .25,
rot = 0, figsize = (8,5))
plt.title('Comparing summary statistics: 2015 vs 2016', y = 1.07)
plt.ylim(0,5.5)
plt.yticks(arange(0,5.1,.5))
plt.ylabel('Stars')
plt.legend(framealpha = 0, loc = 'upper center')
plt.show()
(summary.loc['mean'][0] - summary.loc['mean'][1]) / summary.loc['mean'][0]  # relative drop in the mean rating from 2015 to 2016
```
## Conclusions
The mean rating was lower in 2016 by almost 5% relative to the mean rating in 2015.
While the median is the same for both distributions, the mode is lower in 2016 by 0.5. Coupled with what we saw for the mean, the direction of the change we saw on the kernel density plot is confirmed: on average, popular movies released in 2016 were rated slightly lower than popular movies released in 2015.
We cannot be completely sure what caused the change, but it occurred shortly after Hickey's analysis.
# Evaluate the performance of CritterCounter models
We will use this notebook to evaluate the performance of the models built for empty vs. animal classification as well as the species classifier.
### Set up the environment
```
import os
import pandas as pd
import re
from keras.preprocessing.image import ImageDataGenerator
from keras.applications import ResNet50
from keras.models import Sequential
from keras.layers import Dropout, Flatten, Dense
import numpy as np
from sklearn.metrics import confusion_matrix
test_folder = '/data/holdout_set'
```
## Import the holdout data
### Create the data frame
```
test_file_paths = []
for root, sub, files in os.walk(test_folder):
if len(files) > 0:
test_file_paths += [os.path.join(root, file) for file in files]
df = pd.DataFrame({'path': test_file_paths})
df['category_name'] = df['path'].apply(lambda x: re.findall('/data/holdout_set/([a-z_]+)', x)[0])
```
#### Create the species specific dataset
```
species_subset = [
'american_black_bear',
'bobcat',
'cougar',
'coyote',
'deer',
'domestic_cow',
'domestic_dog',
'elk',
'moose',
'vehicle',
'wild_turkey'
]
species_df = df[df['category_name'].isin(species_subset)]
print(len(species_df))
species_df['category_name'].value_counts()
```
## Species Model Evaluation (original model)
### Import the pretrained network
```
img_width, img_height = 224, 224
batch_size = 1
model_path = '/data/models/ResNet50/MobileNetV2_20190323_weights.h5'
# Define the base convolutional model (use a new name so the imported ResNet50 class is not shadowed)
base_model = ResNet50(weights=None, include_top=False, input_shape=(img_width, img_height, 3))
print('Model loaded.')
# build a classifier model to put on top of the convolutional model
model = Sequential()
model.add(base_model)
model.add(Flatten(input_shape=model.output_shape[1:]))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(15, activation='softmax'))
# Load the pretrained weights
model.load_weights(model_path)
model.summary()
```
### Create the Generator
```
test_samples = len(species_df)
test_datagen = ImageDataGenerator(rescale=1/255.)
test_generator = test_datagen.flow_from_dataframe(
species_df,
x_col='path',
y_col ='category_name',
target_size=(img_width, img_height),
batch_size=batch_size,
shuffle=False,
class_mode='categorical')
```
### Predict on the holdout set
```
predictions = model.predict_generator(
test_generator,
steps=test_samples//batch_size,
verbose=True
)
# Save results
np.save(file='/data/results/ResNet50_original_species_preds.npy', arr=predictions)
```
### Evaluation Results
#### Build Results DataFrame
```
id_map = {
0: 'american_black_bear',
1: 'bobcat',
2: 'cougar',
3: 'coyote',
4: 'domestic_cow',
5: 'domestic_dog',
6: 'elk',
7: 'gray_fox',
8: 'moose',
9: 'deer',
10: 'elk',
11: 'red_fox',
12: 'vehicle',
13: 'deer',
14: 'wild_turkey',
15: 'wolf'
}
preds = pd.DataFrame(predictions)
results_df = pd.concat([species_df.reset_index(drop=True), preds], axis=1)
results_df['top_class'] = pd.Series(predictions.argmax(axis=1))
results_df['top_prob'] = pd.Series(predictions.max(axis=1))
results_df['pred_category_name'] = results_df['top_class'].apply(lambda x: id_map[x])
results_df['top_1_acc'] = results_df['category_name'] == results_df['pred_category_name']
# argsort()[:, :-4:-1] gives the indices of the 3 highest-probability classes, best first
results_df['top_3_classes'] = pd.Series([list(i) for i in predictions.argsort(axis=1)[:,:-4:-1]])
results_df['top_3_classes'] = results_df['top_3_classes'].apply(lambda x: [id_map[i] for i in x])
results_df['top_3_acc'] = results_df.apply(lambda x: x['category_name'] in x['top_3_classes'], axis=1)
# argsort()[:, :-6:-1] gives the indices of the 5 highest-probability classes, best first
results_df['top_5_classes'] = pd.Series([list(i) for i in predictions.argsort(axis=1)[:,:-6:-1]])
results_df['top_5_classes'] = results_df['top_5_classes'].apply(lambda x: [id_map[i] for i in x])
results_df['top_5_acc'] = results_df.apply(lambda x: x['category_name'] in x['top_5_classes'], axis=1)
results_df.to_csv('/data/results/ResNet50_original_species_results.csv', index=False)
```
#### Evaluation Metrics
```
from sklearn.metrics import confusion_matrix, f1_score, precision_score, recall_score, precision_recall_curve
import seaborn as sns; sns.set()
print('Top 1 Accuracy: {:.2%}'.format(results_df['top_1_acc'].mean()))
print('Top 3 Accuracy: {:.2%}'.format(results_df['top_3_acc'].mean()))
print('Top 5 Accuracy: {:.2%}'.format(results_df['top_5_acc'].mean()))
print('F1 Score: {:.2%}'.format(f1_score(results_df['category_name'], results_df['pred_category_name'], average='weighted')))
print('Precision Score: {:.2%}'.format(precision_score(results_df['category_name'], results_df['pred_category_name'], average='weighted')))
print('Recall Score: {:.2%}'.format(recall_score(results_df['category_name'], results_df['pred_category_name'], average='weighted')))
```
#### Categorical Breakdown
```
results_df.groupby('category_name')['top_1_acc'].mean()*100
results_df.pred_category_name.nunique()
results_df.groupby('category_name')['top_1_acc'].mean()
conf_mat = confusion_matrix(results_df['category_name'], results_df['pred_category_name'])
pd.DataFrame(np.round(conf_mat/np.repeat(conf_mat.sum(axis=1), 13).reshape(13,13), 2))
sns.heatmap(conf_mat/np.repeat(conf_mat.sum(axis=1), 13).reshape(13,13))
```
## Species Model Evaluation (updated model)
### Import the pretrained network
```
img_width, img_height = 224, 224
batch_size = 1
model_path = '/data/ResNet50/ResNet50_20190404_species_weights.h5'
# Define the base convolutional model (use a new name so the imported ResNet50 class is not shadowed)
base_model = ResNet50(weights=None, include_top=False, input_shape=(img_width, img_height, 3))
print('Model loaded.')
# build a classifier model to put on top of the convolutional model
model = Sequential()
model.add(base_model)
model.add(Flatten(input_shape=model.output_shape[1:]))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(16, activation='softmax'))
# Load the pretrained weights
model.load_weights(model_path)
model.summary()
```
### Create the Generator
```
test_samples = len(species_df)
test_datagen = ImageDataGenerator(rescale=1/255.)
test_generator = test_datagen.flow_from_dataframe(
species_df,
x_col='path',
y_col ='category_name',
target_size=(img_width, img_height),
batch_size=batch_size,
shuffle=False,
class_mode='categorical')
```
### Predict on the holdout set
```
predictions = model.predict_generator(
test_generator,
steps=test_samples//batch_size,
verbose=True
)
# Save results
np.save(file='/data/results/ResNet50_species_preds.npy', arr=predictions)
```
### Evaluation Results
#### Build Results DataFrame
```
id_map = {
0: 'american_black_bear',
1: 'bobcat',
2: 'cougar',
3: 'coyote',
4: 'domestic_cow',
5: 'domestic_dog',
6: 'elk',
7: 'gray_fox',
8: 'moose',
9: 'deer',
10: 'elk',
11: 'red_fox',
12: 'vehicle',
13: 'deer',
14: 'wild_turkey',
15: 'wolf'
}
preds = pd.DataFrame(predictions)
results_df = pd.concat([species_df.reset_index(drop=True), preds], axis=1)
results_df['top_class'] = pd.Series(predictions.argmax(axis=1))
results_df['top_prob'] = pd.Series(predictions.max(axis=1))
results_df['pred_category_name'] = results_df['top_class'].apply(lambda x: id_map[x])
results_df['top_1_acc'] = results_df['category_name'] == results_df['pred_category_name']
results_df['top_3_classes'] = pd.Series([list(i) for i in predictions.argsort(axis=1)[:,:-4:-1]])
results_df['top_3_classes'] = results_df['top_3_classes'].apply(lambda x: [id_map[i] for i in x])
results_df['top_3_acc'] = results_df.apply(lambda x: x['category_name'] in x['top_3_classes'], axis=1)
results_df['top_5_classes'] = pd.Series([list(i) for i in predictions.argsort(axis=1)[:,:-6:-1]])
results_df['top_5_classes'] = results_df['top_5_classes'].apply(lambda x: [id_map[i] for i in x])
results_df['top_5_acc'] = results_df.apply(lambda x: x['category_name'] in x['top_5_classes'], axis=1)
results_df.to_csv('/data/results/ResNet50_species_results.csv', index=False)
```
#### Evaluation Metrics
```
from sklearn.metrics import confusion_matrix, f1_score, precision_score, recall_score, precision_recall_curve
import seaborn as sns; sns.set()
print('Top 1 Accuracy: {:.2%}'.format(results_df['top_1_acc'].mean()))
print('Top 3 Accuracy: {:.2%}'.format(results_df['top_3_acc'].mean()))
print('Top 5 Accuracy: {:.2%}'.format(results_df['top_5_acc'].mean()))
print('F1 Score: {:.2%}'.format(f1_score(results_df['category_name'], results_df['pred_category_name'], average='weighted')))
print('Precision Score: {:.2%}'.format(precision_score(results_df['category_name'], results_df['pred_category_name'], average='weighted')))
print('Recall Score: {:.2%}'.format(recall_score(results_df['category_name'], results_df['pred_category_name'], average='weighted')))
```
#### Categorical Breakdown
```
results_df.groupby('category_name')['top_1_acc'].mean()*100
results_df.pred_category_name.nunique()
results_df.groupby('category_name')['top_1_acc'].mean()
conf_mat = confusion_matrix(results_df['category_name'], results_df['pred_category_name'])
pd.DataFrame(np.round(conf_mat/np.repeat(conf_mat.sum(axis=1), 14).reshape(14,14), 2))
sns.heatmap(conf_mat/np.repeat(conf_mat.sum(axis=1), 14).reshape(14,14))
```
# Empty vs Non-Empty
#### Create the empty vs animal dataset
```
def empty_v_animal(label):
if label == 'empty':
return label
else:
return 'animal'
empty_df = df.copy()
empty_df['target'] = empty_df['category_name'].apply(empty_v_animal)
list_ = []
for key, grp in empty_df.groupby('target'):
grp = grp.sample(frac=1).reset_index(drop=True)
grp = grp[:200]
list_.append(grp)
empty_df = pd.concat(list_)
print(len(empty_df))
empty_df['target'].value_counts()
```
## Empty vs Animal Model Evaluation
### Import the pretrained network
```
img_width, img_height = 224, 224
batch_size = 1
model_path = '/data/ResNet50/ResNet50_20190403_exclusiveEVA_weights.h5'
# Define the base convolutional model (use a new name so the imported ResNet50 class is not shadowed)
base_model = ResNet50(weights=None, include_top=False, input_shape=(img_width, img_height, 3))
print('Model loaded.')
# build a classifier model to put on top of the convolutional model
model = Sequential()
model.add(base_model)
model.add(Flatten(input_shape=model.output_shape[1:]))
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(2, activation='softmax'))
# Load the pretrained weights
model.load_weights(model_path)
model.summary()
```
### Create the Generator
```
test_samples = len(empty_df)
test_datagen = ImageDataGenerator(rescale=1/255.)
test_generator = test_datagen.flow_from_dataframe(
empty_df,
x_col='path',
y_col ='target',
target_size=(img_width, img_height),
batch_size=batch_size,
shuffle=False,
class_mode='categorical')
```
### Predict on the holdout set
```
predictions = model.predict_generator(
test_generator,
steps=test_samples//batch_size,
verbose=True
)
# Save results
np.save(file='/data/results/ResNet50_empty_preds.npy', arr=predictions)
```
### Evaluation Results
#### Build Results DataFrame
```
id_map = {
0: 'animal',
1: 'empty'
}
preds = pd.DataFrame(predictions)
results_df = pd.concat([empty_df.reset_index(drop=True), preds], axis=1)
results_df['top_class'] = pd.Series(predictions.argmax(axis=1))
results_df['top_prob'] = pd.Series(predictions.max(axis=1))
results_df['pred_category_name'] = results_df['top_class'].apply(lambda x: id_map[x])
results_df['acc'] = 1.0*(results_df['target'] == results_df['pred_category_name'])
results_df.to_csv('/data/results/ResNet50_empty_results.csv', index=False)
results_df.groupby(['target'])['acc'].mean()
```
#### Evaluation Metrics
```
from sklearn.metrics import confusion_matrix, f1_score, precision_score, recall_score, precision_recall_curve
import seaborn as sns; sns.set()
print('Accuracy: {:.2%}'.format(results_df['acc'].mean()))
print('F1 Score: {:.2%}'.format(f1_score(results_df['target'], results_df['pred_category_name'], average='weighted')))
print('Precision Score: {:.2%}'.format(precision_score(results_df['target'], results_df['pred_category_name'], average='weighted')))
print('Recall Score: {:.2%}'.format(recall_score(results_df['target'], results_df['pred_category_name'], average='weighted')))
```
#### Categorical Breakdown
```
results_df[(results_df['target']=='empty') & (results_df['top_class']==1)]['top_prob'].plot.density()
results_df[(results_df['target']=='animal') & (results_df['top_class']==0)]['top_prob'].plot.density()
results_df[(results_df['target']=='empty') & (results_df['top_class']==0)]['top_prob'].plot.density()
results_df[(results_df['target']=='animal') & (results_df['top_class']==1)]['top_prob'].plot.density()
results_df.groupby('category_name')['acc'].mean()
conf_mat = confusion_matrix(results_df['target'], results_df['pred_category_name'])
pd.DataFrame(np.round(conf_mat/np.repeat(conf_mat.sum(axis=1), 2).reshape(2,2), 2))
sns.heatmap(conf_mat/np.repeat(conf_mat.sum(axis=1), 2).reshape(2,2))
```
# Distributed Training of Mask R-CNN in Amazon SageMaker using S3
This notebook is a step-by-step tutorial on distributed training of [Mask R-CNN](https://arxiv.org/abs/1703.06870) implemented in the [TensorFlow](https://www.tensorflow.org/) framework. Mask R-CNN is also referred to as a heavyweight object detection model, and it is part of [MLPerf](https://www.mlperf.org/training-results-0-6/).
Concretely, we will describe the steps for training [TensorPack Faster-RCNN/Mask-RCNN](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN) and [AWS Samples Mask R-CNN](https://github.com/aws-samples/mask-rcnn-tensorflow) in [Amazon SageMaker](https://aws.amazon.com/sagemaker/) using [Amazon S3](https://aws.amazon.com/s3/) as data source.
The outline of steps is as follows:
1. Stage COCO 2017 dataset in [Amazon S3](https://aws.amazon.com/s3/)
2. Build SageMaker training image and push it to [Amazon ECR](https://aws.amazon.com/ecr/)
3. Configure data input channels
4. Configure hyper-parameters
5. Define training metrics
6. Define training job and start training
Before we get started, let us initialize two Python variables ```aws_region``` and ```s3_bucket``` that we will use throughout the notebook:
```
aws_region = # aws-region-code e.g. us-east-1
s3_bucket = # your-s3-bucket-name
```
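For example (these are placeholder values only; substitute your own region and bucket name):
```
# Example values for illustration only; replace with your own.
aws_region = 'us-east-1'
s3_bucket = 'my-mask-rcnn-bucket'
```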
## Stage COCO 2017 dataset in Amazon S3
We use the [COCO 2017 dataset](http://cocodataset.org/#home) for training. We download the COCO 2017 training and validation datasets to this notebook instance, extract the files from the dataset archives, and upload the extracted files to your Amazon [S3 bucket](https://docs.aws.amazon.com/en_pv/AmazonS3/latest/gsg/CreatingABucket.html) with the prefix ```mask-rcnn/sagemaker/input/train```. The ```prepare-s3-bucket.sh``` script executes this step.
```
!cat ./prepare-s3-bucket.sh
```
Using your *Amazon S3 bucket* as the argument, run the cell below. If you have already uploaded the COCO 2017 dataset to your Amazon S3 bucket *in this AWS region*, you may skip this step. The expected time to execute this step is 20 minutes.
```
%%time
!./prepare-s3-bucket.sh {s3_bucket}
```
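To spot-check that the dataset landed under the expected prefix, you can list a few of the uploaded keys with boto3 (a quick sketch; adjust the prefix if you changed it in the script):
```
import boto3

# List a handful of objects under the training prefix as a sanity check.
s3 = boto3.client('s3')
response = s3.list_objects_v2(Bucket=s3_bucket, Prefix='mask-rcnn/sagemaker/input/train', MaxKeys=5)
for obj in response.get('Contents', []):
    print(obj['Key'])
```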
## Build and push SageMaker training images
For this step, the [IAM Role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) attached to this notebook instance needs full access to Amazon ECR service. If you created this notebook instance using the ```./stack-sm.sh``` script in this repository, the IAM Role attached to this notebook instance is already setup with full access to ECR service.
Below, we have a choice of two different implementations:
1. [TensorPack Faster-RCNN/Mask-RCNN](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN) implementation supports a maximum per-GPU batch size of 1, and does not support mixed precision. It can be used with mainstream TensorFlow releases.
2. [AWS Samples Mask R-CNN](https://github.com/aws-samples/mask-rcnn-tensorflow) is an optimized implementation that supports a maximum batch size of 4 and supports mixed precision. This implementation uses custom TensorFlow ops. The required custom TensorFlow ops are available in [AWS Deep Learning Container](https://github.com/aws/deep-learning-containers/blob/master/available_images.md) images in ```tensorflow-training``` repository with image tag ```1.15.2-gpu-py36-cu100-ubuntu18.04```, or later.
It is recommended that you build and push both SageMaker training images and use either image for training later.
### TensorPack Faster-RCNN/Mask-RCNN
Use ```./container/build_tools/build_and_push.sh``` script to build and push the TensorPack Faster-RCNN/Mask-RCNN training image to Amazon ECR.
```
!cat ./container/build_tools/build_and_push.sh
```
Using your *AWS region* as the argument, run the cell below.
```
%%time
! ./container/build_tools/build_and_push.sh {aws_region}
```
Set ```tensorpack_image``` below to Amazon ECR URI of the image you pushed above.
```
tensorpack_image = # mask-rcnn-tensorpack-sagemaker ECR URI
```
### AWS Samples Mask R-CNN
Use ```./container-optimized/build_tools/build_and_push.sh``` script to build and push the AWS Samples Mask R-CNN training image to Amazon ECR.
```
!cat ./container-optimized/build_tools/build_and_push.sh
```
Using your *AWS region* as the argument, run the cell below.
```
%%time
! ./container-optimized/build_tools/build_and_push.sh {aws_region}
```
Set ```aws_samples_image``` below to Amazon ECR URI of the image you pushed above.
```
aws_samples_image = # mask-rcnn-tensorflow-sagemaker ECR URI
```
## SageMaker Initialization
First, we upgrade the SageMaker Python SDK to the 2.3.0 API. If your notebook is already using the latest SageMaker 2.x API, you may skip the next cell.
```
! pip install --upgrade pip
! pip install sagemaker==2.3.0
```
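After the install completes (you may need to restart the notebook kernel for the new version to be picked up), you can confirm it:
```
import sagemaker
print(sagemaker.__version__)  # expect 2.3.0
```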
We have staged the data and we have built and pushed the training docker image to Amazon ECR. Now we are ready to start using Amazon SageMaker.
```
%%time
import boto3
import sagemaker
from sagemaker import get_execution_role
from sagemaker.estimator import Estimator
role = get_execution_role() # provide a pre-existing role ARN as an alternative to creating a new role
print(f'SageMaker Execution Role:{role}')
client = boto3.client('sts')
account = client.get_caller_identity()['Account']
print(f'AWS account:{account}')
session = boto3.session.Session()
region = session.region_name
print(f'AWS region:{region}')
```
Next, we set ```training_image``` to the Amazon ECR image URI you saved in a previous step.
```
training_image = # set to tensorpack_image or aws_samples_image
print(f'Training image: {training_image}')
```
## Define SageMaker Data Channels
In this step, we define the SageMaker *train* data channel.
```
from sagemaker.inputs import TrainingInput
prefix = "mask-rcnn/sagemaker" #prefix in your S3 bucket
s3train = f's3://{s3_bucket}/{prefix}/input/train'
train_input = TrainingInput(s3_data=s3train,
distribution="FullyReplicated",
s3_data_type='S3Prefix',
input_mode='File')
data_channels = {'train': train_input}
```
Next, we define the model output location in the S3 bucket.
```
s3_output_location = f's3://{s3_bucket}/{prefix}/output'
```
## Configure Hyper-parameters
Next, we define the hyper-parameters.
Note that some hyper-parameters differ between the two implementations. The batch size per GPU in TensorPack Faster-RCNN/Mask-RCNN is fixed at 1, but is configurable in AWS Samples Mask-RCNN. The learning rate schedule is specified in units of steps in TensorPack Faster-RCNN/Mask-RCNN, but in epochs in AWS Samples Mask-RCNN.
The default learning rate schedule values shown below correspond to training for a total of 24 epochs, at 120,000 images per epoch.
<table align='left'>
<caption>TensorPack Faster-RCNN/Mask-RCNN Hyper-parameters</caption>
<tr>
<th style="text-align:center">Hyper-parameter</th>
<th style="text-align:center">Description</th>
<th style="text-align:center">Default</th>
</tr>
<tr>
<td style="text-align:center">mode_fpn</td>
<td style="text-align:left">Flag to indicate use of Feature Pyramid Network (FPN) in the Mask R-CNN model backbone</td>
<td style="text-align:center">"True"</td>
</tr>
<tr>
<td style="text-align:center">mode_mask</td>
        <td style="text-align:left">A value of "False" means Faster-RCNN model, "True" means Mask R-CNN model</td>
<td style="text-align:center">"True"</td>
</tr>
<tr>
<td style="text-align:center">eval_period</td>
        <td style="text-align:left">Interval, in epochs, between evaluations during training</td>
<td style="text-align:center">1</td>
</tr>
<tr>
<td style="text-align:center">lr_schedule</td>
<td style="text-align:left">Learning rate schedule in training steps</td>
<td style="text-align:center">'[240000, 320000, 360000]'</td>
</tr>
<tr>
<td style="text-align:center">batch_norm</td>
<td style="text-align:left">Batch normalization option ('FreezeBN', 'SyncBN', 'GN', 'None') </td>
<td style="text-align:center">'FreezeBN'</td>
</tr>
<tr>
<td style="text-align:center">images_per_epoch</td>
<td style="text-align:left">Images per epoch </td>
<td style="text-align:center">120000</td>
</tr>
<tr>
<td style="text-align:center">data_train</td>
<td style="text-align:left">Training data under data directory</td>
<td style="text-align:center">'coco_train2017'</td>
</tr>
<tr>
<td style="text-align:center">data_val</td>
<td style="text-align:left">Validation data under data directory</td>
<td style="text-align:center">'coco_val2017'</td>
</tr>
<tr>
<td style="text-align:center">resnet_arch</td>
<td style="text-align:left">Must be 'resnet50' or 'resnet101'</td>
<td style="text-align:center">'resnet50'</td>
</tr>
<tr>
<td style="text-align:center">backbone_weights</td>
<td style="text-align:left">ResNet backbone weights</td>
<td style="text-align:center">'ImageNet-R50-AlignPadding.npz'</td>
</tr>
<tr>
<td style="text-align:center">load_model</td>
<td style="text-align:left">Pre-trained model to load</td>
<td style="text-align:center"></td>
</tr>
<tr>
<td style="text-align:center">config:</td>
        <td style="text-align:left">Any hyperparameter prefixed with <b>config:</b> is set as a model config parameter</td>
<td style="text-align:center"></td>
</tr>
</table>
<table align='left'>
<caption>AWS Samples Mask-RCNN Hyper-parameters</caption>
<tr>
<th style="text-align:center">Hyper-parameter</th>
<th style="text-align:center">Description</th>
<th style="text-align:center">Default</th>
</tr>
<tr>
<td style="text-align:center">mode_fpn</td>
<td style="text-align:left">Flag to indicate use of Feature Pyramid Network (FPN) in the Mask R-CNN model backbone</td>
<td style="text-align:center">"True"</td>
</tr>
<tr>
<td style="text-align:center">mode_mask</td>
        <td style="text-align:left">A value of "False" means Faster-RCNN model, "True" means Mask R-CNN model</td>
<td style="text-align:center">"True"</td>
</tr>
<tr>
<td style="text-align:center">eval_period</td>
        <td style="text-align:left">Interval, in epochs, between evaluations during training</td>
<td style="text-align:center">1</td>
</tr>
<tr>
<td style="text-align:center">lr_epoch_schedule</td>
<td style="text-align:left">Learning rate schedule in epochs</td>
<td style="text-align:center">'[(16, 0.1), (20, 0.01), (24, None)]'</td>
</tr>
<tr>
<td style="text-align:center">batch_size_per_gpu</td>
        <td style="text-align:left">Batch size per GPU (minimum 1, maximum 4)</td>
<td style="text-align:center">4</td>
</tr>
<tr>
<td style="text-align:center">batch_norm</td>
<td style="text-align:left">Batch normalization option ('FreezeBN', 'SyncBN', 'GN', 'None') </td>
<td style="text-align:center">'FreezeBN'</td>
</tr>
<tr>
<td style="text-align:center">images_per_epoch</td>
<td style="text-align:left">Images per epoch </td>
<td style="text-align:center">120000</td>
</tr>
<tr>
<td style="text-align:center">data_train</td>
<td style="text-align:left">Training data under data directory</td>
<td style="text-align:center">'train2017'</td>
</tr>
<tr>
<td style="text-align:center">data_val</td>
<td style="text-align:left">Validation data under data directory</td>
<td style="text-align:center">'val2017'</td>
</tr>
<tr>
<td style="text-align:center">resnet_arch</td>
<td style="text-align:left">Must be 'resnet50' or 'resnet101'</td>
<td style="text-align:center">'resnet50'</td>
</tr>
<tr>
<td style="text-align:center">backbone_weights</td>
<td style="text-align:left">ResNet backbone weights</td>
<td style="text-align:center">'ImageNet-R50-AlignPadding.npz'</td>
</tr>
<tr>
<td style="text-align:center">load_model</td>
<td style="text-align:left">Pre-trained model to load</td>
<td style="text-align:center"></td>
</tr>
<tr>
<td style="text-align:center">config:</td>
        <td style="text-align:left">Any hyperparameter prefixed with <b>config:</b> is set as a model config parameter</td>
<td style="text-align:center"></td>
</tr>
</table>
```
hyperparameters = {
"mode_fpn": "True",
"mode_mask": "True",
"eval_period": 1,
"batch_norm": "FreezeBN"
}
```
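As a rough sanity check on how the two schedule formats above relate, the sketch below converts the step-based schedule to epochs. It assumes each schedule step is normalized to 8 images (TensorPack's 8-GPU convention) and 120,000 images per epoch; if your configuration normalizes differently, adjust accordingly.
```
# Rough conversion of the step-based LR schedule to epochs (assumptions noted above).
images_per_step = 8            # assumed normalization of one schedule step
images_per_epoch = 120000
lr_schedule_steps = [240000, 320000, 360000]
print([round(s * images_per_step / images_per_epoch, 1) for s in lr_schedule_steps])
# -> [16.0, 21.3, 24.0], roughly matching the epoch-based schedule [(16, ...), (20, ...), (24, ...)]
```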
## Define Training Metrics
Next, we define the regular expressions that SageMaker uses to extract algorithm metrics from training logs and send them to [AWS CloudWatch metrics](https://docs.aws.amazon.com/en_pv/AmazonCloudWatch/latest/monitoring/working_with_metrics.html). These algorithm metrics are visualized in the SageMaker console.
```
metric_definitions=[
{
"Name": "fastrcnn_losses/box_loss",
"Regex": ".*fastrcnn_losses/box_loss:\\s*(\\S+).*"
},
{
"Name": "fastrcnn_losses/label_loss",
"Regex": ".*fastrcnn_losses/label_loss:\\s*(\\S+).*"
},
{
"Name": "fastrcnn_losses/label_metrics/accuracy",
"Regex": ".*fastrcnn_losses/label_metrics/accuracy:\\s*(\\S+).*"
},
{
"Name": "fastrcnn_losses/label_metrics/false_negative",
"Regex": ".*fastrcnn_losses/label_metrics/false_negative:\\s*(\\S+).*"
},
{
"Name": "fastrcnn_losses/label_metrics/fg_accuracy",
"Regex": ".*fastrcnn_losses/label_metrics/fg_accuracy:\\s*(\\S+).*"
},
{
"Name": "fastrcnn_losses/num_fg_label",
"Regex": ".*fastrcnn_losses/num_fg_label:\\s*(\\S+).*"
},
{
"Name": "maskrcnn_loss/accuracy",
"Regex": ".*maskrcnn_loss/accuracy:\\s*(\\S+).*"
},
{
"Name": "maskrcnn_loss/fg_pixel_ratio",
"Regex": ".*maskrcnn_loss/fg_pixel_ratio:\\s*(\\S+).*"
},
{
"Name": "maskrcnn_loss/maskrcnn_loss",
"Regex": ".*maskrcnn_loss/maskrcnn_loss:\\s*(\\S+).*"
},
{
"Name": "maskrcnn_loss/pos_accuracy",
"Regex": ".*maskrcnn_loss/pos_accuracy:\\s*(\\S+).*"
},
{
"Name": "mAP(bbox)/IoU=0.5",
"Regex": ".*mAP\\(bbox\\)/IoU=0\\.5:\\s*(\\S+).*"
},
{
"Name": "mAP(bbox)/IoU=0.5:0.95",
"Regex": ".*mAP\\(bbox\\)/IoU=0\\.5:0\\.95:\\s*(\\S+).*"
},
{
"Name": "mAP(bbox)/IoU=0.75",
"Regex": ".*mAP\\(bbox\\)/IoU=0\\.75:\\s*(\\S+).*"
},
{
"Name": "mAP(bbox)/large",
"Regex": ".*mAP\\(bbox\\)/large:\\s*(\\S+).*"
},
{
"Name": "mAP(bbox)/medium",
"Regex": ".*mAP\\(bbox\\)/medium:\\s*(\\S+).*"
},
{
"Name": "mAP(bbox)/small",
"Regex": ".*mAP\\(bbox\\)/small:\\s*(\\S+).*"
},
{
"Name": "mAP(segm)/IoU=0.5",
"Regex": ".*mAP\\(segm\\)/IoU=0\\.5:\\s*(\\S+).*"
},
{
"Name": "mAP(segm)/IoU=0.5:0.95",
"Regex": ".*mAP\\(segm\\)/IoU=0\\.5:0\\.95:\\s*(\\S+).*"
},
{
"Name": "mAP(segm)/IoU=0.75",
"Regex": ".*mAP\\(segm\\)/IoU=0\\.75:\\s*(\\S+).*"
},
{
"Name": "mAP(segm)/large",
"Regex": ".*mAP\\(segm\\)/large:\\s*(\\S+).*"
},
{
"Name": "mAP(segm)/medium",
"Regex": ".*mAP\\(segm\\)/medium:\\s*(\\S+).*"
},
{
"Name": "mAP(segm)/small",
"Regex": ".*mAP\\(segm\\)/small:\\s*(\\S+).*"
}
]
```
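To verify that a metric regex captures what you expect before launching a job, you can test it locally against a sample log line (the line below is a made-up example, not actual SageMaker output):
```
import re

# Hypothetical log line, for illustration only.
sample_line = 'epoch 1 step 100: fastrcnn_losses/box_loss: 0.08217 (some other output)'
pattern = '.*fastrcnn_losses/box_loss:\\s*(\\S+).*'
match = re.match(pattern, sample_line)
print(match.group(1) if match else 'no match')  # -> 0.08217
```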
## Define SageMaker Training Job
Next, we use SageMaker [Estimator](https://sagemaker.readthedocs.io/en/stable/estimators.html) API to define a SageMaker Training Job.
We recommend using 32 GPUs, which means setting ```instance_count=4``` with ```instance_type='ml.p3.16xlarge'```, because there are 8 Tesla V100 GPUs per ```ml.p3.16xlarge``` instance; the cell below uses ```instance_count=1```, so increase it to scale out. We recommend using a 100 GB [Amazon EBS](https://aws.amazon.com/ebs/) storage volume with each training instance, so we set ```volume_size = 100```.
We run the training job in your private VPC, so we need to set ```subnets``` and ```security_group_ids``` prior to running the cell below. You may specify multiple subnet IDs in the ```subnets``` list. The subnets included in the ```subnets``` list must be part of the output of the ```./stack-sm.sh``` CloudFormation stack script used to create this notebook instance. Specify only one security group ID in the ```security_group_ids``` list. The security group ID must also be part of the output of the ```./stack-sm.sh``` script.
For ```instance_type``` below, you have the option to use ```ml.p3.16xlarge``` with 16 GB per-GPU memory and 25 Gbps network interconnectivity, or ```ml.p3dn.24xlarge``` with 32 GB per-GPU memory and 100 Gbps network interconnectivity. The ```ml.p3dn.24xlarge``` instance type offers significantly better performance than ```ml.p3.16xlarge``` for Mask R-CNN distributed TensorFlow training.
```
security_group_ids = ['sg-043bfdabb0f3675fd'] # ['sg-xxxxxxxx']
subnets = ['subnet-0f9b8cc9c33f79763','subnet-0cc8d9f0eb3bf5c93','subnet-0fe2a35b1c5495531'] # [ 'subnet-xxxxxxx']
sagemaker_session = sagemaker.session.Session(boto_session=session)
mask_rcnn_estimator = Estimator(image_uri=training_image,
role=role,
instance_count=1,
instance_type='ml.p3.16xlarge',
volume_size = 100,
max_run = 400000,
output_path=s3_output_location,
sagemaker_session=sagemaker_session,
hyperparameters = hyperparameters,
metric_definitions = metric_definitions,
subnets=subnets,
security_group_ids=security_group_ids)
```
Finally, we launch the SageMaker training job. See ```Training Jobs``` in the SageMaker console to monitor the training job.
```
import time
job_name=f'mask-rcnn-s3-{int(time.time())}'
print(f"Launching Training Job: {job_name}")
# set wait=True below if you want to print logs in cell output
mask_rcnn_estimator.fit(inputs=data_channels, job_name=job_name, logs="All", wait=False)
```
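Because ```wait=False``` returns immediately, you can poll the job status from the notebook; one way to do this (a sketch using the boto3 SageMaker client) is:
```
import boto3

# Check the current status of the training job.
sm_client = boto3.client('sagemaker', region_name=region)
description = sm_client.describe_training_job(TrainingJobName=job_name)
print(f"Training job {job_name} status: {description['TrainingJobStatus']}")
```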
```
#default_exp crafter
#hide
from nbdev.showdoc import *
```
# Crafter
Takes a list of image filenames and transforms them to batches of the correct dimensions for CLIP.
This executor subclasses PyTorch's VisionDataset (for its file-loading expertise) and wraps it in a DataLoader. The `DatasetImagePaths` class takes a list of image paths and a transform, and returns the transformed tensors when indexed. DataLoader does batching internally, so we pass it along to the encoder in that format.
```
#export
import torch
from torchvision.datasets import VisionDataset
from PIL import Image
#export
def make_dataset(new_files):
    '''Returns (samples, slugs): samples is a list of (path_to_sample, index)
    tuples and slugs is a list of (filename_slug, index) tuples'''
samples = []
slugs = []
for i, f in enumerate(new_files):
path, slug = f
samples.append((str(path), i))
slugs.append((slug, i))
return(samples, slugs)
#export
def pil_loader(path: str) -> Image.Image:
# open path as file to avoid ResourceWarning (https://github.com/python-pillow/Pillow/issues/835)
with open(path, 'rb') as f:
img = Image.open(f)
return img.convert('RGB')
#export
class DatasetImagePaths(VisionDataset):
def __init__(self, new_files, transforms = None):
super(DatasetImagePaths, self).__init__(new_files, transforms=transforms)
samples, slugs = make_dataset(new_files)
self.samples = samples
self.slugs = slugs
self.loader = pil_loader
self.root = 'file dataset'
def __len__(self):
return(len(self.samples))
def __getitem__(self, index):
path, target = self.samples[index]
sample = self.loader(path)
if sample is not None:
if self.transforms is not None:
sample = self.transforms(sample)
return sample, target
new_files = [('images/memes/Wholesome-Meme-8.jpg', 'Wholesome-Meme-8'), ('images/memes/Wholesome-Meme-1.jpg', 'Wholesome-Meme-1')]#, ('images/corrupted-file.jpeg', 'corrupted-file.jpeg')]
crafted = DatasetImagePaths(new_files)
crafted[0][0]
```
Okay, that seems to work decently. Next, test with transforms, which I will just copy over from the CLIP source code to avoid having to import CLIP in this executor.
```
#export
from torchvision.transforms import Compose, Resize, CenterCrop, ToTensor, Normalize
#export
def clip_transform(n_px):
return Compose([
Resize(n_px, interpolation=Image.BICUBIC),
CenterCrop(n_px),
ToTensor(),
Normalize((0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711)),
])
```
Put that all together, and wrap it in a DataLoader for batching. In the future, we need to figure out how to pick the batch size and number of workers programmatically based on device capabilities (one possible heuristic is sketched after the code below).
```
#export
def crafter(new_files, device, batch_size=128, num_workers=4):
with torch.no_grad():
imagefiles=DatasetImagePaths(new_files, clip_transform(224))
img_loader=torch.utils.data.DataLoader(imagefiles, batch_size=batch_size, shuffle=False, num_workers=num_workers)
return(img_loader)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device
crafted_files = crafter(new_files, device)
crafted_files.batch_size, crafted_files.num_workers
file = new_files[1][0]
#export
def preproc(img):
transformed = clip_transform(224)(img)
return(transformed)
im = preproc([Image.open(file)][0])
# %matplotlib inline
# show_image(im)
```
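One possible heuristic for picking these programmatically (a sketch only, assuming GPU availability is the main signal and leaving one CPU core free for the main process):
```
import os
import torch

def pick_loader_params(default_batch=128):
    '''Heuristic sketch: larger batches when a GPU is available, workers from CPU count.'''
    batch_size = default_batch if torch.cuda.is_available() else 32
    num_workers = max(1, (os.cpu_count() or 2) - 1)
    return batch_size, num_workers

pick_loader_params()
```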
```
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.modules.utils import _pair, _quadruple
from matplotlib import pyplot as plt
import seaborn as sns
import numpy as np
proj_range = torch.load('proj_range.pt')
proj_mask = torch.load('proj_mask.pt')
pred_np = torch.load('pred_np.pt')
pred_np.clip(0, 118)
pred_np.shape
sum(pred_np > 1)
sns.scatterplot(np.arange(len(pred_np)), pred_np)
class MedianPool2d(nn.Module):
""" Median pool (usable as median filter when stride=1) module.
Args:
kernel_size: size of pooling kernel, int or 2-tuple
stride: pool stride, int or 2-tuple
padding: pool padding, int or 4-tuple (l, r, t, b) as in pytorch F.pad
same: override padding and enforce same padding, boolean
"""
def __init__(self, kernel_size=3, stride=1, padding=0, same=True):
super(MedianPool2d, self).__init__()
self.k = _pair(kernel_size)
self.stride = _pair(stride)
self.padding = _quadruple(padding) # convert to l, r, t, b
self.same = same
def _padding(self, x):
if self.same:
ih, iw = x.size()[2:]
if ih % self.stride[0] == 0:
ph = max(self.k[0] - self.stride[0], 0)
else:
ph = max(self.k[0] - (ih % self.stride[0]), 0)
if iw % self.stride[1] == 0:
pw = max(self.k[1] - self.stride[1], 0)
else:
pw = max(self.k[1] - (iw % self.stride[1]), 0)
pl = pw // 2
pr = pw - pl
pt = ph // 2
pb = ph - pt
padding = (pl, pr, pt, pb)
else:
padding = self.padding
return padding
def forward(self, x):
# using existing pytorch functions and tensor ops so that we get autograd,
# would likely be more efficient to implement from scratch at C/Cuda level
x = F.pad(x, self._padding(x), mode='reflect')
x = x.unfold(2, self.k[0], self.stride[0]).unfold(3, self.k[1], self.stride[1])
x = x.contiguous().view(x.size()[:4] + (-1,)).median(dim=-1)[0]
return x
medpool3 = MedianPool2d()
medpool5 = MedianPool2d(kernel_size=5)
medpool7 = MedianPool2d(kernel_size=7)
medpool13 = MedianPool2d(kernel_size=13)
medpool29 = MedianPool2d(kernel_size=29)
def fill_step(tensor, mask, medpool):
    eps = 1e-6
    tensor = tensor * mask  # zero out invalid pixels so only valid values feed the filter
    # apply the median filter, then fill only the invalid pixels with the filtered values
    median = tensor.clone()
    median = median + medpool(median.unsqueeze(0).unsqueeze(0)).squeeze() * torch.logical_not(mask)
    # pixels that now hold a non-zero value are treated as valid in the next pass
    mask = median.abs() > eps
    return median, mask
x = proj_range.clone()
mask = proj_mask.clone()
print(x.shape)
plt.figure(figsize=(20, 20))
plt.subplot(6, 1, 1)
sns.heatmap(x, square=True)
plt.subplot(6, 1, 2)
x, mask = fill_step(x, mask, medpool3)
sns.heatmap(x, square=True)
plt.subplot(6, 1, 3)
x, mask = fill_step(x, mask, medpool5)
sns.heatmap(x, square=True)
plt.subplot(6, 1, 4)
x, mask = fill_step(x, mask, medpool7)
sns.heatmap(x, square=True)
plt.subplot(6, 1, 5)
x, mask = fill_step(x, mask, medpool13)
sns.heatmap(x, square=True)
plt.subplot(6, 1, 6)
x, mask = fill_step(x, mask, medpool29)  # final pass with the largest kernel
sns.heatmap(x, square=True)
import vispy
vispy.test()
```
```
import os
from bids import BIDSLayout
from glob import glob
from nipype.interfaces.io import BIDSDataGrabber
from nipype.pipeline import Node, MapNode, Workflow
from nipype.interfaces.utility import Function
bids_dir = os.path.join('/Users/sebastientourbier/Softwares/mialsuperresolutiontoolkit/data')
output_dir = os.path.join('/Users/sebastientourbier/Softwares/mialsuperresolutiontoolkit/data','derivatives','mialsrtk')
subject = '01'
layout = BIDSLayout(bids_dir)
print(layout)
bg = Node(BIDSDataGrabber(infields = ['subject']),name='bids_grabber')
bg.inputs.base_dir = bids_dir
bg.inputs.subject = subject
bg.inputs.index_derivatives = True
bg.inputs.output_query = {'T2ws': dict(suffix='T2w',datatype='anat',extensions=[".nii",".nii.gz"]),
'masks': dict(suffix='mask',datatype='anat',extensions=[".nii",".nii.gz"])}
from traits.api import *
from nipype.utils.filemanip import split_filename
from nipype.interfaces.base import traits, isdefined, CommandLine, CommandLineInputSpec,\
TraitedSpec, File, InputMultiPath, OutputMultiPath, BaseInterface, BaseInterfaceInputSpec
class prepareDockerPathsInputSpec(BaseInterfaceInputSpec):
local_T2ws_paths = InputMultiPath(File(desc='input T2ws paths', mandatory = True, exists = True))
local_masks_paths = InputMultiPath(File(desc='input masks paths', mandatory = True, exists = True))
local_dir = Directory(mandatory=True)
docker_dir = Directory('/fetaldata',mandatory=True)
class prepareDockerPathsOutputSpec(TraitedSpec):
docker_T2ws_paths = OutputMultiPath(File(desc='docker T2ws paths'))
docker_masks_paths = OutputMultiPath(File(desc='docker masks paths'))
class prepareDockerPaths(BaseInterface):
input_spec = prepareDockerPathsInputSpec
output_spec = prepareDockerPathsOutputSpec
def _run_interface(self,runtime):
return runtime
def _list_outputs(self):
outputs = self._outputs().get()
outputs["docker_T2ws_paths"] = []
for p in self.inputs.local_T2ws_paths:
p = os.path.join(self.inputs.docker_dir,p.split(self.inputs.local_dir)[1].strip("/"))
print(p)
outputs["docker_T2ws_paths"].append(p)
outputs["docker_masks_paths"] = []
for p in self.inputs.local_masks_paths:
p = os.path.join(self.inputs.docker_dir,p.split(self.inputs.local_dir)[1].strip("/"))
print(p)
outputs["docker_masks_paths"].append(p)
return outputs
preparePaths = Node(interface=prepareDockerPaths(), name="preparePaths")
preparePaths.inputs.local_dir = bids_dir
preparePaths.inputs.docker_dir = '/fetaldata'
wf = Workflow(name="bids_demo",base_dir=output_dir)
wf.connect(bg, "T2ws", preparePaths, "local_T2ws_paths")
wf.connect(bg, "masks", preparePaths, "local_masks_paths")
import subprocess
# Shell-command helper; `self` is accepted (and ignored) so the interfaces below
# can call it like a method, and callers pass `cwd` explicitly.
def run(self, command, env={}, cwd=os.getcwd()):
    merged_env = os.environ
    merged_env.update(env)
    process = subprocess.run(command, shell=True,
                             env=merged_env, cwd=cwd, capture_output=True)
    return process
from traits.api import *
from nipype.utils.filemanip import split_filename
from nipype.interfaces.base import traits, isdefined, CommandLine, CommandLineInputSpec,\
TraitedSpec, File, InputMultiPath, OutputMultiPath, BaseInterface, BaseInterfaceInputSpec
import nibabel as nib
class BtkNLMDenoisingInputSpec(BaseInterfaceInputSpec):
bids_dir = Directory(desc='BIDS root directory',mandatory=True,exists=True)
in_file = File(desc='Input image',mandatory=True,)
out_postfix = traits.Str("_nlm", usedefault=True)
weight = traits.Float(0.1,desc='NLM weight (0.1 by default)')
class BtkNLMDenoisingOutputSpec(TraitedSpec):
out_file = File(desc='Denoised image')
class BtkNLMDenoising(BaseInterface):
input_spec = BtkNLMDenoisingInputSpec
output_spec = BtkNLMDenoisingOutputSpec
def _run_interface(self, runtime):
_, name, ext = split_filename(os.path.abspath(self.inputs.in_file))
out_file = os.path.join(os.getcwd().replace(self.inputs.bids_dir,'/fetaldata'), ''.join((name, self.inputs.out_postfix, ext)))
cmd = 'docker run --rm -u {}:{} -v "{}":/fetaldata sebastientourbier/mialsuperresolutiontoolkit btkNLMDenoising -i "{}" -o "{}" -b {}'.format(os.getuid(),os.getgid(),self.inputs.bids_dir,self.inputs.in_file,out_file,self.inputs.weight)
        try:
            print('... cmd: {}'.format(cmd))
            run(self, cmd, env={}, cwd=os.path.abspath(self.inputs.bids_dir))
        except Exception as e:
            print('Failed: {}'.format(e))
return runtime
def _list_outputs(self):
outputs = self._outputs().get()
_, name, ext = split_filename(os.path.abspath(self.inputs.in_file))
outputs['out_file'] = os.path.join(os.getcwd(), ''.join((name, self.inputs.out_postfix, ext)))
return outputs
class MultipleBtkNLMDenoisingInputSpec(BaseInterfaceInputSpec):
bids_dir = Directory(desc='BIDS root directory',mandatory=True,exists=True)
input_images = InputMultiPath(File(desc='files to be denoised', mandatory = True))
weight = traits.Float(0.1)
out_postfix = traits.Str("_nlm", usedefault=True)
class MultipleBtkNLMDenoisingOutputSpec(TraitedSpec):
output_images = OutputMultiPath(File())
class MultipleBtkNLMDenoising(BaseInterface):
input_spec = MultipleBtkNLMDenoisingInputSpec
output_spec = MultipleBtkNLMDenoisingOutputSpec
def _run_interface(self, runtime):
for input_image in self.inputs.input_images:
ax = BtkNLMDenoising(bids_dir = self.inputs.bids_dir, in_file = input_image, out_postfix=self.inputs.out_postfix, weight = self.inputs.weight)
ax.run()
return runtime
def _list_outputs(self):
outputs = self._outputs().get()
outputs['output_images'] = glob(os.path.abspath("*.nii.gz"))
return outputs
nlmDenoise = Node(interface=MultipleBtkNLMDenoising(),base_dir=os.path.join(output_dir,'bids_demo'),name='nlmDenoise')
nlmDenoise.inputs.bids_dir = bids_dir
nlmDenoise.inputs.weight = 0.1
wf.connect(preparePaths, "docker_T2ws_paths", nlmDenoise, "input_images")
res = wf.run()
wf.write_graph()
```
# Symbolic Aggregate approXimation *(SAX)* Encoding
## Distance DEMO
```
# at first time install pynuTS with this command
#!pip install git+https://github.com/nickprock/pynuTS.git@main
import pandas as pd
import numpy as np
from pynuTS.decomposition import NaiveSAX
import matplotlib.pyplot as plt
%matplotlib inline
```
## Introduction
Symbolic Aggregate approXimation Encoding (SAX Encoding)
* Developed in 2002 by Keogh and Lin
* Dimensionality Reduction for sequences
* In this example we will use it to find anomaly patterns. For more informations read this [KDNuggets article](https://www.kdnuggets.com/2019/09/time-series-baseball.html).
## Create dataset.
We create 10 sequences with 120 observations each.
```
# Some useful functions
def sigmoid(x, a, b, c):
expo = a * (b - x)
sig = 1 / ( 1 + np.exp( expo ) ) * c
return sig
def white_noise(time, noise_level=1, seed=None):
rnd = np.random.RandomState(seed)
return rnd.randn(len(time)) * noise_level
time = np.arange(120)
np.random.seed(42)
a = np.random.randn(10)
b = np.random.beta(a[0], a[-1], 10)
c = np.random.normal(loc = 10, scale=0.05,size=10)
list_series = []
for i in range(10):
noise = white_noise(time)
temp = sigmoid(time, a[i], b[i], c[i]) + noise
list_series.append(temp)
```
### Create DataFrame
* every row is a period
* every column is a sequence
```
X = pd.DataFrame(list_series).T
X
X.plot(figsize=(18,10))
plt.legend(["ts1", "ts2","ts3","ts4","ts5","ts6","ts7","ts8","ts9","ts10"])
plt.show()
```
## Distance Matrix with SAX Encoding
We set the `windows` parameter of `NaiveSAX` to 24 (see the code below), so the 120 periods are aggregated window by window and each letter of the resulting SAX string summarizes a block of consecutive observations.
We transpose X because each row must be a time series and each column a timestep.
```
sax = NaiveSAX(windows=24)
sax_strings = np.apply_along_axis(sax.fit_transform, 1, X.T)
```
## Dimensionality Reduction with **Piecewise Aggregate Approximation**
The **Piecewise Aggregate Approximation** consists of taking the mean over consecutive, non-overlapping blocks of points. This decreases the number of points and reduces noise while preserving the trend of the time series.
Each block mean is then mapped to a discrete level, and the labels of these levels form the **SAX string** (like *'AAA'*).
<br>

<br>
```
sax_strings
```
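To make the reduction concrete, here is a minimal manual sketch of PAA followed by quantile binning on the first series. NaiveSAX's exact breakpoints and alphabet may differ, so treat this purely as an illustration of the idea:
```
# Manual PAA + binning on the first series, for illustration only.
window = 24                                               # points averaged into one symbol
series = X.T.values[0]                                    # first time series (120 points)
segment_means = series.reshape(-1, window).mean(axis=1)   # PAA: one mean per window
# Map each mean to a letter using tertiles (an assumed 3-letter alphabet).
breakpoints = np.quantile(segment_means, [1/3, 2/3])
letters = np.array(list('ABC'))[np.digitize(segment_means, breakpoints)]
print(segment_means, ''.join(letters))
```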
### Choose the distance: Hamming
In information theory, the [Hamming distance](https://en.wikipedia.org/wiki/Hamming_distance) between two strings of equal length is the number of positions at which the corresponding symbols are different.
Use the [scipy version](https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.hamming.html)
```
from scipy.spatial.distance import hamming
print("The distance between ", sax_strings[0], " and ", sax_strings[1], " is: ",hamming(list(sax_strings[0]), list(sax_strings[1])))
# very dummy loop
for i in range(len(sax_strings)):
for j in range(len(sax_strings)):
print("The distance between ", sax_strings[i], " and ", sax_strings[j], " is: ",hamming(list(sax_strings[i]), list(sax_strings[j])))
```
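Instead of the nested loop above, the full pairwise distance matrix can be computed in one call. A sketch (the strings are mapped to integer character codes so that scipy's hamming metric applies; it assumes all SAX strings have the same length):
```
from scipy.spatial.distance import pdist, squareform

# All pairwise Hamming distances between the SAX strings at once.
codes = np.array([[ord(ch) for ch in s] for s in sax_strings])
distance_matrix = pd.DataFrame(squareform(pdist(codes, metric='hamming')))
distance_matrix
```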
## Credits
pynuTS by Nicola Procopio 2020
Original repository https://github.com/nickprock/pynuTS/
<br>
* *The **sigmoid** function was created by [Piero Savastano](https://github.com/pieroit) for [covid19italia](https://github.com/ondata/covid19italia/blob/master/visualizzazione/analisi_predittiva.ipynb)*
* *The **white_noise** function was created by [Aurélien Géron](https://github.com/ageron) for an Udacity course*
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import style
import matplotlib.ticker as ticker
import seaborn as sns
from sklearn.datasets import load_boston
from sklearn.ensemble import RandomForestClassifier,\
VotingClassifier,\
GradientBoostingClassifier,\
StackingClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import plot_confusion_matrix
from sklearn.metrics import classification_report
from sklearn.metrics import f1_score, make_scorer
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import train_test_split
from sklearn.model_selection import RepeatedKFold
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import ParameterGrid
from sklearn.inspection import permutation_importance
import multiprocessing
from xgboost import XGBClassifier
labels = pd.read_csv('../../csv/train_labels.csv')
labels.head()
values = pd.read_csv('../../csv/train_values.csv')
values.T
to_be_categorized = ["land_surface_condition", "foundation_type", "roof_type",\
"position", "ground_floor_type", "other_floor_type",\
"plan_configuration", "legal_ownership_status"]
for row in to_be_categorized:
values[row] = values[row].astype("category")
values.info()
datatypes = dict(values.dtypes)
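# Downcast integer columns to smaller integer types based on their maximum value, to save memory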
for row in values.columns:
if datatypes[row] != "int64" and datatypes[row] != "int32" and \
datatypes[row] != "int16" and datatypes[row] != "int8":
continue
if values[row].nlargest(1).item() > 32767 and values[row].nlargest(1).item() < 2**31:
values[row] = values[row].astype(np.int32)
elif values[row].nlargest(1).item() > 127:
values[row] = values[row].astype(np.int16)
else:
values[row] = values[row].astype(np.int8)
labels["building_id"] = labels["building_id"].astype(np.int32)
labels["damage_grade"] = labels["damage_grade"].astype(np.int8)
labels.info()
```
# Feature Engineering for XGBoost
```
important_values = values\
.merge(labels, on="building_id")
important_values.drop(columns=["building_id"], inplace = True)
important_values["geo_level_1_id"] = important_values["geo_level_1_id"].astype("category")
important_values
X_train, X_test, y_train, y_test = train_test_split(important_values.drop(columns = 'damage_grade'),
important_values['damage_grade'], test_size = 0.2, random_state = 123)
#OneHotEncoding
def encode_and_bind(original_dataframe, feature_to_encode):
dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])
res = pd.concat([original_dataframe, dummies], axis=1)
res = res.drop([feature_to_encode], axis=1)
return(res)
features_to_encode = ["geo_level_1_id", "land_surface_condition", "foundation_type", "roof_type",\
"position", "ground_floor_type", "other_floor_type",\
"plan_configuration", "legal_ownership_status"]
for feature in features_to_encode:
X_train = encode_and_bind(X_train, feature)
X_test = encode_and_bind(X_test, feature)
X_train
import time
# min_child_weight = [0, 1, 2]
# max_delta_step = [0, 5, 10]
def my_grid_search():
print(time.gmtime())
i = 1
df = pd.DataFrame({'subsample': [],
'gamma': [],
'learning_rate': [],
'max_depth': [],
'score': []})
for subsample in [0.75, 0.885, 0.95]:
for gamma in [0.75, 1, 1.25]:
for learning_rate in [0.4375, 0.45, 0.4625]:
for max_depth in [5, 6, 7]:
model = XGBClassifier(n_estimators = 350,
booster = 'gbtree',
subsample = subsample,
gamma = gamma,
max_depth = max_depth,
learning_rate = learning_rate,
label_encoder = False,
verbosity = 0)
model.fit(X_train, y_train)
y_preds = model.predict(X_test)
score = f1_score(y_test, y_preds, average = 'micro')
df = df.append(pd.Series(
data={'subsample': subsample,
'gamma': gamma,
'learning_rate': learning_rate,
'max_depth': max_depth,
'score': score},
name = i))
print(i, time.gmtime())
i += 1
return df.sort_values('score', ascending = False)
current_df = my_grid_search()
df = pd.read_csv('grid-search/res-feature-engineering.csv')
df = df.append(current_df)
df.to_csv('grid-search/res-feature-engineering.csv')
current_df
import time
def my_grid_search():
print(time.gmtime())
i = 1
df = pd.DataFrame({'subsample': [],
'gamma': [],
'learning_rate': [],
'max_depth': [],
'score': []})
for subsample in [0.885]:
for gamma in [1]:
for learning_rate in [0.45]:
for max_depth in [5,6,7,8]:
model = XGBClassifier(n_estimators = 350,
booster = 'gbtree',
subsample = subsample,
gamma = gamma,
max_depth = max_depth,
learning_rate = learning_rate,
label_encoder = False,
verbosity = 0)
model.fit(X_train, y_train)
y_preds = model.predict(X_test)
score = f1_score(y_test, y_preds, average = 'micro')
df = df.append(pd.Series(
data={'subsample': subsample,
'gamma': gamma,
'learning_rate': learning_rate,
'max_depth': max_depth,
'score': score},
name = i))
print(i, time.gmtime())
i += 1
return df.sort_values('score', ascending = False)
df = my_grid_search()
# df = pd.read_csv('grid-search/res-feature-engineering.csv')
# df.append(current_df)
df.to_csv('grid-search/res-feature-engineering.csv')
df
pd.read_csv('grid-search/res-no-feature-engineering.csv')\
.nlargest(20, 'score')
```
# Model Definitions and Stacking
```
xgb_model_1 = XGBClassifier(n_estimators = 350,
subsample = 0.885,
booster = 'gbtree',
gamma = 1,
learning_rate = 0.45,
label_encoder = False,
verbosity = 2)
xgb_model_2 = XGBClassifier(n_estimators = 350,
subsample = 0.950,
booster = 'gbtree',
gamma = 0.5,
learning_rate = 0.45,
label_encoder = False,
verbosity = 2)
xgb_model_3 = XGBClassifier(n_estimators = 350,
subsample = 0.750,
booster = 'gbtree',
gamma = 1,
learning_rate = 0.45,
label_encoder = False,
verbosity = 2)
xgb_model_4 = XGBClassifier(n_estimators = 350,
subsample = 0.80,
booster = 'gbtree',
gamma = 1,
learning_rate = 0.55,
label_encoder = False,
verbosity = 2)
rf_model_1 = RandomForestClassifier(n_estimators = 150,
max_depth = None,
max_features = 45,
min_samples_split = 15,
min_samples_leaf = 1,
criterion = "gini",
verbose=True)
rf_model_2 = RandomForestClassifier(n_estimators = 250,
max_depth = None,
max_features = 45,
min_samples_split = 15,
min_samples_leaf = 1,
criterion = "gini",
verbose=True,
n_jobs =-1)
import lightgbm as lgb
lgbm_model_1 = lgb.LGBMClassifier(boosting_type='gbdt',
colsample_bytree=1.0,
importance_type='split',
learning_rate=0.15,
max_depth=None,
n_estimators=1600,
n_jobs=-1,
objective=None,
subsample=1.0,
subsample_for_bin=200000,
subsample_freq=0)
lgbm_model_2 = lgb.LGBMClassifier(boosting_type='gbdt',
colsample_bytree=1.0,
importance_type='split',
learning_rate=0.15,
max_depth=25,
n_estimators=1750,
n_jobs=-1,
objective=None,
subsample=0.7,
subsample_for_bin=240000,
subsample_freq=0)
lgbm_model_3 = lgb.LGBMClassifier(boosting_type='gbdt',
colsample_bytree=1.0,
importance_type='split',
learning_rate=0.20,
max_depth=40,
n_estimators=1450,
n_jobs=-1,
objective=None,
subsample=0.7,
subsample_for_bin=160000,
subsample_freq=0)
import sklearn as sk
import sklearn.neural_network
neuronal_1 = sk.neural_network.MLPClassifier(solver='adam',
activation = 'relu',
learning_rate_init=0.001,
learning_rate = 'adaptive',
verbose=True,
batch_size = 'auto')
estimators = [('xgb', xgb_model_1),
('rfm', rf_model_1),
('lgbm', lgbm_model_1)]
final_estimator = GradientBoostingClassifier(n_estimators = 305,
max_depth = 9,
min_samples_split = 2,
min_samples_leaf = 3,
subsample=0.6,
verbose=True,
learning_rate=0.15)
sc_model = StackingClassifier(estimators = estimators)
sc_model.fit(X_train, y_train)
y_preds = sc_model.predict(X_test)
f1_score(y_test, y_preds, average='micro')
test_values = pd.read_csv('../../csv/test_values.csv', index_col = "building_id")
test_values
test_values_subset = test_values
test_values_subset["geo_level_1_id"] = test_values_subset["geo_level_1_id"].astype("category")
test_values_subset
def encode_and_bind(original_dataframe, feature_to_encode):
dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])
res = pd.concat([original_dataframe, dummies], axis=1)
res = res.drop([feature_to_encode], axis=1)
return(res)
features_to_encode = ["geo_level_1_id", "land_surface_condition", "foundation_type", "roof_type",\
"position", "ground_floor_type", "other_floor_type",\
"plan_configuration", "legal_ownership_status"]
for feature in features_to_encode:
test_values_subset = encode_and_bind(test_values_subset, feature)
test_values_subset
test_values_subset.shape
# Generate the predictions for the test set.
preds = sc_model.predict(test_values_subset)
submission_format = pd.read_csv('../../csv/submission_format.csv', index_col = "building_id")
my_submission = pd.DataFrame(data=preds,
columns=submission_format.columns,
index=submission_format.index)
my_submission.head()
my_submission.to_csv('../../csv/predictions/jf/vote/jf-model-3-submission.csv')
!head ../../csv/predictions/jf/vote/jf-model-3-submission.csv
```
```
import core.config as config
from chofer_tda_datasets import Reininghaus2014ShrecReal, SciNe01EEGBottomTopFiltration
from chofer_tda_datasets.transforms import Hdf5GroupToDict, Hdf5GroupToDictSelector
from sklearn.model_selection import StratifiedShuffleSplit, GridSearchCV
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
import sys
import pickle
import numpy as np
from IPython.display import clear_output
from collections import defaultdict
def bendich_vectorization(dgm, num_dims=100):
persistences = [d-b for b, d in dgm]
v = sorted(persistences, reverse=True)
if len(v) < num_dims:
v += [0]*(num_dims - len(v))
return v[:num_dims]
def svm_linear_standard_scaled_c_optimized(pca_num_dims=None):
grid = {'C': [0.1, 1, 10, 100]}
clf = GridSearchCV(cv=3,
estimator=LinearSVC(),
param_grid=grid,
n_jobs=10
)
pipeline_members = []
pipeline_members.append(('scaler', StandardScaler()))
if pca_num_dims is not None:
pipeline_members.append(('pca', PCA(pca_num_dims)))
pipeline_members.append(('classifier', clf))
return Pipeline(pipeline_members)
def bendich_vectorization_generic_experiment(dataset,
vectorization_callback,
vectorization_dimensions,
pca_num_dims=None):
train_size = 0.9
splitter = StratifiedShuffleSplit(n_splits=10,
train_size=train_size,
test_size=1-train_size,
random_state=123)
train_test_splits = list(splitter.split(X=dataset.targets, y=dataset.targets))
train_test_splits = [(train_i.tolist(), test_i.tolist()) for train_i, test_i in train_test_splits]
return_value = {}
X = []
y = []
for i, (x_i, y_i) in enumerate(dataset):
clear_output(wait=True)
print('loading data ... ', i, end='\r')
sys.stdout.flush()
v = vectorization_callback(x_i, num_dims=max(vectorization_dimensions))
X.append(v)
y.append(int(y_i))
# X = np.array(X)
y = np.array(y)
print('')
for dim in vectorization_dimensions:
print('dimension =', dim, ":")
return_value_dim = defaultdict(list)
return_value[dim] = return_value_dim
X_dim = []
for x in X:
X_dim.append(sum([v[:dim] for v in x], []))
X_dim = np.array(X_dim)
for run_i, (train_i, test_i) in enumerate(train_test_splits):
print('run', run_i, end='\r')
X_train = X_dim[train_i]
y_train = y[train_i]
X_test = X_dim[test_i]
y_test = y[test_i]
classifier = svm_linear_standard_scaled_c_optimized(pca_num_dims=pca_num_dims)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
return_value_dim['accuracies'].append(accuracy_score(y_test, y_pred))
return_value_dim['classifier'].append(classifier)
print('')
return return_value
ds_shrec_real = Reininghaus2014ShrecReal(data_root_folder_path=config.paths.data_root_dir)
ds_shrec_real.data_transforms = [Hdf5GroupToDict()]
def shrec_real_bendich_vectorization(input_dict, num_dims):
ret_val = []
for scale in range(1, 11):
for dim in ['0', '1']:
x = input_dict[str(scale)][dim]
ret_val.append(bendich_vectorization(x, num_dims=num_dims))
return ret_val
shrec_result = bendich_vectorization_generic_experiment(ds_shrec_real,
shrec_real_bendich_vectorization,
vectorization_dimensions=[5, 10, 20, 40, 80, 160])
with open('./bendich_exp_shrec_real.pickle', 'bw') as f:
pickle.dump(shrec_result, f)
for k, v in shrec_result.items():
print('dimension', k, 'accuracy:', np.mean(v['accuracies']))
ds_scine_eeg = SciNe01EEGBottomTopFiltration(data_root_folder_path=config.paths.data_root_dir)
sensor_indices = [str(i) for i in ds_scine_eeg.sensor_configurations['low_resolution_whole_head']]
selection = {'top': sensor_indices, 'bottom': sensor_indices}
selector = Hdf5GroupToDictSelector(selection)
ds_scine_eeg.data_transforms = [selector]
def scine_bendich_vectorization(input_dict, num_dims):
ret_val = []
for filt in ['top', 'bottom']:
for sensor_i in sensor_indices:
x = input_dict[filt][sensor_i]
ret_val.append(bendich_vectorization(x, num_dims=num_dims))
return ret_val
eeg_result = bendich_vectorization_generic_experiment(ds_scine_eeg,
scine_bendich_vectorization,
vectorization_dimensions=[5, 10, 20, 40, 80, 160],
pca_num_dims=None)
with open('./bendich_exp_scitrecs_eeg.pickle', 'bw') as f:
pickle.dump(eeg_result, f)
for k, v in eeg_result.items():
print('dimension', k, 'accuracy:', np.mean(v['accuracies']))
with open('./bendich_exp_scitrecs_eeg.pickle', 'br') as f:
result = pickle.load(f)
for k, v in result.items():
print(k, np.mean(v['accuracies']))
```
# Exercise Set 6: Data Structuring 2
*Afternoon, August 15, 2018*
In this Exercise Set we will continue working with the weather data you downloaded and saved in Exercise Set 4.
> **_Note_**: to solve the bonus exercises in this exerise set you will need to apply the `.groupby()` method a few times. This has not yet been covered in the lectures (you will see it tomorrow).
>
> `.groupby()` is a method of pandas dataframes, meaning we can call it like so: `data.groupby('colname')`. The method groups your dataset by a specified column, and applies any following changes within each of these groups. For a more detailed explanation see [this link](https://www.tutorialspoint.com/python_pandas/python_pandas_groupby.htm). The [documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html) might also be useful.
First load in the required modules and set up the plotting library:
```
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
```
## Exercise Section 6.1: Weather, part 2
This section is the second part of three that analyzes NOAA data. The first part is Exercise Section 4.1, the last part is Exercise Section 7.2.
> **Ex. 6.1.1:** Load the CSV data you stored yesterday as part of Exercise Section 4.1. If you didn't manage to save the CSV file, you can use the code in [this gist](https://gist.github.com/Kristianuruplarsen/be3a14b226fc4c4d7b62c39de70307e4) to load in the NOAA data.
```
# [Answer to Ex. 6.1.1]
import pandas as pd
df_weather = pd.read_csv('/Users/karlbindslev/Documents/GitHub/sds_group29/Test_karl/material/session_6/1864.csv', header = None).iloc[:,:4]
df_weather.columns = ['station', 'datetime', 'obs_type', 'obs_value']
df_weather['obs_value'] = df_weather['obs_value'] / 10
df_select = df_weather[(df_weather.station == 'ITE00100550') & (df_weather.obs_type == 'TMAX')].copy()
df_select['TMAX_F'] = 32 + 1.8 * df_select['obs_value']
df_sorted = df_select.reset_index(drop=True).sort_values(by=['obs_value'])
print(df_sorted)
print(df_sorted['station'].unique())
```
> **Ex. 6.1.2:** Convert the date formatted as string to datetime. Make a new column with the month for each observation.
```
# [Answer to Ex. 6.1.2]
#print(type(df_sorted))
df_sorted.dtypes
df_sorted['datetime'] = pd.to_datetime(df_sorted['datetime'], format ='%Y%m%d')
df_sorted['month'] = df_sorted['datetime'].dt.month
print(df_sorted.head(5))
```
> **Ex. 6.1.3:** Set the datetime variable as temporal index and make a timeseries plot.
> _Hint:_ for this you need to know a few methods of the pandas DataFrames and pandas Series objects. Look up `.set_index()` and `.plot()`.
```
# [Answer to Ex. 6.1.3]
#print(df_sorted['datetime'][0])
df_sorted.set_index('datetime').plot()
```
> **Ex. 6.1.4:** Extract the country code from the station name into a separate column.
> _Hint:_ The station column contains a GHCND ID, given to each weather station by NOAA. The format of these ID's is a 2-3 letter country code, followed by a integer identifying the specific station. A simple approach is to assume a fixed length of the country ID. A more complex way would be to use the [`re`](https://docs.python.org/2/library/re.html) module.
```
# [Answer to Ex. 6.1.4]
import re
df_sorted['country_code'] = df_sorted['station'].str.extract('([A-Z]+)', expand = True)
print(df_sorted.head(5))
print(df_sorted['country_code'].unique())
```
> **Ex. 6.1.5:** Make a function that downloads and formats the weather data according to previous exercises in Exercise Section 4.1, 6.1. You should use data for ALL stations but still only select maximal temperature. _Bonus:_ To validate that your function works plot the temperature curve for each country in the same window. Use `plt.legend()` to add a legend.
```
# [Answer to Ex. 6.1.5]
def weather():
df_weather = pd.read_csv('/Users/karlbindslev/Documents/GitHub/sds_group29/Test_karl/material/session_6/1864.csv', sep = ',', header = None).iloc[:,:4]
df_weather.columns = ['station', 'datetime', 'obs_type', 'obs_value']
df_weather['obs_value'] = df_weather['obs_value'] / 10
    df_select = df_weather[df_weather.obs_type == 'TMAX'].copy()  # keep ALL stations, maximal temperature only
df_select['TMAX_F'] = 32 + 1.8 * df_select['obs_value']
df_sorted = df_select.reset_index(drop=True).sort_values(by=['obs_value'])
df_sorted['country_code'] = df_sorted['station'].str.extract('([A-Z]+)', expand = True)
print(df_sorted['country_code'].unique())
weather()
```
## Exercise Section 6.2:
In this section we will use [this dataset](https://archive.ics.uci.edu/ml/datasets/Adult) from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets.html) to practice some basic operations on pandas dataframes.
> **Ex. 6.2.1:** This link `'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data'` leads to a comma-separated file with income data from a US census. Load the data into a pandas dataframe and show the 25th to 35th row.
> _Hint #1:_ There are no column names in the dataset. Use the list `['age','workclass', 'fnlwgt', 'educ', 'educ_num', 'marital_status', 'occupation','relationship', 'race', 'sex','capital_gain', 'capital_loss', 'hours_per_week', 'native_country', 'wage']` as names.
> _Hint #2:_ When you read in the csv, you might find that pandas includes whitespace in all of the cells. To get around this include the argument `skipinitialspace = True` to `read_csv()`.
```
# [Answer to Ex. 6.2.1]
```
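A possible sketch for this exercise (not a definitive solution; the column names come from the hint above, and `adult` is simply a name chosen here):
```
import pandas as pd

url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data'
cols = ['age','workclass', 'fnlwgt', 'educ', 'educ_num', 'marital_status',
        'occupation','relationship', 'race', 'sex','capital_gain',
        'capital_loss', 'hours_per_week', 'native_country', 'wage']
adult = pd.read_csv(url, names=cols, skipinitialspace=True)
adult.iloc[24:35]  # positional slice 24:35 gives the 25th to 35th rows (1-indexed)
```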
> **Ex. 6.2.2:** What is the missing value sign in this dataset? Replace all missing values with NA's understood by pandas. Then proceed to drop all rows containing any missing values with the `dropna` method. How many rows are removed in this operation?
> _Hint 1:_ if this doesn't work as expected you might want to take a look at the hint for 6.2.1 again.
> _Hint 2:_ The NaN method from NumPy might be useful
```
# [Answer to Ex. 6.2.2]
```
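A sketch of one approach, assuming the `adult` dataframe from the previous sketch; in this dataset missing values are marked with `'?'`:
```
import numpy as np

adult_na = adult.replace('?', np.nan)   # turn the '?' marker into proper NaN values
adult_clean = adult_na.dropna()         # drop every row with at least one missing value
print('rows removed:', len(adult_na) - len(adult_clean))
```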
> **Ex. 6.2.3:** (_Bonus_) Is there any evidence of a gender-wage-gap in the data? Create a table showing the percentage of men and women earning more than 50K a year.
```
# [Answer to Ex. 6.2.3]
```
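One possible way to build the table, again assuming the cleaned `adult_clean` dataframe from the sketch above:
```
# percentage of each sex earning more than 50K a year
(adult_clean.assign(high=adult_clean['wage'] == '>50K')
            .groupby('sex')['high']
            .mean() * 100).round(1)
```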
> **Ex. 6.2.4:** (_Bonus_) Group the data by years of education (`educ_num`) and marital status. Now plot the share of individuals who earn more than 50K for the two groups 'Divorced' and 'Married-civ-spouse' (normal marriage). Your final result should look like this:

> _Hint:_ the `.query()` method is extremely useful for filtering data.
```
# [Answer to Ex. 6.2.4]
```
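A sketch for the bonus plot, reusing `adult_clean`; the two marital-status values are the ones named in the exercise:
```
# share of individuals earning >50K, by years of education and marital status
shares = (adult_clean.assign(high=adult_clean['wage'] == '>50K')
                     .groupby(['marital_status', 'educ_num'])['high']
                     .mean()
                     .reset_index())

for status in ['Divorced', 'Married-civ-spouse']:
    sub = shares.query("marital_status == @status")
    plt.plot(sub['educ_num'], sub['high'], label=status)

plt.xlabel('educ_num')
plt.ylabel('share earning > 50K')
plt.legend()
plt.show()
```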
# buy-and-hold (monthly and holding period returns)
buy, then never ever sell, until the end date :)
```
import pandas as pd
import matplotlib.pyplot as plt
import datetime
from talib.abstract import *
import pinkfish as pf
# format price data
pd.options.display.float_format = '{:0.2f}'.format
%matplotlib inline
# set size of inline plots
'''note: rcParams can't be in same cell as import matplotlib
or %matplotlib inline
%matplotlib notebook: will lead to interactive plots embedded within
the notebook, you can zoom and resize the figure
%matplotlib inline: only draw static images in the notebook
'''
plt.rcParams["figure.figsize"] = (10, 7)
pf.DEBUG = True
```
Some global data
```
symbol = '^GSPC'
#symbol = 'SPY'
capital = 10000
start = datetime.datetime(1900, 1, 1)
end = datetime.datetime.now()
```
Define Strategy Class
```
class Strategy:
def __init__(self, symbol, capital, start, end):
self.symbol = symbol
self.capital = capital
self.start = start
self.end = end
def _algo(self):
pf.TradeLog.cash = self.capital
for i, row in enumerate(self.ts.itertuples()):
date = row.Index.to_pydatetime()
high = row.high; low = row.low; close = row.close
end_flag = pf.is_last_row(self.ts, i)
shares = 0
# buy
if self.tlog.shares == 0:
shares = self.tlog.buy(date, close)
# sell
elif end_flag:
shares = self.tlog.sell(date, close)
if shares > 0:
pf.DBG("{0} BUY {1} {2} @ {3:.2f}".format(
date, shares, self.symbol, close))
elif shares < 0:
pf.DBG("{0} SELL {1} {2} @ {3:.2f}".format(
date, -shares, self.symbol, close))
# record daily balance
self.dbal.append(date, high, low, close)
def run(self):
self.ts = pf.fetch_timeseries(self.symbol)
self.ts = pf.select_tradeperiod(self.ts, self.start, self.end,
use_adj=True)
self.ts, self.start = pf.finalize_timeseries(self.ts, self.start)
self.tlog = pf.TradeLog(self.symbol)
self.dbal = pf.DailyBal()
self._algo()
def get_logs(self):
""" return DataFrames """
self.tlog = self.tlog.get_log()
self.dbal = self.dbal.get_log(self.tlog)
return self.tlog, self.dbal
def get_stats(self):
stats = pf.stats(self.ts, self.tlog, self.dbal, self.capital)
return stats
```
Run Strategy
```
s = Strategy(symbol, capital, start, end)
s.run()
```
Retrieve log DataFrames
```
tlog, dbal = s.get_logs()
stats = s.get_stats()
tlog.tail()
dbal.tail()
pf.print_full(stats)
```
Summary
```
pf.summary(stats)
returns = dbal['close']
pf.monthly_returns_map(returns['1990':])
returns = dbal['close']
pf.holding_period_map(returns['1990':])
```
# Advanced Tutorial (geared toward state-space models)
This tutorial covers more or less the same topics as the basic tutorial (filtering and smoothing of state-space models), but in greater detail.
## Defining state-space models
We consider a state-space model of the form:
\begin{align*}
X_0 & \sim N(0, 1) \\
X_t & = f(X_{t-1}) + U_t, \quad U_t \sim N(0, \sigma_X^2) \\
Y_t & = X_t + V_t, \quad V_t \sim N(0, \sigma_Y^2)
\end{align*}
where function $f$ is defined as follows: $f(x) = x + \tau_0 - \tau_1 \exp(\tau_2 x)$ (matching the code below). This model comes from Population Ecology; there $X_t$ stands for the logarithm of the population size of a given species.
This model may be defined as follows.
```
# the usual imports
from matplotlib import pyplot as plt
import seaborn as sb
import numpy as np
# imports from the package
import particles
from particles import state_space_models as ssm
from particles import distributions as dists
class ThetaLogistic(ssm.StateSpaceModel):
""" Theta-Logistic state-space model (used in Ecology).
"""
default_params = {'tau0':.15, 'tau1':.12, 'tau2':.1, 'sigmaX': 0.47, 'sigmaY': 0.39}
def PX0(self): # Distribution of X_0
return dists.Normal()
def f(self, x):
return (x + self.tau0 - self.tau1 * np.exp(self.tau2 * x))
def PX(self, t, xp): # Distribution of X_t given X_{t-1} = xp (p=past)
return dists.Normal(loc=self.f(xp), scale=self.sigmaX)
def PY(self, t, xp, x): # Distribution of Y_t given X_t=x, and X_{t-1}=xp
return dists.Normal(loc=x, scale=self.sigmaY)
```
This is most similar to what we did in the previous tutorial (for stochastic volatility models): methods `PX0`, `PX` and `PY` return objects defined in module `distributions`. (See the [documentation](distributions.html) of that module for a list of available distributions).
The only novelty is that we defined (as a class attribute) the dictionary `default_params`, which provides default values for each parameter. When it is defined, each parameter that is not set explicitly when instantiating (calling) `ThetaLogistic` is replaced by its default value:
```
my_ssm = ThetaLogistic() # use default values for all parameters
x, y = my_ssm.simulate(100)
plt.style.use('ggplot')
plt.plot(y)
plt.xlabel('t')
plt.ylabel('data');
```
"Bogus Parameters" (parameters that do not appear in `PX0`, `PX` and `PY`) are simply ignored:
```
just_for_fun = ThetaLogistic(tau2=0.3, bogus=92.) # ok
```
This behaviour may look surprising, but it will allow us to define prior distributions that involve hyper-parameters.
## Automatic definition of `FeynmanKac` objects
We have seen in the previous tutorial how to run a bootstrap filter: we first define a `Bootstrap` object, and then pass it to `SMC`.
```
fk_boot = ssm.Bootstrap(ssm=my_ssm, data=y)
my_alg = particles.SMC(fk=fk_boot, N=100)
my_alg.run()
```
In fact, `ssm.Bootstrap` is a subclass of `FeynmanKac`, the base class for objects that represent "Feynman-Kac models" (covered in Chapters 5 and 10 of the book). To make things simple, a Feynman-Kac model is a "recipe" for our SMC algorithms; in particular, it tells us:
1. how to sample each particle $X_t^n$ at time $t$, given their ancestors $X_{t-1}^n$;
2. how to reweight each particle $X_t^n$ at time $t$.
The bootstrap filter is a particular "recipe", where:
1. we sample the particles $X_t^n$ according to the state transition of the model; in our case a $N(f(x_{t-1}),\sigma_X^2)$ distribution.
2. we reweight the particles according to the likelihood of the model; here the density of $N(x_t,\sigma_Y^2)$ at point $y_t$.
The class `ssm.Bootstrap` defines this recipe automatically from the supplied state-space model and data.
The bootstrap filter is not the only available "recipe". We may want to run a *guided* filter, where the particles are simulated according to user-chosen proposal kernels. Such proposal kernels may be defined by adding methods `proposal` and `proposal0` to our `StateSpaceModel` class:
```
class ThetaLogistic_with_prop(ThetaLogistic):
def proposal0(self, data):
return self.PX0()
def proposal(self, t, xp, data):
prec_prior = 1. / self.sigmaX**2
prec_lik = 1. / self.sigmaY**2
var = 1. / (prec_prior + prec_lik)
mu = var * (prec_prior * self.f(xp) + prec_lik * data[t])
return dists.Normal(loc=mu, scale=np.sqrt(var))
my_better_ssm = ThetaLogistic_with_prop()
```
In this particular case, we implemented the "optimal" proposal, that is, the distribution of $X_t$ given $X_{t-1}$ and $Y_t$. (Check this is indeed the case; it is a simple exercise!) (For simplicity, the proposal at time 0 is simply the distribution of $X_0$, so this one is not optimal.)
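For the record, here is the short computation behind that claim (standard Gaussian conjugacy):
\begin{align*}
p(x_t \mid x_{t-1}, y_t) & \propto \exp\left\{-\frac{(x_t - f(x_{t-1}))^2}{2\sigma_X^2} - \frac{(y_t - x_t)^2}{2\sigma_Y^2}\right\} \\
& \propto \exp\left\{-\frac{(x_t - \mu_t)^2}{2 v}\right\},
\qquad \frac{1}{v} = \frac{1}{\sigma_X^2} + \frac{1}{\sigma_Y^2},
\quad \mu_t = v \left(\frac{f(x_{t-1})}{\sigma_X^2} + \frac{y_t}{\sigma_Y^2}\right),
\end{align*}
i.e. a Normal whose precision is the sum of the prior and likelihood precisions, and whose mean is the precision-weighted average implemented in `proposal` above.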
Now we may define our guided Feynman-Kac model:
```
fk_guided = ssm.GuidedPF(ssm=my_better_ssm, data=y)
```
An APF (auxiliary particle filter) may be implemented in the same way: for this, we must also define method `logeta`, which computes the auxiliary function used in the resampling step; see the documentation and the end of Chapter 10 of the book.
## Running a particle filter
Here is the signature of class `SMC`:
```
alg = particles.SMC(fk=fk_guided, N=100, qmc=False, resampling='systematic', ESSrmin=0.5,
store_history=False, verbose=False, collect=None)
```
Apart from ``fk`` (which expects a `FeynmanKac` object), all the other arguments are optional. Here is what they do:
* `N`: the number of particles
* `qmc`: whether to use the QMC (quasi-Monte Carlo) version
* `resampling`: which resampling scheme to use (possible choices: `'multinomial'`, `'residual'`, `'stratified'`, `'systematic'` and `'ssp'`)
* `ESSrmin`: resampling is triggered at a given iteration whenever ESS / N falls below this threshold; set it to `1.` (resp. `0.`) to resample every time (resp. to never resample)
* `verbose`: whether to print progress information
The remaining arguments (``store_history`` and ``collect``) will be explained in the following sections.
Once we have created an SMC object, we may run it, either step by step, or in one go. For instance:
```
next(alg) # processes data-point y_0
next(alg) # processes data-point y_1
for _ in range(8):
    next(alg) # processes data-points y_2 to y_9
# alg.run() # would process all the remaining data-points
```
At any time, object `alg` has the following attributes:
* `alg.t`: index of next iteration
* `alg.X`: the N current particles $X_t^n$; typically a (N,) or (N,d) [numpy array](https://docs.scipy.org/doc/numpy-dev/user/quickstart.html)
* `alg.W`: the N normalised weights $W_t^n$ (a (N,) numpy array)
* `alg.Xp`: the N particles at the previous iteration, $X_{t-1}^n$
* `alg.A`: the N ancestor variables: A[3] = 12 means that the parent of $X_t^3$ was $X_{t-1}^{12}$.
* `alg.summaries`: various summaries collected at each iteration.
Let's do for instance a weighted histogram of the particles.
```
plt.hist(alg.X, 20, weights=alg.W);
```
Object alg.summaries contains various lists of quantities collected at each iteration, such as:
* `alg.summaries.ESSs`: the ESS (effective sample size) at each iteration
* `alg.summaries.rs_flags`: whether or not resampling was triggered at each step
* `alg.summaries.logLts`: estimates of the log-likelihood of the data $y_{0:t}$
All this and more is explained in the documentation of the `collectors` module. Let's plot the ESS and the log-likelihood:
```
plt.plot(alg.summaries.ESSs)
plt.xlabel('t')
plt.ylabel('ESS');
plt.plot(alg.summaries.logLts)
plt.xlabel('t')
plt.ylabel('log-likelihood');
```
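We can also check at which iterations resampling was actually triggered, using the `rs_flags` list mentioned above:
```
resampling_times = [t for t, triggered in enumerate(alg.summaries.rs_flags) if triggered]
print('resampling occurred at iterations:', resampling_times)
```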
## Running many particle filters in one go
Function multiSMC accepts the same arguments as `SMC` plus the following extra arguments:
* `nruns`: number of runs
* `nprocs`: if >0, number of CPU cores to use; if <=0, number of cores *not to* use; i.e. `nprocs=0` means use all cores
* `out_func`: a function that is applied to each resulting particle filter (see below).
To explain how exactly `multiSMC` works, let's try to compare the bootstrap and guided filters for the theta-logistic model we defined at the beginning of this tutorial:
```
outf = lambda pf: pf.logLt
results = particles.multiSMC(fk={'boot':fk_boot, 'guid':fk_guided},
nruns=20, nprocs=1, out_func=outf)
```
The command above runs **40** particle algorithms (on a single core): 20 bootstrap filters, and 20 guided filters. The output, ``results``, is a list of 40 dictionaries; each dictionary contains the following (key, value) pairs:
* `'fk'`: either `'boot'` or `'guid'` (according to whether a bootstrap or guided filter has been run)
* `'run'`: a run indicator (between 0 and 19)
* `'output'`: the result of `outf(pf)` where pf is the SMC object that was run. (If `outf` is set to None, then the SMC object is returned.)
The rationale for function `outf` is that SMC objects may take a lot of memory in certain cases (especially if you set `store_history=True`, see section on smoothing below), so we may want to save only some results of interest rather than the complete object itself. Here the output is simply the estimate of the log-likelihood of the (complete) data computed by each particle filter. Let's check if the guided filter provides lower-variance estimates, relative to the bootstrap filter.
```
sb.boxplot(x=[r['fk'] for r in results], y=[r['output'] for r in results])
```
This is indeed the case. To understand this line of code, you must be a bit familiar with [list comprehensions](http://www.secnetix.de/olli/Python/list_comprehensions.hawk).
More generally, function `multiSMC` may be used to run multiple SMC algorithms, while varying any possible arguments; for more details, see the documentation of `multiSMC` and of the module `particles.utils`.
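For instance, one could also vary the number of particles, assuming (as with the `fk` dict above) that `multiSMC` interprets list-valued arguments as parameters to vary and records the value used under the corresponding key (here `'N'`) in each result dictionary; check the documentation of `multiSMC` and `particles.utils` if in doubt:
```
outf = lambda pf: pf.logLt
results_N = particles.multiSMC(fk=fk_guided, N=[100, 500, 1000],
                               nruns=5, nprocs=1, out_func=outf)
sb.boxplot(x=[r['N'] for r in results_N], y=[r['output'] for r in results_N])
```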
## Collectors, on-line smoothing
We have said that `alg.summaries` (where `alg` is an SMC object) contains **lists** of quantities computed at each iteration (such as the ESS, the log-likelihood estimates). It is possible to compute extra quantities such as:
* moments: at each time $t$, a dictionary with keys 'mean', and 'var', which stores the component-wise weighted means and variances.
* on-line smoothing estimates (naive, and $O(N^2)$, see module ``collectors`` for more details)
by providing a list of `Collector` objects to parameter `collect`. For instance, to collect moments:
```
from particles.collectors import Moments
alg_with_mom = particles.SMC(fk=fk_guided, N=100, collect=[Moments()])
alg_with_mom.run()
plt.plot([m['mean'] for m in alg_with_mom.summaries.moments],
label='filtered mean')
plt.plot(y, label='data')
plt.legend()
```
## Off-line smoothing
Off-line smoothing is the task of approximating, at some final time $T$ (i.e. when we have stopped acquiring data), the distribution of all the states, $X_{0:T}$, given the full data, $Y_{0:T}$.
To run a particular off-line smoothing algorithm, one must first run a particle filter, and save its **history**:
```
alg = particles.SMC(fk=fk_guided, N=100, store_history=True)
alg.run()
```
Now `alg` has a `hist` attribute, which is a `ParticleHistory` object. Basically, `alg.hist` recorded, at each time $t$:
* the N particles $X_t^n$
* their weights $W_t^n$
* the N ancestor variables
Smoothing algorithms are implemented as methods of class `ParticleHistory`. For instance, the FFBS (forward filtering backward sampling) algorithm, which samples complete smoothing trajectories, may be called as follows:
```
trajectories = alg.hist.backward_sampling(5, linear_cost=False)
plt.plot(trajectories);
```
The output of `backward_sampling` is a list of 100 arrays: `trajectories[t][m]` is the $t$-component of trajectory $m$. (If you want to turn it into a numpy array, simply do: `np.array(trajectories)`.)
Option `linear_cost` determines whether we use the standard, $O(N^2)$ version of FFBS (where generating a single trajectory costs $O(N)$), or the $O(N)$ version which relies on rejection. The latter algorithm requires us to specify an upper bound for the (log of the) transition density of $X_t | X_{t-1}$; this may be done by defining a method `upper_bound_log_pt(self, t)` in the considered state-space model, as below.
```
class ThetaLogistic_with_upper_bound(ThetaLogistic_with_prop):
def upper_bound_log_pt(self, t):
return -np.log(np.sqrt(2 * np.pi) * self.sigmaX)
my_ssm = ThetaLogistic_with_upper_bound()
alg = particles.SMC(fk=ssm.GuidedPF(ssm=my_ssm, data=y),
N=100, store_history=True)
alg.run()
(more_trajectories, acc_rate) = alg.hist.backward_sampling(10, linear_cost=True,
return_ar=True)
print('acceptance rate was %1.3f' % acc_rate)
plt.plot(more_trajectories);
```
Two-filter smoothing is also available. The difficulty with two-filter smoothing is that it requires to design an "information filter", that is a particle filter that computes recursively (backwards) the likelihood of the model. Since this is not trivial for the model considered here, we refer to Section 11.6 of the book and the documentation of package `smoothing`.
# Calculus
:label:`sec_calculus`
Finding the area of a polygon had remained mysterious
until at least 2,500 years ago, when ancient Greeks divided a polygon into triangles and summed their areas.
To find the area of curved shapes, such as a circle,
ancient Greeks inscribed polygons in such shapes.
As shown in :numref:`fig_circle_area`,
an inscribed polygon with more sides of equal length better approximates
the circle. This process is also known as the *method of exhaustion*.

:label:`fig_circle_area`
In fact, the method of exhaustion is where *integral calculus* (will be described in :numref:`sec_integral_calculus`) originates from.
More than 2,000 years later,
the other branch of calculus, *differential calculus*,
was invented.
Among the most critical applications of differential calculus,
optimization problems consider how to do something *the best*.
As discussed in :numref:`subsec_norms_and_objectives`,
such problems are ubiquitous in deep learning.
In deep learning, we *train* models, updating them successively
so that they get better and better as they see more and more data.
Usually, getting better means minimizing a *loss function*,
a score that answers the question "how *bad* is our model?"
This question is more subtle than it appears.
Ultimately, what we really care about
is producing a model that performs well on data
that we have never seen before.
But we can only fit the model to data that we can actually see.
Thus we can decompose the task of fitting models into two key concerns:
(i) *optimization*: the process of fitting our models to observed data;
(ii) *generalization*: the mathematical principles and practitioners' wisdom
that guide us as to how to produce models whose validity extends
beyond the exact set of data examples used to train them.
To help you understand
optimization problems and methods in later chapters,
here we give a very brief primer on differential calculus
that is commonly used in deep learning.
## Derivatives and Differentiation
We begin by addressing the calculation of derivatives,
a crucial step in nearly all deep learning optimization algorithms.
In deep learning, we typically choose loss functions
that are differentiable with respect to our model's parameters.
Put simply, this means that for each parameter,
we can determine how rapidly the loss would increase or decrease,
were we to *increase* or *decrease* that parameter
by an infinitesimally small amount.
Suppose that we have a function $f: \mathbb{R} \rightarrow \mathbb{R}$,
whose input and output are both scalars.
[**The *derivative* of $f$ is defined as**]
(**$$f'(x) = \lim_{h \rightarrow 0} \frac{f(x+h) - f(x)}{h},$$**)
:eqlabel:`eq_derivative`
if this limit exists.
If $f'(a)$ exists,
$f$ is said to be *differentiable* at $a$.
If $f$ is differentiable at every number in an interval, then the function is differentiable on that interval.
We can interpret the derivative $f'(x)$ in :eqref:`eq_derivative`
as the *instantaneous* rate of change of $f(x)$
with respect to $x$.
The so-called instantaneous rate of change is based on
the variation $h$ in $x$, which approaches $0$.
To illustrate derivatives,
let us experiment with an example.
(**Define $u = f(x) = 3x^2-4x$.**)
```
%matplotlib inline
import numpy as np
from IPython import display
from d2l import tensorflow as d2l
def f(x):
return 3 * x ** 2 - 4 * x
```
[**By setting $x=1$ and letting $h$ approach $0$,
the numerical result of $\frac{f(x+h) - f(x)}{h}$**]
in :eqref:`eq_derivative`
(**approaches $2$.**)
Though this experiment is not a mathematical proof,
we will see later that the derivative $u'$ is $2$ when $x=1$.
```
def numerical_lim(f, x, h):
return (f(x + h) - f(x)) / h
h = 0.1
for i in range(5):
print(f'h={h:.5f}, numerical limit={numerical_lim(f, 1, h):.5f}')
h *= 0.1
```
Let us familiarize ourselves with a few equivalent notations for derivatives.
Given $y = f(x)$, where $x$ and $y$ are the independent variable and the dependent variable of the function $f$, respectively, the following expressions are equivalent:
$$f'(x) = y' = \frac{dy}{dx} = \frac{df}{dx} = \frac{d}{dx} f(x) = Df(x) = D_x f(x),$$
where the symbols $\frac{d}{dx}$ and $D$ are *differentiation operators* that indicate the operation of *differentiation*.
We can use the following rules to differentiate common functions:
* $DC = 0$ ($C$ is a constant),
* $Dx^n = nx^{n-1}$ (the *power rule*, $n$ is any real number),
* $De^x = e^x$,
* $D\ln(x) = 1/x.$
To differentiate a function that is formed from a few simpler functions such as the above common functions,
the following rules can be handy.
Suppose that functions $f$ and $g$ are both differentiable and $C$ is a constant.
Then we have the *constant multiple rule*
$$\frac{d}{dx} [Cf(x)] = C \frac{d}{dx} f(x),$$
the *sum rule*
$$\frac{d}{dx} [f(x) + g(x)] = \frac{d}{dx} f(x) + \frac{d}{dx} g(x),$$
the *product rule*
$$\frac{d}{dx} [f(x)g(x)] = f(x) \frac{d}{dx} [g(x)] + g(x) \frac{d}{dx} [f(x)],$$
and the *quotient rule*
$$\frac{d}{dx} \left[\frac{f(x)}{g(x)}\right] = \frac{g(x) \frac{d}{dx} [f(x)] - f(x) \frac{d}{dx} [g(x)]}{[g(x)]^2}.$$
Now we can apply a few of the above rules to find
$u' = f'(x) = 3 \frac{d}{dx} x^2-4\frac{d}{dx}x = 6x-4$.
Thus, by setting $x = 1$, we have $u' = 2$:
this is supported by our earlier experiment in this section
where the numerical result approaches $2$.
This derivative is also the slope of the tangent line
to the curve $u = f(x)$ when $x = 1$.
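As a quick sanity check (our addition, reusing the `f` and `numerical_lim` functions defined above; the sample points and step size are arbitrary choices), we can compare the closed-form derivative $6x - 4$ with the finite-difference estimate at a few points:
```
# compare the closed-form derivative 6x - 4 with a finite-difference estimate
for x0 in (0.5, 1.0, 2.0):
    analytic = 6 * x0 - 4
    numeric = numerical_lim(f, x0, 1e-6)
    print(f'x={x0}: analytic={analytic:.5f}, numeric={numeric:.5f}')
```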
[**To visualize such an interpretation of derivatives,
we will use `matplotlib`,**] a popular plotting library in Python.
To configure properties of the figures produced by `matplotlib`,
we need to define a few functions.
In the following,
the `use_svg_display` function tells `matplotlib` to output figures in the svg format for sharper images.
Note that the comment `#@save` is a special mark where the following function,
class, or statements are saved in the `d2l` package
so later they can be directly invoked (e.g., `d2l.use_svg_display()`) without being redefined.
```
def use_svg_display(): #@save
"""Use the svg format to display a plot in Jupyter."""
display.set_matplotlib_formats('svg')
```
We define the `set_figsize` function to specify the figure sizes. Note that here we directly use `d2l.plt` since the import statement `from matplotlib import pyplot as plt` has been marked for being saved in the `d2l` package in the preface.
```
def set_figsize(figsize=(3.5, 2.5)): #@save
"""Set the figure size for matplotlib."""
use_svg_display()
d2l.plt.rcParams['figure.figsize'] = figsize
```
The following `set_axes` function sets properties of axes of figures produced by `matplotlib`.
```
#@save
def set_axes(axes, xlabel, ylabel, xlim, ylim, xscale, yscale, legend):
"""Set the axes for matplotlib."""
axes.set_xlabel(xlabel)
axes.set_ylabel(ylabel)
axes.set_xscale(xscale)
axes.set_yscale(yscale)
axes.set_xlim(xlim)
axes.set_ylim(ylim)
if legend:
axes.legend(legend)
axes.grid()
```
With these three functions for figure configurations,
we define the `plot` function
to plot multiple curves succinctly
since we will need to visualize many curves throughout the book.
```
#@save
def plot(X, Y=None, xlabel=None, ylabel=None, legend=None, xlim=None,
ylim=None, xscale='linear', yscale='linear',
fmts=('-', 'm--', 'g-.', 'r:'), figsize=(3.5, 2.5), axes=None):
"""Plot data points."""
if legend is None:
legend = []
set_figsize(figsize)
axes = axes if axes else d2l.plt.gca()
# Return True if `X` (tensor or list) has 1 axis
def has_one_axis(X):
return (hasattr(X, "ndim") and X.ndim == 1 or isinstance(X, list)
and not hasattr(X[0], "__len__"))
if has_one_axis(X):
X = [X]
if Y is None:
X, Y = [[]] * len(X), X
elif has_one_axis(Y):
Y = [Y]
if len(X) != len(Y):
X = X * len(Y)
axes.cla()
for x, y, fmt in zip(X, Y, fmts):
if len(x):
axes.plot(x, y, fmt)
else:
axes.plot(y, fmt)
set_axes(axes, xlabel, ylabel, xlim, ylim, xscale, yscale, legend)
```
Now we can [**plot the function $u = f(x)$ and its tangent line $y = 2x - 3$ at $x=1$**], where the coefficient $2$ is the slope of the tangent line.
```
x = np.arange(0, 3, 0.1)
plot(x, [f(x), 2 * x - 3], 'x', 'f(x)', legend=['f(x)', 'Tangent line (x=1)'])
```
## Partial Derivatives
So far we have dealt with the differentiation of functions of just one variable.
In deep learning, functions often depend on *many* variables.
Thus, we need to extend the ideas of differentiation to these *multivariate* functions.
Let $y = f(x_1, x_2, \ldots, x_n)$ be a function with $n$ variables. The *partial derivative* of $y$ with respect to its $i^\mathrm{th}$ parameter $x_i$ is
$$ \frac{\partial y}{\partial x_i} = \lim_{h \rightarrow 0} \frac{f(x_1, \ldots, x_{i-1}, x_i+h, x_{i+1}, \ldots, x_n) - f(x_1, \ldots, x_i, \ldots, x_n)}{h}.$$
To calculate $\frac{\partial y}{\partial x_i}$, we can simply treat $x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_n$ as constants and calculate the derivative of $y$ with respect to $x_i$.
For notation of partial derivatives, the following are equivalent:
$$\frac{\partial y}{\partial x_i} = \frac{\partial f}{\partial x_i} = f_{x_i} = f_i = D_i f = D_{x_i} f.$$
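As an illustration (this example and the helper `f2` below are our addition, not part of the original text), for $f(x_1, x_2) = x_1^2 x_2$ we have $\frac{\partial f}{\partial x_1} = 2 x_1 x_2$ and $\frac{\partial f}{\partial x_2} = x_1^2$, which we can check with finite differences:
```
# finite-difference check of the partial derivatives of f(x1, x2) = x1**2 * x2
def f2(x1, x2):
    return x1 ** 2 * x2

x1, x2, h = 2.0, 3.0, 1e-6
df_dx1 = (f2(x1 + h, x2) - f2(x1, x2)) / h  # should approach 2 * x1 * x2 = 12
df_dx2 = (f2(x1, x2 + h) - f2(x1, x2)) / h  # should approach x1**2 = 4
print(df_dx1, df_dx2)
```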
## Gradients
:label:`subsec_calculus-grad`
We can concatenate partial derivatives of a multivariate function with respect to all its variables to obtain the *gradient* vector of the function.
Suppose that the input of function $f: \mathbb{R}^n \rightarrow \mathbb{R}$ is an $n$-dimensional vector $\mathbf{x} = [x_1, x_2, \ldots, x_n]^\top$ and the output is a scalar. The gradient of the function $f(\mathbf{x})$ with respect to $\mathbf{x}$ is a vector of $n$ partial derivatives:
$$\nabla_{\mathbf{x}} f(\mathbf{x}) = \bigg[\frac{\partial f(\mathbf{x})}{\partial x_1}, \frac{\partial f(\mathbf{x})}{\partial x_2}, \ldots, \frac{\partial f(\mathbf{x})}{\partial x_n}\bigg]^\top,$$
where $\nabla_{\mathbf{x}} f(\mathbf{x})$ is often replaced by $\nabla f(\mathbf{x})$ when there is no ambiguity.
Let $\mathbf{x}$ be an $n$-dimensional vector, the following rules are often used when differentiating multivariate functions:
* For all $\mathbf{A} \in \mathbb{R}^{m \times n}$, $\nabla_{\mathbf{x}} \mathbf{A} \mathbf{x} = \mathbf{A}^\top$,
* For all $\mathbf{A} \in \mathbb{R}^{n \times m}$, $\nabla_{\mathbf{x}} \mathbf{x}^\top \mathbf{A} = \mathbf{A}$,
* For all $\mathbf{A} \in \mathbb{R}^{n \times n}$, $\nabla_{\mathbf{x}} \mathbf{x}^\top \mathbf{A} \mathbf{x} = (\mathbf{A} + \mathbf{A}^\top)\mathbf{x}$,
* $\nabla_{\mathbf{x}} \|\mathbf{x} \|^2 = \nabla_{\mathbf{x}} \mathbf{x}^\top \mathbf{x} = 2\mathbf{x}$.
Similarly, for any matrix $\mathbf{X}$, we have $\nabla_{\mathbf{X}} \|\mathbf{X} \|_F^2 = 2\mathbf{X}$. As we will see later, gradients are useful for designing optimization algorithms in deep learning.
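As a small numerical check of the third rule above (our addition; the matrix and vector below are arbitrary), we can compare $(\mathbf{A} + \mathbf{A}^\top)\mathbf{x}$ with a finite-difference gradient of $\mathbf{x}^\top \mathbf{A} \mathbf{x}$:
```
# finite-difference check of the rule: the gradient of x^T A x is (A + A^T) x
A = np.array([[1.0, 2.0], [3.0, 4.0]])
x_vec = np.array([0.5, -1.0])
quad = lambda v: v @ A @ v  # the scalar-valued function x^T A x
analytic = (A + A.T) @ x_vec
h = 1e-6
numeric = np.array([(quad(x_vec + h * e) - quad(x_vec)) / h for e in np.eye(2)])
print(analytic, numeric)
```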
## Chain Rule
However, such gradients can be hard to find.
This is because multivariate functions in deep learning are often *composite*,
so we may not apply any of the aforementioned rules to differentiate these functions.
Fortunately, the *chain rule* enables us to differentiate composite functions.
Let us first consider functions of a single variable.
Suppose that functions $y=f(u)$ and $u=g(x)$ are both differentiable, then the chain rule states that
$$\frac{dy}{dx} = \frac{dy}{du} \frac{du}{dx}.$$
Now let us turn our attention to a more general scenario
where functions have an arbitrary number of variables.
Suppose that the differentiable function $y$ has variables
$u_1, u_2, \ldots, u_m$, where each differentiable function $u_i$
has variables $x_1, x_2, \ldots, x_n$.
Note that $y$ is a function of $x_1, x_2, \ldots, x_n$.
Then the chain rule gives
$$\frac{dy}{dx_i} = \frac{dy}{du_1} \frac{du_1}{dx_i} + \frac{dy}{du_2} \frac{du_2}{dx_i} + \cdots + \frac{dy}{du_m} \frac{du_m}{dx_i}$$
for any $i = 1, 2, \ldots, n$.
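For a concrete single-variable check (our addition), take $y = \exp(u)$ with $u = 3x^2 - 4x$; the chain rule gives $\frac{dy}{dx} = \exp(u)(6x - 4)$, which we can compare against a finite difference:
```
# finite-difference check of the chain rule for y = exp(u), u = 3x^2 - 4x
x0, h = 1.5, 1e-6
u = lambda x: 3 * x ** 2 - 4 * x
y = lambda x: np.exp(u(x))
chain_rule = np.exp(u(x0)) * (6 * x0 - 4)
numeric = (y(x0 + h) - y(x0)) / h
print(chain_rule, numeric)
```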
## Summary
* Differential calculus and integral calculus are two branches of calculus, where the former can be applied to the ubiquitous optimization problems in deep learning.
* A derivative can be interpreted as the instantaneous rate of change of a function with respect to its variable. It is also the slope of the tangent line to the curve of the function.
* A gradient is a vector whose components are the partial derivatives of a multivariate function with respect to all its variables.
* The chain rule enables us to differentiate composite functions.
## Exercises
1. Plot the function $y = f(x) = x^3 - \frac{1}{x}$ and its tangent line when $x = 1$.
1. Find the gradient of the function $f(\mathbf{x}) = 3x_1^2 + 5e^{x_2}$.
1. What is the gradient of the function $f(\mathbf{x}) = \|\mathbf{x}\|_2$?
1. Can you write out the chain rule for the case where $u = f(x, y, z)$ and $x = x(a, b)$, $y = y(a, b)$, and $z = z(a, b)$?
[Discussions](https://discuss.d2l.ai/t/197)
# BRONZE Tier 5 Problem Solving
### 2021.12.23
> ### Problem `1000`
```
lista=['3454','342','223']
lista=list(map(int,lista))
lista
[a,b] = map(int, input().split(' '))
print(a+b)
```
- input() alone stores the input as a str, e.g. 'a b'
- So split(' ') or split() is used to turn it into a list.
- Then the map function converts the list elements to int.
map() is used here because both numbers have to be read from the first line at once.
Compare with Problem 2338.
> ### Problem `1001`
```
(a,b) = map(int, input().split(' '))
print(a-b)
```
> ### Problem `1271`
```
(a,b) = map(int, input().split(' '))
print(a//b)
print(a%b)
```
---
### 2021.12.24
> Problem `1550`: still don't get hexadecimal
> Problem `2338`
```
a = int(input())
b = int(input())
print(a+b)
print(a-b)
print(a*b)
```
input() alone stores the value as a str
> Problem `2475`
```
(a,b,c,d,e) = map(int, input().split(' '))
print((a**2+b**2+c**2+d**2+e**2)%10)
res = 0
for n in list(map(int, input().split())):
res += n**2
print(res%10)
print(sum([n**2 for n in map(int, input().split())]) % 10)
```
> Problem `2557`
I thought the answer would be this..
```
if (input()=="") : print('Hello World!')
```
But apparently it is just this.
```
print('Hello World!')
```
> Problem `2558`
```
a = int(input())
b = int(input())
print(a+b)
```
> Problem `2845`
```
[a,b]=map(int,input().split())
[c,d,e,f,g]=map(int,input().split())
print(c-a*b,d-a*b,e-a*b,f-a*b,g-a*b)
a, b = map(int, input().split())
people = list(map(int, input().split()))
tot = a * b
for i in people:
print(i - tot, end=' ')
```
Looking at the cell above, the last line runs a for loop; since it executes once per element, each value would normally be printed on its own line, one after another. However, print's end option, which appends '\n' by default, can be given an empty string (or any other string) so that the next output continues right after the current one instead of moving down a line. In other words, a '\n' is normally appended after every print call, and passing an empty string to end removes it.
* The print statement has two such options
> sep='' -> sets what goes between the items of a single print call; the default is a space. Passing '\n' here prints each item on a new line.
> end='' -> sets what is appended after the output is finished; the default is a newline.
```
print('a','c','a','s',sep='\n')
```
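For reference, a one-line check of end (this demo is an addition to the original log):
```
print('a', end='')   # the next print continues on the same line
print('b')           # default end='\n' moves to a new line afterwards
```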
> Problem `2914`
```
a,b=map(int,input().split())
print((a*(b-1))+1)
```
> Problem `3003`
```
a,b,c,d,e,f=map(int,input().split())
print(1-a,1-b,2-c,2-d,2-e,8-f)
```
> Problem `3046`
```
a,b=map(int,input().split())
print(-(a-2*b))
```
> Problem `5337`
```
print("""\
. . .
| | _ | _. _ ._ _ _
|/\|(/.|(_.(_)[ | )(/.
""")
```
> How to break lines in print
- Use a triple-quoted string ("""\~""") or embed \n, e.g. print("asd\nasd\nasd")
- This seems to work only with str
```
print("asd\nasd\nasd")
```
> Problem `5338`
```
print("""\
_.-;;-._
'-..-'| || |
'-..-'|_.-;;-._|
'-..-'| || |
'-..-'|_.-''-._|
""")
```
> Problem `5522`
```
a=int(input())
b=int(input())
c=int(input())
d=int(input())
e=int(input())
print(a+b+c+d+e)
```
> Problem `5554`
```
a=int(input())
b=int(input())
c=int(input())
d=int(input())
print((a+b+c+d)//60)
print((a+b+c+d)%60)
sum = 0
for _ in range(4) :
sum += int(input())
print(sum // 60)
print(sum % 60)
```
> Problem `6749`
```
a=int(input())
b=int(input())
print(b+b-a)
```
> Problem `8393`
```
a=int(input())
print(round((a*(a+1))/2))
round(555.3666,2)
# number of decimal places to round to
```
> Problem `10699`
```
import datetime
print(str(datetime.datetime.now())[:10])
```
> Problem `10962`
```
print(input()+"??!")
```
> Problem `11283`
```
print(ord(input())-44031)
```
> Problem `14652`
```
N, M, K = map(int, input().split())
n = K // M
m = K % M
print(n, m)
```
### 2021.12.25 `MERRY CHRISTMAS`
> Problem `15727`
```
import math
a=int(input())/5
print(math.ceil(a))
```
> Problem `15894`
```
print(4*(int(input())))
```
> Problem `16430`
```
from fractions import Fraction
a,b=map(int,input().split())
c=str(1-Fraction(a,b))
print(c[0],c[2])
```
- Fractions!!
```
str(2-Fraction(2,3))
```
> Problem `17496`
```
a,b,c,d=map(int,input().split())
if (a%b != 0) : print((a//b)*c*d)
elif (a%b == 0): print(((a//b)-1)*c*d)
```
> Problem `20492`
```
a=int(input())
print(int(a*0.78),int(a-a*0.2*0.22))
```
```
%load_ext watermark
%watermark -a 'Sebastian Raschka' -v -p numpy,matplotlib,seaborn
```
# Exploratory Data Analysis
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn.apionly as sns
%matplotlib inline
```
## Histogram
```
# read dataset
df = pd.read_csv('../datasets/winequality/winequality-red.csv',
sep=';')
# create histogram
bin_edges = np.arange(0, df['residual sugar'].max() + 1, 1)
fig = plt.hist(df['residual sugar'], bins=bin_edges)
# add plot labels
plt.xlabel('residual sugar')
plt.ylabel('count')
plt.show()
```
## Scatterplot
```
# create scatterplot
fig = plt.scatter(df['pH'], df['residual sugar'])
# add plot labels
plt.xlabel('pH')
plt.ylabel('residual sugar')
plt.show()
```
## Scatterplot Matrix
```
df.columns
# create scatterplot matrix
fig = sns.pairplot(data=df[['alcohol', 'pH', 'residual sugar', 'quality']],
hue='quality')
plt.show()
```
## Bee Swarm Plot
- useful for small datasets but can be slow on large datasets
```
# create bee swarm plot
sns.swarmplot(x='quality', y='residual sugar',
data=df[df['quality'] < 6])
plt.show()
```
## Empirical Cumulative Distribution Function Plots
```
# sort and normalize data
x = np.sort(df['residual sugar'])
y = np.arange(1, x.shape[0] + 1) / x.shape[0]
# create ECDF plot
plt.plot(x, y, marker='o', linestyle='')
# add plot labels
plt.ylabel('ECDF')
plt.xlabel('residual sugar')
percent_four_or_less = y[x <= 4].max()
print('%.2f percent have 4 or less units residual sugar' %
(percent_four_or_less*100))
eightieth_percentile = x[y <= 0.8].max()
plt.axhline(0.8, color='black', linestyle='--')
plt.axvline(eightieth_percentile, color='black', label='80th percentile')
plt.legend()
plt.show()
```
## Boxplots
- Distribution of data in terms of median and percentiles (median is the 50th percentile)
```
percentiles = np.percentile(df['alcohol'], q=[25, 50, 75])
percentiles
```
manual approach:
```
for p in percentiles:
plt.axhline(p, color='black', linestyle='-')
plt.scatter(np.zeros(df.shape[0]) + 0.5, df['alcohol'])
iqr = percentiles[-1] - percentiles[0]
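# Tukey's rule: the whiskers reach the most extreme data points within 1.5 * IQR of the quartiles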
upper_whisker = min(df['alcohol'].max(), percentiles[-1] + iqr * 1.5)
lower_whisker = max(df['alcohol'].min(), percentiles[0] - iqr * 1.5)
plt.axhline(upper_whisker, color='black', linestyle='--')
plt.axhline(lower_whisker, color='black', linestyle='--')
plt.ylim([8, 16])
plt.ylabel('alcohol')
fig = plt.gca()
fig.axes.get_xaxis().set_ticks([])
plt.show()
```
using matplotlib.pyplot.boxplot:
```
plt.boxplot(df['alcohol'])
plt.ylim([8, 16])
plt.ylabel('alcohol')
fig = plt.gca()
fig.axes.get_xaxis().set_ticks([])
plt.show()
```
## Violin Plots
```
plt.violinplot(df['alcohol'], [0],
points=100,
bw_method='scott',
showmeans=False,
showextrema=True,
showmedians=True)
plt.ylim([8, 16])
plt.ylabel('alcohol')
fig = plt.gca()
fig.axes.get_xaxis().set_ticks([])
plt.show()
```
# Example to compute photon-ALP conversions in a constant magnetic field and in the GMF
```
from gammaALPs import Source, ALP, ModuleList
from gammaALPs.base.transfer import w_pl_e9, EminGeV, EmaxGeV
import numpy as np
import matplotlib.pyplot as plt
from astropy import constants as c
from glob import glob
%matplotlib inline
m, g = 10., 2.
alp = ALP(m,g)
EGeV = np.logspace(0.,8.,1000)
pin = np.diag((1.,0.,0.))
punpol = np.diag((1.,1.,0.)) * 0.5
px_in = np.diag((1.,0.,0.))
py_in = np.diag((0.,1.,0.))
pa_in = np.diag((0.,0.,1.))
src = Source(z = 0., l = 0., b = 0., ra = None, dec = None)
m = ModuleList(alp, src, pin = pin, EGeV = EGeV, seed = 0)
m.add_propagation(environ = 'ICMCell', order = 0,
B0 = 1.,
L0 = 10.,
nsim = 1,
n0 = 1e-3,
r_abell = 11.,
beta = 0.,
eta = 0.)
m.modules[0].psin = np.ones_like(m.modules[0].psin) * np.pi / 2.
px,py,pa = m.run(multiprocess=1)
plt.semilogx(EGeV, pa[0], lw = 2)
plt.axvline(EminGeV(m_neV=m.alp.m, g11 = m.alp.g, BmuG=m.modules[0].B, n_cm3=m.modules[0].nel),
lw = 1., ls = '--', color = 'k')
plt.axvline(EmaxGeV(g11=m.alp.g, BmuG=m.modules[0].B/2),
lw = 1., ls = '--', color = 'k')
plt.xlabel("Energy (GeV)")
plt.ylabel("$P_{a\gamma}$")
plt.savefig("one_domain.pdf")
```
### Now do the GMF
```
EGeV = np.logspace(-1.,4.,101)
#src = Source(z = 0.017559, ra = '03h19m48.1s', dec = '+41d30m42s')
src = Source(z = 0.017559, l = 30., b = 0.)
m = ModuleList(alp, src, pin = pa_in, EGeV = EGeV, seed = 0)
m.add_propagation("GMF",0, model = 'jansson12', model_sum = 'ASS')
px,py,pa = m.run(multiprocess=1)
prx = m.modules[0].show_conv_prob_vs_r(pa_in, px_in)
pry = m.modules[0].show_conv_prob_vs_r(pa_in, py_in)
pra = m.modules[0].show_conv_prob_vs_r(pa_in, pa_in)
idx = 40
print(EGeV[idx])
ax = plt.subplot(111)
ax.semilogy(m.modules[0]._r, (prx[:,idx] + pry[:,idx])[::-1], drawstyle = 'steps')
plt.ylabel("$P_{a\gamma}$", color = plt.cm.tab10(0.))
plt.xlabel("$r$ (kpc)")
ax.set_ylim(1e-6,1e-1)
ax2 = ax.twinx()
ax2.semilogy(m.modules[0]._r, m.modules[0].B[::-1], color = plt.cm.tab10(0.3), drawstyle = 'steps')
plt.ylabel("$B_{\perp}$ ($\mu$G)", color = plt.cm.tab10(0.3))
plt.savefig("pag_gmf_vs_r.pdf")
plt.semilogx(EGeV, px[0] + py[0], lw = 2)
plt.xlabel("Energy (GeV)")
plt.ylabel("$P_{a\gamma}$")
plt.savefig("pag_gmf.pdf")
```
### And the Perseus cluster
```
EGeV = np.logspace(-1.,4.,101)
src = Source(z = 0.017559, ra = '03h19m48.1s', dec = '+41d30m42s')
m = ModuleList(alp, src, pin = punpol, EGeV = EGeV, seed = 0)
m.add_propagation("ICMGaussTurb",
0, # position of module counted from the source.
nsim = 10, # number of random B-field realizations
B0 = 10., # rms of B field
n0 = 39., # normalization of electron density
n2 = 4.05, # second normalization of electron density, see Churazov et al. 2003, Eq. 4
r_abell = 500., # extension of the cluster
r_core = 80., # electron density parameter, see Churazov et al. 2003, Eq. 4
r_core2 = 280., # electron density parameter, see Churazov et al. 2003, Eq. 4
beta = 1.2, # electron density parameter, see Churazov et al. 2003, Eq. 4
beta2= 0.58, # electron density parameter, see Churazov et al. 2003, Eq. 4
eta = 0.5, # scaling of B-field with electron denstiy
kL = 0.18, # maximum turbulence scale in kpc^-1, taken from A2199 cool-core cluster, see Vacca et al. 2012
kH = 9., # minimum turbulence scale, taken from A2199 cool-core cluster, see Vacca et al. 2012
q = -2.80, # turbulence spectral index, taken from A2199 cool-core cluster, see Vacca et al. 2012
thinning = 4 # thin out distance array. Can lead to different results!
)
px,py,pa = m.run(multiprocess=4)
print(px.shape, m.modules[0]._r.shape)
plt.semilogx(EGeV, px[0] + py[0], lw = 2)
plt.xlabel("Energy (GeV)")
plt.ylabel("$P_{\gamma\gamma}$")
plt.savefig("pgg_perseus_one_real.pdf")
for i in range(px.shape[0]):
plt.semilogx(EGeV, px[i] + py[i], lw = 1 if i else 2, alpha = 0.3 if i else 1., color = plt.cm.tab10(0.)
)
plt.xlabel("Energy (GeV)")
plt.ylabel("$P_{\gamma\gamma}$")
plt.savefig("pgg_perseus_ten_real.pdf")
prx = m.modules[0].show_conv_prob_vs_r(punpol, pa_in)
prx.shape
idx = 40
print(EGeV[idx])
ax = plt.subplot(111)
ax.semilogy(m.modules[0].r, (prx[:,idx]), drawstyle = 'steps')
plt.ylabel("$P_{a\gamma}$", color = plt.cm.tab10(0.))
plt.xlabel("$r$ (kpc)")
ax2 = ax.twinx()
ax2.plot(m.modules[0].r, m.modules[0].B, color = plt.cm.tab10(0.3), drawstyle = 'steps', lw = 0.5)
plt.ylabel("$B_{\perp}$ ($\mu$G)", color = plt.cm.tab10(0.3))
plt.savefig("pa_perseus_vs_r.pdf")
```
```
import pandas as pd
import numpy as np
df1=pd.read_csv('F:/0Sem 7/ML Lab/amazon food review dataset/Reviews.csv')
df1.head()
score=df1.values[:,6]
text=df1.values[:,9]
reviews=np.vstack((score,text)).T
print(score.shape, text.shape, reviews.shape)
p=0
n=0
for i in range(reviews.shape[0]):
if reviews[i,0] > 3:
reviews[i,0]=0 #positive review
p=p+1
else:
reviews[i,0]=1 #negative review
n=n+1
reviews = reviews[reviews[:,0].argsort()] #sort by 1st column
train=[]
for i in range(5000):
train.append(reviews[i])
for i in range(443777,443777+5000):
train.append(reviews[i])
train=np.asarray(train)
train.shape
my_reviews1=np.array([0,'This is a very good product. I am very happy with this item.'])
my_reviews2=np.array([1,'The product is very bad. I am very unsatisfied with the appearance.'])
my_reviews3=np.array([0,'It was one if the best items i have purchased. Very good.'])
my_reviews4=np.array([0,'All members of my family enjoyed the item. It is well thought.'])
my_reviews5=np.array([1,'Extremely poor quality. I hated the item and so did my brothers.'])
#train=np.vstack((train,my_reviews1))
#train=np.vstack((train,my_reviews2))
#train=np.vstack((train,my_reviews3))
#train=np.vstack((train,my_reviews4))
#train=np.vstack((train,my_reviews5))
import random as r
test=[]
for i in range(2000):
    index=r.randint(0, reviews.shape[0]-1)  # randint is inclusive on both ends
test.append(reviews[index])
test=np.asarray(test)
test.shape
#test=np.vstack((test,my_reviews1))
#test=np.vstack((test,my_reviews2))
#test=np.vstack((test,my_reviews3))
#test=np.vstack((test,my_reviews4))
#test=np.vstack((test,my_reviews5))
train_all_words=[]
for i in range(train.shape[0]):
train_all_words.append(train[i,1].split())
train_all_words = [item for sublist in train_all_words for item in sublist]
test_all_words=[]
for i in range(test.shape[0]):
test_all_words.append(test[i,1].split())
test_all_words = [item for sublist in test_all_words for item in sublist]
from collections import Counter
def common_words(words, number_of_words, reverse=False):
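    # by default (reverse=False) this returns the `number_of_words` least frequent words;
    # with reverse=True it returns the most frequent ones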
counter = Counter(words)
return sorted(counter, key = counter.get, reverse=reverse)[:number_of_words]
train_least_common=common_words(train_all_words,200)
train_most_common=common_words(train_all_words,200,reverse=True)
test_least_common=common_words(test_all_words,200)
test_most_common=common_words(test_all_words,200,reverse=True)
for i in range(train.shape[0]):
train[i,1]=train[i,1].split()
for i in range(test.shape[0]):
test[i,1]=test[i,1].split()
for i in range(train.shape[0]):
for item in train_most_common:
if item in train[i,1]:
train[i,1].remove(item)
for item in train_least_common:
if item in train[i,1]:
train[i,1].remove(item)
for i in range(test.shape[0]):
for item in test_most_common:
if item in test[i,1]:
test[i,1].remove(item)
for item in test_least_common:
if item in test[i,1]:
test[i,1].remove(item)
for i in range(train.shape[0]):
train[i,1]=" ".join(train[i,1])
for i in range(test.shape[0]):
test[i,1]=" ".join(test[i,1])
train_docs=[]
test_docs=[]
train_label=[]
test_label=[]
for i in range(train.shape[0]):
train_docs.append(train[i,1])
train_label.append(train[i,0])
for i in range(test.shape[0]):
test_docs.append(test[i,1])
test_label.append(test[i,0])
from numpy import array
from numpy import asarray
from numpy import zeros
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers import Embedding
# prepare tokenizer
t = Tokenizer()
t.fit_on_texts(train_docs)
vocab_size = len(t.word_index) + 1
#integer encode the documents
encoded_docs = t.texts_to_sequences(train_docs)
#print(encoded_docs)
# pad documents to a max length of 100 words
max_length = 100
padded_docs = pad_sequences(encoded_docs, maxlen=max_length, padding='post')
#print(padded_docs)
#load the whole embedding into memory
embeddings_index = dict()
f = open('F:/0Sem 7/ML Lab/glove/glove.6B.100d.txt',encoding='utf8')
for line in f:
values = line.split()
word = values[0]
coefs = asarray(values[1:], dtype='float32')
embeddings_index[word] = coefs
f.close()
#print('Loaded %s word vectors.' % len(embeddings_index))
#create a weight matrix for words in training docs
embedding_matrix = zeros((vocab_size, 100))
for word, i in t.word_index.items():
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
embedding_matrix[i] = embedding_vector
embedding_matrix.shape
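# Note: row i of embedding_matrix holds the 100-d GloVe vector for the word with
# tokenizer index i; words without a pre-trained vector keep their all-zero row.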
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
#define model
model = Sequential()
e = tf.keras.layers.Embedding(vocab_size, 100, weights=[embedding_matrix], input_length=100, trainable=False)
model.add(e)
model.add(layers.Conv1D(32,4, activation='relu'))
model.add(layers.Dropout(rate=0.8))
model.add(layers.MaxPooling1D(pool_size=2))
model.add(layers.LSTM(64, activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(128, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# summarize the model
print(model.summary())
for layer in model.layers:
print(layer.output_shape)
train_label=np.asarray(train_label)
train_label.shape
padded_docs.shape
# integer encode the test documents with the tokenizer fitted on the training docs,
# so that word indices are consistent with the embedding matrix
encoded_docs1 = t.texts_to_sequences(test_docs)
# pad test documents to the same max length of 100 words
max_length1 = 100
padded_docs1 = pad_sequences(encoded_docs1, maxlen=max_length1, padding='post')
test_label=np.asarray(test_label)
#print(padded_docs)
# fit the model
history=model.fit(padded_docs, train_label, validation_data=(padded_docs1,test_label), epochs=30, verbose=1)
"2nd LSTM Model"
#define model
model1 = Sequential()
e = tf.keras.layers.Embedding(vocab_size, 100, weights=[embedding_matrix], input_length=100, trainable=False)
model1.add(e)
model1.add(layers.Conv1D(32,4, activation='relu'))
model1.add(layers.Dropout(rate=0.8))
model1.add(layers.MaxPooling1D(pool_size=2))
model1.add(layers.LSTM(64, activation='tanh', return_sequences=True))
model1.add(layers.LSTM(128, activation='tanh', return_sequences=False))
model1.add(layers.Flatten())
model1.add(layers.Dense(256, activation ='tanh'))
model1.add(layers.Dense(1, activation='sigmoid'))
# compile the model
model1.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# summarize the model
print(model1.summary())
# fit the model
history1=model1.fit(padded_docs, train_label, validation_data=(padded_docs1,test_label), epochs=30, verbose=1)
"3rd LSTM Model"
#define model
model2 = Sequential()
e = tf.keras.layers.Embedding(vocab_size, 100, weights=[embedding_matrix], input_length=100, trainable=False)
model2.add(e)
model2.add(layers.Conv1D(32,4, activation='relu'))
model2.add(layers.Dropout(rate=0.8))
model2.add(layers.MaxPooling1D(pool_size=2))
model2.add(layers.LSTM(64, activation='relu', return_sequences=True))
model2.add(layers.LSTM(128, activation='relu', return_sequences=False))
model2.add(layers.Flatten())
model2.add(layers.Dense(256, activation='relu'))
model2.add(layers.Dense(1, activation='sigmoid'))
# compile the model
model2.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# summarize the model
print(model2.summary())
# fit the model
history2=model2.fit(padded_docs, train_label, validation_data=(padded_docs1,test_label), epochs=30, verbose=1)
#define model
model3 = Sequential()
e = tf.keras.layers.Embedding(vocab_size, 100, weights=[embedding_matrix], input_length=100, trainable=False)
model3.add(e)
model3.add(layers.Conv1D(32,4, activation='relu'))
model3.add(layers.Dropout(rate=0.8))
model3.add(layers.MaxPooling1D(pool_size=2))
model3.add(layers.GRU(64, activation='relu', return_sequences=True))
model3.add(layers.GRU(128, activation='relu', return_sequences=False))
model3.add(layers.Flatten())
model3.add(layers.Dense(256, activation='relu'))
model3.add(layers.Dense(1, activation='sigmoid'))
# compile the model
model3.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# summarize the model
print(model3.summary())
# fit the model
history3=model3.fit(padded_docs, train_label, validation_data=(padded_docs1,test_label), epochs=30, verbose=1)
#define model
model4 = Sequential()
e = tf.keras.layers.Embedding(vocab_size, 100, weights=[embedding_matrix], input_length=100, trainable=False)
model4.add(e)
model4.add(layers.Conv1D(32,4, activation='relu'))
model4.add(layers.Dropout(rate=0.8))
model4.add(layers.MaxPooling1D(pool_size=2))
model4.add(layers.Bidirectional(layers.LSTM(64, activation='relu', return_sequences=True)))
model4.add(layers.Bidirectional(layers.LSTM(128, activation='relu', return_sequences=False)))
model4.add(layers.Flatten())
model4.add(layers.Dense(256,activation='relu'))
model4.add(layers.Dense(1, activation='sigmoid'))
# compile the model
model4.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# summarize the model
print(model4.summary())
# fit the model
history4=model4.fit(padded_docs, train_label, validation_data=(padded_docs1,test_label), epochs=30, verbose=1)
```
# Objectives
## Overview
One of the key choices to make when training an ML model is what metric to choose by which to measure the efficacy of the model at learning the signal. Such metrics are useful for comparing how well the trained models generalize to new similar data.
This choice of metric is a key component of AutoML because it defines the cost function the AutoML search will seek to optimize. In EvalML, these metrics are called **objectives**. AutoML will seek to minimize (or maximize) the objective score as it explores more pipelines and parameters and will use the feedback from scoring pipelines to tune the available hyperparameters and continue the search. Therefore, it is critical to have an objective function that represents how the model will be applied in the intended domain of use.
EvalML supports a variety of objectives from traditional supervised ML including [mean squared error](https://en.wikipedia.org/wiki/Mean_squared_error) for regression problems and [cross entropy](https://en.wikipedia.org/wiki/Cross_entropy) or [area under the ROC curve](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) for classification problems. EvalML also allows the user to define a custom objective using their domain expertise, so that AutoML can search for models which provide the most value for the user's problem.
## Core Objectives
Use the `get_core_objectives` method to get a list of which objectives are included with EvalML for each problem type:
```
from evalml.objectives import get_core_objectives
from evalml.problem_types import ProblemTypes
for objective in get_core_objectives(ProblemTypes.BINARY):
print(objective.name)
```
EvalML defines a base objective class for each problem type: `RegressionObjective`, `BinaryClassificationObjective` and `MulticlassClassificationObjective`. All EvalML objectives are a subclass of one of these.
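For instance (this snippet is our addition), we can check the subclass relationship directly:
```
from evalml.objectives import F1
from evalml.objectives.binary_classification_objective import BinaryClassificationObjective
print(issubclass(F1, BinaryClassificationObjective))
```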
### Binary Classification Objectives and Thresholds
All binary classification objectives have a `threshold` property. Some binary classification objectives like log loss and AUC are unaffected by the choice of binary classification threshold, because they score based on predicted probabilities or examine a range of threshold values. These metrics are defined with `score_needs_proba` set to True. For all other binary classification objectives, we can compute the optimal binary classification threshold from the predicted probabilities and the target.
```
from evalml.pipelines import BinaryClassificationPipeline
from evalml.demos import load_fraud
from evalml.objectives import F1
X, y = load_fraud(n_rows=100)
X.ww.init(logical_types={"provider": "Categorical", "region": "Categorical"})
objective = F1()
pipeline = BinaryClassificationPipeline(component_graph=['Simple Imputer', 'DateTime Featurization Component', 'One Hot Encoder', 'Random Forest Classifier'])
pipeline.fit(X, y)
print(pipeline.threshold)
print(pipeline.score(X, y, objectives=[objective]))
y_pred_proba = pipeline.predict_proba(X)[True]
pipeline.threshold = objective.optimize_threshold(y_pred_proba, y)
print(pipeline.threshold)
print(pipeline.score(X, y, objectives=[objective]))
```
## Custom Objectives
Often times, the objective function is very specific to the use-case or business problem. To get the right objective to optimize requires thinking through the decisions or actions that will be taken using the model and assigning a cost/benefit to doing that correctly or incorrectly based on known outcomes in the training data.
Once you have determined the objective for your business, you can provide that to EvalML to optimize by defining a custom objective function.
### Defining a Custom Objective Function
To create a custom objective class, we must define several elements:
* `name`: The printable name of this objective.
* `objective_function`: This function takes the predictions, true labels, and an optional reference to the inputs, and returns a score of how well the model performed.
* `greater_is_better`: `True` if a higher `objective_function` value represents a better solution, and otherwise `False`.
* `score_needs_proba`: Only for classification objectives. `True` if the objective is intended to function with predicted probabilities as opposed to predicted values (example: cross entropy for classifiers).
* `decision_function`: Only for binary classification objectives. This function takes predicted probabilities that were output from the model and a binary classification threshold, and returns predicted values.
* `perfect_score`: The score achieved by a perfect model on this objective.
### Example: Fraud Detection
To give a concrete example, let's look at how the [fraud detection](../demos/fraud.ipynb) objective function is built.
```
from evalml.objectives.binary_classification_objective import BinaryClassificationObjective
import pandas as pd
class FraudCost(BinaryClassificationObjective):
"""Score the percentage of money lost of the total transaction amount process due to fraud"""
name = "Fraud Cost"
greater_is_better = False
score_needs_proba = False
perfect_score = 0.0
def __init__(self, retry_percentage=.5, interchange_fee=.02,
fraud_payout_percentage=1.0, amount_col='amount'):
"""Create instance of FraudCost
Arguments:
retry_percentage (float): What percentage of customers that will retry a transaction if it
is declined. Between 0 and 1. Defaults to .5
interchange_fee (float): How much of each successful transaction you can collect.
Between 0 and 1. Defaults to .02
fraud_payout_percentage (float): Percentage of fraud you will not be able to collect.
Between 0 and 1. Defaults to 1.0
amount_col (str): Name of column in data that contains the amount. Defaults to "amount"
"""
self.retry_percentage = retry_percentage
self.interchange_fee = interchange_fee
self.fraud_payout_percentage = fraud_payout_percentage
self.amount_col = amount_col
def decision_function(self, ypred_proba, threshold=0.0, X=None):
"""Determine if a transaction is fraud given predicted probabilities, threshold, and dataframe with transaction amount
Arguments:
ypred_proba (pd.Series): Predicted probablities
X (pd.DataFrame): Dataframe containing transaction amount
threshold (float): Dollar threshold to determine if transaction is fraud
Returns:
pd.Series: Series of predicted fraud labels using X and threshold
"""
if not isinstance(X, pd.DataFrame):
X = pd.DataFrame(X)
if not isinstance(ypred_proba, pd.Series):
ypred_proba = pd.Series(ypred_proba)
transformed_probs = (ypred_proba.values * X[self.amount_col])
return transformed_probs > threshold
def objective_function(self, y_true, y_predicted, X):
"""Calculate amount lost to fraud per transaction given predictions, true values, and dataframe with transaction amount
Arguments:
y_predicted (pd.Series): predicted fraud labels
y_true (pd.Series): true fraud labels
X (pd.DataFrame): dataframe with transaction amounts
Returns:
float: amount lost to fraud per transaction
"""
if not isinstance(X, pd.DataFrame):
X = pd.DataFrame(X)
if not isinstance(y_predicted, pd.Series):
y_predicted = pd.Series(y_predicted)
if not isinstance(y_true, pd.Series):
y_true = pd.Series(y_true)
# extract transaction using the amount columns in users data
try:
transaction_amount = X[self.amount_col]
except KeyError:
raise ValueError("`{}` is not a valid column in X.".format(self.amount_col))
# amount paid if transaction is fraud
fraud_cost = transaction_amount * self.fraud_payout_percentage
# money made from interchange fees on transaction
interchange_cost = transaction_amount * (1 - self.retry_percentage) * self.interchange_fee
# calculate cost of missing fraudulent transactions
false_negatives = (y_true & ~y_predicted) * fraud_cost
# calculate money lost from fees
false_positives = (~y_true & y_predicted) * interchange_cost
loss = false_negatives.sum() + false_positives.sum()
loss_per_total_processed = loss / transaction_amount.sum()
return loss_per_total_processed
```
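Once defined, the custom objective can be used anywhere a built-in objective is accepted. Below is a minimal sketch that reuses the `pipeline`, `X` and `y` objects fitted in the threshold example above, and assumes the demo data's transaction amounts live in an `amount` column (the default `amount_col`).
```
# Minimal usage sketch for the custom objective defined above
fraud_objective = FraudCost(retry_percentage=.5, interchange_fee=.02,
                            fraud_payout_percentage=1.0, amount_col='amount')

# Tune the pipeline's binary classification threshold against fraud cost
y_pred_proba = pipeline.predict_proba(X)[True]
pipeline.threshold = fraud_objective.optimize_threshold(y_pred_proba, y, X=X)

# Score the pipeline on the custom objective (lower is better, since greater_is_better=False)
print(pipeline.score(X, y, objectives=[fraud_objective]))
```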
# Data Exploration
```
import os
import pandas as pd
import matplotlib.pyplot as plt
os.chdir(r'D:\Data\Projects\Business Analytics\E-Commerce Data')
pd.set_option('display.float_format', lambda x: '%.3f' % x)
from warnings import filterwarnings
filterwarnings('ignore')
df = pd.read_csv('dfclean.csv', parse_dates=['InvoiceDate'])
print(df.shape)
df.head()
```
### Timeframe
```
df.InvoiceDate.min(), df.InvoiceDate.max()
# Order by time of day
df.InvoiceDate.dt.hour.value_counts()
df.InvoiceDate.dt.hour.value_counts().sort_index().plot(kind='bar');
```
### Orders
```
# Invoice
df['Invoice'] = df.UnitPrice * df.Quantity
df.groupby('Country').InvoiceNo.nunique().sum()
# Different products in one order
df.groupby('InvoiceNo').size().sort_values(ascending=False).head(10)
df.groupby('InvoiceNo').size().mean()
# Orders per customer (including cancellations)
df.groupby('CustomerID').InvoiceNo.nunique().\
sort_values(ascending=False).head(10)
# cancelled orders
df[df.InvoiceNo.str.startswith('C')]
```
### Discounts
```
df[df.StockCode == 'D']
df[(df.Description == 'Manual') & (df.InvoiceNo.str.startswith('C'))]
```
### Countries
```
# Customers per country
df.groupby('Country').CustomerID.nunique().sort_values(ascending = False).head(20)
# Orders per country
df.groupby('Country').InvoiceNo.nunique().sort_values(ascending=False).head(20)
# Spending per country
df.groupby('Country')['Invoice'].sum().sort_values(ascending = False).head(10)
```
### The customers
```
# How many customers are there?
df.CustomerID.nunique()
# Top 10 customers by number of items
df.groupby('CustomerID').size().sort_values(ascending = False).head(10)
# Top 10 customers by spending
df.groupby(['Country','CustomerID'])['Invoice'].sum().sort_values(ascending = False).head(10)
# Top 10 countries by spending
df.groupby('Country')['Invoice'].sum().sort_values(ascending = False).head(10)
# No of customers per country
df.groupby('Country')['CustomerID'].nunique().sort_values(ascending= False).head(10)
df.groupby('CustomerID')['InvoiceDate'].min().sort_values().head(5)
# First order of customer
newc= df.groupby('CustomerID')['InvoiceDate'].min().reset_index()
newc
a = newc.groupby(by = [newc.InvoiceDate.dt.month]).count()
a
a.plot()
# Favorite products per country
df.groupby('Country')['Description'].value_counts().sort_values(ascending=False)
df[df.Country == 'Netherlands']['Description'].value_counts().sort_values(ascending=False).head()
df[df.Country == 'EIRE']['Description'].value_counts().sort_values(ascending=False).head()
df[df.Country == 'United Kingdom']['Description'].value_counts().sort_values(ascending=False).head()
# Favorite products of the best customer, 14646, from the Netherlands
df[df.CustomerID == '14646']['Description'].value_counts().sort_values(ascending = False).head(10)
df[df.Invoice > 500]
```
### Statistical parameters for Quantity and UnitPrice
```
df.describe()
# Errors in Quantity, UnitPrice
df.UnitPrice.value_counts().sort_index().tail(10)
df.loc[df.CustomerID == 15098]
df.loc[(df.InvoiceNo == '581483')|(df.InvoiceNo == 'C581484')]
```
There are many data-entry errors in Quantity and UnitPrice, most of which were subsequently cancelled. Not all errors can be traced, though, as you can see above.
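A reasonable follow-up is to set these records aside before further analysis. The sketch below shows one possible filter (dropping cancellations and non-positive quantities or prices); it is not necessarily the cleaning that produced `dfclean.csv`.
```
# One possible way to set aside cancellations and obvious data-entry errors
mask_cancel = df.InvoiceNo.str.startswith('C')
mask_bad = (df.Quantity <= 0) | (df.UnitPrice <= 0)
df_filtered = df[~mask_cancel & ~mask_bad]
print(df.shape, '->', df_filtered.shape)
```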
# Lesson 8 Practice: Seaborn
Use this notebook to follow along with the lesson in the corresponding lesson notebook: [L08-Seaborn-Lesson.ipynb](./L08-Seaborn-Lesson.ipynb).
## Instructions
Follow along with the teaching material in the lesson. Throughout the tutorial, sections labeled as "Tasks" are interspersed and indicated with a task icon. You should follow the instructions provided in these sections by performing them in the practice notebook. When the tutorial is completed you can turn in the final practice notebook. For each task, use the cell below it to write and test your code. You may add additional cells for any task as needed or desired.
## Task 1a Setup
Import the following packages:
+ seaborn as sns
+ pandas as pd
+ numpy as np
+ matplotlib.pyplot as plt
Activate the `%matplotlib inline` magic.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
iris_df = sns.load_dataset('iris')
iris_df.head(5)
```
## Task 2a Load Data
+ View available datasets by calling `sns.get_dataset_names`.
+ Choose one of those datasets and explore it.
What is the shape?
```
iris_df.shape
```
What are the columns?
```
iris_df.columns  # sepal_length, sepal_width, petal_length, petal_width, species
```
What are the data types?
```
iris_df.dtypes
```
Are there missing values?
```
iris_df.isna().sum()
```
Are there duplicated rows?
```
iris_df.duplicated().sum()
```
For categorical columns find the unique set of categories.
```
iris_df['species'].unique()
```
Is the data tidy?
```
# Yes, the data is tidy: each row is a single observation and each column a single variable.
```
## Task 2b Preview Seaborn
Take some time to peruse the Seaborn [example gallery](https://seaborn.pydata.org/examples/index.html). Indicate which plot types are most interesting to you. Which do you expect will be most useful with current research projects you may be working on?
## Task 3a Using `relplot`
Experiment with the `size`, `hue` and `style` semantics by applying them to another example dataset of your choice.
*You should produce three or more plots for this task.*
```
sns_plot = sns.relplot(x='sepal_width', y='sepal_length', data=iris_df)
sns_plot.savefig("output.png", format='png', dpi=72)
sns_plot = sns.relplot(x='sepal_width', y='sepal_length', hue='species', data=iris_df)
sns_plot.savefig("output.png", format='png', dpi=72)
sns_plot = sns.relplot(x='sepal_width', y='sepal_length', hue='species', aspect=2, data=iris_df)
sns_plot.savefig("output.png", format='png', dpi=72)
sns_plot = sns.relplot(x='petal_width', y='petal_length', hue='species', aspect=2, data=iris_df)
sns_plot.savefig("output.png", format='png', dpi=72)
```
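The plots above exercise only the `hue` semantic (plus `aspect`). A sketch that also brings in `size` and `style` on the same iris data could look like this; `output_semantics.png` is just an arbitrary file name.
```
# Combine hue, style, and size semantics in a single relational plot
sns_plot = sns.relplot(x='sepal_width', y='sepal_length',
                       hue='species', style='species', size='petal_width',
                       data=iris_df)
sns_plot.savefig("output_semantics.png", format='png', dpi=72)
```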
## Task 4a: Explore built-in styles
Using a dataset of your choice, practice creating a plot for each of these different styles:
+ darkgrid
+ whitegrid
+ dark
+ white
+ ticks
```
sns.set_style('whitegrid')
sns_plot = sns.relplot(x='sepal_width', y='sepal_length', data=iris_df)
sns_plot.savefig("output.png", format='png', dpi=72)
sns.set_style('darkgrid')
sns_plot = sns.relplot(x='sepal_width', y='sepal_length', data=iris_df)
sns_plot.savefig("output.png", format='png', dpi=72)
sns.set_style('dark')
sns_plot = sns.relplot(x='sepal_width', y='sepal_length', data=iris_df)
sns_plot.savefig("output.png", format='png', dpi=72)
sns.set_style('white')
sns_plot = sns.relplot(x='sepal_width', y='sepal_length', data=iris_df)
sns_plot.savefig("output.png", format='png', dpi=72)
sns.set_style('ticks')
sns_plot = sns.relplot(x='sepal_width', y='sepal_length', data=iris_df)
sns_plot.savefig("output.png", format='png', dpi=72)
```
## Task 4b
Experiment with the style options and palettes introduced above. Create and demonstrate a style of your own using a dataset of your choice.
```
custom_style = {'figure.facecolor': 'white',
                'axes.facecolor': 'black'}
sns.palplot(sns.color_palette())
sns.set_style('whitegrid', rc=custom_style)  # apply the custom overrides on top of whitegrid
sns_plot = sns.relplot(x='sepal_width', y='sepal_length', data=iris_df)
sns_plot.savefig("output.png", format='png', dpi=72)
```
## Task 5a
Examine the [Seaborn gallery](https://seaborn.pydata.org/examples/index.html) and find **two to four plots** types that interest you. Re-create a version of those plots using a different data set (make any other style changes you wish).
```
sns.set_theme(style="whitegrid")
iris = sns.load_dataset("iris")
iris = pd.melt(iris, "species", var_name="measurement")
f, ax = plt.subplots()
sns.despine(bottom=True, left=True)
sns.stripplot(x="value", y="measurement", hue="species",
data=iris, dodge=True, alpha=.25, zorder=1)
sns.pointplot(x="value", y="measurement", hue="species",
data=iris, dodge=.532, join=False, palette="dark",
markers="d", scale=.75, ci=None)
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles[3:], labels[3:], title="species",
handletextpad=0, columnspacing=1,
loc="lower right", ncol=3, frameon=True)
f.savefig("output.png", format='png', dpi=72)
import seaborn as sns
sns.set_theme(style="ticks")
df = sns.load_dataset("anscombe")
sns_plot = sns.lmplot(x="x", y="y", col="dataset", hue="dataset", data=df,
col_wrap=2, ci=None, palette="muted", height=4,
scatter_kws={"s": 50, "alpha": 1})
sns_plot.savefig("output.png", format='png', dpi=72)
sns.set_theme(style="ticks")
rs = np.random.RandomState(4)
pos = rs.randint(-1, 2, (20, 5)).cumsum(axis=1)
pos -= pos[:, 0, np.newaxis]
step = np.tile(range(5), 20)
walk = np.repeat(range(20), 5)
df = pd.DataFrame(np.c_[pos.flat, step, walk],
columns=["position", "step", "walk"])
grid = sns.FacetGrid(df, col="walk", hue="walk", palette="tab20c",
col_wrap=4, height=1.5)
grid.map(plt.axhline, y=0, ls=":", c=".5")
grid.map(plt.plot, "step", "position", marker="o")
grid.set(xticks=np.arange(5), yticks=[-3, 3],
xlim=(-.5, 4.5), ylim=(-3.5, 3.5))
grid.fig.tight_layout(w_pad=1)
grid.savefig("output.png", format='png', dpi=72)
```
# Train an MNIST model with PyTorch
MNIST is a widely used dataset for handwritten digit classification. It consists of 70,000 labeled 28x28 pixel grayscale images of hand-written digits. The dataset is split into 60,000 training images and 10,000 test images. There are 10 classes (one for each of the 10 digits). This tutorial shows how to train and test an MNIST model on SageMaker using PyTorch.
## Runtime
This notebook takes approximately 5 minutes to run.
## Contents
1. [PyTorch Estimator](#PyTorch-Estimator)
1. [Implement the entry point for training](#Implement-the-entry-point-for-training)
1. [Set hyperparameters](#Set-hyperparameters)
1. [Set up channels for the training and testing data](#Set-up-channels-for-the-training-and-testing-data)
1. [Run the training script on SageMaker](#Run-the-training-script-on-SageMaker)
1. [Inspect and store model data](#Inspect-and-store-model-data)
1. [Test and debug the entry point before executing the training container](#Test-and-debug-the-entry-point-before-executing-the-training-container)
1. [Conclusion](#Conclusion)
```
import os
import json
import sagemaker
from sagemaker.pytorch import PyTorch
from sagemaker import get_execution_role
sess = sagemaker.Session()
role = get_execution_role()
output_path = "s3://" + sess.default_bucket() + "/DEMO-mnist"
```
## PyTorch Estimator
The `PyTorch` class allows you to run your training script on SageMaker
infrastructure in a containerized environment. In this notebook, we
refer to this container as the *training container*.
You need to configure
it with the following parameters to set up the environment:
- `entry_point`: A user-defined Python file used by the training container as the
instructions for training. We further discuss this file in the next subsection.
- `role`: An IAM role to make AWS service requests
- `instance_type`: The type of SageMaker instance to run your training script.
Set it to `local` if you want to run the training job on
the SageMaker instance you are using to run this notebook
- `instance_count`: The number of instances to run your training job on.
Multiple instances are needed for distributed training.
- `output_path`:
S3 bucket URI to save training output (model artifacts and output files)
- `framework_version`: The version of PyTorch to use
- `py_version`: The Python version to use
For more information, see the [EstimatorBase API reference](https://sagemaker.readthedocs.io/en/stable/api/training/estimators.html#sagemaker.estimator.EstimatorBase)
## Implement the entry point for training
The entry point for training is a Python script that provides all
the code for training a PyTorch model. It is used by the SageMaker
PyTorch Estimator (`PyTorch` class above) as the entry point for running the training job.
Under the hood, the SageMaker PyTorch Estimator creates a docker image
with the runtime environment
specified by the parameters you provide to initialize the
estimator class, and it injects the training script into the
docker image as the entry point to run the container.
In the rest of the notebook, we use *training image* to refer to the
docker image specified by the PyTorch Estimator and *training container*
to refer to the container that runs the training image.
This means your training script is very similar to a training script
you might run outside Amazon SageMaker, but it can access the useful environment
variables provided by the training image. See [the complete list of environment variables](https://github.com/aws/sagemaker-training-toolkit/blob/master/ENVIRONMENT_VARIABLES.md) for a
description of all environment variables your training script
can access.
In this example, we use the training script `code/train.py`
as the entry point for our PyTorch Estimator.
```
!pygmentize 'code/train.py'
```
## Set hyperparameters
In addition, the PyTorch estimator allows you to parse command line arguments
to your training script via `hyperparameters`.
Note: local mode is not supported in SageMaker Studio.
```
# Set local_mode to True to run the training script on the machine that runs this notebook
local_mode = False
if local_mode:
instance_type = "local"
else:
instance_type = "ml.c4.xlarge"
est = PyTorch(
entry_point="train.py",
source_dir="code", # directory of your training script
role=role,
framework_version="1.5.0",
py_version="py3",
instance_type=instance_type,
instance_count=1,
volume_size=250,
output_path=output_path,
hyperparameters={"batch-size": 128, "epochs": 1, "learning-rate": 1e-3, "log-interval": 100},
)
```
The training container executes your training script like:
```
python train.py --batch-size 128 --epochs 1 --learning-rate 1e-3 --log-interval 100
```
## Set up channels for the training and testing data
Tell the `PyTorch` estimator where to find the training and
testing data. It can be a path to an S3 bucket, or a path
in your local file system if you use local mode. In this example,
we download the MNIST data from a public S3 bucket and upload it
to your default bucket.
```
import logging
import boto3
from botocore.exceptions import ClientError
# Download training and testing data from a public S3 bucket
def download_from_s3(data_dir="./data", train=True):
"""Download MNIST dataset and convert it to numpy array
Args:
data_dir (str): directory to save the data
train (bool): download training set
Returns:
None
"""
if not os.path.exists(data_dir):
os.makedirs(data_dir)
if train:
images_file = "train-images-idx3-ubyte.gz"
labels_file = "train-labels-idx1-ubyte.gz"
else:
images_file = "t10k-images-idx3-ubyte.gz"
labels_file = "t10k-labels-idx1-ubyte.gz"
# download objects
s3 = boto3.client("s3")
bucket = f"sagemaker-sample-files"
for obj in [images_file, labels_file]:
key = os.path.join("datasets/image/MNIST", obj)
dest = os.path.join(data_dir, obj)
if not os.path.exists(dest):
s3.download_file(bucket, key, dest)
return
download_from_s3("./data", True)
download_from_s3("./data", False)
# Upload to the default bucket
prefix = "DEMO-mnist"
bucket = sess.default_bucket()
loc = sess.upload_data(path="./data", bucket=bucket, key_prefix=prefix)
channels = {"training": loc, "testing": loc}
```
The keys of the `channels` dictionary are passed to the training image,
and each key creates an environment variable of the form `SM_CHANNEL_<key name>`.
In this example, `SM_CHANNEL_TRAINING` and `SM_CHANNEL_TESTING` are created in the training image (see
how `code/train.py` accesses these variables). For more information,
see: [SM_CHANNEL_{channel_name}](https://github.com/aws/sagemaker-training-toolkit/blob/master/ENVIRONMENT_VARIABLES.md#sm_channel_channel_name).
If you want, you can create a channel for validation:
```
channels = {
'training': train_data_loc,
'validation': val_data_loc,
'test': test_data_loc
}
```
You can then access this channel within your training script via
`SM_CHANNEL_VALIDATION`.
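Inside the container, a training script typically reads these locations from the environment. The following is a rough sketch of that pattern, not a copy of `code/train.py`, whose argument names may differ.
```
# Sketch: how an entry point can pick up SageMaker channel locations
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument("--train", type=str, default=os.environ.get("SM_CHANNEL_TRAINING"))
parser.add_argument("--test", type=str, default=os.environ.get("SM_CHANNEL_TESTING"))
parser.add_argument("--model-dir", type=str, default=os.environ.get("SM_MODEL_DIR"))
args, _ = parser.parse_known_args()
print(args.train, args.test, args.model_dir)
```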
## Run the training script on SageMaker
Now, the training container has everything to execute your training
script. Start the container by calling the `fit()` method.
```
est.fit(inputs=channels)
```
## Inspect and store model data
Now, the training is finished, and the model artifact has been saved in
the `output_path`.
```
pt_mnist_model_data = est.model_data
print("Model artifact saved at:\n", pt_mnist_model_data)
```
We use the `%store` magic to persist the variable `pt_mnist_model_data` so it can be restored in other notebooks with `%store -r`.
```
%store pt_mnist_model_data
```
## Test and debug the entry point before executing the training container
The entry point `code/train.py` can be executed in the training container.
When you develop your own training script, it is a good practice to simulate the container environment
in the local shell and test it before sending it to SageMaker, because debugging in a containerized environment
is rather cumbersome. The following script shows how you can test your training script:
```
!pygmentize code/test_train.py
```
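If you prefer not to read through `code/test_train.py`, a rough alternative is to export the `SM_*` variables yourself and invoke the entry point directly. The sketch below makes assumptions about which variables and arguments `code/train.py` actually uses, so treat it as illustrative only.
```
# Rough local smoke test of the entry point (illustrative; code/test_train.py is the reference)
import os
import subprocess
import tempfile

env = os.environ.copy()
env.update({
    "SM_MODEL_DIR": tempfile.mkdtemp(),   # where the trained model gets saved
    "SM_CHANNEL_TRAINING": "./data",      # the files downloaded earlier in this notebook
    "SM_CHANNEL_TESTING": "./data",
    "SM_HOSTS": '["algo-1"]',
    "SM_CURRENT_HOST": "algo-1",
    "SM_NUM_GPUS": "0",
})
subprocess.run(["python", "code/train.py", "--epochs", "1"], env=env, check=True)
```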
## Conclusion
In this notebook, we trained a PyTorch model on the MNIST dataset by fitting a SageMaker estimator. For next steps on how to deploy the trained model and perform inference, see [Deploy a Trained PyTorch Model](https://sagemaker-examples.readthedocs.io/en/latest/frameworks/pytorch/get_started_mnist_deploy.html).
# Derivation of the MKS Localization Equation
The goal of this notebook is to derive the Materials Knowledge Systems (MKS) localization equation from the elastostatic equilibrium equation. Note that the MKS equation can also be derived from other partial differential equations.
### Definitions
Let $C(x)$ be the local stiffness tensor for a two-phase material with stiffness tensors $C_A$ and $C_B$. The stiffness tensor at location $x$ can be represented as a perturbation from a reference stiffness tensor.
$$C(x) = C^R + C'(x)$$
The strain field at location $x$ can also be defined in terms of a similar perturbation.
$$\varepsilon(x) = \bar{\varepsilon} + \varepsilon '(x)$$
where $\bar{\varepsilon}$ is the average strain and $\varepsilon '(x)$ is the local strain perturbation from $\bar{\varepsilon}$.
The constitutive equation is therefore:
$$\sigma_{ij}(x) = \big(C^R_{ijlk} + C'_{ijlk}(x) \big ) \big (\bar{\varepsilon}_{lk} + \varepsilon'_{lk}(x) \big )$$
### Equilibrium Condition
The equilibrium condition is defined below.
$$\sigma_{ij,j}(x) = \Big [\big(C^R_{ijlk} + C'_{ijlk}(x) \big ) \big (\bar{\varepsilon}_{lk} + \varepsilon'_{lk}(x) \big )\Big ]_{,j} = 0$$
$$\sigma_{ij,j}(x) = C^R_{ijlk}\varepsilon'_{lk,j}(x) + C'_{ijlk,j}(x)\bar{\varepsilon}_{lk} + \Big [C'_{ijlk}(x) \varepsilon'_{lk}(x)\Big ]_{,j} = 0$$
Let
$$F_i(x) = C'_{ijlk,j}(x)\bar{\varepsilon}_{lk} + \Big [C'_{ijlk}(x) \varepsilon'_{lk}(x)\Big ]_{,j} $$
Using the definition of $F(x)$ above, the equilibrium equation can be rearranged into the form of an inhomogeneous differential equation.
$$C^R_{ijlk}\varepsilon'_{lk,j}(x) + F_i(x) = 0$$
### Strain, Displacement, and Green's Functions
By using the relationship between strain and displacement, the equilibrium equation can be rewritten as follows.
$$ \varepsilon_{kl}(x) = \frac{\big (u_{k,l}(x) + u_{l,k}(x) \big)}{2} $$
$$C^R_{ijkl} \frac{\big (u'_{k,lj}(x) + u'_{l,kj}(x) \big)}{2} + F_i(x) = 0$$
The solution for the displacements can be found using Green's functions.
$$C^R_{ijkl} G_{km,lj}(r) + \delta_{im}\delta(x-r) = 0$$
$$u'_k(x) = \int_V G_{ik}(r) F_i (x-r)dr = \int_V G_{ik}(r) \Big [C'_{ijlk}(x-r)\bar{\varepsilon}_{lk} + \big [C'_{ijlk}(x-r)\varepsilon'_{lk}(x-r)\big ]\Big ]_{,j}dr$$
and
$$u'_l(x) = \int_V G_{il}(r) F_i (x - r)dr = \int_V G_{il}(r) \Big [C'_{ijlk}(x-r)\bar{\varepsilon}_{lk} + \big [C'_{ijlk}(x-r)\varepsilon'_{lk}(x-r)\big ]\Big ]_{,j}dr$$
therefore the strain can also be found in terms of Green's functions.
$$\varepsilon'_{kl}(x) = \int_V \frac{\big (G_{ik,l}(r) + G_{il,k}(r) \big)}{2} F_i (x-r)dr = \int_V \frac{\big (G_{ik,l}(r) + G_{il,k}(r) \big)}{2} \Big [C'_{ijlk}(x-r)\bar{\varepsilon}_{lk} + \big [C'_{ijlk}(x-r)\varepsilon'_{lk}(x-r)\big ]\Big ]_{,j}dr$$
Note that the $G(r)$ terms depend on the reference medium $C^R$.
### Integration by Parts
The equation above can be recast with all of the derivatives on the Green's functions by integrating by parts.
$$
\varepsilon'_{kl}(x) = \Bigg [ \int_S \frac{\big (G_{ik,l}(r) + G_{il,k}(r) \big)}{2} \Big [C'_{ijlk}(x-r)\bar{\varepsilon}_{lk} + \big [C'_{ijlk}(x-r)\varepsilon'_{lk}(x-r)\big ]\Big ] n_j dS\Bigg ]_{r \rightarrow 0}^{r \rightarrow \infty} - $$
$$ \int_V \frac{\big (G_{ik,lj}(r) + G_{il,kj}(r) \big)}{2} \Big [C'_{ijlk}(x-r)\bar{\varepsilon}_{lk} + \big [C'_{ijlk}(x-r)\varepsilon'_{lk}(x-r)\big ]\Big ]dr
$$
### Principal Value Singularity
In the equation above, the surface term tending to zero is a principal value integral because of the singularity in the Green's functions at $r = 0$. As a result, the integrand is not differentiable. Torquato shows that, by excluding a sphere at the origin and using integration by parts and the divergence theorem, we can arrive at the following equation [1].
$$\varepsilon'_{kl}(x) = I_{ikjl} - E_{ikjl} + \int_V \Phi_{ikjl}(r) \Big [C'_{ijlk}(x-r)\bar{\varepsilon}_{lk} + \big [C'_{ijlk}(x-r)\varepsilon'_{lk}(x-r)\big ]\Big ]dr $$
where
$$\Phi_{ikjl}(r) = - \frac{\big (G_{ik,lj}(r) + G_{il,kj}(r) \big)}{2} $$
is the Green's function term, and
$$I_{ikjl}^{\infty} = \lim_{r \rightarrow \infty} \int_S\frac{\big (G_{ik,l}(r) + G_{il,k}(r)\big)}{2} \Big [C'_{ijlk}(x-r)\bar{\varepsilon}_{lk} + \big [C'_{ijlk}(x-r)\varepsilon'_{lk}(x-r)\big ]\Big ]n_l dS $$
$$E_{ikjl}(x) = \lim_{r \rightarrow 0} \int_S\frac{\big (G_{ik,l}(r) + G_{il,k}(r)\big)}{2} n_l dS $$
are the contributions from the surface integral at $\infty$ and from the singularity.
Finally let
$$\Gamma_{ikjl}(r) = I_{ikjl}^{\infty}\delta(r)-E_{ikjl}\delta(r) + \Phi_{ikjl}(r)$$
the strain can then be written in the following form.
$$\varepsilon'_{kl}(x) = \int_V \Gamma_{ikjl}(r) \Big [C'_{ijlk}(x-r)\bar{\varepsilon}_{lk} + \big [C'_{ijlk}(x-r)\varepsilon'_{lk}(x-r)\big ]\Big ]dr $$
### Weak Contrast Expansion
$$\varepsilon'(x) =\int_V \Gamma(r) C'(x-r) [ \bar{\varepsilon} + \varepsilon'(x-r)]dr $$
By recursively substituting this expression for $\varepsilon'$ back into the right-hand side, we get the following series.
$$
\varepsilon'(x) =\int_V \Gamma(r) C'(x-r) \bar{\varepsilon} dr +\int_V \int_V \Big[ \Gamma(r) C'(x-r)\bar{\varepsilon}\Big ]\Big [\Gamma(r') C'(x-r') \bar{\varepsilon}\Big] dr'dr + ...$$
As long as
$$\Gamma(r) C'(x)\bar{\varepsilon} \ll 1$$
the series can be truncated after a few terms and still provide reasonable accuracy.
### Materials Knowledge Systems
Let
$$ C'(x-r) = \int_H h m(h, x-r) dh$$
where $m(h, r)$ is the microstructure function, a probability density that spans both the local state space $h$ and real space $x$. The expectation of the local state variable over the microstructure function is the integral over the local state space $H$; it describes the expected local state $h$, which is equal to $C'(r)$.
Also let
$$\alpha(h, r) = \Gamma(r) h \bar{\varepsilon} $$
$$\alpha(h, h', r, r') = \Gamma(r) h \bar{\varepsilon} \Gamma(r') h' \bar{\varepsilon} $$
$$ etc... $$
where again $h$ is the local state variable.
Plugging these definitions into the Weak Contrast Expansion recasts the series in the following form.
$$\varepsilon '(x) =\int_V \int_H \alpha(h, r) m(h, x-r) dr dh + \int_V \int_V \int_H \int_H \alpha(h, h', r, r') m(h, x-r) m(h', x-r') dr'dr dh dh'+ ...$$
The discrete version of this equation is the MKS localization.
$$\varepsilon'[s] = \sum_{l=0}^{L-1} \sum_{r=0}^{S-1} \alpha[l, r] m[l, s-r] +\sum_{l=0}^{L-1} \sum_{l'=0}^{L-1} \sum_{r=0}^{S-1} \sum_{r'=0}^{S-1} \alpha[l, l', r, r'] m[l, s-r] m[l', s-r'] + ... $$
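As a small illustration of the first-order term, the convolution over $r$ can be evaluated with FFTs on a periodic, discretized microstructure. The numpy sketch below uses random stand-in values for the influence coefficients $\alpha[l, r]$, which in practice are calibrated against simulation or experimental data.
```
import numpy as np

S, L = 21, 2                       # number of spatial cells and local states
rng = np.random.default_rng(0)
phase = rng.integers(0, L, S)      # which local state occupies each cell

m = np.zeros((L, S))               # microstructure function m[l, s]
m[phase, np.arange(S)] = 1.0       # one-hot: each cell is fully in one state

alpha = rng.random((L, S)) * 0.01  # stand-in influence coefficients alpha[l, r]

# First-order MKS term: eps[s] = sum_l sum_r alpha[l, r] * m[l, s - r]  (periodic)
eps = np.zeros(S)
for l in range(L):
    eps += np.real(np.fft.ifft(np.fft.fft(alpha[l]) * np.fft.fft(m[l])))
print(eps)
```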
## References
[1] Torquato, S., 1997. *Effective stiffness tensor of composite media. I. Exact series expansions.* J. Mech. Phys. Solids 45, 1421–1448.
[2] Brent L. Adams, Surya Kalidindi, David T. Fullwood. *Microstructure Sensitive Design for Performance Optimization.*
[3] David T. Fullwood, Brent L. Adams, Surya Kalidindi. *A strong contrast homogenization formulation for multi-phase anisotropic materials.*
# Data Hackers Survey 2019
Results of the Data Science market survey conducted by the Data Hackers community
## About the dataset
The dataset was created from a Data Science market survey in Brazil run by the Data Hackers community and was downloaded from Kaggle at [Data Hackers Survey 2019](https://www.kaggle.com/datahackers/pesquisa-data-hackers-2019) on August 8, 2020.
The survey was conducted online during November 2019 and consisted of a questionnaire with 39 questions.
The dataset contains 1,765 records and 170 columns.
## Variables that may be useful in the analysis
Age, gender, degree level, salary, experience with Data Science, most used programming languages, state, employment situation, number of employees, and market sector.
## Questions the analysis may answer
### About the survey
- What is the age and gender distribution?
- How many data scientists live in Brazil?
- How are the participants distributed by state?
- Are most survey respondents data scientists?
- What is the employment situation of the participants?
- How are respondents distributed by role?
- How large are the companies the participants work for, by number of employees?
- What is the most common education level among data scientists?
### Job market
- Which market sector hires the most data science professionals?
- Which platforms do data scientists use to keep up to date with the job market?
### Education/Training and Tools
- Which programming languages are used the most?
- What is the split between professionals and non-professionals by education level?
- Is a master's degree necessary to be a data scientist?
### Salary
- Salary distribution among data scientists
- What are the highest and lowest salaries?
- What is the highest salary range?
- Which market sector pays data scientists the best salaries?
- Which programming language is associated with the best salaries?
- How many professionals earn more than 25 thousand reais per month? What characterizes them?
- Education level of professionals earning more than 25 thousand reais per month
```
import pandas as pd
df = pd.read_csv('/content/datahackers-survey-2019-anonymous-responses.csv', sep=',')
df.head()
df.shape
# Clean up the column names
df.columns = [cols.replace("(","").replace(")","").replace(",","").replace("'","").replace(" ","_") for cols in df.columns]
por_cargo = df['D6_anonymized_role'].value_counts().reset_index()
por_cargo.columns = [ 'Cargo', 'Quantidade de pessoas' ]
por_cargo
df['P21_python'].value_counts().reset_index()
```
Which of the programming languages listed below do you use at work?
```
colunas_linguagem_de_programacao = []
linguagens_de_programacao = []
for column in df.columns:
if 'P21' in column:
colunas_linguagem_de_programacao.append(column)
linguagens_de_programacao.append(column.replace('P21', '').replace('_', ' ').strip().capitalize())
print(linguagens_de_programacao)
```
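The two lists built above are not used again in this notebook; a minimal sketch of how they could summarize usage per language (assuming the `P21_*` columns are 0/1 indicators, as the `P21_python` value counts above suggest) is:
```
# Respondents per language, using the P21_* indicator columns collected above
uso_por_linguagem = df[colunas_linguagem_de_programacao].sum()
uso_por_linguagem.index = linguagens_de_programacao
uso_por_linguagem.sort_values(ascending=False)
```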
# About the survey
## What is the age and gender distribution?
```
por_genero = df['P2_gender'].value_counts(normalize=True) * 100
por_genero = por_genero.reset_index()
por_genero.columns = ['Gênero', 'Porcentagem']
por_genero
por_genero.plot.barh(x='Gênero', y='Porcentagem', rot=0, title='Distribuição de gênero', color='green')
por_idade = df['P1_age'].value_counts(sort=False, bins=5).reset_index()
por_idade.columns = ['Idade', 'Quantidade']
por_idade.plot.bar(x='Idade', y='Quantidade', rot=0, title='Distribuição de idade', color='cyan')
```
## How many data scientists live in Brazil?
```
mora_no_brasil = df['P3_living_in_brasil'].value_counts(normalize=True) * 100
mora_no_brasil = mora_no_brasil.reset_index()
mora_no_brasil = mora_no_brasil.replace(1, 'Sim').replace(0, 'Não')
mora_no_brasil.columns = ['Vive no Brasil?', 'Porcentagem']
mora_no_brasil.plot.bar(x='Vive no Brasil?', y='Porcentagem', rot=0, title='Ciêntistas de dados que vivem no Brasil')
```
## How are the participants distributed by state?
```
por_estado = df['P5_living_state'].value_counts(normalize=True) * 100
por_estado = por_estado.reset_index()
por_estado.columns = ['Estado', 'Porcentagem']
por_estado.plot.bar(x='Estado', y='Porcentagem', rot=0,
figsize=(15, 5), title='Ciêntistas de dados por Estado', color='magenta')
```
## Are most survey respondents data scientists?
```
por_cargo = df['D6_anonymized_role'].value_counts(normalize=True) * 100
por_cargo = por_cargo.reset_index()
por_cargo
```
## What is the employment situation of the participants?
```
situacao_profissional = df['P10_job_situation'].value_counts(normalize=True) * 100
situacao_profissional = situacao_profissional.reset_index()
situacao_profissional.columns = ['Cargo', 'Porcentagem']
situacao_profissional
```
## How are respondents distributed by role?
```
area_de_atuacao = df['D6_anonymized_role'].value_counts(normalize=True) * 100
area_de_atuacao = area_de_atuacao.reset_index()
area_de_atuacao.columns = ['Cargo', 'Porcentagem']
area_de_atuacao
```
## How large are the companies the participants work for, by number of employees?
```
numero_de_funcionarios = df['P12_workers_number'].value_counts().reset_index()
numero_de_funcionarios.columns = ['Números de funcionários na empresa', 'Número de participantes']
numero_de_funcionarios
```
## What is the most common education level among data scientists?
```
degree_level = df[df['P19_is_data_science_professional'].astype(int) == 1]
degree_level = degree_level['P8_degreee_level'].value_counts().reset_index()
degree_level.columns = ['Nível de ensino', 'Participantes']
degree_level.plot.barh(x='Nível de ensino', y='Participantes', rot=0,
figsize=(15, 5), title='Nível de ensino dos Ciêntistas de Dados', color='green')
```
# Job market
## Which market sector hires the most data science professionals?
```
setor_de_mercado = df[df['P19_is_data_science_professional'].astype(int) == 1]
setor_de_mercado = setor_de_mercado['D4_anonymized_market_sector'].value_counts().reset_index()
setor_de_mercado.columns = ['Setor do Mercado', 'Número de profissionais']
setor_de_mercado
```
The sector that hires the most data science professionals is the Technology/Software Factory sector.
## Which platforms do data scientists use to keep up to date with the job market?
```
plataforma_mais_usada_por_cientistas_de_dados = df[df['P19_is_data_science_professional'].astype(int) == 1]
plataforma_mais_usada_por_cientistas_de_dados = plataforma_mais_usada_por_cientistas_de_dados['P35_data_science_plataforms_preference'].value_counts().reset_index()
plataforma_mais_usada_por_cientistas_de_dados.columns = ['Plataformas', 'Número de profissionais']
plataforma_mais_usada_por_cientistas_de_dados.plot.bar(x='Plataformas', y='Número de profissionais', rot=0,
figsize=(15, 5), title='Plataformas utilizadas por ciêntistas de dados')
```
The platform that data scientists use the most is Udemy.
# Education/Training and Tools
```
linguagens_de_programacao_mais_usadas = df['P22_most_used_proggraming_languages'].value_counts().reset_index()
linguagens_de_programacao_mais_usadas.columns = ['Linguagem de programação', 'Quantidade de participantes']
linguagens_de_programacao_mais_usadas
linguagens_de_programacao_mais_usadas.plot.barh(x='Linguagem de programação', y='Quantidade de participantes', rot=0, title='Linguagens de programação mais usadas', color='green')
```
## What is the split between professionals and non-professionals by education level?
```
cientistas_de_dados = df[df['P19_is_data_science_professional'].astype(int) == 1]
nao_cientistas_de_dados = df[df['P19_is_data_science_professional'].astype(int) == 0]
total = len(cientistas_de_dados) + len(nao_cientistas_de_dados)
print(f'Número de ciêntistas de dados: {(len(cientistas_de_dados) / total) * 100:.2f}%')
print(f'Número de participantes que não são ciêntistas de dados: {(len(nao_cientistas_de_dados) / total) * 100:.2f}%')
```
## Is a master's degree necessary to be a data scientist?
```
degree_level = df[df['P19_is_data_science_professional'].astype(int) == 1]
degree_level = degree_level['P8_degreee_level'].value_counts().reset_index()
degree_level.columns = ['Nível de ensino', 'Participantes']
degree_level.plot.bar(x='Nível de ensino', y='Participantes', rot=0,
figsize=(15, 5), title='Nível de ensino dos Ciêntistas de Dados')
```
No, since most data scientists have only a bachelor's degree or a postgraduate specialization.
# Salary
## Salary analysis
### Salary distribution among data scientists
```
maior_menor_salario = df['P16_salary_range'].value_counts().reset_index()
maior_menor_salario.columns = ['Salários', 'Participantes']
maior_menor_salario
```
### What are the highest and lowest salaries?
The lowest salaries range from R$ 1,001/month to R$ 2,000/month.
The highest salaries are above R$ 25,001/month.
### What is the highest salary range?
The best salaries are above R$ 25,001/month.
## Which market sector pays data scientists the best salaries?
```
maiores_salarios = df[(df['P16_salary_range'] == 'Acima de R$ 25.001/mês') |
(df['P16_salary_range'] == 'de R$ 20.001/mês a R$ 25.000/mês') |
(df['P16_salary_range'] == 'de R$ 16.001/mês a R$ 20.000/mês')]
setor_do_mercado = maiores_salarios['D4_anonymized_market_sector'].value_counts().reset_index()
setor_do_mercado.columns = ['Setores', 'Participantes']
setor_do_mercado.head(5)
```
## Which programming language is associated with the best salaries?
```
linguagens_com_melhor_salario = maiores_salarios['P22_most_used_proggraming_languages'].value_counts().reset_index()
linguagens_com_melhor_salario.columns = ['Linguagem', 'Participantes']
linguagens_com_melhor_salario
```
## How many professionals earn more than 25 thousand reais per month? What characterizes them?
```
profissionais_com_salario_alto = df[df['P16_salary_range'] == 'Acima de R$ 25.001/mês']
print(f'Número de profissionais que ganham mais que 25 mil: {len(profissionais_com_salario_alto)}')
def exibe_grafico(coluna_principal, coluna1, coluna2, titulo):
novo_df = df[coluna_principal].value_counts().reset_index()
novo_df.columns = [coluna1, coluna2]
novo_df.plot.bar(x=coluna1, y=coluna2, rot=0,
figsize=(25, 5), title=titulo, color='green')
exibe_grafico('P8_degreee_level', 'Nivel de ensino', 'Profissionais', 'Nível de ensino dos profissionais que ganham mais que 25 mil reais')
exibe_grafico('P17_time_experience_data_science', 'Experiência com ciência de dados', 'Profissionais', 'Tempo de experiência dos profissionais que ganham mais que 25 mil reais')
```
```
from numpy import array
from keras.preprocessing.text import one_hot
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers.embeddings import Embedding
import pandas as pd
# define the corpus
corpus = ['This is good pizza',
'I love Italian pizza',
'The best pizza',
'nice pizza',
'Excellent pizza',
'I love pizza',
'The pizza was alright',
'disgusting pineapple pizza',
'not good pizza',
'bad pizza',
'very bad pizza',
'I had better pizza']
# create class labels for our corpus (1 = positive, 0 = negative)
labels = array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
output_dim = 8
pd.DataFrame({'text': corpus, 'sentiment':labels})
# we extract the vocabulary from our corpus
sentences = [voc.split() for voc in corpus]
vocabulary = set([word for sentence in sentences for word in sentence])
vocab_size = len(vocabulary)
encoded_corpus = [one_hot(d, vocab_size) for d in corpus]
encoded_corpus
# we now pad the documents to
# the max length of the longest sentences
# to have an uniform length
max_length = 5
padded_docs = pad_sequences(encoded_corpus, maxlen=max_length, padding='post')
print(padded_docs)
# model definition
model = Sequential()
model.add(Embedding(vocab_size, output_dim, input_length=max_length, name='embedding'))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
# summarize the model
print(model.summary())
# fit the model
model.fit(padded_docs, labels, epochs=50, verbose=0)
# evaluate the model
loss, accuracy = model.evaluate(padded_docs, labels, verbose=0)
print('Accuracy: %f' % (accuracy * 100))
type(model)
from numpy import array
from keras.preprocessing.text import one_hot
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers.embeddings import Embedding
# define documents
docs = ['Well done!',
'Good work',
'Great effort',
'nice work',
'Excellent!',
'Weak',
'Poor effort!',
'not good',
'poor work',
'Could have done better.']
# define class labels
labels = array([1,1,1,1,1,0,0,0,0,0])
# build the vocabulary from individual words (not whole documents) to size the hash space
vocabulary = set(word for doc in docs for word in doc.split())
# integer encode the documents
vocab_size = len(vocabulary)
encoded_corpus = [one_hot(d, vocab_size) for d in docs]
print(encoded_corpus)
# pad documents to a max length of 4 words
max_length = 4
padded_docs = pad_sequences(encoded_corpus, maxlen=max_length, padding='post')
print(padded_docs)
# define the model
model = Sequential()
model.add(Embedding(vocab_size, 8, input_length=max_length))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
# summarize the model
print(model.summary())
# fit the model
model.fit(padded_docs, labels, epochs=50, verbose=0)
# evaluate the model
loss, accuracy = model.evaluate(padded_docs, labels, verbose=0)
print('Accuracy: %f' % (accuracy * 100))
import nltk
nltk.download('punkt')
tokens = nltk.word_tokenize('This is a beautiful sentence')
print(tokens)
pos_tagget_tokens = nltk.pos_tag(tokens)
print(pos_tagget_tokens)
```
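As a quick follow-up, you can inspect what an Embedding layer has learned by pulling its weight matrix out of a fitted model. A minimal sketch, assuming it runs right after fitting the first (pizza-corpus) model, whose embedding layer was created with `name='embedding'`:
```python
# Inspect the learned embedding matrix: one output_dim-sized vector per vocabulary index.
embedding_layer = model.get_layer('embedding')
weights = embedding_layer.get_weights()[0]   # shape: (vocab_size, output_dim)
print(weights.shape)

# Look up the vector for a word using the same hashing scheme used for encoding
word_index = one_hot('pizza', vocab_size)[0]
print('pizza ->', weights[word_index])
```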
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

# Reinforcement Learning in Azure Machine Learning - Pong problem
Reinforcement Learning in Azure Machine Learning is a managed service for running distributed reinforcement learning training and simulation using the open source Ray framework.
This example uses Ray RLlib to train a Pong playing agent on a multi-node cluster.
## Pong problem
[Pong](https://en.wikipedia.org/wiki/Pong) is a two-dimensional sports game that simulates table tennis. The player controls an in-game paddle by moving it vertically across the left or right side of the screen. They can compete against another player controlling a second paddle on the opposing side. Players use the paddles to hit a ball back and forth.
<table style="width:50%">
<tr>
<th style="text-align: center;"><img src="./images/pong.gif" alt="Pong image" align="middle" margin-left="auto" margin-right="auto"/></th>
</tr>
<tr style="text-align: center;">
<th>Fig 1. Pong game animation (from <a href="https://towardsdatascience.com/intro-to-reinforcement-learning-pong-92a94aa0f84d">towardsdatascience.com</a>).</th>
</tr>
</table>
The goal here is to train an agent to win an episode of Pong against its opponent by a margin of at least 18 points. An episode in Pong runs until one of the players reaches a score of 21. "Episode" is the term used across [OpenAI gym](https://gym.openai.com/envs/Pong-v0/) environments for one run of a strictly defined task.
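For readers new to Gym, the short sketch below (not part of the Azure Machine Learning training job) plays a single episode of the same Pong environment with random actions. It assumes `gym` with the Atari extras is installed locally and uses the classic pre-0.26 Gym step API.
```python
import gym

# One full episode of Pong with a random policy: the episode ends when either
# player reaches 21 points, and the summed reward is the agent's score margin.
env = gym.make("PongNoFrameskip-v4")
obs = env.reset()
done, score_margin = False, 0.0
while not done:
    action = env.action_space.sample()             # random paddle action
    obs, reward, done, info = env.step(action)     # reward is +1/-1 whenever a point is scored
    score_margin += reward
print("Final score margin for the agent:", score_margin)
env.close()
```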
Training a Pong agent is a compute-intensive task, and this example demonstrates how Reinforcement Learning in Azure Machine Learning can be used to train an agent faster in a distributed, parallel environment. Below you'll learn how to use the head and worker compute targets to train the agent.
## Prerequisite
It is highly recommended to first work through [Reinforcement Learning in Azure Machine Learning - Cartpole Problem on Single Compute](../cartpole-on-single-compute/cartpole_sc.ipynb) to understand the basics of Reinforcement Learning in Azure Machine Learning and the Ray RLlib concepts used in this notebook.
## Set up Development Environment
The following subsections show typical steps to setup your development environment. Setup includes:
* Connecting to a workspace to enable communication between your local machine and remote resources
* Creating an experiment to track all your runs
* Setting up a virtual network
* Creating remote head and worker compute target on a virtual network to use for training
### Azure Machine Learning SDK
Display the Azure Machine Learning SDK version.
```
%matplotlib inline
# Azure Machine Learning core imports
import azureml.core
# Check core SDK version number
print("Azure Machine Learning SDK Version: ", azureml.core.VERSION)
```
### Get Azure Machine Learning workspace
Get a reference to an existing Azure Machine Learning workspace.
```
from azureml.core import Workspace
ws = Workspace.from_config()
print(ws.name, ws.location, ws.resource_group, sep = ' | ')
```
### Create Azure Machine Learning experiment
Create an experiment to track the runs in your workspace.
```
from azureml.core.experiment import Experiment
# Experiment name
experiment_name = 'rllib-pong-multi-node'
exp = Experiment(workspace=ws, name=experiment_name)
```
### Create Virtual Network
If you are using separate compute targets for the Ray head and worker, a virtual network must be created in the resource group. If you have already created a virtual network in the resource group, you can skip this step.
To do this, you first must install the Azure Networking API.
`pip install --upgrade azure-mgmt-network==12.0.0`
```
# If you need to install the Azure Networking SDK, uncomment the following line.
#!pip install --upgrade azure-mgmt-network==12.0.0
from azure.mgmt.network import NetworkManagementClient
# Virtual network name
vnet_name ="rl_pong_vnet"
# Default subnet
subnet_name ="default"
# The Azure subscription you are using
subscription_id=ws.subscription_id
# The resource group for the reinforcement learning cluster
resource_group=ws.resource_group
# Azure region of the resource group
location=ws.location
network_client = NetworkManagementClient(ws._auth_object, subscription_id)
async_vnet_creation = network_client.virtual_networks.create_or_update(
resource_group,
vnet_name,
{
'location': location,
'address_space': {
'address_prefixes': ['10.0.0.0/16']
}
}
)
async_vnet_creation.wait()
print("Virtual network created successfully: ", async_vnet_creation.result())
```
### Set up Network Security Group on Virtual Network
Depending on your Azure setup, you may need to open certain ports to make it possible for Azure to manage the compute targets that you create. The ports that need to be opened are described [here](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-enable-virtual-network).
A common situation is that ports `29876-29877` are closed. The following code will add a security rule to open these ports. Or you can do this manually in the [Azure portal](https://portal.azure.com).
You may need to modify the code below to match your scenario.
```
import azure.mgmt.network.models
security_group_name = vnet_name + '-' + "nsg"
security_rule_name = "AllowAML"
# Create a network security group
nsg_params = azure.mgmt.network.models.NetworkSecurityGroup(
location=location,
security_rules=[
azure.mgmt.network.models.SecurityRule(
name=security_rule_name,
access=azure.mgmt.network.models.SecurityRuleAccess.allow,
description='Reinforcement Learning in Azure Machine Learning rule',
destination_address_prefix='*',
destination_port_range='29876-29877',
direction=azure.mgmt.network.models.SecurityRuleDirection.inbound,
priority=400,
protocol=azure.mgmt.network.models.SecurityRuleProtocol.tcp,
source_address_prefix='BatchNodeManagement',
source_port_range='*'
),
],
)
async_nsg_creation = network_client.network_security_groups.create_or_update(
resource_group,
security_group_name,
nsg_params,
)
async_nsg_creation.wait()
print("Network security group created successfully:", async_nsg_creation.result())
network_security_group = network_client.network_security_groups.get(
resource_group,
security_group_name,
)
# Define a subnet to be created with network security group
subnet = azure.mgmt.network.models.Subnet(
id='default',
address_prefix='10.0.0.0/24',
network_security_group=network_security_group
)
# Create subnet on virtual network
async_subnet_creation = network_client.subnets.create_or_update(
resource_group_name=resource_group,
virtual_network_name=vnet_name,
subnet_name=subnet_name,
subnet_parameters=subnet
)
async_subnet_creation.wait()
print("Subnet created successfully:", async_subnet_creation.result())
```
### Review the virtual network security rules
Ensure that the virtual network is configured correctly with the required ports open. You may already have rules covering a broader range of ports that leave ports 29876-29877 open; review your network security group rules to confirm.
```
from files.networkutils import *
check_vnet_security_rules(ws._auth_object, ws.subscription_id, ws.resource_group, vnet_name, True)
```
### Create head compute target
In this example, we show how to set up separate compute targets for the Ray head and Ray worker nodes. First we define the head cluster with GPU for the Ray head node. One CPU of the head node will be used for the Ray head process and the rest of the CPUs will be used by the Ray worker processes.
```
from azureml.core.compute import AmlCompute, ComputeTarget
# Choose a name for the Ray head cluster
head_compute_name = 'head-gpu'
head_compute_min_nodes = 0
head_compute_max_nodes = 2
# This example uses GPU VM. For using CPU VM, set SKU to STANDARD_D2_V2
head_vm_size = 'STANDARD_NC6'
if head_compute_name in ws.compute_targets:
head_compute_target = ws.compute_targets[head_compute_name]
if head_compute_target and type(head_compute_target) is AmlCompute:
if head_compute_target.provisioning_state == 'Succeeded':
print('found head compute target. just use it', head_compute_name)
else:
raise Exception(
'found head compute target but it is in state', head_compute_target.provisioning_state)
else:
print('creating a new head compute target...')
provisioning_config = AmlCompute.provisioning_configuration(
vm_size=head_vm_size,
min_nodes=head_compute_min_nodes,
max_nodes=head_compute_max_nodes,
vnet_resourcegroup_name=ws.resource_group,
vnet_name=vnet_name,
subnet_name='default')
# Create the cluster
head_compute_target = ComputeTarget.create(ws, head_compute_name, provisioning_config)
# Can poll for a minimum number of nodes and for a specific timeout.
# If no min node count is provided it will use the scale settings for the cluster
head_compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
# For a more detailed view of current AmlCompute status, use get_status()
print(head_compute_target.get_status().serialize())
```
### Create worker compute target
Now we create a compute target with CPUs for the additional Ray worker nodes. CPUs in these worker nodes are used by Ray worker processes. Each Ray worker node, depending on the CPUs on the node, may have multiple Ray worker processes. There can be multiple worker tasks on each worker process (core).
```
# Choose a name for your Ray worker compute target
worker_compute_name = 'worker-cpu'
worker_compute_min_nodes = 0
worker_compute_max_nodes = 4
# This example uses CPU VM. For using GPU VM, set SKU to STANDARD_NC6
worker_vm_size = 'STANDARD_D2_V2'
# Create the compute target if it hasn't been created already
if worker_compute_name in ws.compute_targets:
worker_compute_target = ws.compute_targets[worker_compute_name]
if worker_compute_target and type(worker_compute_target) is AmlCompute:
if worker_compute_target.provisioning_state == 'Succeeded':
print('found worker compute target. just use it', worker_compute_name)
else:
raise Exception(
'found worker compute target but it is in state', head_compute_target.provisioning_state)
else:
print('creating a new worker compute target...')
provisioning_config = AmlCompute.provisioning_configuration(
vm_size=worker_vm_size,
min_nodes=worker_compute_min_nodes,
max_nodes=worker_compute_max_nodes,
vnet_resourcegroup_name=ws.resource_group,
vnet_name=vnet_name,
subnet_name='default')
# Create the compute target
worker_compute_target = ComputeTarget.create(ws, worker_compute_name, provisioning_config)
# Can poll for a minimum number of nodes and for a specific timeout.
# If no min node count is provided it will use the scale settings for the cluster
worker_compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
# For a more detailed view of current AmlCompute status, use get_status()
print(worker_compute_target.get_status().serialize())
```
## Train Pong Agent
To facilitate reinforcement learning, Azure Machine Learning Python SDK provides a high level abstraction, the _ReinforcementLearningEstimator_ class, which allows users to easily construct reinforcement learning run configurations for the underlying reinforcement learning framework. Reinforcement Learning in Azure Machine Learning supports the open source [Ray framework](https://ray.io/) and its highly customizable [RLLib](https://ray.readthedocs.io/en/latest/rllib.html#rllib-scalable-reinforcement-learning). In this section we show how to use _ReinforcementLearningEstimator_ and Ray/RLLib framework to train a Pong playing agent.
### Define worker configuration
Define a `WorkerConfiguration` using your worker compute target. We specify the number of nodes in the worker compute target to be used for training; additional PIP packages to install on those nodes as part of setup can also be listed here.
When PIP packages are specified, they are applied as dependencies for both head and worker nodes. With this setup, the game simulations run directly on the worker compute nodes.
```
from azureml.contrib.train.rl import WorkerConfiguration
# Specify the Ray worker configuration
worker_conf = WorkerConfiguration(
# Azure Machine Learning compute target to run Ray workers
compute_target=worker_compute_target,
# Number of worker nodes
node_count=4,
# GPU
use_gpu=False,
# PIP packages to use
)
```
### Create reinforcement learning estimator
The `ReinforcementLearningEstimator` is used to submit a job to Azure Machine Learning to start the Ray experiment run. We define the training script parameters here that will be passed to the estimator.
We set `episode_reward_mean` to 18 because we want to stop training as soon as the trained agent reaches an average win margin of at least 18 points over its opponent across all episodes in the training epoch.
The number of Ray worker processes is defined by the `num_workers` parameter. We set it to 13 because we have 13 CPUs available across our compute targets. Multiple Ray worker processes parallelize agent training and help reach that goal faster.
```
Number of CPUs in head_compute_target = 6 CPUs in 1 node = 6
Number of CPUs in worker_compute_target = 2 CPUs in each of 4 nodes = 8
Number of CPUs available = (Number of CPUs in head_compute_target) + (Number of CPUs in worker_compute_target) - (1 CPU for head node) = 6 + 8 - 1 = 13
```
```
from azureml.contrib.train.rl import ReinforcementLearningEstimator, Ray
training_algorithm = "IMPALA"
rl_environment = "PongNoFrameskip-v4"
# Training script parameters
script_params = {
# Training algorithm, IMPALA in this case
"--run": training_algorithm,
# Environment, Pong in this case
"--env": rl_environment,
# Add additional single quotes at the both ends of string values as we have spaces in the
# string parameters, outermost quotes are not passed to scripts as they are not actually part of string
# Number of GPUs
# Number of ray workers
"--config": '\'{"num_gpus": 1, "num_workers": 13}\'',
# Target episode reward mean to stop the training
# Total training time in seconds
"--stop": '\'{"episode_reward_mean": 18, "time_total_s": 3600}\'',
}
# Reinforcement learning estimator
rl_estimator = ReinforcementLearningEstimator(
# Location of source files
source_directory='files',
# Python script file
entry_script="pong_rllib.py",
# Parameters to pass to the script file
# Defined above.
script_params=script_params,
# The Azure Machine Learning compute target set up for Ray head nodes
compute_target=head_compute_target,
# GPU usage
use_gpu=True,
# Reinforcement learning framework. Currently must be Ray.
rl_framework=Ray('0.8.3'),
# Ray worker configuration defined above.
worker_configuration=worker_conf,
# How long to wait for whole cluster to start
cluster_coordination_timeout_seconds=3600,
# Maximum time for the whole Ray job to run
# This will cut off the run after an hour
max_run_duration_seconds=3600,
# Allow the docker container Ray runs in to make full use
# of the shared memory available from the host OS.
shm_size=24*1024*1024*1024
)
```
### Training script
As recommended in [RLlib](https://ray.readthedocs.io/en/latest/rllib.html) documentations, we use Ray [Tune](https://ray.readthedocs.io/en/latest/tune.html) API to run the training algorithm. All the RLlib built-in trainers are compatible with the Tune API. Here we use tune.run() to execute a built-in training algorithm. For convenience, down below you can see part of the entry script where we make this call.
```python
tune.run(
run_or_experiment=args.run,
config={
"env": args.env,
"num_gpus": args.config["num_gpus"],
"num_workers": args.config["num_workers"],
"callbacks": {"on_train_result": callbacks.on_train_result},
"sample_batch_size": 50,
"train_batch_size": 1000,
"num_sgd_iter": 2,
"num_data_loader_buffers": 2,
"model": {"dim": 42},
},
stop=args.stop,
local_dir='./logs')
```
### Submit the estimator to start a run
Now we use the rl_estimator configured above to submit a run.
```
run = exp.submit(config=rl_estimator)
```
### Monitor the run
Azure Machine Learning provides a Jupyter widget to show the status of an experiment run. You can use this widget to monitor the status of the runs. The widget shows a list of two child runs, one for the head compute target and one for the worker compute target. You can click the link under **Status** to see the details of a child run; the widget also shows the metrics being logged.
```
from azureml.widgets import RunDetails
RunDetails(run).show()
```
### Stop the run
To stop the run, call `run.cancel()`.
```
# Uncomment line below to cancel the run
# run.cancel()
```
### Wait for completion
Wait for the run to complete before proceeding. If you want to stop the run, you may skip this and move to the next section below.
**Note: The run may take anywhere from 30 minutes to 45 minutes to complete.**
```
run.wait_for_completion()
```
### Performance of the agent during training
Let's get the reward metrics for the training run and observe how the agent's rewards improved over the training iterations as it learned to win the Pong game.
Collect the episode reward metrics from the worker run's metrics.
```
# Get the reward metrics from worker run
episode_reward_mean = run.get_metrics(name='episode_reward_mean')
```
Plot the reward metrics.
```
import matplotlib.pyplot as plt
plt.plot(episode_reward_mean['episode_reward_mean'])
plt.xlabel('training_iteration')
plt.ylabel('episode_reward_mean')
plt.show()
```
We observe that, over the course of training, the agent learns to win the Pong game against its opponent by our target margin of 18 points in each 21-point episode.
**Congratulations!! You have trained your Pong agent to win a game.**
## Cleaning up
For your convenience, below you can find code snippets to clean up any resources created as part of this tutorial that you don't wish to retain.
```
# To archive the created experiment:
#experiment.archive()
# To delete the compute targets:
#head_compute_target.delete()
#worker_compute_target.delete()
```
## Next
In this example, you learned how to solve distributed reinforcement learning training problems using head and worker compute targets. This was an introductory tutorial on the Reinforcement Learning in Azure Machine Learning service offering. We would love to hear your feedback to build the features you need!
<a href="https://colab.research.google.com/github/JSJeong-me/KOSA-Python_Algorithm/blob/main/concurrent/Faster.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
# SuperFastPython.com
# download document files concurrently and save the files locally concurrently
from os import makedirs
from os.path import basename
from os.path import join
from urllib.request import urlopen
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import as_completed
# download a url and return the raw data, or None on error
def download_url(url):
try:
# open a connection to the server
with urlopen(url, timeout=3) as connection:
# read the contents of the html doc
return (connection.read(), url)
except:
# bad url, socket timeout, http forbidden, etc.
return (None, url)
# save data to a local file
def save_file(url, data, path):
# get the name of the file from the url
filename = basename(url)
# construct a local path for saving the file
outpath = join(path, filename)
# save to file
with open(outpath, 'wb') as file:
file.write(data)
return outpath
# download a list of URLs to local files
def download_docs(urls, path):
# create the local directory, if needed
makedirs(path, exist_ok=True)
# create the thread pool
n_threads = len(urls)
with ThreadPoolExecutor(n_threads) as executor:
# download each url and save as a local file
futures = [executor.submit(download_url, url) for url in urls]
# process each result as it is available
for future in as_completed(futures):
# get the downloaded url data
data, url = future.result()
# check for no data
if data is None:
print(f'>Error downloading {url}')
continue
# save the data to a local file
outpath = save_file(url, data, path)
# report progress
print(f'>Saved {url} to {outpath}')
# python concurrency API docs
URLS = ['https://docs.python.org/3/library/concurrency.html',
'https://docs.python.org/3/library/concurrent.html',
'https://docs.python.org/3/library/concurrent.futures.html',
'https://docs.python.org/3/library/threading.html',
'https://docs.python.org/3/library/multiprocessing.html',
'https://docs.python.org/3/library/multiprocessing.shared_memory.html',
'https://docs.python.org/3/library/subprocess.html',
'https://docs.python.org/3/library/queue.html',
'https://docs.python.org/3/library/sched.html',
'https://docs.python.org/3/library/contextvars.html']
# local path for saving the files
PATH = './'
# download all docs
download_docs(URLS, PATH)
```
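For context on why the thread pool helps: the downloads are I/O-bound, so a sequential loop spends most of its time waiting on the network. Below is a hypothetical serial baseline you could time against the threaded version above; it reuses the `download_url` and `save_file` helpers already defined.
```python
from time import perf_counter

# Hypothetical serial baseline: download and save the same URLs one at a time.
def download_docs_serial(urls, path):
    makedirs(path, exist_ok=True)
    for url in urls:
        data, url = download_url(url)
        if data is None:
            print(f'>Error downloading {url}')
            continue
        print(f'>Saved {url} to {save_file(url, data, path)}')

start = perf_counter()
download_docs_serial(URLS, PATH)
print(f'Serial download took {perf_counter() - start:.2f} seconds')
```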
```
from fastai.vision.all import *
pd.options.display.max_columns = 100
datapath = Path("/../rsna_data/")
train_df = pd.read_csv(datapath/'train.csv')
train_df.pe_present_on_image.mean()
```
#### Load All Image Files
```
imgdatapath = (datapath/'full_raw_512')
files = get_image_files(imgdatapath)
filesdict = defaultdict(list)
for o in files: filesdict[o.parent.name] += [o]
len(filesdict)
labels_dict = dict(zip(train_df['SOPInstanceUID'], train_df['pe_present_on_image']))
len(files), len(labels_dict)
def get_label(o): return labels_dict[o.stem.split("_")[1]]
```
#### Load Metadata
```
metadata_path = datapath/'metadata'
metadata_files = get_files(metadata_path, extensions='.csv')
metadata_files
pid2metadata = {o.stem:pd.read_csv(o) for o in metadata_files}
```
#### Load Fold PIDs
```
resize = 512
# resize = 256
do_cv = True
FOLD = 0
if do_cv:
cv_pids_dir = (datapath/'cv_pids')
if not cv_pids_dir.exists(): cv_pids_dir.mkdir()
cv_df = train_df[['StudyInstanceUID', 'negative_exam_for_pe']].drop_duplicates().reset_index(drop=True)
all_pids = cv_df['StudyInstanceUID'].values
valid_pids = pd.read_pickle(datapath/f'cv_pids/pids_fold{FOLD}.pkl')
train_pids = list(set(all_pids).difference(valid_pids))
len(train_pids), len(valid_pids), len(train_pids+valid_pids)
train_metadf = pd.concat([pid2metadata[o] for o in train_pids]).reset_index(drop=True)
valid_metadf = pd.concat([pid2metadata[o] for o in valid_pids]).reset_index(drop=True)
```
#### Get Valid Files
```
train_files,valid_files = [],[]
for o in train_pids: train_files += filesdict[o]
for o in valid_pids: valid_files += filesdict[o]
len(train_files), len(valid_files), len(train_files+valid_files)
```
#### Load Model
```
# learn = load_learner(f"./models/xresnet34-{resize}-PR-fold{FOLD}-export.pkl", cpu=False)
learn = load_learner(f"./models/effb3-{resize}-PR-fold{FOLD}-export.pkl", cpu=False)
```
#### Get preds & Visual Embeddings
```
class EmbeddingHook:
def __init__(self, m, savedir, filename, csz=500000):
store_attr("m,savedir,filename,csz")
if len(m._forward_hooks) > 0: self.reset()
self.embeddings = tensor([])
self.hook = Hook(m, self.hook_fn, cpu=True)
self.save_iter = 0
savedir = Path(savedir)
if not savedir.exists(): savedir.mkdir()
def hook_fn(self, m, inp, out):
"Stack and save computed embeddings"
self.embeddings = torch.cat([self.embeddings, out])
if self.embeddings.shape[0] > self.csz:
self.save()
self.embeddings = tensor([])
def reset(self): self.m._forward_hooks = OrderedDict()
def save(self):
torch.save(self.embeddings, self.savedir/f"{self.filename}_part{self.save_iter}.pth")
self.save_iter += 1
len(train_files), len(valid_files)
all_files = train_files + valid_files
len(all_files)
all_dl = learn.dls.test_dl(all_files, with_labels=True, bs=64)
folder = f"full_EFFNETB3_{resize}_ALL_FROM_FOLD{FOLD}"; folder
# embhook = EmbeddingHook(learn.model[1][1], datapath/f'cnn_embs/{folder}', 'xresnet34_embeddings')
embhook = EmbeddingHook(learn.model._avg_pooling, datapath/f'cnn_embs/{folder}', 'effb3_embeddings')
preds, targs = learn.get_preds(dl=all_dl, act=noop)
# # Save preds, embeddings and ordered valid filenames
# torch.save(embhook.embeddings, datapath/f'cnn_embs/{folder}'/'xresnet34_embeddings_finalpart.pth')
# torch.save(preds, datapath/f'cnn_embs/{folder}'/'preds.pth')
# torch.save(all_dl.dataset.items, datapath/f'cnn_embs/{folder}'/'files.pth')
# Save preds, embeddings and ordered valid filenames
torch.save(embhook.embeddings, datapath/f'cnn_embs/{folder}'/'effb3_embeddings_finalpart.pth')
torch.save(preds, datapath/f'cnn_embs/{folder}'/'preds.pth')
torch.save(all_dl.dataset.items, datapath/f'cnn_embs/{folder}'/'files.pth')
# embeddings = torch.cat([torch.load(o) for o in [o for o in (datapath/f'cnn_embs/{folder}').ls() if 'embeddings' in str(o)]])
# embeddings.shape, preds.shape
```
`qi` = proportion of PE-positive images in the exam (used as the per-image weight in the metric).
### Image Weighted Log Loss (Competition Metric) - 2D CNN models
| Model | Fold | Size | Temp | Weighted log loss |
| --- | --- | --- | --- | --- |
| Xresnet34 | 0 | 256 | 1.3 | 0.3881 |
| Effnetb3 | 0 | 256 | 1.2 | 0.3356 |
| Xresnet34 | 1 | 256 | 1.3 | 0.3684 |
| Xresnet34 | 0 | 512 | 0.8 | 0.2639 |
| Effnetb3 | 0 | 512 | 1.5 | 0.2655 |
| Xresnet34 | 1 | 512 | 1.5 | 0.2679 |
| Xresnet34 | 2 | 512 | 1.4 | 0.2686 |
| Xresnet34 | 3 | 512 | 1.1 | 0.2373 |
| Xresnet34 | 4 | 512 | 1.1 | 0.2533 |
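In formula form (matching the computation in the next cell), the image-level component is a `qi`-weighted log loss:
```latex
\text{image loss} = \frac{\sum_i q_i \, \ell_i}{\sum_i q_i},
\qquad \ell_i = -\log p_i(y_i), \quad
q_i = \text{fraction of PE-positive slices in the exam containing image } i
```
Here `p_i(y_i)` is the predicted probability of the true label for image `i`.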
```
valid_labels = L(valid_files).map(get_label)
valid_p = np.mean(valid_labels)
1-valid_p
accuracy(preds, targs)
sids = L(valid_files).map(lambda o: o.parent.name)
sid2qi =dict(pd.DataFrame({"sid":sids, "labels": valid_labels}).groupby("sid")['labels'].mean())
qis = tensor([sid2qi[o] for o in sids])
for temp in np.linspace(0.1, 2, 20):
l = F.cross_entropy(preds.float()/temp, targs, reduction='none')
avg_logloss = (l*qis).sum()/qis.sum()
print(temp, avg_logloss.item())
qis.sum()
plt.hist((preds.float()/.8).softmax(1)[:, 1])
img_losses = F.cross_entropy(preds.float()/0.8, targs, reduction='none')
tot_img_loss = (img_losses*qis).sum()
tot_img_wgts = qis.sum()
avg_logloss = tot_img_loss/tot_img_wgts;avg_logloss
tot_img_loss, tot_img_wgts
```
### Exam Weighted Log Loss
**Mean baseline** (predict the training-set mean for every exam-level target): Fold 1 = 0.3518
```
exam_targets = L([
# 'positive_exam_for_pe'
'negative_exam_for_pe',
'indeterminate',
'rv_lv_ratio_gte_1',
'rv_lv_ratio_lt_1',
# none
'leftsided_pe',
'rightsided_pe',
'central_pe',
'chronic_pe',
'acute_and_chronic_pe',
# neither chronic or acute_and_chronic
# 'qa_motion',
# 'qa_contrast',
# 'flow_artifact',
# 'true_filling_defect_not_pe',
]); exam_targets
neg_pe_wgt = 0.0736196319
indeterminate_wgt = 0.09202453988
rv_lv_gte_1_wgt = 0.2346625767
rv_lv_lt_1_wgt = 0.0782208589
left_pe_wgt = 0.06257668712
right_pe_wgt = 0.06257668712
central_pe_wgt = 0.1877300613
chronic_wgt = 0.1042944785
acute_chronic_wgt = 0.1042944785
exam_wgts = tensor([0.0736196319,0.09202453988,0.2346625767,0.0782208589,0.06257668712,0.06257668712,0.1877300613,0.1042944785, 0.1042944785])
train_targsdf = train_df[train_df.StudyInstanceUID.isin(train_pids)][["StudyInstanceUID"]+exam_targets].drop_duplicates()
valid_targsdf = train_df[train_df.StudyInstanceUID.isin(valid_pids)][["StudyInstanceUID"]+exam_targets].drop_duplicates()
exam_mean_preds = dict(train_targsdf[exam_targets].mean())
exam_mean_preds
exam_losses = F.binary_cross_entropy(tensor(list(exam_mean_preds.values()))[None,...].repeat(len(valid_pids),1),
tensor(valid_targsdf[exam_targets].values).float(),
reduction='none')
tot_exam_loss = (exam_losses*exam_wgts).sum()
tot_exam_wgts = (len(valid_pids)*exam_wgts.sum())
avg_exam_loss = tot_exam_loss/tot_exam_wgts; avg_exam_loss
```
### Combine both
The two components carry almost equal total weight, so the combined score is close to the mean of the two; the cell below computes it exactly as a weighted combination using the image weight from the metric definition.
```
img_wgt = 0.07361963
(tot_img_loss*img_wgt + tot_exam_loss) / (tot_img_wgts*img_wgt + tot_exam_wgts)
```
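Written out (mirroring the cell above), the combined score is the weighted sum of the image and exam loss totals divided by the sum of their weights:
```latex
\text{score} = \frac{w_{\text{img}} \sum_i q_i \, \ell_i \;+\; \sum_{e}\sum_{j} w_j \, \ell_{e,j}}
{w_{\text{img}} \sum_i q_i \;+\; N_{\text{exams}} \sum_j w_j},
\qquad w_{\text{img}} = 0.07361963
```
Here `e` indexes exams, `j` the nine exam-level targets with weights `w_j`, and the `\ell` terms are the corresponding log losses.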
|
github_jupyter
|
from fastai.vision.all import *
pd.options.display.max_columns = 100
datapath = Path("/../rsna_data/")
train_df = pd.read_csv(datapath/'train.csv')
train_df.pe_present_on_image.mean()
imgdatapath = (datapath/'full_raw_512')
files = get_image_files(imgdatapath)
filesdict = defaultdict(list)
for o in files: filesdict[o.parent.name] += [o]
len(filesdict)
labels_dict = dict(zip(train_df['SOPInstanceUID'], train_df['pe_present_on_image']))
len(files), len(labels_dict)
def get_label(o): return labels_dict[o.stem.split("_")[1]]
metadata_path = datapath/'metadata'
metadata_files = get_files(metadata_path, extensions='.csv')
metadata_files
pid2metadata = {o.stem:pd.read_csv(o) for o in metadata_files}
resize = 512
# resize = 256
do_cv = True
FOLD = 0
if do_cv:
cv_pids_dir = (datapath/'cv_pids')
if not cv_pids_dir.exists(): cv_pids_dir.mkdir()
cv_df = train_df[['StudyInstanceUID', 'negative_exam_for_pe']].drop_duplicates().reset_index(drop=True)
all_pids = cv_df['StudyInstanceUID'].values
valid_pids = pd.read_pickle(datapath/f'cv_pids/pids_fold{FOLD}.pkl')
train_pids = list(set(all_pids).difference(valid_pids))
len(train_pids), len(valid_pids), len(train_pids+valid_pids)
train_metadf = pd.concat([pid2metadata[o] for o in train_pids]).reset_index(drop=True)
valid_metadf = pd.concat([pid2metadata[o] for o in valid_pids]).reset_index(drop=True)
train_files,valid_files = [],[]
for o in train_pids: train_files += filesdict[o]
for o in valid_pids: valid_files += filesdict[o]
len(train_files), len(valid_files), len(train_files+valid_files)
# learn = load_learner(f"./models/xresnet34-{resize}-PR-fold{FOLD}-export.pkl", cpu=False)
learn = load_learner(f"./models/effb3-{resize}-PR-fold{FOLD}-export.pkl", cpu=False)
class EmbeddingHook:
def __init__(self, m, savedir, filename, csz=500000):
store_attr("m,savedir,filename,csz")
if len(m._forward_hooks) > 0: self.reset()
self.embeddings = tensor([])
self.hook = Hook(m, self.hook_fn, cpu=True)
self.save_iter = 0
savedir = Path(savedir)
if not savedir.exists(): savedir.mkdir()
def hook_fn(self, m, inp, out):
"Stack and save computed embeddings"
self.embeddings = torch.cat([self.embeddings, out])
if self.embeddings.shape[0] > self.csz:
self.save()
self.embeddings = tensor([])
def reset(self): self.m._forward_hooks = OrderedDict()
def save(self):
torch.save(self.embeddings, self.savedir/f"{self.filename}_part{self.save_iter}.pth")
self.save_iter += 1
len(train_files), len(valid_files)
all_files = train_files + valid_files
len(all_files)
all_dl = learn.dls.test_dl(all_files, with_labels=True, bs=64)
folder = f"full_EFFNETB3_{resize}_ALL_FROM_FOLD{FOLD}"; folder
# embhook = EmbeddingHook(learn.model[1][1], datapath/f'cnn_embs/{folder}', 'xresnet34_embeddings')
embhook = EmbeddingHook(learn.model._avg_pooling, datapath/f'cnn_embs/{folder}', 'effb3_embeddings')
preds, targs = learn.get_preds(dl=all_dl, act=noop)
# # Save preds, embeddings and ordered valid filenames
# torch.save(embhook.embeddings, datapath/f'cnn_embs/{folder}'/'xresnet34_embeddings_finalpart.pth')
# torch.save(preds, datapath/f'cnn_embs/{folder}'/'preds.pth')
# torch.save(all_dl.dataset.items, datapath/f'cnn_embs/{folder}'/'files.pth')
# Save preds, embeddings and ordered valid filenames
torch.save(embhook.embeddings, datapath/f'cnn_embs/{folder}'/'effb3_embeddings_finalpart.pth')
torch.save(preds, datapath/f'cnn_embs/{folder}'/'preds.pth')
torch.save(all_dl.dataset.items, datapath/f'cnn_embs/{folder}'/'files.pth')
# embeddings = torch.cat([torch.load(o) for o in [o for o in (datapath/f'cnn_embs/{folder}').ls() if 'embeddings' in str(o)]])
# embeddings.shape, preds.shape
valid_labels = L(valid_files).map(get_label)
valid_p = np.mean(valid_labels)
1-valid_p
accuracy(preds, targs)
sids = L(valid_files).map(lambda o: o.parent.name)
sid2qi =dict(pd.DataFrame({"sid":sids, "labels": valid_labels}).groupby("sid")['labels'].mean())
qis = tensor([sid2qi[o] for o in sids])
for temp in np.linspace(0.1, 2, 20):
l = F.cross_entropy(preds.float()/temp, targs, reduction='none')
avg_logloss = (l*qis).sum()/qis.sum()
print(temp, avg_logloss.item())
qis.sum()
plt.hist((preds.float()/.8).softmax(1)[:, 1])
img_losses = F.cross_entropy(preds.float()/0.8, targs, reduction='none')
tot_img_loss = (img_losses*qis).sum()
tot_img_wgts = qis.sum()
avg_logloss = tot_img_loss/tot_img_wgts;avg_logloss
tot_img_loss, tot_img_wgts
exam_targets = L([
# 'positive_exam_for_pe'
'negative_exam_for_pe',
'indeterminate',
'rv_lv_ratio_gte_1',
'rv_lv_ratio_lt_1',
# none
'leftsided_pe',
'rightsided_pe',
'central_pe',
'chronic_pe',
'acute_and_chronic_pe',
# neither chronic or acute_and_chronic
# 'qa_motion',
# 'qa_contrast',
# 'flow_artifact',
# 'true_filling_defect_not_pe',
]); exam_targets
neg_pe_wgt = 0.0736196319
indeterminate_wgt = 0.09202453988
rv_lv_gte_1_wgt = 0.2346625767
rv_lv_lt_1_wgt = 0.0782208589
left_pe_wgt = 0.06257668712
right_pe_wgt = 0.06257668712
central_pe_wgt = 0.1877300613
chronic_wgt = 0.1042944785
acute_chronic_wgt = 0.1042944785
exam_wgts = tensor([0.0736196319,0.09202453988,0.2346625767,0.0782208589,0.06257668712,0.06257668712,0.1877300613,0.1042944785, 0.1042944785])
train_targsdf = train_df[train_df.StudyInstanceUID.isin(train_pids)][["StudyInstanceUID"]+exam_targets].drop_duplicates()
valid_targsdf = train_df[train_df.StudyInstanceUID.isin(valid_pids)][["StudyInstanceUID"]+exam_targets].drop_duplicates()
exam_mean_preds = dict(train_targsdf[exam_targets].mean())
exam_mean_preds
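# Baseline exam-level loss: use the training-set label means as constant predictions for every validation exam and compute the label-weighted BCE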
exam_losses = F.binary_cross_entropy(tensor(list(exam_mean_preds.values()))[None,...].repeat(len(valid_pids),1),
tensor(valid_targsdf[exam_targets].values).float(),
reduction='none')
tot_exam_loss = (exam_losses*exam_wgts).sum()
tot_exam_wgts = (len(valid_pids)*exam_wgts.sum())
avg_exam_loss = tot_exam_loss/tot_exam_wgts; avg_exam_loss
img_wgt = 0.07361963
(tot_img_loss*img_wgt + tot_exam_loss) / (tot_img_wgts*img_wgt + tot_exam_wgts)
## DataFrames
-------
Explore all methods and features available for DataFrame-based operations
### Background
-----
```
import sys
import pandas as pd
import numpy as np
from io import StringIO
from pandas.io.json import json_normalize
import json
pd.show_versions()
```
### Create a DataFrame
------
- _DataFrame of m rows and n cols_
```
pd.DataFrame(np.random.randn(6,4), columns=list('ABCD'))
pd.DataFrame({ 'A' : 1,
'B1' : pd.Timestamp('20130102'),
'B2' : pd.date_range('20130101', periods=4),
'C' : pd.Series(1, index=list(range(4)), dtype='float32'),
'D' : np.array([3] * 4,dtype='int32'),
'E' : pd.Categorical(["test","train","test","train"]) })
data = 'col1,col2,col3\na,b,1\na,b,2\nc,d,3'
pd.read_csv(StringIO(data))
data = 'col1;col2;col3\na;b;1\na;b;2\nc;d;3'
pd.read_csv(StringIO(data), sep=";")
example = {'first_name': ['Jason', 'Molly', 'Tina', 'Jake', 'Amy'],
'last_name': ['Miller', 'Jacobson', ".", 'Milner', 'Cooze'],
'age': [42, 52, 36, 24, 73],
'preTestScore': [4, 24, 31, ".", "."],
'postTestScore': ["25,000", "94,000", 57, 62, 70]}
df = pd.DataFrame(example, columns = ['first_name', 'last_name', 'age', 'preTestScore', 'postTestScore'])
df.to_csv('../data/example.csv')
df
pd.read_csv
pd.read_csv('../data/example.csv')
pd.read_csv('../data/example.csv', header=None)
pd.read_csv('../data/example.csv', names=['UID', 'First Name', 'Last Name', 'Age', 'Pre-Test Score', 'Post-Test Score'])
pd.read_csv('../data/example.csv',
index_col=['First Name', 'Last Name'],
names=['UID', 'First Name', 'Last Name', 'Age', 'Pre-Test Score', 'Post-Test Score'])
```
## ISSUES
### Inner Join [Link](https://github.com/ZNClub-PA-ML-AI/DataFrames/issues/3)
```
df1 = pd.read_csv('../data/source1.csv')
df2 = pd.read_csv('../data/source2.csv')
df1.shape, df2.shape
pd.merge(left=df1, right=df2, on='id')
df1.join(other=df2.set_index('id'), on='id', rsuffix='_df2')  # join expects the key in the right frame's index; rsuffix avoids a column-name clash
# ?df1.join
df1.merge(right=df2, on='id').sort_values(['id','name_x'], ascending=[False, True])
# ?df1.sort_values
```
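A related check when comparing two sources like these: an outer merge with `indicator=True` shows which `id` values appear in only one of the files. This is a small sketch assuming the `df1` and `df2` loaded above.
```
merged = pd.merge(left=df1, right=df2, on='id', how='outer', indicator=True)
merged['_merge'].value_counts()  # counts of both / left_only / right_only
```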
### Sort and Set comparison
```
df1 = pd.read_excel('../data/dsource1.xlsx', sheet_name='Sheet1')
df2 = pd.read_excel('../data/dsource2.xlsx', sheet_name='Sheet1')
df1.shape, df2.shape
df1.index, df2.index
df3 = df1.sort_values(by='child')
df4 = df2.sort_values(by='child')
df3.head(), df4.head()
```
### Summarize set membership of column
```
set(df3['child']).intersection(set(df4['child']))
set(df3['child']).difference(set(df4['child']))
set(df4['child']).difference(set(df3['child']))
```
# Chapter 9. Computational Complexity and Intractability: An Introduction to NP Theory
## Main Topics
* Section 1: Computational complexity and intractability
* Section 3: Classifying problems by tractability
* Section 4: NP theory
## Section 1: Computational Complexity and Intractability
### Computational complexity
* The study of computational complexity: the study of all possible algorithms that can solve a given problem
* Computational complexity analysis: finding a lower bound on the efficiency (complexity) of all algorithms that solve the same problem
#### Example: the matrix multiplication problem
* Lower bound for solving the matrix multiplication problem: $\Omega(n^2)$
* Best-performing algorithm known so far
    * Le Gall (2014)
    * $\Theta(n^{2.3728639})$
#### What the lower bound means
* No algorithm that performs matrix multiplication can be better than $\Theta(n^2)$.
* However, this does not guarantee that an algorithm with $\Theta(n^2)$ complexity can actually be found.
#### Example: the sorting problem
* An algorithm as good as the known lower bound exists
* Lower bound of the sorting problem: $\Omega(n \lg n)$
<div align="center"><img src="./images/algo09/algo09-01.png" width="650"/></div>
### Intractability
#### Polynomial-time algorithms
* An algorithm whose worst-case time complexity has a polynomial upper bound
$$
W(n) \in O(p(n))
$$
where $p(n)$ is a polynomial.
* Algorithms whose worst-case time complexity is any of the following are all polynomial-time algorithms:
$$
2n \qquad 3 n^3 + 4n \qquad 5n+n^{10} \qquad n \lg n
$$
* Note: $n\lg n < n^2$
* Algorithms whose worst-case time complexity is any of the following are not polynomial-time algorithms:
$$
2^n \qquad 2^{0.01 n} \qquad 2^{\sqrt{n}} \qquad n!
$$
* Non-polynomial-time algorithms can still run efficiently in many practical cases.
    * Example: backtracking algorithms
* Conversely, a problem that has a polynomial-time algorithm can sometimes be harder in practice than one that does not.
* Therefore, intractability should be interpreted only as an indication that a problem may be hard to handle in practice.
## Section 3: Classifying Problems
1) Problems for which a polynomial-time algorithm has been found
2) Problems proven to be intractable
3) Problems that have not been proven intractable, but for which no polynomial-time algorithm has been found either
### Problems for which a polynomial-time algorithm has been found
* Problems with a known polynomial-time algorithm
    * Example: searching a sorted array, $\Theta(\lg n)$
    * Example: matrix multiplication, $\Theta(n^{2.3728639})$
### Problems proven to be intractable
* These fall into two kinds:
    * Problems that require output of at least exponential size, for example printing every possible path
    * Problems that do not require exponential output but have been proven unsolvable in polynomial time
        * Example: the halting problem and many other problems related to decision problems
### Problems not proven intractable, but with no known polynomial-time algorithm
* Problems for which no polynomial-time algorithm is known, yet for which there is also no proof that no polynomial-time algorithm exists
* There are many such problems; the majority of the hard problems known so far belong to this category
* Examples: the 0-1 knapsack problem, the traveling salesperson problem, the m-coloring problem (m > 2), and so on
## NP Theory
* A theory about the criterion that separates problems with polynomial-time algorithms from those without
### P and NP
#### The class P
* The set of all decision problems that can be solved by a polynomial-time algorithm
    * Example: deciding whether a particular item is contained in a given array
* In contrast, deciding whether a traveling salesperson can visit every city and return within a given time:
    * No polynomial-time algorithm is known for this problem,
    and there is also no proof yet that no such polynomial-time algorithm exists.
#### The class NP
* NP: the set of all decision problems that can be solved by a polynomial-time nondeterministic algorithm
    * NP = nondeterministic polynomial
* Polynomial-time nondeterministic algorithm: a nondeterministic algorithm whose verification stage is a polynomial-time algorithm
* How a nondeterministic algorithm works (see the small sketch below):
    * (Nondeterministic) guessing stage: arbitrarily guess and produce a candidate answer to the problem
    * (Deterministic) verification stage: decide whether the guessed answer is true or false
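As an illustration, here is a small Python sketch of the guess-and-verify pattern for the sum-of-subsets decision problem. A true nondeterministic machine "guesses" all candidates at once; the random loop below only simulates that idea, while the verification step itself runs in time polynomial in the input size:
```
import random

def verify(numbers, target, subset):
    # Deterministic verification stage: polynomial time in len(numbers)
    return sum(numbers[i] for i in subset) == target

def simulated_nondeterministic_solve(numbers, target, tries=1000):
    n = len(numbers)
    for _ in range(tries):
        # Guessing stage: produce an arbitrary candidate subset of indices
        subset = [i for i in range(n) if random.random() < 0.5]
        if verify(numbers, target, subset):
            return True, subset
    return False, None

print(simulated_nondeterministic_solve([2, 7, 11, 5], 16))  # e.g. (True, [2, 3])
```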
#### If a problem is in P, it is in NP!
* Every problem that belongs to P also belongs to NP.
#### Reducibility
* When there exists a polynomial-time transformation algorithm that converts decision problem A into decision problem B, we say that A is
**polynomial-time many-one reducible** to B.
* For short, we say A is **reducible** to B and write it as follows:
$$A \propto B$$
#### NP-complete problems
* A problem B that satisfies the following two conditions is called NP-complete:
    1. B belongs to NP.
    1. Every other problem A in NP can be reduced to B in polynomial time.
* Examples: the traveling salesperson problem, 0-1 knapsack, and most of the hard problems known so far
#### NP-hard problems
* Problems that are at least as hard as NP-complete problems
#### Current status of P, NP, NP-complete, and NP-hard
* Note: it is still unknown whether P = NP
<div align="center"><img src="./images/algo09/algo09-03.png" width="600"/></div>
<Image source: [Wikipedia: P versus NP problem](https://en.wikipedia.org/wiki/P_versus_NP_problem)>
# Custom environment tutorial
This tutorial demonstrates how to create and use a custom environment in nnabla-rl.\
## Preparation
Let's start by first installing nnabla-rl and importing required packages for training.
```
!pip install nnabla-rl
import nnabla as nn
from nnabla import functions as NF
from nnabla import parametric_functions as NPF
import nnabla.solvers as NS
import nnabla_rl
import nnabla_rl.algorithms as A
import nnabla_rl.hooks as H
from nnabla_rl.utils.evaluator import EpisodicEvaluator
from nnabla_rl.models.q_function import DiscreteQFunction
from nnabla_rl.builders import ModelBuilder, SolverBuilder
import nnabla_rl.functions as RF
```
## Understanding gym.Env
If you don't know what the gym library is, the [gym documentation](https://gym.openai.com/docs/) will be helpful. Please read it before creating an original environment.
Referring to the [gym.Env implementation](https://github.com/openai/gym/blob/master/gym/core.py), a gym Env has the following five methods.
- `step(action): Run one timestep of the environment's dynamics.` Its argument is an action, and it should return next_state, reward, done, and info.
- `reset(): Resets the environment to an initial state and returns an initial observation.`
- `render(): Renders the environment.` (Optional)
- `close(): Override close in your subclass to perform any necessary cleanup.` (Optional)
- `seed(): Sets the seed for this env's random number generator(s).` (Optional)
In addition, there are three key attributes.
- `action_space: The Space object corresponding to valid actions.`
- `observation_space: The Space object corresponding to valid observations`
- `reward_range: A tuple corresponding to the min and max possible rewards` (Optional)
action_space and observation_space should be defined by using [gym.Spaces](https://github.com/openai/gym/tree/master/gym/spaces).
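For example, here is a quick look at how these Space objects behave; this is a small sketch assuming the standard gym API:
```
from gym import spaces
import numpy as np

action_space = spaces.Discrete(2)  # two discrete actions: 0 and 1
observation_space = spaces.Box(low=-np.inf, high=np.inf, shape=(2,), dtype=np.float32)

print(action_space.sample())  # a random valid action, e.g. 0 or 1
print(observation_space.contains(np.array([2.5, 2.5], dtype=np.float32)))  # True
```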
These methods and attributes will decide how environment works, so let's implement them!!
## Creating a Simple Environment
As an example, we will create a simple environment called CliffEnv with the following settings.
<img src="./assets/CliffEnv.png" width="500">
- In this environment, the task goal is to reach the region where 10.0 <= x and 5.0 <= y <= 10.0
- State is continuous and has 2 dimension (i.e., x and y).
- There are two discrete actions, up (y+=5), right (x+=5).
- If agent reaches the cliff region (x > 5.0 and x < 10.0 and y > 0.0 and y < 5.0) or (x < 0.0) or (y > 10.0) or (y < 0.0), -100 is given as reward.
- For all timesteps the agent gets -1 as reward.
- If agent reaches the goal (x >= 10.0 and y >= 5.0 and y <= 10.0), 100 is given as reward.
- The initial state is x=2.5, y=2.5.
We can easily guess the optimal actions are \[ "up", "right", "right" \] and the optimal score will be 98 (-1 + -1 + 100).
```
import gym
from gym import spaces
import numpy as np
class CliffEnv(gym.Env):
def __init__(self):
# action is defined as follows:
# 0 = up, 1 = right
self.action_space = spaces.Discrete(2)
self.observation_space = spaces.Box(shape=(2,), low=-np.inf, high=np.inf, dtype=np.float32)
self._state = np.array([2.5, 2.5])
def reset(self):
self._state = np.array([2.5, 2.5])
return self._state
def step(self, action):
if action == 0: # up (y+=5)
self._state[1] += 5.
elif action == 1: # right (x+=5)
self._state[0] += 5.
else:
raise ValueError
x, y = self._state
if (x > 5.0 and y < 5.0) or (x < 0.0) or (y > 10.0) or (y < 0.0):
done = True
reward = -100
elif x >= 10.0 and y >= 5.0 and y <= 10.0:
done = True
reward = 100
else:
done = False
reward = -1
info = {}
return self._state, reward, done, info
```
After defining an original environment, it is a good idea to confirm that your implementation is correct by running this code.
```
env = CliffEnv()
# first call reset and every internal state will be initialized
state = env.reset()
done = False
while not done:
action = env.action_space.sample() # random sample from the action space
next_state, reward, done, info = env.step(action)
print('next_state=', next_state, 'action=', action, 'reward=', reward, 'done=', done)
if done:
print("Episode is Done")
break
```
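To double-check the reward structure described above, we can also roll out the presumed optimal action sequence (up, right, right) and confirm that the return is 98. This is just a small sanity check using the CliffEnv defined above:
```
env = CliffEnv()
state = env.reset()
total_reward = 0
for action in [0, 1, 1]:  # up, right, right
    state, reward, done, info = env.step(action)
    total_reward += reward
print(total_reward, done)  # expected: 98 True
```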
## Applying nnabla-rl to an original environment
The environment is now ready to run the training!!\
Let's apply an nnabla-rl algorithm to the created environment and train the agent!!
Define a Q function, a Q function solver and a solver builder.
```
class CliffQFunction(DiscreteQFunction):
def __init__(self, scope_name: str, n_action: int):
super(CliffQFunction, self).__init__(scope_name)
self._n_action = n_action
def all_q(self, s: nn.Variable) -> nn.Variable:
with nn.parameter_scope(self.scope_name):
h = NF.tanh(NPF.affine(s, 64, name="affine-1"))
h = NF.tanh(NPF.affine(h, 64, name="affine-2"))
q = NPF.affine(h, self._n_action, name="pred-q")
return q
class CliffQFunctionBuilder(ModelBuilder[DiscreteQFunction]):
def build_model(self, scope_name, env_info, algorithm_config, **kwargs):
return CliffQFunction(scope_name, env_info.action_dim)
class CliffSolverBuilder(SolverBuilder):
def build_solver(self, # type: ignore[override]
env_info,
algorithm_config,
**kwargs):
return NS.Adam(alpha=algorithm_config.learning_rate)
```
Instantiate your env and run the training !!
```
train_env = CliffEnv()
eval_env = CliffEnv()
iteration_num_hook = H.IterationNumHook(timing=100)
evaluator = EpisodicEvaluator(run_per_evaluation=10)
evaluation_hook = H.EvaluationHook(eval_env, evaluator, timing=100)
total_timesteps = 10000
config = A.DQNConfig(
gpu_id=0,
gamma=0.99,
learning_rate=1e-5,
batch_size=32,
learner_update_frequency=1,
target_update_frequency=1000,
start_timesteps=1000,
replay_buffer_size=1000,
max_explore_steps=10000,
initial_epsilon=1.0,
final_epsilon=0.0,
test_epsilon=0.0,
)
dqn = A.DQN(train_env, config=config, q_func_builder=CliffQFunctionBuilder(),
q_solver_builder=CliffSolverBuilder())
hooks = [iteration_num_hook, evaluation_hook]
dqn.set_hooks(hooks)
dqn.train_online(train_env, total_iterations=total_timesteps)
eval_env.close()
train_env.close()
```
We can see that the agent gets a score of 98 in the evaluation environment!! That means we solved the task. Congratulations!!
# Mentoria Evolution - Data Analysis
<font color=blue><b> Minerando Dados</b></font><br>
www.minerandodados.com.br
**Important**: Before running the following cells, make sure **all files** are in the same directory
**Import Pandas**
```
import pandas as pd
```
**Load the dataset into memory**
```
dataset = pd.read_csv('kc_house_data.csv', sep=',')
```
**Type: DataFrame**
```
type(dataset)
```
**Print information about the DataFrame**
```
dataset.info()
```
## Mapping SQL to Pandas
```
dataset.head(10)
from IPython.display import Image
Image("tabela-sql-pandas.png")
```
**Return all records of the dataframe**
```
dataset
```
**Return the top 10 records**
```
dataset.head(10)
```
**Return the houses with 3 bedrooms**
```
dataset.loc[dataset['bedrooms']==3]
```
**Return the unique houses in the dataset**
```
dataset.id.unique()
dataset.bedrooms.unique()
dataset.bathrooms.head(10)
dataset.bathrooms.mean()
```
**Return the count of all records per column**
```
dataset.count()
```
**Print the dataframe column names**
```
dataset.columns
```
**Statistical information about the dataset**
```
dataset.describe()
```
## Querying the Dataframe
**List the houses with 3 bedrooms and more than 2 bathrooms**
```
dataset.loc[(dataset['bedrooms']==3) & (dataset['bathrooms'] > 2)]
```
**Count the number of houses with 4 bedrooms**
```
dataset[dataset['bedrooms']==4].count()
```
**Sort the Dataframe by the price column in descending order**
```
dataset.sort_values(by='price', ascending=False)
```
**Group and count the number of houses by number of bedrooms**
```
dataset.bedrooms.value_counts()
dataset.bathrooms.value_counts()
```
## Querying data across more than one dataset
* Use the merge() method to combine the dataframes
* Joins of type **inner join**, **left join** and **right join** (see the small example below)
* Specify the key column for the join
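As a quick illustration of the `how` parameter, here is a toy example with made-up frames (not the course data):
```
left = pd.DataFrame({'id': [1, 2, 3], 'name': ['a', 'b', 'c']})
right = pd.DataFrame({'id': [2, 3, 4], 'value': [20, 30, 40]})
print(pd.merge(left, right, on='id', how='inner'))  # keeps ids 2 and 3 only
print(pd.merge(left, right, on='id', how='left'))   # keeps ids 1, 2, 3 (value is NaN for 1)
print(pd.merge(left, right, on='id', how='right'))  # keeps ids 2, 3, 4 (name is NaN for 4)
```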
**Loading the orders dataset**
```
orders = pd.read_csv('olist_orders_dataset.csv')
orders.head()
```
- Loading the order items dataset
```
orders_items = pd.read_csv('olist_order_items_dataset.csv')
orders_items.head()
```
**Querying the data in the two datasets and joining them on the order_id key**
- Selecting the attributes of the **orders** dataset
> - order_id (order id)
> - order_status (order status)
> - order_approved_at (date and time the order was approved)
- Selecting the attributes of the **orders_items** (order items) dataset
> - product_id (product id)
> - seller_id (seller id)
> - price (product price)
> - freight_value (freight cost)
```
df_query = pd.merge(orders[['order_id','order_status','order_approved_at']],
orders_items[['order_id','product_id','seller_id','price','freight_value']],
on='order_id')
df_query.head()
```
**Left Join**
```
df_query = pd.merge(orders[['order_id','order_status','order_approved_at']],
orders_items[['order_id','product_id','seller_id','price','freight_value']],
on='order_id', how='left')
df_query.head()
```
**Right Join**
```
df_query = pd.merge(orders[['order_id','order_status','order_approved_at']],
                orders_items[['order_id','product_id','seller_id','price','freight_value']],
                on='order_id', how='right')
df_query.head()
```
* Practice what you have learned by redoing all the steps
* Consult the documentation to learn more about the methods and features used.
* **Questions?** Send me an e-mail at contato@minerandodados.com.br
A neural network consisting of 2 CNN layers and 4 fully connected layers.
Source: https://github.com/jojonki/cnn-for-sentence-classification
```
from google.colab import drive
drive.mount('/content/drive')
import os
os.chdir('/content/drive/MyDrive/sharif/DeepLearning/ipython(guide)')
import numpy as np
import codecs
import os
import random
import pandas
from keras import backend as K
from keras.models import Model
from keras.layers.embeddings import Embedding
from keras.layers import Input, Dense, Lambda, Permute, Dropout
from keras.layers import Conv2D, MaxPooling1D,Conv1D
from keras.optimizers import SGD
import ast
import re
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.model_selection import train_test_split
import gensim
from keras.models import load_model
from keras.callbacks import EarlyStopping, ModelCheckpoint
limit_number = 750
data = pandas.read_csv('../Data/limited_to_'+str(limit_number)+'.csv',index_col=0,converters={'body': eval})
data = data.dropna().reset_index(drop=True)
X = data["body"].values.tolist()
y = pandas.read_csv('../Data/limited_to_'+str(limit_number)+'.csv')
labels = []
tag=[]
for item in y['tag']:
labels += [i for i in re.sub('\"|\[|\]|\'| |=','',item.lower()).split(",") if i!='' and i!=' ']
tag.append([i for i in re.sub('\"|\[|\]|\'| |=','',item.lower()).split(",") if i!='' and i!=' '])
labels = list(set(labels))
mlb = MultiLabelBinarizer()
Y=mlb.fit_transform(tag)
len(labels)
sentence_maxlen = max(map(len, (d for d in X)))
print('sentence maxlen', sentence_maxlen)
freq_dist = pandas.read_csv('../Data/FreqDist_sorted.csv',index_col=False)
vocab=[]
for item in freq_dist["word"]:
try:
word=re.sub(r"[\u200c-\u200f]","",item.replace(" ",""))
vocab.append(word)
except:
pass
print(vocab[10])
vocab = sorted(vocab)
vocab_size = len(vocab)
print('vocab size', len(vocab))
w2i = {w:i for i,w in enumerate(vocab)}
# i2w = {i:w for i,w in enumerate(vocab)}
print(w2i["زبان"])
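# Convert each tokenized document into a fixed-length sequence of vocabulary indices, zero-padded up to sentence_maxlen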
def vectorize(data, sentence_maxlen, w2i):
vec_data = []
for d in data:
vec = [w2i[w] for w in d if w in w2i]
pad_len = max(0, sentence_maxlen - len(vec))
vec += [0] * pad_len
vec_data.append(vec)
# print(d)
vec_data = np.array(vec_data)
return vec_data
vecX = vectorize(X, sentence_maxlen, w2i)
vecY=Y
X_train, X_test, y_train, y_test = train_test_split(vecX, vecY, test_size=0.2)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25)
print('train: ', X_train.shape , '\ntest: ', X_test.shape , '\nval: ', X_val.shape ,"\ny_tain:",y_train.shape )
# print(vecX[0])
embd_dim = 300
```
# ***If the word2vec model has not been generated before, we should run the next block.***
```
# embed_model = gensim.models.Word2Vec(X, size=embd_dim, window=5, min_count=5)
# embed_model.save('word2vec_model')
```
# ***Otherwise, we can run the next block.***
```
embed_model=gensim.models.Word2Vec.load('word2vec_model')
word2vec_embd_w = np.zeros((vocab_size, embd_dim))
for word, i in w2i.items():
if word in embed_model.wv.vocab:
embedding_vector =embed_model[word]
# words not found in embedding index will be all-zeros.
word2vec_embd_w[i] = embedding_vector
def Net(vocab_size, embd_size, sentence_maxlen, glove_embd_w):
sentence = Input((sentence_maxlen,), name='SentenceInput')
# embedding
embd_layer = Embedding(input_dim=vocab_size,
output_dim=embd_size,
weights=[word2vec_embd_w],
trainable=False,
name='shared_embd')
embd_sentence = embd_layer(sentence)
embd_sentence = Permute((2,1))(embd_sentence)
embd_sentence = Lambda(lambda x: K.expand_dims(x, -1))(embd_sentence)
# cnn
cnn = Conv2D(1,
kernel_size=(5, sentence_maxlen),
activation='relu')(embd_sentence)
print(cnn.shape)
cnn = Lambda(lambda x: K.sum(x, axis=3))(cnn)
print(cnn.shape)
cnn = MaxPooling1D(3)(cnn)
print(cnn.shape)
cnn1 = Conv1D(1,
kernel_size=(3),
activation='relu')(cnn)
print(cnn1.shape)
# cnn1 = Lambda(lambda x: K.sum(x, axis=3))(cnn1)
print(cnn1.shape)
cnn1 = MaxPooling1D(3)(cnn1)
print(cnn1.shape)
cnn1 = Lambda(lambda x: K.sum(x, axis=2))(cnn1)
print(cnn1.shape)
hidden1=Dense(400,activation="relu")(cnn1)
hidden2=Dense(300,activation="relu")(hidden1)
hidden3=Dense(200,activation="relu")(hidden2)
hidden4=Dense(150,activation="relu")(hidden3)
out = Dense(len(labels), activation='sigmoid')(hidden4)
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
    model = Model(inputs=sentence, outputs=out, name='sentence_classification')
model.compile(optimizer=sgd, loss='binary_crossentropy',metrics=["accuracy","categorical_accuracy"])
return model
model = Net(vocab_size, embd_dim, sentence_maxlen,word2vec_embd_w)
print(model.summary())
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=5)  # stop training after 5 epochs without a decrease in validation loss
mc = ModelCheckpoint('best_2cnn_4fc.h5', monitor='val_loss', mode='min', verbose=1, save_best_only=True)  # save the model weights from the epoch with the lowest validation loss
model.fit(X_train, y_train, batch_size=32, epochs=250, verbose=1, validation_data=(X_val, y_val), callbacks=[es, mc])  # training stops early once validation loss stops improving (patience=5)
```
# ***If the model has been generated before:***
```
model = load_model('best_2cnn_4fc.h5')
# model.save('CNN_1_no_binary.h5')
pred=model.predict(X_test)
# For evaluation: convert each probability to a binary label; here the threshold is derived from the first prediction's mean and standard deviation rather than a fixed 0.5.
print(pred[0])#example
y_pred=[]
measure = np.mean(pred[0]) + 1.15*np.sqrt(np.var(pred[0]))
for l in pred:
temp=[]
for value in l:
if value >= measure:
temp.append(1)
else:
temp.append(0)
y_pred.append(temp)
measure
from sklearn.metrics import classification_report,accuracy_score
print("accuracy=",accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred))
```
<a href="https://colab.research.google.com/github/Warvito/Normative-modelling-using-deep-autoencoders/blob/master/notebooks/freesurfer_organizer.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Organize freesurfer data
In this notebook, we prepared Python code to create the freesurferData.csv file that can be used with our models. Mainly, this code loads the output of the FreeSurfer functions ([link1](https://surfer.nmr.mgh.harvard.edu/fswiki/aparcstats2table), [link2](https://surfer.nmr.mgh.harvard.edu/fswiki/asegstats2table)):
> aparcstats2table --skip --hemi rh --meas volume --tablefile rh_aparc_stats.txt --subjects $list
> aparcstats2table --skip --hemi lh --meas volume --tablefile lh_aparc_stats.txt --subjects $list
> asegstats2table --skip --meas volume --tablefile aseg_stats.txt --subjects $list
where $list indicates the names of your subjects in the SUBJECTS_DIR.
---
First step, import necessary python libraries.
```
from google.colab import files
import pandas as pd
```
## RH_APARC file
Upload the stats text file of the right hemisphere.
```
uploaded = files.upload()
for rh_aparc_filename in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(name=rh_aparc_filename, length=len(uploaded[rh_aparc_filename])))
```
## LH_APARC file
Upload the stats text file of the left hemisphere.
```
uploaded = files.upload()
for lh_aparc_filename in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(name=lh_aparc_filename, length=len(uploaded[lh_aparc_filename])))
```
## ASEG file
Upload the stats text file of the anatomical structures.
```
uploaded = files.upload()
for aseg_filename in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(name=aseg_filename, length=len(uploaded[aseg_filename])))
```
Here, we have a list with all brain regions to be used.
```
#@title Columns name
COLUMNS_NAME = ['EstimatedTotalIntraCranialVol',
'Left-Lateral-Ventricle',
'Left-Inf-Lat-Vent',
'Left-Cerebellum-White-Matter',
'Left-Cerebellum-Cortex',
'Left-Thalamus-Proper',
'Left-Caudate',
'Left-Putamen',
'Left-Pallidum',
'3rd-Ventricle',
'4th-Ventricle',
'Brain-Stem',
'Left-Hippocampus',
'Left-Amygdala',
'CSF',
'Left-Accumbens-area',
'Left-VentralDC',
'Right-Lateral-Ventricle',
'Right-Inf-Lat-Vent',
'Right-Cerebellum-White-Matter',
'Right-Cerebellum-Cortex',
'Right-Thalamus-Proper',
'Right-Caudate',
'Right-Putamen',
'Right-Pallidum',
'Right-Hippocampus',
'Right-Amygdala',
'Right-Accumbens-area',
'Right-VentralDC',
'CC_Posterior',
'CC_Mid_Posterior',
'CC_Central',
'CC_Mid_Anterior',
'CC_Anterior',
'lh_bankssts_volume',
'lh_caudalanteriorcingulate_volume',
'lh_caudalmiddlefrontal_volume',
'lh_cuneus_volume',
'lh_entorhinal_volume',
'lh_fusiform_volume',
'lh_inferiorparietal_volume',
'lh_inferiortemporal_volume',
'lh_isthmuscingulate_volume',
'lh_lateraloccipital_volume',
'lh_lateralorbitofrontal_volume',
'lh_lingual_volume',
'lh_medialorbitofrontal_volume',
'lh_middletemporal_volume',
'lh_parahippocampal_volume',
'lh_paracentral_volume',
'lh_parsopercularis_volume',
'lh_parsorbitalis_volume',
'lh_parstriangularis_volume',
'lh_pericalcarine_volume',
'lh_postcentral_volume',
'lh_posteriorcingulate_volume',
'lh_precentral_volume',
'lh_precuneus_volume',
'lh_rostralanteriorcingulate_volume',
'lh_rostralmiddlefrontal_volume',
'lh_superiorfrontal_volume',
'lh_superiorparietal_volume',
'lh_superiortemporal_volume',
'lh_supramarginal_volume',
'lh_frontalpole_volume',
'lh_temporalpole_volume',
'lh_transversetemporal_volume',
'lh_insula_volume',
'rh_bankssts_volume',
'rh_caudalanteriorcingulate_volume',
'rh_caudalmiddlefrontal_volume',
'rh_cuneus_volume',
'rh_entorhinal_volume',
'rh_fusiform_volume',
'rh_inferiorparietal_volume',
'rh_inferiortemporal_volume',
'rh_isthmuscingulate_volume',
'rh_lateraloccipital_volume',
'rh_lateralorbitofrontal_volume',
'rh_lingual_volume',
'rh_medialorbitofrontal_volume',
'rh_middletemporal_volume',
'rh_parahippocampal_volume',
'rh_paracentral_volume',
'rh_parsopercularis_volume',
'rh_parsorbitalis_volume',
'rh_parstriangularis_volume',
'rh_pericalcarine_volume',
'rh_postcentral_volume',
'rh_posteriorcingulate_volume',
'rh_precentral_volume',
'rh_precuneus_volume',
'rh_rostralanteriorcingulate_volume',
'rh_rostralmiddlefrontal_volume',
'rh_superiorfrontal_volume',
'rh_superiorparietal_volume',
'rh_superiortemporal_volume',
'rh_supramarginal_volume',
'rh_frontalpole_volume',
'rh_temporalpole_volume',
'rh_transversetemporal_volume',
'rh_insula_volume']
```
Then, we merge all the files and select the columns used by our models.
```
aseg_stats = pd.read_csv(aseg_filename, delimiter='\t')
lh_aparc_stats = pd.read_csv(lh_aparc_filename, delimiter='\t')
rh_aparc_stats = pd.read_csv(rh_aparc_filename, delimiter='\t')
combined = pd.merge(aseg_stats, lh_aparc_stats, left_on='Measure:volume', right_on='lh.aparc.volume')
combined = pd.merge(combined, rh_aparc_stats, left_on='Measure:volume', right_on='rh.aparc.volume')
combined.rename(columns={'Measure:volume': 'Image_ID'}, inplace=True)
combined = combined.set_index('Image_ID')[COLUMNS_NAME]
combined.to_csv('freesurferData.csv')
```
Finally, you can download the file.
```
files.download('freesurferData.csv')
```
# 3.2 Strings & String Functions
## 3.2.1 - 3.2.4 Naming Conventions, Joining & Converting Strings
After this exercise unit you will be able to ...
+ name variables according to the Python naming conventions
+ create variables with string values
+ convert the string data type
+ join strings and characters
## 3.2.1 Python Naming Conventions for Variables
If you see a variable named "x" somewhere in a program, do you know what that variable is for and what it contains?
<br>
For a simple numeric value in a short program it may not matter that a variable is called "x". But as a program grows more complex and contains many different variables named "x", "y", "z" and so on, you, and anyone who wants to keep working with your code, quickly lose track.
<br>
> To prevent this, Python, like other programming languages, has certain naming conventions. Following these conventions is part of good manners and good code. You can think of them as an agreement among programmers: if everyone sticks to it, everyone can read and understand everyone else's code more easily.
<br>
In all programming languages the rule is: **name variables as unambiguously as possible after what they contain or do.**
<br>
If, for example, you create a variable that should hold the product of the cost and quantity of an item, an unambiguous name would be "Gesamtkosten" (total costs). However, since it is also customary to name variables in English, an even better name would be "total_costs". This name illustrates several further naming conventions.
<br>
In summary, the **Python naming conventions** are:
* English variable names, e.g.: ``income = 2500``
* variables written entirely in lowercase, except for constants (variables whose value should never change), e.g.: ``PI = 3.14``
* compound names separated by an underscore (more underscores for more than two words, although overly long names should be avoided), e.g.: ``total_customers = 1448``
* variables must not contain special characters. For example, "room13@floor3" is not allowed, but this is: ``room13floor3 = 150``
* variable names should be internally consistent. If, for example, one variable is called "firstname", the variable for the last name should not be named "last_name" but likewise written without an underscore: ``lastname = 'Sander'``
* the following two conventions **must be observed**, because otherwise they lead to program errors:
    * variables must not begin with a digit or a special character; e.g. "1total_costs" would not be a valid name and would produce a <font color = "darkred">SyntaxError</font>
    * do not use variable names that are already reserved internally by Python, such as <b>int</b>, <b>float</b> and <b>str</b> (string). This does not cause an error at declaration and initialization (naming and value assignment), but sooner or later leads to a <font color = "darkred">TypeError</font> when the variable is used further. You can find more reserved names here: https://docs.python.org/3/reference/lexical_analysis.html#keywords
<br>
<div class="alert alert-block alert-info">
<font size="3"><b>Tip:</b></font> Following the naming conventions makes your code more uniform and therefore easier to grasp for everyone who reads it. If you ignore the conventions, in most cases no error will occur. But keep in mind that others, and you yourself after some time, still want to understand your code quickly. Besides, it makes your code look professional.
</div>
<br>
>The separation with an underscore, as in "hello_world", is also called **snake case** (like a snake crawling along the ground). Other programming languages have similar names inspired by the animal world. In Java, for example, "camel case" is the convention; there the same variable would be written as helloWorld (capital first letter of the second word, like a camel's hump).
<br>
<div class="alert alert-block alert-warning">
<font size="3"><b>Exercise:</b></font> The following code could be improved by giving the variables clearer names. In the code cell below it, provide better names for <b>z</b>, <b>w</b> and <b>r</b>.
<br>
This exercise has no single correct solution. The point here is to find meaningful names and to write them according to the naming conventions. The solution to this exercise is meant as a guide for afterwards.
</div>
```
first_name = 'Tim'
z = 'Schneider'
w = '015643287593'
r = 'Berlin'
```
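One possible solution (kept consistent with the underscore style of `first_name`; any equally descriptive English names would work just as well):
```
first_name = 'Tim'
last_name = 'Schneider'
phone_number = '015643287593'
city = 'Berlin'
```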
## 3.2.2 Creating Variables with String Values
So far we have mainly dealt with the numeric data types float and integer (int). Because they represent numbers and are used in calculations, they are written without quotation marks.
<br>
Strings, i.e. text or words, have already appeared a few times in this course as variable values, including in the previous exercise. You have already seen that **strings in Python are always written in quotation marks**. Inside these quotation marks, numbers, operators and so on are treated by Python as part of the string. You can use single or double quotation marks for strings; for example, both of these variants are correct:
<br>
``firstname = 'Tom'``
<br>
``firstname = "Tom"``
<br>
However, it is also a **Python convention** to prefer **single quotation marks**, so rather:
<br>
``firstname = 'Tom'``
<br>
Also important to note is that the two **must not be mixed**, because that would produce an error, as in:
```
firstname = 'Tom"
```
If you write a string that itself contains quotation marks, for example ...
```
sentence = 'Roberts Lieblingsfilm ist "Terminator".'
print(sentence)
```
..., you need to use the other kind of quotation marks inside the sentence. The same sentence could therefore also be written like this (in German, however, proper names are set in double quotation marks):
```
sentence = "Roberts Lieblingsfilm ist 'Terminator'."
print(sentence)
```
## 3.2.3 Converting from/to Strings
If you have created a string with a numeric value, you can **convert** it to an integer or float, for example:
```
number = '123'
number = int(number)
type(number)
```
Conversely, you can also convert numbers to strings, using ``str()``.
<br>
<div class="alert alert-block alert-info">
<font size="3"><b>Tip:</b></font> Converting data types is also called <b>casting</b> (type casting). For example, one says: "The number is cast to float."
<br>
The operation that performs the conversion is called a <b>cast operator</b>, e.g.: <b>int()</b>
</div>
<br>
<div class="alert alert-block alert-warning">
<font size="3"><b>Exercise:</b></font> Cast the number below to a string and check its data type.
</div>
```
number = 7.7
```
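One possible solution for this exercise:
```
number = str(number)
type(number)
```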
Converting to a string works in every case, even if the value is already a string:
```
test_string = 'string'
converted_string = str(test_string)
type(converted_string)
```
## 3.2.4 Joining Strings (Concatenation)
### 3.2.4 a) Joining strings with + and *
If you want to process individual strings further, you may need to join (concatenate) them. This is done with the <font color = "green">+</font> operator. An example:
```
city = 'Berlin'
street = 'Chaussestraße 7'
address = city + street
print(address)
```
This example shows that the strings were joined, but if a comma and/or a space should appear between them, they would already have to be contained in the strings themselves.
<br>
However, it would not be common to build a comma and a space directly into the variable values, like:
street = ', Chaussestraße 7'
<br>
The standard is to adapt the print function as follows:
```
print(city + ', ' + street)
```
You could also write the same thing directly in the variable **address** and then output it with ``print()``:
```
address = city + ', ' + street
print(address)
```
You can also join strings using the various formatting options that Python offers. We will cover those in more detail shortly.
<br>
As you have already seen with the numeric data types, you can also add to (extend) strings by placing the <font color = "green">+</font> in front of the <font color = "green">=</font>:
<br>
```
singing = 'la'
singing += 'la'
print(singing)
```
<div class="alert alert-block alert-info">
<font size="3"><b>Tip:</b></font> Note that you must first assign a value to the variable with <b>=</b> before you extend it. Otherwise Python does not know what to add the value to with <b>+=</b>, and you get a <b>NameError</b> because of an undefined variable.
<br>
That is why <b>singing</b> in the example above is first initialized with the starting value <b>'la'</b>.
</div>
<br>
To repeat strings, you can use <font color = green>*</font> in a value assignment to a variable or directly in ``print()``:
```
# as a value assignment to another variable
singing = 'la'
choir = singing*12
print(choir)
# directly in print()
singing = 'la'
print(singing*8)
```
<div class="alert alert-block alert-info">
<font size="3"><b>Caution:</b></font> The multiplication sign does <b>not</b> work with strings like this: <b>s *= 'la'</b>. Python interprets this as an arithmetic multiplication, and with strings that leads to a <b>TypeError</b> (incompatible data types). A string can be repeated with <b>*</b> as in the two examples above, but in combination with <b>=</b> Python is faced with an impossible calculation, since text cannot be multiplied by text.
</div>
<br>
<div class="alert alert-block alert-warning">
<font size="3"><b>Exercise:</b></font> Pick one of the approaches shown above to triple the string in the following code cell.
</div>
```
joke = 'ha'
```
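One possible solution, as a reference:
```
# possible solution: repeat the string three times
joke = 'ha'
print(joke * 3)   # hahaha
```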
### 3.2.4 b) Joining characters with join()
The individual letters of a string are called **characters**. With the function ``join()`` you can place other characters between the characters of a string, for example a comma between each of them:
```
numbers = '0123456789'
print(','.join(numbers))
```
<div class="alert alert-block alert-info">
<font size="3"><b>Tip:</b></font> Functions like <b>join()</b> are attached with the dot operator (a single period) to the value or variable they are applied to. There is no simple rule for when a value goes inside the parentheses and when a function is attached with a dot. It is like learning a language: with practice, the correct usage becomes a habit.
</div>
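A small sketch of the difference described in the tip (the variable name is only an example):
```
word = 'python'
print(len(word))      # built-in function: the value goes inside the parentheses
print(word.upper())   # string method: attached to the value with the dot operator
```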
<br>
With ``join()`` you can also combine separate strings (including entries of a list) into one string. This is covered in one of the following units, where we will work with several strings.
Syntax: <font color = green>'character(s) to join with'.join(string/list)</font>
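As a small preview, a sketch of joining list entries (the list is just an example):
```
words = ['Berlin', 'Hamburg', 'München']
print(', '.join(words))   # Berlin, Hamburg, München
```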
<div class="alert alert-block alert-warning">
<font size="3"><b>Exercise:</b></font> The variable <b>alphabet</b> is given in the following code cell.
<br>
Your task is to format it into this desired output: <b>a & b & c</b>
<br>
What would have to go inside <b>print()</b> to produce it?
<br>
Pay attention to the spaces before and after the ampersand.
</div>
```
alphabet = 'abc'
```
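One possible solution, as a reference:
```
# possible solution: join the characters with ' & '
alphabet = 'abc'
print(' & '.join(alphabet))   # a & b & c
```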
<div class="alert alert-block alert-success">
<b>Great job!</b> You can now give values such as numbers and strings suitable, Python-compliant variable names. You can even convert strings with numeric content to integers and floats and vice versa. You also know how to join strings together.
<br>
Next, you will see how to format strings for visually appealing output.
<div class="alert alert-block alert-info">
<h3>Key takeaways from this exercise:</h3>
* **Python naming conventions for variables**
    * clear names that are easy to associate with their content
    * in English
    * in lowercase
    * uppercase only for constants (variables that never change), e.g.: ``PI = 3.14``
    * if a name consists of several words, separate them with underscores (known as snake case), e.g.: <font color = green>zwei_namen</font> or: <font color = green>sogar_drei_namen</font>
    * avoid names that are too long
    * no special characters such as <font color = green>@</font>; only letters and digits
    * names that are consistent with variables you have already defined, e.g. <font color = green>firstname</font> to match <font color = green>lastname</font> (not last_name)
    * avoid error messages by ...
        * not starting a variable name with a digit or a special character, e.g.: <font color = darkred>1customer = 'Evelyn'</font>
        * not using names already reserved by Python such as <font color = darkred>str</font>, <font color = darkred>int</font>, <font color = darkred>float</font>, <font color = darkred>type</font>, etc.
<br>
* **Strings**
    * ... are sequences of characters that may contain any characters, including digits, but Python always interprets them as text/strings
    * ... their data type name within Python is **str**
    * the individual characters of a string are called **characters**; they are not a separate data type, though; a single character such as 'z' is itself a string
<br>
* **Strings are created with quotation marks**
    * either single **or** double quotes (do not mix them), like this: <font color = green>'String'</font> (recommended) or like this: <font color = green>"String"</font>
    * if you use quotation marks inside a string, choose the other kind for them: <font color = green>"Film: 'Strings'"</font> or <font color = green>'Film: "Strings"'</font>
<br>
* **Strings can be converted**
    * a number is converted to a string with: ``str(123)`` or ``str(1.0234)``
    * a string is converted to a float with: ``float('12.3')``
    * a string is converted to an integer with: ``int('123')``
<br>
* **Strings can be joined**
    * ... with **<font color = green>+</font>**: ``print('String' + 'nächster String')`` => output: Stringnächster String
    * commas, spaces and other characters have to be added explicitly, e.g.: ``print('String' + ', ' + 'nächster String')`` => output: String, nächster String
    * repeated with **<font color = green>*</font>**: ``a = 'Aha!'`` => ``print(a*3)`` => output: Aha!Aha!Aha!
<br>
* **Characters can be joined with join()**
    * list entries stored as strings can also be joined with ``join()``
    * syntax: <font color = green>'character(s) to join with'.join(string/list)</font>
</div>
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
s=pd.Series([1,23,4,np.nan,8])
print(s)
dates=pd.date_range('20190213',periods=6)
print(dates)
df=pd.DataFrame(np.random.randn(6,4),index=dates,columns=['a','b','c','d'])
print(df)
dict={'a':[1,7,835,88],'b':[2,55,42,12],'c':[3,4,5,9],'d':[1,2,3,6]}
df=pd.DataFrame(dict)
print(df)
# print(df.dtypes)
# print(df.index)
# print(df.columns)
# print(df.describe())
# print(df.T)
# print(df.sort_index(axis=1,ascending=False))
# print(df.sort_index(axis=0,ascending=False))
print(df.sort_values(by='b'))
# pandas data selection, similar to list indexing with []
dates=pd.date_range('20190213',periods=6)
df=pd.DataFrame(np.arange(24).reshape((6,4)),index=dates,columns=['a','b','c','d'])
print(df)
# print(df['a'])
# print(df.a)
# print(df[0:3])
# select by label (loc)
print(df.loc['20190215'])
print(df.loc[:,['a','b']])
print(df.loc['20190215',['a','b']])
# select by position (iloc)
print(df)
print(df.iloc[1:2,2])
print(df.iloc[[1,3,5],3])
# select by a mix of position and label (ix; removed in newer pandas, use loc/iloc instead)
print(df.ix[:3,['c','a']])
# boolean (True/False) selection
print(df)
print(df[df.a>8])
# setting values in pandas
dates=pd.date_range('20190213',periods=6)
df=pd.DataFrame(np.arange(24).reshape((6,4)),index=dates,columns=['a','b','c','d'])
print(df)
# df.iloc[0,0]=12581
# print(df)
df.loc['20190215','a']=12581
print(df)
# df[df.a>4]=0
# print(df)
df.a[df.a>4]=0
print(df)
df['e']=np.nan
print(df)
df['f']=pd.Series([1,2,3,4,5,6],index=dates)
print(df)
# handling missing data in pandas
# dropna() drops missing values
dates=pd.date_range('20190213',periods=6)
df=pd.DataFrame(np.arange(24).reshape((6,4)),index=dates,columns=['a','b','c','d'])
df.iloc[0,0]=np.nan
df.iloc[5,3]=np.nan
print(df)
print(df.dropna(axis=0)) # drop the rows that contain NaN
print(df.dropna(axis=1)) # drop the columns that contain NaN
print(df.dropna(axis=1,how='any'))
print(df.dropna(axis=1,how='all'))
# fillna() fills missing values
print(df.fillna(value=0))
# check for missing values with isnull()
print(df.isnull())
print(np.any(df.isnull())==True)
# importing and exporting data with pandas
# read data with read_csv, save data with to_csv; other formats work similarly
file=pd.read_csv('filename') # placeholder path
print(file)
file.to_csv('path') # placeholder path
# combining data in pandas
# concatenation with concat()
df1=pd.DataFrame(np.zeros((3,4)),columns=['a','b','c','d'])
df2=pd.DataFrame(np.ones((3,4)),columns=['a','b','c','d'])
df3=pd.DataFrame(np.ones((3,4))*2,columns=['a','b','c','d'])
print(df1)
print(df2)
print(df3)
res=pd.concat([df1,df2,df3],axis=0,ignore_index=True)
print(res)
# the join parameter of concat: ['inner','outer']
df1=pd.DataFrame(np.zeros((3,4)),index=[1,2,3],columns=['a','b','c','d'])
df2=pd.DataFrame(np.ones((3,4)),index=[2,3,4],columns=['b','c','d','e'])
print(df1)
print(df2)
print(pd.concat([df1,df2],axis=0,sort=True,join='outer'))
print(pd.concat([df1,df2],axis=0,sort=False,join='inner',ignore_index=True))
# the join_axes parameter (removed in newer pandas versions)
print(pd.concat([df1,df2],axis=1))
print(pd.concat([df1,df2],axis=1,join_axes=[df1.index]))
# append() is for frames with matching column labels; vertical concatenation
df1=pd.DataFrame(np.zeros((3,4)),columns=['a','b','c','d'])
df2=pd.DataFrame(np.ones((3,4)),columns=['a','b','c','d'])
df3=pd.DataFrame(np.ones((3,4))*2,columns=['a','b','c','d'])
res1=df1.append(df2,ignore_index=True)
print(res1)
res2=df1.append([df2,df3],ignore_index=True)
print(res2)
df1=pd.DataFrame(np.zeros((3,4)),columns=['a','b','c','d'])
s1=pd.Series([1,2,3,4],index=['a','b','c','d'])
res=df1.append(s1,ignore_index=True)
print(res)
# merge() in pandas: horizontal (column-wise) merging
df1=pd.DataFrame({'key':['K0','K1','K2','K3'],'A':['A0','A1','A2','A3'],'B':['B0','B1','B2','B3']})
df2=pd.DataFrame({'key':['K0','K1','K2','K3'],'C':['C0','C1','C2','C3'],'D':['D0','D1','D2','D3']})
print(df1)
print(df2)
res=pd.merge(df1,df2,on='key')
print(res)
# the case with two key columns
df1=pd.DataFrame({'key1':['K0','K0','K1','K2'],'key2':['K0','K1','K0','K1'],'A':['A0','A1','A2','A3'],'B':['B0','B1','B2','B3']})
df2=pd.DataFrame({'key1':['K0','K1','K1','K2'],'key2':['K0','K0','K0','K0'],'C':['C0','C1','C2','C3'],'D':['D0','D1','D2','D3']})
print(df1)
print(df2)
print('---------------------1-----------------------')
res1=pd.merge(df1,df2,on=['key1','key2'])
print(res1)
print('--------------------2------------------------')
res2=pd.merge(df1,df2,on=['key1','key2'],how='inner') # how can be left, right, inner, or outer
print(res2)
print('--------------------3------------------------')
res3=pd.merge(df1,df2,on=['key1','key2'],how='outer')
print(res3)
print('--------------------4------------------------')
res4=pd.merge(df1,df2,on=['key1','key2'],how='left') # keep the keys of the left DataFrame (df1)
print(res4)
print('--------------------5------------------------')
res5=pd.merge(df1,df2,on=['key1','key2'],how='right') # keep the keys of the right DataFrame (df2)
print(res5)
df1=pd.DataFrame({'col1':[0,1],'col_left':['a','b']})
df2=pd.DataFrame({'col1':[1,2,2],'col_right':[2,2,2]})
print(df1)
print(df2)
res=pd.merge(df1,df2,on='col1',how='outer',indicator=True)
print(res)
# the case without a shared column: merge on the index
df1=pd.DataFrame({'A':['A0','A1','A2'],'B':['B0','B1','B2']},index=['K0','K1','K2'])
df2=pd.DataFrame({'C':['C0','C1','C2'],'D':['D0','D1','D2']},index=['K0','K2','K3'])
print(df1)
print(df2)
res1=pd.merge(df1,df2,left_index=True,right_index=True,how='outer')
print(res1)
boys=pd.DataFrame({'k':['K0','K1','K2'],'age':[1,2,3]})
girls=pd.DataFrame({'k':['K0','K0','K3'],'age':[4,5,6]})
print(boys)
print(girls)
res1=pd.merge(boys,girls,on='k') # without suffixes, the defaults _x and _y are appended
print(res1)
res2=pd.merge(boys,girls,on='k',suffixes=['_boy','_girl'],how='inner')
res2
# visualization with pandas
data=pd.Series(np.random.randn(1000))
data=data.cumsum()
data.plot()
data=pd.DataFrame(np.random.randn(1000,4),columns=list('abcd'))
print(data.head())
data=data.cumsum()
data.plot()
```
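Note: a few calls in the notebook above rely on APIs that newer pandas releases no longer provide: `df.ix` and the `join_axes=` argument were removed, and `DataFrame.append` has been deprecated in favour of `pd.concat`. A rough, self-contained sketch of modern equivalents (the small example frames mirror the ones used above):
```
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(24).reshape((6, 4)), columns=['a', 'b', 'c', 'd'])
df1 = pd.DataFrame(np.zeros((3, 4)), index=[1, 2, 3], columns=['a', 'b', 'c', 'd'])
df2 = pd.DataFrame(np.ones((3, 4)), index=[2, 3, 4], columns=['b', 'c', 'd', 'e'])

# df.ix[:3, ['c', 'a']]  ->  pick rows by position and columns by label explicitly
print(df.iloc[:3][['c', 'a']])

# pd.concat(..., join_axes=[df1.index])  ->  concatenate, then reindex to the desired index
print(pd.concat([df1, df2], axis=1).reindex(df1.index))

# df1.append(df2)  ->  pd.concat
print(pd.concat([df1, df2], ignore_index=True, sort=False))
```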
# Design
## Members
นาย ธนวัฒน์ ใจมอย 1620706612 <br/>
นาย นราธิป มิ่งรัตนา 1620706471 <br/>
นางสาว นันทัชภรณ์ ลูกจันทร์ 1620707651
## CRISP-DM
## DNA Framework 
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.express as px
import plotly.graph_objects as go
import warnings
warnings.filterwarnings("ignore")
path = 'https://raw.githubusercontent.com/dear3089/CS434_Data_Mining_finalExam/main/E_Commerce_Shipping_Data.csv'
from sklearn.feature_selection import VarianceThreshold
from sklearn.preprocessing import LabelEncoder
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn import tree
df_orignal = pd.read_csv(path)
df = df_orignal.copy()
df.head(10)
```
### Meaning of each column
### ID : customer ID number
### Warehouse_block : the warehouse is divided into blocks A, B, C, D and F
### Mode_of_Shipment : how the product is shipped, e.g. by ship, by road, by flight
### Customer_care_calls : number of calls the customer made to the call center
### Customer_rating : the company's rating of the customer, where 1 = worst and 5 = best
### Cost_of_the_Product : product cost in USD
### Prior_purchases : number of prior purchases
### Product_importance : importance of the product, e.g. low, medium, high
### Gender : gender of the customer
### Discount_offered : discount offered on the product
### Weight_in_gms : weight in grams
### Reached.on.Time_Y.N : delivery outcome, 1 = not delivered on time, 0 = delivered on time
# Data Preprocessing
## Check-up
### Inspect the imported data
```
df.shape
# Check Type
df.info()
# Checking for null values using isna()
df.isna().sum()
# look at the proportion of unique values in each column
df.nunique()/df.shape[0]
```
## Cleaning
### Drop columns that are not used
```
#Dropping unwanted column using drop method
df.drop('ID', axis = 1, inplace = True)
df.head(10)
```
# Exploratory Data Analysis
## Checking value counts of categorical columns
```
cols = ['Warehouse_block', 'Mode_of_Shipment', 'Customer_care_calls', 'Customer_rating',
'Prior_purchases', 'Product_importance', 'Gender', 'Reached.on.Time_Y.N']
plt.figure(figsize = (25, 12))
plotnumber = 1
# plotting the countplot of each categorical column.
for i in range(len(cols)):
if plotnumber <= 8:
ax = plt.subplot(2, 4, plotnumber)
sns.countplot(x = cols[i], data = df, ax = ax, palette='rocket')
plotnumber += 1
#plt.tight_layout()
plt.show()
```
## Warehouse block
```
object_columns = df.select_dtypes(include=['object'])
warehouse = object_columns["Warehouse_block"].value_counts().reset_index()
warehouse.columns = ['warehouse',"values"]
fig = px.pie(warehouse,names='warehouse',values='values',color_discrete_sequence=px.colors.sequential.matter_r)
fig.show()
# Countplot of Warehouse_block split by Reached.on.Time_Y.N: compare on-time vs late deliveries for each warehouse block
plt.figure(figsize = (17, 6))
sns.countplot('Warehouse_block', hue = 'Reached.on.Time_Y.N', data = df, palette='rocket')
plt.show()
```
## Gender
```
gender = object_columns['Gender'].value_counts().reset_index()
gender.columns = ["Gender","Counts"]
gender.drop("Gender",axis=1,inplace=True)
gender["Gender"] = ["Male","Female"]
fig = px.pie(gender,names='Gender',values='Counts',color_discrete_sequence=px.colors.sequential.Electric)
fig.update_traces(textinfo='percent+label')
# Countplot of Gender split by Reached.on.Time_Y.N: compare on-time vs late deliveries for each gender
plt.figure(figsize = (17, 6))
sns.countplot('Gender', hue = 'Reached.on.Time_Y.N', data = df, palette='rocket')
plt.show()
```
## Mode of shipment
```
transport = object_columns["Mode_of_Shipment"].value_counts().reset_index()
transport.columns = ["Mode","Values"]
fig = px.pie(transport,names='Mode',values='Values',color_discrete_sequence=px.colors.sequential.Magenta_r)
fig.update_traces(textinfo='percent+label')
# Countplot of Mode_of_Shipment split by Reached.on.Time_Y.N: compare on-time vs late deliveries for each shipment mode
plt.figure(figsize = (17, 6))
sns.countplot('Mode_of_Shipment', hue = 'Reached.on.Time_Y.N', data = df, palette='rocket')
plt.show()
```
## Product importance
```
product = object_columns['Product_importance'].value_counts().reset_index()
product.columns = ['Importance','Values']
fig = px.pie(product,names='Importance',values='Values',color_discrete_sequence=px.colors.sequential.Emrld_r)
fig.update_traces(textinfo='percent+label')
# Countplot of Product_importance split by Reached.on.Time_Y.N: compare on-time vs late deliveries for each importance level
plt.figure(figsize = (17, 6))
sns.countplot('Product_importance', hue = 'Reached.on.Time_Y.N', data = df, palette='rocket')
plt.show()
```
## Customer care calls
```
customer = df['Customer_care_calls'].value_counts()
fig = go.Figure()
fig.add_trace(go.Bar(x=customer.index,
y=customer.values,
marker_color='#00cec9')
)
fig.update_layout(
height=500,
title_text='Customer care calls',
yaxis_title='count',
title_x = 0.5,
font=dict(
family="Courier New, monospace",
size=14,
color="black")
)
fig.show()
customer = df["Customer_care_calls"].value_counts().reset_index()
customer.columns = ["Number of times","Value"]
fig = px.pie(customer,names="Number of times",values="Value")
fig.update_traces(textinfo='percent+label')
# Countplot of Customer_care_calls split by Reached.on.Time_Y.N: compare on-time vs late deliveries by number of customer care calls
plt.figure(figsize = (17, 6))
sns.countplot('Customer_care_calls', hue = 'Reached.on.Time_Y.N', data = df, palette='rocket')
plt.show()
```
## Customer ratings
```
customer = df["Customer_rating"].value_counts().reset_index()
customer.columns = ["Ratings","Value"]
customer["Ratings"] = ["Rating_"+str(i) for i in customer["Ratings"].tolist()]
fig = px.pie(customer,names="Ratings",values="Value",color_discrete_sequence=px.colors.sequential.algae_r)
fig.update_traces(textinfo='percent+label')
# Countplot of Customer_rating split by Reached.on.Time_Y.N: compare on-time vs late deliveries for each customer rating
plt.figure(figsize = (17, 6))
sns.countplot('Customer_rating', hue = 'Reached.on.Time_Y.N', data = df, palette='rocket')
plt.show()
```
## Prior Purchases
```
prior_purchases = df['Prior_purchases'].value_counts()
fig = go.Figure()
fig.add_trace(go.Bar(x=prior_purchases.index,
y=prior_purchases.values,
marker_color='#00cec9')
)
fig.update_layout(
height=500,
title_text='prior_purchases',
yaxis_title='count',
title_x = 0.5,
font=dict(
family="Courier New, monospace",
size=14,
color="black")
)
fig.show()
prior_purchases = df['Prior_purchases'].value_counts().reset_index()
prior_purchases.columns = ['Prior_purchases', 'value_counts']
fig = px.pie(prior_purchases, names = 'Prior_purchases', values = 'value_counts',
color_discrete_sequence = px.colors.sequential.matter_r, width = 650, height = 400)
fig.update_traces(textinfo = 'percent+label')
# Countplot of Prior_purchases split by Reached.on.Time_Y.N: compare on-time vs late deliveries by number of prior purchases
plt.figure(figsize = (17, 6))
sns.countplot('Prior_purchases', hue = 'Reached.on.Time_Y.N', data = df, palette='rocket')
plt.show()
```
## Reached On time delivery
```
reached_on_time_y_n = df['Reached.on.Time_Y.N'].value_counts().reset_index()
reached_on_time_y_n.columns = ['Reached.on.Time_Y.N', 'value_counts']
fig = px.pie(reached_on_time_y_n, names = 'Reached.on.Time_Y.N', values = 'value_counts',
color_discrete_sequence = px.colors.sequential.Darkmint_r, width = 650, height = 400,
hole = 0.3)
fig.update_traces(textinfo = 'percent+label')
```
# Encoding categorical variables
### Convert the categorical text values to numbers and drop the Gender column to prepare the data for the models
```
df_model = df.copy()
df.head()
# convert the categorical text values to numbers and drop the Gender column to prepare the data for the models
df_model['Warehouse_block'] = df['Warehouse_block'].map({'A' : 0, 'B': 1, 'C': 2, 'D':3, 'F': 4})
df_model['Mode_of_Shipment'] = df['Mode_of_Shipment'].map({'Flight' : 0, 'Ship': 1, 'Road': 2})
df_model['Product_importance'] = df['Product_importance'].map({'low' : 0, 'medium': 1, 'high': 2})
# df_model.drop['Gender'] = df['Gender'].apply(lambda val: 1 if val == 'M' else 0)
df_model.drop(['Gender'], axis =1,inplace=True)
df_model.head()
df_model.info()
# creating features and label
target = 'Reached.on.Time_Y.N'
X = df_model.drop(target, axis=1)
y = df_model[target]
# splitting our data into training and test data
# 25% of the data is held out as the test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
df_model.head()
# Scaling the data using standardscaler
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
```
# Models
### Test several models to find the one with the highest accuracy
## Naive Bayes Classification
```
from sklearn.metrics import classification_report,confusion_matrix
from sklearn.metrics import plot_confusion_matrix
from sklearn.naive_bayes import GaussianNB
model1 = GaussianNB()
model1.fit(X_train, y_train)
test_pred1 = model1.predict(X_test)
print('Classification Report of test_data \n',classification_report(y_test,test_pred1))
confusion_matrix = plot_confusion_matrix(model1, X_test, y_test)
```
## Random Forest Classifier
```
from sklearn.ensemble import RandomForestClassifier
model2 = RandomForestClassifier(random_state = 0)
model2.fit(X_train, y_train)
test_pred2 = model2.predict(X_test)
print('Classification Report of test_data \n',classification_report(y_test,test_pred2))
confusion_matrix = plot_confusion_matrix(model2, X_test, y_test)
```
## AdaBoost Classifier
```
from sklearn.ensemble import AdaBoostClassifier
model3 = AdaBoostClassifier(random_state = 0)
model3.fit(X_train, y_train)
test_pred3 = model3.predict(X_test)
print('Classification Report of test_data \n',classification_report(y_test,test_pred3))
confusion_matrix = plot_confusion_matrix(model3, X_test, y_test)
# in terms of recall, RandomForestClassifier is the best-performing model
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
def fit_predict_score(Model, X_train, y_train, X_test, y_test):
"""Fit the model of your choice, predict for test data, and returns classification metrics."""
model = Model
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
return accuracy_score(y_test, y_pred), precision_score(y_test, y_pred), recall_score(y_test, y_pred), f1_score(y_test, y_pred)
def model_comparison(X, y):
"""Creates a DataFrame comparing Naive Bayes,
Random Forest, AdaBoost."""
# X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
nbc_accuracy_score, nbc_pr, nbc_re, nbc_f1 = fit_predict_score(GaussianNB(), X_train, y_train, X_test, y_test)
rfc_accuracy_score, rfc_pr, rfc_re, rfc_f1 = fit_predict_score(RandomForestClassifier(random_state = 0), X_train, y_train, X_test, y_test)
ada_accuracy_score, ada_pr, ada_re, ada_f1 = fit_predict_score(AdaBoostClassifier(random_state = 0), X_train, y_train, X_test, y_test)
models = ['Naive Bayes', 'Random Forest', 'AdaBoost']
accuracy = [nbc_accuracy_score, rfc_accuracy_score, ada_accuracy_score]
precision = [nbc_pr, rfc_pr, ada_pr]
recall = [nbc_re, rfc_re, ada_re]
f1 = [nbc_f1, rfc_f1, ada_f1]
model_comparison = pd.DataFrame(data=[models, accuracy, precision, recall, f1]).T.rename({0: 'Model',
1: 'Accuracy',
2: 'Precision',
3: 'Recall',
4: 'F1 Score'
}, axis=1)
return model_comparison
model_comparison(X, y)
```
## ROC Test
```
from sklearn.metrics import roc_curve, roc_auc_score, auc
models = [
{
'label': 'Naive Bayes Classification',
'model': model1
},
{
'label' : 'Random Forest Classifier',
'model': model2
},
{
'label': 'AdaBoost Classifier',
'model': model3
}
]
plt.clf()
plt.figure(figsize=(8,6))
for m in models:
m['model'].probability = True
probas = m['model'].fit(X_train,y_train).predict_proba(X_test)
fpr, tpr, thresholds = roc_curve(y_test, probas[:, 1])
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, label='%s ROC (area = %0.2f)' % (m['label'], roc_auc))
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend(loc=0, fontsize='small')
plt.show()
```
## Tuning
### Tune the model for the best possible performance
```
from sklearn.model_selection import GridSearchCV
# model = RandomForestClassifier(random_state=555, n_jobs=-1)
param_grid = [
{'n_estimators': [10, 25], 'max_features': [5, 10],
'max_depth': [10, 50, None], 'bootstrap': [True, False]}
]
grid_search_forest = GridSearchCV(model2, param_grid, cv=10, scoring='recall')
grid_search_forest.fit(X_train, y_train)
grid_search_forest.best_estimator_
print(classification_report(y_test,
grid_search_forest.best_estimator_.predict(X_test),
target_names=['0','1']))
```
## Feature importance
```
# find the features that influence the model's predictions the most
model_tuning = grid_search_forest.best_estimator_
importances = grid_search_forest.best_estimator_.feature_importances_
importances
indices = np.argsort(importances)[::-1]
indices
names = [X.columns[i] for i in np.argsort(model_tuning.feature_importances_)]
names
pd.DataFrame({
'column' : X.columns,
'importances' : importances
}).sort_values(by='importances', ascending=False)['importances'].cumsum()
mod_imp = pd.DataFrame({
'column' : X.columns,
'importances' : importances
}).sort_values(by='importances', ascending=False)
mod_imp['cumsum'] = mod_imp['importances'].cumsum()
mod_imp
# Create plot
plt.figure(figsize=(16,8))
# Create plot title
plt.title("Feature Importance")
# Add bars
plt.bar(range(X.shape[1]), importances[indices])
# Add feature names as x-axis labels
plt.xticks(range(X.shape[1]), names, rotation=45)
# Show plot
plt.show()
```
# Pipelines
```
from sklearn.preprocessing import LabelEncoder
class PipeLine():
def __init__(self):
self.mapping = {}
self.columns = ['Warehouse_block', 'Mode_of_Shipment', 'Customer_care_calls', 'Customer_rating', 'Cost_of_the_Product', 'Prior_purchases',
'Product_importance', 'Discount_offered', 'Weight_in_gms']
self.target = 'Reached.on.Time_Y.N'
self.scaler = StandardScaler()
def encoding(self, data):
data['Warehouse_block'] = data['Warehouse_block'].map({'A' : 0, 'B': 1, 'C': 2, 'D':3, 'F': 4})
data['Mode_of_Shipment'] = data['Mode_of_Shipment'].map({'Flight' : 0, 'Ship': 1, 'Road': 2})
data['Product_importance'] = data['Product_importance'].map({'low' : 0, 'medium': 1, 'high': 2})
return data
def build_trian(self, data):
targets = data[self.target].copy()
data = data[self.columns].copy()
data = self.encoding(data)
data = self.scaler.fit_transform(data)
return data, targets
def build_predict(self, data):
data = data[self.columns].copy()
data = self.encoding(data)
return self.scaler.transform(data)
# create two variables holding the training data prepared by the Pipeline's build_trian method, using df_orignal
pipeLine = PipeLine()
X_deploy, y_deploy = pipeLine.build_trian(df_orignal)
# create rft_deploy with the hyperparameters found during tuning
rft_deploy = RandomForestClassifier(bootstrap=False, max_depth=50, max_features=5,
n_estimators=25, random_state=0)
# fit (train) rft_deploy on the prepared data
rft_deploy.fit(X_deploy, y_deploy)
df_orignal.head()
test_df = pd.DataFrame({
'ID': np.nan,
'Warehouse_block': 'F',
'Mode_of_Shipment': 'Flight',
'Customer_care_calls': 4,
'Customer_rating': 5,
'Cost_of_the_Product': 216,
'Prior_purchases': 2,
'Product_importance': 'low',
'Gender': np.nan,
'Discount_offered': 59,
'Weight_in_gms': 3088,
'Reached.on.Time_Y.N': np.nan
},index=[0])
# transform the input with build_predict, then predict with rft_deploy
rft_deploy.predict(pipeLine.build_predict(test_df))[0]
pipeLine.encoding(test_df)
```
# Evaluation
### Evaluate the resulting model
```
print(classification_report(y_test,
grid_search_forest.best_estimator_.predict(X_test),
target_names=['0','1']))
```
# Deployment
```
!pip install gradio -q
import gradio as gr
# udf
def predict_shipping(Warehouse_block, Mode_of_Shipment, Customer_care_calls,Customer_rating, Cost_of_the_product, Prior_purchases,Product_importance,Discount_offered,Weight_in_gms):
input_df = pd.DataFrame({
'ID': np.nan,
'Warehouse_block': Warehouse_block,
'Mode_of_Shipment': Mode_of_Shipment,
'Customer_care_calls': Customer_care_calls,
'Customer_rating': Customer_rating,
'Cost_of_the_Product': Cost_of_the_product,
'Prior_purchases': Prior_purchases,
'Product_importance': Product_importance,
'Gender': np.nan,
'Discount_offered': Discount_offered,
'Weight_in_gms': Weight_in_gms
},index=[0])
pred = rft_deploy.predict(pipeLine.build_predict(input_df))[0]
if pred == 0:
return 'On time'
else:
return 'Delay'
# inputs
Warehouse_block = gr.inputs.Dropdown(list(df['Warehouse_block'].unique()), default='A', label='Warehouse block')
Mode_of_Shipment = gr.inputs.Dropdown(list(df['Mode_of_Shipment'].unique()), default='Flight', label='Mode of Shipment')
Customer_care_calls = gr.inputs.Slider(minimum=1, maximum=10, step=1, default=1, label='Customer_care_calls')
Customer_rating = gr.inputs.Slider(minimum=1, maximum=5, step=1, default=1, label='Customer_rating')
Cost_of_the_product = gr.inputs.Textbox(default=1, label='Cost of the product')
Prior_purchases = gr.inputs.Slider(minimum=1, maximum=10, step=1, default=1, label='Prior purchases')
Product_importance = gr.inputs.Radio(list(df['Product_importance'].unique()), label='Product importance')
Discount_offered = gr.inputs.Textbox(default=1, label='Discount offered')
Weight_in_gms = gr.inputs.Textbox(default=1000, label='Weight in gms')
iface = gr.Interface(
fn=predict_shipping,
inputs=[Warehouse_block, Mode_of_Shipment, Customer_care_calls,Customer_rating, Cost_of_the_product, Prior_purchases,Product_importance,Discount_offered,Weight_in_gms],
live=False,
outputs='text')
iface.launch()
```
# Reference
DataSet: <br/>
* https://www.kaggle.com/prachi13/customer-analytics/code
* https://www.kaggle.com/niteshyadav3103/eda-e-commerce-shipping-data
* https://www.kaggle.com/lys620/e-commerce-shipping-eda
Article
* https://medium.com/@tong3089/data-mining-ครั้งแรก-cebebf88f2b2
```
```
```
%pylab inline
from constantLowSkill2 import *
Vgrid = np.load("LowSkillWorker2.npy")
gamma
num = 10000
'''
x = [w,n,m,s,e,o]
x = [5,0,0,0,0,0]
'''
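# Presumed meaning of the state vector (not stated explicitly here; inferred from the
# plot labels further down): w = liquid wealth, n = 401k balance, m = mortgage balance,
# s = exogenous economic state, e = employment status, o = home ownership (0 or 1).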
from jax import random
def simulation(key):
initE = random.choice(a = nE, p=E_distribution, key = key)
initS = random.choice(a = nS, p=S_distribution, key = key)
x = [5, 0, 0, initS, initE, 0]
path = []
move = []
for t in range(T_min, T_max):
_, key = random.split(key)
if t == T_max-1:
_,a = V(t,Vgrid[:,:,:,:,:,:,t],x)
else:
_,a = V(t,Vgrid[:,:,:,:,:,:,t+1],x)
xp = transition(t,a.reshape((1,-1)),x)
p = xp[:,-1]
x_next = xp[:,:-1]
path.append(x)
move.append(a)
x = x_next[random.choice(a = nS*nE, p=p, key = key)]
path.append(x)
return jnp.array(path), jnp.array(move)
%%time
# simulation part
keys = vmap(random.PRNGKey)(jnp.arange(num))
Paths, Moves = vmap(simulation)(keys)
# x = [w,n,m,s,e,o]
# x = [0,1,2,3,4,5]
ws = Paths[:,:,0].T
ns = Paths[:,:,1].T
ms = Paths[:,:,2].T
ss = Paths[:,:,3].T
es = Paths[:,:,4].T
os = Paths[:,:,5].T
cs = Moves[:,:,0].T
bs = Moves[:,:,1].T
ks = Moves[:,:,2].T
hs = Moves[:,:,3].T
actions = Moves[:,:,4].T
plt.plot(detEarning)
plt.figure(figsize = [16,8])
plt.title("The mean values of simulation")
plt.plot(range(20, T_max + 21),jnp.mean(ws + H*pt*os - ms,axis = 1), label = "wealth + home equity")
plt.plot(range(20, T_max + 21),jnp.mean(ws,axis = 1), label = "wealth")
plt.plot(range(20, T_max + 20),jnp.mean(cs,axis = 1), label = "consumption")
plt.plot(range(20, T_max + 20),jnp.mean(bs,axis = 1), label = "bond")
plt.plot(range(20, T_max + 20),jnp.mean(ks,axis = 1), label = "stock")
plt.legend()
plt.title("housing consumption")
plt.plot(range(20, T_max + 20),(hs).mean(axis = 1), label = "housing")
plt.title("housing consumption for renting peole")
plt.plot(hs[:, jnp.where(os.sum(axis = 0) == 0)[0]].mean(axis = 1), label = "housing")
plt.title("house owner percentage in the population")
plt.plot(range(20, T_max + 21),(os).mean(axis = 1), label = "owning")
jnp.where(os[T_max - 1, :] == 0)
# agent number, x = [w,n,m,s,e,o]
agentNum = 35
plt.figure(figsize = [16,8])
plt.plot(range(20, T_max + 21),(ws + os*(H*pt - ms))[:,agentNum], label = "wealth + home equity")
plt.plot(range(20, T_max + 21),ws[:,agentNum], label = "wealth")
plt.plot(range(20, T_max + 21),ns[:,agentNum], label = "401k")
plt.plot(range(20, T_max + 21),ms[:,agentNum], label = "mortgage")
plt.plot(range(20, T_max + 20),cs[:,agentNum], label = "consumption")
plt.plot(range(20, T_max + 20),bs[:,agentNum], label = "bond")
plt.plot(range(20, T_max + 20),ks[:,agentNum], label = "stock")
plt.plot(range(20, T_max + 21),os[:,agentNum]*100, label = "ownership", color = "k")
plt.legend()
# agent number, x = [w,n,m,s,e,o]
agentNum = 29
plt.figure(figsize = [16,8])
plt.plot(range(20, T_max + 21),(ws + os*(H*pt - ms))[:,agentNum], label = "wealth + home equity")
plt.plot(range(20, T_max + 21),ws[:,agentNum], label = "wealth")
plt.plot(range(20, T_max + 21),ns[:,agentNum], label = "401k")
plt.plot(range(20, T_max + 21),ms[:,agentNum], label = "mortgage")
plt.plot(range(20, T_max + 20),cs[:,agentNum], label = "consumption")
plt.plot(range(20, T_max + 20),bs[:,agentNum], label = "bond")
plt.plot(range(20, T_max + 20),ks[:,agentNum], label = "stock")
plt.plot(range(20, T_max + 21),os[:,agentNum]*100, label = "ownership", color = "k")
plt.legend()
# collect the times at which agents buy a house (ownership switches from 0 to 1)
agentTime = []
for t in range(30):
if ((os[t,:] == 0) & (os[t+1,:] == 1)).sum()>0:
for agentNum in jnp.where((os[t,:] == 0) & (os[t+1,:] == 1))[0]:
agentTime.append([t, agentNum])
agentTime = jnp.array(agentTime)
# collect the times at which agents remain renters (ownership stays at 0)
agentHold = []
for t in range(30):
if ((os[t,:] == 0) & (os[t+1,:] == 0)).sum()>0:
for agentNum in jnp.where((os[t,:] == 0) & (os[t+1,:] == 0))[0]:
agentHold.append([t, agentNum])
agentHold = jnp.array(agentHold)
plt.title("weath level for buyer and renter")
www = (os*(ws+H*pt - ms)).sum(axis = 1)/(os).sum(axis = 1)
for age in range(30):
buyer = agentTime[agentTime[:,0] == age]
renter = agentHold[agentHold[:,0] == age]
plt.scatter(age, ws[buyer[:,0], buyer[:,1]].mean(),color = "b")
plt.scatter(age, www[age], color = "green")
plt.scatter(age, ws[renter[:,0], renter[:,1]].mean(),color = "r")
plt.title("employement status for buyer and renter")
for age in range(31):
buyer = agentTime[agentTime[:,0] == age]
renter = agentHold[agentHold[:,0] == age]
plt.scatter(age, es[buyer[:,0], buyer[:,1]].mean(),color = "b")
plt.scatter(age, es[renter[:,0], renter[:,1]].mean(),color = "r")
# At every age
plt.plot((os[:T_max,:]*ks/(ks+bs)).sum(axis = 1)/os[:T_max,:].sum(axis = 1), label = "owner")
plt.plot(((1-os[:T_max,:])*ks/(ks+bs)).sum(axis = 1)/(1-os)[:T_max,:].sum(axis = 1), label = "renter")
plt.legend()
```
|
github_jupyter
|
%pylab inline
from constantLowSkill2 import *
Vgrid = np.load("LowSkillWorker2.npy")
gamma
num = 10000
'''
x = [w,n,m,s,e,o]
x = [5,0,0,0,0,0]
'''
from jax import random
def simulation(key):
initE = random.choice(a = nE, p=E_distribution, key = key)
initS = random.choice(a = nS, p=S_distribution, key = key)
x = [5, 0, 0, initS, initE, 0]
path = []
move = []
for t in range(T_min, T_max):
_, key = random.split(key)
if t == T_max-1:
_,a = V(t,Vgrid[:,:,:,:,:,:,t],x)
else:
_,a = V(t,Vgrid[:,:,:,:,:,:,t+1],x)
xp = transition(t,a.reshape((1,-1)),x)
p = xp[:,-1]
x_next = xp[:,:-1]
path.append(x)
move.append(a)
x = x_next[random.choice(a = nS*nE, p=p, key = key)]
path.append(x)
return jnp.array(path), jnp.array(move)
%%time
# simulation part
keys = vmap(random.PRNGKey)(jnp.arange(num))
Paths, Moves = vmap(simulation)(keys)
# x = [w,n,m,s,e,o]
# x = [0,1,2,3,4,5]
ws = Paths[:,:,0].T
ns = Paths[:,:,1].T
ms = Paths[:,:,2].T
ss = Paths[:,:,3].T
es = Paths[:,:,4].T
os = Paths[:,:,5].T
cs = Moves[:,:,0].T
bs = Moves[:,:,1].T
ks = Moves[:,:,2].T
hs = Moves[:,:,3].T
actions = Moves[:,:,4].T
plt.plot(detEarning)
plt.figure(figsize = [16,8])
plt.title("The mean values of simulation")
plt.plot(range(20, T_max + 21),jnp.mean(ws + H*pt*os - ms,axis = 1), label = "wealth + home equity")
plt.plot(range(20, T_max + 21),jnp.mean(ws,axis = 1), label = "wealth")
plt.plot(range(20, T_max + 20),jnp.mean(cs,axis = 1), label = "consumption")
plt.plot(range(20, T_max + 20),jnp.mean(bs,axis = 1), label = "bond")
plt.plot(range(20, T_max + 20),jnp.mean(ks,axis = 1), label = "stock")
plt.legend()
plt.title("housing consumption")
plt.plot(range(20, T_max + 20),(hs).mean(axis = 1), label = "housing")
plt.title("housing consumption for renting peole")
plt.plot(hs[:, jnp.where(os.sum(axis = 0) == 0)[0]].mean(axis = 1), label = "housing")
plt.title("house owner percentage in the population")
plt.plot(range(20, T_max + 21),(os).mean(axis = 1), label = "owning")
jnp.where(os[T_max - 1, :] == 0)
# agent number, x = [w,n,m,s,e,o]
agentNum = 35
plt.figure(figsize = [16,8])
plt.plot(range(20, T_max + 21),(ws + os*(H*pt - ms))[:,agentNum], label = "wealth + home equity")
plt.plot(range(20, T_max + 21),ws[:,agentNum], label = "wealth")
plt.plot(range(20, T_max + 21),ns[:,agentNum], label = "401k")
plt.plot(range(20, T_max + 21),ms[:,agentNum], label = "mortgage")
plt.plot(range(20, T_max + 20),cs[:,agentNum], label = "consumption")
plt.plot(range(20, T_max + 20),bs[:,agentNum], label = "bond")
plt.plot(range(20, T_max + 20),ks[:,agentNum], label = "stock")
plt.plot(range(20, T_max + 21),os[:,agentNum]*100, label = "ownership", color = "k")
plt.legend()
# agent number, x = [w,n,m,s,e,o]
agentNum = 29
plt.figure(figsize = [16,8])
plt.plot(range(20, T_max + 21),(ws + os*(H*pt - ms))[:,agentNum], label = "wealth + home equity")
plt.plot(range(20, T_max + 21),ws[:,agentNum], label = "wealth")
plt.plot(range(20, T_max + 21),ns[:,agentNum], label = "401k")
plt.plot(range(20, T_max + 21),ms[:,agentNum], label = "mortgage")
plt.plot(range(20, T_max + 20),cs[:,agentNum], label = "consumption")
plt.plot(range(20, T_max + 20),bs[:,agentNum], label = "bond")
plt.plot(range(20, T_max + 20),ks[:,agentNum], label = "stock")
plt.plot(range(20, T_max + 21),os[:,agentNum]*100, label = "ownership", color = "k")
plt.legend()
# collect the times at which agents buy a house (renter at t, owner at t+1)
agentTime = []
for t in range(30):
if ((os[t,:] == 0) & (os[t+1,:] == 1)).sum()>0:
for agentNum in jnp.where((os[t,:] == 0) & (os[t+1,:] == 1))[0]:
agentTime.append([t, agentNum])
agentTime = jnp.array(agentTime)
# collect the agents who remain renters at each age (renter at t and t+1)
agentHold = []
for t in range(30):
if ((os[t,:] == 0) & (os[t+1,:] == 0)).sum()>0:
for agentNum in jnp.where((os[t,:] == 0) & (os[t+1,:] == 0))[0]:
agentHold.append([t, agentNum])
agentHold = jnp.array(agentHold)
plt.title("weath level for buyer and renter")
www = (os*(ws+H*pt - ms)).sum(axis = 1)/(os).sum(axis = 1)
for age in range(30):
buyer = agentTime[agentTime[:,0] == age]
renter = agentHold[agentHold[:,0] == age]
plt.scatter(age, ws[buyer[:,0], buyer[:,1]].mean(),color = "b")
plt.scatter(age, www[age], color = "green")
plt.scatter(age, ws[renter[:,0], renter[:,1]].mean(),color = "r")
plt.title("employement status for buyer and renter")
for age in range(31):
buyer = agentTime[agentTime[:,0] == age]
renter = agentHold[agentHold[:,0] == age]
plt.scatter(age, es[buyer[:,0], buyer[:,1]].mean(),color = "b")
plt.scatter(age, es[renter[:,0], renter[:,1]].mean(),color = "r")
# Average stock share of the financial portfolio at each age, owners vs renters
plt.plot((os[:T_max,:]*ks/(ks+bs)).sum(axis = 1)/os[:T_max,:].sum(axis = 1), label = "owner")
plt.plot(((1-os[:T_max,:])*ks/(ks+bs)).sum(axis = 1)/(1-os)[:T_max,:].sum(axis = 1), label = "renter")
plt.legend()
| 0.273089 | 0.608769 |
```
import numpy as np
from qutip import *
import matplotlib.pyplot as plt
import time
import scipy.integrate as integrate
wv=1 # Frequency associated to the variation of the magnetic field
T=2*np.pi/wv; # Magnetic field period
wR=0.5
Ne=10;
e0=0.01;
ef=0.2;
elist=np.linspace(e0,ef,Ne); # List of epsilon values to sweep over
args = {'wv': wv}
nT=100;
tlist= np.linspace(0, T, nT);
qe1=np.zeros(len(elist)) # Empty vector to save quasienergies for each value of epsilon
qe2=np.zeros(len(elist)) # Empty vector to save quasienergies for each value of epsilon
fD1=np.zeros(len(elist));
fD2=np.zeros(len(elist));
fG1=np.zeros(len(elist));
fG2=np.zeros(len(elist));
b = Bloch();
def fx(t,args):
return np.cos(args["wv"]*t)
def fy(t,args):
return np.sin(args["wv"]*t)
for n, e in enumerate(elist): # Iterative process to obtain quasienergies
p,pe= integrate.quad(lambda t: np.sqrt(1-(e*np.cos(t))**2),0,2*np.pi)
Hx = 1/2*wR*p*np.sqrt(1-e**2)*sigmax()
Hy = 1/2*wR*p*sigmay()
H = [[Hx, fx], [Hy, fy]];
f_modes_0, f_energies = floquet_modes(H, T, args)
qe1[n]=f_energies[0]
qe2[n]=f_energies[1]
f_modes_table_t = floquet_modes_table(f_modes_0, f_energies,
tlist, H, T, args); # Calculate floquet states in all tlists
e1=np.zeros(len(tlist))
e2=np.zeros(len(tlist))
nx1 = np.zeros(len(tlist))
ny1 = np.zeros(len(tlist))
nz1 = np.zeros(len(tlist))
nx2 = np.zeros(len(tlist))
ny2 = np.zeros(len(tlist))
nz2 = np.zeros(len(tlist))
for i, t in enumerate(tlist):
psi_t_1,psi_t_2 = floquet_modes_t_lookup(f_modes_table_t, t, T) #
Hd=Hx*fx(t,args)+Hy*fy(t,args)
e1[i] = expect(Hd, psi_t_1)
e2[i] = expect(Hd, psi_t_2)
fDN1=-T/nT*np.sum(e1)
fDN2=-T/nT*np.sum(e2)
nx1[i] = expect(sigmax(), psi_t_1)
ny1[i] = expect(sigmay(), psi_t_1)
nz1[i] = expect(sigmaz(), psi_t_1)
nx2[i] = expect(sigmax(), psi_t_2)
ny2[i] = expect(sigmay(), psi_t_2)
nz2[i] = expect(sigmaz(), psi_t_2)
PN1=[nx1,ny1,nz1]
PN2=[nx2,ny2,nz2]
b.add_points(PN1,'l')
b.add_points(PN2,'l')
fD1[n]=fDN1
fD2[n]=fDN2
fG1[n]=f_energies[0]-fDN1
fG2[n]=f_energies[1]-fDN2
fig, ((ax1, ax2),( ax3, ax4),(ax5,ax6)) = plt.subplots(nrows=3, ncols=2, sharex=True)
ax1.plot(elist,qe1,'+')
ax1.set_ylabel('Quasienergy 1')
ax2.plot(elist,qe2,'+')
ax3.plot(elist,fD1,'+')
ax3.set_ylabel('Dynamic')
ax4.plot(elist,fD2,'+')
ax5.plot(elist,fG1,'+')
ax5.set_xlabel('$\epsilon$')
ax5.set_ylabel('Geometric')
ax6.plot(elist,fG2,'+')
ax6.set_xlabel('$\epsilon$')
b.make_sphere()
```
|
github_jupyter
|
import numpy as np
from qutip import *
import matplotlib.pyplot as plt
import time
import scipy.integrate as integrate
wv=1 # Frequency associated to the variation of the magnetic field
T=2*np.pi/wv; # Magnetic field period
wR=0.5
Ne=10;
e0=0.01;
ef=0.2;
elist=np.linspace(e0,ef,Ne); # List of epsilon values to sweep over
args = {'wv': wv}
nT=100;
tlist= np.linspace(0, T, nT);
qe1=np.zeros(len(elist)) # Empty vector to save quasienergies for each value of epsilon
qe2=np.zeros(len(elist)) # Empty vector to save quasienergies for each value of epsilon
fD1=np.zeros(len(elist));
fD2=np.zeros(len(elist));
fG1=np.zeros(len(elist));
fG2=np.zeros(len(elist));
b = Bloch();
def fx(t,args):
return np.cos(args["wv"]*t)
def fy(t,args):
return np.sin(args["wv"]*t)
for n, e in enumerate(elist): # Iterative process to obtain quasienergies
p,pe= integrate.quad(lambda t: np.sqrt(1-(e*np.cos(t))**2),0,2*np.pi)
Hx = 1/2*wR*p*np.sqrt(1-e**2)*sigmax()
Hy = 1/2*wR*p*sigmay()
H = [[Hx, fx], [Hy, fy]];
f_modes_0, f_energies = floquet_modes(H, T, args)
qe1[n]=f_energies[0]
qe2[n]=f_energies[1]
f_modes_table_t = floquet_modes_table(f_modes_0, f_energies,
tlist, H, T, args); # Calculate floquet states in all tlists
e1=np.zeros(len(tlist))
e2=np.zeros(len(tlist))
nx1 = np.zeros(len(tlist))
ny1 = np.zeros(len(tlist))
nz1 = np.zeros(len(tlist))
nx2 = np.zeros(len(tlist))
ny2 = np.zeros(len(tlist))
nz2 = np.zeros(len(tlist))
for i, t in enumerate(tlist):
psi_t_1,psi_t_2 = floquet_modes_t_lookup(f_modes_table_t, t, T) #
Hd=Hx*fx(t,args)+Hy*fy(t,args)
e1[i] = expect(Hd, psi_t_1)
e2[i] = expect(Hd, psi_t_2)
fDN1=-T/nT*np.sum(e1)
fDN2=-T/nT*np.sum(e2)
nx1[i] = expect(sigmax(), psi_t_1)
ny1[i] = expect(sigmay(), psi_t_1)
nz1[i] = expect(sigmaz(), psi_t_1)
nx2[i] = expect(sigmax(), psi_t_2)
ny2[i] = expect(sigmay(), psi_t_2)
nz2[i] = expect(sigmaz(), psi_t_2)
PN1=[nx1,ny1,nz1]
PN2=[nx2,ny2,nz2]
b.add_points(PN1,'l')
b.add_points(PN2,'l')
fD1[n]=fDN1
fD2[n]=fDN2
fG1[n]=f_energies[0]-fDN1
fG2[n]=f_energies[1]-fDN2
fig, ((ax1, ax2),( ax3, ax4),(ax5,ax6)) = plt.subplots(nrows=3, ncols=2, sharex=True)
ax1.plot(elist,qe1,'+')
ax1.set_ylabel('Quasienergy 1')
ax2.plot(elist,qe2,'+')
ax3.plot(elist,fD1,'+')
ax3.set_ylabel('Dynamic')
ax4.plot(elist,fD2,'+')
ax5.plot(elist,fG1,'+')
ax5.set_xlabel('$\epsilon$')
ax5.set_ylabel('Geometric')
ax6.plot(elist,fG2,'+')
ax6.set_xlabel('$\epsilon$')
b.make_sphere()
| 0.304765 | 0.562177 |
# Classifying Fashion-MNIST
Now it's your turn to build and train a neural network. You'll be using the [Fashion-MNIST dataset](https://github.com/zalandoresearch/fashion-mnist), a drop-in replacement for the MNIST dataset. MNIST is actually quite trivial with neural networks where you can easily achieve better than 97% accuracy. Fashion-MNIST is a set of 28x28 greyscale images of clothes. It's more complex than MNIST, so it's a better representation of the actual performance of your network, and a better representation of datasets you'll use in the real world.
<img src='assets/fashion-mnist-sprite.png' width=500px>
In this notebook, you'll build your own neural network. For the most part, you could just copy and paste the code from Part 3, but you wouldn't be learning. It's important for you to write the code yourself and get it to work. Feel free to consult the previous notebooks though as you work through this.
First off, let's load the dataset through torchvision.
```
import torch
from torchvision import datasets, transforms
import helper
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
# Download and load the training data
trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
# Download and load the test data
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)
```
Here we can see one of the images.
```
image, label = next(iter(trainloader))
helper.imshow(image[0,:]);
```
## Building the network
Here you should define your network. As with MNIST, each image is 28x28 which is a total of 784 pixels, and there are 10 classes. You should include at least one hidden layer. We suggest you use ReLU activations for the layers and to return the logits or log-softmax from the forward pass. It's up to you how many layers you add and the size of those layers.
```
# TODO: Define your network architecture here
import torch
from torch import nn
import torch.nn.functional as F
from torchvision import datasets, transforms
from torch import optim
model = nn.Sequential(nn.Linear(784, 256),
nn.ReLU(),
nn.Linear(256, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10),
nn.LogSoftmax(dim=1))
criterion = nn.NLLLoss()  # pairs with the LogSoftmax output above; CrossEntropyLoss expects raw logits
optimizer = optim.Adam(model.parameters(), lr=0.004)
epochs = 10
for e in range(epochs):
running_loss = 0
for images, labels in trainloader:
# Clear the gradients, do this because gradients are accumulated
optimizer.zero_grad()
# Flatten MNIST images into a 784 long vector
images = images.view(images.shape[0], -1)
# Forward pass
output = model(images)
# Loss calculation
loss = criterion(output, labels)
# Backpropagation
loss.backward()
# Gradient descent
optimizer.step()
running_loss += loss.item()
else:
print(f"Training loss: {running_loss/len(trainloader)}")
```
# Train the network
Now you should create your network and train it. First you'll want to define [the criterion](http://pytorch.org/docs/master/nn.html#loss-functions) (something like `nn.CrossEntropyLoss` or `nn.NLLLoss`) and [the optimizer](http://pytorch.org/docs/master/optim.html) (typically `optim.SGD` or `optim.Adam`).
Then write the training code. Remember the training pass is a fairly straightforward process:
* Make a forward pass through the network to get the logits
* Use the logits to calculate the loss
* Perform a backward pass through the network with `loss.backward()` to calculate the gradients
* Take a step with the optimizer to update the weights
By adjusting the hyperparameters (hidden units, learning rate, etc), you should be able to get the training loss below 0.4.
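The training loop earlier in this notebook already implements these four steps. As a complement, here is a minimal sketch for measuring accuracy over the full test set once training finishes. It assumes the `model` and `testloader` defined above; the helper name `evaluate_accuracy` is ours, not part of the original notebook.
```
import torch

def evaluate_accuracy(model, loader):
    """Fraction of correctly classified images over an entire DataLoader."""
    correct, total = 0, 0
    with torch.no_grad():
        for images, labels in loader:
            images = images.view(images.shape[0], -1)   # flatten to 784-long vectors
            log_ps = model(images)                      # log-probabilities from LogSoftmax
            preds = log_ps.argmax(dim=1)                # most likely class per image
            correct += (preds == labels).sum().item()
            total += labels.shape[0]
    return correct / total

print(f"Test accuracy: {evaluate_accuracy(model, testloader):.3f}")
```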
```
# TODO: Create the network, define the criterion and optimizer
# TODO: Train the network here
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
# Test out your network!
dataiter = iter(testloader)
images, labels = next(dataiter)
img = images[0]
# Convert 2D image to 1D vector
img = img.resize_(1, 784)
with torch.no_grad():
logps = model(img)
# The output of the network is log-probabilities; take the exponential to get probabilities
ps = torch.exp(logps)
# TODO: Calculate the class probabilities (softmax) for img
# Plot the image and probabilities
helper.view_classify(img.resize_(1, 28, 28), ps, version='Fashion')
```
|
github_jupyter
|
import torch
from torchvision import datasets, transforms
import helper
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
# Download and load the training data
trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
# Download and load the test data
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)
image, label = next(iter(trainloader))
helper.imshow(image[0,:]);
# TODO: Define your network architecture here
import torch
from torch import nn
import torch.nn.functional as F
from torchvision import datasets, transforms
from torch import optim
model = nn.Sequential(nn.Linear(784, 256),
nn.ReLU(),
nn.Linear(256, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10),
nn.LogSoftmax(dim=1))
criterion = nn.NLLLoss()  # pairs with the LogSoftmax output above; CrossEntropyLoss expects raw logits
optimizer = optim.Adam(model.parameters(), lr=0.004)
epochs = 10
for e in range(epochs):
running_loss = 0
for images, labels in trainloader:
# Clear the gradients, do this because gradients are accumulated
optimizer.zero_grad()
# Flatten MNIST images into a 784 long vector
images = images.view(images.shape[0], -1)
# Forward pass
output = model(images)
# Loss calculation
loss = criterion(output, labels)
# Backpropagation
loss.backward()
# Gradient descent
optimizer.step()
running_loss += loss.item()
else:
print(f"Training loss: {running_loss/len(trainloader)}")
# TODO: Create the network, define the criterion and optimizer
# TODO: Train the network here
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
# Test out your network!
dataiter = iter(testloader)
images, labels = next(dataiter)
img = images[0]
# Convert 2D image to 1D vector
img = img.resize_(1, 784)
with torch.no_grad():
logps = model(img)
# The output of the network is log-probabilities; take the exponential to get probabilities
ps = torch.exp(logps)
# TODO: Calculate the class probabilities (softmax) for img
# Plot the image and probabilities
helper.view_classify(img.resize_(1, 28, 28), ps, version='Fashion')
| 0.592667 | 0.991015 |
<a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_02_2_pandas_cat.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# T81-558: Applications of Deep Neural Networks
**Module 2: Python for Machine Learning**
* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# Module 2 Material
Main video lecture:
* Part 2.1: Introduction to Pandas [[Video]](https://www.youtube.com/watch?v=bN4UuCBdpZc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_1_python_pandas.ipynb)
* **Part 2.2: Categorical Values** [[Video]](https://www.youtube.com/watch?v=4a1odDpG0Ho&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_2_pandas_cat.ipynb)
* Part 2.3: Grouping, Sorting, and Shuffling in Python Pandas [[Video]](https://www.youtube.com/watch?v=YS4wm5gD8DM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_3_pandas_grouping.ipynb)
* Part 2.4: Using Apply and Map in Pandas for Keras [[Video]](https://www.youtube.com/watch?v=XNCEZ4WaPBY&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_4_pandas_functional.ipynb)
* Part 2.5: Feature Engineering in Pandas for Deep Learning in Keras [[Video]](https://www.youtube.com/watch?v=BWPTj4_Mi9E&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_5_pandas_features.ipynb)
# Google CoLab Instructions
The following code ensures that Google CoLab is running the correct version of TensorFlow.
```
try:
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
```
# Part 2.2: Categorical and Continuous Values
Neural networks require their input to be a fixed number of columns. This input format is very similar to spreadsheet data. This input must be entirely numeric.
It is essential to represent the data in a way that the neural network can train from it. In class 6, we will see even more ways to preprocess data. For now, we will look at several of the most basic ways to transform data for a neural network.
Before we look at specific ways to preprocess data, it is important to consider four basic types of data, as defined by [[Cite:stevens1946theory]](http://psychology.okstate.edu/faculty/jgrice/psyc3214/Stevens_FourScales_1946.pdf). Statisticians commonly refer to these as the [levels of measure](https://en.wikipedia.org/wiki/Level_of_measurement):
* Character Data (strings)
* **Nominal** - Individual discrete items, no order. For example, color, zip code, shape.
* **Ordinal** - Individual distinct items have an implied order. For example, grade level, job title, Starbucks(tm) coffee size (tall, grande, venti)
* Numeric Data
* **Interval** - Numeric values, no defined start. For example, temperature. You would never say, "yesterday was twice as hot as today."
* **Ratio** - Numeric values, clearly defined start. For example, speed. You would say that "The first car is going twice as fast as the second."
### Encoding Continuous Values
One common transformation is to normalize the inputs. It is sometimes valuable to normalize numeric inputs into a standard form so that the program can easily compare two values. Consider if a friend told you that he received a 10 dollar discount. Is this a good deal? Maybe. But the cost is not normalized. If your friend purchased a car, then the discount is not that good. If your friend bought dinner, this is an excellent discount!
Percentages are a prevalent form of normalization. If your friend tells you they got 10% off, we know that this is a better discount than 5%. It does not matter how much the purchase price was. One widespread machine learning normalization is the Z-Score:
$z = \frac{x - \mu}{\sigma} $
To calculate the Z-Score you need to also calculate the mean($\mu$) and the standard deviation ($\sigma$). The mean is calculated as follows:
$\mu = \bar{x} = \frac{x_1+x_2+\cdots +x_n}{n}$
The standard deviation is calculated as follows:
$\sigma = \sqrt{\frac{1}{N} \sum_{i=1}^N (x_i - \mu)^2}, {\rm \ \ where\ \ } \mu = \frac{1}{N} \sum_{i=1}^N x_i$
The following Python code replaces the mpg column with its z-score. Cars with average MPG will be near zero, above zero is above average, and below zero is below average. Z-scores above 3 or below -3 are very rare; these are outliers.
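For intuition, the same transformation can be written by hand in Pandas. This is only a sketch with made-up numbers; note that `scipy.stats.zscore` uses the population standard deviation (`ddof=0`), so we match that here.
```
import pandas as pd

# Toy example: compute a z-score manually for a handful of made-up MPG values
mpg = pd.Series([18.0, 15.0, 36.0, 26.5, 31.0])
z = (mpg - mpg.mean()) / mpg.std(ddof=0)   # ddof=0 mirrors scipy.stats.zscore's default
print(z)
```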
```
import os
import pandas as pd
from scipy.stats import zscore
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/auto-mpg.csv",
na_values=['NA','?'])
pd.set_option('display.max_columns', 7)
pd.set_option('display.max_rows', 5)
display(df)
df['mpg'] = zscore(df['mpg'])
display(df)
```
### Encoding Categorical Values as Dummies
The traditional means of encoding categorical values is to make them dummy variables. This technique is also called one-hot-encoding. Consider the following data set.
```
import pandas as pd
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
pd.set_option('display.max_columns', 0)
pd.set_option('display.max_rows', 0)
display(df)
areas = list(df['area'].unique())
print(f'Areas:{areas}')
areas = set(df['area'])
print(f'Number of areas: {len(areas)}')
print(f'Areas: {areas}')
```
There are four unique values in the area column. To encode these to dummy variables, we would use four columns, each of which would represent one of the areas. For each row, one column would have a value of one, the rest zeros. For this reason, this type of encoding is sometimes called one-hot encoding. The following code shows how you might encode the values "a" through "d." The value "a" becomes [1,0,0,0] and the value "b" becomes [0,1,0,0].
```
dummies = pd.get_dummies(['a','b','c','d'],prefix='area')
print(dummies)
dummies = pd.get_dummies(df['area'],prefix='area')
print(dummies[0:10]) # Just show the first 10
df = pd.concat([df,dummies],axis=1)
# hmm this code removes original columns
# df = pd.get_dummies(df, columns=['area'])
```
To encode the "area" column, we use the following. Note that it is necessary to merge these dummies back into the data frame.
```
pd.set_option('display.max_columns', 0)
pd.set_option('display.max_rows', 10)
display(df[['id','job','area','income','area_a',
'area_b','area_c','area_d']])
```
Usually, you will remove the original column ('area'), because it is the goal to get the data frame to be entirely numeric for the neural network.
```
pd.set_option('display.max_columns', 0)
pd.set_option('display.max_rows', 5)
df.drop('area', axis=1, inplace=True)
display(df[['id','job','income','area_a',
'area_b','area_c','area_d']])
```
### Target Encoding for Categoricals
Target encoding can sometimes increase the predictive power of a machine learning model. However, it also dramatically increases the risk of overfitting. Because of this risk, you must take care if you are using this method. Target encoding is a popular technique for Kaggle competitions.
Generally, target encoding can only be used on a categorical feature when the output of the machine learning model is numeric (regression).
The concept of target encoding is straightforward. For each category, we calculate the average target value for that category. Then to encode, we substitute the mean target value that corresponds to the category of each row. Unlike dummy variables, where you have a column for each category, with target encoding the program only needs a single column. In this way, target encoding is more efficient than dummy variables.
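Before the smoothed version used later in this section, here is a minimal sketch of the naive form of target encoding on a toy frame (the column names `cat` and `y` are illustrative): each category is simply mapped to its mean target value.
```
import pandas as pd

toy = pd.DataFrame({'cat': ['dog', 'dog', 'cat', 'cat', 'cat'],
                    'y':   [1,     0,     1,     0,     0]})

cat_means = toy.groupby('cat')['y'].mean()    # mean target value per category
toy['cat_enc'] = toy['cat'].map(cat_means)    # substitute the mean back in for the label
print(toy)
```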
```
# Create a small sample dataset
import pandas as pd
import numpy as np
np.random.seed(43)
df = pd.DataFrame({
'cont_9': np.random.rand(10)*100,
'cat_0': ['dog'] * 5 + ['cat'] * 5,
'cat_1': ['wolf'] * 9 + ['tiger'] * 1,
'y': [1, 0, 1, 1, 1, 1, 0, 0, 0, 0]
})
pd.set_option('display.max_columns', 0)
pd.set_option('display.max_rows', 0)
display(df)
```
Rather than creating dummy variables for "dog" and "cat," we would like to change them to a number. We could use 0 for cat and 1 for dog. However, we can encode more information than just that; a plain 0 or 1 tells the model nothing beyond which animal it is. Consider what the mean target value is for cat and dog.
```
means0 = df.groupby('cat_0')['y'].mean().to_dict()
means0
```
The danger is that we are now using the target value for training. This technique will potentially lead to overfitting. The possibility of overfitting is even greater if a particular category has only a small number of values. To prevent this from happening, we use a weighting factor. The stronger the weight, the more that categories with a small number of values will tend towards the overall average of y. You can perform this calculation as follows.
```
df['y'].mean()
```
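To see what the weighting does, here is the smoothed-mean formula from the function defined next, evaluated by hand for the `cat_1` column of the sample data above (`tiger` appears once with target 0, `wolf` nine times with mean 5/9, and the overall mean of `y` is 0.5). These numbers are our own arithmetic, not output produced by the notebook.
```
# Smoothed mean = (count * category_mean + weight * overall_mean) / (count + weight)
weight, overall_mean = 5, 0.5
tiger = (1 * 0.0     + weight * overall_mean) / (1 + weight)   # ~0.417, pulled hard toward 0.5
wolf  = (9 * (5 / 9) + weight * overall_mean) / (9 + weight)   # ~0.536, barely moved
print(tiger, wolf)
```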
You can implement target encoding as follows. For more information on Target Encoding, refer to the article ["Target Encoding Done the Right Way"](https://maxhalford.github.io/blog/target-encoding-done-the-right-way/), that I based this code upon.
```
def calc_smooth_mean(df1, df2, cat_name, target, weight):
# Compute the global mean
mean = df[target].mean()
print(f'---mean is {mean}')
# Compute the number of values and the mean of each group
agg = df.groupby(cat_name)[target].agg(['count', 'mean'])
print(f'---agg is {agg}')
counts = agg['count']
print(f'---counts is {counts}')
means = agg['mean']
print(f'---means is {means}')
print('------------')
print('---weight---')
print(weight)
print('------------')
print('------------')
print('------------')
# Compute the "smoothed" means
smooth = (counts * means + weight * mean) / (counts + weight)
print(f'---smooth is {smooth}')
# Replace each value by the according smoothed mean
if df2 is None:
return df1[cat_name].map(smooth)
else:
return df1[cat_name].map(smooth),df2[cat_name].map(smooth.to_dict())
```
The following code encodes these two categories.
```
WEIGHT = 5
df['cat_0_enc'] = calc_smooth_mean(df1=df, df2=None,
cat_name='cat_0', target='y', weight=WEIGHT)
df['cat_1_enc'] = calc_smooth_mean(df1=df, df2=None,
cat_name='cat_1', target='y', weight=WEIGHT)
pd.set_option('display.max_columns', 0)
pd.set_option('display.max_rows', 0)
display(df)
```
### Encoding Categorical Values as Ordinal
Typically categoricals will be encoded as dummy variables. However, there might be other techniques to convert categoricals to numeric. Any time there is an order to the categoricals, a number should be used. Consider if you had a categorical that described the current education level of an individual.
* Kindergarten (0)
* First Grade (1)
* Second Grade (2)
* Third Grade (3)
* Fourth Grade (4)
* Fifth Grade (5)
* Sixth Grade (6)
* Seventh Grade (7)
* Eighth Grade (8)
* High School Freshman (9)
* High School Sophomore (10)
* High School Junior (11)
* High School Senior (12)
* College Freshman (13)
* College Sophomore (14)
* College Junior (15)
* College Senior (16)
* Graduate Student (17)
* PhD Candidate (18)
* Doctorate (19)
* Post Doctorate (20)
The above list has 21 levels. This would take 21 dummy variables. However, simply encoding this to dummies would lose the order information. Perhaps the easiest approach would be to simply number them, assigning each category the single number shown in parentheses above. However, we might be able to do even better: graduate school usually lasts more than a year, so you might increase by more than one value there.
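A minimal sketch of this idea in Pandas follows; the column name `education` and the particular rows are made up for illustration, not taken from a real dataset.
```
import pandas as pd

# Ordinal encoding via an explicit lookup table (a small subset of the levels above)
education_order = {'Kindergarten': 0, 'First Grade': 1, 'High School Senior': 12,
                   'College Senior': 16, 'PhD Candidate': 18, 'Doctorate': 19}

df_edu = pd.DataFrame({'education': ['Kindergarten', 'College Senior', 'Doctorate']})
df_edu['education_level'] = df_edu['education'].map(education_order)
print(df_edu)
```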
|
github_jupyter
|
try:
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
import os
import pandas as pd
from scipy.stats import zscore
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/auto-mpg.csv",
na_values=['NA','?'])
pd.set_option('display.max_columns', 7)
pd.set_option('display.max_rows', 5)
display(df)
df['mpg'] = zscore(df['mpg'])
display(df)
import pandas as pd
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
pd.set_option('display.max_columns', 0)
pd.set_option('display.max_rows', 0)
display(df)
areas = list(df['area'].unique())
print(f'Areas:{areas}')
areas = set(df['area'])
print(f'Number of areas: {len(areas)}')
print(f'Areas: {areas}')
dummies = pd.get_dummies(['a','b','c','d'],prefix='area')
print(dummies)
dummies = pd.get_dummies(df['area'],prefix='area')
print(dummies[0:10]) # Just show the first 10
df = pd.concat([df,dummies],axis=1)
# hmm this code removes original columns
# df = pd.get_dummies(df, columns=['area'])
pd.set_option('display.max_columns', 0)
pd.set_option('display.max_rows', 10)
display(df[['id','job','area','income','area_a',
'area_b','area_c','area_d']])
pd.set_option('display.max_columns', 0)
pd.set_option('display.max_rows', 5)
df.drop('area', axis=1, inplace=True)
display(df[['id','job','income','area_a',
'area_b','area_c','area_d']])
# Create a small sample dataset
import pandas as pd
import numpy as np
np.random.seed(43)
df = pd.DataFrame({
'cont_9': np.random.rand(10)*100,
'cat_0': ['dog'] * 5 + ['cat'] * 5,
'cat_1': ['wolf'] * 9 + ['tiger'] * 1,
'y': [1, 0, 1, 1, 1, 1, 0, 0, 0, 0]
})
pd.set_option('display.max_columns', 0)
pd.set_option('display.max_rows', 0)
display(df)
means0 = df.groupby('cat_0')['y'].mean().to_dict()
means0
df['y'].mean()
def calc_smooth_mean(df1, df2, cat_name, target, weight):
# Compute the global mean
mean = df[target].mean()
print(f'---mean is {mean}')
# Compute the number of values and the mean of each group
agg = df.groupby(cat_name)[target].agg(['count', 'mean'])
print(f'---agg is {agg}')
counts = agg['count']
print(f'---counts is {counts}')
means = agg['mean']
print(f'---means is {means}')
print('------------')
print('---weight---')
print(weight)
print('------------')
print('------------')
print('------------')
# Compute the "smoothed" means
smooth = (counts * means + weight * mean) / (counts + weight)
print(f'---smooth is {smooth}')
# Replace each value by the according smoothed mean
if df2 is None:
return df1[cat_name].map(smooth)
else:
return df1[cat_name].map(smooth),df2[cat_name].map(smooth.to_dict())
WEIGHT = 5
df['cat_0_enc'] = calc_smooth_mean(df1=df, df2=None,
cat_name='cat_0', target='y', weight=WEIGHT)
df['cat_1_enc'] = calc_smooth_mean(df1=df, df2=None,
cat_name='cat_1', target='y', weight=WEIGHT)
pd.set_option('display.max_columns', 0)
pd.set_option('display.max_rows', 0)
display(df)
| 0.4206 | 0.993123 |
#### New to Plotly?
Plotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/).
<br>You can set up Plotly to work in [online](https://plot.ly/python/getting-started/#initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/#initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/#start-plotting-online).
<br>We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started!
#### Version Check
Note: Animations are available in version 1.12.10+
Run `pip install plotly --upgrade` to update your Plotly version.
```
import plotly
plotly.__version__
```
#### Frames
Now, along with `data` and `layout`, `frames` is added to the keys that `figure` allows. Your `frames` key points to a list of figures, each of which will be cycled through upon instantiation of the plot.
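As a minimal sketch (with inline data, before the grid-based online workflow below), the overall structure looks like this:
```
figure = {
    'data':   [{'x': [0, 1], 'y': [0, 1]}],                  # the starting trace
    'layout': {'title': 'Start'},
    'frames': [{'data': [{'x': [1, 2], 'y': [1, 2]}]},       # each frame is a figure-like dict
               {'data': [{'x': [2, 3], 'y': [2, 3]}]}]
}
```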
#### Online Mode
You can use `plotly.plotly.icreate_animations()` or `plotly.plotly.create_animations()` to make `online` animations that you save on the Plotly cloud.
There are two steps for making an online animation:
1. Make a grid
2. Make the plot with data from the grid
The reason for making a grid is because animations are created through our [v2 api](https://api.plot.ly/v2/). In this process, we create a [grid](https://api.plot.ly/v2/#grids) composed of columns, and then make a plot which contains referenced data from the grid columns. You can learn how to upload a grid at the [grid endpoint](https://api.plot.ly/v2/grids#create) and how to make a plot with a grid at the [plot endpoint](https://api.plot.ly/v2/plots#create) of the v2 api.
A grid consists of columns which fundamentally are 1D lists of numerical data with an associated name. They are instantiated with the `grid_objs` class `Column`. To make a column, simply assign a variable with a Column:
$$
\begin{align*}
Column([...], name)
\end{align*}
$$
The `Grid` class is also part of the `grid_objs` module. A `Grid` takes a list of columns:
$$
\begin{align*}
grid = Grid([column_1, column_2, ...])
\end{align*}
$$
**Please Note:** filenames MUST BE unique. An error will be thrown if a grid is not created with a unique filename. Therefore we recommend appending a timestamp to your grid filename to ensure the filename is unique.
```
import plotly.plotly as py
from plotly.grid_objs import Grid, Column
import time
column_1 = Column([0.5], 'x')
column_2 = Column([0.5], 'y')
column_3 = Column([1.5], 'x2')
column_4 = Column([1.5], 'y2')
grid = Grid([column_1, column_2, column_3, column_4])
py.grid_ops.upload(grid, 'ping_pong_grid'+str(time.time()), auto_open=False)
```
Now you need to reference the columns from the grid that you just uploaded. You can do so by using the built-in grid method `get_column_reference()` which takes the *column name* as its argument and returns the reference to the data in the grid. Since we are dealing with *referenced data* which is pointing to data and not *raw data*, we use `xsrc` and `ysrc` in the `figure` to represent the `x` and `y` analogues that are normally used.
Make your figure and create an animated plot!
```
figure = {
'data': [
{
'xsrc': grid.get_column_reference('x'),
'ysrc': grid.get_column_reference('y'),
'mode': 'markers',
}
],
'layout': {'title': 'Ping Pong Animation',
'xaxis': {'range': [0, 2], 'autorange': False},
'yaxis': {'range': [0, 2], 'autorange': False},
'updatemenus': [{
'buttons': [
{'args': [None],
'label': 'Play',
'method': 'animate'}
],
'pad': {'r': 10, 't': 87},
'showactive': False,
'type': 'buttons'
}]},
'frames': [
{
'data': [
{
'xsrc': grid.get_column_reference('x2'),
'ysrc': grid.get_column_reference('y2'),
'mode': 'markers',
}
]
},
{
'data': [
{
'xsrc': grid.get_column_reference('x'),
'ysrc': grid.get_column_reference('y'),
'mode': 'markers',
}
]
}
]
}
py.icreate_animations(figure, 'ping_pong'+str(time.time()))
```
#### Adding Control Buttons to Animations
You can add play and pause buttons to control your animated charts by adding an `updatemenus` array to the `layout` of your `figure`. More information on style and placement of the buttons is available in Plotly's [`updatemenus` reference](https://plot.ly/python/reference/#layout-updatemenus).
<br>
The buttons are defined as follows:
```
'updatemenus': [{'type': 'buttons',
'buttons': [{'label': 'Your Label',
'method': 'animate',
'args': [See Below]}]}]
```
#### Defining Button Arguments
- `None`: Setting `'args'` to undefined (i.e. `'args': [None]`) will create a simple play button that will animate all frames.
- string: Animate all frames with group `'<some string>'`. This is a way of scoping the animations in case you would prefer to animate without explicitly enumerating all frames.
- `['frame1', 'frame2', ...]`: Animate a sequence of named frames.
- `[{data: [], layout: {}, traces: []}, {...}]`: Nearly identical to animating named frames; though this variant lets you inline data instead of adding it as named frames. This can be useful for interaction where it's undesirable to add and manage named frames for ephemeral changes.
- `[null]`: A simple way to create a pause button (requires `mode: 'immediate'`). This argument dumps the currently queued frames (`mode: 'immediate'`), and then animates an empty sequence of frames (`[null]`).
- <b>Please Note:</b> We <b>do not</b> recommend using: `[ ]`. This syntax may cause confusion because it looks indistinguishable from a "pause button", but nested properties have logic that treats empty arrays as entirely removable, so it will function as a play button.<br><br>
Refer to the examples below to see the buttons in action!
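Putting the pieces together, a play button plus a pause button (the `[null]` pattern with `mode: 'immediate'` described above) can be sketched as:
```
'updatemenus': [{'type': 'buttons',
                 'buttons': [{'label': 'Play',
                              'method': 'animate',
                              'args': [None, {'fromcurrent': True}]},
                             {'label': 'Pause',
                              'method': 'animate',
                              'args': [[None], {'frame': {'duration': 0, 'redraw': False},
                                                'mode': 'immediate',
                                                'transition': {'duration': 0}}]}]}]
```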
#### Points Changing Size
```
import plotly.plotly as py
from plotly.grid_objs import Grid, Column
import time
column_1 = Column([0.9, 1.1], 'x')
column_2 = Column([1.0, 1.0], 'y')
column_3 = Column([0.8, 1.2], 'x2')
column_4 = Column([1.2, 0.8], 'y2')
column_5 = Column([0.7, 1.3], 'x3')
column_6 = Column([0.7, 1.3], 'y3')
column_7 = Column([0.6, 1.4], 'x4')
column_8 = Column([1.5, 0.5], 'y4')
column_9 = Column([0.4, 1.6], 'x5')
column_10 = Column([1.2, 0.8], 'y5')
grid = Grid([column_1, column_2, column_3, column_4, column_5,
column_6, column_7, column_8, column_9, column_10])
py.grid_ops.upload(grid, 'points_changing_size_grid'+str(time.time()), auto_open=False)
# create figure
figure = {
'data': [
{
'xsrc': grid.get_column_reference('x'),
'ysrc': grid.get_column_reference('y'),
'mode': 'markers',
'marker': {'color': '#48186a', 'size': 10}
}
],
'layout': {'title': 'Growing Circles',
'xaxis': {'range': [0, 2], 'autorange': False},
'yaxis': {'range': [0, 2], 'autorange': False},
'updatemenus': [{
'buttons': [
{'args': [None],
'label': 'Play',
'method': 'animate'}
],
'pad': {'r': 10, 't': 87},
'showactive': False,
'type': 'buttons'
}]},
'frames': [
{
'data': [
{
'xsrc': grid.get_column_reference('x2'),
'ysrc': grid.get_column_reference('y2'),
'mode': 'markers',
'marker': {'color': '#3b528b', 'size': 25}
}
]
},
{
'data': [
{
'xsrc': grid.get_column_reference('x3'),
'ysrc': grid.get_column_reference('y3'),
'mode': 'markers',
'marker': {'color': '#26828e', 'size': 50}
}
]
},
{
'data': [
{
'xsrc': grid.get_column_reference('x4'),
'ysrc': grid.get_column_reference('y4'),
'mode': 'markers',
'marker': {'color': '#5ec962', 'size': 80}
}
]
},
{
'data': [
{
'xsrc': grid.get_column_reference('x5'),
'ysrc': grid.get_column_reference('y5'),
'mode': 'markers',
'marker': {'color': '#d8e219', 'size': 100}
}
]
}
]
}
py.icreate_animations(figure, 'points_changing_size'+str(time.time()))
```
#### Offline Mode
`Animations` can be created either `offline` or `online`. To learn about how to set up working offline, check out the [offline documentation](https://plot.ly/python/offline/).
#### Basic Example
To re-run the animation see the following example with a play button.
```
from plotly.offline import init_notebook_mode, iplot
from IPython.display import display, HTML
init_notebook_mode(connected=True)
figure = {'data': [{'x': [0, 1], 'y': [0, 1]}],
'layout': {'xaxis': {'range': [0, 5], 'autorange': False},
'yaxis': {'range': [0, 5], 'autorange': False},
'title': 'Start Title'},
'frames': [{'data': [{'x': [1, 2], 'y': [1, 2]}]},
{'data': [{'x': [1, 4], 'y': [1, 4]}]},
{'data': [{'x': [3, 4], 'y': [3, 4]}],
'layout': {'title': 'End Title'}}]}
iplot(figure)
```
#### Simple Play Button
```
from plotly.offline import init_notebook_mode, iplot
from IPython.display import display, HTML
init_notebook_mode(connected=True)
figure = {'data': [{'x': [0, 1], 'y': [0, 1]}],
'layout': {'xaxis': {'range': [0, 5], 'autorange': False},
'yaxis': {'range': [0, 5], 'autorange': False},
'title': 'Start Title',
'updatemenus': [{'type': 'buttons',
'buttons': [{'label': 'Play',
'method': 'animate',
'args': [None]}]}]
},
'frames': [{'data': [{'x': [1, 2], 'y': [1, 2]}]},
{'data': [{'x': [1, 4], 'y': [1, 4]}]},
{'data': [{'x': [3, 4], 'y': [3, 4]}],
'layout': {'title': 'End Title'}}]}
iplot(figure)
```
#### Moving Point on a Curve
```
from plotly.offline import init_notebook_mode, iplot
from IPython.display import display, HTML
import numpy as np
init_notebook_mode(connected=True)
t=np.linspace(-1,1,100)
x=t+t**2
y=t-t**2
xm=np.min(x)-1.5
xM=np.max(x)+1.5
ym=np.min(y)-1.5
yM=np.max(y)+1.5
N=50
s=np.linspace(-1,1,N)
xx=s+s**2
yy=s-s**2
data=[dict(x=x, y=y,
mode='lines',
line=dict(width=2, color='blue')
),
dict(x=x, y=y,
mode='lines',
line=dict(width=2, color='blue')
)
]
layout=dict(xaxis=dict(range=[xm, xM], autorange=False, zeroline=False),
yaxis=dict(range=[ym, yM], autorange=False, zeroline=False),
title='Kinematic Generation of a Planar Curve', hovermode='closest',
updatemenus= [{'type': 'buttons',
'buttons': [{'label': 'Play',
'method': 'animate',
'args': [None]}]}])
frames=[dict(data=[dict(x=[xx[k]],
y=[yy[k]],
mode='markers',
marker=dict(color='red', size=10)
)
]) for k in range(N)]
figure1=dict(data=data, layout=layout, frames=frames)
iplot(figure1)
```
#### Moving Frenet Frame Along a Planar Curve
```
from plotly.offline import init_notebook_mode, iplot
from IPython.display import display, HTML
import numpy as np
init_notebook_mode(connected=True)
N=50
s=np.linspace(-1,1,N)
vx=1+2*s
vy=1-2*s #v=(vx, vy) is the velocity
speed=np.sqrt(vx**2+vy**2)
ux=vx/speed #(ux, uy) unit tangent vector, (-uy, ux) unit normal vector
uy=vy/speed
xend=xx+ux #end coordinates for the unit tangent vector at (xx, yy)
yend=yy+uy
xnoe=xx-uy #end coordinates for the unit normal vector at (xx,yy)
ynoe=yy+ux
data=[dict(x=x, y=y,
name='frame',
mode='lines',
line=dict(width=2, color='blue')),
dict(x=x, y=y,
name='curve',
mode='lines',
line=dict(width=2, color='blue'))
]
layout=dict(width=600, height=600,
xaxis=dict(range=[xm, xM], autorange=False, zeroline=False),
yaxis=dict(range=[ym, yM], autorange=False, zeroline=False),
title='Moving Frenet Frame Along a Planar Curve', hovermode='closest',
updatemenus= [{'type': 'buttons',
'buttons': [{'label': 'Play',
'method': 'animate',
'args': [None]}]}])
frames=[dict(data=[dict(x=[xx[k], xend[k], None, xx[k], xnoe[k]],
y=[yy[k], yend[k], None, yy[k], ynoe[k]],
mode='lines',
line=dict(color='red', width=2))
]) for k in range(N)]
figure2=dict(data=data, layout=layout, frames=frames)
iplot(figure2)
```
#### Using a Slider and Buttons
The following example uses the well known [Gapminder dataset](https://www.gapminder.org/tag/gdp-per-capita/) to exemplify animation capabilities. This bubble chart animation shows the change in 'GDP per Capita' against the 'Life Expectancy' of several countries from the year 1952 to 2007, colored by their respective continent and sized by population.
```
from plotly.offline import init_notebook_mode, iplot
from IPython.display import display, HTML
import pandas as pd
init_notebook_mode(connected=True)
url = 'https://raw.githubusercontent.com/plotly/datasets/master/gapminderDataFiveYear.csv'
dataset = pd.read_csv(url)
years = ['1952', '1962', '1967', '1972', '1977', '1982', '1987', '1992', '1997', '2002', '2007']
# make list of continents
continents = []
for continent in dataset['continent']:
if continent not in continents:
continents.append(continent)
# make figure
figure = {
'data': [],
'layout': {},
'frames': []
}
# fill in most of layout
figure['layout']['xaxis'] = {'range': [30, 85], 'title': 'Life Expectancy'}
figure['layout']['yaxis'] = {'title': 'GDP per Capita', 'type': 'log'}
figure['layout']['hovermode'] = 'closest'
figure['layout']['sliders'] = {
'args': [
'transition', {
'duration': 400,
'easing': 'cubic-in-out'
}
],
'initialValue': '1952',
'plotlycommand': 'animate',
'values': years,
'visible': True
}
figure['layout']['updatemenus'] = [
{
'buttons': [
{
'args': [None, {'frame': {'duration': 500, 'redraw': False},
'fromcurrent': True, 'transition': {'duration': 300, 'easing': 'quadratic-in-out'}}],
'label': 'Play',
'method': 'animate'
},
{
'args': [[None], {'frame': {'duration': 0, 'redraw': False}, 'mode': 'immediate',
'transition': {'duration': 0}}],
'label': 'Pause',
'method': 'animate'
}
],
'direction': 'left',
'pad': {'r': 10, 't': 87},
'showactive': False,
'type': 'buttons',
'x': 0.1,
'xanchor': 'right',
'y': 0,
'yanchor': 'top'
}
]
sliders_dict = {
'active': 0,
'yanchor': 'top',
'xanchor': 'left',
'currentvalue': {
'font': {'size': 20},
'prefix': 'Year:',
'visible': True,
'xanchor': 'right'
},
'transition': {'duration': 300, 'easing': 'cubic-in-out'},
'pad': {'b': 10, 't': 50},
'len': 0.9,
'x': 0.1,
'y': 0,
'steps': []
}
# make data
year = 1952
for continent in continents:
dataset_by_year = dataset[dataset['year'] == year]
dataset_by_year_and_cont = dataset_by_year[dataset_by_year['continent'] == continent]
data_dict = {
'x': list(dataset_by_year_and_cont['lifeExp']),
'y': list(dataset_by_year_and_cont['gdpPercap']),
'mode': 'markers',
'text': list(dataset_by_year_and_cont['country']),
'marker': {
'sizemode': 'area',
'sizeref': 200000,
'size': list(dataset_by_year_and_cont['pop'])
},
'name': continent
}
figure['data'].append(data_dict)
# make frames
for year in years:
frame = {'data': [], 'name': str(year)}
for continent in continents:
dataset_by_year = dataset[dataset['year'] == int(year)]
dataset_by_year_and_cont = dataset_by_year[dataset_by_year['continent'] == continent]
data_dict = {
'x': list(dataset_by_year_and_cont['lifeExp']),
'y': list(dataset_by_year_and_cont['gdpPercap']),
'mode': 'markers',
'text': list(dataset_by_year_and_cont['country']),
'marker': {
'sizemode': 'area',
'sizeref': 200000,
'size': list(dataset_by_year_and_cont['pop'])
},
'name': continent
}
frame['data'].append(data_dict)
figure['frames'].append(frame)
slider_step = {'args': [
[year],
{'frame': {'duration': 300, 'redraw': False},
'mode': 'immediate',
'transition': {'duration': 300}}
],
'label': year,
'method': 'animate'}
sliders_dict['steps'].append(slider_step)
figure['layout']['sliders'] = [sliders_dict]
iplot(figure)
```
#### Important Notes
- Defining `redraw`: Setting `redraw: false` is an optimization for scatter plots so that animate just makes changes without redrawing the whole plot. For other plot types, such as contour plots, every frame <b>must</b> be a total plot redraw, i.e. `redraw: true`.
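For example, a play button for a contour or heatmap animation would look like the sketch below, with `redraw` forced on (scatter-only animations can keep `redraw: False` as in the examples above):
```
{'label': 'Play',
 'method': 'animate',
 'args': [None, {'frame': {'duration': 500, 'redraw': True},   # non-scatter traces need a full redraw
                 'fromcurrent': True}]}
```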
#### Reference
For additional information and attributes for creating bubble charts in Plotly see: https://plot.ly/python/bubble-charts/.
For more documentation on creating animations with Plotly, see https://plot.ly/python/#animations.
```
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
!pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'intro-to-animations.ipynb', 'python/animations/', 'Intro to Animations | plotly',
'An introduction to creating animations with Plotly in Python.',
title='Intro to Animations in Python | plotly',
name='Intro to Animations',
language='python',
page_type='example_index',
has_thumbnail='true', thumbnail='thumbnail/animations.gif',
display_as='animations', ipynb= '~notebook_demo/131', order=1)
```
|
github_jupyter
|
import plotly
plotly.__version__
import plotly.plotly as py
from plotly.grid_objs import Grid, Column
import time
column_1 = Column([0.5], 'x')
column_2 = Column([0.5], 'y')
column_3 = Column([1.5], 'x2')
column_4 = Column([1.5], 'y2')
grid = Grid([column_1, column_2, column_3, column_4])
py.grid_ops.upload(grid, 'ping_pong_grid'+str(time.time()), auto_open=False)
figure = {
'data': [
{
'xsrc': grid.get_column_reference('x'),
'ysrc': grid.get_column_reference('y'),
'mode': 'markers',
}
],
'layout': {'title': 'Ping Pong Animation',
'xaxis': {'range': [0, 2], 'autorange': False},
'yaxis': {'range': [0, 2], 'autorange': False},
'updatemenus': [{
'buttons': [
{'args': [None],
'label': 'Play',
'method': 'animate'}
],
'pad': {'r': 10, 't': 87},
'showactive': False,
'type': 'buttons'
}]},
'frames': [
{
'data': [
{
'xsrc': grid.get_column_reference('x2'),
'ysrc': grid.get_column_reference('y2'),
'mode': 'markers',
}
]
},
{
'data': [
{
'xsrc': grid.get_column_reference('x'),
'ysrc': grid.get_column_reference('y'),
'mode': 'markers',
}
]
}
]
}
py.icreate_animations(figure, 'ping_pong'+str(time.time()))
'updatemenus': [{'type': 'buttons',
'buttons': [{'label': 'Your Label',
'method': 'animate',
'args': [See Below]}]}]
import plotly.plotly as py
from plotly.grid_objs import Grid, Column
import time
column_1 = Column([0.9, 1.1], 'x')
column_2 = Column([1.0, 1.0], 'y')
column_3 = Column([0.8, 1.2], 'x2')
column_4 = Column([1.2, 0.8], 'y2')
column_5 = Column([0.7, 1.3], 'x3')
column_6 = Column([0.7, 1.3], 'y3')
column_7 = Column([0.6, 1.4], 'x4')
column_8 = Column([1.5, 0.5], 'y4')
column_9 = Column([0.4, 1.6], 'x5')
column_10 = Column([1.2, 0.8], 'y5')
grid = Grid([column_1, column_2, column_3, column_4, column_5,
column_6, column_7, column_8, column_9, column_10])
py.grid_ops.upload(grid, 'points_changing_size_grid'+str(time.time()), auto_open=False)
# create figure
figure = {
'data': [
{
'xsrc': grid.get_column_reference('x'),
'ysrc': grid.get_column_reference('y'),
'mode': 'markers',
'marker': {'color': '#48186a', 'size': 10}
}
],
'layout': {'title': 'Growing Circles',
'xaxis': {'range': [0, 2], 'autorange': False},
'yaxis': {'range': [0, 2], 'autorange': False},
'updatemenus': [{
'buttons': [
{'args': [None],
'label': 'Play',
'method': 'animate'}
],
'pad': {'r': 10, 't': 87},
'showactive': False,
'type': 'buttons'
}]},
'frames': [
{
'data': [
{
'xsrc': grid.get_column_reference('x2'),
'ysrc': grid.get_column_reference('y2'),
'mode': 'markers',
'marker': {'color': '#3b528b', 'size': 25}
}
]
},
{
'data': [
{
'xsrc': grid.get_column_reference('x3'),
'ysrc': grid.get_column_reference('y3'),
'mode': 'markers',
'marker': {'color': '#26828e', 'size': 50}
}
]
},
{
'data': [
{
'xsrc': grid.get_column_reference('x4'),
'ysrc': grid.get_column_reference('y4'),
'mode': 'markers',
'marker': {'color': '#5ec962', 'size': 80}
}
]
},
{
'data': [
{
'xsrc': grid.get_column_reference('x5'),
'ysrc': grid.get_column_reference('y5'),
'mode': 'markers',
'marker': {'color': '#d8e219', 'size': 100}
}
]
}
]
}
py.icreate_animations(figure, 'points_changing_size'+str(time.time()))
from plotly.offline import init_notebook_mode, iplot
from IPython.display import display, HTML
init_notebook_mode(connected=True)
figure = {'data': [{'x': [0, 1], 'y': [0, 1]}],
'layout': {'xaxis': {'range': [0, 5], 'autorange': False},
'yaxis': {'range': [0, 5], 'autorange': False},
'title': 'Start Title'},
'frames': [{'data': [{'x': [1, 2], 'y': [1, 2]}]},
{'data': [{'x': [1, 4], 'y': [1, 4]}]},
{'data': [{'x': [3, 4], 'y': [3, 4]}],
'layout': {'title': 'End Title'}}]}
iplot(figure)
from plotly.offline import init_notebook_mode, iplot
from IPython.display import display, HTML
init_notebook_mode(connected=True)
figure = {'data': [{'x': [0, 1], 'y': [0, 1]}],
'layout': {'xaxis': {'range': [0, 5], 'autorange': False},
'yaxis': {'range': [0, 5], 'autorange': False},
'title': 'Start Title',
'updatemenus': [{'type': 'buttons',
'buttons': [{'label': 'Play',
'method': 'animate',
'args': [None]}]}]
},
'frames': [{'data': [{'x': [1, 2], 'y': [1, 2]}]},
{'data': [{'x': [1, 4], 'y': [1, 4]}]},
{'data': [{'x': [3, 4], 'y': [3, 4]}],
'layout': {'title': 'End Title'}}]}
iplot(figure)
from plotly.offline import init_notebook_mode, iplot
from IPython.display import display, HTML
import numpy as np
init_notebook_mode(connected=True)
t=np.linspace(-1,1,100)
x=t+t**2
y=t-t**2
xm=np.min(x)-1.5
xM=np.max(x)+1.5
ym=np.min(y)-1.5
yM=np.max(y)+1.5
N=50
s=np.linspace(-1,1,N)
xx=s+s**2
yy=s-s**2
data=[dict(x=x, y=y,
mode='lines',
line=dict(width=2, color='blue')
),
dict(x=x, y=y,
mode='lines',
line=dict(width=2, color='blue')
)
]
layout=dict(xaxis=dict(range=[xm, xM], autorange=False, zeroline=False),
yaxis=dict(range=[ym, yM], autorange=False, zeroline=False),
title='Kinematic Generation of a Planar Curve', hovermode='closest',
updatemenus= [{'type': 'buttons',
'buttons': [{'label': 'Play',
'method': 'animate',
'args': [None]}]}])
frames=[dict(data=[dict(x=[xx[k]],
y=[yy[k]],
mode='markers',
marker=dict(color='red', size=10)
)
]) for k in range(N)]
figure1=dict(data=data, layout=layout, frames=frames)
iplot(figure1)
from plotly.offline import init_notebook_mode, iplot
from IPython.display import display, HTML
import numpy as np
init_notebook_mode(connected=True)
N=50
s=np.linspace(-1,1,N)
vx=1+2*s
vy=1-2*s #v=(vx, vy) is the velocity
speed=np.sqrt(vx**2+vy**2)
ux=vx/speed #(ux, uy) unit tangent vector, (-uy, ux) unit normal vector
uy=vy/speed
xend=xx+ux #end coordinates for the unit tangent vector at (xx, yy)
yend=yy+uy
xnoe=xx-uy #end coordinates for the unit normal vector at (xx,yy)
ynoe=yy+ux
data=[dict(x=x, y=y,
name='frame',
mode='lines',
line=dict(width=2, color='blue')),
dict(x=x, y=y,
name='curve',
mode='lines',
line=dict(width=2, color='blue'))
]
layout=dict(width=600, height=600,
xaxis=dict(range=[xm, xM], autorange=False, zeroline=False),
yaxis=dict(range=[ym, yM], autorange=False, zeroline=False),
title='Moving Frenet Frame Along a Planar Curve', hovermode='closest',
updatemenus= [{'type': 'buttons',
'buttons': [{'label': 'Play',
'method': 'animate',
'args': [None]}]}])
frames=[dict(data=[dict(x=[xx[k], xend[k], None, xx[k], xnoe[k]],
y=[yy[k], yend[k], None, yy[k], ynoe[k]],
mode='lines',
line=dict(color='red', width=2))
]) for k in range(N)]
figure2=dict(data=data, layout=layout, frames=frames)
iplot(figure2)
from plotly.offline import init_notebook_mode, iplot
from IPython.display import display, HTML
import pandas as pd
init_notebook_mode(connected=True)
url = 'https://raw.githubusercontent.com/plotly/datasets/master/gapminderDataFiveYear.csv'
dataset = pd.read_csv(url)
years = ['1952', '1962', '1967', '1972', '1977', '1982', '1987', '1992', '1997', '2002', '2007']
# make list of continents
continents = []
for continent in dataset['continent']:
if continent not in continents:
continents.append(continent)
# make figure
figure = {
'data': [],
'layout': {},
'frames': []
}
# fill in most of layout
figure['layout']['xaxis'] = {'range': [30, 85], 'title': 'Life Expectancy'}
figure['layout']['yaxis'] = {'title': 'GDP per Capita', 'type': 'log'}
figure['layout']['hovermode'] = 'closest'
figure['layout']['sliders'] = {
'args': [
'transition', {
'duration': 400,
'easing': 'cubic-in-out'
}
],
'initialValue': '1952',
'plotlycommand': 'animate',
'values': years,
'visible': True
}
figure['layout']['updatemenus'] = [
{
'buttons': [
{
'args': [None, {'frame': {'duration': 500, 'redraw': False},
'fromcurrent': True, 'transition': {'duration': 300, 'easing': 'quadratic-in-out'}}],
'label': 'Play',
'method': 'animate'
},
{
'args': [[None], {'frame': {'duration': 0, 'redraw': False}, 'mode': 'immediate',
'transition': {'duration': 0}}],
'label': 'Pause',
'method': 'animate'
}
],
'direction': 'left',
'pad': {'r': 10, 't': 87},
'showactive': False,
'type': 'buttons',
'x': 0.1,
'xanchor': 'right',
'y': 0,
'yanchor': 'top'
}
]
sliders_dict = {
'active': 0,
'yanchor': 'top',
'xanchor': 'left',
'currentvalue': {
'font': {'size': 20},
'prefix': 'Year:',
'visible': True,
'xanchor': 'right'
},
'transition': {'duration': 300, 'easing': 'cubic-in-out'},
'pad': {'b': 10, 't': 50},
'len': 0.9,
'x': 0.1,
'y': 0,
'steps': []
}
# make data
year = 1952
for continent in continents:
dataset_by_year = dataset[dataset['year'] == year]
dataset_by_year_and_cont = dataset_by_year[dataset_by_year['continent'] == continent]
data_dict = {
'x': list(dataset_by_year_and_cont['lifeExp']),
'y': list(dataset_by_year_and_cont['gdpPercap']),
'mode': 'markers',
'text': list(dataset_by_year_and_cont['country']),
'marker': {
'sizemode': 'area',
'sizeref': 200000,
'size': list(dataset_by_year_and_cont['pop'])
},
'name': continent
}
figure['data'].append(data_dict)
# make frames
for year in years:
frame = {'data': [], 'name': str(year)}
for continent in continents:
dataset_by_year = dataset[dataset['year'] == int(year)]
dataset_by_year_and_cont = dataset_by_year[dataset_by_year['continent'] == continent]
data_dict = {
'x': list(dataset_by_year_and_cont['lifeExp']),
'y': list(dataset_by_year_and_cont['gdpPercap']),
'mode': 'markers',
'text': list(dataset_by_year_and_cont['country']),
'marker': {
'sizemode': 'area',
'sizeref': 200000,
'size': list(dataset_by_year_and_cont['pop'])
},
'name': continent
}
frame['data'].append(data_dict)
figure['frames'].append(frame)
slider_step = {'args': [
[year],
{'frame': {'duration': 300, 'redraw': False},
'mode': 'immediate',
'transition': {'duration': 300}}
],
'label': year,
'method': 'animate'}
sliders_dict['steps'].append(slider_step)
figure['layout']['sliders'] = [sliders_dict]
iplot(figure)
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
!pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'intro-to-animations.ipynb', 'python/animations/', 'Intro to Animations | plotly',
'An introduction to creating animations with Plotly in Python.',
title='Intro to Animations in Python | plotly',
name='Intro to Animations',
language='python',
page_type='example_index',
has_thumbnail='true', thumbnail='thumbnail/animations.gif',
display_as='animations', ipynb= '~notebook_demo/131', order=1)
<a href="https://colab.research.google.com/github/3dsf/SkinDeep/blob/master/videoProcessor.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Video Processing with AI models
Based on the ffmpeg-python TensorFlow streaming example,
using the models from
https://github.com/vijishmadhavan
# Install Libs
```
!pip install youtube-dl fastai==1.0.61 ffmpeg-python
```
# Select
## -- Model
## -- Video
## -- Output Name
*and collect metadata*
```
import os
modelToRun = "SkinDeep_1280.pkl" #@param ["ArtLine_500.pkl", "ArtLine_650.pkl", "ArtLine_1024.pkl", "SkinDeep.pkl", "SkinDeep_1280.pkl"]
pathToModel = os.path.join("/content/drive/",modelToRun)
downloadModel = {
"ArtLine_500.pkl": "https://www.dropbox.com/s/p9lynpwygjmeed2/ArtLine_500.pkl",
"ArtLine_650.pkl": "https://www.dropbox.com/s/starqc9qd2e1lg1/ArtLine_650.pkl",
"ArtLine_1024.pkl": "https://www.dropbox.com/s/rq90q9lr9arwdp8/ArtLine_1024%20%281%29.pkl",
"SkinDeep.pkl": "https://www.dropbox.com/s/5mmcqao4mozpube/SkinDeep.pkl?dl=1",
"SkinDeep_1280.pkl": "https://www.dropbox.com/s/wxty56nhidusojr/SkinDeep_1280.pkl"
}
if os.path.isfile(pathToModel) == False :
if os.path.isfile(modelToRun) == False :
print("Downloading Model")
download = downloadModel[modelToRun]
!wget -O $modelToRun $download
pathToModel = modelToRun
else :
print("Found Local Version")
pathToModel = modelToRun
videoURL = "https://www.youtube.com/watch?v=olnqoL-yLZE" #@param {type:"string"}
!rm input.mp4 #required
!time(youtube-dl -f 'bestvideo[ext=mp4]+bestaudio[ext=acc]/mp4' --output "input.%(ext)s" $videoURL)
output_name = "postMalone.sd1280.mp4" #@param {type:"string"}
import subprocess
AUDIO = False
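# Probe input.mp4 with ffmpeg (no output file is written) and scrape fps, resolution,
# duration and audio presence from the banner text it prints.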
process = subprocess.Popen(['ffmpeg', '-hide_banner', '-i', 'input.mp4', '-y' ], stdout=subprocess.PIPE, stderr=subprocess.STDOUT,universal_newlines=True)
for line in process.stdout:
print(line)
if ' Video:' in line:
l_split = line.split(',')
#print('---------printing line ", line)
for segment in l_split[1:]:
if 'fps' in segment:
s = segment.strip().split(' ')
fps = float(s[0])
if 'x' in segment:
s = segment.strip().split('x')
width = int(s[0])
s2 = s[1].split(' ')
height = int(s2[0])
if 'Duration:' in line:
s = line.split(',')
ss = s[0].split(' ')
sss = ss[3].strip().split(':')
seconds = float(sss[0])*60*60 + float(sss[1])*60 + float(sss[2])
if 'Audio:' in line:
AUDIO = True
print('fps = ', str(fps))
print('width = ', str(width))
print('height = ', str(height))
print('seconds = ', str(seconds))
print('AUDIO = ', AUDIO)
```
# Process Video
```
import os
import logging as logger
from torchvision import transforms as T
from fastai.utils.mem import *
from fastai.vision import open_image, load_learner, Image, torch, pil2tensor, image2np
import ffmpeg, cv2
import time           # used for the per-frame timing in the main loop; may also arrive via the star imports above
from torch import nn  # FeatureLoss below subclasses nn.Module; imported explicitly in case the star imports do not expose nn
import numpy as np
#progress bar
from IPython.display import HTML, display
from tqdm import *
#There is a scaling warning that might come up, and this block suppresses user warnings
#Comment out this block if you don't mind seeing the warnings
import warnings
warnings.filterwarnings("ignore", category=UserWarning)
### Progress bar
def progress(value, max=100):
return HTML("""
<progress
value='{value}'
max='{max}',
style='width: 100%'
>
{value}
</progress>
""".format(value=value, max=max))
### Class required for model
class FeatureLoss(nn.Module):
def __init__(self, m_feat, layer_ids, layer_wgts):
super().__init__()
self.m_feat = m_feat
self.loss_features = [self.m_feat[i] for i in layer_ids]
self.hooks = hook_outputs(self.loss_features, detach=False)
self.wgts = layer_wgts
self.metric_names = ['pixel',] + [f'feat_{i}' for i in range(len(layer_ids))
] + [f'gram_{i}' for i in range(len(layer_ids))]
def make_features(self, x, clone=False):
self.m_feat(x)
return [(o.clone() if clone else o) for o in self.hooks.stored]
def forward(self, input, target):
out_feat = self.make_features(target, clone=True)
in_feat = self.make_features(input)
self.feat_losses = [base_loss(input,target)]
self.feat_losses += [base_loss(f_in, f_out)*w
for f_in, f_out, w in zip(in_feat, out_feat, self.wgts)]
self.feat_losses += [base_loss(gram_matrix(f_in), gram_matrix(f_out))*w**2 * 5e3
for f_in, f_out, w in zip(in_feat, out_feat, self.wgts)]
self.metrics = dict(zip(self.metric_names, self.feat_losses))
return sum(self.feat_losses)
def __del__(self): self.hooks.remove()
### DETERMINE IF CUDA AVAILABLE and LOAD MODEL
def modelDeviceLoadSelect():
if torch.cuda.is_available():
def load_model():
global USEgPU
learn = load_learner('.', pathToModel, device=0 )
USEgPU = True
print("INFERENCE DEVICE : cuda")
return learn
else:
def load_model():
learn = load_learner('.', pathToModel, device='cpu')
print("INFERENCE DEVICE : cpu")
return learn
learn=load_model()
return learn
### Functions based on ffmpeg-python video tensorflow example
def readFrameAsNp(ffmpegDecode, width, height):
logger.debug('Reading frame')
# Note: RGB24 == 3 bytes per pixel.
frame_size = width * height * 3
in_bytes = ffmpegDecode.stdout.read(frame_size)
if len(in_bytes) == 0:
frame = None
else:
assert len(in_bytes) == frame_size
frame = (
np
.frombuffer(in_bytes, np.uint8)
.reshape([height, width, 3])
)
return frame
def writeFrameAsByte(ffmpegEncode, frame):
logger.debug('Writing frame')
ffmpegEncode.stdin.write(
frame
.astype(np.uint8)
.tobytes()
)
def vid2np(in_filename):
logger.info('vid2np() -- Decoding to pipe')
codec = 'h264'
args = (
ffmpeg
.input(in_filename,
**{'c:v': codec})
.output('pipe:', format='rawvideo', pix_fmt='rgb24')
.global_args("-hide_banner")
.compile()
)
return subprocess.Popen(args, stdout=subprocess.PIPE)
def np2vid(out_filename, fps_out, in_file, widthOut, heightOut):
logger.info('np2vid() encoding from pipe')
global AUDIO
codec = 'h264'
if AUDIO == True :
pipeline2 = ffmpeg.input(in_file)
audio = pipeline2.audio
args = (
ffmpeg
.input('pipe:', format='rawvideo', pix_fmt='rgb24',
s='{}x{}'.format(widthOut, heightOut),
framerate=fps_out )
.output(audio, out_filename , pix_fmt='yuv420p', **{'c:v': codec},
shortest=None, acodec='copy')
.global_args("-hide_banner")
.overwrite_output()
.compile()
)
else:
args = (
ffmpeg
.input('pipe:', format='rawvideo', pix_fmt='rgb24',
s='{}x{}'.format(widthOut, heightOut),
framerate=fps_out )
.output(out_filename , pix_fmt='yuv420p', **{'c:v': codec})
.global_args("-hide_banner")
.overwrite_output()
.compile()
)
return subprocess.Popen(args, stdin=subprocess.PIPE)
### The model changes the resolution; process a blank frame to find the new output resolution
def getOutputResolution():
#process a blank frame and return its dimensions
blank = np.zeros([height,width,3],dtype=np.uint8)
blank.fill(255)
fastAI_image = Image(pil2tensor(blank, dtype=np.float32).div_(255))
p,img_hr,b = learn.predict(fastAI_image)
im = image2np(img_hr)
x = im.shape
out_height = x[0]
out_width = x[1]
return int(out_width), int(out_height)
### This is where all the magic happens
def processFrame(frame) :
global INCR
### Frame comes in as np array
#Load image in fastai's framework as an image
fastAI_image = Image(pil2tensor(frame, dtype=np.float32).div_(255))
# Inference
p,img_hr,b = learn.predict(fastAI_image)
# Convert output tensor into np array
im = image2np(img_hr)
# alpha and beta control line output darkness / warmness
norm_image = cv2.normalize(im, None, alpha=-60, beta=260, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)
INCR += 1
# enabling the next 2 lines will also output images when processing videos
#outCV2 = cv2.cvtColor(norm_image, cv2.COLOR_RGB2BGR )
#cv2.imwrite(output_name+ str(INCR) + ".png", outCV2) # INCR is just a frame counter
return norm_image
if __name__ == '__main__':
INCR = 0
learn = modelDeviceLoadSelect()
outWidth, outHeight = getOutputResolution()
estimatedFrames = fps * seconds
print('Model = ', pathToModel)
print('*** Video In***')
print('fps = ', str(fps))
print('width = ', str(width))
print('height = ', str(height))
print('seconds = ', str(seconds))
print('AUDIO = ', AUDIO)
print()
print('*** Video Out***')
print('outWidth = ', str(outWidth))
print('outHeight = ', str(outHeight))
print('output_name = ', output_name)
print()
#progress bar
print('estimatedFrames = ', estimatedFrames)
out = display(progress(0, 100), display_id=True)
inputVid = 'input.mp4'
ffmpegDecode = vid2np(inputVid)
ffmpegEncode = np2vid(output_name, fps, inputVid, outWidth, outHeight)
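# Stream the video: read raw RGB frames from the decoder pipe, run inference on each
# frame, and write the processed frame to the encoder pipe until the input is exhausted.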
while True:
timeMark = time.process_time()
in_frame = readFrameAsNp(ffmpegDecode, width, height)
if in_frame is None:
logger.info('End of input stream')
break
logger.debug('Processing frame')
out_frame = processFrame(in_frame)
writeFrameAsByte(ffmpegEncode, out_frame)
#progress bar
out.update(progress(INCR, estimatedFrames))
minutesRemaining = str(round((estimatedFrames-INCR)*(time.process_time()-timeMark)/60))
print("\rEstimated Minutes Remaining = ", minutesRemaining, end="")
logger.info('Waiting for ffmpegDecode')
ffmpegDecode.wait()
logger.info('Waiting for ffmpegEncode')
ffmpegEncode.stdin.close()
ffmpegEncode.wait()
logger.info('Done')
```
# Download result
```
from google.colab import files
files.download(output_name)
```
# MapNode
If you want to iterate over a list of inputs, but need to feed all iterated outputs afterward as one input (an array) to the next node, you need to use a **``MapNode``**. A ``MapNode`` is quite similar to a normal ``Node``, but it can take a list of inputs and operate over each input separately, ultimately returning a list of outputs.
Imagine that you have a list of items (let's say files) and you want to execute the same node on them (for example some smoothing or masking). Some nodes accept multiple files and do exactly the same thing on them, but some don't (they expect only one file). `MapNode` can solve this problem. Imagine you have the following workflow:
<img src="../static/images/mapnode.png" width="325">
Node `A` outputs a list of files, but node `B` accepts only one file. Additionally, `C` expects a list of files. What you would like is to run `B` for every file in the output of `A` and collect the results as a list and feed it to `C`. Something like this:
```python
from nipype import Node, MapNode, Workflow
a = Node(interface=A(), name="a")
b = MapNode(interface=B(), name="b", iterfield=['in_file'])
c = Node(interface=C(), name="c")
my_workflow = Workflow(name="my_workflow")
my_workflow.connect([(a,b,[('out_files','in_file')]),
(b,c,[('out_file','in_files')])
])
```
Let's demonstrate this with a simple function interface:
```
import os.path as op
from nipype import Function
def square_func(x):
return x ** 2
square = Function(["x"], ["f_x"], square_func)
```
We see that this function just takes a numeric input and returns its squared value.
```
square.run(x=2).outputs.f_x
```
What if we wanted to square a list of numbers? We could set an iterable and just split up the workflow in multiple sub-workflows. But say we were making a simple workflow that squared a list of numbers and then summed them. The sum node would expect a list, but using an iterable would make a bunch of sum nodes, and each would get one number from the list. The solution here is to use a `MapNode`.
## `iterfield`
The `MapNode` constructor has a field called `iterfield`, which tells it what inputs should be expecting a list.
```
from nipype import MapNode
square_node = MapNode(square, name="square", iterfield=["x"])
square_node.inputs.x = [0, 1, 2, 3]
res = square_node.run()
res.outputs.f_x
```
Because `iterfield` can take a list of names, you can operate over multiple sets of data, as long as they're the same length. The values in each list will be paired; it does not compute a combinatoric product of the lists.
```
def power_func(x, y):
return x ** y
power = Function(["x", "y"], ["f_xy"], power_func)
power_node = MapNode(power, name="power", iterfield=["x", "y"])
power_node.inputs.x = [0, 1, 2, 3]
power_node.inputs.y = [0, 1, 2, 3]
res = power_node.run()
print(res.outputs.f_xy)
```
But not every input needs to be an iterfield.
```
power_node = MapNode(power, name="power", iterfield=["x"])
power_node.inputs.x = [0, 1, 2, 3]
power_node.inputs.y = 3
res = power_node.run()
print(res.outputs.f_xy)
```
As in the case of `iterables`, each underlying `MapNode` execution can happen in **parallel**. Hopefully, you see how these tools allow you to write flexible, reusable workflows that will help you process large amounts of data efficiently and reproducibly.
In more advanced applications it is useful to be able to iterate over items of nested lists (for example ``[[1,2],[3,4]]``). MapNode allows you to do this with the "nested=True" parameter. Outputs will preserve the same nested structure as the inputs.
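As a minimal sketch (not taken from the original notebook, and reusing the `square` function interface defined above), iterating over a nested list could look like this:

```
square_nested = MapNode(square, name="square_nested",
                        iterfield=["x"], nested=True)
square_nested.inputs.x = [[0, 1], [2, 3]]
nested_res = square_nested.run()
nested_res.outputs.f_x  # expected to keep the nesting: [[0, 1], [4, 9]]
```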
# Why is this important?
Let's consider that we have multiple functional images (A) and each of them should be motion-corrected (B1, B2, B3, ...). But afterward, we want to put them all together into a GLM, i.e. the input for the GLM should be an array of [B1, B2, B3, ...]. [Iterables](basic_iteration.ipynb) can't do that; they would split up the pipeline. Therefore, we need **MapNodes**.
<img src="../static/images/mapnode.png" width="300">
Let's look at a simple example, where we want to motion correct two functional images. For this we need two nodes:
- Gunzip, to unzip the files (plural)
- Realign, to do the motion correction
```
from nipype.algorithms.misc import Gunzip
from nipype.interfaces.spm import Realign
from nipype import Node, MapNode, Workflow
# Here we specify a list of files (for this tutorial, we just add the same file twice)
files = [op.abspath('data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz'),
op.abspath('data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz')]
realign = Node(Realign(register_to_mean=True),
name='motion_correction')
```
If we try to specify the input for the **Gunzip** node with a simple **Node**, we get the following error:
```
gunzip = Node(Gunzip(), name='gunzip',)
try:
gunzip.inputs.in_file = files
except(Exception) as err:
if "TraitError" in str(err.__class__):
print("TraitError:", err)
else:
raise
else:
raise
```
```bash
TraitError: The 'in_file' trait of a GunzipInputSpec instance must be an existing file name, but a value of ['data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz', 'data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz'] <class 'list'> was specified.
```
But if we do it with a **MapNode**, it works:
```
gunzip = MapNode(Gunzip(), name='gunzip',
iterfield=['in_file'])
gunzip.inputs.in_file = files
```
Now, we just have to create a workflow, connect the nodes and we can run it:
```
mcflow = Workflow(name='realign_with_spm')
mcflow.connect(gunzip, 'out_file', realign, 'in_files')
mcflow.base_dir = op.abspath('output/')
mcflow.run('MultiProc', plugin_args={'n_procs': 4})
```
### Exercise 1
Create a workflow to calculate a sum of factorials of numbers from a range between $n_{min}$ and $n_{max}$, i.e.:
$$\sum _{k=n_{min}}^{n_{max}} k! = 0! + 1! +2! + 3! + \cdots$$
if $n_{min}=0$ and $n_{max}=3$
$$\sum _{k=0}^{3} k! = 0! + 1! +2! + 3! = 1 + 1 + 2 + 6 = 10$$
Use ``Node`` for a function that creates a list of integers and a function that sums everything at the end. Use ``MapNode`` to calculate factorials.
```
#write your solution here
from nipype import Workflow, Node, MapNode, Function
import os
def range_fun(n_min, n_max):
return list(range(n_min, n_max+1))
def factorial(n):
# print("FACTORIAL, {}".format(n))
import math
return math.factorial(n)
def summing(terms):
return sum(terms)
wf_ex1 = Workflow('ex1')
wf_ex1.base_dir = os.getcwd()
range_nd = Node(Function(input_names=['n_min', 'n_max'],
output_names=['range_list'],
function=range_fun),
name='range_list')
factorial_nd = MapNode(Function(input_names=['n'],
output_names=['fact_out'],
function=factorial),
iterfield=['n'],
name='factorial')
summing_nd = Node(Function(input_names=['terms'],
output_names=['sum_out'],
function=summing),
name='summing')
range_nd.inputs.n_min = 0
range_nd.inputs.n_max = 3
wf_ex1.add_nodes([range_nd])
wf_ex1.connect(range_nd, 'range_list', factorial_nd, 'n')
wf_ex1.connect(factorial_nd, 'fact_out', summing_nd, "terms")
eg = wf_ex1.run()
```
Let's print all nodes:
```
eg.nodes()
```
The final result should be 10:
```
list(eg.nodes())[2].result.outputs
```
We can also check the results of the two other nodes:
```
print(list(eg.nodes())[0].result.outputs)
print(list(eg.nodes())[1].result.outputs)
```
# CurvLearn Tutorial
In this tutorial, you will learn how to build a non-Euclidean binary classification model, including how to
- define a manifold and Riemannian tensors,
- build non-Euclidean models from manifold operations,
- define a loss function and apply Riemannian optimization.
Let's start!
```
pip install curvlearn
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import tensorflow as tf
```
Define hyperparameters.
```
epochs = 500
batch_size = 1024
log_steps = 100
learning_rate = 1e-3
```
CurvLearn now supports the following manifolds
- Constant curvature manifolds
- ```curvlearn.manifolds.Euclidean``` - Euclidean space with zero curvature.
- ```curvlearn.manifolds.Stereographic``` - Constant curvature stereographic projection model. The curvature can be positive, negative or zero.
- ```curvlearn.manifolds.PoincareBall``` - The stereographic projection of the Lorentz model with negative curvature.
- ```curvlearn.manifolds.ProjectedSphere``` - The stereographic projection of the sphere model with positive curvature.
- Mixed curvature manifolds
- ```curvlearn.manifolds.Product``` - Mixed-curvature space consists of multiple manifolds with different curvatures.
In this tutorial, we use the stereographic model with trainable curvature.
```
from curvlearn.manifolds import Stereographic
manifold = Stereographic()
curvature = tf.get_variable(name="curvature", initializer=tf.constant(0.0, dtype=manifold.dtype), trainable=True)
print(manifold.name)
```
Generate a random binary classification dataset.
One sparse feature and 8 dense features are used to predict the 0/1 label.
```
global_step = tf.get_variable(name='global_step',initializer=tf.constant(0), trainable=False)
dense = np.random.rand(10000, 8)
sparse = np.random.randint(0, 1000, [10000, 1])
labels = np.random.choice([0, 1], size=10000, replace=True)
dataset = tf.data.Dataset.from_tensor_slices(
{
'dense': tf.cast(dense, tf.float32),
'sparse': tf.cast(sparse, tf.int32),
'labels': tf.cast(labels, tf.float32)
}
)
dataset = dataset.shuffle(batch_size * 10).batch(batch_size, drop_remainder=False).repeat(epochs)
iterator = tf.data.make_one_shot_iterator(dataset)
batch = iterator.get_next()
dense, sparse, labels = batch['dense'], batch['sparse'], batch['labels']
```
Defining tensors on a specific manifold is done simply through the wrapper function `manifold.variable`.
Depending on the variable name, tensors are optimized in different ways.
- If "*RiemannianParameter*" is contained in the variable name, the variable is a Riemannian tensor and should be optimized by Riemannian optimizers.
- Otherwise, the variable is a Euclidean (tangent) tensor and is projected onto the manifold. In this case, Riemannian optimizers behave equivalently to vanilla Euclidean optimizers.
Here we optimize the dense embedding in Euclidean space and the sparse embedding in curved space.
```
embedding_table = tf.get_variable(
name='RiemannianParameter/embedding',
shape=(1000, 8),
dtype=manifold.dtype,
initializer=tf.truncated_normal_initializer(0.001)
)
embedding_table = manifold.variable(embedding_table, c=curvature)
sparse_embedding = tf.squeeze(tf.nn.embedding_lookup(embedding_table, sparse), axis=1)
dense_embedding = manifold.variable(dense, c=curvature)
```
Building riemannian neural networks requires replacing euclidean tensor operations with manifold operations.
CurvLearn now supports the following basic operations.
- ```variable(t, c)``` - Defines a riemannian variable from manifold or tangent space at origin according to its name.
- ```to_manifold(t, c, base)``` - Converts a tensor ```t``` in the tangent space of ```base``` point to the manifold.
- ```to_tangent(t, c, base)``` - Converts a tensor ```t``` in the manifold to the tangent space of ```base``` point.
- ```weight_sum(tensor_list, a, c)``` - Computes the sum of tensor list ```tensor_list``` with weight list ```a```.
- ```mean(t, c, axis)``` - Computes the average of elements along ```axis``` dimension of a tensor ```t```.
- ```sum(t, c, axis)``` - Computes the sum of elements along ```axis``` dimension of a tensor ```t```.
- ```concat(tensor_list, c, axis)``` - Concatenates tensor list ```tensor_list``` along ```axis``` dimension.
- ```matmul(t, m, c)``` - Multiplies tensor ```t``` by euclidean matrix ```m```.
- ```add(x, y, c)``` - Adds tensor ```x``` and tensor ```y```.
- ```add_bias(t, b, c)``` - Adds a euclidean bias vector ```b``` to tensor ```t```.
- ```activation(t, c_in, c_out, act)``` - Computes the value of activation function ```act``` for the input tensor ```t```.
- ```linear(t, in_dim, out_dim, c_in, c_out, act, scope)``` - Computes the linear transformation for the input tensor ```t```.
- ```distance(src, tar, c)``` - Computes the squared geodesic/distance between ```src``` and ```tar```.
Complex operations can be decomposed into basic operations explicitly or realized in tangent space implicitly.
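For example, an operation without a built-in manifold counterpart can be realized implicitly in the tangent space. The sketch below is only an illustration (it is not taken from the CurvLearn docs): it assumes the keyword names match the signatures listed above and borrows the `manifold.proj` origin construction used in the decoder cell further down.

```
# Illustrative sketch: apply an ordinary Euclidean rescaling in the tangent space at the origin.
def tangent_rescale(t, scale, c):
    origin = manifold.proj(tf.zeros_like(t), c=c)          # assumed origin construction
    t_tan = manifold.to_tangent(t, c=c, base=origin)       # manifold -> tangent space
    t_tan = scale * t_tan                                  # plain Euclidean operation
    return manifold.to_manifold(t_tan, c=c, base=origin)   # tangent space -> manifold
```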
Here we use two fully-connected layers as our model backbone.
```
x = manifold.concat([sparse_embedding, dense_embedding], axis=1, c=curvature)
x = manifold.linear(x, 16, 256, curvature, curvature, tf.nn.elu, 'hidden_layer_1')
x = manifold.linear(x, 256, 32, curvature, curvature, tf.nn.elu, 'hidden_layer_2')
```
Since non-Euclidean geometry can only be expressed through geodesics, we use a Fermi-Dirac decoder to turn the squared distance to the origin into a class probability, and train with cross entropy. With `logits = 1 - distance`, the sigmoid equals 1 / (exp(distance - 1) + 1), i.e. a Fermi-Dirac decoder with r = t = 1.
```
origin = manifold.proj(tf.zeros([32], dtype=manifold.dtype), c=curvature)
distance = tf.squeeze(manifold.distance(x, origin, c=curvature))
loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=1.0 - 1.0*distance))
```
CurvLearn now supports the following optimizers.
- ```curvlearn.optimizers.rsgd``` - Riemannian stochastic gradient optimizer.
- ```curvlearn.optimizers.radagrad``` - Riemannian Adagrad optimizer.
- ```curvlearn.optimizers.radam``` - Riemannian Adam optimizer.
Here we apply riemannian adam optimizer to minimize the loss.
```
from curvlearn.optimizers import RAdam
optimizer = RAdam(learning_rate=learning_rate, manifold=manifold, c=curvature)
train_op = optimizer.minimize(loss)
```
Now a non-Euclidean binary classification model is built successfully.
Let's check the performance!
```
ops = [train_op, curvature, loss] + tf.get_collection(tf.GraphKeys.UPDATE_OPS)
batch_idx = 0
global_init = tf.global_variables_initializer()
local_init = tf.local_variables_initializer()
cp = tf.ConfigProto()
cp.gpu_options.allow_growth = True
with tf.Session(config=cp) as sess:
sess.run([global_init, local_init])
while True:
try:
batch_idx += 1
_, c, loss = sess.run(ops)
if batch_idx % log_steps == 1:
print('No.{} batches, curvature {}, loss {}'.format(batch_idx, c, loss))
except tf.errors.OutOfRangeError:
print('Finish train')
break
```
Since our dataset is generated without any geometry prior, the curvature is trained to be near zero and the space is almost Euclidean.
Check the performance on real datasets ([recommendation](hyperml/README.md), [link prediction](hgcn/README.md), [tree pretrain](tree_pretrain/README.md)) to see the advantages of non-Euclidean geometry.
# Hello nbconvert
Hello World.
Changes are saved in the markdown file as well.
Images are fine, too.

Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Dignissim sodales ut eu sem integer vitae justo eget. Non quam lacus suspendisse faucibus. Integer quis auctor elit sed vulputate mi. Diam volutpat commodo sed egestas egestas fringilla phasellus faucibus scelerisque. Aliquet bibendum enim facilisis gravida neque. Ultrices eros in cursus turpis massa. Velit euismod in pellentesque massa. Duis tristique sollicitudin nibh sit amet commodo. Sagittis vitae et leo duis ut diam quam nulla.
Nulla pellentesque dignissim enim sit amet venenatis urna. Vulputate enim nulla aliquet porttitor lacus luctus accumsan tortor posuere. Eu facilisis sed odio morbi quis commodo odio. Posuere morbi leo urna molestie. Facilisi nullam vehicula ipsum a arcu. Enim ut sem viverra aliquet eget. Massa massa ultricies mi quis. Interdum posuere lorem ipsum dolor sit amet consectetur. Sit amet risus nullam eget. Eget lorem dolor sed viverra ipsum. Leo vel fringilla est ullamcorper eget nulla facilisi etiam. Faucibus nisl tincidunt eget nullam non nisi. Sem et tortor consequat id. Nascetur ridiculus mus mauris vitae ultricies. Sem et tortor consequat id. Tincidunt tortor aliquam nulla facilisi cras fermentum. Id consectetur purus ut faucibus. Magna ac placerat vestibulum lectus mauris ultrices eros in. Pharetra diam sit amet nisl suscipit.
Scelerisque fermentum dui faucibus in ornare quam. Facilisis magna etiam tempor orci eu. Mauris nunc congue nisi vitae suscipit tellus mauris a diam. Sit amet volutpat consequat mauris nunc congue nisi vitae suscipit. Risus sed vulputate odio ut enim blandit volutpat. Tristique nulla aliquet enim tortor at auctor urna nunc. Porta non pulvinar neque laoreet suspendisse interdum consectetur libero id. Ipsum a arcu cursus vitae congue. Arcu bibendum at varius vel pharetra vel turpis nunc. Felis eget velit aliquet sagittis id. Non tellus orci ac auctor augue. Blandit cursus risus at ultrices mi tempus imperdiet nulla. Vitae elementum curabitur vitae nunc sed velit. Scelerisque felis imperdiet proin fermentum leo vel orci porta non. Faucibus a pellentesque sit amet porttitor. Auctor augue mauris augue neque gravida in fermentum et sollicitudin. Nullam vehicula ipsum a arcu cursus vitae congue mauris. Id diam vel quam elementum pulvinar. Ut lectus arcu bibendum at varius vel pharetra. Sed euismod nisi porta lorem mollis aliquam ut.
Sed velit dignissim sodales ut. Porta nibh venenatis cras sed. Euismod nisi porta lorem mollis aliquam. Enim lobortis scelerisque fermentum dui faucibus in ornare quam viverra. Non nisi est sit amet facilisis magna etiam. Nulla aliquet enim tortor at auctor urna nunc id. At auctor urna nunc id cursus metus aliquam eleifend mi. Vestibulum lectus mauris ultrices eros in. Eu feugiat pretium nibh ipsum consequat nisl vel. Etiam non quam lacus suspendisse. Commodo elit at imperdiet dui accumsan sit amet nulla. Odio euismod lacinia at quis risus sed vulputate. Amet nulla facilisi morbi tempus. Sit amet nisl suscipit adipiscing. Dictum varius duis at consectetur. Urna cursus eget nunc scelerisque viverra mauris in aliquam.
```
import pandas as pd
import matplotlib.pyplot as plt
link = "https://en.wikipedia.org/wiki/Belgrade"
tables = pd.read_html(link)
data = tables[2]
data = data.set_index('Municipality', drop=True)
data["Barajevo":"Zvezdara"].plot(kind='pie', y='Population (2011)', figsize=(14, 10));
plt.grid(zorder=0)
plt.legend(loc='center left', bbox_to_anchor=(1.2, 0.5));
data["Barajevo":"Zvezdara"].sort_values(by='Population density (per km2)').plot(
kind='barh', y='Population density (per km2)', figsize=(12, 8), grid=True);
```
# Voyages API Use Cases
## Run this example in [Colab](https://colab.research.google.com/github/SignalOceanSdk/SignalSDK/blob/master/docs/examples/jupyter/VoyagesAPI/VoyagesAPI-UseCases.ipynb).
## Setup
Install the Signal Ocean SDK:
```
pip install signal-ocean
```
Set your subscription key acquired here: https://apis.signalocean.com/profile
```
!pip install signal-ocean
signal_ocean_api_key = '' #replace with your subscription key
```
## Voyages API Use Cases
```
from signal_ocean import Connection
from signal_ocean.voyages import VoyagesAPI
from signal_ocean.voyages import Vessel, VesselFilter
from signal_ocean.voyages import VesselType, VesselTypeFilter
from signal_ocean.voyages import VesselClass, VesselClassFilter
import pandas as pd
import numpy as np
from datetime import date, timedelta
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_theme()
connection = Connection(signal_ocean_api_key)
api = VoyagesAPI(connection)
```
Declare helper functions
```
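# Each helper scans a voyage's event list: the first Load event gives the load area/country,
# and the last Discharge event (first match when reversed) gives the final discharge country.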
def get_voyage_load_area(voyage_events):
return next((e.area_name_level0 for e in voyage_events or [] if e.purpose=='Load'), None)
def get_voyage_discharge_country(voyage_events):
return next((e.country for e in reversed(voyage_events or []) if e.purpose=='Discharge'), None)
def get_voyage_load_country(voyage_events):
return next((e.country for e in voyage_events or [] if e.purpose=='Load'), None)
```
### Get voyages
```
# get vessel class id for vlcc
vessel_class = api.get_vessel_classes(VesselClassFilter('vlcc'))
vlcc_id = vessel_class[0].vessel_class_id
vlcc_id
date_from = date.today() - timedelta(days=180)
voyages = api.get_voyages(vessel_class_id=vlcc_id, date_from=date_from)
voyages = pd.DataFrame(v.__dict__ for v in voyages)
events = pd.DataFrame(e.__dict__ for voyage_events in voyages['events'].dropna() for e in voyage_events)
historical_events = events[events['event_horizon']=='Historical']
voyages['load_area'] = voyages['events'].apply(get_voyage_load_area)
voyages['discharge_country'] = voyages['events'].apply(get_voyage_discharge_country)
voyages['load_country'] = voyages['events'].apply(get_voyage_load_country)
```
### Number of exporting voyages
```
voyages_exports_usg = voyages[(voyages['load_area']=='US Gulf')&(voyages['discharge_country']!='United States')]
voyages_exports_usg.shape[0]
voyages_exports_usg['discharge_country'].value_counts()
```
### Port Delays
```
discharges_china = historical_events[(historical_events['country']=='China')&(historical_events['purpose']=='Discharge')].copy()
discharges_china['duration'] = discharges_china['sailing_date'] - discharges_china['arrival_date']
discharges_china['duration'].describe()
discharges_china['duration_in_hours'] = discharges_china['duration'] / np.timedelta64(1, 'h')
common_discharge_ports_china = discharges_china['port_name'].value_counts().head(8)
common_port_discharges_china = discharges_china[discharges_china['port_name'].isin(common_discharge_ports_china.index)]
sns.catplot(x="port_name", y="duration_in_hours", kind="box", data=common_port_discharges_china, aspect=2);
discharges_china['arrival_month'] = discharges_china['arrival_date'].dt.tz_localize(None).dt.to_period('M').dt.to_timestamp()
sns.lineplot(data=discharges_china, x='arrival_month', y='duration_in_hours')
plt.xticks(rotation=90);
```
### Discharge destinations
```
discharge_destinations_brazil = voyages[voyages['load_country']=='Brazil'].dropna(subset=['discharge_country'])
discharge_destinations_brazil['discharge_country'].value_counts()
sns.displot(discharge_destinations_brazil, x="start_date", hue="discharge_country", aspect=2);
```
### Advanced Voyage Search: Discharge origins
This use-case demonstrates how to utilise the advanced search endpoint to extract historical voyages by `vessel_class_id`, `first_load_arrival_date` and a specific `event_purpose`. The voyages with the provided purpose can then be merged and filtered with a specific `load_area` and `discharge_country` in order to visualize the vessel flows into the specified country.
```
# get vessel class id for vlcc
vessel_class = api.get_vessel_classes(VesselClassFilter('vlcc'))
vlcc_id = vessel_class[0].vessel_class_id
vlcc_id
date_from = date.today() - timedelta(days=60)
load_area = 'Arabian Gulf'
discharge_country = 'Japan'
```
In the following cell we extract the voyages with `event_purpose="Discharge"`, which essentially looks up all the voyages with *at least one discharge event*. Due to the nature of the shipping pipeline, the load events of those voyages are returned by the same call, so a separate load query can be omitted.
```
voyages = api.get_voyages_by_advanced_search(vessel_class_id=vlcc_id, first_load_arrival_date_from=date_from,
event_horizon='Historical', event_purpose='Discharge')
voyages = pd.DataFrame(v.__dict__ for v in voyages)
voyages['load_area'] = voyages['events'].apply(get_voyage_load_area)
voyages['load_country'] = voyages['events'].apply(get_voyage_load_country)
voyages['discharge_country'] = voyages['events'].apply(get_voyage_discharge_country)
voyages_filtered = voyages.loc[(voyages['load_area'] == load_area) & (voyages['discharge_country'] == discharge_country)].reset_index(drop=True)
fig, _ = plt.subplots(figsize=(12, 5))
ax = sns.countplot(x='load_country', data=voyages_filtered)
ax.set_title(f'Vessel Flows ({discharge_country})', fontsize=14)
ax.set_xlabel('Load Countries', fontsize=12)
ax.set_ylabel('Vessel Counts', fontsize=12);
```
# Google Play Store
Can we predict an application's success? How is the number of installations connected with other characteristics of the app?
Let's make a few plots to see how a certain feature affects the installations.
Data comes from [Kaggle](https://www.kaggle.com/lava18/google-play-store-apps).
```
import pandas as pd
from lets_plot import *
LetsPlot.setup_html()
df = pd.read_csv("https://raw.githubusercontent.com/JetBrains/lets-plot-docs/master/data/googleplaystore.csv")
print(df.shape)
df.head(3)
def size_to_bytes(size):
size = size.lower()
if size == 'varies with device' or size == '':
return -1
if 'k' in size:
return int(float(size.split('k')[0]) * 1024)
if 'm' in size:
return int(float(size.split('m')[0]) * 1024 * 1024)
return int(size)
df = df[~df.Type.isna()]
df = df[~df.Reviews.astype(str).str.contains('M')]
df.Reviews = df.Reviews.astype(int)
df.Size = df.Size.astype(str).apply(size_to_bytes).astype(int)
df.Installs = df.Installs.astype(str).str.replace(',', '', regex=False)\
.str.replace('+', '', regex=False).astype(int)
df.Price = df.Price.astype(str).str.replace('$', '', regex=False).astype(float)
print(df.shape)
df.head(3)
cat_df = df.groupby('Category').Installs.mean().to_frame().sort_values(by='Installs', ascending=False).reset_index()
ggplot() + \
geom_bar(aes(x='Category', y='Installs', fill='Category'), \
data=cat_df, stat='identity', sampling=sampling_pick(cat_df.shape[0])) + \
scale_fill_brewer(type='qual', palette='Dark2') + \
xlab('category') + ylab('mean installations') + \
ggsize(600, 450) + \
ggtitle('Installations by Category') + \
theme(panel_grid_major_x='blank', legend_position='none')
```
Here we can see that some categories are much more popular than others.
```
gen_df = df.groupby('Genres').Installs.mean().to_frame()\
.sort_values(by='Installs', ascending=False).reset_index()
ggplot() + \
geom_bar(aes(x='Genres', y='Installs', fill='Genres'), \
data=gen_df, stat='identity', sampling=sampling_pick(gen_df.shape[0]), \
tooltips=layer_tooltips().line('genre|@Genres')\
.format('@Installs', '.0f')\
.line('mean installations|@Installs')) + \
scale_fill_brewer(type='qual', palette='Dark2') + \
ylab('mean installations') + \
ggsize(600, 300) + \
ggtitle('Installations by Genre') + \
theme(panel_grid_major_x='blank', legend_position='none', \
axis_title_x='blank', axis_text_x='blank', axis_ticks_x='blank')
```
We see a big gap in popularity between different genres.
```
ggplot() + \
geom_bin2d(aes(x='Installs', y='Rating', fill='..count..'), \
data=df, color='white', size=1) + \
scale_fill_gradient(low='#e0ecf4', high='#8856a7') + \
scale_x_log10(name='installations') + \
ylim(1, 5) + ylab('rating') + \
ggsize(600, 300) + \
ggtitle('Connection Between Installations and Rating')
```
The rating and number of installations are more or less positively correlated. At least an app rated below 3 will not be popular.
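To put a rough number on this impression (a quick check that is not part of the original notebook), a rank correlation can be computed directly from the dataframe:

```
# Spearman rank correlation is robust to the heavily skewed install counts.
print(df[['Installs', 'Rating']].corr(method='spearman'))
```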
```
ggplot() + \
geom_jitter(aes(x='Reviews', y='Installs', fill='Type'), \
data=df, shape=21, color='black', alpha=.1) + \
geom_smooth(aes(x='Reviews', y='Installs', group='Type', color='Type'), \
data=df, method='loess', deg=2) + \
scale_x_log10(name='reviews') + scale_y_log10(name='installations') + \
ggsize(600, 450) + \
ggtitle('Connection Between Installations and Reviews')
```
The plot shows that the number of installations and the number of reviews are practically the same thing.
The smoothing curves are far enough from each other, so it's better to separate free applications from the paid ones.
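A quick numerical check of both observations (again, not part of the original notebook) is to correlate the two quantities on the log scale, separately for free and paid apps:

```
import numpy as np

# Drop apps with zero reviews or installs before taking logarithms.
nonzero = df[(df.Reviews > 0) & (df.Installs > 0)]
print(nonzero.groupby('Type').apply(
    lambda g: np.corrcoef(np.log10(g.Reviews), np.log10(g.Installs))[0, 1]
))
```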
```
ggplot() + \
geom_bin2d(aes(x='Reviews', y='Size', fill='..count..'), \
data=df, color='white', size=1) + \
scale_fill_gradient(low='#e5f5f9', high='#2ca25f') + \
scale_x_log10(name='reviews') + scale_y_log10(name='size') + \
ggsize(600, 300) + \
ggtitle('Connection Between Reviews and Size')
```
It looks like we might not be interested in apps that are lighter than 1 Mb. For the others, there is only a minor correlation.
```
ggplot() + \
geom_bin2d(aes(x='Reviews', y='Price', fill='..count..'), \
data=df[df.Type == 'Paid'], color='white', size=1) + \
scale_fill_gradient(low='#ffeda0', high='#f03b20') + \
scale_x_log10(name='reviews') + scale_y_log10(name='price') + \
ggsize(600, 300) + \
ggtitle('Connection Between Price and Reviews')
```
I see nothing but chaos here. In any case, paid apps are not very common: most apps are either free of charge or rely on other sources of monetization.
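As a quick check on how rare paid apps actually are, reusing the cleaned `df`:
```
print(df.Type.value_counts(normalize=True))
```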
# Plot histograms
```
import os
import math
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from IPython.display import display, HTML
%matplotlib inline
def parse_if_number(s):
try: return float(s)
except: return True if s=="true" else False if s=="false" else s if s else None
def parse_ndarray(s):
return np.fromstring(s, sep=' ') if s else None
def get_file_name(name):
return name.replace(':', '-')
```
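A quick look at what these converters do, on sample strings only (not tied to the exported CSV):
```
print(parse_if_number('3.14'), parse_if_number('true'), parse_if_number(''))  # 3.14 True None
print(parse_ndarray('1 2 3'))  # [1. 2. 3.]
```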
## Config
```
inputFile = 'data.csv'
repetitionsCount = -1 # -1 = auto-detect
factors = ['R', 'T', 'm', 'D']
# Plots
histBinNum = 30 # Histograms
histCenter = True # Center distribution
plotSize = (10, 10)
plotStyle = 'seaborn-whitegrid'
# Save
saveFigures = False
# Filter scalars
scalarsFilter = ['Floorplan.userCount']
# Filter histograms
histFilter = ['Floorplan.copies:histogram', 'Floorplan.collisions:histogram', 'Floorplan.totalCollisions:histogram', 'Floorplan.msgsPerSlot:histogram']
histNames = [
('Floorplan.copies:histogram', 'Number of copies received by each user in an hear window', 1),
('Floorplan.collisions:histogram', 'Number of collisions received by the users', 1),
('Floorplan.totalCollisions:histogram', 'Number of colliding messages received by the users in each slot', 1),
('Floorplan.msgsPerSlot:histogram', 'Number of messages sent in each slot', 1),
]
```
## Load scalars
```
df = pd.read_csv('exported_data/' + inputFile, converters = {
'attrvalue': parse_if_number,
'binedges': parse_ndarray,
'binvalues': parse_ndarray,
'vectime': parse_ndarray,
'vecvalue': parse_ndarray,
})
if repetitionsCount <= 0: # auto-detect
repetitionsCount = int(df[df.attrname == 'repetition']['attrvalue'].max()) + 1
print('Repetitions:', repetitionsCount)
scalars = df[(df.type == 'scalar') | ((df.type == 'itervar') & (df.attrname != 'TO')) | ((df.type == 'param') & (df.attrname == 'Floorplan.userCount')) | ((df.type == 'runattr') & (df.attrname == 'repetition'))]
scalars = scalars.assign(qname = scalars.attrname.combine_first(scalars.module + '.' + scalars.name))
for index, row in scalars[scalars.type == 'itervar'].iterrows():
val = scalars.loc[index, 'attrvalue']
if isinstance(val, str) and not all(c.isdigit() for c in val):
scalars.loc[index, 'attrvalue'] = eval(val)
scalars.value = scalars.value.combine_first(scalars.attrvalue.astype('float64'))
scalars_wide = scalars.pivot_table(index=['run'], columns='qname', values='value')
scalars_wide.sort_values([*factors, 'repetition'], inplace=True)
count = 0
for index in scalars_wide.index:
config = count // repetitionsCount
scalars_wide.loc[index, 'config'] = config
count += 1
scalars_wide = scalars_wide[['config', 'repetition', *factors, *scalarsFilter]]
# Computed
factorsCount = len(factors)
configsCount = len(scalars_wide)//repetitionsCount
print('Configs:', configsCount)
totalSims = configsCount*repetitionsCount
display(HTML("<style>div.output_scroll { height: auto; max-height: 48em; }</style>"))
pd.set_option('display.max_rows', totalSims)
pd.set_option('display.max_columns', 100)
if saveFigures:
os.makedirs('figures', exist_ok=True)
```
## Load histograms
```
histograms = df[df.type == 'histogram']
histograms = histograms.assign(qname = histograms.module + '.' + histograms.name)
histograms = histograms[histograms.qname.isin(histFilter)]
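# Label each histogram row with the config/repetition of its run (matched on the run id),
# and record the bin size and the min/max bin edge of each individual histogram.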
for index in scalars_wide.index:
r = index
cfg = scalars_wide.loc[index, 'config']
rep = scalars_wide.loc[index, 'repetition']
histograms.loc[histograms.run == r, 'config'] = cfg
histograms.loc[histograms.run == r, 'repetition'] = rep
for histname, _, _ in histNames:
histograms.loc[(histograms.run == r) & (histograms.qname == histname), 'binsize'] = histograms.loc[(histograms.run == r) & (histograms.qname == histname), 'binedges'].values[0][1] - histograms.loc[(histograms.run == r) & (histograms.qname == histname), 'binedges'].values[0][0]
histograms.loc[(histograms.run == r) & (histograms.qname == histname), 'binmin'] = histograms.loc[(histograms.run == r) & (histograms.qname == histname), 'binedges'].values[0].min()
histograms.loc[(histograms.run == r) & (histograms.qname == histname), 'binmax'] = histograms.loc[(histograms.run == r) & (histograms.qname == histname), 'binedges'].values[0].max()
histograms.sort_values(['config', 'repetition', 'qname'], inplace=True)
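# Per configuration: derive a common bin grid from the LCM of the individual bin sizes,
# together with the overall min/max bin edge across repetitions.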
for cfg in range(0, configsCount):
for histname, _, _ in histNames:
histograms.loc[(histograms.config == cfg) & (histograms.qname == histname), 'binsizelcm'] = np.lcm.reduce(list(map(int, histograms.loc[(histograms.config == cfg) & (histograms.qname == histname), 'binsize'].values.tolist())))
histograms.loc[(histograms.config == cfg) & (histograms.qname == histname), 'binminall'] = histograms.loc[(histograms.config == cfg) & (histograms.qname == histname), 'binmin'].min()
histograms.loc[(histograms.config == cfg) & (histograms.qname == histname), 'binmaxall'] = histograms.loc[(histograms.config == cfg) & (histograms.qname == histname), 'binmax'].max()
histograms = histograms[['config', 'repetition', 'qname', 'binmin', 'binmax', 'binsize', 'binedges', 'binvalues', 'binminall', 'binmaxall', 'binsizelcm']]
```
## Compute means and ranges
```
def get_values_for_bin(hist, low, high):
edges = hist['binedges'].values[0]
values = hist['binvalues'].values[0]
inbin = []
lowidx = 0
highidx = 0
for edge in edges:
if edge < low:
lowidx += 1
if edge < high:
highidx += 1
continue
break
minval = math.inf
maxval = -math.inf
for i in range(lowidx, highidx):
if i > len(values) - 1:
break
inbin.append(values[i])
if values[i] < minval:
minval = values[i]
if values[i] > maxval:
maxval = values[i]
if len(inbin) == 0:
return (minval, 0, maxval)
return (minval, sum(inbin) / len(inbin), maxval)
cols = ['config']
for histname, _, _ in histNames:
name = histname[histname.index('.')+1:histname.index(':')]
cols.append(name + 'Bins')
cols.append(name + 'MeanValues')
cols.append(name + 'LowValues')
cols.append(name + 'HighValues')
data = []
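# For every configuration, aggregate each histogram onto the common coarse grid,
# keeping the per-bin mean, minimum and maximum across repetitions.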
for cfg in range(0, configsCount):
curdata = [cfg]
for histname, _, stepMultiplier in histNames:
binmin = int(histograms.loc[(histograms.config == cfg) & (histograms.qname == histname), 'binminall'].values[0])
binstep = int(stepMultiplier) * int(histograms.loc[(histograms.config == cfg) & (histograms.qname == histname), 'binsizelcm'].values[0])
binmax = 1 + int(histograms.loc[(histograms.config == cfg) & (histograms.qname == histname), 'binmaxall'].values[0])
bins = np.arange(binmin, binmax, binstep)
totalSize = (binmax - binmin - 1)//binstep
meanValues = np.zeros(totalSize)
lowValues = np.full(totalSize, math.inf)
highValues = np.full(totalSize, -math.inf)
for rep in range(0, repetitionsCount):
curHist = histograms[(histograms.config == cfg) & (histograms.qname == histname) & (histograms.repetition == rep)]
num = 0
for binlow, binhigh in zip(range(binmin, binmax - 1, binstep), range(binmin + binstep, binmax + binstep, binstep)):
values = get_values_for_bin(curHist, binlow, binhigh)
if lowValues[num] > values[0]:
lowValues[num] = values[0]
meanValues[num] += values[1]
if highValues[num] < values[2]:
highValues[num] = values[2]
num += 1
for i in range(0, len(meanValues)):
meanValues[i] = meanValues[i] / repetitionsCount
curdata.append(bins)
curdata.append(meanValues)
curdata.append(lowValues)
curdata.append(highValues)
data.append(curdata)
plotdf = pd.DataFrame.from_records(data, columns=cols, index='config')
```
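As a sanity check, `get_values_for_bin` can be exercised on a tiny synthetic one-row frame (hypothetical edges and values, not simulation output); it returns the minimum, mean and maximum of the original bin values that fall in the requested range:
```
demo = pd.DataFrame([{'binedges': np.array([0., 1., 2., 3., 4.]),
                      'binvalues': np.array([10., 20., 30., 40.])}])
print(get_values_for_bin(demo, 0, 2))  # (10.0, 15.0, 20.0)
```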
## Plots
```
for cfg, hist in plotdf.iterrows():
print('Config ' + str(cfg))
display(scalars_wide.loc[(scalars_wide.repetition == 0) & (scalars_wide.config == cfg)][['config', *factors]])
for histName, histDesc, _ in histNames:
name = histName[histName.index('.')+1:histName.index(':')]
bins = hist[name + 'Bins']
means = hist[name + 'MeanValues']
lows = hist[name + 'LowValues']
highs = hist[name + 'HighValues']
bincenters = 0.5*(bins[1:]+bins[:-1])
ranges = [x for x in zip(lows, highs)]
ranges = np.array(ranges).T
plt.bar(bincenters, means, width=1, yerr=ranges, error_kw={'capsize': 3})
plt.title('Histogram for the ' + histDesc)
plt.xlabel(name)
if saveFigures:
fig = plt.gcf()
fig.savefig('figures/' + get_file_name(histName) + '-' + str(cfg) + '-perfplot.png')
plt.show()
print('#######################')
print()
```
### Rerun this notebook
To rerun this notebook, you can:
- just rerun the simulations with the corresponding configuration: `./simulate.sh -s LowDensity -c LowDensity2kr` (you will get slightly different results)
- download our datasets from `https://drive.google.com/file/d/1ZFRV2DecoTvax9lngEsuPPw8Cz1DXvLc/view?usp=sharing` (login with UNIPI institutional account)
- use our seed to rerun the simulations. Add `seed-set = ${runnumber}6965` to the configuration
# Tweet filtering
* ~~From the DB containing the September 2017 tweets, only those between the 19th and the 26th are retrieved; those tweets are then filtered by keywords to build a new DB of relevant tweets within the period of interest, and the IDs of the resulting tweets are saved to a file. (INCOMPLETE)~~
* The filter that extracts the tweets from the 19th to the 26th is done with a script executed on the MongoDB server (`mongo.js`), which saves the result in a new DB. (DONE)
* The tweets in the new DB are filtered using keywords and inserted into another DB. (DONE)
* The new DB is used to obtain the paradigmatic relations of the keywords, making the list more complete so that additional results missed by a first pass of the filter can be extracted. (TO DO)
# Dependencies
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import pymongo  # needed for pymongo.errors in the connection check below
from pymongo import MongoClient
from nltk.tokenize import TweetTokenizer
```
# Connecting to MongoDB
```
try:
client = MongoClient()
print("Connected to MongoDB\n")
except pymongo.errors.ConnectionFailure as e:
print("Could not connect to MongoDB",e)
```
# Select DB and collection
The DB with the tweets from September 19 to 26 (the output of the `mongo.js` filter).
```
db = client.sept19_26_db
tweets = db.sept19_26_collection
```
# Number of tweets
Once the September tweets have been filtered to the 19th-26th, we have:
```
print("Tweets entre el 19 y 26 septiembre: ",tweets.find().count())
```
# New DB
A new DB is created for the tweets that contain keywords within the September 19-26 period.
```
db_new = client.sept19_26_keywords_db
tweets_new = db_new.sept19_26_keywords_collection
```
# Querying the DB
Searching for tweets within a date range in the DB (kept as an unexecuted example query):
```
fecha = ["Tue Sep 19 00:00:00 +0000 2017","Wed Sep 20 14:59:58 +0000 2017"]
query = {
    'created_at' :
    {
        "$gte":"Wed Sep 20 14:59:58 +0000 2017"
    }
}
```
# Keyword filter list and tweet tokenization
* `filtro`: list of keywords to search for inside each tweet
* `IDs`: set for storing, by ID, the tweets that contain keywords, so the DB can later be queried by tweet ID
* `tknzr`: tweet tokenizer
```
filtro = ["sismo","#sismo","#alertasísmica","#alertasismica", "albergue", "acopio", "víveres", "viveres",
"alerta", "sísmica","sismica", "ayuda", "#verificado19S","19s","derrumbe","colecta","#fuerzamexico",
"#fuerzaméxico","#acopio"]
IDs = set()
tknzr = TweetTokenizer(preserve_case=False, # Convert to lowercase
                       reduce_len=True, # Reduce repeated characters
                       strip_handles=False) # Keep @usernames
```
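A quick look at how the tokenizer normalizes a sample tweet (hypothetical text, not taken from the DB): it lowercases the tokens and keeps hashtags and @usernames as single tokens, which is why the keyword filter below compares whole tokens against `filtro`.
```
print(tknzr.tokenize("Se necesita ACOPIO de víveres #FuerzaMéxico @cruzroja"))
```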
# Retrieving the relevant tweets
```
for i in tweets.find():
    if "retweeted_status" in i: # If it is a retweet...
tmp = tknzr.tokenize(i["retweeted_status"]['text'])
        for key in filtro: # Look for keywords inside the tweet
if key in tmp:
#print(i["created_at"])
#IDs.add(i["_id"])
                try: # Insert the keyword-matching tweet into the new DB
insertar = tweets_new.insert_one(i)
except Exception as e:
#print("Error:",e)
pass
    else: # If it is not a retweet...
tmp = tknzr.tokenize(i['text'])
for key in filtro:
if key in tmp:
#print(i["created_at"])
#IDs.add(i["_id"])
try:
insertar = tweets_new.insert_one(i)
except Exception as e:
#print("Error:",e)
pass
```
# Filter by keywords
```
print("Tweets con palabras clave del 19 al 26 septiembre: ",tweets_new.find().count())
```
# Write IDs to a file
Kept for reference; `IDs` is not populated in the current version of the filter, so this is left as an unexecuted snippet.
```
if True:
    IDs_file = open('IDs.dat', 'w')
    for item in IDs:
        IDs_file.write(str(item)+"\n")
    IDs_file.close()
```
# VacationPy
----
#### Note
* Keep an eye on your API usage. Use https://developers.google.com/maps/reporting/gmp-reporting as reference for how to monitor your usage and billing.
* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
```
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import gmaps
import os
# Import API key
from api_keys import g_key
gmaps.configure(api_key=g_key)
```
### Store Part I results into DataFrame
* Load the csv exported in Part I to a DataFrame
```
weather_one = "../output_data/weather.csv"
weather_one_df = pd.read_csv(weather_one)
weather_one_df.head()
```
### Humidity Heatmap
* Configure gmaps.
* Use the Lat and Lng as locations and Humidity as the weight.
* Add Heatmap layer to map.
```
# Store latitude and longitude in locations
lat_lon = weather_one_df[['Lat', 'Lng']]
# Fill NaN values and convert to float
humid = weather_one_df["Humidity"]
lat_lon.head()
# Plot Heatmap
figure_layout = {
'width': '900px',
'height': '600px',
'border': '1px solid black',
'padding': '1px',
'margin': '0 auto 0 auto'
}
# Use the gmaps.figure passing a zoom_level of 2 and a center point so the map displays properly at
# a readable size
fig = gmaps.figure(layout=figure_layout,zoom_level=2,center=(15,25))
# Create heat layer
heat_layer = gmaps.heatmap_layer(lat_lon, weights=humid,
dissipating=False, max_intensity=100,
point_radius=1.5)
# Add heat layer
fig.add_layer(heat_layer)
# Display figure
fig
```
### Create new DataFrame fitting weather criteria
* Narrow down the cities to fit weather conditions.
* Drop any rows with null values.
```
# Create criteria for the perfect vacation climate
# A max temperature lower than 80 degrees but higher than 70.
crit_temperature = (weather_one_df.Temperature < 80) & (weather_one_df.Temperature > 70)
crit_Clouds = weather_one_df.Clouds == 0
final_criteria = crit_temperature & crit_Clouds
# Use boolean indexing to filter the weather_df dataframe
ideal_weather_df = weather_one_df[final_criteria]
ideal_weather_df = ideal_weather_df.dropna()
ideal_weather_df = ideal_weather_df.reset_index()
ideal_weather_df.head(10)
```
### Hotel Map
* Store into variable named `hotel_df`.
* Add a "Hotel Name" column to the DataFrame.
* Set parameters to search for hotels within 5000 meters.
* Hit the Google Places API for each city's coordinates.
* Store the first Hotel result into the DataFrame.
* Plot markers on top of the heatmap.
```
hotel_df = ideal_weather_df
hotel_df['Hotel Name'] = ""
hotel_df.head()
# params dictionary to update each iteration
params = {
"radius": 5000,
"types": "lodging",
"keyword": "Hotel",
"key": g_key
}
# Use the lat/lng we recovered to look up nearby hotels
for index, row in hotel_df.iterrows():
# get lat, lng from hotel_df
lat = row["Lat"]
lng = row["Lng"]
# change location each iteration while leaving original params in place
params["location"] = f"{lat},{lng}"
# Use the search term: "Hotel" and our lat/lng
base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"
# make request and print url
name_address = requests.get(base_url, params=params)
# print the name_address url, avoid doing for public github repos in order to avoid exposing key
#print(name_address.url)
# convert to json
name_address = name_address.json()
#print(json.dumps(name_address, indent=4, sort_keys=True))
# Since some data may be missing we incorporate a try-except to skip any that are missing a data point.
try:
hotel_df.loc[index, "Hotel Name"] = name_address["results"][0]["name"]
#hotel_df.loc[index, "Airport Address"] = name_address["results"][0]["vicinity"]
#hotel_df.loc[index, "Airport Rating"] = name_address["results"][0]["rating"]
except (KeyError, IndexError):
print("Missing field/result... skipping.")
hotel_df.info()
# NOTE: Do not change any of the code in this cell
# Using the template add the hotel marks to the heatmap
info_box_template = """
<dl>
<dt>Name</dt><dd>{Hotel Name}</dd>
<dt>City</dt><dd>{City}</dd>
<dt>Country</dt><dd>{Country}</dd>
</dl>
"""
# Store the DataFrame Row
# NOTE: be sure to update with your DataFrame name
hotel_info = [info_box_template.format(**row) for index, row in hotel_df.iterrows()]
locations = hotel_df[["Lat", "Lng"]]
# Add marker layer on top of the heat map
figure_layout = {
'width': '900px',
'height': '600px',
'border': '1px solid black',
'padding': '1px',
'margin': '0 auto 0 auto'
}
fig = gmaps.figure(layout=figure_layout,zoom_level=2,center=(15,25))
# Create hotel symbol layer
hotel_layer = gmaps.marker_layer(
locations,info_box_content=[info_box_template.format(**row) for index, row in hotel_df.iterrows()]
)
# Add layer
fig.add_layer(heat_layer)
fig.add_layer(hotel_layer)
# Display figure
fig
```
```
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
import model
from datetime import datetime
from datetime import timedelta
sns.set()
df = pd.read_csv('/home/husein/space/Stock-Prediction-Comparison/dataset/GOOG-year.csv')
date_ori = pd.to_datetime(df.iloc[:, 0]).tolist()
df.head()
minmax = MinMaxScaler().fit(df.iloc[:, 1:].astype('float32'))
df_log = minmax.transform(df.iloc[:, 1:].astype('float32'))
df_log = pd.DataFrame(df_log)
df_log.head()
timestamp = 5
epoch = 500
future_day = 50
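# timestamp: length of each input window in days; epoch: number of training passes;
# future_day: number of days to forecast recursively beyond the end of the data.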
def embed_seq(inputs, vocab_size=None, embed_dim=None, zero_pad=False, scale=False):
lookup_table = tf.get_variable('lookup_table', dtype=tf.float32, shape=[vocab_size, embed_dim])
if zero_pad:
lookup_table = tf.concat((tf.zeros([1, embed_dim]), lookup_table[1:, :]), axis=0)
outputs = tf.nn.embedding_lookup(lookup_table, inputs)
if scale:
outputs = outputs * (embed_dim ** 0.5)
return outputs
def learned_positional_encoding(inputs, embed_dim, zero_pad=False, scale=False):
T = inputs.get_shape().as_list()[1]
outputs = tf.range(T)
outputs = tf.expand_dims(outputs, 0)
outputs = tf.tile(outputs, [tf.shape(inputs)[0], 1])
return embed_seq(outputs, T, embed_dim, zero_pad=zero_pad, scale=scale)
def layer_norm(inputs, epsilon=1e-8):
mean, variance = tf.nn.moments(inputs, [-1], keep_dims=True)
normalized = (inputs - mean) / (tf.sqrt(variance + epsilon))
params_shape = inputs.get_shape()[-1:]
gamma = tf.get_variable('gamma', params_shape, tf.float32, tf.ones_initializer())
beta = tf.get_variable('beta', params_shape, tf.float32, tf.zeros_initializer())
return gamma * normalized + beta
def pointwise_feedforward(inputs, num_units=[None, None], activation=None):
outputs = tf.layers.conv1d(inputs, num_units[0], kernel_size=1, activation=activation)
outputs = tf.layers.conv1d(outputs, num_units[1], kernel_size=1, activation=None)
outputs += inputs
outputs = layer_norm(outputs)
return outputs
class Model:
def __init__(self, dimension_input, dimension_output, seq_len,
learning_rate, num_heads=8, attn_windows=range(1, 6)):
self.size_layer = dimension_input
self.num_heads = num_heads
self.seq_len = seq_len
self.X = tf.placeholder(tf.float32, [None, seq_len, dimension_input])
self.Y = tf.placeholder(tf.float32, [None, dimension_output])
feed = self.X
for i, win_size in enumerate(attn_windows):
with tf.variable_scope('attn_masked_window_%d' % win_size):
feed = self.multihead_attn(feed, self.window_mask(win_size))
feed += learned_positional_encoding(feed, dimension_input)
with tf.variable_scope('multihead'):
feed = self.multihead_attn(feed, None)
with tf.variable_scope('pointwise'):
feed = pointwise_feedforward(feed, num_units=[4*dimension_input,
dimension_input], activation=tf.nn.relu)
self.logits = tf.layers.dense(feed, dimension_output)[:,-1]
self.cost = tf.reduce_mean(tf.square(self.Y - self.logits))
self.optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(self.cost)
self.correct_pred = tf.equal(tf.argmax(self.logits, 1), tf.argmax(self.Y, 1))
self.accuracy = tf.reduce_mean(tf.cast(self.correct_pred, tf.float32))
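    # Multi-head scaled dot-product self-attention with a residual connection and layer
    # normalization; when `masks` is given, positions where the mask is 0 are excluded.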
def multihead_attn(self, inputs, masks):
T_q = T_k = inputs.get_shape().as_list()[1]
Q_K_V = tf.layers.dense(inputs, 3*self.size_layer, tf.nn.relu)
Q, K, V = tf.split(Q_K_V, 3, -1)
Q_ = tf.concat(tf.split(Q, self.num_heads, axis=2), axis=0)
K_ = tf.concat(tf.split(K, self.num_heads, axis=2), axis=0)
V_ = tf.concat(tf.split(V, self.num_heads, axis=2), axis=0)
align = tf.matmul(Q_, tf.transpose(K_, [0,2,1]))
align = align / np.sqrt(K_.get_shape().as_list()[-1])
if masks is not None:
paddings = tf.fill(tf.shape(align), float('-inf'))
align = tf.where(tf.equal(masks, 0), paddings, align)
align = tf.nn.softmax(align)
outputs = tf.matmul(align, V_)
outputs = tf.concat(tf.split(outputs, self.num_heads, axis=0), axis=2)
outputs += inputs
return layer_norm(outputs)
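    # Build a 0/1 band mask of half-width h_w around each position, tiled for every
    # batch element and attention head.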
def window_mask(self, h_w):
masks = np.zeros([self.seq_len, self.seq_len])
for i in range(self.seq_len):
if i < h_w:
masks[i, :i+h_w+1] = 1.
elif i > self.seq_len - h_w - 1:
masks[i, i-h_w:] = 1.
else:
masks[i, i-h_w:i+h_w+1] = 1.
masks = tf.convert_to_tensor(masks)
return tf.tile(tf.expand_dims(masks,0), [tf.shape(self.X)[0]*self.num_heads, 1, 1])
tf.reset_default_graph()
modelnn = Model(df_log.shape[1], df_log.shape[1], timestamp, 0.01,num_heads=df_log.shape[1])
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
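# Training: slide over consecutive non-overlapping windows of length `timestamp`;
# the target for each window is the same window shifted forward by one step.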
for i in range(epoch):
total_loss = 0
for k in range(0, (df_log.shape[0] // timestamp) * timestamp, timestamp):
batch_x = np.expand_dims(df_log.iloc[k: k + timestamp, :].values, axis = 0)
batch_y = df_log.iloc[k + 1: k + timestamp + 1, :].values
_, loss = sess.run([modelnn.optimizer, modelnn.cost],
feed_dict={modelnn.X: batch_x, modelnn.Y: batch_y})
total_loss += loss
total_loss /= (df_log.shape[0] // timestamp)
if (i + 1) % 100 == 0:
print('epoch:', i + 1, 'avg loss:', total_loss)
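# Forecasting: first predict over the known range window-by-window, then feed the
# model's own predictions back in to extend the series `future_day` days beyond the data.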
output_predict = np.zeros((df_log.shape[0] + future_day, df_log.shape[1]))
output_predict[0, :] = df_log.iloc[0, :]
upper_b = (df_log.shape[0] // timestamp) * timestamp
for k in range(0, (df_log.shape[0] // timestamp) * timestamp, timestamp):
try:
out_logits = sess.run(modelnn.logits, feed_dict = {modelnn.X:np.expand_dims(df_log.iloc[k: k + timestamp, :], axis = 0)})
output_predict[k + 1: k + timestamp + 1, :] = out_logits
except:
out_logits = sess.run(modelnn.logits, feed_dict = {modelnn.X:np.expand_dims(df_log.iloc[-timestamp:, :], axis = 0)})
output_predict[df_log.shape[0]-timestamp:df_log.shape[0],:] = out_logits
df_log.loc[df_log.shape[0]] = out_logits[-1, :]
date_ori.append(date_ori[-1]+timedelta(days=1))
for i in range(future_day - 1):
out_logits = sess.run(modelnn.logits, feed_dict = {modelnn.X:np.expand_dims(df_log.iloc[-timestamp:, :], axis = 0)})
output_predict[df_log.shape[0], :] = out_logits[-1, :]
df_log.loc[df_log.shape[0]] = out_logits[-1, :]
date_ori.append(date_ori[-1]+timedelta(days=1))
df_log = minmax.inverse_transform(output_predict)
date_ori=pd.Series(date_ori).dt.strftime(date_format='%Y-%m-%d').tolist()
current_palette = sns.color_palette("Paired", 12)
fig = plt.figure(figsize = (15,10))
ax = plt.subplot(111)
x_range_original = np.arange(df.shape[0])
x_range_future = np.arange(df_log.shape[0])
ax.plot(x_range_original, df.iloc[:, 1], label = 'true Open', color = current_palette[0])
ax.plot(x_range_future, df_log[:, 0], label = 'predict Open', color = current_palette[1])
ax.plot(x_range_original, df.iloc[:, 2], label = 'true High', color = current_palette[2])
ax.plot(x_range_future, df_log[:, 1], label = 'predict High', color = current_palette[3])
ax.plot(x_range_original, df.iloc[:, 3], label = 'true Low', color = current_palette[4])
ax.plot(x_range_future, df_log[:, 2], label = 'predict Low', color = current_palette[5])
ax.plot(x_range_original, df.iloc[:, 4], label = 'true Close', color = current_palette[6])
ax.plot(x_range_future, df_log[:, 3], label = 'predict Close', color = current_palette[7])
ax.plot(x_range_original, df.iloc[:, 5], label = 'true Adj Close', color = current_palette[8])
ax.plot(x_range_future, df_log[:, 4], label = 'predict Adj Close', color = current_palette[9])
box = ax.get_position()
ax.set_position([box.x0, box.y0 + box.height * 0.1, box.width, box.height * 0.9])
ax.legend(loc = 'upper center', bbox_to_anchor= (0.5, -0.05), fancybox = True, shadow = True, ncol = 5)
plt.title('overlap stock market')
plt.xticks(x_range_future[::30], date_ori[::30])
plt.show()
fig = plt.figure(figsize = (20,8))
plt.subplot(1, 2, 1)
plt.plot(x_range_original, df.iloc[:, 1], label = 'true Open', color = current_palette[0])
plt.plot(x_range_original, df.iloc[:, 2], label = 'true High', color = current_palette[2])
plt.plot(x_range_original, df.iloc[:, 3], label = 'true Low', color = current_palette[4])
plt.plot(x_range_original, df.iloc[:, 4], label = 'true Close', color = current_palette[6])
plt.plot(x_range_original, df.iloc[:, 5], label = 'true Adj Close', color = current_palette[8])
plt.xticks(x_range_original[::60], df.iloc[:, 0].tolist()[::60])
plt.legend()
plt.title('true market')
plt.subplot(1, 2, 2)
plt.plot(x_range_future, df_log[:, 0], label = 'predict Open', color = current_palette[1])
plt.plot(x_range_future, df_log[:, 1], label = 'predict High', color = current_palette[3])
plt.plot(x_range_future, df_log[:, 2], label = 'predict Low', color = current_palette[5])
plt.plot(x_range_future, df_log[:, 3], label = 'predict Close', color = current_palette[7])
plt.plot(x_range_future, df_log[:, 4], label = 'predict Adj Close', color = current_palette[9])
plt.xticks(x_range_future[::60], date_ori[::60])
plt.legend()
plt.title('predict market')
plt.show()
fig = plt.figure(figsize = (15,10))
ax = plt.subplot(111)
ax.plot(x_range_original, df.iloc[:, -1], label = 'true Volume')
ax.plot(x_range_future, df_log[:, -1], label = 'predict Volume')
box = ax.get_position()
ax.set_position([box.x0, box.y0 + box.height * 0.1, box.width, box.height * 0.9])
ax.legend(loc = 'upper center', bbox_to_anchor= (0.5, -0.05), fancybox = True, shadow = True, ncol = 5)
plt.xticks(x_range_future[::30], date_ori[::30])
plt.title('overlap market volume')
plt.show()
fig = plt.figure(figsize = (20,8))
plt.subplot(1, 2, 1)
plt.plot(x_range_original, df.iloc[:, -1], label = 'true Volume')
plt.xticks(x_range_original[::60], df.iloc[:, 0].tolist()[::60])
plt.legend()
plt.title('true market volume')
plt.subplot(1, 2, 2)
plt.plot(x_range_future, df_log[:, -1], label = 'predict Volume')
plt.xticks(x_range_future[::60], date_ori[::60])
plt.legend()
plt.title('predict market volume')
plt.show()
```
```
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from subprocess import check_output
#print(check_output(["ls", "../input"]).decode("utf8"))
import sklearn
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RandomizedSearchCV
from sklearn import neighbors
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split,StratifiedKFold, ShuffleSplit, cross_val_score, train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import GradientBoostingRegressor
loc0 = (r'UntitledspreadsheetSheet1.csv')
#loc1 = (r'housing_test.csv')
train = pd.read_csv(loc0, error_bad_lines=False)
#test = pd.read_csv(loc1, error_bad_lines=False)
print(train.shape)
train.head(10)
train.drop('GSTIN', axis = 1, inplace = True)
correlation = train.corr()
plt.figure(figsize=(10,10))
sns.heatmap(correlation, vmax=1, square=True,annot=True,cmap='viridis')
plt.title('Correlation between different features')
# Preparing data to be fed to a predictive model
train_Y = train['GST_Fraud']
train = train.drop('GST_Fraud', axis = 1)
train.head()
from sklearn.preprocessing import LabelEncoder
le=LabelEncoder()
encoded = le.fit_transform(train['Firm'])
#le.inverse_transform(test_Y)
train['Firm'] = encoded
le=LabelEncoder()
encoded = le.fit_transform(train['Field'])
train['Field'] = encoded
le=LabelEncoder()
encoded = le.fit_transform(train['DD/MM/YY'])
train['DD/MM/YY'] = encoded
train
X_train, X_test, y_train, y_test = train_test_split(train, train_Y, test_size = 0.2,
random_state = 42)
#CVtrain_X, CVtest_X = pd.get_dummies(CVtrain_X), pd.get_dummies(CVtest_X)
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score
def evaluate_models(number_of_est, maximum_depth, models, X_train, X_test):
'''Function to evaluate the performance of a tree based model (based on R2 score), over a grid of
number of estimators and maximum depth. Function takes in choice of model, array of n_estimators,
array of max_depth and training and testing sets'''
for model_choice in models:
for n_est in number_of_est:
for max_d in maximum_depth:
model = model_choice(n_estimators=n_est, max_depth=max_d, random_state = 42)
model.fit(X_train, y_train)
CVpred = model.predict(X_test)
print(CVpred)
r2 = r2_score(y_test, CVpred)
f1 = f1_score(y_test, CVpred, average='weighted')
print(model_choice,',Estimators:',n_est,',Max_Depth:',max_d,',R2:', r2,',f1:', f1)
models = [ GradientBoostingClassifier ]
number_of_est = [3,4,5,20, 30, 40, 50, 60]
#number_of_est = [450,400,300,200, 130, 80, 50, 60]
maximum_depth = [2,3,4,5,8,10]
#maximum_depth = [2,5, 10, 15, 20, 25,30,40,70,100,150]
evaluate_models(number_of_est, maximum_depth, models, X_train, X_test)
train_X = train
train_X.drop('GST_Fraud_amount(%)', axis = 1, inplace = True)
train_X.drop('fake invoice', axis = 1, inplace = True)
train
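# Fit a plain logistic-regression baseline on the same train/test split for comparison.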
from sklearn.linear_model import LogisticRegression
models = [ GradientBoostingClassifier , LogisticRegression]
clf = LogisticRegression(random_state=0).fit(X_train, y_train)
h=clf.predict(X_test)
clf.score(X_test, y_test)
clf
h
y_test
X_test
```
# Data analysis project
Let's have a look at the level of tourism in Denmark. Statistics Denmark has records of the number of visiting tourists from 2014:1 to 2019:1, so we can see how tourism in Denmark has evolved over the years.
# Preparing the data
## Import packages and load the data
We start by importing the relevant packages for our analysis
```
# Imporing packages
import wbdata
import pandas as pd
import pandas_datareader
import datetime
import matplotlib.pyplot as plt
import pydst
import calendar
from statsmodels.tsa.seasonal import seasonal_decompose
```
We then use the API for data extraction from Statistics Denmark. The data will be imported in English from the table called 'TURIST'.
```
Dst = pydst.Dst(lang='en')
Dst.get_data(table_id = 'TURIST');
```
For the preparation of the data extraction, the following code provides an overview of the variables to choose from. The relevant categories within each variable are then chosen for the final list of variables "var_list" to be included in the analysis. The final data extract will be stored in a data frame called "df".
```
# Display the different categories for each variable in the dataset
indk_vars = Dst.get_variables(table_id='TURIST')
indk_vars['values'][2][:50];
# Specific categories for each variable is chosen as a dictionary
var_list = {'OVERNATF':['100'],\
'OMRÅDE':['000','084','085','083','082','081'],\
'NATION1':['*'],\
'PERIODE':['01','02','03','04','05','06','07','08','09','10','11','12'],\
'TID':['*']}
# The raw data frame is imported and a sample is given below
df = Dst.get_data(table_id = 'TURIST', variables=var_list);
df.sample(3)
```
## Setup the final data frame
The extracted data frame 'df' now contains the basic data, but some corrections have to be made. The variables in the data frame can be described as below:
- **OVERNATF:** The type of accommodation used by tourists in Denmark. Since we do not care how they stay in the country, we only use the 'All types' category.
- **OMRÅDE:** The different administrative areas, where the tourists are staying throughout their visit.
- **NATION1:** The nationality of the tourists.
- **PERIODE:** The period in which the observations have been recorded. We only care about the months, so the rest of the categories have been dropped.
- **TID:** The time of the observations, i.e. the year in which the observations were recorded. Statistics Denmark only has observations for 2014-2019.
- **INDHOLD:** The total number of observed tourists from a specific nation, at a given area, at a specific year and month.
Some observations are empty, so the number of tourists is given as '..'. This is a problem when we have to do some calculations. The empty observations are then replaced with zero and the variable is formatted into a number (integer).
A timeseries would also be nice to have for a visual look at the evolution in the number of tourists. For this we have to make a date variable. The datetime function is useful here, but we have to translate the name of each month into a number to use the function.
```
# Replace empty observations and format the variable
df['INDHOLD'] = df['INDHOLD'].replace('..', '0').astype(str).astype(int)
# Dictionary for the number of the month
dic = {'January':'01', 'February':'02', 'March':'03', 'April':'04', 'May':'05', 'June':'06',\
'July':'07', 'August':'08', 'September':'09', 'October':'10', 'November':'11', 'December':'12'}
# Making a variable for the date, using datetime
df['Month'] = df['PERIODE'].replace(dic)
df['Month'] = df['Month'].astype(str).astype(int)
df['day'] = 1
df['year'] = df['TID']
df['Date'] = pd.to_datetime(df[['year', 'Month', 'day']])
df = df[(df['Date'].dt.year > 2013)]
df = df[(df['Date'].dt.year < 2019)]
# Restructuring the data frame
df = df.set_index('Date')
df = df[['OMRÅDE','NATION1','year','Month','PERIODE','day','INDHOLD']]
df.sample(3)
```
## Splitting the data frame
The restructured data frame contains observations both for 'All Denmark' and for each of the 5 regions. For the timeseries we only want the observations for 'All Denmark' and only for foreign tourists, which means we have to remove the observations for Danish tourists. An easy way to do this is by only including the value 'World outside Denmark' for nationality. The following code creates the data frame with the total number of tourists for each month and year, grouped by their nationality.
```
# Total number of observations each month of the year
AD = df[df['OMRÅDE'] == 'All Denmark']
AD = AD[AD.NATION1.isin(['World outside Denmark'])]
```
# Overview of the level of tourism in Denmark
## Total number of tourists from 2014-2018
Just to get a quick overview of the number of tourists in Denmark over the years, a timeseries is drawn. This is done from the calculated data frame AD, which sums the number of tourists grouped by each date.
```
# Sum number of tourists for each date
AD_sum = AD.groupby(['Date'])['INDHOLD'].sum()
# Setup the timeseries
ax = AD_sum.plot()
ax.set_ylabel('Number of visiting tourists');
```
It is clear, and also expected, that the number of tourists is extremely seasonal. In the low seasons around the end and the beginning of each year (winter), the number of tourists is under 1 million, while it is around 5.5 million in the high seasons (summer). Over the five years, the overall level also seems to have increased, which can be seen by comparing the low/high peaks over the years. To make it easier to analyse the overall level, it helps to decompose the seasonal effect in the timeseries.
The imposed model is multiplicative, so we get the seasonal effect and the residual in percentage.
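To be explicit about what the decomposition returns, the multiplicative model splits the observed series into

$$y_t = T_t \cdot S_t \cdot R_t$$

where $T_t$ is the trend, $S_t$ the seasonal component and $R_t$ the residual, so the seasonal and residual panels are read as multiplicative factors around the trend (a value of 1.0 means no deviation).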
```
# The decomposition of the time series
result = seasonal_decompose(AD_sum, model='multiplicative')
result.plot()
plt.show()
```
The results show that a clear linear trend is present in the timeseries. Denmark has either become a more popular travel destination or people are traveling more in general. The seasonal effect is clear in the decomposition, and from the residual it appears that most deviations from the expected level happen in the first months of the year.
In order to get a better feel for the seasonal effects, the aggregated mean level of each month over the five-year period is calculated below. This gives a nice view of how popular each month is for visiting Denmark. The code below groups the data frame by month and takes a mean over the five years.
```
# Aggregated mean level of each month of the five-year period
ax = AD.groupby('Month')['INDHOLD'].mean().plot.bar()
# Labels and title of the bar chart is constructed
ax.set_ylabel('Number of visiting tourists');
ax.set_xlabel('');
ax.set_xticklabels(calendar.month_name[1:13])
ax.set_title('Agg. mean level of tourists each month');
```
July and August are the clear winners as the months with the highest number of tourists, due to people visiting Denmark during their summer holidays. The number of tourists visiting Denmark in each of these months is almost the same as the total population of Denmark. The number of foreign tourists each month is almost never under 1 million, which is quite high for a country as small as Denmark. This means Denmark is a well-visited country relative to its size.
# Characteristics for the tourists
Now that we know Denmark is well visited, it could be interesting to see which nationalities visit Denmark most frequently. By calculating the average number of tourists per year for each nationality, it is possible to make a top 10 of the most frequently visiting nationalities over the last five years.
```
# Only choose obs for all of Denmark
reg = df[df['OMRÅDE'] == 'All Denmark']
# Only choose obs based on a nationality
reg = reg[~reg.NATION1.isin(['Total','Denmark','World outside Denmark'])]
# The sum of each year for each nationality used for the avg. over years
reg2 = reg.groupby(['NATION1','year'])['INDHOLD'].sum().to_frame()
reg3 = reg2.groupby(['NATION1'])['INDHOLD'].mean().to_frame()
reg3.sort_values(by = ['INDHOLD'], ascending=[False]).head(10)
```
This top 10 clearly shows how dominant the German tourists are in Denmark, but since we only have a table with the numbers above, it can be difficult to compare the level of tourism across the nationalities. Let's try to make a nice-looking bar chart to ease the comparison.
```
# List and new dataframe of our top 10
lande = ['Germany','Norway','Sweden','Netherlands','United Kingdom','USA','Italy','France','Switzerland','Spain']
reg_top = reg[reg.NATION1.isin(lande)]
# The new dataframe is used to calculate the mean number of tourists over the five years
ax = reg_top.sort_values(by = ['INDHOLD'], ascending=[False]).groupby(['NATION1','year'])['INDHOLD'].sum().to_frame()
axm = ax.groupby(['NATION1'])['INDHOLD'].mean().to_frame()
# A bar chart of the top 10 most visiting nationalities for better comparison
axm['INDHOLD'].sort_values(ascending=[True]).plot.barh();
plt.gca().set_ylabel('')
plt.gca().set_xlabel('Avg. visiting tourists each year')
plt.show()
```
The average number of visiting Germans each year is 14.5 million, compared to Norway in second place with only 2.5 million. This tells us that most of the visiting tourists are from Germany. Second and third place go to Norway and Sweden, so Denmark is mostly visited by its neighbors.
The number of tourists from Germany is so high that it can be hard to compare the other nations. To solve this problem, Germany is removed below to ease the comparison of the other nations. The conclusion is that the countries closest to Denmark are the most frequently visiting nations. The USA is more present in the tourism figures than nations from Southern Europe, but it is also a relatively more populated nation. Italy, France, Switzerland and Spain are all at the same level of tourism in Denmark.
```
# Drop observation from Germany
axm2 = axm.drop(axm.index[1])
# Sort again and plot the bar chart
axm2['INDHOLD'].sort_values(ascending=[True]).plot.barh();
plt.gca().set_ylabel('')
plt.gca().set_xlabel('Avg. visiting tourists each year')
plt.show()
```
Statistics Denmark has also provided data for how many of the tourists are staying in each of the five regions in Denmark. Since the data is available, let’s have a look at where the tourists are staying while visiting Denmark.
# Where the tourists live
It is possible to draw a map of Denmark from a shapefile (.shp). By providing the observations with the relevant geometric values, geopandas can draw a map showing, with color, which of the five regions in Denmark is most visited by tourists.
```
# Importing packages and shapefile
import geopandas as gpd
fp = "C:/Users/bt_27/Google Drev/Skole/10. Semester/Introduction to Python/4. Projekter/Projekt 1/Geo/DNK_adm1.shp"
map_df = gpd.read_file(fp)
# Dictionary is constructed in order to rename the regions for the merge
regs = {'Hovedstaden':'Region Hovedstaden','Midtjylland':'Region Midtjylland','Nordjylland':'Region Nordjylland',\
'Sjælland':'Region Sjælland','Syddanmark':'Region Syddanmark'}
# In order to merge, the variables have to be named the same
map_df['OMRÅDE'] = map_df['NAME_1'].replace(regs)
map_df1 = map_df[['OMRÅDE','geometry']]
# This is the dataframe, ready to be merged on the observations
map_df1
```
Now that the coordinates for the regions on the map are ready, they can be merged with the original dataframe.
```
# The original dataframe with obs for the regions
lo = df[df['OMRÅDE'] != 'All Denmark']
lo = lo[~lo.NATION1.isin(['Total','Denmark','World outside Denmark'])]
# The observations are aggregated by each region in Denmark
tmp = lo.groupby(['OMRÅDE'])['INDHOLD'].sum().to_frame()
# The observations are merged with the shapefile data on the regions
maps = pd.merge(map_df1, tmp, on='OMRÅDE')
# create the figure and the axes for the plot
fig, ax = plt.subplots(1, figsize=(10, 6))
maps.plot(column='INDHOLD', cmap='Reds', linewidth=0.8, ax=ax, edgecolor='0.8');
```
The map above shows the total number of tourists for each region over the five-year period. The conclusion is that most tourists visit 'Region Hovedstaden', where Copenhagen must be the reason. Bornholm, which is also part of 'Region Hovedstaden', is an attractive holiday location in the summer as well. The second most visited region is 'Region Syddanmark'. This is probably tourists just crossing the border between Germany and Denmark. Maybe they only have a short visit, or they might be resting after a long trip before traveling further up in Denmark. A closer look at the distribution of nationalities in each region could explain some of these results.
The following pie chart is an interactive way of examining where the tourists of each nationality are staying while in Denmark.
```
# We start with a function generating the pie chart.
def interactive_figure(fokus):
# Choose nationality for the pie chart
lo1 = lo[lo.NATION1.isin([fokus])]
# Sum the observations grouped by the regions and plot the chart
pie_sources = lo1.groupby(['OMRÅDE'])['INDHOLD'].sum().plot(kind='pie',autopct='%1.1f%%')
plt.gca().set_ylabel('')
plt.title('Share of tourists from the chosen country by region in Denmark')
plt.show()
import ipywidgets as widgets
# The implementation of a widget makes it possible to change which country we want to investigate
widgets.interact(interactive_figure,
fokus=widgets.Dropdown(description="$Land$", options=lande, value='Germany'),);
```
The results show that tourists from Southern Europe (France, Spain, Italy) and from countries whose visitors have to fly to Denmark (USA, United Kingdom) have the highest percentages in Region Hovedstaden. It might be that most tourists come to Denmark in order to visit Copenhagen, and if you fly to Kastrup you do not have to stay overnight anywhere in Denmark other than Region Hovedstaden. European countries closer to Denmark, like the Netherlands and Switzerland, are more likely to visit Region Syddanmark. There might be interesting things to see in Region Syddanmark, but since most of the tourists are traveling to Copenhagen, it could be that tourists from countries closer to Denmark are traveling by car. If you then have to stop and rest on your journey, it might be in Region Syddanmark on the way to Copenhagen.
A relatively large percentage of tourists from Sweden and Norway also stay in Region Syddanmark. This might be the reverse of the situation for people traveling to Copenhagen by car: if Swedish or Norwegian tourists are traveling south into Europe, they might have to rest in Region Syddanmark on their way.
Region Syddanmark does not have to be just a resting place for people traveling across Denmark. Odense is a major city in Denmark, with attractions related to H.C. Andersen, which could be a magnet for tourists.
The final **conclusion** is that Denmark is a well-visited country, mostly by its neighbors, and during the summer the number of people staying in Denmark almost doubles.
# Node classification with Relational Graph Convolutional Network (RGCN)
<table><tr><td>Run the latest release of this notebook:</td><td><a href="https://mybinder.org/v2/gh/stellargraph/stellargraph/master?urlpath=lab/tree/demos/node-classification/rgcn-node-classification.ipynb" alt="Open In Binder" target="_parent"><img src="https://mybinder.org/badge_logo.svg"/></a></td><td><a href="https://colab.research.google.com/github/stellargraph/stellargraph/blob/master/demos/node-classification/rgcn-node-classification.ipynb" alt="Open In Colab" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg"/></a></td></tr></table>
This example demonstrates how to use an RGCN [1] on the AIFB dataset with stellargraph.
[1] Modeling Relational Data with Graph Convolutional Networks. Thomas N. Kipf, Michael Schlichtkrull (2017). https://arxiv.org/pdf/1703.06103.pdf
First we load the required libraries.
```
# install StellarGraph if running on Google Colab
import sys
if 'google.colab' in sys.modules:
%pip install -q stellargraph[demos]==1.3.0b
# verify that we're using the correct version of StellarGraph for this notebook
import stellargraph as sg
try:
sg.utils.validate_notebook_version("1.3.0b")
except AttributeError:
raise ValueError(
f"This notebook requires StellarGraph version 1.3.0b, but a different version {sg.__version__} is installed. Please see <https://github.com/stellargraph/stellargraph/issues/1172>."
) from None
from rdflib.extras.external_graph_libs import *
from rdflib import Graph, URIRef, Literal
import networkx as nx
from networkx.classes.function import info
import stellargraph as sg
from stellargraph.mapper import RelationalFullBatchNodeGenerator
from stellargraph.layer import RGCN
import numpy as np
import matplotlib.pyplot as plt
import os
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model
import sklearn
from sklearn import model_selection
from collections import Counter
from stellargraph import datasets
from IPython.display import display, HTML
import matplotlib.pyplot as plt
%matplotlib inline
```
## Loading the data
(See [the "Loading from Pandas" demo](../basics/loading-pandas.ipynb) for details on how data can be loaded.)
```
dataset = datasets.AIFB()
display(HTML(dataset.description))
G, affiliation = dataset.load()
print(G.info())
```
The relationship 'affiliation' indicates whether a researcher is affiliated with a research group e.g. (researcher, research group, affiliation). This is used to create the one-hot labels in the `affiliation` DataFrame. These relationships are not included in `G` (nor is its inverse relationship 'employs'). The idea here is to test whether we can recover a 'missing' relationship.
## Input preparation
The nodes don't natively have features, so they've been replaced with one-hot indicators to allow the model to learn from the graph structure. We're only training on the people with affiliations, so we split that into train and test splits.
```
train_targets, test_targets = model_selection.train_test_split(
affiliation, train_size=0.8, test_size=None
)
generator = RelationalFullBatchNodeGenerator(G, sparse=True)
train_gen = generator.flow(train_targets.index, targets=train_targets)
test_gen = generator.flow(test_targets.index, targets=test_targets)
```
## RGCN model creation and training
We use stellargraph to create an RGCN object. This creates a stack of relational graph convolutional layers. We add a softmax layer to transform the features created by RGCN into class predictions and create a Keras model. Then we train the model on the stellargraph generators.
Each RGCN layer creates a weight matrix for each relationship in the graph. If `num_bases==0` these weight matrices are completely independent. If `num_bases!=0` each weight matrix is a different linear combination of the same basis matrices. This introduces parameter sharing and reduces the number of the parameters in the model. See the paper for more details.
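Concretely, with basis decomposition each relation-specific weight matrix $W_r$ in a layer is assembled as a linear combination of $B$ (= `num_bases`) shared basis matrices $V_b$, with only the coefficients $a_{rb}$ depending on the relation (see [1] for details):

$$W_r = \sum_{b=1}^{B} a_{rb} V_b$$

With `num_bases=20` as below, every relation therefore shares the same 20 basis matrices per layer and only learns its own 20 mixing coefficients.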
```
rgcn = RGCN(
layer_sizes=[32, 32],
activations=["relu", "relu"],
generator=generator,
bias=True,
num_bases=20,
dropout=0.5,
)
x_in, x_out = rgcn.in_out_tensors()
predictions = Dense(train_targets.shape[-1], activation="softmax")(x_out)
model = Model(inputs=x_in, outputs=predictions)
model.compile(
loss="categorical_crossentropy",
optimizer=keras.optimizers.Adam(0.01),
metrics=["acc"],
)
history = model.fit(train_gen, validation_data=test_gen, epochs=20)
sg.utils.plot_history(history)
```
Now we assess the accuracy of our trained model on the test set - it does pretty well on this example dataset!
```
test_metrics = model.evaluate(test_gen)
print("\nTest Set Metrics:")
for name, val in zip(model.metrics_names, test_metrics):
print("\t{}: {:0.4f}".format(name, val))
```
## Node embeddings
We evaluate node embeddings as the activations of the output of the last graph convolution layer in the RGCN layer stack and visualise them, colouring nodes by their true research group (affiliation). We expect to see nice clusters of researchers in the node embedding space, with researchers from the same group belonging to the same cluster.
To calculate the node embeddings rather than the class predictions, we create a new model with the same inputs as we used previously (`x_in`), but now the output is the embeddings `x_out` rather than the predicted class. Additionally, note that the weights trained previously are kept in the new model.
```
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
# get embeddings for all people nodes
all_gen = generator.flow(affiliation.index, targets=affiliation)
embedding_model = Model(inputs=x_in, outputs=x_out)
emb = embedding_model.predict(all_gen)
X = emb.squeeze(0)
y = affiliation.idxmax(axis="columns").astype("category")
if X.shape[1] > 2:
transform = TSNE
trans = transform(n_components=2)
emb_transformed = pd.DataFrame(trans.fit_transform(X), index=affiliation.index)
emb_transformed["label"] = y
else:
emb_transformed = pd.DataFrame(X, index=affiliation.index)
emb_transformed = emb_transformed.rename(columns={"0": 0, "1": 1})
emb_transformed["label"] = y
alpha = 0.7
fig, ax = plt.subplots(figsize=(7, 7))
ax.scatter(
emb_transformed[0],
emb_transformed[1],
c=emb_transformed["label"].cat.codes,
cmap="jet",
alpha=alpha,
)
ax.set(aspect="equal", xlabel="$X_1$", ylabel="$X_2$")
plt.title(
"{} visualization of RGCN embeddings for AIFB dataset".format(transform.__name__)
)
plt.show()
```
Aside from a slight overlap, the classes are well separated despite using only 2 dimensions. This indicates that our model is performing well at clustering the researchers into the right groups.
# WeatherPy
----
#### Note
* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
```
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import time
from scipy.stats import linregress
from pprint import pprint
# Import API key
from api_keys import weather_api_key
# Incorporated citipy to determine city based on latitude and longitude
from citipy import citipy
# Output File (CSV)
output_data_file = "output_data/cities.csv"
url = "http://api.openweathermap.org/data/2.5/weather?"
# Range of latitudes and longitudes
lat_range = (-90, 90)
lng_range = (-180, 180)
```
## Generate Cities List
```
# List for holding lat_lngs and cities
lat_lngs = []
cities = []
# Create a set of random lat and lng combinations
lats = np.random.uniform(lat_range[0], lat_range[1], size=1500)
lngs = np.random.uniform(lng_range[0], lng_range[1], size=1500)
lat_lngs = zip(lats, lngs)
# Identify nearest city for each lat, lng combination
for lat_lng in lat_lngs:
city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name
# If the city is unique, then add it to a our cities list
if city not in cities:
cities.append(city)
# Print the city count to confirm sufficient count
len(cities)
```
### Perform API Calls
* Perform a weather check on each city using a series of successive API calls.
* Include a print log of each city as it's being processed (with the city number and city name).
```
cityname=[]
lat=[]
lng=[]
max_temp=[]
humidity=[]
cloudiness=[]
wind_speed=[]
country=[]
date=[]
units = "imperial"
set_count = 1
record_count = 1
for i, city in enumerate(cities):
if i % 50 == 0 and i >= 50:
set_count = set_count +1
record_count = 1
print(f"Processing Record {record_count} of Set {set_count} | {city}")
record_count = record_count +1
    query_url = f"{url}appid={weather_api_key}&units={units}&q={city}"
post_response = requests.get(query_url).json()
try:
cityname.append(post_response['name'])
lat.append(post_response['coord']['lat'])
lng.append(post_response['coord']['lon'])
max_temp.append(post_response['main']['temp_max'])
humidity.append(post_response['main']['humidity'])
cloudiness.append(post_response['clouds']['all'])
wind_speed.append(post_response['wind']['speed'])
country.append(post_response['sys']['country'])
date.append(time.ctime(post_response['dt']))
except KeyError:
print("City not found...Skipping...")
```
### Convert Raw Data to DataFrame
* Export the city data into a .csv.
* Display the DataFrame
```
weather_dt ={"City":cityname,"Lat":lat,"Lng":lng,"Max Temp":max_temp,"Humidity":humidity,"Cloudiness":cloudiness,
"Wind Speed":wind_speed,"Country":country,"Date":date}
weather_df =pd.DataFrame(weather_dt)
weather_df.to_csv('output_data/cities.csv')
weather_df.head()
weather_df.describe()
```
## Inspect the data and remove the cities where the humidity > 100%.
----
Skip this step if there are no cities that have humidity > 100%.
```
# Get the indices of cities that have humidity over 100%.
# Make a new DataFrame equal to the city data to drop all humidity outliers by index.
# Passing "inplace=False" will make a copy of the city_data DataFrame, which we call "clean_city_data".
```
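One way to carry out the step described in the comments above is sketched below; the name `clean_city_data` comes from the comments and is not used elsewhere in this notebook.

```
# Get the indices of cities that have humidity over 100%
humid_indices = weather_df[weather_df["Humidity"] > 100].index

# Drop the outliers into a new DataFrame, leaving weather_df itself untouched
clean_city_data = weather_df.drop(humid_indices, inplace=False)
clean_city_data.describe()
```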
## Plotting the Data
* Use proper labeling of the plots using plot titles (including date of analysis) and axes labels.
* Save the plotted figures as .pngs.
## Latitude vs. Temperature Plot
```
plt.scatter(weather_df["Lat"], weather_df["Max Temp"], marker="o")
plt.title("Max Temperature vs City Latitude")
plt.ylabel("Temperature (F)")
plt.xlabel("Latitude")
plt.grid(True)
plt.savefig("output_data/MaxTemperaturevsCityLatitude.png")
plt.show()
```
The above graph shows each city's maximum temperature in relation to the city's latitude.
## Latitude vs. Humidity Plot
```
plt.scatter(weather_df["Lat"], weather_df["Humidity"], marker="o")
plt.title("Humidity vs City Latitude")
plt.ylabel("Humidity (%)")
plt.xlabel("Latitude")
plt.grid(True)
plt.savefig("output_data/HumidityvsCityLatitude.png")
plt.show()
```
The above graph shows each city's humidity in relation to the city's latitude.
## Latitude vs. Cloudiness Plot
```
plt.scatter(weather_df["Lat"], weather_df["Cloudiness"], marker="o")
plt.title("Cloudiness vs City Latitude")
plt.ylabel("Cloudiness (%)")
plt.xlabel("Latitude")
plt.grid(True)
plt.savefig("output_data/CloudinessvsCityLatitude.png")
plt.show()
```
The above graph shows each city's cloudiness in relation to the city's latitude.
## Latitude vs. Wind Speed Plot
```
plt.scatter(weather_df["Lat"], weather_df["Wind Speed"], marker="o")
plt.title("Wind Speed vs City Latitude")
plt.ylabel("Wind Speed (mph)")
plt.xlabel("Latitude")
plt.grid(True)
plt.savefig("output_data/WindSpeedvsCityLatitude.png")
plt.show()
```
The above graph shows each city's wind speed in relation to the city's latitude.
## Linear Regression
```
north_weather =weather_df.loc[weather_df["Lat"] >= 0]
south_weather =weather_df.loc[weather_df["Lat"] < 0]
```
#### Northern Hemisphere - Max Temp vs. Latitude Linear Regression
```
(slope, intercept, rvalue, pvalue, stderr) = linregress(north_weather["Lat"], north_weather["Max Temp"])
regress_values = north_weather["Lat"] * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x +" + str(round(intercept,2))
line_eq
plt.scatter(north_weather["Lat"],north_weather["Max Temp"])
plt.plot(north_weather["Lat"],regress_values,"r-")
plt.xlabel('Latitude')
plt.ylabel('Temperature (F)')
plt.annotate(line_eq, (5,5), fontsize=15,color="red")
print(f"The r-value is: {rvalue**2}")
plt.savefig("output_data/NMaxTempvsLatitude.png")
plt.show()
```
#### Southern Hemisphere - Max Temp vs. Latitude Linear Regression
```
(slope, intercept, rvalue, pvalue, stderr) = linregress(south_weather["Lat"], south_weather["Max Temp"])
regress_values = south_weather["Lat"] * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x +" + str(round(intercept,2))
line_eq
plt.scatter(south_weather["Lat"],south_weather["Max Temp"])
plt.plot(south_weather["Lat"],regress_values,"r-")
plt.xlabel('Latitude')
plt.ylabel('Temperature (F)')
plt.annotate(line_eq, (5,5), fontsize=15,color="red")
print(f"The r-value is: {rvalue**2}")
plt.savefig("output_data/SMaxTempvsLatitude.png")
plt.show()
```
#### Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression
```
(slope, intercept, rvalue, pvalue, stderr) = linregress(north_weather["Lat"], north_weather["Humidity"])
regress_values = north_weather["Lat"] * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x +" + str(round(intercept,2))
line_eq
plt.scatter(north_weather["Lat"],north_weather["Humidity"])
plt.plot(north_weather["Lat"],regress_values,"r-")
plt.xlabel('Latitude')
plt.ylabel('Humidity (%)')
plt.annotate(line_eq, (5,5), fontsize=15,color="red")
print(f"The r-value is: {rvalue**2}")
plt.savefig("output_data/NHumidityvsLatitude.png")
plt.show()
```
#### Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression
```
(slope, intercept, rvalue, pvalue, stderr) = linregress(south_weather["Lat"], south_weather["Humidity"])
regress_values = south_weather["Lat"] * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x +" + str(round(intercept,2))
line_eq
plt.scatter(south_weather["Lat"],south_weather["Humidity"])
plt.plot(south_weather["Lat"],regress_values,"r-")
plt.xlabel('Latitude')
plt.ylabel('Humidity (%)')
plt.annotate(line_eq, (5,5), fontsize=15,color="red")
print(f"The r-value is: {rvalue**2}")
plt.savefig("output_data/SHumidityvsLatitude.png")
plt.show()
```
#### Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
```
(slope, intercept, rvalue, pvalue, stderr) = linregress(north_weather["Lat"], north_weather["Cloudiness"])
regress_values = north_weather["Lat"] * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x +" + str(round(intercept,2))
line_eq
plt.scatter(north_weather["Lat"],north_weather["Cloudiness"])
plt.plot(north_weather["Lat"],regress_values,"r-")
plt.xlabel('Latitude')
plt.ylabel('Cloudiness (%)')
plt.annotate(line_eq, (5,5), fontsize=15,color="red")
print(f"The r-value is: {rvalue**2}")
plt.savefig("output_data/NCloudinessvsLatitude.png")
plt.show()
```
#### Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
```
(slope, intercept, rvalue, pvalue, stderr) = linregress(south_weather["Lat"], south_weather["Cloudiness"])
regress_values = south_weather["Lat"] * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x +" + str(round(intercept,2))
line_eq
plt.scatter(south_weather["Lat"],south_weather["Cloudiness"])
plt.plot(south_weather["Lat"],regress_values,"r-")
plt.xlabel('Latitude')
plt.ylabel('Cloudiness (%)')
plt.annotate(line_eq, (5,5), fontsize=15,color="red")
print(f"The r-value is: {rvalue**2}")
plt.savefig("output_data/SCloudinessvsLatitude.png")
plt.show()
```
#### Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
```
(slope, intercept, rvalue, pvalue, stderr) = linregress(north_weather["Lat"], north_weather["Wind Speed"])
regress_values = north_weather["Lat"] * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x +" + str(round(intercept,2))
line_eq
plt.scatter(north_weather["Lat"],north_weather["Wind Speed"])
plt.plot(north_weather["Lat"],regress_values,"r-")
plt.xlabel('Latitude')
plt.ylabel('Wind Speed (mph)')
plt.annotate(line_eq, (5,5), fontsize=15,color="red")
print(f"The r-value is: {rvalue**2}")
plt.savefig("output_data/NWindSpeedvsLatitude.png")
plt.show()
```
#### Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
```
(slope, intercept, rvalue, pvalue, stderr) = linregress(south_weather["Lat"], south_weather["Wind Speed"])
regress_values = south_weather["Lat"] * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x +" + str(round(intercept,2))
line_eq
plt.scatter(south_weather["Lat"],south_weather["Wind Speed"])
plt.plot(south_weather["Lat"],regress_values,"r-")
plt.xlabel('Latitude')
plt.ylabel('Wind Speed (mph)')
plt.annotate(line_eq, (5,5), fontsize=15,color="red")
print(f"The r-value is: {rvalue**2}")
plt.savefig("output_data/SWindSpeedvsLatitude.png")
plt.show()
```
# Gaussian Processes
A demonstration of how to sample from, and fit to, a Gaussian Process.
If a function $f(x)$ is drawn from a Gaussian process
$$f(x) \sim \mathcal{GP}(m(x)=0,k(x,x'))$$
then a finite subset of function values $\mathbf{f}=(f(x_1),f(x_2),\dots,f(x_n))^T$ are distributed such that
$$\mathbf{f}\sim \mathcal{N}(0,\Sigma)$$
where $\Sigma_{ij}=k(x_i,x_j)$
Based on lectures from Machine Learning Summer School, Cambridge 2009, see http://videolectures.net/mlss09uk_rasmussen_gp/
Author: Juvid Aryaman
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.lines as mlines
import utls
utls.reset_plots()
%matplotlib inline
```
## Sample from a Gaussian Process
```
def cov_matrix_function(x1,x2,l):
"""Use a squared exponential covariance with a fixed length scale
:param x1: A double, parameter of the covariance function
:param x2: A double, parameter of the covariance function
:param l: A double, hyperparameter of the GP determining the length scale over which the correlation between neighbouring points decays
Returns: Squared exponential covariance function
"""
return np.exp(-(x1-x2)*(x1-x2)/l)
D = 90 # number of points along x where we will evaluate the GP. D = dimension of the cov matrix
x = np.linspace(-5,5,D)
ndraws = 5 # number of functions to draw from GP
cmap = plt.cm.jet
def sample_from_gp(l):
"""
Sample from a Gaussian Process
:param l: The length scale of the squared exponential GP
Returns: A numpy array of length (D) as a draw from the GP
"""
sigma = np.zeros((D,D))
for i in range(D):
for j in range(D):
sigma[i,j] = cov_matrix_function(x[i],x[j],l)
return sigma, np.random.multivariate_normal(np.zeros(D),sigma) # sample from the GP
def add_GP_draws_to_plot(ax, l):
"""Add a number of samples from a Gaussian process to a plot
:param ax: A AxesSubplot object, the axes to plot on
:param l: The length scale of the squared exponential GP
"""
for k in range(ndraws):
sigma, y = sample_from_gp(l)
col = cmap(int(round((k+1)/float(ndraws)*(cmap.N))))
ax.plot(x,y,'-',alpha=0.5,color=col, linewidth = 2)
ax.set_xlabel('Input, $x$')
ax.set_ylabel('Output, $f(x)$')
ax.set_title('$l={}$'.format(l),fontsize=20)
fig, axs = plt.subplots(1,3,figsize=(3*5,5))
axs = axs.ravel()
add_GP_draws_to_plot(axs[0],0.1)
add_GP_draws_to_plot(axs[1],1)
add_GP_draws_to_plot(axs[2],10)
plt.tight_layout()
```
Each panel shows 5 draws from a different Gaussian process. All of the panels use a covariance function of the same form, namely a squared exponential covariance function:
$$k(x,x')=\exp\left(-\frac{1}{l}(x-x')^2\right)$$
The function has a *hyperparameter* $l$ which determines the length scale over which the correlation between neighbouring points decays. Here we show what happens as $l$ is increased: the curvature of each function reduces. A large $l$ means that $k(x,x')$ reduces slowly with $x$ for some fixed $x'$, so neighbouring points have a high correlation, and therefore the sampled function $f(x)$ changes slowly with $x$.
Each colored line is a single sample from a Gaussian process. Each panel has a fixed $\Sigma_{i,j}$. However, every time we draw from the multivariate Gaussian $\mathcal{N}(0,\Sigma)$, we get a different vector $\mathbf{f}$, and therefore a different shaped curve.
Notice that we only evaluate the Gaussian process at `D` different points. If we wanted to evaluate the Gaussian process everywhere in $x$, we would need `D` to become infinity (which is impossible!). It is in this sense that we can consider a Gaussian process as a generalisation of a multivariate Gaussian distribution to infinitely many variables, because $\Sigma$ would need to be an $(\infty,\infty)$ matrix for us to evaluate $f(x)$ everywhere.
## Bayesian Inference with Gaussian Processes
One of the great things about Gaussian processes is that we can do Bayesian inference with them analytically (i.e. we can write down the posterior distribution, and the posterior predictive distribution, in terms of the data mathematically without needing to resort to expensive Monte Carlo algorithms)
The problem setting is that we have some data $\mathcal{D}=(\mathbf{x},\mathbf{y})$ and we want to make a prediction of the value of $y^*$ at some value of $x^*$ where we have no data. We do not know the functional form of $\mathbf{y}$, so we will use a Gaussian process.
We model the data as having a Gaussian likelihood
$$\mathbf{y}|\mathbf{x},f(x),M \sim \mathcal{N}(\mathbf{f},\sigma_{\text{noise}}^2)$$
where $M$ is our choice of model (namely a Gaussian process, with its associated hyperparameters) and $\sigma_{\text{noise}}$ is the noise in our data.
We then use a Gaussian process prior
$$f(x)|M\sim \mathcal{GP}(m(x)\equiv0,k(x,x'))$$
It turns out that this is a conjugate prior, where the posterior is also a Gaussian process. Note that, in this language, $f(x)$ takes the position of the parameters ($\theta$) in Bayes rule
$$p(\theta|\mathcal{D},M)=\frac{p(\mathcal{D}|\theta,M) p(\theta|M)}{p(\mathcal{D}|M)}$$
where $p(\theta|\mathcal{D},M)$ is the posterior, $p(\mathcal{D}|\theta,M)$ is the likelihood, $p(\theta|M)$ is the prior and $p(\mathcal{D}|M)$ is the marginal likelihood. So, in this sense, a Gaussian process is a parametric model with an infinite number of parameters (since a function has an infinite number of values in any given range of $x$).
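For this model, the posterior predictive distribution at a test input $x^*$ is available in closed form (the standard Gaussian process regression result). Writing $K_{ij}=k(x_i,x_j)$ for the covariance matrix of the observed inputs and $\mathbf{k}_*=(k(x^*,x_1),\dots,k(x^*,x_n))^T$, the predicted (noisy) observation at $x^*$ is Gaussian with
$$\mu(x^*)=\mathbf{k}_*^T(K+\sigma_{\text{noise}}^2 I)^{-1}\mathbf{y}$$
$$\sigma^2(x^*)=k(x^*,x^*)+\sigma_{\text{noise}}^2-\mathbf{k}_*^T(K+\sigma_{\text{noise}}^2 I)^{-1}\mathbf{k}_*$$
These are exactly the quantities computed, point by point, in the prediction code further below.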
### Make some pseudo-data
We will generate some data as a draw from a GP. For this demo, we will assume that
1. The data really were generated from a Gaussian process, and we know the appropriate covariance function $k(x,x')$ to use. In practice such an assumption cannot be verified: the choice of covariance function is a modelling choice.
2. We know the values of the hyperparameters of the Gaussian process which generated our data. This is somewhat contrived for the sake of demonstration. Whilst we may sometimes know parameters like the noise in our data (`var_noise` below), we will probably not know parameters such as $l$ in the above example. In practice, we can maximize the marginal likelihood to learn 'best fit' values of the hyperparameters of our Gaussian process; a minimal sketch of this idea is given after this list.
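As a minimal sketch of that last point (not part of the original analysis): for a zero-mean GP with Gaussian noise, the log marginal likelihood is $\log p(\mathbf{y}|\mathbf{x})=-\tfrac{1}{2}\mathbf{y}^T K_y^{-1}\mathbf{y}-\tfrac{1}{2}\log|K_y|-\tfrac{n}{2}\log 2\pi$ with $K_y=K+\sigma_{\text{noise}}^2 I$. The helper below, whose name `log_marginal_likelihood` is introduced here purely for illustration, evaluates this for the squared exponential covariance used above; one could compare it across candidate values of $l$ (or optimise it) once `data_x` and `data_y` have been generated in the next cell.
```
def log_marginal_likelihood(data_x, data_y, l, var_noise):
    """Log marginal likelihood of the data under a zero-mean GP with the
    squared exponential covariance used above (illustrative helper)."""
    n = len(data_x)
    K_y = np.exp(-(data_x[:, None] - data_x[None, :])**2 / l) + var_noise*np.identity(n)
    alpha = np.linalg.solve(K_y, data_y) # K_y^{-1} y, without forming the inverse explicitly
    sign, logdet = np.linalg.slogdet(K_y) # log |K_y|, computed stably
    return -0.5*np.dot(data_y, alpha) - 0.5*logdet - 0.5*n*np.log(2*np.pi)

# e.g. [log_marginal_likelihood(data_x, data_y, l_cand, var_noise) for l_cand in (0.1, 1, 10)]
```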
```
l = 1
var_noise = 0.01
sigma_true, y_true = sample_from_gp(l) # The true function, a sample from a GP
data_n = 10
data_indicies = np.random.choice(np.arange(int(round(0.1*D)),int(round(0.9*D))),data_n,replace=False)
data_y = y_true[data_indicies] + np.random.normal(loc=0.0,scale=np.sqrt(var_noise),size=data_n)
data_x = x[data_indicies]
```
So we have our data, `data_y` and `data_x`. We now want to make predictions of the value of $y$ at every point in the array `x`.
### Compute the posterior predictive distribution of the Gaussian process
```
K = np.zeros((data_n,data_n)) # make a covariance matrix
for i in range(data_n):
for j in range(data_n):
K[i,j] = cov_matrix_function(data_x[i],data_x[j],l) # squared exponential GP
means = np.zeros(D)
variances = np.zeros(D)
K_inv_n = np.linalg.inv( K + var_noise*np.identity(data_n) ) # (K + var_noise*I)^{-1} is constant, so compute it once
for i, xs in enumerate(x):
    k = cov_matrix_function(xs, data_x, l) # covariances between the test input xs and the observed inputs
    v = np.dot(K_inv_n, data_y)
    mean = np.dot(k, v) # posterior predictive mean
    v2 = np.dot(K_inv_n, k)
    var = cov_matrix_function(xs, xs, l) + var_noise - np.dot(k, v2) # posterior predictive variance
    means[i] = mean
    variances[i] = var
p2 = plt.Rectangle((0, 0), 0.1, 0.1, fc="red", alpha = 0.3, ec = 'red')
p3 = mlines.Line2D([], [], color='red')
# Plot a 95% BCI using the 2 sigma rule for Normal distributions
fig, ax = plt.subplots()
ax.fill_between(x, means+2*np.sqrt(variances), means-2*np.sqrt(variances), color='red', alpha=0.3)
p1=ax.plot(data_x, data_y, 'kx')
ax.plot(x, y_true,'-r')
ax.set_xlabel('input, x')
ax.set_ylabel('output, y')
ax.legend([p1[0],p2, p3], ['Data', 'Posterior predictive distribution', 'True function'], prop={'size':8});
```
We have plotted the true function from which the data were generated. The shaded region is a 95% Bayesian confidence interval for the value of $y$ at each particular $x$. That means we have used all of the available data to constrain the possible values of $y$ at points $x$ where we do not already have data.
Even where we have data, the uncertainty is non-zero due to the existence of measurement error (i.e. $\sigma_{\text{noise}}>0$).
```
import pandas as pd
import os
import re
import matplotlib as mpl
import matplotlib.patches as patches
import matplotlib.pyplot as plt
def rgb_rel(rgb):
return tuple([round(x/255, 3) for x in rgb])
def rel_rgb(rgb):
return rgb_rel(rgb)
# color definitions
white = (1, 1, 1)
light_blue = rel_rgb([50, 150, 255])
dark_blue = rel_rgb([50, 50, 255])
mustard = rgb_rel([220, 200, 0])
medium_grey = rgb_rel([160, 160, 160])
purple = rgb_rel([150, 0, 150])
red = rgb_rel([255, 0, 0])
light_yellow = rgb_rel([255, 255, 150])
light_orange = rgb_rel([255, 180, 100])
all60 = rel_rgb([0,109,44]) # darkest shade
any60 = rel_rgb([49,163,84])
any50 = rel_rgb([116,196,118])
any40 = rel_rgb([186,228,179])
any20 = rel_rgb([237,248,233]) # lightest shade
region_colors = {
'Gap': medium_grey,
'Variation': mustard,
'Unknown': purple,
'SD_98': light_blue,
'SD_99': dark_blue,
'UAB': red,
'LCaln': light_orange
}
def color_segdups(region_score):
if region_score < 980:
return white
elif 980 <= region_score < 990:
return light_blue
elif 990 <= region_score < 1001:
return dark_blue
else:
        raise ValueError(region_score)
def load_annotation(file_path, color=None):
df = pd.read_csv(file_path, sep='\t')
if 'Issue_Type' in df:
df['color'] = df['Issue_Type'].apply(lambda x: region_colors[x])
elif 'chromStart' in df:
df['color'] = df['score'].apply(color_segdups)
else:
assert color is not None, 'no color: {}'.format(file_path)
df['color'] = df['start'].apply(lambda x: color)
if 'end' in df:
df['length'] = df['end'] - df['start']
if 'chromEnd' in df:
df['length'] = df['chromEnd'] - df['chromStart']
df['start'] = df['chromStart']
df['end'] = df['chromEnd']
df = df.loc[(df['score'] >= 980), :].copy()
if '#chrom' in df:
df['chrom'] = df['#chrom']
return df
def load_cytogenetic_bands():
# http://circos.ca/tutorials/lessons/2d_tracks/connectors/configuration
gie_stain_rgb = {
'gpos100': (0,0,0),
'gpos': (0,0,0),
'gpos75': (130,130,130),
'gpos66': (160,160,160),
'gpos50': (200,200,200),
'gpos33': (210,210,210),
'gpos25': (200,200,200),
'gvar': (220,220,220),
'gneg': (255,255,255),
'acen': (217,47,39),
'stalk': (100,127,164)
}
gie_stain_frac_rgb = {}
for k, v in gie_stain_rgb.items():
gie_stain_frac_rgb[k] = rgb_rel(v)
path = '/home/local/work/code/github/project-diploid-assembly/annotation/grch38/known_regions'
cytobands = 'ucsc_cytoband.bed'
df = pd.read_csv(
os.path.join(path, cytobands),
header=0,
names=['chrom', 'start', 'end', 'name', 'gieStain'],
sep='\t'
)
df['length'] = df['end'] - df['start']
df['color'] = df['gieStain'].apply(lambda x: rel_rgb(gie_stain_rgb[x]))
return df
grch38_path = '/home/local/work/code/github/project-diploid-assembly/annotation/grch38'
issues = os.path.join(grch38_path, '20200723_GRCh38_p13_unresolved-issues.bed')
segdups = os.path.join(grch38_path, 'GRCh38_segdups.bed')
ctg_aln_path = '/home/local/work/data/hgsvc/aln_summary'
annotations = [
(load_annotation(segdups), 'SD >98% id.'),
(load_annotation(issues), 'Issues'),
(
load_annotation(
os.path.join(ctg_aln_path, 'lowQAln_0-20_any_all.bed'),
region_colors['LCaln']
),
'Any MQ<20'
),
(
load_annotation(
os.path.join(ctg_aln_path, 'highQ_60_geq20_all.bed'),
any20
),
'>20% MQ:60'
),
(
load_annotation(
os.path.join(ctg_aln_path, 'highQ_60_geq40_all.bed'),
any40
),
'>40% MQ:60'
),
(
load_annotation(
os.path.join(ctg_aln_path, 'highQ_60_geq60_all.bed'),
any50
),
'>60% MQ:60'
),
(
load_annotation(
os.path.join(ctg_aln_path, 'highQ_60_geq80_all.bed'),
any60
),
'>80% MQ:60'
),
(
load_annotation(
os.path.join(ctg_aln_path, 'highQ_60_all_all.bed'),
all60
),
'100% MQ:60'
)
]
# Figure stuff
width = 10
height = 4
fig, ax = plt.subplots(figsize=(width, height))
y_start = 0
#primary_chroms = ['chr' + str(i) for i in range(1, 23)] + ['chrX']
primary_chroms = ['chr16']
y_labels = []
y_label_pos = []
legend_patches = []
max_plot = 0
cyto_bands = load_cytogenetic_bands()
for c in reversed(primary_chroms):
y_labels.append(c.strip('chr') + 'p')
y_label_pos.append(y_start + 0.5)
barh_xranges = []
barh_colors = []
for idx, band in cyto_bands.loc[cyto_bands['chrom'] == c, :].iterrows():
x_min = band['start']
x_width = band['length']
x_max = x_min + x_width
max_plot = max(max_plot, x_max)
barh_xranges.append((x_min, x_width))
barh_colors.append(band['color'])
ax.broken_barh(
barh_xranges,
(y_start, 1),
edgecolor='black',
facecolors=barh_colors,
zorder=10
)
y_start += 1
# add annotations bottom to top
for ann_table, ann_label in annotations:
barh_xranges = []
barh_colors = []
y_labels.append(ann_label)
y_label_pos.append(y_start + 0.5)
for idx, region in ann_table.loc[ann_table['chrom'] == c, :].iterrows():
x_min = region['start']
x_width = region['length']
barh_xranges.append((x_min, x_width))
if region['color'] is None:
raise ValueError(ann_label)
barh_colors.append(region['color'])
# if c == primary_chroms[0]:
# if ann_label == 'Issues':
# for issue_type in ['Gap', 'Variation', 'Unknown']:
# p = patches.Patch(
# facecolor=region_colors[issue_type],
# edgecolor='black',
# label='{}: Issue / {}'.format(literal, issue_type)
# )
# legend_patches.append(p)
# else:
# p = patches.Patch(
# facecolor=region_colors[ann_label],
# edgecolor='black',
# label='{}: {}'.format(literal, ann_label)
# )
# legend_patches.append(p)
ax.broken_barh(
barh_xranges,
(y_start, 1),
edgecolor=None,
facecolors=barh_colors
)
y_start += 1
y_start += 2
# build custom legend
# ax.legend(
# handles=list(reversed(legend_patches)),
# loc='best',
# handlelength=3,
# handleheight=1,
# prop={'size': 16}
# )
# annotate variation in region
ax.annotate(
'HG-2425',
(22760989 + 20001, 2.5), # point
(25e6, 2.25), # text
arrowprops=dict(
facecolor='black',
width=2,
headwidth=8,
headlength=4
),
fontsize=14
)
_ = ax.set_yticks(y_label_pos)
_ = ax.set_yticklabels(y_labels, fontsize=14)
#_ = ax.set_xticklabels([])
#_ = ax.set_xticks([])
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.tick_params(axis='y', which='major', length=10)
ax.tick_params(axis='x', which='major', length=10, size=14)
ax.vlines([195.55e6, 196.1e6], -0.75, y_start - 1.75, colors='black', linestyles='dotted', zorder=15)
#_ = ax.set_xlim(-500000, max_plot // 1e6 * 1e6 + 1e6)
_ = ax.set_xlim(10e6, 40e6)
out_path = '/home/local/work/data/hgsvc/figSX_panels/ideograms'
fig.savefig(
os.path.join(out_path, 'chr16p_lowres.png'), dpi=150, bbox_inches='tight'
)
fig.savefig(
os.path.join(out_path, 'chr16p.svg'), bbox_inches='tight'
)
```
```
import pysam
import pandas as pd
from tqdm.auto import tqdm
import numpy as np
import itertools
bam_file = "/home/dbeb/btech/bb1160039/scratch/project/heart_10k_v3_possorted_genome_bam.bam"
bai_file = "/home/dbeb/btech/bb1160039/scratch/project/heart_10k_v3_possorted_genome_bam.bam.bai"
samf = pysam.Samfile(bam_file, "rb")
replicon_dict = dict([[replicon, {'seq_start_pos': 0,'seq_end_pos': length}] for replicon, length in zip(samf.references, samf.lengths)])
print(replicon_dict['1']['seq_start_pos'])
print(replicon_dict['1']['seq_end_pos'])
samfile = pysam.AlignmentFile(bam_file, "rb", index_filename = bai_file)
x=['1','2','3','4','5','6','7','8','9','10','11','12','13','14','15','16','17','18','19','X','Y']
# x=['1','2','3','4','5','6','7','8','9','10','11','12','13','14','15','16','17','18','19','20','21','22','X','Y']
list_tags = []
for i in tqdm(range(0,len(x))):
for read in samfile.fetch(x[i], replicon_dict[x[i]]['seq_start_pos'],replicon_dict[x[i]]['seq_end_pos']):
try:
if read.has_tag("GX") and read.get_tag("NH")==1:
list_tags.append([read.get_tag("CB"),read.get_tag("UB"),str(read.get_tag("GX")+"-"+read.get_tag("GN")+"-"+read.reference_name+"-"+str(read.get_reference_positions()[0])+"-"+str(len(read.get_reference_positions())))])
except KeyError:
continue
%%time
list_tags_rm_dup = list(list_tags for list_tags,_ in itertools.groupby(list_tags))
print(len(list_tags))
print(len(list_tags_rm_dup))
start=0
val=int(list_tags_rm_dup[start][2].split("-")[3])
for i in tqdm(range(1,len(list_tags_rm_dup))):
if (int(list_tags_rm_dup[i][2].split("-")[3]) - val>50 and i-start>1):
if (i-start>1):
#update the inner elements
for inner in list_tags_rm_dup[start:i]:
inner[2] = list_tags_rm_dup[start][2]
#update start to point to this pos
start = i
#update val to the val at this pos
val = int(list_tags_rm_dup[i][2].split("-")[3])
%%time
list_tags_rm_dup_final = list(list_tags_rm_dup for list_tags_rm_dup,_ in itertools.groupby(list_tags_rm_dup))
n=len(list_tags_rm_dup_final)
n
%%time
df_all_p1 = pd.DataFrame(list_tags_rm_dup_final[:int(n/10)], columns=["celltag","moltag","pseudoname"])
df_all_p2 = pd.DataFrame(list_tags_rm_dup_final[int(n/10):int(2*n/10)], columns=["celltag","moltag","pseudoname"])
df_all_p3 = pd.DataFrame(list_tags_rm_dup_final[int(2*n/10):int(3*n/10)], columns=["celltag","moltag","pseudoname"])
df_all_p4 = pd.DataFrame(list_tags_rm_dup_final[int(3*n/10):int(4*n/10)], columns=["celltag","moltag","pseudoname"])
df_all_p5 = pd.DataFrame(list_tags_rm_dup_final[int(4*n/10):int(5*n/10)], columns=["celltag","moltag","pseudoname"])
df_all_p6 = pd.DataFrame(list_tags_rm_dup_final[int(5*n/10):int(6*n/10)], columns=["celltag","moltag","pseudoname"])
df_all_p7 = pd.DataFrame(list_tags_rm_dup_final[int(6*n/10):int(7*n/10)], columns=["celltag","moltag","pseudoname"])
df_all_p8 = pd.DataFrame(list_tags_rm_dup_final[int(7*n/10):int(8*n/10)], columns=["celltag","moltag","pseudoname"])
df_all_p9 = pd.DataFrame(list_tags_rm_dup_final[int(8*n/10):int(9*n/10)], columns=["celltag","moltag","pseudoname"])
df_all_p10 = pd.DataFrame(list_tags_rm_dup_final[int(9*n/10):], columns=["celltag","moltag","pseudoname"])
%%time
c1 = df_all_p1['celltag'].value_counts()
c2 = df_all_p2['celltag'].value_counts()
c3 = df_all_p3['celltag'].value_counts()
c4 = df_all_p4['celltag'].value_counts()
c5 = df_all_p5['celltag'].value_counts()
c6 = df_all_p6['celltag'].value_counts()
c7 = df_all_p7['celltag'].value_counts()
c8 = df_all_p8['celltag'].value_counts()
c9 = df_all_p9['celltag'].value_counts()
c10 = df_all_p10['celltag'].value_counts()
c11 = df_all_p1['pseudoname'].value_counts()
c22 = df_all_p2['pseudoname'].value_counts()
c33 = df_all_p3['pseudoname'].value_counts()
c44 = df_all_p4['pseudoname'].value_counts()
c55 = df_all_p5['pseudoname'].value_counts()
c66 = df_all_p6['pseudoname'].value_counts()
c77 = df_all_p7['pseudoname'].value_counts()
c88 = df_all_p8['pseudoname'].value_counts()
c99 = df_all_p9['pseudoname'].value_counts()
c1010 = df_all_p10['pseudoname'].value_counts()
%%time
df_all_p1_subset = df_all_p1[df_all_p1["celltag"].isin(c1[c1>10].index)]
df_all_p1_subset = df_all_p1_subset[df_all_p1_subset["pseudoname"].isin(c11[c11>10].index)]
df_all_p2_subset = df_all_p2[df_all_p2["celltag"].isin(c2[c2>10].index)]
df_all_p2_subset = df_all_p2_subset[df_all_p2_subset["pseudoname"].isin(c22[c22>10].index)]
df_all_p3_subset = df_all_p3[df_all_p3["celltag"].isin(c3[c3>10].index)]
df_all_p3_subset = df_all_p3_subset[df_all_p3_subset["pseudoname"].isin(c33[c33>10].index)]
df_all_p4_subset = df_all_p4[df_all_p4["celltag"].isin(c4[c4>10].index)]
df_all_p4_subset = df_all_p4_subset[df_all_p4_subset["pseudoname"].isin(c44[c44>10].index)]
df_all_p5_subset = df_all_p5[df_all_p5["celltag"].isin(c5[c5>10].index)]
df_all_p5_subset = df_all_p5_subset[df_all_p5_subset["pseudoname"].isin(c55[c55>10].index)]
df_all_p6_subset = df_all_p6[df_all_p6["celltag"].isin(c6[c6>10].index)]
df_all_p6_subset = df_all_p6_subset[df_all_p6_subset["pseudoname"].isin(c66[c66>10].index)]
df_all_p7_subset = df_all_p7[df_all_p7["celltag"].isin(c7[c7>10].index)]
df_all_p7_subset = df_all_p7_subset[df_all_p7_subset["pseudoname"].isin(c77[c77>10].index)]
df_all_p8_subset = df_all_p8[df_all_p8["celltag"].isin(c8[c8>10].index)]
df_all_p8_subset = df_all_p8_subset[df_all_p8_subset["pseudoname"].isin(c88[c88>10].index)]
df_all_p9_subset = df_all_p9[df_all_p9["celltag"].isin(c9[c9>10].index)]
df_all_p9_subset = df_all_p9_subset[df_all_p9_subset["pseudoname"].isin(c99[c99>10].index)]
df_all_p10_subset = df_all_p10[df_all_p10["celltag"].isin(c10[c10>10].index)]
df_all_p10_subset = df_all_p10_subset[df_all_p10_subset["pseudoname"].isin(c1010[c1010>10].index)]
%%time
counts_p1=df_all_p1_subset.groupby(['pseudoname','celltag']).size().unstack('celltag', fill_value=0)
counts_p2=df_all_p2_subset.groupby(['pseudoname','celltag']).size().unstack('celltag', fill_value=0)
counts_p3=df_all_p3_subset.groupby(['pseudoname','celltag']).size().unstack('celltag', fill_value=0)
counts_p4=df_all_p4_subset.groupby(['pseudoname','celltag']).size().unstack('celltag', fill_value=0)
counts_p5=df_all_p5_subset.groupby(['pseudoname','celltag']).size().unstack('celltag', fill_value=0)
counts_p6=df_all_p6_subset.groupby(['pseudoname','celltag']).size().unstack('celltag', fill_value=0)
counts_p7=df_all_p7_subset.groupby(['pseudoname','celltag']).size().unstack('celltag', fill_value=0)
counts_p8=df_all_p8_subset.groupby(['pseudoname','celltag']).size().unstack('celltag', fill_value=0)
counts_p9=df_all_p9_subset.groupby(['pseudoname','celltag']).size().unstack('celltag', fill_value=0)
counts_p10=df_all_p10_subset.groupby(['pseudoname','celltag']).size().unstack('celltag', fill_value=0)
cell_bcode=set(counts_p1.columns).intersection(set(counts_p2.columns)).intersection(set(counts_p3.columns)).intersection(set(counts_p4.columns)).intersection(set(counts_p5.columns)).intersection(set(counts_p6.columns)).intersection(set(counts_p7.columns)).intersection(set(counts_p8.columns)).intersection(set(counts_p9.columns)).intersection(set(counts_p10.columns))
len(cell_bcode)
%%time
counts_p1 = counts_p1[cell_bcode]
counts_p2 = counts_p2[cell_bcode]
counts_p3 = counts_p3[cell_bcode]
counts_p4 = counts_p4[cell_bcode]
counts_p5 = counts_p5[cell_bcode]
counts_p6 = counts_p6[cell_bcode]
counts_p7 = counts_p7[cell_bcode]
counts_p8 = counts_p8[cell_bcode]
counts_p9 = counts_p9[cell_bcode]
counts_p10 = counts_p10[cell_bcode]
%%time
counts_full = pd.concat([counts_p1,counts_p2, counts_p3, counts_p4, counts_p5, counts_p6, counts_p7, counts_p8, counts_p9, counts_p10])
# counts_full.to_csv("/home/dbeb/btech/bb1160039/scratch/project/counts_genes_plant.csv")
```
<a href="https://colab.research.google.com/github/vdnew/Loan-Prediction/blob/main/Logistic_Regression.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set(style="white", color_codes=True)
path = '/content/train_loanprediction1.csv'
train = pd.read_csv(path)
train.head()
train.describe()
train.info()
train.shape
train.isnull().any()
train.isnull().sum()
train[['Gender']].info()
train.head(10)
train['Property_Area'].unique()
train['Property_Area'].value_counts()
train_loan = train.dropna()
train_loan.info()
train.info()
```
# Data Preprocessing
```
train['Dependents'].fillna(1,inplace=True)
train.info()
train['LoanAmount'].fillna(train.LoanAmount.mean(),inplace=True)
train.info()
train.head(10)
ValueMapping = {'Yes': 1, 'No': 0}
train['Married_Section'] = train['Married'].map(ValueMapping)
train.head()
ValueMapping1 = {'Male': 1, 'Female': 0}
train['Gender_Section'] = train['Gender'].map(ValueMapping1)
train.head()
train['Education'].unique()
ValueMapping2 = {'Graduate': 1, 'Not Graduate': 0}
train['Edu_Section'] = train['Education'].map(ValueMapping2)
train.head()
train.info()
train['Married_Section'].fillna(train.Married_Section.mean(), inplace=True)
train['Gender_Section'].fillna(train.Gender_Section.mean(), inplace=True)
train['Loan_Amount_Term'].fillna(train.Loan_Amount_Term.mean(), inplace=True)
train['Credit_History'].fillna(train.Credit_History.mean(), inplace=True)
train.info()
ValueMapping3 = {'Yes': 1, 'No': 0}
train['Employed_Section'] = train['Self_Employed'].map(ValueMapping3)
train.head()
train.info()
from sklearn.preprocessing import LabelEncoder
lb = LabelEncoder()
train['Property_Section'] = lb.fit_transform(train['Property_Area'])
train.head()
ValueMapping4 = {'Y':1, 'N':0}
train['Loan_Section'] = train['Loan_Status'].map(ValueMapping4)
train.head()
sns.FacetGrid(train,hue="Gender_Section",size=4) \
.map(plt.scatter,"Loan_Status","LoanAmount") \
.add_legend()
plt.show()
sns.FacetGrid(train,hue="Property_Section",size=4) \
.map(plt.scatter,"ApplicantIncome","CoapplicantIncome") \
.add_legend()
plt.show()
plt.figure(figsize = (10,7))
x = train["LoanAmount"]
plt.hist(x, bins = 30, color = "pink")
plt.title("Loan taken by Customers")
plt.xlabel("Loan Figures")
plt.ylabel("Count")
sns.boxplot(x="Property_Area", y="Gender_Section", data=train)
sns.boxplot(x="Married_Section", y="ApplicantIncome", data=train)
train_temp=train[train["Education"]== "Graduate"]
train_temp["Self_Employed"].hist()
sns.FacetGrid(train, hue="Credit_History", size=6).map(sns.kdeplot, "CoapplicantIncome").add_legend()
cols = ['ApplicantIncome','CoapplicantIncome','LoanAmount','Loan_Amount_Term','Credit_History','Married_Section',
'Gender_Section','Edu_Section','Employed_Section','Property_Section']
f, ax = plt.subplots(figsize=(10, 7))
cm = np.corrcoef(train[cols].values.T)
sns.set(font_scale=1.5)
hm = sns.heatmap(cm,
cbar=True,
annot=True,
square=True,
fmt='.2f',
annot_kws={'size': 15},
yticklabels=cols,
xticklabels=cols)
plt.show()
train['Employed_Section'].unique()
train['Employed_Section'].fillna(1,inplace=True)
train.head()
train['Employed_Section'].unique()
train['Gender_Section'].unique()
train['Gender_Section'].fillna(1,inplace=True)
train.head()
train['Gender_Section'].unique()
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
X=train[['ApplicantIncome','CoapplicantIncome','LoanAmount','Loan_Amount_Term','Credit_History','Married_Section',
'Gender_Section','Edu_Section','Employed_Section','Property_Section']].values
y=train[["Loan_Section"]].values
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
train.isna().any()
model.fit(X_train, y_train)
model.score(X_train,y_train)
model.score(X_test,y_test)
expected = y_test
predicted = model.predict(X_test)
from sklearn import metrics
print(metrics.classification_report(expected, predicted))
metrics.confusion_matrix(expected, predicted)
```
```
import pandas as pd
import numpy as np
import ipyvolume as ipv
import pyuff
import os
```
# Importing geometry data
```
df = pd.read_excel('points.xlsx')#file with geometry data and local CS
cx = np.array([1,0,0])*10
cy = np.array([0,1,0])*10
cz = np.array([0,0,1])*10
x = df['X']
y = df['Y']
z = df['Z']
tm_i = df.keys()[4:-3]
```
# Matrices for tranformation from global to local CS
```
trans_matrices = []
j = 0
t = []
for i in tm_i:
t.append(np.asarray(df[i][:3]))
j+=1
if j==3:
j=0
t=np.cos(np.transpose(np.asarray(t))*np.pi/180)
trans_matrices.append(t)
t=[]
uffwrite = pyuff.UFF('./tree_structure_mini.uff')
```
# Writing model info
```
data={'type':151,
'model_name':'3D tree structure',
'description':'Dimension: 379x179x474 - CAD model: tree.step',
'db_app':'0',
'program':'0'}
uffwrite._write_set(data,'overwrite')
```
# Writing geometry
```
data={'type':15,
'node_nums':np.array(range(len(x))),
'def_cs':np.zeros_like(x),
'disp_cs':list(df['cs']),
'color':np.ones_like(x),
'x':x,
'y':y,
'z':z}
uffwrite._write_set(data,'add')
```
# Data for trace lines
```
traces = []
for i in range(1,len(df['cs'])):
if len(traces)<df['cs'][i]:
traces.append([])
traces[df['cs'][i]-1].append(i)
```
# Writing datasets for each trace line
```
for i in range(len(traces)):
data={'type': 82,
'trace_num': i+1,
'n_nodes': len(traces[i]),
'color': 0,
'id': 'line %i'%(i+1),
'nodes': np.asarray(traces[i])}
uffwrite._write_set(data,'add')
```
# CS matrices to UFF compatible structure
```
n = len(trans_matrices)
tm = np.zeros([4*n,3])
for i in range(n):
tm[4*i:4*i+3,:]=trans_matrices[i]
tm[4*i+3,:]=[0,0,0]
```
# Writing CS matrices
```
data={'type':2420,
'nodes':np.array(range(n)),
'local_cs':tm}
uffwrite._write_set(data,'add')
n = len(uffwrite.get_set_types()) # checking the number of written datasets
frfs = np.load('FRFs_mini.npy')#importing FRFs data
freq = np.load('Freq_mini.npy')#importing Freq list
```
# Writing each FRF into own dataset 58
```
for o in range(3):
for v in range(3):
for t in range(43):
resp_node = 0
resp_direc = o+1
ref_node = t+1
ref_direc = v+1
frf = frfs[o,v,t,:2000]
datai={'type':58,
'binary':1,
'func_type':4,
'rsp_node': resp_node,
'rsp_dir': resp_direc,
'ref_dir': ref_direc,
'ref_node': ref_node,
'data': frf,
'x': freq,
'id1': 'id1',
'rsp_ent_name': 'name',
'ref_ent_name': 'name',
'abscissa_spacing':1,
'abscissa_spec_data_type':18,
'ordinate_spec_data_type':12,
'orddenom_spec_data_type':13}
uffwrite._write_set(datai,'add')
v_x,v_y,v_z = np.load('shapes.npy')#importing modal shapes
freq = np.load('nat-freq.npy') # importing modal frequencies
```
# Writing each mode into own dataset 55
```
n=10
if v_x.shape[1]<10:
n=v_x.shape[1]
for i in range(n):
vektor_x = v_x[:,i]
vektor_y = v_y[:,i]
vektor_z = v_z[:,i]
data={'type':55,
'analysis_type':2,
'data_ch':3,
'spec_data_type':8,
'load_case':0,
'mode_n':i,
'freq':freq[i],
'node_nums':np.array(range(1,44)),
'r1':vektor_x,
'r2':vektor_y,
'r3':vektor_z,
'r4':np.zeros_like(vektor_x),
'r5':np.zeros_like(vektor_x),
'r6':np.zeros_like(vektor_x),
}
uffwrite._write_set(data,'add')
```
# Checking number of datasets 55 and 58
```
j=0
for s in pyuff.UFF('./tree_structure_mini.uff').get_set_types():
    if s==55:
        j+=1
print('number of type 55 datasets:', j)
j=0
for s in pyuff.UFF('./tree_structure_mini.uff').get_set_types():
    if s==58:
        j+=1
print('number of type 58 datasets:', j)
```
<a href="https://colab.research.google.com/github/cesarriat/mlir/blob/master/Copy_of_MiPrimeraApp.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from __future__ import print_function
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_dataframe = pd.read_csv("https://storage.googleapis.com/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
california_housing_dataframe["median_house_value"] /= 1000.0
california_housing_dataframe
california_housing_dataframe.describe()
# Primer Paso: Definir las características y configurar las denominadas columnas de características
# Definir la característica de entrada: total_rooms.
my_feature = california_housing_dataframe [["total_rooms"]]
# Configurar una columna numérica de característica para total_rooms.
feature_columns = [tf.feature_column.numeric_column("total_rooms")]
# Segundo Paso : Definir el Objetivo (Target)
# Definir la etiqueta.
targets = california_housing_dataframe["median_house_value"]
# Tercer Paso: Configurar el LinearRegressor
# Usar descenso de gradiente como el optimizador para entrenar el modelo.
# Configurar una tasa de aprendizaje de 0.0000001 para Descenso de Gradiente.
my_optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.0000001)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
# Configurar el modelo de regresión lineal con nuestras columnas característica y optimizador.
linear_regressor = tf.estimator.LinearRegressor(
feature_columns=feature_columns,
optimizer=my_optimizer
)
```
**Step Four: Define the Input Function**
```
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
"""Entrena un modelo de regresión lineal de una característica.
Argumentos:
features:DataFrame pandas de característicass
targets: DataFrame pandas de objetivos
batch_size: Tamaño de lotes pasados al modelo
shuffle: True or False. Si se deben mezclar los datos.
num_epochs: Número de epochs por los que los datos se repetirán. None = repetir indefinidamente
Devuelve:
Tuple de (features, labels) para el siguiente lote de datos
"""
# Convertir datos pandas en un dict de arrays np.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construir un dataset, y configure batching/repeating.
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Mezclar los datos, si se especifica.
if shuffle:
ds = ds.shuffle(buffer_size=10000)
# Devolver el nuevo lote de datos.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
```
**Step Five: Train the Model**
```
_ = linear_regressor.train(
input_fn = lambda:my_input_fn(my_feature, targets),
steps=100
)
_
```
**Step Six: Evaluate the Model**
```
# Create an input function for predictions.
# Note: Since we are making just one prediction for each example, we don't
# need to repeat or shuffle the data here.
prediction_input_fn = lambda: my_input_fn(my_feature, targets, num_epochs=1, shuffle=False)
# Call predict() on the linear_regressor to make predictions.
predictions = linear_regressor.predict(input_fn=prediction_input_fn)
# Format the predictions as a NumPy array, so we can compute error metrics.
predictions = np.array([item['predictions'][0] for item in predictions])
# Print Mean Squared Error and Root Mean Squared Error.
mean_squared_error = metrics.mean_squared_error(predictions, targets)
root_mean_squared_error = math.sqrt(mean_squared_error)
print("Mean Squared Error (on training data): %0.3f" % mean_squared_error)
print("Root Mean Squared Error (on training data): %0.3f" % root_mean_squared_error)
min_house_value = california_housing_dataframe["median_house_value"].min()
max_house_value = california_housing_dataframe["median_house_value"].max()
min_max_difference = max_house_value - min_house_value
print("Min. Median House Value: %0.3f" % min_house_value)
print("Max. Median House Value: %0.3f" % max_house_value)
print("Difference between Min. and Max.: %0.3f" % min_max_difference)
print("Root Mean Squared Error: %0.3f" % root_mean_squared_error)
calibration_data = pd.DataFrame()
calibration_data["predictions"] = pd.Series(predictions)
calibration_data["targets"] = pd.Series(targets)
calibration_data.describe()
sample = california_housing_dataframe.sample(n=300)
# Get the min and max total_rooms values.
x_0 = sample["total_rooms"].min()
x_1 = sample["total_rooms"].max()
# Retrieve the final weight and bias generated during training.
weight = linear_regressor.get_variable_value('linear/linear_model/total_rooms/weights')[0]
bias = linear_regressor.get_variable_value('linear/linear_model/bias_weights')
# Get the predicted median_house_values for the min and max total_rooms values.
y_0 = weight * x_0 + bias
y_1 = weight * x_1 + bias
# Plot our regression line from (x_0, y_0) to (x_1, y_1).
plt.plot([x_0, x_1], [y_0, y_1], c='r')
# Label the graph axes.
plt.ylabel("median_house_value")
plt.xlabel("total_rooms")
# Plot a scatter plot from our data sample.
plt.scatter(sample["total_rooms"], sample["median_house_value"])
# Display the graph.
plt.show()

def train_model(learning_rate, steps, batch_size, input_feature="total_rooms"):
    """Trains a linear regression model of one feature.

    Args:
      learning_rate: A `float`, the learning rate.
      steps: A non-zero `int`, the total number of training steps. A training step
        consists of a forward and backward pass using a single batch.
      batch_size: A non-zero `int`, the batch size.
      input_feature: A `string` specifying a column from `california_housing_dataframe`
        to use as the input feature.
    """
    periods = 10
    steps_per_period = steps / periods
    my_feature = input_feature
    my_feature_data = california_housing_dataframe[[my_feature]]
    my_label = "median_house_value"
    targets = california_housing_dataframe[my_label]
    # Create feature columns.
    feature_columns = [tf.feature_column.numeric_column(my_feature)]
    # Create input functions.
    training_input_fn = lambda: my_input_fn(my_feature_data, targets, batch_size=batch_size)
    prediction_input_fn = lambda: my_input_fn(my_feature_data, targets, num_epochs=1, shuffle=False)
    # Create a linear regressor object.
    my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
    my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
    linear_regressor = tf.estimator.LinearRegressor(
        feature_columns=feature_columns,
        optimizer=my_optimizer
    )
    # Set up to plot the state of our model's line each period.
    plt.figure(figsize=(15, 6))
    plt.subplot(1, 2, 1)
    plt.title("Learned Line by Period")
    plt.ylabel(my_label)
    plt.xlabel(my_feature)
    sample = california_housing_dataframe.sample(n=300)
    plt.scatter(sample[my_feature], sample[my_label])
    colors = [cm.coolwarm(x) for x in np.linspace(-1, 1, periods)]
    # Train the model, but do so inside a loop so that we can periodically
    # assess loss metrics.
    print("Training model...")
    print("RMSE (on training data):")
    root_mean_squared_errors = []
    for period in range(0, periods):
        # Train the model, starting from the prior state.
        linear_regressor.train(
            input_fn=training_input_fn,
            steps=steps_per_period
        )
        # Take a break and compute predictions.
        predictions = linear_regressor.predict(input_fn=prediction_input_fn)
        predictions = np.array([item['predictions'][0] for item in predictions])
        # Compute loss.
        root_mean_squared_error = math.sqrt(
            metrics.mean_squared_error(predictions, targets))
        # Occasionally print the current loss.
        print("  period %02d : %0.2f" % (period, root_mean_squared_error))
        # Add the loss metrics from this period to our list.
        root_mean_squared_errors.append(root_mean_squared_error)
        # Finally, track the weights and biases over time.
        # Apply some math to make sure that the data and line are plotted neatly.
        y_extents = np.array([0, sample[my_label].max()])
        weight = linear_regressor.get_variable_value('linear/linear_model/%s/weights' % input_feature)[0]
        bias = linear_regressor.get_variable_value('linear/linear_model/bias_weights')
        x_extents = (y_extents - bias) / weight
        x_extents = np.maximum(np.minimum(x_extents,
                                          sample[my_feature].max()),
                               sample[my_feature].min())
        y_extents = weight * x_extents + bias
        plt.plot(x_extents, y_extents, color=colors[period])
    print("Model training finished.")
    # Output a graph of loss metrics over periods.
    plt.subplot(1, 2, 2)
    plt.ylabel('RMSE')
    plt.xlabel('Periods')
    plt.title("Root Mean Squared Error vs. Periods")
    plt.tight_layout()
    plt.plot(root_mean_squared_errors)
    # Output a table with calibration data.
    calibration_data = pd.DataFrame()
    calibration_data["predictions"] = pd.Series(predictions)
    calibration_data["targets"] = pd.Series(targets)
    display.display(calibration_data.describe())
    print("Final RMSE (on training data): %0.2f" % root_mean_squared_error)

train_model(
    learning_rate=0.00001,
    steps=100,
    batch_size=1
)
train_model(
    learning_rate=0.00002,
    steps=500,
    batch_size=5
)
train_model(
    learning_rate=0.00002,
    steps=1000,
    batch_size=5,
    input_feature="population"
)
```
<p style="border: 1px solid #e7692c; border-left: 15px solid #e7692c; padding: 10px; text-align:justify;">
<strong style="color: #e7692c">Tip.</strong> <a style="color: #000000;" href="https://nbviewer.jupyter.org/github/PacktPublishing/Hands-On-Computer-Vision-with-Tensorflow/blob/master/ch8/ch8_nb1_action_recognition.ipynb" title="View with Jupyter Online">Click here to view this notebook on <code>nbviewer.jupyter.org</code></a>.
<br/>These notebooks are better read there, as GitHub's default viewer ignores some of the formatting and interactive content.
</p>
<table style="font-size: 1em; padding: 0; margin: 0;">
<tr style="vertical-align: top; padding: 0; margin: 0;">
<td style="vertical-align: top; padding: 0; margin: 0; padding-right: 15px;">
<p style="background: #363636; color:#ffffff; text-align:justify; padding: 10px 25px;">
<strong style="font-size: 1.0em;"><span style="font-size: 1.2em;"><span style="color: #e7692c;">Hands-on</span> Computer Vision with TensorFlow 2</span><br/>by <em>Eliot Andres</em> & <em>Benjamin Planche</em> (Packt Pub.)</strong><br/><br/>
<strong>> Chapter 8: Video and Recurrent Neural Networks</strong><br/>
</p>
<h1 style="width: 100%; text-align: left; padding: 0px 25px;"><small style="color: #e7692c;">Notebook 1:</small><br/>Action recognition in video using LSTMs <br/>from Scratch</h1>
<br/>
<p style="border-left: 15px solid #363636; text-align:justify; padding: 0 10px;">
In this chapter, we covered the inner workings of the basic RNN as well as LSTMs.
<br/><br/>
As a practical application of this new type of neural network, we will build a model to recognize actions in videos.
</p>
<br/>
<p style="border-left: 15px solid #363636; text-align:justify; padding: 0 10px;">
<strong> Requirements </strong>
<br/><br/>
To run this notebook, you need to download the <a href="https://www.crcv.ucf.edu/data/UCF101.php">UCF101 dataset</a> and extract it. When done, change the `BASE_PATH` variable to point to the dataset folder.
</p>
<br/>
<p style="border-left: 15px solid #e7692c; padding: 0 10px; text-align:justify;">
<strong style="color: #e7692c;">Tip.</strong> The notebooks shared on this git repository illustrate some of notions from the book "<em><strong>Hands-on Computer Vision with TensorFlow 2</strong></em>" written by Eliot Andres and Benjamin Planche and published by Packt. If you enjoyed the insights shared here, <strong>please consider acquiring the book!</strong>
<br/><br/>
The book provides further guidance for those eager to learn about computer vision and to harness the power of TensorFlow 2 and Keras to build performant recognition systems for object detection, segmentation, video processing, smartphone applications, and more.</p>
</td>
<td style="vertical-align: top; padding: 0; margin: 0; width: 255px;">
<a href="https://www.packtpub.com" title="Buy on Packt!">
<img src="../banner_images/book_cover.png">
</a>
<p style="background: #e7692c; color:#ffffff; padding: 10px; text-align:justify;"><strong>Leverage deep learning to create powerful image processing apps with TensorFlow 2 and Keras. <br/></strong>Get the book for more insights!</p>
<ul style="height: 32px; white-space: nowrap; text-align: center; margin: 0px; padding: 0px; padding-top: 10px;">
<li style="display: inline-block; height: 100%; vertical-align: middle; float: left; margin: 5px; padding: 0px;">
<a href="https://www.packtpub.com" title="Get your Packt book!">
<img style="vertical-align: middle; max-width: 75px; max-height: 32px;" src="../banner_images/logo_packt.png" width="75px">
</a>
</li>
<li style="display: inline-block; height: 100%; vertical-align: middle; float: left; margin: 5px; padding: 0px;">
<a href="https://www.packtpub.com" title="Get the book on O'Reilly Safari!">
<img style="vertical-align: middle; max-width: 75px; max-height: 32px;" src="../banner_images/logo_oreilly.png" width="75px">
</a>
</li>
<li style="display: inline-block; height: 100%; vertical-align: middle; float: left; margin: 5px; padding: 0px;">
<a href="https://www.packtpub.com" title="Get the book on Amazon!">
<img style="vertical-align: middle; max-width: 75px; max-height: 32px;" src="../banner_images/logo_amazon.png" width="75px">
</a>
</li>
</ul>
</td>
</tr>
</table>
```
# Install packages in the current environment
import sys
!{sys.executable} -m pip install opencv-python
!{sys.executable} -m pip install matplotlib
!{sys.executable} -m pip install tqdm
!{sys.executable} -m pip install scikit-learn
import tensorflow as tf
import os
import cv2
import numpy as np
import matplotlib.pyplot as plt
import tqdm
from sklearn.preprocessing import LabelBinarizer
BASE_PATH = '../data/UCF-101'
VIDEOS_PATH = os.path.join(BASE_PATH, '**','*.avi')
SEQUENCE_LENGTH = 40
```
## Step 1 - Extract features from videos and cache them in files
### Sample 'SEQUENCE_LENGTH' frames from each video
```
def frame_generator():
video_paths = tf.io.gfile.glob(VIDEOS_PATH)
np.random.shuffle(video_paths)
for video_path in video_paths:
frames = []
cap = cv2.VideoCapture(video_path)
num_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
sample_every_frame = max(1, num_frames // SEQUENCE_LENGTH)
current_frame = 0
label = os.path.basename(os.path.dirname(video_path))
max_images = SEQUENCE_LENGTH
while True:
success, frame = cap.read()
if not success:
break
if current_frame % sample_every_frame == 0:
# OPENCV reads in BGR, tensorflow expects RGB so we invert the order
frame = frame[:, :, ::-1]
img = tf.image.resize(frame, (299, 299))
img = tf.keras.applications.inception_v3.preprocess_input(
img)
max_images -= 1
yield img, video_path
if max_images == 0:
break
current_frame += 1
dataset = tf.data.Dataset.from_generator(frame_generator,
output_types=(tf.float32, tf.string),
output_shapes=((299, 299, 3), ()))
dataset = dataset.batch(16).prefetch(tf.data.experimental.AUTOTUNE)
```
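As a quick sanity check (this snippet is our addition, not part of the original notebook), we can pull a single batch from the pipeline and confirm that it yields preprocessed 299x299 frames together with the paths of the videos they were sampled from:
```
# Grab one batch and inspect it; pixel values should lie in [-1, 1]
# after the Inception-style preprocessing applied above.
for frames, paths in dataset.take(1):
    print(frames.shape)  # expected: (16, 299, 299, 3)
    print(paths[0])      # path of the video the first frame came from
```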
### Feature extraction model
```
inception_v3 = tf.keras.applications.InceptionV3(include_top=False, weights='imagenet')
x = inception_v3.output
# We add Average Pooling to transform the feature map from
# 8 * 8 * 2048 to 1 x 2048, as we don't need spatial information
pooling_output = tf.keras.layers.GlobalAveragePooling2D()(x)
feature_extraction_model = tf.keras.Model(inception_v3.input, pooling_output)
```
## Extract features and store them in .npy files
Extraction takes about 1 hour 20 minutes on an NVIDIA 1080 GPU.
```
current_path = None
all_features = []

for img, batch_paths in tqdm.tqdm(dataset):
    batch_features = feature_extraction_model(img)
    batch_features = tf.reshape(batch_features,
                                (batch_features.shape[0], -1))
    for features, path in zip(batch_features.numpy(), batch_paths.numpy()):
        if path != current_path and current_path is not None:
            output_path = current_path.decode().replace('.avi', '.npy')
            np.save(output_path, all_features)
            all_features = []
        current_path = path
        all_features.append(features)

# The loop above only writes a .npy file when the video path changes,
# so the features accumulated for the final video are saved here.
if current_path is not None:
    output_path = current_path.decode().replace('.avi', '.npy')
    np.save(output_path, all_features)
```
## Step 2: Train the LSTM on video features
### Labels preprocessing
```
LABELS = ['UnevenBars','ApplyLipstick','TableTennisShot','Fencing','Mixing','SumoWrestling','HulaHoop','PommelHorse','HorseRiding','SkyDiving','BenchPress','GolfSwing','HeadMassage','FrontCrawl','Haircut','HandstandWalking','Skiing','PlayingDaf','PlayingSitar','FrisbeeCatch','CliffDiving','BoxingSpeedBag','Kayaking','Rafting','WritingOnBoard','VolleyballSpiking','Archery','MoppingFloor','JumpRope','Lunges','BasketballDunk','Surfing','SkateBoarding','FloorGymnastics','Billiards','CuttingInKitchen','BlowingCandles','PlayingCello','JugglingBalls','Drumming','ThrowDiscus','BaseballPitch','SoccerPenalty','Hammering','BodyWeightSquats','SoccerJuggling','CricketShot','BandMarching','PlayingPiano','BreastStroke','ApplyEyeMakeup','HighJump','IceDancing','HandstandPushups','RockClimbingIndoor','HammerThrow','WallPushups','RopeClimbing','Basketball','Shotput','Nunchucks','WalkingWithDog','PlayingFlute','PlayingDhol','PullUps','CricketBowling','BabyCrawling','Diving','TaiChi','YoYo','BlowDryHair','PushUps','ShavingBeard','Knitting','HorseRace','TrampolineJumping','Typing','Bowling','CleanAndJerk','MilitaryParade','FieldHockeyPenalty','PlayingViolin','Skijet','PizzaTossing','LongJump','PlayingTabla','PlayingGuitar','BrushingTeeth','PoleVault','Punch','ParallelBars','Biking','BalanceBeam','Swing','JavelinThrow','Rowing','StillRings','SalsaSpin','TennisSwing','JumpingJack','BoxingPunchingBag']
encoder = LabelBinarizer()
encoder.fit(LABELS)
```
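To make the target encoding concrete, here is a small illustration (our addition, not from the book) of what the fitted `LabelBinarizer` produces: a one-hot row vector per label, which can be mapped back to the class name with `inverse_transform`:
```
# One-hot encode a single label and decode it again.
one_hot = encoder.transform(['Archery'])
print(one_hot.shape)                       # (1, number of classes)
print(one_hot.sum())                       # exactly one active entry
print(encoder.inverse_transform(one_hot))  # ['Archery']
```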
### Defining the model
```
model = tf.keras.Sequential([
tf.keras.layers.Masking(mask_value=0.),
tf.keras.layers.LSTM(512, dropout=0.5, recurrent_dropout=0.5),
tf.keras.layers.Dense(256, activation='relu'),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(len(LABELS), activation='softmax')
])
model.compile(loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy', 'top_k_categorical_accuracy'])
test_file = os.path.join('data', 'testlist01.txt')
train_file = os.path.join('data', 'trainlist01.txt')
with open(test_file) as f:
    test_list = [row.strip() for row in list(f)]
with open(train_file) as f:
    train_list = [row.strip() for row in list(f)]
train_list = [row.split(' ')[0] for row in train_list]
def make_generator(file_list):
def generator():
np.random.shuffle(file_list)
for path in file_list:
full_path = os.path.join(BASE_PATH, path).replace('.avi', '.npy')
label = os.path.basename(os.path.dirname(path))
features = np.load(full_path)
padded_sequence = np.zeros((SEQUENCE_LENGTH, 2048))
padded_sequence[0:len(features)] = np.array(features)
transformed_label = encoder.transform([label])
yield padded_sequence, transformed_label[0]
return generator
train_dataset = tf.data.Dataset.from_generator(make_generator(train_list),
output_types=(tf.float32, tf.int16),
output_shapes=((SEQUENCE_LENGTH, 2048), (len(LABELS))))
train_dataset = train_dataset.batch(16).prefetch(tf.data.experimental.AUTOTUNE)
valid_dataset = tf.data.Dataset.from_generator(make_generator(test_list),
output_types=(tf.float32, tf.int16),
output_shapes=((SEQUENCE_LENGTH, 2048), (len(LABELS))))
valid_dataset = valid_dataset.batch(16).prefetch(tf.data.experimental.AUTOTUNE)
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir='/tmp', update_freq=1000)
model.fit(train_dataset, epochs=500, callbacks=[tensorboard_callback], validation_data=valid_dataset)
```
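The notebook stops at training. As a possible next step, the sketch below (an assumption on our part, with a placeholder file name, not code from the book) shows how the trained LSTM could classify a single video from its cached feature file:
```
# Hypothetical inference example: load one cached .npy feature file,
# pad it to SEQUENCE_LENGTH as the training generator does, and predict.
features = np.load('some_video.npy')  # placeholder path
padded_sequence = np.zeros((SEQUENCE_LENGTH, 2048))
padded_sequence[0:len(features)] = np.array(features)
probabilities = model.predict(padded_sequence[np.newaxis, ...])[0]
print(encoder.classes_[np.argmax(probabilities)])
```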
```
# default_exp data
```
# Data preparation
> Downloading the data and developing the machinery for feeding it to our models
The `fastai` library provides the very flexible DataBlock API. This should generally be our main go-to tool when working with data in non-standard formats.
As fastai v2 is a new version of the library and we have little experience with it, we decided to first drop down to the mid-level API. This ensures we have full control over data processing and lets us learn how to write custom transforms (we will need this to run some of the experiments we have planned). Once we complete the deep dive into how data is handled by the `fastai` library, we should be able to apply the lessons learned and use the higher-level DataBlock API.
Let's download the data.
```
!wget https://static-content.springer.com/esm/art%3A10.1038%2Fs41598-019-48909-4/MediaObjects/41598_2019_48909_MOESM2_ESM.xlsx -O data/Dominicana.xlsx
!wget https://static-content.springer.com/esm/art%3A10.1038%2Fs41598-019-48909-4/MediaObjects/41598_2019_48909_MOESM3_ESM.xlsx -O data/ETP.xlsx
```
## First look at the data
```
#export
from fastai2.data.all import *
dominicana = pd.read_excel('data/Dominicana.xlsx')
etp = pd.read_excel('data/ETP.xlsx')
```
And this is what the data looks like. It contains the ICI information (independent variables) as well as labels, such as Coda type or Clan membership.
```
dominicana.head()
etp.head()
```
## Data for pretraining
Let's construct our dataset step by step. A `TfmdLists` object is able to read the rows of our DataFrame and treat them as items (`item` is the name for an example in the `fastai` parlance).
```
tfmd_lists = TfmdLists(dominicana, [noop])
tfmd_lists[0]
len(tfmd_lists), dominicana.shape[0]
```
Looking good. Let's see if we can extract the ICI information from a single row and package it in a way that would be suitable for our model.
```
#export
def get_independent_vars(row, start_col=4, n_vals=9):
vals = [v for v in row[start_col:(start_col+n_vals)].values if v != 0]
return np.pad(vals, (n_vals - len(vals), 0))
get_independent_vars(dominicana.iloc[0])
```
This looks good. Can we use this in `TfmdLists`?
```
tfmd_lists = TfmdLists(dominicana, [get_independent_vars])
tfmd_lists[0]
```
For the pretraining, we can go directly from this representation to the targets (the target being the last ICI)
```
#export
def independent_vars_to_targs(ary): return ary[-1]
tfmd_lists = TfmdLists(dominicana, [get_independent_vars, independent_vars_to_targs])
tfmd_lists[0]
```
We now need to make sure that the independent variables, our training data, do not contain the target.
```
#export
def drop_last_value(ary): return ary[:-1]
```
We would like each example to be represented as a tuple of `(independent_variables, targets)`. In order to arrive at this representation, we can run two transformation pipelines in parallel.
One transformation pipeline will give us the independent variables:
```TfmdLists(dominicana, [get_independent_vars, drop_last_value])```
and the other will give us the dependent variable, our target:
```TfmdLists(dominicana, [get_independent_vars, independent_vars_to_targs])```
The fastai class that can wrap multiple transformation pipelines is called `Datasets`.
```
datasets = Datasets(dominicana, [[get_independent_vars, drop_last_value], [get_independent_vars, independent_vars_to_targs]]); datasets
datasets[0]
```
This is looking good. We have the data for pretraining ready. But what about actual training? Here we will need labels transformed in a way suitable for our model to learn from.
## Data for training
We specify the first pipeline as follows:
```TfmdLists(dominicana, [get_independent_vars])```
For the second pipeline however, we will need new functionality we have not developed yet. We would like to be able to specify a set of labels as targets (this could be clan membership or coda type for instance).
```
#export
def get_target(row, col_name): return row[col_name]
get_clan_membership = partialler(get_target, col_name='Clan')
tfmd_lists = TfmdLists(dominicana, [get_clan_membership])
tfmd_lists[0]
dominicana.Clan.unique()
```
We can now extract the clan name as a string, but this is not a representation we can train our model on. We need to go from string labels to a set of indexes.
```
Categorize(vocab=['EC1', 'EC2'])('EC1')
```
This does the trick!
Let's now pull all this into a `Datasets` object.
```
datasets = Datasets(dominicana, [[get_independent_vars], [get_clan_membership, Categorize]]); datasets
```
The `Datasets` class can work with the `Categorize` transform to initialize it without us having to explicitly pass the vocab (it creates the vocab from the data we provide it).
```
datasets.tfms[1][-1].vocab
```
We now have everything we need on the data side to reproduce the RNN experiments from the paper. Let us now see if we can use the DataBlock API to nicely package it all up.
## Using the DataBlock API
For the pretraining, we can use all the data we have across the two datasets (the Dominica and Eastern Tropical Pacific (ETP) datasets). Let's concatenate them together.
```
# export
pd.set_option('display.max_columns', None)
merged_datasets = pd.concat((etp, dominicana)).fillna(0)
merged_datasets.head()
```
Now let us craft a DataBlock that will read in the data.
```
#export
get_ETP_independent_vars = partial(get_independent_vars, start_col=5, n_vals=11)
#export
dblock_pretrain = DataBlock(
get_x = (get_ETP_independent_vars, drop_last_value),
get_y = (get_ETP_independent_vars, independent_vars_to_targs),
splitter=TrainTestSplitter(test_size=0.1, random_state=42) # having a validation set is crucial for any task,
) # including pretraining!
datasets_pretrain = dblock_pretrain.datasets(merged_datasets)
datasets_pretrain
```
This is looking good. As for the training data, the situation is a bit more complex: we need to align how we create our datasets with the paper.
It seems that, due to lack of data, whale identification was only evaluated on the training set. Since this gives us little insight into how the model would generalize to unseen data, let us not include this task in our analysis.
With regards to the "coda type classification" task, the paper reports training on 23 coda types from the Dominicana dataset and 43 coda types from the ETP dataset. The authors were kind enough to share their [code on GitHub](https://github.com/dgruber212/Sperm_Whale_Machine_Learning/blob/master/RNNClassifier.py), and we can align how we create our datasets with theirs.
```
dominicana.head()
#export
mask = dominicana.CodaType.isin(['1-NOISE', '2-NOISE','3-NOISE','4-NOISE','5-NOISE','6-NOISE','7-NOISE','8-NOISE','9-NOISE','10-NOISE','10i','10R'])
dominicana_clean = dominicana[~mask]
dominicana_clean.shape
dominicana_clean.CodaType.nunique()
```
For the ETP dataset, unfortunately, the preprocessing code is not in the repository, and based on the information in the paper we were unable to infer how exactly the data was processed.
We will only use the Dominicana dataset for our experiments. This might actually be advantageous: the intent behind this repository is to open the research field to a broader audience and to evaluate the performance of Random Forests. As such, narrowing down the scope of our inquiry in terms of the data we run our experiments on might be beneficial.
We will therefore focus on two tasks: clan identification and coda type identification, both on the Dominicana dataset.
For the clan identification task, the authors report balancing the classes to 949 examples per class. Let us carry out this procedure.
```
dominicana.Clan.value_counts()
dominicana[dominicana.Clan == 'EC1'].sample(n=949).shape
#export
dominicana_clan = pd.concat(
(
dominicana[dominicana.Clan == 'EC1'].sample(n=949, random_state=42),
dominicana[dominicana.Clan == 'EC2']
)
)
dominicana_clan.shape
```
We can now construct our dataset.
```
#export
dblock_train = DataBlock(
get_x = get_independent_vars,
get_y = (get_clan_membership, Categorize),
splitter = TrainTestSplitter(test_size=0.1, random_state=42, stratify=dominicana_clan.Clan.factorize()[0])
)
datasets_clan = dblock_train.datasets(dominicana_clan)
datasets_clan
```
Let's now construct a dataset for classifying coda types.
```
#export
get_coda_type = partialler(get_target, col_name='CodaType')
dblock_train = DataBlock(
get_x = get_independent_vars,
get_y = (get_coda_type, Categorize),
splitter = TrainTestSplitter(test_size=0.1, random_state=42, stratify=dominicana_clean.CodaType.factorize()[0])
)
datasets_coda = dblock_train.datasets(dominicana_clean)
datasets_coda.vocab
```
And now we are ready to start training!
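With the `Datasets` objects in place, a natural next step (not shown in this notebook, so treat the call below as a sketch) is to wrap them in `DataLoaders` so they can be fed to a model in batches:
```
# Minimal sketch: batch the coda-type Datasets object for training.
dls_coda = datasets_coda.dataloaders(bs=64)
xb, yb = dls_coda.one_batch()
print(len(dls_coda.train), len(dls_coda.valid))
```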
# SPD
This notebook replicates the design of [Smaldino et al. (2013)](https://www.journals.uchicago.edu/doi/10.1086/669615).
For now it only contains dummy agents: no interactions, just random movement until their energy runs out.
## Imports & properties
```
# model
from mesa import Model, Agent
from mesa.time import RandomActivation
from mesa.space import SingleGrid
from mesa.datacollection import DataCollector
# visualization
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import holoviews as hv
%load_ext holoviews.ipython
import seaborn as sns
sns.set_theme(style="darkgrid")
# parameter sweep
from mesa.batchrunner import BatchRunner
# environment properties
grid_size = 10
N = 1
starting_energy = 10
living_cost = 1
max_steps = 10e6
```
## Setup model
```
# errors
class ModelError(Exception):
pass
class UnidentifiedCellError(ModelError):
pass
class DummyAgent(Agent):
"""
Always Abstain strategy
- never interacts with any other agents
- moves around randomly
- dies when energy depleted
"""
def __init__(self, model, energy=starting_energy):
super().__init__(model.next_id(), model)
self.energy = energy
def step(self):
# pay cost of living
self.energy -= living_cost
if self.energy <= 0:
# agent died
self.model.grid.remove_agent(self)
self.model.schedule.remove(self)
return
# alive
self.model.n_agents += 1
# move to a random adjacent unoccupied square if exists
neighborhood = self.model.grid.get_neighborhood(self.pos, moore=True)
neighborhood = filter(lambda c: self.model.grid.is_cell_empty(c), neighborhood)
neighborhood = sorted(list(neighborhood))
if neighborhood:
cell = self.random.choice(neighborhood)
self.model.grid.move_agent(self, cell)
class SPDModel(Model):
def __init__(self, n0=N, grid_size=grid_size, wrap=True):
"""
Args:
n0: starting number of agents
grid_size: size length of square grid to use
wrap: whether to wrap grid
"""
super().__init__()
self.schedule = RandomActivation(self)
self.grid = SingleGrid(grid_size, grid_size, torus=wrap)
# Setup agents
for i in range(n0):
agent = DummyAgent(self)
self.grid.position_agent(agent)
self.schedule.add(agent)
self.n_agents = n0
# Init model
self.running = True
self.datacollector = DataCollector(
{
"n_agents": "n_agents",
},
{
"x": lambda a: a.pos[0],
"y": lambda a: a.pos[1],
},
)
self.datacollector.collect(self)
def step(self):
# reset model counters
self.n_agents = 0
self.schedule.step()
self.datacollector.collect(self)
# stop the model if no agents are alive
if self.n_agents == 0:
self.running = False
```
## Run model
```
spd = SPDModel()
def value(cell):
if cell is None:
return 0
elif isinstance(cell, Agent):
return 1
else:
raise UnidentifiedCellError()
hmap = hv.HoloMap(kdims='step')
i = 0
while spd.running:
spd.step()
data = np.array([[value(c) for c in row] for row in spd.grid.grid])
hmap[i] = hv.Image(data, vdims=[hv.Dimension('State', range=(0,3))])
i += 1
hmap
results = spd.datacollector.get_model_vars_dataframe()
sns.lineplot(data=results)
```
## Parameter sweep
```
variable_params = {
"n0": range(1,100,1),
}
fixed_params = {
"grid_size": grid_size,
"wrap": True,
}
param_run = BatchRunner(SPDModel,
variable_params,
fixed_params,
max_steps=max_steps,
model_reporters={
"n_agents": lambda m: m.n_agents,
})
param_run.run_all()
run_data = param_run.get_model_vars_dataframe()
sns.scatterplot(x="n0", y="n_agents", data=run_data)
```
## Udacity P3: Behavior Cloning
```
from library import *
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '1'
tf.python.control_flow_ops = tf
dropout_rate = 0.25
learning_rate = 1e-03
# Number of neurons
neuron_100 = 100
neuron_50 = 50
neuron_10 = 10
neuron_1 = 1
# Number of frames in each convolution layers
conv_layer_1 = 24
conv_layer_2 = 36
conv_layer_3 = 48
conv_layer_4 = 64
# Number of epochs
epoch_no = 5
pooling_size = (2,2)
activation_func = 'relu'
loss_type = 'mean_squared_error'
batch_size = 32
# generator function to augment and tune train data
from train_generator_lib import train_generator
# generator function to tune validation data
from valid_generator_lib import valid_generator
from sklearn.utils import shuffle
from keras.models import model_from_json
# images of size are 320x160
local_data_path = "C:/Users/NIKHIL XAVIER/git/Self-Driving-Car/P3/CarND-Behavioral-Cloning-P3-master/data"
os.chdir(r"C:/Users/NIKHIL XAVIER/git/Self-Driving-Car/P3/CarND-Behavioral-Cloning-P3-master/data")
#os.chdir(r"C:\Users\NIKHIL XAVIER\git\Self-Driving-Car\P3\CarND-Behavioral-Cloning-P3-master\data")
new_data_path = os.path.join(local_data_path, "IMG/", "*.jpg")
final_list = []
turn_str =[]
turn_lft = []
turn_rgt = []
df = pd.io.parsers.read_csv(os.path.join(local_data_path, 'driving_log.csv'))
# Use as dataframe
df = pd.read_csv('driving_log.csv', header=0)
df.columns = ["center", "left","right", "steering", "throttle", "break", "speed"]
df.drop(['throttle', 'break', 'speed'], axis = 1, inplace = True)
for counter in range(len(df)):
    keep_prob = random.random()
    if (df["steering"][counter] > 0.20 and df["steering"][counter] <= 0.50):
        # Right turns: append each sample twice, jittering the steering angle
        # by up to +/-1% each time to augment the under-represented turn data.
        new_steering = df["steering"][counter]*(1.0 + np.random.uniform(-1,1)/100.0)
        turn_rgt.append([df["center"][counter], df["left"][counter], df["right"][counter], new_steering])
        new_steering = df["steering"][counter]*(1.0 + np.random.uniform(-1,1)/100.0)
        turn_rgt.append([df["center"][counter], df["left"][counter], df["right"][counter], new_steering])
    elif (df["steering"][counter] >= -0.50 and df["steering"][counter] < -0.15):
        # Left turns: same two-fold augmentation with jittered steering angles.
        new_steering = df["steering"][counter]*(1.0 + np.random.uniform(-1,1)/100.0)
        turn_lft.append([df["center"][counter], df["left"][counter], df["right"][counter], new_steering])
        new_steering = df["steering"][counter]*(1.0 + np.random.uniform(-1,1)/100.0)
        turn_lft.append([df["center"][counter], df["left"][counter], df["right"][counter], new_steering])
    elif (df["steering"][counter] > -0.02 and df["steering"][counter] < 0.02):
        # Near-straight samples: note that both branches below are identical,
        # so keep_prob currently has no effect and every such sample is kept.
        if (keep_prob <= 0.90):
            turn_str.append([df["center"][counter], df["left"][counter], df["right"][counter], df["steering"][counter]])
        elif (keep_prob > 0.90):
            turn_str.append([df["center"][counter], df["left"][counter], df["right"][counter], df["steering"][counter]])

final_list = turn_rgt + turn_lft + turn_str
print(len(final_list), len(turn_str), len(turn_lft), len(turn_rgt))
random.shuffle(final_list)
# create sets for validation data set and train data set
df_train, df_valid = sklearn.model_selection.train_test_split(final_list, test_size=.20)
# Model architecture
model = models.Sequential()
model.add(layers.core.Lambda(lambda x: (x / 127.5 - 1.), input_shape = (160,320,3)))
model.add(layers.convolutional.Convolution2D(conv_layer_1, 5, 5, activation=activation_func))
model.add(layers.pooling.MaxPooling2D(pool_size=pooling_size))
model.add(layers.convolutional.Convolution2D(conv_layer_2, 5, 5, activation=activation_func))
model.add(layers.pooling.MaxPooling2D(pool_size=pooling_size))
model.add(layers.convolutional.Convolution2D(conv_layer_3, 5, 5, activation=activation_func))
model.add(layers.pooling.MaxPooling2D(pool_size=pooling_size))
model.add(layers.convolutional.Convolution2D(conv_layer_4, 3, 3, activation=activation_func))
model.add(layers.pooling.MaxPooling2D(pool_size=pooling_size))
model.add(layers.core.Flatten())
model.add(layers.core.Dense(neuron_100, activation=activation_func))
model.add(layers.core.Dropout(dropout_rate))
model.add(layers.core.Dense(neuron_50, activation=activation_func))
model.add(layers.core.Dropout(dropout_rate))
model.add(layers.core.Dense(neuron_10, activation=activation_func))
model.add(layers.core.Dense(neuron_1))
model.compile(optimizer=optimizers.Adam(lr=learning_rate), loss=loss_type)
nb_epoch = 5
samples_per_epoch = 20000
nb_val_samples = 2000
from keras.callbacks import ModelCheckpoint
path_link="C:/Users/NIKHIL XAVIER/git/Self-Driving-Car/P3/CarND-Behavioral-Cloning-P3-master/checkpoint2/check-{epoch:02d}-{val_loss:.4f}.hdf5"
checkpoint = ModelCheckpoint(filepath= path_link, verbose=1, save_best_only=False)
callbacks_list = [checkpoint]
train_generator = train_generator(df_train, batch_size=batch_size, key = 1)
validation_generator = valid_generator(df_valid, batch_size=batch_size, key = 0)
history_object = model.fit_generator(train_generator, samples_per_epoch= samples_per_epoch,
validation_data=validation_generator,
nb_val_samples=nb_val_samples, nb_epoch=nb_epoch, verbose=1, callbacks=callbacks_list)
model_json = model.to_json()
with open("C:/Users/NIKHIL XAVIER/git/Self-Driving-Car/P3/CarND-Behavioral-Cloning-P3-master/model_final.json", "w") as json_file:
json_file.write(model_json)
model.save("C:/Users/NIKHIL XAVIER/git/Self-Driving-Car/P3/CarND-Behavioral-Cloning-P3-master/model_final.h5")
print("Saved model to disk")
print(model.summary())
```
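`model_from_json` is imported above but never used. As a possible follow-up (a sketch based on standard Keras usage, not code from the original project), the saved architecture and weights could be reloaded for inference like this:
```
# Hypothetical reload of the saved model, e.g. for later use in drive.py.
with open("C:/Users/NIKHIL XAVIER/git/Self-Driving-Car/P3/CarND-Behavioral-Cloning-P3-master/model_final.json") as json_file:
    loaded_model = model_from_json(json_file.read())
loaded_model.load_weights("C:/Users/NIKHIL XAVIER/git/Self-Driving-Car/P3/CarND-Behavioral-Cloning-P3-master/model_final.h5")
loaded_model.compile(optimizer=optimizers.Adam(lr=learning_rate), loss=loss_type)
```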
## 1 K-means Clustering
In this exercise, you will implement the K-means algorithm and use it for image compression, reducing the number of colors in an image to only those that appear most often.
### 1.1 Implementing K-means
#### 1.1.1 Finding closest centroids
In the cluster-assignment step of K-means, the algorithm assigns every training example $x_i$ to the closest cluster centroid.

Here $c^{(i)}$ denotes the centroid closest to example $x_i$, and $u_j$ is the position (value) of the $j$-th centroid.
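In equation form, the assignment step picks the centroid with the smallest squared distance,
$$c^{(i)} := \arg\min_{j} \lVert x_i - u_j \rVert^2$$
which is exactly what `findClosestCentroids` below computes.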
```
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.io import loadmat
def findClosestCentroids(X, centroids):
"""
output a one-dimensional array idx that holds the
index of the closest centroid to every training example.
"""
idx = []
    max_dist = 1000000 # cap on the distance considered
for i in range(len(X)):
minus = X[i] - centroids # here use numpy's broadcasting
dist = minus[:,0]**2 + minus[:,1]**2
if dist.min() < max_dist:
ci = np.argmin(dist)
idx.append(ci)
return np.array(idx)
```
Next, use the example provided in the assignment: with custom centroids [3, 3], [6, 2], [8, 5], the computed `idx[0:3]` should be [0, 2, 1].
```
mat = loadmat('data/ex7data2.mat')
# print(mat)
X = mat['X']
init_centroids = np.array([[3, 3], [6, 2], [8, 5]])
idx = findClosestCentroids(X, init_centroids)
print(idx[0:3])
```
#### 1.1.2 Computing centroid means
Once every point has been assigned to a centroid, the next step is to recompute each centroid as the mean of the positions of all points assigned to that cluster.

$C_k$ is the set of examples assigned to centroid $k$.
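Written out, the update for centroid $k$ is
$$u_k := \frac{1}{|C_k|} \sum_{i \in C_k} x_i$$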
```
def computeCentroids(X, idx):
centroids = []
for i in range(len(np.unique(idx))): # Returns the sorted unique elements of an array. means K
        u_k = X[idx==i].mean(axis=0) # column-wise mean; idx==i selects the examples assigned to centroid i
centroids.append(u_k)
return np.array(centroids)
computeCentroids(X, idx)
```
### 1.2 K-means on example dataset
```
def plotData(X, centroids, idx=None):
"""
    Visualize the data, automatically coloring each cluster.
    idx: idx vector from the last iteration, storing the centroid index assigned to each example
    centroids: history of the centroid positions over the iterations
"""
    colors = ['b','g','gold','darkorange','salmon','olivedrab',
              'maroon', 'navy', 'sienna', 'tomato', 'lightgray', 'gainsboro',
              'coral', 'aliceblue', 'dimgray', 'mintcream', 'mintcream']
assert len(centroids[0]) <= len(colors), 'colors not enough '
    subX = [] # examples grouped by cluster
if idx is not None:
for i in range(centroids[0].shape[0]):
x_i = X[idx == i]
subX.append(x_i)
else:
        subX = [X] # wrap X in a one-element list so every element is one cluster's examples, which simplifies the plotting below
    # plot each cluster's points in a different color
plt.figure(figsize=(8,5))
for i in range(len(subX)):
xx = subX[i]
plt.scatter(xx[:,0], xx[:,1], c=colors[i], label='Cluster %d'%i)
plt.legend()
plt.grid(True)
plt.xlabel('x1',fontsize=14)
plt.ylabel('x2',fontsize=14)
plt.title('Plot of X Points',fontsize=16)
    # plot the trajectory of the centroids across iterations
xx, yy = [], []
for centroid in centroids:
xx.append(centroid[:,0])
yy.append(centroid[:,1])
plt.plot(xx, yy, 'rx--', markersize=8)
plotData(X, [init_centroids])
def runKmeans(X, centroids, max_iters):
K = len(centroids)
centroids_all = []
centroids_all.append(centroids)
centroid_i = centroids
for i in range(max_iters):
idx = findClosestCentroids(X, centroid_i)
centroid_i = computeCentroids(X, idx)
centroids_all.append(centroid_i)
return idx, centroids_all
idx, centroids_all = runKmeans(X, init_centroids, 20)
plotData(X, centroids_all, idx)
```
### 1.3 Random initialization
In practice, a good strategy for initializing the centroids is to pick random examples from the training set.
```
def initCentroids(X, K):
"""随机初始化"""
m, n = X.shape
idx = np.random.choice(m, K)
centroids = X[idx]
return centroids
```
Run the random initialization three times and compare the results. You may find that the third run's result is not ideal; that is normal, it has fallen into a local optimum.
```
for i in range(3):
centroids = initCentroids(X, 3)
idx, centroids_all = runKmeans(X, centroids, 10)
plotData(X, centroids_all, idx)
```
The three random initializations above show that different initializations can give noticeably different results.
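A minimal sketch of how one could guard against such local optima, reusing the helpers defined above: repeat K-means several times and keep the run with the lowest distortion (mean squared distance of each point to its assigned centroid). The `distortion` helper is ours, added only for illustration.
```
def distortion(X, idx, centroids):
    # mean squared distance between each point and its assigned centroid
    diff = X - centroids[idx]
    return np.mean(np.sum(diff**2, axis=1))

best_cost, best_run = np.inf, None
for _ in range(10):
    idx_i, centroids_all_i = runKmeans(X, initCentroids(X, 3), 10)
    cost = distortion(X, idx_i, centroids_all_i[-1])
    if cost < best_cost:
        best_cost, best_run = cost, (idx_i, centroids_all_i)
print(best_cost)
```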
### 1.4 Image compression with K-means
In this part you will use K-means for image compression. In a straightforward 24-bit color representation of an image, each pixel is stored as three 8-bit unsigned integers (ranging from 0 to 255) that specify the red, green and blue intensity values. This encoding is commonly called RGB. Our image contains thousands of colors, and in this part of the exercise you will reduce the number of colors to 16.
This compresses the photo effectively. Specifically, you only need to store the RGB values of the 16 selected colors, and for each pixel of the image you only need to store the index of its color at that position (4 bits are enough to represent 16 possibilities).
Next we will use the K-means algorithm to select the 16 colors used for compressing the image. You will treat every pixel of the original image as a data example and use K-means to find the 16 colors that best cluster the pixels.
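As a rough back-of-the-envelope check for the 128x128 image used below (these numbers are only a sanity check, not part of the assignment):
```
original_bits = 128 * 128 * 24              # 24 bits per pixel
compressed_bits = 16 * 24 + 128 * 128 * 4   # 16-color palette + 4-bit index per pixel
print(original_bits, compressed_bits, original_bits / compressed_bits)  # compression by roughly a factor of 6
```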
#### 1.4.1 K-means on pixels
```
from skimage import io
A = io.imread('data/bird_small.png')
print(A.shape)
plt.imshow(A);
A = A/255. # Divide by 255 so that all values are in the range 0 - 1
```
The `-1` passed to `reshape` below lets NumPy infer that dimension from the array size (see https://stackoverflow.com/questions/18691084/what-does-1-mean-in-numpy-reshape).
```
# Reshape the image into an (N,3) matrix where N = number of pixels.
# Each row will contain the Red, Green and Blue pixel values
# This gives us our dataset matrix X that we will use K-Means on.
X = A.reshape(-1, 3)
K = 16
centroids = initCentroids(X, K)
idx, centroids_all = runKmeans(X, centroids, 10)
img = np.zeros(X.shape)
centroids = centroids_all[-1]
for i in range(len(centroids)):
img[idx == i] = centroids[i]
img = img.reshape((128, 128, 3))
fig, axes = plt.subplots(1, 2, figsize=(12,6))
axes[0].imshow(A)
axes[1].imshow(img)
```
## 2 Principal Component Analysis
In this part, you will use PCA to perform dimensionality reduction. You will first experiment with a 2D dataset to build intuition for how PCA works, and then apply it to a larger image dataset.
### 2.1 Example Dataset
To help you understand how PCA works, you will first start with a 2D dataset that has one direction of large variation and one of smaller variation.
In this part of the exercise, you will see what happens when you use PCA to reduce the data from 2D to 1D.
```
mat = loadmat('data/ex7data1.mat')
X = mat['X']
print(X.shape)
plt.scatter(X[:,0], X[:,1], facecolors='none', edgecolors='b')
```
### 2.2 Implementing PCA
PCA consists of two parts:
1. Compute the covariance matrix of the data.
2. Use SVD to compute the eigenvectors $(U_1, U_2, ..., U_n)$.
Before running PCA, remember to normalize the data.
Then compute the covariance matrix; if each data example is represented as a row, the formula is as follows:

Next, the principal components can be computed with SVD:

U contains the principal components: **each column** is a vector onto which the data will be projected. S is a diagonal matrix whose entries are the singular values.
```
def featureNormalize(X):
means = X.mean(axis=0)
stds = X.std(axis=0, ddof=1)
X_norm = (X - means) / stds
return X_norm, means, stds
```
Since our covariance matrix is X.T @ X and each row of X is one example, we want to compress the columns (features).
Because the SVD is applied to the covariance matrix, the input basis is effectively V' and the output basis is V; you can print their shapes to confirm.
So what we compress here are the columns of the dataset.
```
def pca(X):
sigma = (X.T @ X) / len(X)
U, S, V = np.linalg.svd(sigma)
return U, S, V
X_norm, means, stds = featureNormalize(X)
U, S, V = pca(X_norm)
print(U[:,0])
plt.figure(figsize=(7, 5))
plt.scatter(X[:,0], X[:,1], facecolors='none', edgecolors='b')
# scale the plotted directions by the singular values S, so the segment length reflects how much variance each component captures
plt.plot([means[0], means[0] + 1.5*S[0]*U[0,0]],
[means[1], means[1] + 1.5*S[0]*U[0,1]],
c='r', linewidth=3, label='First Principal Component')
plt.plot([means[0], means[0] + 1.5*S[1]*U[1,0]],
[means[1], means[1] + 1.5*S[1]*U[1,1]],
c='g', linewidth=3, label='Second Principal Component')
plt.grid()
# changes limits of x or y axis so that equal increments of x and y have the same length
plt.axis("equal")
plt.legend()
```
### 2.3 Dimensionality Reduction with PCA
#### 2.3.1 Projecting the data onto the principal components
```
def projectData(X, U, K):
Z = X @ U[:,:K]
return Z
# project the first example onto the first dimension
# and you should see a value of about 1.481
Z = projectData(X_norm, U, 1)
Z
```
#### 2.3.2 Reconstructing an approximation of the data
Reconstruct an approximation of the original data from the projection.
```
def recoverData(Z, U, K):
X_rec = Z @ U[:,:K].T
return X_rec
# you will recover an approximation of the first example and you should see a value of
# about [-1.047 -1.047].
X_rec = recoverData(Z, U, 1)
X_rec[0]
```
#### 2.3.3 Visualizing the projections
```
plt.figure(figsize=(7,5))
plt.axis("equal")
plot = plt.scatter(X_norm[:,0], X_norm[:,1], s=30, facecolors='none',
edgecolors='b',label='Original Data Points')
plot = plt.scatter(X_rec[:,0], X_rec[:,1], s=30, facecolors='none',
edgecolors='r',label='PCA Reduced Data Points')
plt.title("Example Dataset: Reduced Dimension Points Shown",fontsize=14)
plt.xlabel('x1 [Feature Normalized]',fontsize=14)
plt.ylabel('x2 [Feature Normalized]',fontsize=14)
plt.grid(True)
for x in range(X_norm.shape[0]):
plt.plot([X_norm[x,0],X_rec[x,0]],[X_norm[x,1],X_rec[x,1]],'k--')
    # the first argument holds the x-coordinates, the second the y-coordinates
plt.legend()
```
### 2.4 Face Image Dataset
In this part of the exercise, you will run PCA on face images to see how it can be used in practice for dimensionality reduction.
```
mat = loadmat('data/ex7faces.mat')
X = mat['X']
print(X.shape)
def displayData(X, row, col):
fig, axs = plt.subplots(row, col, figsize=(8,8))
for r in range(row):
for c in range(col):
axs[r][c].imshow(X[r*col + c].reshape(32,32).T, cmap = 'Greys_r')
axs[r][c].set_xticks([])
axs[r][c].set_yticks([])
displayData(X, 10, 10)
```
#### 2.4.1 PCA on Faces
```
X_norm, means, stds = featureNormalize(X)
U, S, V = pca(X_norm)
U.shape, S.shape
displayData(U[:,:36].T, 6, 6)
```
#### 2.4.2 Dimensionality Reduction
```
z = projectData(X_norm, U, K=36)
X_rec = recoverData(z, U, K=36)
displayData(X_rec, 10, 10)
```
```
import os
import pandas as pd
import numpy as np
from IPython.display import Image
from subprocess import call
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import ShuffleSplit
from sklearn.tree import export_graphviz
from sklearn.ensemble import RandomForestClassifier
from sklearn import preprocessing
from sklearn import metrics
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
dataset = pd.read_csv('dataset_final_treat.csv')
dataset.head()
columns_drop = []
if len(columns_drop) > 0:
dataset = dataset.drop(columns_drop, axis=1)
if 'stimul' in dataset.columns:
mask = dataset.stimul.duplicated()
print(dataset.stimul[~mask])
new_stimuls = {'GREEN': 0, 'WHITE': 1, 'RED': 2, 'BLUE': 3}
for index, item in dataset.iterrows():
dataset['stimul'][index] = new_stimuls[item.stimul]
if 'classify' in dataset.columns:
mask = dataset.classify.duplicated()
print(dataset.classify[~mask])
new_classify = {'Alterado': 0, 'Atermo': 1}
for index, item in dataset.iterrows():
dataset['classify'][index] = new_classify[item.classify]
def normalize_column(column_name):
columns = list(dataset.columns)
x = dataset[[column_name]]
min_max_scaler = preprocessing.MinMaxScaler()
x_scaled = min_max_scaler.fit_transform(x)
dataset_aux = pd.DataFrame({column_name: x_scaled[:, 0]})
dataset.pop(column_name)
dataset.insert(columns.index(column_name), column_name, dataset_aux)
# normalize_column('3_seconds_before')
# normalize_column('size_instantly_before_stimul')
# normalize_column('size_instantly_after_stimul')
# normalize_column('3_seconds_after')
# normalize_column('5_seconds_after')
# normalize_column('6_seconds_after')
# normalize_column('10_seconds_after')
# normalize_column('min_value1')
# normalize_column('min_value2')
# normalize_column('min_value3')
# normalize_column('max_value1')
# normalize_column('max_value2')
# normalize_column('max_value3')
print(dataset.head())
X = dataset.copy()
X.pop('classify')
y = dataset['classify']
# X = X.fillna(X.mean())
# y = y.fillna(y.mean())
X = X.fillna(0)
y = y.fillna(0)
print(X, y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
clf = RandomForestClassifier(n_estimators=100)
clf.fit(X_train,y_train)
y_pred=clf.predict(X_test)
print("Accuracy:",metrics.accuracy_score(y_test, y_pred))
aux = dataset.copy()
aux.pop('classify')
feature_imp = pd.Series(clf.feature_importances_,index=list(aux.columns)).sort_values(ascending=False)
feature_imp
sns.barplot(x=feature_imp, y=feature_imp.index)
plt.xlabel('Feature Importance Score')
plt.ylabel('Features')
plt.title("Visualizing Important Features")
plt.legend()
plt.show()
scores = cross_val_score(clf, X, y, cv=5)
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
scores = cross_val_score(clf, X, y, cv=10, scoring='f1_macro')
print("f1_macro: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
if False:
estimator = clf.estimators_[0]
aux = dataset.copy()
aux.pop('classify')
target = ['Altered', 'Aterm']
export_graphviz(estimator, out_file='tree.dot', feature_names=aux.columns, class_names=target,
rounded=True, special_characters=True, proportion=False, precision=2, filled=True)
# Convert to png using system command (requires Graphviz)
call(['dot', '-Tpng', 'tree.dot', '-o', 'tree.png', '-Gdpi=600'])
# Display in jupyter notebook
Image(filename = 'tree.png')
```
```
# Dependencies
import tweepy
import json
import numpy as np
from datetime import datetime
import pandas as pd
from config2 import consumer_key, consumer_secret, access_token, access_token_secret
# Twitter API Keys
consumer_key = consumer_key
consumer_secret = consumer_secret
access_token = access_token
access_token_secret = access_token_secret
# Setup Tweepy API Authentication
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth, parser=tweepy.parsers.JSONParser())
target_term = '@NintendoAmerica'
nin_tweets = []
date = []
tweet_ids = []
oldest_tweet = None
for x in range(1,100):
public_tweets = api.search(target_term, count=100, result_type="recent", max_id=oldest_tweet)
for tweet in public_tweets['statuses']:
tweet_id = tweet["id"]
tweet_author = tweet["user"]["screen_name"]
tweet_text = tweet["text"]
nin_tweets.append(tweet['text'])
date.append(tweet['created_at'])
tweet_ids.append(tweet['id'])
oldest_tweet = tweet_id - 1
print(len(nin_tweets))
nin_tweets2 = []
date2 = []
tweet_ids2 = []
oldest_tweet2 = tweet_ids[9885]
for x in range(1,100):
    public_tweets = api.search(target_term, count=100, result_type="recent", max_id=oldest_tweet2)  # page backwards from oldest_tweet2
for tweet in public_tweets['statuses']:
tweet_id = tweet["id"]
tweet_author = tweet["user"]["screen_name"]
tweet_text = tweet["text"]
nin_tweets2.append(tweet['text'])
date2.append(tweet['created_at'])
tweet_ids2.append(tweet['id'])
oldest_tweet2 = tweet_id - 1
print(len(nin_tweets2))
# Import and Initialize Sentiment Analyzer
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
analyzer = SentimentIntensityAnalyzer()
compound_list = []
positive_list = []
negative_list = []
neutral_list = []
for tweet in nin_tweets:
# Run Vader Analysis on each tweet
results = analyzer.polarity_scores(tweet)
compound = results["compound"]
pos = results["pos"]
neu = results["neu"]
neg = results["neg"]
# Add each value to the appropriate list
compound_list.append(compound)
positive_list.append(pos)
negative_list.append(neg)
neutral_list.append(neu)
compound_list2 = []
positive_list2 = []
negative_list2 = []
neutral_list2 = []
for tweet in nin_tweets2:
# Run Vader Analysis on each tweet
results = analyzer.polarity_scores(tweet)
compound = results["compound"]
pos = results["pos"]
neu = results["neu"]
neg = results["neg"]
# Add each value to the appropriate list
compound_list2.append(compound)
positive_list2.append(pos)
negative_list2.append(neg)
neutral_list2.append(neu)
june_18_N_1 = {
'Text': nin_tweets,
'Compounded': compound_list,
'Negative': negative_list,
'Positive': positive_list,
'Neutral': neutral_list,
'Date': date
}
june_18_N_2 = {
'Text': nin_tweets2,
'Compounded': compound_list2,
'Negative': negative_list2,
'Positive': positive_list2,
'Neutral': neutral_list2,
'Date': date2
}
june_18_nin_df = pd.DataFrame(june_18_N_1)
june_18_nin_2df = pd.DataFrame(june_18_N_2)
# june_17_df.head()
# date[9831]
converted_timestamps = []
for raw in date:
# https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior
# http://strftime.org/
converted_time = datetime.strptime(raw, "%a %b %d %H:%M:%S %z %Y")
converted_timestamps.append(converted_time)
converted_timestamps2 = []
for raw in date2:
# https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior
# http://strftime.org/
converted_time = datetime.strptime(raw, "%a %b %d %H:%M:%S %z %Y")
converted_timestamps2.append(converted_time)
hour = []
for x in range(len(converted_timestamps)):
hours = converted_timestamps[x].hour
hour.append(hours)
hour2 = []
for x in range(len(converted_timestamps2)):
hours = converted_timestamps2[x].hour
hour2.append(hours)
june_18_nin_df['Hour'] = hour
june_18_nin_2df['Hour'] = hour2
# june_17_df.head()
june_18_nin_df.to_csv('june_18_N_1.csv')
june_18_nin_2df.to_csv('june_18_N_2.csv')
june_18_mean_nin_df = june_18_nin_df.groupby('Hour').mean()
june_18_mean_nin_2df = june_18_nin_2df.groupby('Hour').mean()
june_18_mean_nin_df.to_csv('june_18_mean_N_1.csv')
june_18_mean_nin_2df.to_csv('june_18_mean_N_2.csv')
len(june_18_nin_df['Date'])
len(june_18_nin_2df['Date'])
```
```
!pip install git+https://github.com/LIAAD/yake
!pip install Rouge
!python -m pip install --upgrade pip
s = ''.join(list(str(np.random.randint(-1000,1000,100))))
s.replace('\n' ,"")
import numpy as np
import pandas as pd
import pdb
import string
import os
import re
from nltk.tokenize import word_tokenize
from nltk.stem.isri import ISRIStemmer
dubai_dir = r'data\EASC-UTF-8\Articles\Topic147\tourisms (8).txt'
dubai = open(dubai_dir, encoding="utf-8").read()
import document
import preprocess
import evaluate
pr = preprocess.Preprocess()
original_text = dubai
preprocessed_text = pr.get_clean_article(dubai)
sentences = pr.get_article_sentences(preprocessed_text)
preprocessed_sentences = sentences  # alias used below; these sentences come from the preprocessed text
original_sentences = pr.get_article_sentences(dubai)
paragraphs = pr.get_cleaned_article_paragraphes(preprocessed_text)
para_sent_list = pr.get_para_sentences(paragraphs)
tokenized_word_sentences = pr.get_tokenized_word_sentences(sentences)
print(original_text,"\n")
print(preprocessed_text,"\n")
print(sentences,"\n")
print(paragraphs,"\n")
print(para_sent_list,"\n")
print(len(paragraphs),"\n")
print(preprocessed_sentences,"\n")
print(tokenized_word_sentences,"\n")
doc = document.Doc(
original_text = original_text , original_sentences = original_sentences ,
preprocessed_text = preprocessed_text.replace('ppp',""),
sentences = sentences,
paragraphs = paragraphs ,para_sent_list = para_sent_list ,tokenized_word_sentences = tokenized_word_sentences)
doc.para_sent_list
```
## Keyphrase Feature
```
sent1 = preprocessed_sentences[0]
sent1
sent4 = preprocessed_sentences[4]
sent4
doc.key_phrases = doc.get_doc_key_phrase(preprocessed_text)
doc.key_phrases
doc.key_phrase_frequency = doc.get_key_phrase_frequency(sent1)
doc.key_phrase_frequency
doc.get_key_phrase_proper_name()
doc.get_key_phrase_length()
doc.get_topic_idf(sentences)
doc.get_key_phrase_score(sent1)
```
## Sentence Location Feature
```
len(para_sent_list)
para_sent_list
for paragrpah_index,list_para in enumerate(para_sent_list) : print (list_para)
list_para[0]
doc.sentence_location_score(sent1)
doc.sentence_location_score(sent4)
l = [[1,0,0], [0,4,0], [0,0,1], [3,0,0]]
l = [[0 if x == 1 else x for x in sub_l] for sub_l in l]
l
doc.tf_idf,doc.centroid_vector = doc.get_tfidf_centroid_vector(sentences)
doc.tf_idf.shape  # 17 sentences, 18 words (assuming get_tfidf_centroid_vector returns the tf-idf matrix)
doc.centroid_vector.shape
for i in range(len(doc.tf_idf)) :
print(doc.cosine_similarity_V1(doc.tf_idf[i],doc.centroid_vector))
from scipy import spatial
a = [3, 45, 7, 2]
b = [2, 54, 13, 15]
result = 1 - spatial.distance.cosine(a, b)
result
from numpy import dot
from numpy.linalg import norm
cos_sim = dot(a, b)/(norm(a)*norm(b))
cos_sim
from sklearn.feature_extraction.text import TfidfVectorizer
corpus = [
'This is the first document.',
'This document is the second document.',
'And this is the third one.',
'Is this the first document?',
]
vectorizer = TfidfVectorizer()
X = vectorizer.fit(corpus)
print(vectorizer.get_feature_names())
# output: ['and', 'document', 'first', 'is', 'one', 'second', 'the', 'third', 'this']
#print(X.shape)
X.transform(['This document is the second documen']).toarray()
sentences
(0.4 * 17)/100
vec_sentence = doc.tf_idf.transform([sent1.strip()]).toarray()[0]
vec_sentence = np.squeeze(vec_sentence)
vec_sentence.shape
doc.cosine_similarity_V1(vec_sentence ,doc.centroid_vector)
len(sentences)
org_sentences = pr.get_article_sentences(dubai)
len(org_sentences)
```
## Centrality Feature
```
from sklearn.metrics.pairwise import cosine_similarity
vecs = doc.tf_idf.transform(sentences)
vecs.shape
cosine_similarity(vecs,vecs).shape
cos = cosine_similarity(vecs,vecs)
vec_sentence = doc.tf_idf.transform([sent1.strip()]).toarray()[0]
vec_sentence.shape
cosine_similarity(vec_sentence.reshape(1,-1),vecs).shape
cos_1 = cosine_similarity(vec_sentence.reshape(1,-1),vecs)
cos_1
cos[0]
cos[1]
np.where(cos[1] > 0.1)
len(np.where(cos[1] > 0.1)[0])
np.equal(cos[0],cos_1[0])
features = [doc.get_key_phrase_score ,doc.sentence_location_score,doc.get_centroid_score,
doc.get_centrality_score ,doc.sentence_length_score ,doc.cue_phrases_score ,
doc.strong_words_score]
def score(sentences) :
lst = []
ordered_list = []
max_legnth_summary = 5
summary = []
sentence_scores = []
for index,sentence in enumerate(sentences) :
total_score = 0
for feature in features :
score = feature(sentence)
total_score += score
sentence_scores.append((index,total_score))
ordered_list = sorted(sentence_scores,key = lambda x : x[1] ,reverse = True)
summary = ordered_list[:max_legnth_summary]
#pdb.set_trace()
last_summary = sorted(summary,key = lambda x : x[0])
sum_list = [original_sentences[x] for (x,y) in last_summary]
text_list = ".".join(sum_list)
return text_list
score(sentences)
```
# The Standard Library
## Data Structures
We already saw that Python provides several standard data structures, such as **list**, **tuple**, **dict** and **set**, as part of its built-in types.
The standard library provides powerful and optimized versions of such data structures.
### collections : Container Data Types
Importing the **collections** module can be done using:
```python
import collections
```
#### Counter
A **Counter** is a collection that tracks how many times equivalent values were added.
```python
>>> print(collections.Counter(['a', 'b', 'c', 'a', 'b', 'b']))
Counter({'b': 3, 'a': 2, 'c': 1})
>>> print(collections.Counter({'a': 2, 'b': 3, 'c': 1}))
Counter({'b': 3, 'a': 2, 'c': 1})
>>> print(collections.Counter(a=2, b=3, c=1))
Counter({'b': 3, 'a': 2, 'c': 1})
```
> An empty **Counter** can be constructed using:
```python
c = collections.Counter()
```
A **Counter** can be updated using:
```python
>>> print('Initial :', c)
Initial : Counter()
>>> c.update('abcdaab')
>>> print('Sequence:', c)
Sequence: Counter({'a': 3, 'b': 2, 'c': 1, 'd': 1})
>>> c.update({'a': 1, 'd': 5})
>>> print('Dict :', c)
Dict : Counter({'d': 6, 'a': 4, 'b': 2, 'c': 1})
```
You can access the counts, once a **Counter** is populated:
```python
>>> c = collections.Counter('abcdaab')
>>> for letter in 'abcde':
...     print('{} : {}'.format(letter, c[letter]))
a : 3
b : 2
c : 1
d : 1
e : 0
```
You can also get an iterator that produces all items known to the **Counter**, using the **elements()** method:
```python
>>> c = collections.Counter('extremely')
>>> c['z'] = 0
>>> print(c)
Counter({'e': 3, 'm': 1, 'l': 1, 'r': 1, 't': 1, 'y': 1, 'x': 1, 'z': 0})
>>> print(list(c.elements()))
['e', 'e', 'e', 'm', 'l', 'r', 't', 'y', 'x']
```
**Counter** instances support arithmetic and set operations for aggregating results.
```python
>>> c1 = collections.Counter(['a', 'b', 'c', 'a', 'b', 'b'])
>>> c2 = collections.Counter('alphabet')
>>> print('C1:', c1)
C1: Counter({'b': 3, 'a': 2, 'c': 1})
>>> print('C2:', c2)
C2: Counter({'a': 2, 'b': 1, 'e': 1, 'h': 1, 'l': 1, 'p': 1, 't': 1})
>>> print('\nCombined counts:')
>>> print(c1 + c2)
Combined counts:
Counter({'a': 4, 'b': 4, 'c': 1, 'e': 1, 'h': 1, 'l': 1, 'p': 1, 't': 1})
>>> print('\nSubtraction:')
>>> print(c1 - c2)
Subtraction:
Counter({'b': 2, 'c': 1})
>>> print('\nIntersection (taking positive minimums):')
>>> print(c1 & c2)
Intersection (taking positive minimums):
Counter({'a': 2, 'b': 1})
>>> print('\nUnion (taking maximums):')
>>> print(c1 | c2)
Union (taking maximums):
Counter({'b': 3, 'a': 2, 'c': 1, 'e': 1, 'h': 1, 'l': 1, 'p': 1, 't': 1})
```
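Not shown above, but often used together with these operations, is the **most_common()** method, which returns the n most frequently seen values; a quick example:
```python
>>> c = collections.Counter('abcdaab')
>>> print(c.most_common(2))
[('a', 3), ('b', 2)]
```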
#### defaultdict
The **defaultdict** lets the user specify the default value when the container is initialized, as in the following example:
```python
def default_factory():
return 'default value'
d = collections.defaultdict(default_factory, foo='bar')
print('d:', d)
print('foo =>', d['foo'])
print('bar =>', d['bar'])
```
which produces:
```shell
d: defaultdict(<function default_factory at 0x100d9ba28>, {'foo': 'bar'})
foo => bar
bar => default value
```
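A common pattern, shown here only as an extra illustration, is to pass one of the built-in types as the factory, for example **list** to group values by key:
```python
d = collections.defaultdict(list)
for key, value in [('a', 1), ('b', 2), ('a', 3)]:
    d[key].append(value)
print(d)
```
which produces:
```shell
defaultdict(<class 'list'>, {'a': [1, 3], 'b': [2]})
```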
#### deque
This is a double-ended queue. It supports adding and removing elements from either end.
```python
d = collections.deque('abcdefg')
print('Deque:', d)
print('Length:', len(d))
print('Left end:', d[0])
print('Right end:', d[-1])
d.remove('c')
print('remove(c):', d)
```
The result is
```shell
Deque: deque(['a', 'b', 'c', 'd', 'e', 'f', 'g'])
Length: 7
Left end: a
Right end: g
remove(c): deque(['a', 'b', 'd', 'e', 'f', 'g'])
```
Populating a **deque** can be done on the left or right:
```python
# Add to the right
d1 = collections.deque()
d1.extend('abcdefg')
print('extend :', d1)
d1.append('h')
print('append :', d1)
# Add to the left
d2 = collections.deque()
d2.extendleft(range(6))
print('extendleft:', d2)
d2.appendleft(6)
print('appendleft:', d2)
```
```shell
extend : deque(['a', 'b', 'c', 'd', 'e', 'f', 'g'])
append : deque(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'])
extendleft: deque([5, 4, 3, 2, 1, 0])
appendleft: deque([6, 5, 4, 3, 2, 1, 0])
```
Similarly, the elements can be consumed from either end, much like using **pop** on a **list**:
```python
print('From the right:')
d = collections.deque('abcdefg')
while True:
try:
print(d.pop(), end='')
except IndexError:
break
print()
print('\nFrom the left:')
d = collections.deque(range(6))
while True:
try:
print(d.popleft(), end='')
except IndexError:
break
print()
```
which produces
```shell
From the right:
gfedcba
From the left:
012345
```
Another interesting method is to rotate in either direction:
```python
d = collections.deque(range(10))
print('Normal :', d)
d = collections.deque(range(10))
d.rotate(2)
print('Right rotation:', d)
d = collections.deque(range(10))
d.rotate(-2)
print('Left rotation :', d)
```
which produces
```shell
Normal : deque([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
Right rotation: deque([8, 9, 0, 1, 2, 3, 4, 5, 6, 7])
Left rotation : deque([2, 3, 4, 5, 6, 7, 8, 9, 0, 1])
```
#### namedtuple
In opposition to the built-in **tuple** type, **namedtuple** allows you to access its members using names:
```python
Person = collections.namedtuple('Person', 'name age')
bob = Person(name='Bob', age=30)
print('\nRepresentation:', bob)
jane = Person(name='Jane', age=29)
print('\nField by name:', jane.name)
print('\nFields by index:')
for p in [bob, jane]:
print('{} is {} years old'.format(*p))
```
which produces
```shell
Representation: Person(name='Bob', age=30)
Field by name: Jane
Fields by index:
Bob is 30 years old
Jane is 29 years old
```
> Field names are invalid if they are repeated or conflict with Python keywords.
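For instance (a small illustration; the exact error messages can vary between Python versions):
```python
try:
    collections.namedtuple('Person', 'name class age')
except ValueError as err:
    print(err)   # 'class' is a keyword and cannot be used as a field name

try:
    collections.namedtuple('Person', 'name age name')
except ValueError as err:
    print(err)   # duplicate field name 'name'
```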
#### OrderedDict
An **OrderedDict** is a dictionary subclass that remembers the order in which its contents were added.
> Before Python 3.7, the built-in **dict** did not track the insertion order.
```python
print('Regular dictionary:')
d = {}
d['a'] = 'A'
d['b'] = 'B'
d['c'] = 'C'
for k, v in d.items():
print(k, v)
print('\nOrderedDict:')
d = collections.OrderedDict()
d['a'] = 'A'
d['b'] = 'B'
d['c'] = 'C'
for k, v in d.items():
print(k, v)
```
> Notice that in order to iterate over the dictionary, we use the same syntax as for **dict**
```python
for k, v in d.items():
```
Because the insertion order is taken into account, comparing two **OrderedDict** objects is a little bit subtle:
```python
print('dict :', end=' ')
d1 = {}
d1['a'] = 'A'
d1['b'] = 'B'
d1['c'] = 'C'
d2 = {}
d2['c'] = 'C'
d2['b'] = 'B'
d2['a'] = 'A'
print(d1 == d2)
print('OrderedDict:', end=' ')
d1 = collections.OrderedDict()
d1['a'] = 'A'
d1['b'] = 'B'
d1['c'] = 'C'
d2 = collections.OrderedDict()
d2['c'] = 'C'
d2['b'] = 'B'
d2['a'] = 'A'
print(d1 == d2)
```
which produces:
```shell
dict : True
OrderedDict: False
```
### array : Sequence of Fixed-Type Data
### heapq : Heap Sort Algorithm
### bisect : Maintaint Lists in Sorted Order
### queue : Thread-Safe FIFO Implementation
### struct : Binary Data Structures
### weakref : Impermanent References to Objects
### copy : Duplicate Objects
### pprint : Pretty-Print Data Structures
## Text
### string : Text Constants and Templates
### textwrap : Formatting Text Paragraphs
### re : Regular Expressions
### difflib : Compare Sequences
## Dates and Times
### time : Clock Time
### datetime : Date and Time Value Manipulation
### calendar : Work with Dates
## Mathematics
### decimal : Fixed and Floating-Point Math
### fractions : Rational Numbers
### random : Pseudorandom Number Generators
### math : Mathematical Functions
## Algorithms
### functools : Tools for Manipulating Functions
### itertools : Iterator Functions
### operator : Functional interface to Built-in Operators
### contextlib : Context manager Utilities
## The File System
### os.path : Platform-Independent manipulation of Filenames
### glob : Filename Pattern Matching
### linecache : Read Text Files Efficiently
### tempfile : Temporary File System Objects
### shutil : High-Level File Operations
### mmap : Memory-Map Files
### codecs : String Encoding and Decoding
### StringIO : Text Buffers with a File-like API
### fnmatch : UNIX-Style Directory Listings
### dircache : Cache Directory Listings
### filecmp : Compare Files
```
# css style
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
```
<div style="text-align: center; line-height: 0; padding-top: 9px;">
<img src="https://databricks.com/wp-content/uploads/2018/03/db-academy-rgb-1200px.png" alt="Databricks Learning" style="width: 600px">
</div>
# Hands-on with Databricks
**Objective**: *Familiarize yourself with the Databricks platform, the use of notebooks, and basic SQL operations in Databricks.*
In this lab, you will complete a series of exercises to familiarize yourself with the content covered in Lesson 0.1.
## Exercise 1
In order to execute code with Databricks, you need to have your notebook attached to an active cluster.
Ensure that:
1. You have created a cluster following the walkthrough of the video in this lesson.
2. Your cluster's Databricks Runtime Version is 7.2 ML.
3. Your cluster is active and running.
4. This notebook is attached to your cluster.
## Exercise 2
The fundamental piece of a Databricks notebook is the command cell. We use command cells to write and run our code.
Complete the following:
1. Insert a command cell beneath this one.
2. Write `1 + 1` in the command cell.
3. Run the command cell.
4. Verify that the output of the executed code is `2`.
```
1 + 1
```
## Exercise 3
Command cells can also be used to add comments using a lightweight markup language named *markdown*. (That's how these command cells are written).
Complete the following:
1. Double-click on this command cell.
2. Notice the *magic command* at the top of the command cell that enables the use of markdown.
3. Insert a command cell beneath this one and add the magic command to the first line.
4. Write `THE MAGIC COMMAND FOR MARKDOWN IS _____` with the magic command filling the blank.
`THE MAGIC COMMAND FOR MARKDOWN IS %md`
## Exercise 4
Throughout this course, we will be using a setup file in each of our notebooks that connects Databricks to our data.
Complete the following:
1. Run the below command cell to execute the setup file.
2. Insert a SQL command cell beneath the command cell containing the setup file.
3. Query all of the data in the table **`dsfda.ht_daily_metrics`** using the query `SELECT * FROM dsfda.ht_daily_metrics`.
4. Examine the displayed table to learn about its columns and rows.
```
%run "../../Includes/Classroom-Setup"
%sql
SELECT * FROM dsfda.ht_daily_metrics
```
## Exercise 5
Throughout this course, we will need to manipulate data and save it as new tables using Delta, just as we did in the video during the lesson.
Complete the following:
1. Insert a new SQL command cell beneath this one.
2. Write a SQL query to return rows from the **dsfda.ht_users** table where the individual's lifestyle is `"Sedentary"`.
3. Use the SQL query to create a new Delta table named **dsfda.ht_users_sedentary** and store the data in the following location: `"/dsfda/ht-users-sedentary"`.
```
%sql
CREATE OR REPLACE TABLE dsfda.ht_users_sedentary
USING DELTA LOCATION "/dsfda/ht-users-sedentary"
AS (
SELECT *
FROM dsfda.ht_users
WHERE lifestyle = 'Sedentary'
)
%sql
SELECT * FROM dsfda.ht_users_sedentary
```
Great job! You've completed the first lesson of the Data Science Fundamentals with Databricks course.
Please proceed to the next lesson to begin Module 2: An Introduction to Data Science.
© 2021 Databricks, Inc. All rights reserved.<br/>
Apache, Apache Spark, Spark and the Spark logo are trademarks of the <a href="http://www.apache.org/">Apache Software Foundation</a>.<br/>
<br/>
<a href="https://databricks.com/privacy-policy">Privacy Policy</a> | <a href="https://databricks.com/terms-of-use">Terms of Use</a> | <a href="http://help.databricks.com/">Support</a>
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import StandardScaler
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")
from sklearn.mixture import GaussianMixture
from sklearn.cluster import AgglomerativeClustering
def loadData(data):
df = pd.read_csv(data)
# print(df.shape[1])
x = df.iloc[:,:-1]
y = df.iloc[:,-1]
return x,y
def cat2num(y):
for i in range(len(y)):
if y[i] == 'dos':
y[i] = 0
elif y[i] == 'normal':
y[i] = 1
elif y[i] == 'probe':
y[i] = 2
elif y[i] == 'r2l':
y[i] = 3
else:
y[i] = 4
return y
def PCA(x):
print('PCA Started !')
print('')
mu = np.mean(x, axis=0)
cov = ( ((x - mu).T).dot(x - mu) ) / (x.shape[0]-1)
# print('Covariance matrix \n%s' %cov)
eigenVal, eigenVec = np.linalg.eig(cov)
# print('Eigenvectors \n%s' %eigenVec)
# print('\nEigenvalues \n%s' %eigenVal)
eList = []
for i in range(len(eigenVal)):
eList.append((np.abs(eigenVal[i]), eigenVec[:,i]))
# print(eList)
eList.sort(key=lambda x:x[0])
eList.reverse()
# print('Eigenvalues in descending order:')
# for i in eList:
# print(i[0])
eSum = sum(eigenVal)
eVar = []
for i in sorted(eigenVal, reverse=True):
eVar.append((i / eSum)*100)
eVar = np.abs(np.cumsum(eVar))
# print(eVar)
# Calculating the index of first eigen value, upto which error is <10%
index = next(x[0] for x in enumerate(eVar) if x[1] > 90)
print('Number of eigen values selected to maintain threshold at 10% is:',index+1)
print('')
w = eList[0][1].reshape(len(eigenVec),1)
for i in range(1,index+1):
w = np.hstack((w, eList[i][1].reshape(len(eigenVec),1))) # Concatenating eigenvectors column-wise to form the W matrix
# print('Matrix W:\n', w)
# print(w.shape)
x_reduced = x.dot(w)
print('PCA Reduced Data')
print('')
print(x_reduced)
print('')
print('PCA Completed !')
return x_reduced
def cal_purity(labels,y):
cnf_matrix = np.zeros((5,5))
for i in range(len(y)):
cnf_matrix[int(labels[i]),y[i]] +=1
num = 0
for i in range(5):
num += np.max(cnf_matrix[i])
return (num/len(y))
if __name__ == '__main__':
data = '../Dataset/intrusion_detection/data.csv'
x,y = loadData(data)
y = cat2num(y)
x = StandardScaler().fit_transform(x)
x_reduced = PCA(x)
#GMM
print('GMM Started !!!')
print('')
gmm = GaussianMixture(n_components=5).fit(x_reduced)
labels = gmm.predict(x_reduced)
plt.scatter(x_reduced[:, 0], x_reduced[:, 1], c=labels, s=40, cmap='viridis');
print('GMM Completed !!!')
print('')
purity_gmm = cal_purity(labels,y)
print('')
print('Purity while reducing data as per threshold: ', purity_gmm)
# Hierarchical clustering
clustering = AgglomerativeClustering(n_clusters=5, affinity='euclidean', linkage='single')
clustering.fit_predict(x_reduced)
labels = clustering.labels_
plt.scatter(x_reduced[:, 0], x_reduced[:, 1], c=labels, s=40, cmap='viridis');
print('Hierarchical clustering Completed !!!')
print('')
```

In this notebook, we'll examine computing ciliary beat frequency (CBF) from a couple example videos using the core techniques from the [2015 Quinn *et al* paper in *Science Translational Medicine*](http://dx.doi.org/10.1126/scitranslmed.aaa1233).
CBF is a quantity that clinicians and researchers have used for some time as an objective measure of ciliary motion. It is precisely what it sounds like: the frequency at which cilia beat. This can be easily done in a GUI-viewer like ImageJ (now Fiji) by clicking on a single pixel of the video and asking for the frequency, but in Python this requires some additional work.
With any spectral analysis of a time series, we'll be presented with a range of frequencies present at any given location. In our paper, we limited the scope of these frequencies to only the *dominant* frequency that was present *at each pixel*. In essence, we compute the frequency spectra at each pixel of a video of cilia, then strip out all the frequencies at each pixel except for the one with the greatest power.
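As a rough sketch of that idea (an illustration only, not the actual `stm` implementation), the dominant frequency at every pixel of a grayscale video stored as a `(frames, rows, cols)` NumPy array could be computed along these lines:
```
import numpy as np

def dominant_frequency_map(video, fps):
    # video: (frames, rows, cols) grayscale array; fps: sampling rate in frames per second
    spectra = np.abs(np.fft.rfft(video, axis=0))      # frequency spectrum at every pixel
    freqs = np.fft.rfftfreq(video.shape[0], d=1.0 / fps)
    spectra[0] = 0                                    # drop the DC component
    return freqs[np.argmax(spectra, axis=0)]          # strongest remaining frequency per pixel
```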
There are three main ways in which we computed CBF. Each of these is implemented in `stm.py`.
#### 0: Preliminaries
Here are some basic imports we'll need for the rest of the notebook.
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import scipy.signal as signal
import stm # Our package.
# Our two example videos.
v_norm = np.load("../data/normal.npy")
v_dysk = np.load("../data/dyskinetic.npy")
# We'll plot the first frame of these two videos to give a sense of them.
plt.figure()
plt.subplot(1, 2, 1)
plt.imshow(v_norm[0], cmap = "gray")
plt.subplot(1, 2, 2)
plt.imshow(v_dysk[0], cmap = "gray")
```
#### 1: "Raw" FFT-based CBF
The title is something of a misnomer: the computed CBF is not "raw" in any sense, and all our CBF computations use the FFT in some regard. This technique, however, is the only one that *explicitly* uses the FFT. It's also the most basic technique, as it doesn't involve any shifting or windowing of the original signal. As a result, it's very fast, but can produce a lot of noise.
Here's what it looks like.
```
h1_norm = stm.cbf(v_norm, method = "fft")
h1_dysk = stm.cbf(v_dysk, method = "fft")
plt.figure()
plt.subplot(1, 2, 1)
plt.title("Normal")
plt.imshow(h1_norm, cmap = "Blues")
plt.colorbar()
plt.subplot(1, 2, 2)
plt.title("Dyskinetic")
plt.imshow(h1_dysk, cmap = "Reds")
plt.colorbar()
```
This is a pretty noisy estimation but still gives a good idea of where certain frequencies are present. Note that in some locations around the cilia in both cases, there is saturation of the signal: large pixel areas that are indicating maximal CBF. These are likely noise as well.
A common post-processing step we would perform is a median filter to dampen spurious signals. The only drawback of this approach is that it assumes a very small amount of noise relative to signal; the reality is likely that there is more noise than this approach implicitly assumes. Nonetheless it is still worthwhile:
```
h1_norm_filt = signal.medfilt2d(h1_norm, 5) # Kernel size of 5x5.
h1_dysk_filt = signal.medfilt2d(h1_dysk, 5)
plt.figure()
plt.subplot(1, 2, 1)
plt.title("Normal")
plt.imshow(h1_norm_filt, cmap = "Blues")
plt.colorbar()
plt.subplot(1, 2, 2)
plt.title("Dyskinetic")
plt.imshow(h1_dysk_filt, cmap = "Reds")
plt.colorbar()
```
It was also useful to look at histograms of the frequencies that are present, discarding the spatial representation in favor of a distribution of frequencies.
```
plt.figure()
plt.subplot(2, 2, 1)
plt.title("Normal")
_ = plt.hist(h1_norm.flatten(), bins = 20)
plt.subplot(2, 2, 2)
plt.title("Dyskinetic")
_ = plt.hist(h1_dysk.flatten(), bins = 20)
plt.subplot(2, 2, 3)
plt.title("Normal (Median Filtered)")
_ = plt.hist(h1_norm_filt.flatten(), bins = 20)
plt.subplot(2, 2, 4)
plt.title("Dyskinetic (Median Filtered)")
_ = plt.hist(h1_dysk_filt.flatten(), bins = 20)
```
#### 2: Periodogram
A periodogram is an estimate of the power spectral density (PSD, hence the name) of the signal, and is a step up from pixel-based FFT...but only 1 step. It performs a lot of the same steps as in the first method under-the-hood, and thus the code in the attached module is considerably shorter.
In theory, this method is a bit more robust to noise.
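For a single pixel's time series this boils down to a call like `scipy.signal.periodogram`; here is a hedged sketch using the imports from the preliminaries (the pixel indices and frame rate below are placeholders, not values from the paper):
```
pixel_ts = v_norm[:, 100, 100].astype(float)          # one (row, col) pixel's time series; indices are illustrative
freqs, psd = signal.periodogram(pixel_ts, fs=200.0)   # fs is an assumed frame rate
dominant = freqs[np.argmax(psd[1:]) + 1]              # skip the DC bin, keep the strongest frequency
print(dominant)
```
The notebook's actual computation goes through `stm.cbf` below.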
```
h2_norm = stm.cbf(v_norm, method = "psd")
h2_dysk = stm.cbf(v_dysk, method = "psd")
plt.figure()
plt.subplot(1, 2, 1)
plt.title("Normal")
plt.imshow(h2_norm, cmap = "Blues")
plt.colorbar()
plt.subplot(1, 2, 2)
plt.title("Dyskinetic")
plt.imshow(h2_dysk, cmap = "Reds")
plt.colorbar()
```
There are some minute differences from the first method, but not much.
```
plt.figure()
plt.subplot(2, 2, 1)
plt.title("Normal (Method 1)")
plt.imshow(h1_norm, cmap = "Blues")
plt.colorbar()
plt.subplot(2, 2, 2)
plt.title("Dyskinetic (Method 1)")
plt.imshow(h1_dysk, cmap = "Reds")
plt.colorbar()
plt.figure()
plt.subplot(2, 2, 3)
plt.title("Normal (Method 2)")
plt.imshow(h2_norm, cmap = "Blues")
plt.colorbar()
plt.subplot(2, 2, 4)
plt.title("Dyskinetic (Method 2)")
plt.imshow(h2_dysk, cmap = "Reds")
plt.colorbar()
```
We can do our post-processing.
```
h2_norm_filt = signal.medfilt2d(h2_norm, 5) # Kernel size of 5x5.
h2_dysk_filt = signal.medfilt2d(h2_dysk, 5)
plt.figure()
plt.subplot(1, 2, 1)
plt.title("Normal")
plt.imshow(h2_norm_filt, cmap = "Blues")
plt.colorbar()
plt.subplot(1, 2, 2)
plt.title("Dyskinetic")
plt.imshow(h2_dysk_filt, cmap = "Reds")
plt.colorbar()
plt.figure()
plt.subplot(2, 2, 1)
plt.title("Normal")
_ = plt.hist(h2_norm.flatten(), bins = 20)
plt.subplot(2, 2, 2)
plt.title("Dyskinetic")
_ = plt.hist(h2_dysk.flatten(), bins = 20)
plt.subplot(2, 2, 3)
plt.title("Normal (Median Filtered)")
_ = plt.hist(h2_norm_filt.flatten(), bins = 20)
plt.subplot(2, 2, 4)
plt.title("Dyskinetic (Median Filtered)")
_ = plt.hist(h2_dysk_filt.flatten(), bins = 20)
```
#### 3: Welch Periodogram
Think of Welch's algorithm as a post-processing of the periodogram: it performs window-based smoothing on the resulting frequency spectra, dampening noise at the expense of frequency resolution. Given the propensity of frequency-based noise to appear in the resulting spectra, this trade-off is often preferred.
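For a single pixel the Welch variant is essentially the same kind of call, with a segment length that controls the smoothing/resolution trade-off; a hedged sketch (again with illustrative indices and an assumed frame rate):
```
pixel_ts = v_dysk[:, 100, 100].astype(float)          # one pixel's time series; indices are illustrative
freqs, psd = signal.welch(pixel_ts, fs=200.0,         # fs is an assumed frame rate
                          nperseg=64)                 # shorter segments -> smoother but coarser spectrum
dominant = freqs[np.argmax(psd[1:]) + 1]
print(dominant)
```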
```
h3_norm = stm.cbf(v_norm, method = "welch")
h3_dysk = stm.cbf(v_dysk, method = "welch")
plt.figure()
plt.subplot(1, 2, 1)
plt.title("Normal")
plt.imshow(h3_norm, cmap = "Blues")
plt.colorbar()
plt.subplot(1, 2, 2)
plt.title("Dyskinetic")
plt.imshow(h3_dysk, cmap = "Reds")
plt.colorbar()
h3_norm_filt = signal.medfilt2d(h3_norm, 5) # Kernel size of 5x5.
h3_dysk_filt = signal.medfilt2d(h3_dysk, 5)
plt.figure()
plt.subplot(1, 2, 1)
plt.title("Normal")
plt.imshow(h3_norm_filt, cmap = "Blues")
plt.colorbar()
plt.subplot(1, 2, 2)
plt.title("Dyskinetic")
plt.imshow(h3_dysk_filt, cmap = "Reds")
plt.colorbar()
```
Strangely, the dyskinetic video seems to see a considerable increase in frequencies across the board once the median filter is applied. We'll look at the histogram for a better view.
```
plt.figure()
plt.subplot(2, 2, 1)
plt.title("Normal")
_ = plt.hist(h3_norm.flatten(), bins = 20)
plt.subplot(2, 2, 2)
plt.title("Dyskinetic")
_ = plt.hist(h3_dysk.flatten(), bins = 20)
plt.subplot(2, 2, 3)
plt.title("Normal (Median Filtered)")
_ = plt.hist(h3_norm_filt.flatten(), bins = 20)
plt.subplot(2, 2, 4)
plt.title("Dyskinetic (Median Filtered)")
_ = plt.hist(h3_dysk_filt.flatten(), bins = 20)
```
This is interesting--there must be something about the spatial arrangement of dominant frequencies in the dyskinetic video (from Welch's method only) that results in a huge shift in the frequencies that are present.
Or it just might be a bug somewhere.
# Project using Kaggle data
* what Kaggle is and how to download data --> https://www.youtube.com/watch?v=NhHTWGIglRI
* look at notebooks --> https://www.kaggle.com/alexisbcook/titanic-tutorial
* Repo at --> https://github.com/gonzalezgouveia/proyecto-titanic/
* Explanatory video for this code on YouTube --> https://www.youtube.com/watch?v=VkU-9Us6Rpw
### Steps in this study
1. Data loading
1. Exploration
1. Preprocessing
1. Models
1. Evaluation
1. Prediction
1. Conclusion and next steps
# Titanic data analysis
## 1. Loading the data
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# read the data in Python
train = pd.read_csv('./../data/train.csv')
test = pd.read_csv('./../data/test.csv')
train.head()
```
## 2. Exploring the data
```
# what columns does the data have?
train.columns
# what is the shape of the data?
train.shape
# are there null values in the data?
train.info()
# how are the numeric variables distributed?
train.describe()
# how do the categorical variables behave?
train.describe(include=['O'])
```
## 2.1 EDA: Studying the target variable
```
train.groupby(['Survived']).count()['PassengerId']
# target vs sex
train.groupby(['Survived','Sex']).count()['PassengerId']
grouped_sex = train.groupby(['Survived','Sex']).count()['PassengerId']
print(grouped_sex)
(grouped_sex.unstack(level=0).plot.bar())
plt.show()
# exercise: do the same for other variables
# embarked vs pclass
print(train.groupby(['Pclass', 'Embarked'])
.count()['PassengerId']
.unstack(level=0)
.plot.bar())
```
## 3. Data preprocessing
We start by selecting the variables we want to work with, which are:
* Survived
* Sex
* Age
* Pclass
```
train[['Survived', 'Sex', 'Age', 'Pclass']].head(3)
```
We examine the null values
```
train[['Survived', 'Sex', 'Age', 'Pclass']].info()
```
-----------------------
We need to fix:
* Missing values in Age `train['Age'].isna()`
* The Sex variable appears as object and we want int or float for the algorithms
-----------------------
```
# look at how the nulls in Age are distributed
(train[train['Age'].isna()]
.groupby(['Sex', 'Pclass'])
.count()['PassengerId']
.unstack(level=0))
(train[train['Age'].isna()]
.groupby(['SibSp', 'Parch'])
.count()['PassengerId']
.unstack(level=0))
```
From the above we can conclude that these were mostly people traveling alone, and most of them were in 3rd class.
This suggests creating a variable that indicates whether the person was traveling alone or accompanied.
We will create it later.
```
# compute the median of Age for imputation
train['Age'].median()
# impute the value to fill the nulls
train['Age'] = train['Age'].fillna(28.0)
train[['Survived', 'Sex', 'Age', 'Pclass']].info()
```
We no longer have nulls. We still need to convert Sex to int.
```
# map for label encoding
train['Sex'] = train['Sex'].map({'female': 1, 'male': 0}).astype(int)
```
Now the preprocessed table is ready
```
train[['Survived', 'Sex', 'Age', 'Pclass']].head(3)
```
## 3.1 Creating new variables
```
# create a new flag-type variable "solo" (traveling alone)
train['FlagSolo'] = np.where(
(train['SibSp'] == 0) & (train['Parch'] == 0), 1, 0)
grouped_flag = train.groupby(['Survived','FlagSolo']).count()['PassengerId']
print(grouped_flag)
(grouped_flag.unstack(level=0).plot.bar())
plt.show()
train[['Survived', 'Sex', 'Age', 'Pclass', 'FlagSolo']].head(3)
```
These are the data we will use to build the models
```
# dependent variable
Y_train = train['Survived']
# preprocessing of the independent variables
features = ['Sex', 'Age', 'Pclass', 'FlagSolo']
X_train = train[features]
print(Y_train.shape, X_train.shape)
```
## 4. Models
Without going into much detail, we will pick two models to try:
* logistic regression
* decision trees
```
# training the logistic regression model
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression()
logreg.fit(X_train, Y_train)
# training the decision tree model
from sklearn.tree import DecisionTreeClassifier
decision_tree = DecisionTreeClassifier()
decision_tree.fit(X_train, Y_train)
```
## 5. Evaluation
Here we will build a confusion matrix and evaluate how good each model is.
```
from sklearn.metrics import plot_confusion_matrix
def conf_mat_acc(modelo):
disp = plot_confusion_matrix(modelo, X_train, Y_train,
cmap=plt.cm.Blues, values_format="d")
true_pred = disp.confusion_matrix[0,0]+disp.confusion_matrix[1,1]
total_data = np.sum(disp.confusion_matrix)
accuracy = true_pred/total_data
print('accuracy: ', np.round(accuracy, 2))
plt.show()
conf_mat_acc(logreg)
conf_mat_acc(decision_tree)
```
## 5.1 Evaluation on the test set
Above we computed the confusion matrix on the train set. This is not quite right, because we are validating on the same data we used for training. The error estimate is therefore biased and says little about how the model generalizes to cases it has not "seen".
That is why we need the test set. However, Kaggle does not give us the true labels of the test set; to check our results we have to submit them and look at the score Kaggle reports, which we will do later.
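As an optional aside (not part of the original submission flow), one way to get a less biased local estimate without the hidden test labels is k-fold cross-validation on the training data; a minimal sketch using the objects defined above:
```
from sklearn.model_selection import cross_val_score
# Each fold is scored on data the model was not fitted on,
# so the mean accuracy is a less biased estimate than the train-set score above.
cv_scores = cross_val_score(logreg, X_train, Y_train, cv=5, scoring='accuracy')
print(cv_scores.mean(), cv_scores.std())
```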
```
# now we prepare the test set for evaluation
print(test.head(3))
test.info()
# preprocessing the test set
# map Sex
test['Sex'] = test['Sex'].map({'female': 1, 'male': 0}).astype(int)
# fill Age
test['Age'] = test['Age'].fillna(28.0)
# create FlagSolo
test['FlagSolo'] = np.where(
(test['SibSp'] == 0) & (test['Parch'] == 0), 1, 0)
print(test.info())
test[features].head(3)
# build the test set
X_test = test[features]
print(X_test.shape)
# predict Survived on the test set
Y_pred_log = logreg.predict(X_test)
Y_pred_tree = decision_tree.predict(X_test)
print(Y_pred_log[0:10])
```
Note: these predictions should now be compared with the true values to get a better estimate of the prediction error on the test set and to choose a model.
However, since this is a Kaggle competition, only the platform knows those values.
We will export these CSVs and then upload them to see which one performs better.
## 6. Prediction
```
# predicting on the test set
print(Y_pred_log[0:20])
print(Y_pred_tree[0:20])
# to download to your computer
def download_output(y_pred, name):
output = pd.DataFrame({'PassengerId': test.PassengerId,
'Survived': y_pred})
output.to_csv(name, index=False)
download_output(Y_pred_log, 'rafa_pred_log.csv')
download_output(Y_pred_tree, 'rafa_pred_tree.csv')
```
After submitting to Kaggle:

This shows that the test set accuracy is better than the one obtained on the train set.
For this reason we would keep the logistic regression model, because it generalizes better to data the model was not trained on.
# Conclusion
* the importance of exploratory analysis
* feature creation
* trying out several models
* estimating the error with the test_set
* we saw (almost) the whole data science process in one example
## Next steps
What would come next is deploying this model to production, making predictions as users need them, monitoring it, and maintaining the deployment.
Similar to what is briefly described here: https://cloud.google.com/ai-platform/docs/ml-solutions-overview
However, these stages involve software engineering and DevOps steps that will not be covered in this notebook.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import style
import matplotlib.ticker as ticker
import seaborn as sns
from sklearn.datasets import load_boston
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import plot_confusion_matrix
from sklearn.metrics import classification_report
from sklearn.metrics import f1_score
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import train_test_split
from sklearn.model_selection import RepeatedKFold
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import ParameterGrid
from sklearn.inspection import permutation_importance
import multiprocessing
labels = pd.read_csv('../../csv/train_labels.csv')
labels.head()
values = pd.read_csv('../../csv/train_values.csv')
values.T
# Average height per floor
values['height_percentage_per_floor_pre_eq'] = values['height_percentage']/values['count_floors_pre_eq']
values['volume_percentage'] = values['area_percentage'] * values['height_percentage']
# Some averages by location
values['avg_age_for_geo_level_2_id'] = values.groupby('geo_level_2_id')['age'].transform('mean')
values['avg_area_percentage_for_geo_level_2_id'] = values.groupby('geo_level_2_id')['area_percentage'].transform('mean')
values['avg_height_percentage_for_geo_level_2_id'] = values.groupby('geo_level_2_id')['height_percentage'].transform('mean')
values['avg_count_floors_for_geo_level_2_id'] = values.groupby('geo_level_2_id')['count_floors_pre_eq'].transform('mean')
values['avg_age_for_geo_level_3_id'] = values.groupby('geo_level_3_id')['age'].transform('mean')
values['avg_area_percentage_for_geo_level_3_id'] = values.groupby('geo_level_3_id')['area_percentage'].transform('mean')
values['avg_height_percentage_for_geo_level_3_id'] = values.groupby('geo_level_3_id')['height_percentage'].transform('mean')
values['avg_count_floors_for_geo_level_3_id'] = values.groupby('geo_level_3_id')['count_floors_pre_eq'].transform('mean')
# Relationship between material (the most important ones according to model 5) and age
values['20_yr_age_range'] = values['age'] // 20 * 20
values['20_yr_age_range'] = values['20_yr_age_range'].astype('str')
values['superstructure'] = ''
values['superstructure'] = np.where(values['has_superstructure_mud_mortar_stone'], values['superstructure'] + 'b', values['superstructure'])
values['superstructure'] = np.where(values['has_superstructure_cement_mortar_brick'], values['superstructure'] + 'e', values['superstructure'])
values['superstructure'] = np.where(values['has_superstructure_timber'], values['superstructure'] + 'f', values['superstructure'])
values['age_range_superstructure'] = values['20_yr_age_range'] + values['superstructure']
del values['20_yr_age_range']
del values['superstructure']
values
values.isnull().values.any()
labels.isnull().values.any()
values.dtypes
values["building_id"].count() == values["building_id"].drop_duplicates().count()
values.info()
to_be_categorized = ["land_surface_condition", "foundation_type", "roof_type",\
"position", "ground_floor_type", "other_floor_type",\
"plan_configuration", "legal_ownership_status", "age_range_superstructure"]
for row in to_be_categorized:
values[row] = values[row].astype("category")
values.info()
datatypes = dict(values.dtypes)
for row in values.columns:
if datatypes[row] != "int64" and datatypes[row] != "int32" and \
datatypes[row] != "int16" and datatypes[row] != "int8":
continue
if values[row].nlargest(1).item() > 32767 and values[row].nlargest(1).item() < 2**31:
values[row] = values[row].astype(np.int32)
elif values[row].nlargest(1).item() > 127:
values[row] = values[row].astype(np.int16)
else:
values[row] = values[row].astype(np.int8)
values.info()
labels.info()
labels["building_id"] = labels["building_id"].astype(np.int32)
labels["damage_grade"] = labels["damage_grade"].astype(np.int8)
labels.info()
```
# New Model
```
important_values = values\
.merge(labels, on="building_id")
important_values.drop(columns=["building_id"], inplace = True)
important_values["geo_level_1_id"] = important_values["geo_level_1_id"].astype("category")
important_values
important_values.shape
X_train, X_test, y_train, y_test = train_test_split(important_values.drop(columns = 'damage_grade'),
important_values['damage_grade'], test_size = 0.2, random_state = 123)
#OneHotEncoding
def encode_and_bind(original_dataframe, feature_to_encode):
dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])
res = pd.concat([original_dataframe, dummies], axis=1)
res = res.drop([feature_to_encode], axis=1)
return(res)
features_to_encode = ["geo_level_1_id", "land_surface_condition", "foundation_type", "roof_type",\
"position", "ground_floor_type", "other_floor_type",\
"plan_configuration", "legal_ownership_status", "age_range_superstructure"]
for feature in features_to_encode:
X_train = encode_and_bind(X_train, feature)
X_test = encode_and_bind(X_test, feature)
X_train
X_train.shape
# # Search for the best three parameters indicated below.
# n_estimators = [65, 100, 135]
# max_features = [0.2, 0.5, 0.8]
# max_depth = [None, 2, 5]
# min_samples_split = [5, 15, 25]
# # min_impurity_decrease = [0.0, 0.01, 0.025, 0.05, 0.1]
# # min_samples_leaf
# hyperF = {'n_estimators': n_estimators,
# 'max_features': max_features,
# 'max_depth': max_depth,
# 'min_samples_split': min_samples_split
# }
# gridF = GridSearchCV(estimator = RandomForestClassifier(random_state = 123),
# scoring = 'f1_micro',
# param_grid = hyperF,
# cv = 3,
# verbose = 1,
# n_jobs = -1)
# bestF = gridF.fit(X_train, y_train)
# res = pd.DataFrame(bestF.cv_results_)
# res.loc[res['rank_test_score'] <= 10]
# Use the best parameters according to the GridSearch
rf_model = RandomForestClassifier(n_estimators = 150,
max_depth = None,
max_features = 50,
min_samples_split = 15,
min_samples_leaf = 1,
criterion = "gini",
verbose=True)
rf_model.fit(X_train, y_train)
rf_model.score(X_train, y_train)
# Compute the F1 score on the held-out validation split.
y_preds = rf_model.predict(X_test)
f1_score(y_test, y_preds, average='micro')
test_values = pd.read_csv('../../csv/test_values.csv', index_col = "building_id")
test_values
test_values_subset = test_values
test_values_subset["geo_level_1_id"] = test_values_subset["geo_level_1_id"].astype("category")
test_values_subset
# Average height per floor
test_values_subset['height_percentage_per_floor_pre_eq'] = test_values_subset['height_percentage']/test_values_subset['count_floors_pre_eq']
test_values_subset['volume_percentage'] = test_values_subset['area_percentage'] * test_values_subset['height_percentage']
# Some averages by location
test_values_subset['avg_age_for_geo_level_2_id'] = test_values_subset.groupby('geo_level_2_id')['age'].transform('mean')
test_values_subset['avg_area_percentage_for_geo_level_2_id'] = test_values_subset.groupby('geo_level_2_id')['area_percentage'].transform('mean')
test_values_subset['avg_height_percentage_for_geo_level_2_id'] = test_values_subset.groupby('geo_level_2_id')['height_percentage'].transform('mean')
test_values_subset['avg_count_floors_for_geo_level_2_id'] = test_values_subset.groupby('geo_level_2_id')['count_floors_pre_eq'].transform('mean')
test_values_subset['avg_age_for_geo_level_3_id'] = test_values_subset.groupby('geo_level_3_id')['age'].transform('mean')
test_values_subset['avg_area_percentage_for_geo_level_3_id'] = test_values_subset.groupby('geo_level_3_id')['area_percentage'].transform('mean')
test_values_subset['avg_height_percentage_for_geo_level_3_id'] = test_values_subset.groupby('geo_level_3_id')['height_percentage'].transform('mean')
test_values_subset['avg_count_floors_for_geo_level_3_id'] = test_values_subset.groupby('geo_level_3_id')['count_floors_pre_eq'].transform('mean')
# Relationship between material (the most important ones according to model 5) and age
test_values_subset['20_yr_age_range'] = test_values_subset['age'] // 20 * 20
test_values_subset['20_yr_age_range'] = test_values_subset['20_yr_age_range'].astype('str')
test_values_subset['superstructure'] = ''
test_values_subset['superstructure'] = np.where(test_values_subset['has_superstructure_mud_mortar_stone'], test_values_subset['superstructure'] + 'b', test_values_subset['superstructure'])
test_values_subset['superstructure'] = np.where(test_values_subset['has_superstructure_cement_mortar_brick'], test_values_subset['superstructure'] + 'e', test_values_subset['superstructure'])
test_values_subset['superstructure'] = np.where(test_values_subset['has_superstructure_timber'], test_values_subset['superstructure'] + 'f', test_values_subset['superstructure'])
test_values_subset['age_range_superstructure'] = test_values_subset['20_yr_age_range'] + test_values_subset['superstructure']
del test_values_subset['20_yr_age_range']
del test_values_subset['superstructure']
test_values_subset
def encode_and_bind(original_dataframe, feature_to_encode):
dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])
res = pd.concat([original_dataframe, dummies], axis=1)
res = res.drop([feature_to_encode], axis=1)
return(res)
features_to_encode = ["geo_level_1_id", "land_surface_condition", "foundation_type", "roof_type",\
"position", "ground_floor_type", "other_floor_type",\
"plan_configuration", "legal_ownership_status", "age_range_superstructure"]
for feature in features_to_encode:
test_values_subset = encode_and_bind(test_values_subset, feature)
test_values_subset
features_in_model_not_in_tests =\
list(filter(lambda col: col not in test_values_subset.columns.to_list(), X_train.columns.to_list()))
for f in features_in_model_not_in_tests:
test_values_subset[f] = 0
test_values_subset.drop(columns = list(filter(lambda col: col not in X_train.columns.to_list() , test_values_subset.columns.to_list())), inplace = True)
test_values_subset.shape
# Generate the predictions for the test set.
preds = rf_model.predict(test_values_subset)
submission_format = pd.read_csv('../../csv/submission_format.csv', index_col = "building_id")
my_submission = pd.DataFrame(data=preds,
columns=submission_format.columns,
index=submission_format.index)
my_submission.head()
my_submission.to_csv('../../csv/predictions/jf-model-7-3-submission.csv')
!head ../../csv/predictions/jf-model-7-3-submission.csv
```
# transforms
The `transforms` module provides functions to easily manipulate data for `pytorch` networks.
## `cross_correlation`
```
from transforms import cross_correlation
from pydub import AudioSegment
from IPython.display import display
from utils import play_audio, split_channels
from visualization import wave
import numpy as np
sample_path = './data/sample_data/reflections/samples/mahler_2894305.wav'
s = AudioSegment.from_wav(sample_path)
play_audio(s)
lag = 1 * int(s.frame_rate / 1000.)
left, right = split_channels(s)
xc = cross_correlation(left, right, lag)
wave(xc, **dict(suptitle='cross_correlation', title=['Example output']))
```
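The local `transforms` module is not shown in this notebook; purely as an illustration of the idea (an assumption, not the module's actual code), a lag-limited cross-correlation of two mono signals could be sketched in plain NumPy like this:
```
import numpy as np

def xcorr_sketch(a, b, max_lag):
    # Hypothetical sketch: correlate a[i] with b[i + k] for every lag k in [-max_lag, max_lag]
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return np.array([
        np.sum(a[max(0, -k):len(a) - max(0, k)] * b[max(0, k):len(b) - max(0, -k)])
        for k in range(-max_lag, max_lag + 1)
    ])
```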
## `normalized_cross_correlation`
```
from transforms import normalized_cross_correlation
nxc = normalized_cross_correlation(left, right, lag)
wave(nxc, **dict(suptitle='normalized_cross_correlation', title=['Example output']))
```
## `autocorrelation`
```
from transforms import autocorrelation
from utils import audiosegment_to_array
ac = autocorrelation(s).reshape((-1,))
wave(ac, **dict(suptitle='autocorrelation', title=['Example output']))
```
## `second_layer_autocorrelation`
```
from transforms import second_layer_autocorrelation
ac2 = second_layer_autocorrelation(s).reshape((-1,))
wave(ac2, **dict(suptitle='second_layer_autocorrelation', title=['Example output']))
```
## `amplitude_spectrum`
```
from transforms import amplitude_spectrum
from utils import split_channels
from visualization import spectrum
left, right = split_channels(s)
spectrum(left, s.frame_rate, spectrum_type='amplitude', **dict(suptitle='amplitude_spectrum', title='Example output'))
spectrum(right, s.frame_rate, spectrum_type='amplitude', **dict(suptitle='amplitude_spectrum', title='Example output'))
```
## `power_spectrum`
```
spectrum(left, s.frame_rate, spectrum_type='power', **dict(suptitle='power_spectrum', title='Example output'))
spectrum(right, s.frame_rate, spectrum_type='power', **dict(suptitle='power_spectrum', title='Example output'))
```
## `phase_spectrum`
```
spectrum(left, s.frame_rate, spectrum_type='phase', **dict(suptitle='phase_spectrum', title='Example output'))
spectrum(right, s.frame_rate, spectrum_type='phase', **dict(suptitle='phase_spectrum', title='Example output'))
```
## `log_spectrum`
```
spectrum(left, s.frame_rate, spectrum_type='log', **dict(suptitle='log_spectrum', title='Example output'))
spectrum(right, s.frame_rate, spectrum_type='log', **dict(suptitle='log_spectrum', title='Example output'))
```
## `cepstrum`
```
from visualization import cepstrum
offset = 1024
window_length = offset * 64 * 2
cepstrum(left, s.frame_rate, offset, window_length, **dict(suptitle='cepstrum', title='Example output'))
```
## `cepstral_autocorrelation`
```
from transforms import cepstral_autocorrelation
cac = cepstral_autocorrelation(s).reshape((-1,))
wave(cac)
```
## `cepstral_second_layer_autocorrelation`
```
from transforms import cepstral_second_layer_autocorrelation
csc = cepstral_second_layer_autocorrelation(s)
```
## `mfcc`
```
from transforms import mfcc
from utils import split_channels
from visualization import spectrogram
left, right = mfcc(s)
spectrogram(left)
```
## Sunspots Dataset
This notebook shows a time series model, built using a DNN, that predicts the future seasonality of the Sunspots dataset from its past seasonality.
Link for the dataset: https://www.kaggle.com/robervalt/sunspots
```
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import csv
```
Function for plotting the graphs.
```
def plot_series(time, series, format='-',start=0, end=None):
plt.plot(time[start:end], series[start:end], format)
plt.xlabel("Time")
plt.ylabel("Value")
plt.grid(True)
```
Reading the CSV file, transforming the dataset into two NumPy arrays, "series" and "time", and finally plotting the dataset to analyze the data and look for trends and seasonality.
```
time_step = []
sunspots = []
with open('Sunspots.csv') as csvfile:
reader = csv.reader(csvfile, delimiter=',')
next(reader)
for row in reader:
sunspots.append(float(row[2]))
time_step.append(int(row[0]))
series = np.array(sunspots)
time = np.array(time_step)
plt.figure(figsize=(10,6))
plot_series(time, series)
print("Head of the Data")
print(pd.DataFrame(series,time).head())
print()
print("Tail of the Data")
print(pd.DataFrame(series,time).tail())
```
Splitting the series and time columns, i.e. the whole dataset, into training and validation sets. The split is at "time = 3000": the data before "time = 3000" is the training set and the data after it is the validation set.
```
split_time = 3000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
```
The ***windowed_dataset*** function converts the series into a windowed dataset of (features, label) pairs that can be fed to the model for training.
```
def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
dataset = tf.data.Dataset.from_tensor_slices(series)
dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True)
dataset = dataset.flat_map(lambda window: window.batch(window_size + 1))
dataset = dataset.shuffle(shuffle_buffer).map(lambda window: (window[:-1], window[-1]))
dataset = dataset.batch(batch_size).prefetch(1)
return dataset
```
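As a quick, hedged illustration of what this function produces (using a toy series rather than the sunspot data), each element of the resulting dataset is a batch of windows paired with the value that follows each window:
```
# Toy check of windowed_dataset; with shuffle_buffer=1 the shuffle is effectively a no-op.
toy = windowed_dataset(np.arange(10.0), window_size=3, batch_size=2, shuffle_buffer=1)
for features, labels in toy.take(1):
    print(features.numpy())  # [[0. 1. 2.] [1. 2. 3.]]
    print(labels.numpy())    # [3. 4.]
```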
Assigning values to window_size, batch_size, and shuffle_buffer_size.
Lastly, creating the DNN model.
```
window_size = 60
batch_size = 32
shuffle_buffer_size = 1000
dataset = windowed_dataset(x_train, window_size,
batch_size, shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(40, input_shape=[window_size], activation='relu'),
tf.keras.layers.Dense(10, activation='relu'),
tf.keras.layers.Dense(1)
])
model.compile(loss='mse',optimizer=tf.keras.optimizers.SGD(lr=1e-7,momentum=0.9))
model.summary()
```
Fitting the model to the training set (***dataset***).
```
model.fit(dataset,epochs=20,verbose=1)
```
Forecasting values using the model. The predictions are based on the features and seasonality the model learned from the training set, with future values predicted from past data. The predicted values are plotted along with the validation set in order to get an idea of how well the model performs.
```
forecast=[]
for time in range(len(series) - window_size):
forecast.append(model.predict(series[time:time + window_size][np.newaxis]))
forecast = forecast[split_time-window_size:]
results = np.array(forecast)[:, 0, 0]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, results)
```
Mean Absolute Error.
```
tf.keras.metrics.mean_absolute_error(x_valid, results).numpy()
```
|
github_jupyter
|
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import csv
def plot_series(time, series, format='-',start=0, end=None):
plt.plot(time[start:end], series[start:end], format)
plt.xlabel("Time")
plt.ylabel("Value")
plt.grid(True)
time_step = []
sunspots = []
with open('Sunspots.csv') as csvfile:
reader = csv.reader(csvfile, delimiter=',')
next(reader)
for row in reader:
sunspots.append(float(row[2]))
time_step.append(int(row[0]))
series = np.array(sunspots)
time = np.array(time_step)
plt.figure(figsize=(10,6))
plot_series(time, series)
print("Head of the Data")
print(pd.DataFrame(series,time).head())
print()
print("Tail of the Data")
print(pd.DataFrame(series,time).tail())
split_time = 3000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
dataset = tf.data.Dataset.from_tensor_slices(series)
dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True)
dataset = dataset.flat_map(lambda window: window.batch(window_size + 1))
dataset = dataset.shuffle(shuffle_buffer).map(lambda window: (window[:-1], window[-1]))
dataset = dataset.batch(batch_size).prefetch(1)
return dataset
window_size = 60
batch_size = 32
shuffle_buffer_size = 1000
dataset = windowed_dataset(x_train, window_size,
batch_size, shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(40, input_shape=[window_size], activation='relu'),
tf.keras.layers.Dense(10, activation='relu'),
tf.keras.layers.Dense(1)
])
model.compile(loss='mse',optimizer=tf.keras.optimizers.SGD(lr=1e-7,momentum=0.9))
model.summary()
model.fit(dataset,epochs=20,verbose=1)
forecast=[]
for time in range(len(series) - window_size):
forecast.append(model.predict(series[time:time + window_size][np.newaxis]))
forecast = forecast[split_time-window_size:]
results = np.array(forecast)[:, 0, 0]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, results)
tf.keras.metrics.mean_absolute_error(x_valid, results).numpy()
| 0.743541 | 0.970771 |
# Step 5.1: Experiment 1: Machine Learning
---
## 1. Imports
```
import warnings
warnings.filterwarnings('ignore')
import math
import numpy as np # matrix and vector operations
import pandas as pd # data handling
import random
import matplotlib.pyplot as plt # plotting
import seaborn as sns
import joblib
from sklearn import naive_bayes
from sklearn import tree
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn import tree
from sklearn import linear_model
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split # dataset partitioning method for evaluation
from sklearn import preprocessing
from sklearn import metrics
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import cross_validate
from sklearn.model_selection import GridSearchCV
```
---
## 2. Load the Standardize B/M Only Stratosphere Dataset
```
BM_onlyStratosphere = pd.read_csv(r"C:\Users\Usuario\Documents\Github\PDG\PDG-2\Datasets\Time Window\Standardized\SDTrainExp2.csv", delimiter = ",")
BM_onlyStratosphere.head(2)
BM_onlyStratosphere.shape
```
---
## 3. Let's Create a copy of the original dataset...
```
BM_onlyStratosphere_copy = BM_onlyStratosphere.copy()
BM_onlyStratosphere_copy.shape
```
---
## 4. Let's create a Dataframe to save the Accuracies...
```
acc_Machine_Learning = pd.DataFrame(columns=['Name',"Accuracy_Value","CV"])
```
---
---
## 5. :::::::: MACHINE LEARNING ::::::::
#### 5.1 Gaussian Naive Bayes
```
x = BM_onlyStratosphere_copy.iloc[:,:-1]
y = BM_onlyStratosphere_copy['Type']
gnb = naive_bayes.GaussianNB()
params = {}
gscv_gnb = GridSearchCV(estimator=gnb, param_grid=params, cv=10, return_train_score=True)
gscv_gnb.fit(x,y)
gscv_gnb.cv_results_
```
The **best_score (Mean cross-validated score of the best_estimator)** is :
```
gscv_gnb.best_score_
```
The **best estimator (model)** is :
```
gnb = gscv_gnb.best_estimator_
gnb
acc_Machine_Learning= acc_Machine_Learning.append({'Name' : 'GaussianNB ', 'Accuracy_Value' : gscv_gnb.best_score_, 'CV' : 10},
ignore_index=True)
acc_Machine_Learning
```
---
#### 5.2 Decision Tree Classifier
```
dtc = tree.DecisionTreeClassifier()
tree_params = {'criterion':['gini','entropy'],
'max_depth':[3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],
'random_state' : [1234]
}
gscv_dtc = GridSearchCV(dtc, tree_params, cv=10)
gscv_dtc.fit(x,y)
```
The **best_score (Mean cross-validated score of the best_estimator)** is :
```
gscv_dtc.best_score_
```
The **best estimator (model)** is :
```
dtc = gscv_dtc.best_estimator_
dtc
acc_Machine_Learning= acc_Machine_Learning.append({'Name' : dtc, 'Accuracy_Value' : gscv_dtc.best_score_, 'CV' : 10},
ignore_index=True)
acc_Machine_Learning
```
---
#### 5.3 KNN
```
knn = KNeighborsClassifier()
knn_params = {'n_neighbors':[1,3,5], 'weights' : ['uniform','distance'], 'metric':['euclidean','manhattan']}
# gscv_knn = GridSearchCV(knn, knn_params, cv=5, n_jobs=-1)
# gscv_knn.fit(x,y)
```
The **best_score (Mean cross-validated score of the best_estimator)** is :
```
# gscv_knn.best_score_
```
The **best estimator (model)** is :
```
# knn = gscv_knn.best_estimator_
# knn
# acc_Machine_Learning= acc_Machine_Learning.append({'Name' : knn, 'Accuracy_Value' : gscv_knn.best_score_, 'CV' :5},
# ignore_index=True)
# acc_Machine_Learning
```
---
#### 5.4 Logistic Regression
```
logreg = linear_model.LogisticRegression()
params = {}
gscv_lg = GridSearchCV(logreg, params, cv=10)
gscv_lg.fit(x,y)
```
The **best_score (Mean cross-validated score of the best_estimator)** is :
```
gscv_lg.best_score_
```
The **best estimator (model)** is :
```
logreg = gscv_lg.best_estimator_
logreg
acc_Machine_Learning= acc_Machine_Learning.append({'Name' : logreg, 'Accuracy_Value' : gscv_lg.best_score_, 'CV' :10},
ignore_index=True)
acc_Machine_Learning
```
---
#### 5.5 Random Forest Classifier
```
clf = RandomForestClassifier()
clf_param = {
'n_estimators': [64, 128],
'max_features': ['auto', 'sqrt', 'log2'],
'max_depth' : [4,5,6,7,8,9,10,11,12,13,14,15],
'criterion' :['gini', 'entropy'],
'random_state' : [1234]
}
gscv_rfc = GridSearchCV(clf, clf_param, cv=10)
gscv_rfc.fit(x,y)
```
The **best_score (Mean cross-validated score of the best_estimator)** is :
```
gscv_rfc.best_score_
```
The **best estimator (model)** is :
```
clf = gscv_rfc.best_estimator_
clf
acc_Machine_Learning= acc_Machine_Learning.append({'Name' : clf, 'Accuracy_Value' : gscv_rfc.best_score_, 'CV' :10},
ignore_index=True)
acc_Machine_Learning
```
---
## 6. Let's save the accuracies
```
acc_Machine_Learning = acc_Machine_Learning.sort_values(by=['Accuracy_Value'], ascending=False)
acc_Machine_Learning
acc_Machine_Learning.to_csv(r"C:\Users\Usuario\Documents\Github\PDG\PDG-2\Datasets\Time Window\Accuracies\MLAccuraciesExp1.csv",sep=',',index=False)
```
---
## 7. Let's choose the best ML Algorithm
```
acc_Machine_Learning.iloc[0,:]
```
---
---
## 8. ::::::::::::::::: TEST WITH REAL DATA :::::::::::::::::::::
```
b = pd.read_csv(r"C:\Users\Usuario\Documents\Github\PDG\PDG-2\Datasets\Time Window\TEST\BTestExp2.csv", delimiter = ",")
b.shape
```
---
```
m = malign_dataset = pd.read_csv(r"C:\Users\Usuario\Documents\Github\PDG\PDG-2\Datasets\Time Window\TEST\MTestExp2.csv", delimiter = ",")
m.shape
```
---
```
frames = [b, m]
test_dataset = pd.concat(frames)
```
---
```
le = joblib.load('./Tools/label_encoder_type_exp2.encoder')
test_dataset.Type.unique()
test_dataset.Type = le.transform(test_dataset.Type)
test_dataset.Type.unique()
types = test_dataset.Type
test_dataset = test_dataset.drop(['Type'], axis=1)
test_dataset.columns
```
---
```
test_dataset = test_dataset[['Avg_bps','Avg_pps'
,'Bytes','p2_ib','duration','number_sp','number_dp','First_Protocol'
,'first_sp','p3_ib','first_dp','p1_ib','p3_d']]
```
---
```
test_dataset.info()
```
---
First_Protocol
```
le = joblib.load('./Tools/label_encoder_first_protocol_exp2.encoder')
test_dataset.First_Protocol.unique()
test_dataset.First_Protocol = le.transform(test_dataset.First_Protocol)
test_dataset.First_Protocol.unique()
```
---
```
scaler = joblib.load("./Tools/scalerExp2.save")
test_dataset[['Avg_bps','Avg_pps'
,'Bytes','p2_ib','duration','number_sp','number_dp'
,'p3_ib','p1_ib','p3_d']] = scaler.transform(test_dataset[['Avg_bps','Avg_pps'
,'Bytes','p2_ib','duration','number_sp','number_dp'
,'p3_ib','p1_ib','p3_d']])
test_dataset.head(2)
clf
y_pred= clf.predict(test_dataset)
y_pred
unique, counts = np.unique(y_pred, return_counts=True)
dict(zip(unique, counts))
y_pred
types = types.astype(np.int64)
cm= metrics.confusion_matrix(types, y_pred)
plt.imshow(cm, cmap=plt.cm.Blues)
plt.title("Matriz de confusión")
plt.colorbar()
tick_marks = np.arange(2)
plt.xticks(tick_marks, ['0','1'])
plt.yticks(tick_marks, ['0','1'])
target_names = ['1', '0']
print(classification_report(types, y_pred, target_names=target_names))
```
---
----
## Let's save the 3 best models...
```
joblib.dump(clf,"./Models/clf.save")
joblib.dump(dtc,"./Models/dtc.save")
joblib.dump(gnb,"./Models/gnb.save")
```
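For completeness, a saved model can later be restored with `joblib.load`; this is a minimal sketch that assumes the files written above exist and that any new data gets the same preprocessing as `test_dataset`.
```
# Reload the saved Random Forest and reuse it on already-preprocessed data
clf_loaded = joblib.load("./Models/clf.save")
# clf_loaded.predict(test_dataset)
```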
## References
### Naive
1. https://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.GaussianNB.html
2. https://www.datacamp.com/community/tutorials/naive-bayes-scikit-learn
3. https://stackoverflow.com/questions/58212613/naive-bayes-gaussian-throwing-valueerror-could-not-convert-string-to-float-m
4. https://scikit-learn.org/stable/modules/naive_bayes.html
5. https://scikit-learn.org/stable/modules/model_evaluation.html
### Decision Tree
1. https://stackoverflow.com/questions/35097003/cross-validation-decision-trees-in-sklearn
2. https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html
### Label Encoder
1. https://www.interactivechaos.com/python/function/labelencoder
### KNN
1. https://medium.com/@svanillasun/how-to-deal-with-cross-validation-based-on-knn-algorithm-compute-auc-based-on-naive-bayes-ff4b8284cff4
|
github_jupyter
|
import warnings
warnings.filterwarnings('ignore')
import math
import numpy as np # matrix and vector operations
import pandas as pd # data handling
import random
import matplotlib.pyplot as plt # plotting
import seaborn as sns
import joblib
from sklearn import naive_bayes
from sklearn import tree
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn import tree
from sklearn import linear_model
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split # dataset partitioning method for evaluation
from sklearn import preprocessing
from sklearn import metrics
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import cross_validate
from sklearn.model_selection import GridSearchCV
BM_onlyStratosphere = pd.read_csv(r"C:\Users\Usuario\Documents\Github\PDG\PDG-2\Datasets\Time Window\Standardized\SDTrainExp2.csv", delimiter = ",")
BM_onlyStratosphere.head(2)
BM_onlyStratosphere.shape
BM_onlyStratosphere_copy = BM_onlyStratosphere.copy()
BM_onlyStratosphere_copy.shape
acc_Machine_Learning = pd.DataFrame(columns=['Name',"Accuracy_Value","CV"])
x = BM_onlyStratosphere_copy.iloc[:,:-1]
y = BM_onlyStratosphere_copy['Type']
gnb = naive_bayes.GaussianNB()
params = {}
gscv_gnb = GridSearchCV(estimator=gnb, param_grid=params, cv=10, return_train_score=True)
gscv_gnb.fit(x,y)
gscv_gnb.cv_results_
gscv_gnb.best_score_
gnb = gscv_gnb.best_estimator_
gnb
acc_Machine_Learning= acc_Machine_Learning.append({'Name' : 'GaussianNB ', 'Accuracy_Value' : gscv_gnb.best_score_, 'CV' : 10},
ignore_index=True)
acc_Machine_Learning
dtc = tree.DecisionTreeClassifier()
tree_params = {'criterion':['gini','entropy'],
'max_depth':[3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],
'random_state' : [1234]
}
gscv_dtc = GridSearchCV(dtc, tree_params, cv=10)
gscv_dtc.fit(x,y)
gscv_dtc.best_score_
dtc = gscv_dtc.best_estimator_
dtc
acc_Machine_Learning= acc_Machine_Learning.append({'Name' : dtc, 'Accuracy_Value' : gscv_dtc.best_score_, 'CV' : 10},
ignore_index=True)
acc_Machine_Learning
knn = KNeighborsClassifier()
knn_params = {'n_neighbors':[1,3,5], 'weights' : ['uniform','distance'], 'metric':['euclidean','manhattan']}
# gscv_knn = GridSearchCV(knn, knn_params, cv=5, n_jobs=-1)
# gscv_knn.fit(x,y)
# gscv_knn.best_score_
# knn = gscv_knn.best_estimator_
# knn
# acc_Machine_Learning= acc_Machine_Learning.append({'Name' : knn, 'Accuracy_Value' : gscv_knn.best_score_, 'CV' :5},
# ignore_index=True)
# acc_Machine_Learning
logreg = linear_model.LogisticRegression()
params = {}
gscv_lg = GridSearchCV(logreg, params, cv=10)
gscv_lg.fit(x,y)
gscv_lg.best_score_
logreg = gscv_lg.best_estimator_
logreg
acc_Machine_Learning= acc_Machine_Learning.append({'Name' : logreg, 'Accuracy_Value' : gscv_lg.best_score_, 'CV' :10},
ignore_index=True)
acc_Machine_Learning
clf = RandomForestClassifier()
clf_param = {
'n_estimators': [64, 128],
'max_features': ['auto', 'sqrt', 'log2'],
'max_depth' : [4,5,6,7,8,9,10,11,12,13,14,15],
'criterion' :['gini', 'entropy'],
'random_state' : [1234]
}
gscv_rfc = GridSearchCV(clf, clf_param, cv=10)
gscv_rfc.fit(x,y)
gscv_rfc.best_score_
clf = gscv_rfc.best_estimator_
clf
acc_Machine_Learning= acc_Machine_Learning.append({'Name' : clf, 'Accuracy_Value' : gscv_rfc.best_score_, 'CV' :10},
ignore_index=True)
acc_Machine_Learning
acc_Machine_Learning = acc_Machine_Learning.sort_values(by=['Accuracy_Value'], ascending=False)
acc_Machine_Learning
acc_Machine_Learning.to_csv(r"C:\Users\Usuario\Documents\Github\PDG\PDG-2\Datasets\Time Window\Accuracies\MLAccuraciesExp1.csv",sep=',',index=False)
acc_Machine_Learning.iloc[0,:]
b = pd.read_csv(r"C:\Users\Usuario\Documents\Github\PDG\PDG-2\Datasets\Time Window\TEST\BTestExp2.csv", delimiter = ",")
b.shape
m = malign_dataset = pd.read_csv(r"C:\Users\Usuario\Documents\Github\PDG\PDG-2\Datasets\Time Window\TEST\MTestExp2.csv", delimiter = ",")
m.shape
frames = [b, m]
test_dataset = pd.concat(frames)
le = joblib.load('./Tools/label_encoder_type_exp2.encoder')
test_dataset.Type.unique()
test_dataset.Type = le.transform(test_dataset.Type)
test_dataset.Type.unique()
types = test_dataset.Type
test_dataset = test_dataset.drop(['Type'], axis=1)
test_dataset.columns
test_dataset = test_dataset[['Avg_bps','Avg_pps'
,'Bytes','p2_ib','duration','number_sp','number_dp','First_Protocol'
,'first_sp','p3_ib','first_dp','p1_ib','p3_d']]
test_dataset.info()
le = joblib.load('./Tools/label_encoder_first_protocol_exp2.encoder')
test_dataset.First_Protocol.unique()
test_dataset.First_Protocol = le.transform(test_dataset.First_Protocol)
test_dataset.First_Protocol.unique()
scaler = joblib.load("./Tools/scalerExp2.save")
test_dataset[['Avg_bps','Avg_pps'
,'Bytes','p2_ib','duration','number_sp','number_dp'
,'p3_ib','p1_ib','p3_d']] = scaler.transform(test_dataset[['Avg_bps','Avg_pps'
,'Bytes','p2_ib','duration','number_sp','number_dp'
,'p3_ib','p1_ib','p3_d']])
test_dataset.head(2)
clf
y_pred= clf.predict(test_dataset)
y_pred
unique, counts = np.unique(y_pred, return_counts=True)
dict(zip(unique, counts))
y_pred
types = types.astype(np.int64)
cm= metrics.confusion_matrix(types, y_pred)
plt.imshow(cm, cmap=plt.cm.Blues)
plt.title("Matriz de confusión")
plt.colorbar()
tick_marks = np.arange(2)
plt.xticks(tick_marks, ['0','1'])
plt.yticks(tick_marks, ['0','1'])
target_names = ['1', '0']
print(classification_report(types, y_pred, target_names=target_names))
joblib.dump(clf,"./Models/clf.save")
joblib.dump(dtc,"./Models/dtc.save")
joblib.dump(gnb,"./Models/gnb.save")
| 0.249905 | 0.869327 |
# How to generate the `genemap.txt` file
Using UCSC refGene for gene definition, Rutgers Map for genetic distances, and linear interpolation for those that cannot be found in the database.
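For positions that cannot be found in the Rutgers map, the linear interpolation mentioned above can be done between the two flanking markers. The sketch below is only an illustration of the idea (the real logic lives in `genetic_pos_searcher.py`, which is not shown here, so the function name and the clamping behaviour at chromosome ends are assumptions).
```
import numpy as np

# bp: physical positions of the map markers (sorted), cm: their genetic positions
def interpolate_cm(query_bp, bp, cm):
    # np.interp interpolates linearly and clamps to the end values outside the marker range
    return np.interp(query_bp, bp, cm)

# interpolate_cm(1_250_000, np.array([1_000_000, 2_000_000]), np.array([0.5, 1.7]))  # -> ~0.8
```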
## Gene range file
Downloaded [`refGene.txt.gz` from UCSC](http://hgdownload.soe.ucsc.edu/goldenPath/hg19/database/refGene.txt.gz), currently version March 01, 2020. Gene range file generated using [this workflow](https://gaow.github.io/cnv-gene-mapping/dsc/20190627_Clean_RefGene.html) written by Min Qiao when she was at UChicago. Please refer to the link for trickiness converting `refGene.txt.gz` to gene ranges.
The output is a 4-column file,
```
chr start end gene_name
```
## Genetic distance map file
Rutgers genetic maps downloaded from [here](http://compgen.rutgers.edu/downloads/rutgers_map_v3.zip). Preprocessing scripts below were mostly written by Hang Dai when he was at Baylor. I put some in a workflow script to better organize them.
```
# Copied from Hang Dai's preprocessing scripts in 2014
# to SoS workflow, with minor data formatting adjustments
# add_chr_to_original_file
[preprocess_1]
depends: executable('bgzip')
parameter: chrom = list()
if len(chrom) == 0: chrom = list(range(1,23)) + ['X']
input: for_each = 'chrom'
output: f'RUMap_chr{_chrom}.txt.gz'
bash: expand = '${ }'
awk -F'\t' -v chromosome="${_chrom}" 'BEGIN {OFS="\t"} {if (NR==1) {print "#chr",$1,$2,$3,$6,$7,$8,$9} else {if ($2=="SNP") {print chromosome,$1,$2,$3,$6,$7,$8,$9}}}' RUMapv3_B137_chr${_chrom if _chrom != 'X' else 23}.txt | sort -k5 -g | bgzip -c > ${_output}
# make_tabix_index_file.sh
[preprocess_2]
output: f'{_input}.tbi'
bash: expand = '${ }'
tabix -s1 -b5 -e5 -c# ${_input}
# chr_min_max_dict
[preprocess_3]
input: group_by='all'
python: expand = '${ }'
import subprocess
chr_min_max_dict={}
for item in [${_input:nr,}]:
print(item)
command='zcat {} | head -2 | tail -1'.format(item)
p=subprocess.Popen(command, universal_newlines=True, shell=True, stdout=subprocess.PIPE)
out=p.stdout.read().split('\t') #a list
min_pos=out[4]
command='zcat {} | tail -1'.format(item)
p=subprocess.Popen(command, universal_newlines=True, shell=True, stdout=subprocess.PIPE)
out=p.stdout.read().split('\t') #a list
max_pos=out[4]
chr_min_max_dict[item]=[min_pos, max_pos]
print(chr_min_max_dict)
print(len(chr_min_max_dict))
[liftover_download: provides = ['hg19ToHg38.over.chain.gz', 'liftOver']]
download:
https://hgdownload.soe.ucsc.edu/gbdb/hg19/liftOver/hg19ToHg38.over.chain.gz
http://hgdownload.soe.ucsc.edu/admin/exe/linux.x86_64/liftOver
bash:
chmod +x liftOver
[liftover_genemap]
depends: 'hg19ToHg38.over.chain.gz', 'liftOver'
parameter: genemap = 'genemap.hg19.txt'
input: genemap
output: f'{_input:nn}.hg38.txt'
bash: expand = '${ }'
awk '{print "chr"$1,$2,$3,$4}' ${_input} > ${_output:nn}.hg19.bed
./liftOver ${_output:nn}.hg19.bed hg19ToHg38.over.chain.gz ${_output:nn}.hg38.bed ${_output:nn}.unlifted.bed
python: expand = '${ }'
genemap = dict([(x.split()[3], x.strip().split()) for x in open(${_input:r}).readlines()])
new_coord = dict([(x.split()[3], x.strip().split()) for x in open('${_output:nn}.hg38.bed').readlines()])
total = len(genemap)
unmapped = 0
for k in list(genemap.keys()):
if k in new_coord:
genemap[k][0] = new_coord[k][0][3:]
genemap[k][1] = new_coord[k][1]
genemap[k][2] = new_coord[k][2]
else:
del genemap[k]
unmapped += 1
print(f'{unmapped} units failed to be mapped to hg38.')
with open(${_output:r}, 'w') as f:
f.write('\n'.join(['\t'.join(x) for x in genemap.values()]))
```
To use it, after downloading and decompressing Rutgers Map data, run:
```
sos run genemap.ipynb preprocess
python genetic_pos_searcher.py genemap.txt
mv CM_genemap.txt genemap.hg19.txt
sos run genemap.ipynb liftover_genemap --genemap genemap.hg19.txt
```
|
github_jupyter
|
chr start end gene_name
# Copied from Hang Dai's preprocessing scripts in 2014
# to SoS workflow, with minor data formatting adjustments
# add_chr_to_original_file
[preprocess_1]
depends: executable('bgzip')
parameter: chrom = list()
if len(chrom) == 0: chrom = list(range(1,23)) + ['X']
input: for_each = 'chrom'
output: f'RUMap_chr{_chrom}.txt.gz'
bash: expand = '${ }'
awk -F'\t' -v chromosome="${_chrom}" 'BEGIN {OFS="\t"} {if (NR==1) {print "#chr",$1,$2,$3,$6,$7,$8,$9} else {if ($2=="SNP") {print chromosome,$1,$2,$3,$6,$7,$8,$9}}}' RUMapv3_B137_chr${_chrom if _chrom != 'X' else 23}.txt | sort -k5 -g | bgzip -c > ${_output}
# make_tabix_index_file.sh
[preprocess_2]
output: f'{_input}.tbi'
bash: expand = '${ }'
tabix -s1 -b5 -e5 -c# ${_input}
# chr_min_max_dict
[preprocess_3]
input: group_by='all'
python: expand = '${ }'
import subprocess
chr_min_max_dict={}
for item in [${_input:nr,}]:
print(item)
command='zcat {} | head -2 | tail -1'.format(item)
p=subprocess.Popen(command, universal_newlines=True, shell=True, stdout=subprocess.PIPE)
out=p.stdout.read().split('\t') #a list
min_pos=out[4]
command='zcat {} | tail -1'.format(item)
p=subprocess.Popen(command, universal_newlines=True, shell=True, stdout=subprocess.PIPE)
out=p.stdout.read().split('\t') #a list
max_pos=out[4]
chr_min_max_dict[item]=[min_pos, max_pos]
print(chr_min_max_dict)
print(len(chr_min_max_dict))
[liftover_download: provides = ['hg19ToHg38.over.chain.gz', 'liftOver']]
download:
https://hgdownload.soe.ucsc.edu/gbdb/hg19/liftOver/hg19ToHg38.over.chain.gz
http://hgdownload.soe.ucsc.edu/admin/exe/linux.x86_64/liftOver
bash:
chmod +x liftOver
[liftover_genemap]
depends: 'hg19ToHg38.over.chain.gz', 'liftOver'
parameter: genemap = 'genemap.hg19.txt'
input: genemap
output: f'{_input:nn}.hg38.txt'
bash: expand = '${ }'
awk '{print "chr"$1,$2,$3,$4}' ${_input} > ${_output:nn}.hg19.bed
./liftOver ${_output:nn}.hg19.bed hg19ToHg38.over.chain.gz ${_output:nn}.hg38.bed ${_output:nn}.unlifted.bed
python: expand = '${ }'
genemap = dict([(x.split()[3], x.strip().split()) for x in open(${_input:r}).readlines()])
new_coord = dict([(x.split()[3], x.strip().split()) for x in open('${_output:nn}.hg38.bed').readlines()])
total = len(genemap)
unmapped = 0
for k in list(genemap.keys()):
if k in new_coord:
genemap[k][0] = new_coord[k][0][3:]
genemap[k][1] = new_coord[k][1]
genemap[k][2] = new_coord[k][2]
else:
del genemap[k]
unmapped += 1
print(f'{unmapped} units failed to be mapped to hg38.')
with open(${_output:r}, 'w') as f:
f.write('\n'.join(['\t'.join(x) for x in genemap.values()]))
sos run genemap.ipynb preprocess
python genetic_pos_searcher.py genemap.txt
mv CM_genemap.txt genemap.hg19.txt
sos run genemap.ipynb liftover_genemap --genemap genemap.hg19.txt
| 0.288168 | 0.774114 |
# GPyOpt: parallel Bayesian optimization
### Written by Javier Gonzalez, University of Sheffield.
*Last updated Tuesday, 15 March 2016.*
In this notebook we are going to learn how to use GPyOpt to run parallel BO methods. The goal of these approaches is to make use of all the computational power of our machine to perform the optimization. For instance, if we have a computer with 4 cores, we may want to make 4 evaluations of $f$ in parallel every time we test the performance of the algorithm.
In this notebook we will use the **Local Penalization** method described in the paper *Batch Bayesian Optimization via Local Penalization*.
```
from IPython.display import HTML
HTML('<iframe src=http://arxiv.org/pdf/1505.08052v4.pdf width=700 height=550></iframe>')
%pylab inline
import GPyOpt
```
As in previous examples we use a synthetic objective function but you can think about doing the same with any function you like. In this case, we use the Branin function. For the optimization we will perturb the evaluations with Gaussian noise with sd = 0.1.
```
# --- Objective function
objective_true = GPyOpt.objective_examples.experiments2d.branin() # true function
objective_noisy = GPyOpt.objective_examples.experiments2d.branin(sd = 0.1) # noisy version
bounds = objective_noisy.bounds
domain = [{'name': 'var_1', 'type': 'continuous', 'domain': bounds[0]}, ## use default bounds
{'name': 'var_2', 'type': 'continuous', 'domain': bounds[1]}]
objective_true.plot()
```
As in previous cases, we create a GPyOpt object with the design space and function to optimize. In this case we also need to select the evaluator type, which here is the *local penalization* method, the batch size, and the number of cores that we want to use. The evaluations of the function will be split across the available cores.
```
batch_size = 4
num_cores = 4
from numpy.random import seed
seed(123)
BO_demo_parallel = GPyOpt.methods.BayesianOptimization(f=objective_noisy.f,
domain = domain,
acquisition_type = 'EI',
normalize_Y = True,
initial_design_numdata = 10,
evaluator_type = 'local_penalization',
batch_size = batch_size,
num_cores = num_cores,
acquisition_jitter = 0)
```
We will optimize this function by running 10 iterations, each making 4 evaluations of $f$ in parallel across the 4 cores of our machine.
```
# --- Run the optimization for 10 iterations
max_iter = 10
BO_demo_parallel.run_optimization(max_iter)
```
We plot the results. Observe that the final number of evaluations that we will make is $10*4=40$.
```
BO_demo_parallel.plot_acquisition()
```
See how the method explores the space using the four parallel evaluations of $f$ and is able to identify the location of the three minima.
```
BO_demo_parallel.plot_convergence()
```
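To complement the plots we can also inspect the best point found so far; the `x_opt` and `fx_opt` attributes are assumed to be available in this version of GPyOpt.
```
# Best location evaluated and its objective value
BO_demo_parallel.x_opt, BO_demo_parallel.fx_opt
```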
|
github_jupyter
|
from IPython.display import HTML
HTML('<iframe src=http://arxiv.org/pdf/1505.08052v4.pdf width=700 height=550></iframe>')
%pylab inline
import GPyOpt
# --- Objective function
objective_true = GPyOpt.objective_examples.experiments2d.branin() # true function
objective_noisy = GPyOpt.objective_examples.experiments2d.branin(sd = 0.1) # noisy version
bounds = objective_noisy.bounds
domain = [{'name': 'var_1', 'type': 'continuous', 'domain': bounds[0]}, ## use default bounds
{'name': 'var_2', 'type': 'continuous', 'domain': bounds[1]}]
objective_true.plot()
batch_size = 4
num_cores = 4
from numpy.random import seed
seed(123)
BO_demo_parallel = GPyOpt.methods.BayesianOptimization(f=objective_noisy.f,
domain = domain,
acquisition_type = 'EI',
normalize_Y = True,
initial_design_numdata = 10,
evaluator_type = 'local_penalization',
batch_size = batch_size,
num_cores = num_cores,
acquisition_jitter = 0)
# --- Run the optimization for 10 iterations
max_iter = 10
BO_demo_parallel.run_optimization(max_iter)
BO_demo_parallel.plot_acquisition()
BO_demo_parallel.plot_convergence()
| 0.60964 | 0.964954 |
## Recommender System Algorithm
### Objective
We want to help consumers find attorneys. To surface attorneys to consumers, sales consultants often have to help attorneys describe their areas of practice (areas like Criminal Defense, Business or Personal Injury).
To expand their practices, attorneys can branch into related areas of practice. This can allow attorneys to help different customers while remaining within the bounds of their experience.
Attached is an anonymized dataset of attorneys and their specialties. The columns are anonymized attorney IDs and specialty IDs. Please design a process that returns the top 5 recommended practice areas for a given attorney with a set of specialties.
## Data
```
import pandas as pd
import numpy as np
from sklearn.preprocessing import normalize
# Import data
data = pd.read_excel('data.xlsx', 'data')
data.shape
# View first few rows of the dataset
data.head()
```
## 3. Data Exploration
```
# Information of the dataset
data.info()
# Check missing values
data.isnull().sum()
# Check duplicates
data.duplicated().sum()
# Check unique value count for the two ID's
data['attorney_id'].nunique(), data['specialty_id'].nunique()
data['specialty_id'].value_counts()
# Check number of specialties per attorney
data.groupby('attorney_id')['specialty_id'].nunique().sort_values()
```
The number of specialties of an attorney ranges from 1 to 28.
```
# View a sample: an attorney with 28 specialties
data[data['attorney_id']==157715]
```
## Recommendation System
### Recommendation for Top K Practice Areas based on Similarity for Specialties
#### Step 1: Build the specialty-attorney matrix
```
# Build the specialty-attorney matrix
specialty_attorney = data.groupby(['specialty_id','attorney_id'])['attorney_id'].count().unstack(fill_value=0)
specialty_attorney = (specialty_attorney > 0).astype(int)
specialty_attorney
```
#### Step 2: Build specialty-specialty similarity matrix
```
# Build specialty-specialty similarity matrix
specialty_attorney_norm = normalize(specialty_attorney, axis=1)
similarity = np.dot(specialty_attorney_norm, specialty_attorney_norm.T)
df_similarity = pd.DataFrame(similarity, index=specialty_attorney.index, columns=specialty_attorney.index)
df_similarity
```
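Since each specialty row was L2-normalized before the dot product, the entries above are cosine similarities between the specialties' attorney vectors. As a quick sanity check (assuming every specialty is held by at least one attorney), the diagonal should be all ones:
```
np.allclose(np.diag(similarity), 1.0)
```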
#### Step 3: Find the Top K most similar specialties
```
# Find the top k most similar specialties
def topk_specialty(specialty, similarity, k):
result = similarity.loc[specialty].sort_values(ascending=False)[1:k + 1].reset_index()
result = result.rename(columns={'specialty_id': 'Specialty_Recommend', specialty: 'Similarity'})
return result
```
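To address the objective stated at the top (recommendations for an attorney with a *set* of specialties), here is a hedged sketch that averages similarity over the specialties the attorney already holds and excludes those from the output; the function name and the mean aggregation are assumptions, not part of the original notebook.
```
# Sketch: top-k new practice areas for one attorney, assuming `data` and `df_similarity` above
def topk_for_attorney(attorney_id, data, similarity, k=5):
    owned = data.loc[data['attorney_id'] == attorney_id, 'specialty_id'].unique()
    scores = similarity.loc[owned].mean(axis=0)          # average similarity to each specialty
    scores = scores.drop(labels=owned, errors='ignore')  # drop specialties already held
    return scores.sort_values(ascending=False).head(k)

# Example with the attorney shown earlier:
# topk_for_attorney(157715, data, df_similarity, k=5)
```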
### Testing Recommender System based on Similarity
#### Process:
1. Ask user to input the ID of his/her obtained specialties
2. The system will recommend top 5 practice areas for the user's specialties based on similarity
```
# Test on a specialty sample 1
user_input1 = int(input('Please input your specialty ID: '))
recommend_user1 = topk_specialty(specialty=user_input1, similarity=df_similarity, k=5)
print('Top 5 recommended practice areas for user 1:')
print('--------------------------------------------')
print(recommend_user1)
# Test on a specialty sample 2
user_input2 = int(input('Please input your specialty ID: '))
recommend_user2 = topk_specialty(specialty=user_input2, similarity=df_similarity, k=5)
print('Top 5 recommended practice areas for user 2:')
print('--------------------------------------------')
print(recommend_user2)
```
### Popularity-based Recommendation - If the user requests a recommendation based on popularity
```
# Get ranked specialties based on popularity
df_specialty_popular = data.groupby('specialty_id')['attorney_id'].nunique().sort_values(ascending=False)
df_specialty_popular
# Top 5 specialties based on popularity among attorneys
df_specialty_popular = df_specialty_popular.rename('count_popular')
print('The 5 most popular specialties:')
print('--------------------------------')
print(df_specialty_popular.nlargest(5, keep='all'))
```
|
github_jupyter
|
import pandas as pd
import numpy as np
from sklearn.preprocessing import normalize
# Import data
data = pd.read_excel('data.xlsx', 'data')
data.shape
# View first few rows of the dataset
data.head()
# Information of the dataset
data.info()
# Check missing values
data.isnull().sum()
# Check duplicates
data.duplicated().sum()
# Check unique value count for the two ID's
data['attorney_id'].nunique(), data['specialty_id'].nunique()
data['specialty_id'].value_counts()
# Check number of specialties per attorney
data.groupby('attorney_id')['specialty_id'].nunique().sort_values()
# View a sample: an attorney with 28 specialties
data[data['attorney_id']==157715]
# Build the specialty-attorney matrix
specialty_attorney = data.groupby(['specialty_id','attorney_id'])['attorney_id'].count().unstack(fill_value=0)
specialty_attorney = (specialty_attorney > 0).astype(int)
specialty_attorney
# Build specialty-specialty similarity matrix
specialty_attorney_norm = normalize(specialty_attorney, axis=1)
similarity = np.dot(specialty_attorney_norm, specialty_attorney_norm.T)
df_similarity = pd.DataFrame(similarity, index=specialty_attorney.index, columns=specialty_attorney.index)
df_similarity
# Find the top k most similar specialties
def topk_specialty(specialty, similarity, k):
result = similarity.loc[specialty].sort_values(ascending=False)[1:k + 1].reset_index()
result = result.rename(columns={'specialty_id': 'Specialty_Recommend', specialty: 'Similarity'})
return result
# Test on a specialty sample 1
user_input1 = int(input('Please input your specialty ID: '))
recommend_user1 = topk_specialty(specialty=user_input1, similarity=df_similarity, k=5)
print('Top 5 recommended practice areas for user 1:')
print('--------------------------------------------')
print(recommend_user1)
# Test on a specialty sample 2
user_input2 = int(input('Please input your specialty ID: '))
recommend_user2 = topk_specialty(specialty=user_input2, similarity=df_similarity, k=5)
print('Top 5 recommended practice areas for user 2:')
print('--------------------------------------------')
print(recommend_user2)
# Get ranked specialties based on popularity
df_specialty_popular = data.groupby('specialty_id')['attorney_id'].nunique().sort_values(ascending=False)
df_specialty_popular
# Top 5 specialties based on popularity among attorneys
df_specialty_popular = df_specialty_popular.rename('count_popular')
print('The 5 most popular specialties:')
print('--------------------------------')
print(df_specialty_popular.nlargest(5, keep='all'))
| 0.44746 | 0.950778 |
## Tabular data handling
This module defines the main class to handle tabular data in the fastai library: [`TabularDataBunch`](/tabular.data.html#TabularDataBunch). As always, there is also a helper function to quickly get your data.
To allow you to easily create a [`Learner`](/basic_train.html#Learner) for your data, it provides [`tabular_learner`](/tabular.learner.html#tabular_learner).
```
from fastai.gen_doc.nbdoc import *
from fastai.tabular import *
show_doc(TabularDataBunch)
```
The best way to quickly get your data in a [`DataBunch`](/basic_data.html#DataBunch) suitable for tabular data is to organize it in two (or three) dataframes. One for training, one for validation, and if you have it, one for testing. Here we are interested in a subsample of the [adult dataset](https://archive.ics.uci.edu/ml/datasets/adult).
```
path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path/'adult.csv')
valid_idx = range(len(df)-2000, len(df))
df.head()
cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country']
dep_var = 'salary'
```
The initialization of [`TabularDataBunch`](/tabular.data.html#TabularDataBunch) is the same as [`DataBunch`](/basic_data.html#DataBunch) so you really want to use the factory method instead.
```
show_doc(TabularDataBunch.from_df)
```
Optionally, use `test_df` for the test set. The dependent variable is `dep_var`, while the categorical and continuous variables are in the `cat_names` columns and `cont_names` columns respectively. If `cont_names` is None then we assume all variables that aren't dependent or categorical are continuous. The [`TabularProcessor`](/tabular.data.html#TabularProcessor) in `procs` are applied to the dataframes as preprocessing, then the categories are replaced by their codes+1 (leaving 0 for `nan`) and the continuous variables are normalized.
Note that the [`TabularProcessor`](/tabular.data.html#TabularProcessor) should be passed as `Callable`: the actual initialization with `cat_names` and `cont_names` is done during the preprocessing.
```
procs = [FillMissing, Categorify, Normalize]
data = TabularDataBunch.from_df(path, df, dep_var, valid_idx=valid_idx, procs=procs, cat_names=cat_names)
```
You can then easily create a [`Learner`](/basic_train.html#Learner) for this data with [`tabular_learner`](/tabular.learner.html#tabular_learner).
```
show_doc(tabular_learner)
```
`emb_szs` is a `dict` mapping categorical column names to embedding sizes; you only need to pass sizes for columns where you want to override the default behaviour of the model.
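As a minimal sketch (the layer sizes, the overridden column, and the metric are illustrative choices, not from the original), a learner with a custom embedding size for one categorical column could be created like this:
```
# Override only the 'native-country' embedding size; defaults are used for the other columns
learn = tabular_learner(data, layers=[200,100], emb_szs={'native-country': 10}, metrics=accuracy)
learn.fit_one_cycle(1, 1e-2)
```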
```
show_doc(TabularList)
```
Basic class to create a list of inputs in `items` for tabular data. `cat_names` and `cont_names` are the names of the categorical and the continuous variables respectively. `processor` will be applied to the inputs or one will be created from the transforms in `procs`.
```
show_doc(TabularList.from_df)
show_doc(TabularList.get_emb_szs)
show_doc(TabularList.show_xys)
show_doc(TabularList.show_xyzs)
show_doc(TabularLine, doc_string=False)
```
An object that will contain the encoded `cats`, the continuous variables `conts`, the `classes` and the `names` of the columns. This is the basic input for a dataset dealing with tabular data.
```
show_doc(TabularProcessor)
```
Create a [`PreProcessor`](/data_block.html#PreProcessor) from `procs`.
## Undocumented Methods - Methods moved below this line will intentionally be hidden
```
show_doc(TabularProcessor.process_one)
show_doc(TabularList.new)
show_doc(TabularList.get)
show_doc(TabularProcessor.process)
show_doc(TabularList.reconstruct)
```
## New Methods - Please document or move to the undocumented section
|
github_jupyter
|
from fastai.gen_doc.nbdoc import *
from fastai.tabular import *
show_doc(TabularDataBunch)
path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path/'adult.csv')
valid_idx = range(len(df)-2000, len(df))
df.head()
cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country']
dep_var = 'salary'
show_doc(TabularDataBunch.from_df)
procs = [FillMissing, Categorify, Normalize]
data = TabularDataBunch.from_df(path, df, dep_var, valid_idx=valid_idx, procs=procs, cat_names=cat_names)
show_doc(tabular_learner)
show_doc(TabularList)
show_doc(TabularList.from_df)
show_doc(TabularList.get_emb_szs)
show_doc(TabularList.show_xys)
show_doc(TabularList.show_xyzs)
show_doc(TabularLine, doc_string=False)
show_doc(TabularProcessor)
show_doc(TabularProcessor.process_one)
show_doc(TabularList.new)
show_doc(TabularList.get)
show_doc(TabularProcessor.process)
show_doc(TabularList.reconstruct)
| 0.427994 | 0.99045 |
# The data block API
```
from fastai.gen_doc.nbdoc import *
from fastai.vision import *
from fastai import *
```
The data block API lets you customize how to create a [`DataBunch`](/basic_data.html#DataBunch) by isolating the underlying parts of that process in separate blocks, mainly:
- where are the inputs
- how to label them
- how to split the data into a training and validation set
- what type of [`Dataset`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset) to create
- possible transforms to apply
- how to warp in dataloaders and create the [`DataBunch`](/basic_data.html#DataBunch)
This is a bit longer than using the factory methods but is way more flexible. As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts.
## Examples of use
In [`vision.data`](/vision.data.html#vision.data), we create an easy [`DataBunch`](/basic_data.html#DataBunch) suitable for classification by simply typing:
```
path = untar_data(URLs.MNIST_TINY)
tfms = get_transforms(do_flip=False)
data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24)
```
This is aimed at data that is in folders following an ImageNet style, with train and valid directories each containing one subdirectory per class, where all the pictures are. With the data block API, the same thing is achieved like this:
```
path = untar_data(URLs.MNIST_TINY)
tfms = get_transforms(do_flip=False)
path.ls()
(path/'train').ls()
data = (ImageFileList.from_folder(path) #Where to find the data? -> in path and its subfolders
.label_from_folder() #How to label? -> depending on the folder of the filenames
.split_by_folder() #How to split in train/valid? -> use the folders
.add_test_folder() #Optionally add a test set
.datasets() #How to convert to datasets?
.transform(tfms, size=224) #Data augmentation? -> use tfms with a size of 224
.databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch
data.train_ds[0]
data.show_batch(rows=3, figsize=(5,5))
data.valid_ds.classes
```
Let's look at another example from [`vision.data`](/vision.data.html#vision.data) with the planet dataset. This time, it's a multi-label classification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is:
```
planet = untar_data(URLs.PLANET_TINY)
planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.)
data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms)
```
With the data block API we can rewrite this like that:
```
data = (ImageFileList.from_folder(planet)
#Where to find the data? -> in planet and its subfolders
.label_from_csv('labels.csv', sep=' ', folder='train', suffix='.jpg')
#How to label? -> use the csv file labels.csv in path,
#add .jpg to the names and take them in the folder train
.random_split_by_pct()
#How to split in train/valid? -> randomly with the default 20% in valid
.datasets()
#How to convert to datasets? -> use ImageMultiDataset
.transform(planet_tfms, size=128)
#Data augmentation? -> use tfms with a size of 128
.databunch())
#Finally? -> use the defaults for conversion to databunch
data.show_batch(rows=3, figsize=(10,8))
```
The data block API also allows you to use dataset types for which there is no direct [`ImageDataBunch`](/vision.data.html#ImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.html#DataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder.
```
camvid = untar_data(URLs.CAMVID_TINY)
path_lbl = camvid/'labels'
path_img = camvid/'images'
```
We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...)
```
codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes
```
And we define the following function that infers the mask filename from the image filename.
```
get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}'
```
Then we can easily define a [`DataBunch`](/basic_data.html#DataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image.
```
data = (ImageFileList.from_folder(path_img) #Where are the input files? -> in path_img
.label_from_func(get_y_fn) #How to label? -> use get_y_fn
.random_split_by_pct() #How to split between train and valid? -> randomly
.datasets(SegmentationDataset, classes=codes) #How to create a dataset? -> use SegmentationDataset
.transform(get_transforms(), size=96, tfm_y=True) #Data aug -> Use standard tfms with tfm_y=True
.databunch(bs=64)) #Lastly convert in a databunch.
data.show_batch(rows=2, figsize=(5,5))
```
One last example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/#home) here. There is a helper function in the library that reads the annotation file and returns the list of image names with the list of labelled bboxes associated with each. We convert it to a dictionary that maps image names to their bboxes and then write the function that will give us the target for each image filename.
```
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
img2bbox = {img:bb for img, bb in zip(images, lbl_bbox)}
get_y_func = lambda o:img2bbox[o.name]
```
The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples into batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes.
```
data = (ImageFileList.from_folder(coco)
#Where are the images? -> in coco
.label_from_func(get_y_func)
#How to find the labels? -> use get_y_func
.random_split_by_pct()
#How to split in train/valid? -> randomly with the default 20% in valid
.datasets(ObjectDetectDataset)
#How to create datasets? -> with ObjectDetectDataset
#Data augmentation? -> Standard transforms with tfm_y=True
.databunch(bs=16, collate_fn=bb_pad_collate))
#Finally we convert to a DataBunch and we use bb_pad_collate
data.show_batch(rows=3, ds_type=DatasetType.Valid, figsize=(8,7))
```
## Provide inputs
The inputs we want to feed our model are regrouped in the following class. The class contains methods to get the corresponding labels.
```
show_doc(InputList, title_level=3, doc_string=False)
```
This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...)
```
show_doc(InputList.from_folder)
```
Note that [`InputList`](/data_block.html#InputList) is subclassed in vision by [`ImageFileList`](/vision.data.html#ImageFileList) that changes the default of `extensions` to image file extensions (which is why we used [`ImageFileList`](/vision.data.html#ImageFileList) in our previous examples).
## Labelling the inputs
All of the following are methods of [`InputList`](/data_block.html#InputList). Note that some of them are primarily intended for inputs that are filenames and might not work in general situations.
```
show_doc(InputList.label_from_csv)
```
If a `folder` is specified, filenames are taken in `self.path/folder`. `suffix` is added. If `sep` is specified, splits the values in `label_col` accordingly. This method is intended for inputs that are filenames.
```
jekyll_note("This method will only keep the filenames that are both present in the csv file and in `self.items`.")
show_doc(InputList.label_from_df)
jekyll_note("This method will only keep the filenames that are both present in the dataframe and in `self.items`.")
show_doc(InputList.label_from_folder)
jekyll_note("This method looks at the last subfolder in the path to determine the classes.")
show_doc(InputList.label_from_func)
```
This method is primarily intended for inputs that are filenames, but could work in other settings.
```
show_doc(InputList.label_from_re)
show_doc(LabelList, title_level=3, doc_string=False)
```
A list of labelled inputs in `items` (expected to be tuples of input, label) with a `path` attribute. This class contains methods to create `SplitDataset`.
## Split the data between train and validation.
The following functions are methods of [`LabelList`](/data_block.html#LabelList), to create a [`SplitData`](/data_block.html#SplitData) in different ways.
```
show_doc(LabelList.random_split_by_pct)
show_doc(LabelList.split_by_files)
show_doc(LabelList.split_by_fname_file)
show_doc(LabelList.split_by_folder)
jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.")
show_doc(LabelList.split_by_idx)
show_doc(SplitData, title_level=3)
```
You won't normally construct a [`SplitData`](/data_block.html#SplitData) yourself, but instead will use one of the `split*` methods in [`LabelList`](/data_block.html#LabelList).
```
show_doc(SplitData.datasets)
show_doc(SplitData.add_test)
```
## Create datasets
To create the datasets from [`SplitData`](/data_block.html#SplitData) we have the following class method.
```
show_doc(SplitData.datasets)
show_doc(SplitDatasets, title_level=3)
```
This class can be constructed directly from one of the following factory methods.
```
show_doc(SplitDatasets.from_single)
show_doc(SplitDatasets.single_from_c)
show_doc(SplitDatasets.single_from_classes)
```
Then we can build the [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader) around our [`Dataset`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset) like this.
```
show_doc(SplitDatasets.dataloaders)
```
The methods `img_transform` and `img_databunch` used earlier are documented in [`vision.data`](/vision.data.html#vision.data).
## Utility classes
```
show_doc(ItemList, title_level=3)
show_doc(PathItemList, title_level=3)
```
|
github_jupyter
|
from fastai.gen_doc.nbdoc import *
from fastai.vision import *
from fastai import *
path = untar_data(URLs.MNIST_TINY)
tfms = get_transforms(do_flip=False)
data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24)
path = untar_data(URLs.MNIST_TINY)
tfms = get_transforms(do_flip=False)
path.ls()
(path/'train').ls()
data = (ImageFileList.from_folder(path) #Where to find the data? -> in path and its subfolders
.label_from_folder() #How to label? -> depending on the folder of the filenames
.split_by_folder() #How to split in train/valid? -> use the folders
.add_test_folder() #Optionally add a test set
.datasets() #How to convert to datasets?
.transform(tfms, size=224) #Data augmentation? -> use tfms with a size of 224
.databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch
data.train_ds[0]
data.show_batch(rows=3, figsize=(5,5))
data.valid_ds.classes
planet = untar_data(URLs.PLANET_TINY)
planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.)
data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms)
data = (ImageFileList.from_folder(planet)
#Where to find the data? -> in planet and its subfolders
.label_from_csv('labels.csv', sep=' ', folder='train', suffix='.jpg')
#How to label? -> use the csv file labels.csv in path,
#add .jpg to the names and take them in the folder train
.random_split_by_pct()
#How to split in train/valid? -> randomly with the default 20% in valid
.datasets()
#How to convert to datasets? -> use ImageMultiDataset
.transform(planet_tfms, size=128)
#Data augmentation? -> use tfms with a size of 128
.databunch())
#Finally? -> use the defaults for conversion to databunch
data.show_batch(rows=3, figsize=(10,8))
camvid = untar_data(URLs.CAMVID_TINY)
path_lbl = camvid/'labels'
path_img = camvid/'images'
codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes
get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}'
data = (ImageFileList.from_folder(path_img) #Where are the input files? -> in path_img
.label_from_func(get_y_fn) #How to label? -> use get_y_fn
.random_split_by_pct() #How to split between train and valid? -> randomly
.datasets(SegmentationDataset, classes=codes) #How to create a dataset? -> use SegmentationDataset
.transform(get_transforms(), size=96, tfm_y=True) #Data aug -> Use standard tfms with tfm_y=True
.databunch(bs=64)) #Lastly convert in a databunch.
data.show_batch(rows=2, figsize=(5,5))
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
img2bbox = {img:bb for img, bb in zip(images, lbl_bbox)}
get_y_func = lambda o:img2bbox[o.name]
data = (ImageFileList.from_folder(coco)
#Where are the images? -> in coco
.label_from_func(get_y_func)
#How to find the labels? -> use get_y_func
.random_split_by_pct()
#How to split in train/valid? -> randomly with the default 20% in valid
.datasets(ObjectDetectDataset)
#How to create datasets? -> with ObjectDetectDataset
#Data augmentation? -> Standard transforms with tfm_y=True
.databunch(bs=16, collate_fn=bb_pad_collate))
#Finally we convert to a DataBunch and we use bb_pad_collate
data.show_batch(rows=3, ds_type=DatasetType.Valid, figsize=(8,7))
show_doc(InputList, title_level=3, doc_string=False)
show_doc(InputList.from_folder)
show_doc(InputList.label_from_csv)
jekyll_note("This method will only keep the filenames that are both present in the csv file and in `self.items`.")
show_doc(InputList.label_from_df)
jekyll_note("This method will only keep the filenames that are both present in the dataframe and in `self.items`.")
show_doc(InputList.label_from_folder)
jekyll_note("This method looks at the last subfolder in the path to determine the classes.")
show_doc(InputList.label_from_func)
show_doc(InputList.label_from_re)
show_doc(LabelList, title_level=3, doc_string=False)
show_doc(LabelList.random_split_by_pct)
show_doc(LabelList.split_by_files)
show_doc(LabelList.split_by_fname_file)
show_doc(LabelList.split_by_folder)
jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.")
show_doc(LabelList.split_by_idx)
show_doc(SplitData, title_level=3)
show_doc(SplitData.datasets)
show_doc(SplitData.add_test)
show_doc(SplitData.datasets)
show_doc(SplitDatasets, title_level=3)
show_doc(SplitDatasets.from_single)
show_doc(SplitDatasets.single_from_c)
show_doc(SplitDatasets.single_from_classes)
show_doc(SplitDatasets.dataloaders)
show_doc(ItemList, title_level=3)
show_doc(PathItemList, title_level=3)
| 0.6137 | 0.980072 |
#### AFSK Demodulator
## Step 4: Low Pass Filter
This is a Pynq portion of the AFSK demodulator project. We will be using the FPGA overlay that we created in Vivado.
At this point we have created the bitstream for "project_04" and copied the bitstream, TCL wrapper, and hardware hand-off file to the Pynq board.
Let's first verify that we can load the module.
```
from pynq import Overlay, Xlnk
import numpy as np
import pynq.lib.dma
overlay = Overlay('project_04.bit')
dma = overlay.demodulator.dma
```
## Accelerating FIR Filters
Below is the implementation of the AFSK demodulator in Python. We are now going to remove the low pass filter code and replace it with new code.
```
import sys
sys.path.append('../../base')
import numpy as np
from scipy.signal import lfiltic, lfilter, firwin
from scipy.io.wavfile import read
from DigitalPLL import DigitalPLL
from HDLC import HDLC
from AX25 import AX25
import time
block_size = 2640
xlnk = Xlnk()
def demod(data):
start_time = time.time()
output = np.array([],dtype=np.bool)
with xlnk.cma_array(shape=(block_size,), dtype=np.int16) as out_buffer, \
xlnk.cma_array(shape=(block_size,), dtype=np.int8) as in_buffer:
for i in range(0, len(data), block_size):
out_buffer[:len(data[i:i+block_size])] = data[i:i+block_size]
dma.sendchannel.transfer(out_buffer)
dma.recvchannel.transfer(in_buffer)
dma.sendchannel.wait()
dma.recvchannel.wait()
output = np.append(output, in_buffer)
stop_time = time.time()
sw_exec_time = stop_time - start_time
print('FPGA demodulator execution time: ',sw_exec_time)
return output
class NRZI:
def __init__(self):
self.state = False
def __call__(self, x):
result = (x == self.state)
self.state = x
return result
audio_file = read('../../base/TNC_Test_Ver-1.102-26400-1sec.wav')
sample_rate = audio_file[0]
audio_data = audio_file[1]
delay = 12 # ~446us
bpf_delay = 70
lpf_delay = 50
filter_delay = bpf_delay + lpf_delay
# demodulate the audio data
d = demod(audio_data[:26400])
# like before, the sign has changed. We need to revert that before it goes into the PLL
dx = np.append(d, demod(np.zeros(filter_delay)))[filter_delay:] * -1
print(dx[:16], len(dx))
# Create the PLL
pll = DigitalPLL(sample_rate, 1200.0)
locked = np.zeros(len(dx), dtype=int)
sample = np.zeros(len(dx), dtype=int)
# Clock recovery
for i in range(len(dx)):
sample[i] = pll(dx[i])
locked[i] = pll.locked()
nrzi = NRZI()
data = [int(nrzi(x)) for x,y in zip(dx, sample) if y]
hdlc = HDLC()
for b,s,l in zip(dx, sample, locked):
if s:
packet = hdlc(nrzi(b), l)
if packet is not None:
print(AX25(packet[1]))
# xlnk.xlnk_reset()
```
# Numerical Analysis - 8
###### Rafael Barsotti
#### 1) Implement Euler's method to solve the initial value problem (IVP) $x' = x^{1/3}$, $x(0) = 0$. What happens? (Note that this problem has more than one analytic solution.)
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import math as m
# Question 1 - Euler's method
# Function f(x,t)
def f1(x):
y = x**(1/3)
return y
# Euler's method
def euler_method(a,b,f1,x0,t0,n):
D = np.array([[0,x0]])
h = (b-a)/n
t = t0
x = x0
for i in range(n):
x = x + h*f1(x)
t = t + h
D = np.append(D,[[t,x]], axis = 0)
return D
# Plot the ODE solution
def edo_plot(D):
x = D[:,0]
y = D[:,1]
plt.plot(x,y, 'ro', color = 'b')
plt.show()
def euler_plot(D):
x = D[:,0]
y = D[:,1]
plt.plot(x,y, color = 'b')
plt.show()
a = euler_method(0,100,f1,0,0,30)
edo_plot(a)
euler_plot(a)
```
#### 2) Consider Heun's method, also known as the trapezoidal method for ODEs or the improved Euler method, given by:
#### $\overline{x}(t + h) = x(t) + hf(t, x(t))$
#### $x(t + h) = x(t) + \frac{h}{2}[f(t, x(t)) + f(t + h, \overline{x}(t + h))]$
#### (a) Use Heun's method (by hand!) to obtain a solution of the IVP $x' = -x + t + \frac{1}{2}$, $x(0) = 1$ on the interval $[0, 1]$ with $h = 0.1$. Interpolating the points with an order-1 spline, obtain the so-called Euler polygonal approximation.
```
# Question 2a - Heun's method
# Function f(x,t)
def f2(x,t):
    y = -x + t + 1/2
    return y
# Heun's method
def heun_method(f2,n):
    t = 0
    x = 1
    h = 0.1
    D = np.array([[t,x]])
    for i in range(n):
        xbarra = x + h*f2(x,t)
        x = x + h/2*(f2(x,t)+f2(xbarra,t+h))
        t = t + h
        print(t,xbarra,x)
heun_method(f2,10)
```
#### (b) Implement Heun's method to obtain a solution of the IVP $x' = -100x^2$, $x(0) = 1$ with $h = 0.1$. Now replace $\overline{x}(t + h)$ by $x(t + h)$. Explain what happens.
```
# Question 2b - Heun's method
# Function f(x,t)
def f2(x):
y = -100*(x**2)
return y
# Heun's method
def heun_method(f2,n):
t = 0
x = 1
h = 0.1
D = np.array([[t,x]])
for i in range(n):
xbarra = x + h*f2(x)
x = x + h/2*(f2(x)+f2(xbarra))
t = t + h
D = np.append(D,[[t,x]], axis = 0)
return D
```
#### 3) Show that Heun's method is a Runge-Kutta method. What is its order?
#### 4) Consider the IVP $x' = (tx)^3 - (\frac{x}{t})^2$, $x(1) = 1$. Use (by hand) the Taylor and second-order Runge-Kutta methods to obtain approximations of $x(1 + h)$ with $h = 0.1$. Compare the answers.
#### 5a) Solve the IVP $x' = 10x - 5t^2 + 11t - 1$, $x(0) = 0$. With $h = 2^{-8}$, obtain a computational solution of the IVP on the interval $[0, 3]$ using the RK4 method described in class. Plot the analytic solution together with the polygonal approximation obtained from the RK4 points.
```
# Question 5a - RK4 method
# Function f(x,t)
def f3(x,t):
y = 10*x - 5*(t**2) + 11*t - 1
return y
def f3_analytic(x,n,h):
t = 0
c1 = x
D = np.array([[t,x]])
for i in range(n):
t = t + h
x = c1*m.e**(10*t) + (t**2)/2 - t
D = np.append(D,[[t,x]], axis = 0)
return D
# RK4 method
def rk4_method(f3,x,t,h,n):
D = np.array([[t,x]])
for i in range(n):
K1 = h*f3(x,t)
K2 = h*f3(x+(1/2*K1),t+(h*1/2))
K3 = h*f3(x+(1/2*K2),t+(h*1/2))
K4 = h*f3(x+K3,t+h)
x = x + 1/6*(K1 + 2*K2 + 2*K3 + K4)
t = t + h
D = np.append(D,[[t,x]], axis = 0)
return D
def erro_global(d1,d2):
    # maximum absolute distance between the analytic and numerical solutions
    e = np.abs(d2[:,1] - d1[:,1])
    error = np.amax(e)
    print("The global error is {}".format(error))
# Analytic solution with c = 0
D = f3_analytic(0,768,2**-8)
euler_plot(D)
# Plot RK4
h = 2**-8
d = rk4_method(f3,0,0,h,768)
edo_plot(d)
euler_plot(d)
```
#### 5b) Redo the previous item replacing the initial condition by $x(0) = \epsilon$, with $\epsilon = 0.0001$. Obtain the global error, that is, the maximum distance between the analytic solution and the numerical approximation.
```
# Plot analytic solution with c1 = 0.0001
D = f3_analytic(0.0001,768,2**-8)
euler_plot(D)
# Plot RK4
h = 2**-8
e = 0.0001
d = rk4_method(f3,e,0,h,768)
edo_plot(d)
euler_plot(d)
erro_global(d,D)
```
#### 6) Determine whether the solutions of the ODE $x' = t(x^3 - 6x^2 + 15x)$ converge to or diverge from one another.
# Conducting a simulation
Running a simulation means taking a model and sampling some sort of distribution with it.
![Example simulation](/images/mm/particle-box.gif)
## Recapping molecular modelling
Remember, our **model** from a molecular modelling perspective is the **potential energy**, which depends
on the coordinates of every atom or particle in the system. We can either model the system energy
using **QM or MM** methods. QM methods are more accurate, but more expensive. MM methods simplify away
some of the less-relevant details (this depends on your system), make some approximations, and allow us to
study larger and slower systems.
## The Boltzmann distribution
The **Boltzmann distribution** describes the probability of observing **states** as a function of their energy and other **thermodynamic variables** (like the temperature). Delving into the thermodynamic theory, *the Boltzmann distribution is the distribution that maximizes a system's entropy*, so this is a physically-rooted distribution. Concisely put into an equation:
$\Huge p_i \propto e^{-E_i/k_BT}$
where $p_i$ is the probability of a state, $E_i$ is the energy of that state, $k_B$ is Boltzmann's constant, and $T$ is the temperature
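To make this concrete, here's a quick numpy sketch of how relative Boltzmann probabilities fall off with energy — the specific energies and temperature are just made-up illustrative values.
```
import numpy as np

kB = 1.380649e-23          # Boltzmann constant in J/K
T = 300.0                  # assumed temperature in K
# hypothetical state energies, chosen as multiples of kB*T purely for illustration
energies = np.array([0.0, 1.0, 2.0, 5.0]) * kB * T

weights = np.exp(-energies / (kB * T))   # unnormalized Boltzmann factors
probs = weights / weights.sum()          # normalized probabilities
print(probs)  # higher-energy states are exponentially less probable
```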
## What is a 'state'?
In the Boltzmann distribution, a state refers to an energetic state (which can be associated with a chemical structure's 3D coordinates).
Going further, depending on our thermodynamic conditions, we have **macrostates** that describe a system's
macroscopic properties (like temperature, pressure, volume, energy, number of particles). There is a set of **microstates** that can satisfy or achieve a particular macrostate.
For example, if you had 3 coins, you could have a macrostate consisting of 2 Tails and 1 Head. The corresponding microstates would be HTT, THT, and TTH.
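If you want to convince yourself of that counting, a couple of lines of Python can enumerate the toy coin microstates directly:
```
from itertools import product

# all 2^3 microstates of 3 coins
microstates = [''.join(s) for s in product('HT', repeat=3)]

# the macrostate "2 Tails and 1 Head" is realized by microstates with exactly 2 T's
macrostate = [s for s in microstates if s.count('T') == 2]
print(macrostate)        # ['HTT', 'THT', 'TTH']
print(len(macrostate))   # 3 microstates realize this macrostate
```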
# Application to molecular simulation
One often overlooked fact is that *all molecules move around, a lot or a little* (unless you're at absolute zero, but that's not the point). Thermal motion means that every atom vibrates a little bit - every molecule can wiggle ever so slightly or fly around. However, *the physical phenomenon that atoms move around is the whole reason we have a distribution of configurations (coordinates)*.
Under the Boltzmann distribution, the probability of witnessing a chemical microstate (a particular set of coordinates that a chemical configuration occupies) is related to the energy of that state.
*If a particular configuration is high-energy, we probably won't witness it. If it is low-energy, there is a good chance we will.*
## Monte Carlo methods
Monte Carlo (MC) sampling is not unique to molecular simulation, but molecular modellers do like to implement MC methods.
Briefly, MC methods involve a trial where you try to change/alter some part of your system.
In molecular modeling, your *MC trial moves involve altering your configuration* (rotating a molecule, displacing an atom, stretching a bond, etc.)
The *choice to accept this move depends on the energy before and after the trial move*. If the energy is lower, we accept the move and proceed with the simulation. If the energy is higher, we calculate the relative probabilities (according to the Boltzmann distribution), and compare that to a randomly-generated number; we either reject the move and propose a new one or accept and proceed.
There are lots of different algorithms, but a common one in the molecular modelling field is the **Metropolis-Hastings** algorithm
If you *sample a lot of configurations*, you can eventually get a good idea of the distribution of various configurations of your system. From this resultant sample or **trajectory**, we can start computing various (static) properties. By nature of the sampling, the configurations are somewhat independent and uncorrelated compared to other sampling methods
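Here's a minimal sketch of what one Metropolis trial move looks like, assuming some user-supplied `energy_fn(coords)` stands in for the molecular model; a real MC code would only recompute the energy terms touched by the move rather than the full system energy.
```
import numpy as np

rng = np.random.default_rng(0)

def metropolis_step(coords, energy_fn, beta, max_disp=0.1):
    """One trial move: displace a random particle, then accept or reject."""
    trial = coords.copy()
    i = rng.integers(len(coords))                          # pick one particle
    trial[i] += rng.uniform(-max_disp, max_disp, size=3)   # random displacement
    dE = energy_fn(trial) - energy_fn(coords)              # energy change of the move
    # always accept downhill moves; accept uphill moves with Boltzmann probability
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        return trial, True
    return coords, False

# toy usage: 10 particles in a (hypothetical) harmonic trap
coords = rng.normal(size=(10, 3))
coords, accepted = metropolis_step(coords, lambda x: 0.5 * np.sum(x**2), beta=1.0)
```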
## Molecular dynamics methods
$\Huge F \; = \; ma$
In molecular dynamics (MD) sampling, we utilize kinetic energy and momentum to actually simulate the motion of these atoms. This is where we bring **Newton's laws of motion** in order to physically capture these motions - *the acceleration on an object is related to the forces acting upon it*
To compute the forces acting upon each atom, we look back to another physical relationship - *force is the negative derivative of energy with respect to distance*. This works well because now we can *relate motion to our molecular model*; given the energy of our system, compute the gradient to get the forces, and these forces dictate the acceleration
$\Huge F(\vec r) = -\nabla U(\vec r)$
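As a small sketch of that relationship, here is the Lennard-Jones pair potential (a common nonbonded form in MM force fields) with its force written as the negative derivative, checked against a numerical gradient; the epsilon/sigma values are arbitrary reduced units.
```
import numpy as np

def lj_potential(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair potential U(r) in reduced units."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6**2 - sr6)

def lj_force(r, epsilon=1.0, sigma=1.0):
    """F(r) = -dU/dr for the Lennard-Jones pair potential."""
    sr6 = (sigma / r) ** 6
    return 24.0 * epsilon * (2.0 * sr6**2 - sr6) / r

# sanity check: analytic force vs. a central-difference numerical gradient
r, h = 1.2, 1e-6
numeric = -(lj_potential(r + h) - lj_potential(r - h)) / (2 * h)
print(lj_force(r), numeric)   # the two values should agree closely
```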
There are a variety of other formalisms that have been used in MD like **Hamiltonian** or **Lagrangian mechanics**, but the idea is to relate potential and kinetic energy to the motion of a system
We also know that
$\huge a = \frac{d^2 x}{d t^2}$
Which means we can relate acceleration to position via a second order ordinary differential equation.
If we integrate this, we can get a system's position over time.
This is very hard to do analytically,
so we often resort to various numerical methods to integrate a second order ODE (compute the gradient and take a small step in that direction). In MD, we call this an **integrator**, and the field is very interested in all the different integration algorithms, their computational complexity, and overall stability (energy conservation versus time step, time-reversibility, among others). Don't forget, this integration means we now also account for things like velocity and kinetic energy (which follow the **Maxwell-Boltzmann distribution**)
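To show the flavor of such an integrator, here is a velocity Verlet sketch for a single 1D degree of freedom; the force function, mass, and time step are made-up toy values, not anything tied to a real force field.
```
import numpy as np

def velocity_verlet(x, v, force_fn, mass, dt, n_steps):
    """Integrate Newton's equations of motion with the velocity Verlet scheme."""
    traj = [x]
    f = force_fn(x)
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * (f / mass) * dt**2   # position update
        f_new = force_fn(x)                         # force at the new position
        v = v + 0.5 * (f + f_new) / mass * dt       # velocity update
        f = f_new
        traj.append(x)
    return np.array(traj)

# toy usage: 1D harmonic oscillator, F(x) = -k*x, which should stay near-periodic
k, m = 1.0, 1.0
traj = velocity_verlet(x=1.0, v=0.0, force_fn=lambda x: -k * x, mass=m, dt=0.01, n_steps=2000)
print(traj.min(), traj.max())   # good energy conservation keeps the amplitude near 1
```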
To summarize molecular dynamics, we are *integrating Newton's equations of motion over time according to a potential energy function*.
After integrating for a finite number of steps, we have sampled a number of configurations that are more correlated to each other compared to MC methods.
### Statistical side note
As a molecular modeller venturing into broader areas of statistics and data science, I find myself trying to relate concepts like Markov chain Monte Carlo or Hamiltonian dynamics back to these molecular modelling notions of MC and MD. I think there are similarities in that the MC analogs are drawing random samples, but the Hamiltonian and MD methods are accounting for some sort of kinetics or momentum. Even the notion of some steepest descent gradient algorithms reminds me that we essentially compute a gradient (force) of our objective function (energy).
## The law of large numbers, ergodicity, and phase space
As in statistics, the only way we can reliably trust our sample is if we *draw enough samples*.
If we sample enough, the sample statistics and population statistics relate well.
In simulation, before we can even begin to think about drawing enough samples, we have to draw *physically correct* samples. We call this **ergodicity** - when the probability distributions from our simulations don't change much. This means we need to run a simulation long enough that our sampled configurations replicate the underlying physical distributions.
Here's a more involved discussion. For N atoms, we have 6N variables (for each of the 3 dimensions we have a velocity/momentum and a position). This results in a 6N-dimensional **phase space**. Over the course of the simulation,
we are effectively traversing through this 6N-dimensional phase space, with some regions being more "popular" or favorable than others. When this probability density no longer changes with respect to time, our system is ergodic and we just need to generate a lot of samples from this probability distribution.
The formulation (**Liouville's theorem**) is as follows
$\large \frac{\partial \rho}{\partial t} = -iL\rho = 0$
A simpler way of thinking about this: you can start a simulation from some very unrealistic coordinates (like water in a crystalline configuration even though you're at room temperature), but if you simulate long enough, eventually you begin visiting only the physically-realistic and probabilistic configurations. At this point, your system is **equilibrated** and then you begin the task of sampling from this distribution. So if you run a 100 ns simulation, you might discard the first 20 ns as "burn-in" or "equilibration" when you were trying to hit equilibration. The other 80 ns you actually care about and analyze - this is your "production" run where you are reliably sampling from the correct distribution.
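In practice that split is nothing fancier than slicing off the start of a time series before computing averages; here's a sketch on a synthetic observable with an artificial equilibration transient (the decay constant and noise level are invented).
```
import numpy as np

rng = np.random.default_rng(1)

# synthetic per-frame observable: an equilibration transient decaying onto noisy fluctuations
t = np.arange(10000)
samples = 5.0 * np.exp(-t / 500.0) + rng.normal(0.0, 0.1, size=t.size)

burn_in = int(0.2 * len(samples))   # discard the first 20% as "equilibration"
production = samples[burn_in:]      # analyze only the "production" portion
print(production.mean(), production.std())
```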
## MC vs MD
There are a variety of things to think about here: computational complexity, equilibration, and the physical properties you want to measure. But at the end of each simulation, you end up with a series of configurations (coordinates).
### Computational complexity
In most force fields (potential energy functions), bonded interactions are cheap because each atom participates in maybe a dozen different bonded interactions. Nonbonded interactions are much harder because each atom participates in a nonbonded interaction with *every other atom in your system*, this is $O(n^2)$, and these nonbonded, pairwise interactions are *the most expensive calculations in a simulation code*. In reality, there are some simulation tricks to speed up this pairwise computation to only look at the relevant/nearby atoms (neighbor lists) or use reciprocal space to rapidly compute long-distance interactions (Ewald sums)
In MC, you don't move EVERY atom, you move a few or just one. To evaluate a trial move, you need to compute how the energy changes. Fortunately, for the 99% of atoms that didn't move, that saves you some energy calculations. You only need to calculate the energy for the part of the system that changed.
In MD, you are moving EVERY atom, so you have to do this $O(n^2)$ calculation every, single time.
So comparing each iteration, a single MC iteration is faster than a single MD iteration. Actually, for various reasons, MD algorithms have found success being implemented as GPU kernels, so MD is really accelerated by GPUs. The complexity of MC has inhibited MC packages from really harnessing the computational power of a GPU. Don't get me wrong, there are some MC packages that utilize the GPU fantastically well, but you can find more MD packages that use the GPU.
### Equilibration
MC means we take "random" moves - we could twist a long polymer, move an atom halfway across the simulation box, or something creative. Because MD aims to simulate the motion of atoms, our moves are somewhat constrained to local displacements.
With a wider variety, and more "radical" moves, MC can reach equilibration faster than MD, whose moves are very dependent on small displacements
### Physical properties
It's 2-0, so we have to find something in favor of MD. Some physical properties depend on the time-evolved-dynamics of a system - we care about how the coordinates relate to each other over time. MC cannot do this because each configuration is fairly uncorrelated from the previous one. In MD, these configurational correlations help us calculate transport properties like viscosity and diffusion. MC has a hard time computing these properties due to the lack of correlation between configurations
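As a rough sketch of that kind of bookkeeping, here is a mean-squared displacement and an Einstein-relation diffusion estimate computed on a synthetic random-walk "trajectory" (a stand-in for real MD frames, with invented step sizes and time step).
```
import numpy as np

rng = np.random.default_rng(2)

# synthetic 3D random walk: n_frames x n_particles x 3 positions
n_frames, n_particles, dt = 2000, 50, 1.0
steps = rng.normal(0.0, 0.1, size=(n_frames, n_particles, 3))
positions = np.cumsum(steps, axis=0)

# mean-squared displacement from the first frame, averaged over particles
msd = np.mean(np.sum((positions - positions[0])**2, axis=-1), axis=1)

# Einstein relation in 3D: MSD(t) ~ 6*D*t, so estimate D from the slope
times = np.arange(n_frames) * dt
D = np.polyfit(times, msd, 1)[0] / 6.0
print(D)
```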
## A grad student confession
Honestly, most computational grad students don't think about these underlying theories or formulations that often.
We're more concerned with applying them to do our research. We often take coursework that covers these concepts,
but more often than not, we shrug off simulation techniques as just calculating energy/forces and moving atoms.
In terms of implementing these algorithms, they are already well-implemented in existing software packages. We don't have to write our Metropolis-Hastings algorithms, MC moves, or integrators - other generations of academics, scientists, and engineers have constructed and tested these tools and made sure they work. They made way for newer generations of students to spend their time applying these tools to research.
Usually, a particular lab or field gravitates to either MC or MD, and then that becomes the learning environment
and code infrastructure for new students. Occasionally we move into another method, but only if the scientific problem truly necessitates using another method.
Should the (unfortunate) time come when we have to find bugs in these packages, then we dust off the textbooks and re-re-re-re-learn these algorithms and techniques.
# Conclusion
There are a variety of simulation/sampling techniques (MD or MC), each with its own perks and drawbacks. Fundamentally, there is a lot of derivation and proof that validates these methods in sampling the Boltzmann distribution. The tools of other scientists and engineers have allowed us to study interesting scientific problems without being "caught in the weeds".
In broader statistical/data science perspectives, we use simulation methods to sample from a distribution and compute various properties (some dependent on time-correlations), and we have to ensure that we have correctly sampled enough to draw reliable conclusions. Some build the model and simulation cornerstones, others apply these tools as they see fit.
## Exercise 02
Metropolis simulation of the 1d quantum anharmonic oscillator.
A C++ code to simulate the model is available in the folder 'code', and the data from which these plots are made is in 'code/results'.
```
import matplotlib.pyplot as plt
import numpy as np
plt.rcParams["figure.figsize"] = (20,10)
plt.figure()
dat = np.loadtxt("code/results/h_mu1_lamda0")
plt.plot(dat[:,0],dat[:,1], '.', label="N=128")
x = np.arange(-3,3,0.01)
plt.plot(x, np.exp(-x*x)/np.sqrt(np.pi), '--', label="analytic")
plt.xlim(-3,3)
plt.title("lambda=0, mu^2=1")
plt.legend()
plt.show()
plt.figure()
dat = np.loadtxt("code/results/h_mu6")
plt.plot(dat[:,0],dat[:,1], '.-', label="mu^2=6")
dat = np.loadtxt("code/results/h_mu3")
plt.plot(dat[:,0],dat[:,1], '.-', label="mu^2=3")
dat = np.loadtxt("code/results/h_mu0")
plt.plot(dat[:,0],dat[:,1], '.-', label="mu^2=0")
dat = np.loadtxt("code/results/h_mu-3")
plt.plot(dat[:,0],dat[:,1], '.-', label="mu^2=-3")
dat = np.loadtxt("code/results/h_mu-5")
plt.plot(dat[:,0],dat[:,1], '.-', label="mu^2=-5")
dat = np.loadtxt("code/results/h_mu-8")
plt.plot(dat[:,0],dat[:,1], '.-', label="mu^2=-8")
plt.xlim(-3,3)
plt.title("lambda=1, various mu^2")
plt.legend()
plt.rcParams["figure.figsize"] = (20,10)
plt.show()
plt.figure()
dat = np.loadtxt("code/results/h_mu-1_corr")
plt.errorbar(dat[:,0], dat[:,1], yerr = dat[:,2], marker = '.', label="mu^2=-1")
dat = np.loadtxt("code/results/h_mu-2_corr")
plt.errorbar(dat[:,0], dat[:,1], yerr = dat[:,2], marker = '.', label="mu^2=-2")
dat = np.loadtxt("code/results/h_mu-3_corr")
plt.errorbar(dat[:,0], dat[:,1], yerr = dat[:,2], marker = '.', label="mu^2=-3")
plt.yscale("log", nonposy='clip')
plt.xlim(1,50)
plt.xlabel("t/a")
plt.ylim(0.001,1)
plt.title("Correlator: lambda=1, N=128")
plt.legend()
plt.rcParams["figure.figsize"] = (20,10)
plt.show()
plt.figure()
color = ['r', 'g', 'b']
# these values for E_0 are copied from the output files in code/results:
E0 = {}
E0[-1]=0.50980651272806732
E0[-2]=0.33261720673993472
E0[-3]=0.12223116355668487
# value of lattice spacing to convert dimensionless lattice t/a to t.
# (note this was previously missing, spotted by Carl-Joar.)
a = 0.1
plt.xlim(1,24)
plt.ylim(0,2.5)
plt.title("E_1-E_0: lambda=1, N=128")
plt.rcParams["figure.figsize"] = (20,10)
for mu2 in [-1,-2,-3]:
dat = np.loadtxt('code/results/h_mu'+str(mu2)+'_corr')
plt.plot(2, E0[mu2], 'x', color=color[mu2], label='E_0: '+'mu^2='+str(mu2))
for dt_over_a in range(5,9):
arr = []
for t_over_a in range(2,30):
arr.append([t_over_a, -np.log((dat[t_over_a+dt_over_a]/dat[t_over_a])[1])/(dt_over_a*a)])
arr = np.array(arr)
plt.plot(arr[:,0], arr[:,1]+E0[mu2], '.-', color=color[mu2], label = 'E_1: mu^2='+str(mu2)+' [dt/a='+str(dt_over_a)+']')
plt.xlabel("t/a")
plt.legend()
plt.show()
```
```
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import csv
import re
import seaborn; seaborn.set()
%matplotlib inline
#https://benchmarksgame-team.pages.debian.net/benchmarksgame/description/knucleotide.html#knucleotide
with open('main_data.csv') as f:
f_csv = csv.reader(f)
headers = next(f_csv)
print(headers )
data = pd.read_csv('main_data.csv')
data_r = data.iloc[:,[0,1,7,9]]
data_lang = data_r[data_r['lang'].isin([]
#+['gcc','gpp','java','node','rust','csharpcore','ghc']
#+['go','typescript']
#+['ghc','hipe','openj9','sbcl','fsharpcore']
+['ghc','fsharpcore']
#+['gpp','gcc','rust','go']
)]
data_lang = data_lang[data_lang['elapsed(s)'] != 0]
data_lang = data_lang[data_lang['status'] >= 0]
data_binarytrees = data_lang[data_lang['name']=='binarytrees'].groupby('lang').min().sort_values(by=['lang'])
data_fannkuchredux = data_lang[data_lang['name']=='fannkuchredux'].groupby('lang').min().sort_values(by=['lang'])
data_fasta = data_lang[data_lang['name']=='fasta'].groupby('lang').min().sort_values(by=['lang'])
data_knucleotide = data_lang[data_lang['name']=='knucleotide'].groupby('lang').min().sort_values(by=['lang'])
data_mandelbrot = data_lang[data_lang['name']=='mandelbrot'].groupby('lang').min().sort_values(by=['lang'])
data_nbody = data_lang[data_lang['name']=='nbody'].groupby('lang').min().sort_values(by=['lang'])
data_pidigits = data_lang[data_lang['name']=='pidigits'].groupby('lang').min().sort_values(by=['lang'])
data_regexredux = data_lang[data_lang['name']=='regexredux'].groupby('lang').min().sort_values(by=['lang'])
data_revcomp = data_lang[data_lang['name']=='revcomp'].groupby('lang').min().sort_values(by=['lang'])
data_spectralnorm = data_lang[data_lang['name']=='spectralnorm'].groupby('lang').min().sort_values(by=['lang'])
#data_fasta_c = data_fasta['elapsed(s)']/data_fasta['elapsed(s)'].min()
data_regexredux
#plt.style.use('seaborn-whitegrid')
#fig=plt.hist(data_fasta['elapsed(s)'], histtype='bar', color='steelblue')
fig=plt.figure()
plt.plot(data_fasta['elapsed(s)'],'-ob', label='fasta')
#plt.plot(data_binarytrees['elapsed(s)'],'-og')
#plt.plot(data_fannkuchredux['elapsed(s)'],'-or')
#plt.plot(data_knucleotide['elapsed(s)'],'-oc')
plt.plot(data_mandelbrot['elapsed(s)'],'-om', label='mandelbrot')
#plt.plot(data_nbody['elapsed(s)'],'-oy')
#plt.plot(data_pidigits['elapsed(s)'],'-ok', label='pidigits')
plt.plot(data_regexredux['elapsed(s)'],'-og', label='regexredux')
plt.plot(data_revcomp['elapsed(s)'],'-or', label='revcomp')
plt.plot(data_spectralnorm['elapsed(s)'],'-oc', label='spectralnorm')
plt.ylabel('elapsed(s)')
plt.xlim(-0.3,1.3)
plt.legend(loc=0);
"""
plt.subplot(2,1,1)
plt.title('Fasta')
plt.ylabel('elapsed(s)')
plt.ylim(0,5)
plt.plot(data_fasta['elapsed(s)'],'-ok')
plt.subplot(2,1,2)
plt.plot([1,2,3],[2,3,4])
"""
fig.savefig('data_1(Functional2).png')
#plt.style.use('seaborn-whitegrid')
fig2=plt.figure()
plt.plot(data_binarytrees['elapsed(s)'],'-og', label='binarytrees')
plt.plot(data_fannkuchredux['elapsed(s)'],'-or', label='fannkuchredux')
plt.plot(data_knucleotide['elapsed(s)'],'-oc', label='knucleotide')
plt.plot(data_nbody['elapsed(s)'],'-oy', label='nbody')
plt.plot(data_pidigits['elapsed(s)'],'-ok', label='pidigits')
plt.ylabel('elapsed(s)')
plt.xlim(-0.3,1.3)
plt.legend(loc=0);
fig2.savefig('data_2(Functional2).png')
```
# PVDAQ - PVData
This notebook is an example of how to access the PVData and related metadata through the OEDI data lake.
## 0. Prerequisites
To run this example, you need an OEDI data lake deployed, through which all queries run. For instructions on how to deploy the OEDI data lake, please refer to the documentation here - https://openedi.github.io/open-data-access-tools/.
In this example, the deployed database is `oedi_data_lake`, where the tables related to pvdata are named:
* `pvdaq_parquet_inverters`
* `pvdaq_parquet_meters`
* `pvdaq_parquet_metrics`
* `pvdaq_parquet_modules`
* `pvdaq_parquet_mount`
* `pvdaq_parquet_other_instruments`
* `pvdaq_parquet_pvdata`
* `pvdaq_parquet_site`
* `pvdaq_parquet_system`
The staging location for queries is `s3://nrel-tests/pvdaq/`.
```
# database
database_name = "oedi_pvdaq"
# tables
inverters_table = "pvdaq_parquet_inverters"
meters_table = "pvdaq_parquet_meters"
metrics_table = "pvdaq_parquet_metrics"
modules_table = "pvdaq_parquet_modules"
mount_table = "pvdaq_parquet_mount"
other_instruments_table = "pvdaq_parquet_other_instruments"
pvdata_table = "pvdaq_parquet_pvdata"
site_table = "pvdaq_parquet_site"
system_table = "pvdaq_parquet_system"
staging_location = "s3://nrel-tests/pvdaq/"
```
## 1. Metadata
The metadata of the PVDAQ tables includes 'Columns', 'Partition Keys' and 'Partition Values'. The OEDIGlue class provides utility methods to retrieve the metadata of a given table.
```
from oedi.AWS.glue import OEDIGlue
glue = OEDIGlue()
# PVDAQ Site Table
glue.get_table_columns(database_name, site_table)
# PVDAQ System Table
glue.get_table_columns(database_name, system_table)
# PVDAQ Metrics Table
glue.get_table_columns(database_name, metrics_table)
# PVDAQ PVDATA Table
glue.get_table_columns(database_name, pvdata_table)
```
## 2. PV System Locations
Visualize the locations of PV systems on the map
```
import pandas as pd
from oedi.AWS.athena import OEDIAthena
athena = OEDIAthena(staging_location=staging_location, region_name="us-west-2")
query_string1 = f"""
SELECT system.public_name, site.latitude, site.longitude
FROM {database_name}.{system_table} AS system
INNER JOIN {database_name}.{site_table} AS site
ON cast(system.site_id as varchar)=site.site_id;
"""
systems = athena.run_query(query_string1)
systems[["latitude", "longitude"]] = systems[["latitude", "longitude"]].apply(pd.to_numeric)
import folium
imap = folium.Map(location=[32.53056, -89.01959696969696], zoom_start=5, tiles="Stamen Terrain")
for index, row in systems.iterrows():
folium.Marker(
location=[row.latitude, row.longitude],
fill_color="#43d9de",
radius=8,
popup=f"<i>{row.public_name}</i>", tooltip="Click Me"
).add_to(imap)
imap
```
## 3. PV System metrics
```
query_string2 = f"""
select pvdata.measured_on, pvdata.value, metrics.common_name, metrics.system_id
from {database_name}.{pvdata_table} as pvdata
inner join {database_name}.{metrics_table} as metrics
on pvdata.metric_id=metrics.metric_id
where metrics.system_id=1230
AND year='2006';
"""
pvdata = athena.run_query(query_string2)
pvdata.head()
pvdata["common_name"].unique()
df = pd.DataFrame()
for column in sorted(pvdata["common_name"].unique()):
sub = pvdata[pvdata["common_name"] == column]
sub = sub.set_index("measured_on")
column = column.lower().replace(" ", "_")
sub = sub.drop(columns=["common_name", "system_id"])
sub = sub.rename(columns={"value": column})
if df.empty:
df = sub
else:
df = df.join(sub, on="measured_on")
df.head()
import matplotlib.pyplot as plt
_, a = plt.subplots(4, 1, figsize=(18, 12), tight_layout=True)
a[0].set_ylabel("unit: W")
a[1].set_ylabel("unit: %")
a[2].set_ylabel("unit: W/m^2")
a[3].set_ylabel("unit: C")
df.plot(ax=a, subplots=True)
```
# Preprocessing Boston Airbnb data
# 1. Import libraries
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import os
import seaborn as sns
%matplotlib inline
```
# 2. Read in data and take a first look
```
# read data
data = pd.read_csv('Boston Airbnb/listings.csv', sep=',')
listings_df = data.copy()
# display all columns at once
pd.set_option('display.max_columns', 100)
listings_df.head()
```
### take a look at the size of the data --> approx. 3600 rows and 95 columns
```
# data shape
listings_df.shape
```
### data columns...many columns seem to be of no interest to us
```
# data columns
listings_df.columns
```
# 3. Important columns / Columns of interest
## 3a. Remove columns with >= 98% NaN
```
# means of missing values
listings_df.isnull().mean().sort_values(ascending=False).head(10)
# find columns with >= 98% NaN's
print('listings_df shape = {}'.format(listings_df.shape))
index1 = np.where(listings_df.isnull().mean()>=0.98)[0]
drop_cols = listings_df.columns[index1]
print('')
print('drop_columns = {}'.format(drop_cols))
# remove said columns
listings_df_reduced = listings_df.drop(columns=drop_cols)
print('')
print('listings_df_reduced shape without >= 98% NaN columns = {}'.format(listings_df_reduced.shape))
# take another look at the data
listings_df_reduced.head()
# ... and its columns
listings_df_reduced.columns
```
# 3b. Select columns of possible interest (i.e. drop columns of no interest)
```
# columns of interest
cols_to_drop = ['id', 'listing_url', 'scrape_id', 'last_scraped', 'name', 'summary',
'space', 'description', 'experiences_offered', 'neighborhood_overview',
'notes', 'transit', 'access', 'interaction', 'house_rules',
'thumbnail_url', 'medium_url', 'picture_url', 'xl_picture_url',
'host_id', 'host_url', 'host_name', 'host_since', 'host_location',
'host_about', 'host_thumbnail_url', 'host_picture_url', 'host_neighbourhood',
'host_verifications', 'street', 'neighbourhood', 'city', 'state', 'zipcode',
'market', 'smart_location', 'country_code', 'country', 'amenities',
'weekly_price', 'monthly_price', 'calendar_updated', 'availability_30',
'availability_60', 'availability_90', 'availability_365', 'calendar_last_scraped',
'first_review', 'last_review',]
# select columns of interest
listings_df_reduced = listings_df_reduced.drop(columns=cols_to_drop)
listings_df_reduced.shape
# look at the data
listings_df_reduced.head()
# data types of columns
listings_df_reduced.dtypes.sort_values()
# check for missing values
listings_df_reduced.isnull().mean().sort_values()
```
# 3c. Observations:
* price & extra_people have no missing values
* cleaning_fee and security_deposit have missing values but we assume that missing values correspond to 0 USD
* extra_people, cleaning_fee, security_deposit & price are listed as strings/objects, due to the "$" sign. We should convert these columns to float
* it is not clearly stated whether the price is per night or per minimum-nights stay, or whether it is per person or for a group --> we need to make some assumptions
# 4. Convert prices and fees to float
```
def convert_prices_and_fees_to_float(df, price_and_fees_cols):
"""
function to convert price/fee columns to float
"""
# fill missing values with 0
df[price_and_fees_cols] = df[price_and_fees_cols].fillna('$0.0')
# remove '$' and ',' symbols
for col_temp in price_and_fees_cols:
df[col_temp] = df[col_temp].apply(lambda x: x.replace('$', '').replace(',','')).astype(float)
return df
# call function above
price_and_fees_cols = ['extra_people', 'price', 'cleaning_fee', 'security_deposit']
listings_df_reduced = convert_prices_and_fees_to_float(listings_df_reduced, price_and_fees_cols)
# check if missing values were replaced
listings_df_reduced.isnull().mean().sort_values()
# check if prices and fees are float now
listings_df_reduced.dtypes.sort_values()
```
# 4a. add a total_price column
* We assume "price" is the price per night per Airbnb. We add a total_price column (price + cleaning_fee/minimum_nights), which breaks the cleaning fee down and adds it evenly to the price per night.
* Obviously in this case we assume that people stay for a minimum_nights number of nights (which may not be 100% correct, but we do not have much more information on that)
```
# calculate total price per night as price + cleaning_fee evenly distributed over the minimum_nights
listings_df_reduced['total_price_per_night'] = listings_df_reduced['price']+listings_df_reduced['cleaning_fee']/listings_df_reduced['minimum_nights']
listings_df_reduced[['price', 'cleaning_fee', 'minimum_nights','total_price_per_night']][:10]
```
# 5. Take another look at the data
```
# look at the data
listings_df_reduced.head(10)
# data shape
listings_df_reduced.shape
```
# 6. Convert True/False (t/f) columns to 1/0
```
def convert_true_false_to_1_0(df, true_false_columns):
"""
function to convert boolean columns (true/false) to integer columns with 1/0
"""
for col_temp in true_false_columns:
df[col_temp] = df[col_temp].apply(lambda x: x.replace('t', '1').replace('f','0')).astype(int)
return df
# call function just above and convert columns
true_false_columns = ['host_is_superhost', 'host_has_profile_pic', 'host_identity_verified',
'is_location_exact', 'requires_license', 'instant_bookable', 'require_guest_profile_picture',
'require_guest_phone_verification']
listings_df_reduced = convert_true_false_to_1_0(listings_df_reduced, true_false_columns)
listings_df_reduced.head(10)
# check if true/false columns were converted to 1/0 (int)
listings_df_reduced.dtypes.sort_values()
# check which columns still have missing values
listings_df_reduced.isnull().sum().sort_values()
```
# 7. Remove/Impute observations with missing values
##### We are going to drop observations with missing values in the following columns (these columns are important for the model later on and cannot be imputed in a useful way):
* property_type
* beds
* bedrooms
* bathrooms
```
# drop rows with missing values in the 4 columns mentioned above
listings_df_reduced = listings_df_reduced.dropna(subset=['property_type', 'beds', 'bedrooms', 'bathrooms'])
# data shape
listings_df_reduced.shape
```
### We reduced our dataset from 3585 rows to 3554 rows which means we removed 31 observations
```
# check which columns still have missing values
listings_df_reduced.isnull().sum().sort_values()
```
### For some reason there are Airbnbs with beds=0 (even though bed_type='Real Bed'). We want to remove those observations
```
np.unique(listings_df_reduced['beds'])
listings_df_reduced[listings_df_reduced['beds']==0].head(10)
# remove observations where "beds=0"
listings_df_reduced = listings_df_reduced[listings_df_reduced['beds'] > 0]
# data shape
listings_df_reduced.shape
```
###### We removed another 4 observations
### We still have missing values in the review columns and host response/acceptance columns
```
# check which columns still have missing values
listings_df_reduced.isnull().sum().sort_values()
```
### We are going to impute the missing review scores with the mean of the corresponding columns
```
# all different review columns
reviews = [i for i in listings_df_reduced if 'review' in i]
reviews
# mean function
fill_mean = lambda col: col.fillna(col.mean())
# fill missing values with mean of corresponding column
fill_df = listings_df_reduced.loc[:, reviews].apply(fill_mean, axis=0)
# replace review columns with "mean-imputed" adjusted columns
listings_df_reduced.loc[:, reviews] = fill_df.loc[:, :]
listings_df_reduced[reviews].head(10)
# data shape (obviously no loss of observations because we are only imputing values)
listings_df_reduced.shape
# check which columns still have missing values
listings_df_reduced.isnull().sum().sort_values()
```
### We still have missing values in the host acceptance/response columns. We are going to impute those observations
```
def convert_host_acceptance_and_response_columns_to_float_and_impute(df, host_cols):
"""
    function to convert host response/acceptance columns and impute missing values (backfill)
"""
for col_temp in host_cols:
df[col_temp] = df[col_temp].map(lambda x: x.replace('%',''), na_action='ignore')
df[col_temp] = df[col_temp].fillna(method="backfill")
return df
# considered columns and call function just above
host_cols = ['host_acceptance_rate', 'host_response_rate', 'host_response_time']
listings_df_reduced = convert_host_acceptance_and_response_columns_to_float_and_impute(listings_df_reduced, host_cols)
# convert the rate-columns to float
listings_df_reduced[['host_acceptance_rate', 'host_response_rate']] = listings_df_reduced[['host_acceptance_rate', 'host_response_rate']].astype(float)
listings_df_reduced.head()
# Now we converted many columns to int and float. Everything that is still of type "object" will be converted to dummies later
listings_df_reduced.dtypes.sort_values()
# And we finally do not have missing values anymore
listings_df_reduced.isnull().sum().sort_values()
```
### Now we have no missing values left in the dataset
# 8. Distributions & Correlations in the data
```
# histogram of "total_price_per_night"
plt.figure()
plt.hist(listings_df_reduced['total_price_per_night'], bins= 50)
plt.xlabel('total_price_per_night')
plt.ylabel('frequency')
plt.title('total_price_per_night')
plt.show()
```
### We notice a right-skewed distribution of total_price_per_night (which will later be our response variable). In this case we might want to consider a transformation (such as log or sqrt)
### Log transformed data
```
# histogram of the log of "total_price_per_night"
plt.figure()
plt.hist(np.log(listings_df_reduced['total_price_per_night']), bins= 50)
plt.xlabel('log(total_price_per_night)')
plt.ylabel('frequency')
plt.title('log(total_price_per_night)')
plt.show()
```
### The log transformation seems to work well (it will be applied later, when fitting the model in the other notebook)
```
# Take a look at the lowest prices
min_price = min(listings_df_reduced['total_price_per_night'])
print('Minimum price per night = {}'.format(min_price))
listings_df_reduced.sort_values(by=['total_price_per_night']).head()
# Take a look at the highest prices
max_price = max(listings_df_reduced['total_price_per_night'])
print('Maximum price per night = {}'.format(max_price))
listings_df_reduced.sort_values(by=['total_price_per_night'], ascending=False).head()
```
### We have prices ranging from 10 USD to 4000 USD. In my opinion Airbnbs are mostly booked by younger people (usually not the richest age group), so we want to remove Airbnbs that are too expensive, say where 'total_price_per_night > 500'. We also want to remove Airbnbs that are suspiciously cheap, e.g. 'total_price_per_night < 20'
### Moreover, Airbnbs with very high security deposits (say > 500 USD) will be removed
# 8a. Outliers
### The data seems to contain some outliers. We are going to remove:
* total_price_per_night > 500 USD or total_price_per_night < 20 USD (Reason: Airbnbs are mostly booked by younger people (which are usually not the rich age group))
* security_deposit > 500 (Reason: People do not like to pay very high security deposits)
* bathrooms >= 5 (Reason: Take a look at the table just below...some Airbnbs have 5 or more bathrooms but only 1 bed...this seems weird to me)
* accommodates > 6 (Reason: Airbnbs in Boston are probably booked for a city trip for a couple of days. People usually do not go on city trips with too many people. Maybe a car full of people (5-6 people tops))
* minimum_nights > 30 (Reason: occasionally it might make sense to stay for a month, but most of the time people only stay a few days; anything above 30 nights seems odd)
### Why do some places have so many bathrooms??
```
listings_df_reduced[(listings_df_reduced['bathrooms'] >= 5)]
```
### Some places require a minimum stay of 90 or more nights, some even 300. This seems odd; as listed above, observations with a minimum stay of more than 30 nights are removed
```
listings_df_reduced['minimum_nights'].value_counts().sort_index()
# Minimum nights > 30
listings_df_reduced[(listings_df_reduced['minimum_nights'] > 30)]
# Remove outliers
listings_df_reduced = listings_df_reduced[(listings_df_reduced['total_price_per_night'] >= 20) & (listings_df_reduced['total_price_per_night'] <= 500)]
listings_df_reduced = listings_df_reduced[(listings_df_reduced['security_deposit'] <= 500)]
listings_df_reduced = listings_df_reduced[(listings_df_reduced['bathrooms'] < 5)]
listings_df_reduced = listings_df_reduced[listings_df_reduced['accommodates'] <= 6]
listings_df_reduced = listings_df_reduced[listings_df_reduced['minimum_nights'] <= 30]
listings_df_reduced.shape
```
### We now reduced our dataset to 3260 observations
```
# Another look at the price distribution
plt.figure()
plt.hist(listings_df_reduced['total_price_per_night'], bins= 20)
plt.xlabel('total_price_per_night')
plt.ylabel('frequency')
plt.title('total_price_per_night')
plt.show()
```
# 8b. Transformation
### For the outlier-adjusted dataset, the square root seems to be a better transformation than the log (for later use)
```
# Another look at the price distribution
plt.figure()
plt.hist(np.sqrt(listings_df_reduced['total_price_per_night']), bins= 20)
plt.xlabel('sqrt(total_price_per_night)')
plt.ylabel('frequency')
plt.title('sqrt(total_price_per_night)')
plt.show()
```
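As a forward-looking illustration only, here is a minimal sketch of how the square-root transform could be used when fitting a model later on: the response is transformed before fitting and predictions are squared to get back to the USD scale. Scikit-learn and the choice of LinearRegression are assumptions for this sketch; the actual modelling is done in the separate notebook.
```
# minimal sketch (assumptions: scikit-learn available, plain linear regression)
from sklearn.linear_model import LinearRegression

# numeric predictors only; drop the response and its direct components to avoid leakage
X = listings_df_reduced.select_dtypes(include='number').drop(
    columns=['total_price_per_night', 'price', 'cleaning_fee'])
y_sqrt = np.sqrt(listings_df_reduced['total_price_per_night'])
reg = LinearRegression().fit(X, y_sqrt)
price_pred = reg.predict(X) ** 2  # square the predictions to return to the USD scale
```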
# 8c. Data overview
```
listings_df_reduced.describe()
```
# 9. Check out the different
* bed_types
* room_types
* property_types and
* neighbourhoods
#### These variables can later be converted to dummies, for instance for a regression model (a sketch follows after the next code block)
```
# df columns
listings_df_reduced.columns
# different bed_types
np.unique(listings_df_reduced['bed_type'])
# different room types
np.unique(listings_df_reduced['room_type'])
# different property types
np.unique(listings_df_reduced['property_type'])
# different neighbourhoods
np.unique(listings_df_reduced['neighbourhood_cleansed'])
# different cancellation policies
np.unique(listings_df_reduced['cancellation_policy'])
# different host response time
np.unique(listings_df_reduced['host_response_time'])
# shape of preprocessed df
listings_df_reduced.shape
# percentage of data left from the original data set
listings_df_reduced.shape[0]/data.shape[0]
```
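Before exporting, here is a minimal sketch (not part of the original pipeline) of how these categorical columns could be one-hot encoded with pd.get_dummies. The column list mirrors the categories inspected above and may not be exhaustive; drop_first=True is an assumption to avoid redundant dummy columns.
```
# hedged sketch: one-hot encode the remaining categorical columns for a later regression model
categorical_cols = ['bed_type', 'room_type', 'property_type',
                    'neighbourhood_cleansed', 'cancellation_policy', 'host_response_time']
listings_dummies = pd.get_dummies(listings_df_reduced, columns=categorical_cols, drop_first=True)
listings_dummies.shape
```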
# 10. Export preprocessed dataset
```
# export data as csv
listings_df_reduced.to_csv('Boston Airbnb/listings_preprocessed_new.csv', index=False)
```
```
import numpy as np
import os
from sklearn.metrics import confusion_matrix
import seaborn as sn; sn.set(font_scale=1.4)
from sklearn.utils import shuffle
import matplotlib.pyplot as plt
import cv2
from random import randint
import tensorflow.keras.layers as Layers
import tensorflow.keras.activations as Activations
import tensorflow.keras.models as Models
import tensorflow.keras.optimizers as Optimizer
import tensorflow.keras.metrics as Metrics
import tensorflow.keras.utils as Utils
from keras.layers import Dense, Flatten, Conv2D, MaxPooling2D, Dropout
from keras.models import Sequential
import tensorflow as tf
from tqdm import tqdm
class_names = ['angry', 'shock', 'normal', 'smile']
class_names_label = {class_name:i for i, class_name in enumerate(class_names)}
nb_classes = len(class_names)
IMAGE_SIZE = (120, 120)
def load_data():
"""
Load the data:
- 200 images to train the network.
- 40 images to evaluate how accurately the network learned to classify images.
"""
filters = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]])
datasets = ['dataset_rgb_01/train', 'dataset_rgb_01/test']
output = []
# Iterate through training and test sets
for dataset in datasets:
images = []
labels = []
print("Loading {}".format(dataset))
# Iterate through each folder corresponding to a category
for folder in os.listdir(dataset):
label = class_names_label[folder]
# Iterate through each image in our folder
for file in tqdm(os.listdir(os.path.join(dataset, folder))):
# Get the path name of the image
img_path = os.path.join(os.path.join(dataset, folder), file)
# Open and resize the img
image = cv2.imread(img_path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = cv2.resize(image, IMAGE_SIZE)
image = cv2.filter2D(image,-1,filters)
# Append the image and its corresponding label to the output
images.append(image)
labels.append(label)
images = np.array(images, dtype = 'float32')
labels = np.array(labels, dtype = 'int32')
output.append((images, labels))
return output
(train_images, train_labels), (test_images, test_labels) = load_data()
train_images, train_labels = shuffle(train_images, train_labels, random_state=25)
# Exploring Datasets
n_train = train_labels.shape[0]
n_test = test_labels.shape[0]
print ("Number of Class: {}".format(nb_classes))
print ("Number of training examples: {}".format(n_train))
print ("Number of testing examples: {}".format(n_test))
print ("Each image is of size: {}".format(IMAGE_SIZE))
import pandas as pd
_, train_counts = np.unique(train_labels, return_counts=True)
_, test_counts = np.unique(test_labels, return_counts=True)
pd.DataFrame({'train': train_counts,
'test': test_counts},
index=class_names
).plot.bar()
plt.show()
plt.pie(train_counts,
explode=(0, 0, 0, 0) ,
labels=class_names,
autopct='%1.1f%%'
)
plt.axis('equal')
plt.title('Proportion of each observed category')
plt.show()
# Data Normalization
train_images = train_images / 255.0
test_images = test_images / 255.0
# Visualize the Data
def display_random_image(class_names, images, labels):
"""
    Display a random image from the images array and its corresponding label from the labels array.
"""
index = np.random.randint(images.shape[0])
plt.figure()
plt.imshow(images[index])
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.title('Image #{} : '.format(index) + class_names[labels[index]])
plt.show()
display_random_image(class_names, train_images, train_labels)
def display_examples(class_names, images, labels):
"""
    Display 25 images from the images array with their corresponding labels
"""
fig = plt.figure(figsize=(10,10))
fig.suptitle("Some examples of images of the dataset", fontsize=16)
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[labels[i]])
plt.show()
display_examples(class_names, train_images, train_labels)
# CNN models
model = Models.Sequential()
# CNN Architecture
model.add(Layers.Conv2D(32,kernel_size=(3,3),activation='relu',input_shape=(120, 120, 3)))
model.add(Layers.MaxPool2D(2,2))
model.add(Layers.Conv2D(64,kernel_size=(3,3),activation='relu'))
model.add(Layers.MaxPool2D(2,2))
model.add(Layers.Conv2D(128,kernel_size=(3,3),activation='relu'))
model.add(Layers.MaxPool2D(2,2))
model.add(Layers.Conv2D(256,kernel_size=(3,3),activation='relu'))
model.add(Layers.MaxPool2D(2,2))
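# With Keras' default 'valid' padding, the four conv/pool blocks reduce the 120x120 input to 5x5x256 feature maps here (6400 units after Flatten)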
# ANN Architecture
model.add(Layers.Flatten())
model.add(Layers.Dropout(0.2))
model.add(Layers.Dense(1000, activation='relu'))
model.add(Layers.Dense(4, activation='softmax'))
# Compiling Model
model.compile(optimizer=Optimizer.Adam(lr=0.00001),loss='sparse_categorical_crossentropy',metrics=['accuracy'])
model.summary()
import time
# START OF TIME
start = time.time()
### MODEL FITTING
history = model.fit(train_images,
train_labels,
batch_size= 16,
epochs= 90,
validation_split=0.2
)
### MODEL FITTING
# END OF TIME
end = time.time()
# RESULT
print("Time elapsed for this training section: {0:.2f}s".format(end - start))
# EPOCHS RUNTIME
# 30 Epochs = 75.61s
# 60 Epochs = 147.82s
# 90 Epochs = 220.07s
# 120 Epochs = 292.83s
def plot_accuracy_loss(history):
"""
Plot the accuracy and the loss during the training of the nn.
"""
fig = plt.figure(figsize=(20,10))
# Plot accuracy
plt.subplot(221)
plt.plot(history.history['accuracy'],'bo--', label = "acc")
plt.plot(history.history['val_accuracy'], 'ro--', label = "val_acc")
plt.title("train_acc vs val_acc")
plt.ylabel("accuracy")
plt.xlabel("epochs")
plt.legend()
# Plot loss function
plt.subplot(222)
plt.plot(history.history['loss'],'bo--', label = "loss")
plt.plot(history.history['val_loss'], 'ro--', label = "val_loss")
plt.title("train_loss vs val_loss")
plt.ylabel("loss")
plt.xlabel("epochs")
plt.legend()
plt.show()
plot_accuracy_loss(history)
loss, acc = model.evaluate(test_images, test_labels)
print("System Accuracy : {0:.2f}%".format(acc*100))
print("System Loss : {0:.5f}".format(loss))
# 30 Epochs, Accuracy = 83.75%
# 60 Epochs, Accuracy = 95.00%
# 90 Epochs, Accuracy = 97.50%
# 120 Epochs, Accuracy = 100%
predictions = model.predict(test_images) # Vector of probabilities
pred_labels = np.argmax(predictions, axis = 1) # We take the highest probability
display_random_image(class_names, test_images, pred_labels)
def print_mislabeled_images(class_names, test_images, test_labels, pred_labels):
"""
    Print 25 examples of mislabeled images by the classifier, i.e. when test_labels != pred_labels
"""
BOO = (test_labels == pred_labels)
mislabeled_indices = np.where(BOO == 0)
mislabeled_images = test_images[mislabeled_indices]
mislabeled_labels = pred_labels[mislabeled_indices]
title = "Some examples of mislabeled images by the classifier:"
display_examples(class_names, mislabeled_images, mislabeled_labels)
print_mislabeled_images(class_names, test_images, test_labels, pred_labels)
CM = confusion_matrix(test_labels, pred_labels)
ax = plt.axes()
sn.heatmap(CM, annot=True,
annot_kws={"size": 10},
xticklabels=class_names,
yticklabels=class_names, ax = ax)
ax.set_title('Confusion matrix')
plt.show()
```
# Monetary Economics: Chapter 5
### Preliminaries
```
# This line configures matplotlib to show figures embedded in the notebook,
# instead of opening a new window for each figure. More about that later.
# If you are using an old version of IPython, try using '%pylab inline' instead.
%matplotlib inline
import matplotlib.pyplot as plt
from pysolve3.model import Model
from pysolve3.utils import is_close,round_solution
```
### Model LP1
```
def create_lp1_model():
model = Model()
model.set_var_default(0)
model.var('Bcb', desc='Government bills held by the Central Bank')
model.var('Bd', desc='Demand for government bills')
model.var('Bh', desc='Government bills held by households')
model.var('Bs', desc='Government bills supplied by government')
model.var('BLd', desc='Demand for government bonds')
model.var('BLh', desc='Government bonds held by households')
model.var('BLs', desc='Supply of government bonds')
model.var('CG', desc='Capital gains on bonds')
model.var('CGe', desc='Expected capital gains on bonds')
model.var('C', desc='Consumption')
model.var('ERrbl', desc='Expected rate of return on bonds')
model.var('Hd', desc='Demand for cash')
model.var('Hh', desc='Cash held by households')
model.var('Hs', desc='Cash supplied by the central bank')
model.var('Pbl', desc='Price of bonds')
model.var('Pble', desc='Expected price of bonds')
model.var('Rb', desc='Interest rate on government bills')
model.var('Rbl', desc='Interest rate on government bonds')
model.var('T', desc='Taxes')
model.var('V', desc='Household wealth')
model.var('Ve', desc='Expected household wealth')
model.var('Y', desc='Income = GDP')
model.var('YDr', desc='Regular disposable income of households')
model.var('YDre', desc='Expected regular disposable income of households')
model.set_param_default(0)
model.param('alpha1', desc='Propensity to consume out of income')
model.param('alpha2', desc='Propensity to consume out of wealth')
model.param('chi', desc='Weight of conviction in expected bond price')
model.param('lambda10', desc='Parameter in asset demand function')
model.param('lambda12', desc='Parameter in asset demand function')
model.param('lambda13', desc='Parameter in asset demand function')
model.param('lambda14', desc='Parameter in asset demand function')
model.param('lambda20', desc='Parameter in asset demand function')
model.param('lambda22', desc='Parameter in asset demand function')
model.param('lambda23', desc='Parameter in asset demand function')
model.param('lambda24', desc='Parameter in asset demand function')
model.param('lambda30', desc='Parameter in asset demand function')
model.param('lambda32', desc='Parameter in asset demand function')
model.param('lambda33', desc='Parameter in asset demand function')
model.param('lambda34', desc='Parameter in asset demand function')
model.param('theta', desc='Tax rate')
model.param('G', desc='Government goods')
model.param('Rbar', desc='Exogenously set interest rate on govt bills')
model.param('Pblbar', desc='Exogenously set price of bonds')
model.add('Y = C + G') # 5.1
model.add('YDr = Y - T + Rb(-1)*Bh(-1) + BLh(-1)') # 5.2
model.add('T = theta *(Y + Rb(-1)*Bh(-1) + BLh(-1))') # 5.3
model.add('V - V(-1) = (YDr - C) + CG') # 5.4
model.add('CG = (Pbl - Pbl(-1))*BLh(-1)')
model.add('C = alpha1*YDre + alpha2*V(-1)')
model.add('Ve = V(-1) + (YDre - C) + CG')
model.add('Hh = V - Bh - Pbl*BLh')
model.add('Hd = Ve - Bd - Pbl*BLd')
model.add('Bd = Ve*lambda20 + Ve*lambda22*Rb' +
'- Ve*lambda23*ERrbl - lambda24*YDre')
model.add('BLd = (Ve*lambda30 - Ve*lambda32*Rb ' +
'+ Ve*lambda33*ERrbl - lambda34*YDre)/Pbl')
model.add('Bh = Bd')
model.add('BLh = BLd')
model.add('Bs - Bs(-1) = (G + Rb(-1)*Bs(-1) + ' +
'BLs(-1)) - (T + Rb(-1)*Bcb(-1)) - (BLs - BLs(-1))*Pbl')
model.add('Hs - Hs(-1) = Bcb - Bcb(-1)')
model.add('Bcb = Bs - Bh')
model.add('BLs = BLh')
model.add('ERrbl = Rbl + chi * (Pble - Pbl) / Pbl')
model.add('Rbl = 1./Pbl')
model.add('Pble = Pbl')
model.add('CGe = chi * (Pble - Pbl)*BLh')
model.add('YDre = YDr(-1)')
model.add('Rb = Rbar')
model.add('Pbl = Pblbar')
return model
lp1_parameters = {'alpha1': 0.8,
'alpha2': 0.2,
'chi': 0.1,
'lambda20': 0.44196,
'lambda22': 1.1,
'lambda23': 1,
'lambda24': 0.03,
'lambda30': 0.3997,
'lambda32': 1,
'lambda33': 1.1,
'lambda34': 0.03,
'theta': 0.1938}
lp1_exogenous = {'G': 20,
'Rbar': 0.03,
'Pblbar': 20}
lp1_variables = {'V': 95.803,
'Bh': 37.839,
'Bs': 57.964,
'Bcb': 57.964 - 37.839,
'BLh': 1.892,
'BLs': 1.892,
'Hs': 20.125,
'YDr': 95.803,
'Rb': 0.03,
'Pbl': 20}
```
### Scenario: Interest rate shock
```
lp1 = create_lp1_model()
lp1.set_values(lp1_parameters)
lp1.set_values(lp1_exogenous)
lp1.set_values(lp1_variables)
for _ in range(15):
lp1.solve(iterations=100, threshold=1e-6)
# shock the system
lp1.set_values({'Rbar': 0.04,
'Pblbar': 15})
for _ in range(45):
lp1.solve(iterations=100, threshold=1e-6)
```
###### Figure 5.2
```
caption = '''
Figure 5.2 Evolution of the wealth to disposable income ratio, following an increase
in both the short-term and long-term interest rates, with model LP1'''
data = [s['V']/s['YDr'] for s in lp1.solutions[5:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 1.1, 1.1])
axes.tick_params(top=False, right=False)
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.set_ylim(0.89, 1.01)
axes.plot(data, 'k')
# add labels
plt.text(20, 0.98, 'Wealth to disposable income ratio')
fig.text(0.1, -.05, caption);
```
###### Figure 5.3
```
caption = '''
Figure 5.3 Evolution of household disposable income and consumption, following an increase
in both the short-term and long-term interest rates, with model LP1'''
ydrdata = [s['YDr'] for s in lp1.solutions[5:]]
cdata = [s['C'] for s in lp1.solutions[5:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 1.1, 1.1])
axes.tick_params(top=False, right=False)
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.set_ylim(92.5, 101.5)
axes.plot(ydrdata, 'k')
axes.plot(cdata, linestyle='--', color='r')
# add labels
plt.text(16, 98, 'Disposable')
plt.text(16, 97.6, 'income')
plt.text(22, 95, 'Consumption')
fig.text(0.1, -.05, caption);
```
###### Figure 5.4
```
caption = '''
Figure 5.4 Evolution of the bonds to wealth ratio and the bills to wealth ratio,
following an increase from 3% to 4% in the short-term interest rate, while the
long-term interest rate moves from 5% to 6.67%, with model LP1'''
bhdata = [s['Bh']/s['V'] for s in lp1.solutions[5:]]
pdata = [s['Pbl']*s['BLh']/s['V'] for s in lp1.solutions[5:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 1.1, 1.1])
axes.tick_params(top=False, right=False)
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.set_ylim(0.382, 0.408)
axes.plot(bhdata, 'k')
axes.plot(pdata, linestyle='--', color='r')
# add labels
plt.text(14, 0.3978, 'Bonds to wealth ratio')
plt.text(17, 0.39, 'Bills to wealth ratio')
fig.text(0.1, -.05, caption);
```
### Model LP2
```
def create_lp2_model():
model = Model()
model.set_var_default(0)
model.var('Bcb', desc='Government bills held by the Central Bank')
model.var('Bd', desc='Demand for government bills')
model.var('Bh', desc='Government bills held by households')
model.var('Bs', desc='Government bills supplied by government')
model.var('BLd', desc='Demand for government bonds')
model.var('BLh', desc='Government bonds held by households')
model.var('BLs', desc='Supply of government bonds')
model.var('CG', desc='Capital gains on bonds')
model.var('CGe', desc='Expected capital gains on bonds')
model.var('C', desc='Consumption')
model.var('ERrbl', desc='Expected rate of return on bonds')
model.var('Hd', desc='Demand for cash')
model.var('Hh', desc='Cash held by households')
model.var('Hs', desc='Cash supplied by the central bank')
model.var('Pbl', desc='Price of bonds')
model.var('Pble', desc='Expected price of bonds')
model.var('Rb', desc='Interest rate on government bills')
model.var('Rbl', desc='Interest rate on government bonds')
model.var('T', desc='Taxes')
model.var('TP', desc='Target proportion in households portfolio')
model.var('V', desc='Household wealth')
model.var('Ve', desc='Expected household wealth')
model.var('Y', desc='Income = GDP')
model.var('YDr', desc='Regular disposable income of households')
model.var('YDre', desc='Expected regular disposable income of households')
model.var('z1', desc='Switch parameter')
model.var('z2', desc='Switch parameter')
model.set_param_default(0)
model.param('add', desc='Random shock to expectations')
model.param('alpha1', desc='Propensity to consume out of income')
model.param('alpha2', desc='Propensity to consume out of wealth')
model.param('beta', desc='Adjustment parameter in price of bills')
model.param('betae', desc='Adjustment parameter in expectations')
model.param('bot', desc='Bottom value for TP')
model.param('chi', desc='Weight of conviction in expected bond price')
model.param('lambda10', desc='Parameter in asset demand function')
model.param('lambda12', desc='Parameter in asset demand function')
model.param('lambda13', desc='Parameter in asset demand function')
model.param('lambda14', desc='Parameter in asset demand function')
model.param('lambda20', desc='Parameter in asset demand function')
model.param('lambda22', desc='Parameter in asset demand function')
model.param('lambda23', desc='Parameter in asset demand function')
model.param('lambda24', desc='Parameter in asset demand function')
model.param('lambda30', desc='Parameter in asset demand function')
model.param('lambda32', desc='Parameter in asset demand function')
model.param('lambda33', desc='Parameter in asset demand function')
model.param('lambda34', desc='Parameter in asset demand function')
model.param('theta', desc='Tax rate')
model.param('top', desc='Top value for TP')
model.param('G', desc='Government goods')
model.param('Pblbar', desc='Exogenously set price of bonds')
model.param('Rbar', desc='Exogenously set interest rate on govt bills')
model.add('Y = C + G') # 5.1
model.add('YDr = Y - T + Rb(-1)*Bh(-1) + BLh(-1)') # 5.2
model.add('T = theta *(Y + Rb(-1)*Bh(-1) + BLh(-1))') # 5.3
model.add('V - V(-1) = (YDr - C) + CG') # 5.4
model.add('CG = (Pbl - Pbl(-1))*BLh(-1)')
model.add('C = alpha1*YDre + alpha2*V(-1)')
model.add('Ve = V(-1) + (YDre - C) + CG')
model.add('Hh = V - Bh - Pbl*BLh')
model.add('Hd = Ve - Bd - Pbl*BLd')
model.add('Bd = Ve*lambda20 + Ve*lambda22*Rb' +
'- Ve*lambda23*ERrbl - lambda24*YDre')
model.add('BLd = (Ve*lambda30 - Ve*lambda32*Rb ' +
'+ Ve*lambda33*ERrbl - lambda34*YDre)/Pbl')
model.add('Bh = Bd')
model.add('BLh = BLd')
model.add('Bs - Bs(-1) = (G + Rb(-1)*Bs(-1) + BLs(-1))' +
' - (T + Rb(-1)*Bcb(-1)) - Pbl*(BLs - BLs(-1))')
model.add('Hs - Hs(-1) = Bcb - Bcb(-1)')
model.add('Bcb = Bs - Bh')
model.add('BLs = BLh')
model.add('ERrbl = Rbl + ((chi * (Pble - Pbl))/ Pbl)')
model.add('Rbl = 1./Pbl')
model.add('Pble = Pble(-1) - betae*(Pble(-1) - Pbl) + add')
model.add('CGe = chi * (Pble - Pbl)*BLh')
model.add('YDre = YDr(-1)')
model.add('Rb = Rbar')
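    # z1/z2 below are 0/1 switches: the bond price is nudged up by a factor beta when the bond share TP is above the target band, and down when it falls below it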
model.add('Pbl = (1 + z1*beta - z2*beta)*Pbl(-1)')
model.add('z1 = if_true(TP > top)')
model.add('z2 = if_true(TP < bot)')
model.add('TP = (BLh(-1)*Pbl(-1))/(BLh(-1)*Pbl(-1) + Bh(-1))')
return model
lp2_parameters = {'alpha1': 0.8,
'alpha2': 0.2,
'beta': 0.01,
'betae': 0.5,
'chi': 0.1,
'lambda20': 0.44196,
'lambda22': 1.1,
'lambda23': 1,
'lambda24': 0.03,
'lambda30': 0.3997,
'lambda32': 1,
'lambda33': 1.1,
'lambda34': 0.03,
'theta': 0.1938,
'bot': 0.495,
'top': 0.505 }
lp2_exogenous = {'G': 20,
'Rbar': 0.03,
'Pblbar': 20,
'add': 0}
lp2_variables = {'V': 95.803,
'Bh': 37.839,
'Bs': 57.964,
'Bcb': 57.964 - 37.839,
'BLh': 1.892,
'BLs': 1.892,
'Hs': 20.125,
'YDr': 95.803,
'Rb': 0.03,
'Pbl': 20,
'Pble': 20,
'TP': 1.892*20/(1.892*20+37.839), # BLh*Pbl/(BLh*Pbl+Bh)
'z1': 0,
'z2': 0}
```
### Scenario: interest rate shock
```
lp2_bill = create_lp2_model()
lp2_bill.set_values(lp2_parameters)
lp2_bill.set_values(lp2_exogenous)
lp2_bill.set_values(lp2_variables)
lp2_bill.set_values({'z1': lp2_bill.evaluate('if_true(TP > top)'),
'z2': lp2_bill.evaluate('if_true(TP < bot)')})
for _ in range(10):
lp2_bill.solve(iterations=100, threshold=1e-4)
# shock the system
lp2_bill.set_values({'Rbar': 0.035})
for _ in range(45):
lp2_bill.solve(iterations=100, threshold=1e-4)
```
###### Figure 5.5
```
caption = '''
Figure 5.5 Evolution of the long-term interest rate (the bond yield), following an
increase in the short-term interest rate (the bill rate), as a result of the response of
the central bank and the Treasury, with Model LP2.'''
rbdata = [s['Rb'] for s in lp2_bill.solutions[5:]]
pbldata = [1./s['Pbl'] for s in lp2_bill.solutions[5:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 1.1, 0.9])
axes.tick_params(top=False, right=False)
axes.spines['top'].set_visible(False)
axes.set_ylim(0.029, 0.036)
axes.plot(rbdata, linestyle='--', color='r')
axes2 = axes.twinx()
axes2.spines['top'].set_visible(False)
axes2.set_ylim(0.0495, 0.052)
axes2.plot(pbldata, 'k')
# add labels
plt.text(12, 0.0518, 'Short-term interest rate')
plt.text(15, 0.0513, 'Long-term interest rate')
fig.text(0.05, 1.05, 'Bill rate')
fig.text(1.15, 1.05, 'Bond yield')
fig.text(0.1, -.1, caption);
```
###### Figure 5.6
```
caption = '''
Figure 5.6 Evolution of the target proportion (TP), that is the share of bonds in the
government debt held by households, following an increase in the short-term interest
rate (the bill rate) and the response of the central bank and of the Treasury,
with Model LP2'''
tpdata = [s['TP'] for s in lp2_bill.solutions[5:]]
topdata = [s['top'] for s in lp2_bill.solutions[5:]]
botdata = [s['bot'] for s in lp2_bill.solutions[5:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 1.1, 1.1])
axes.tick_params(top=False, right=False)
axes.spines['top'].set_visible(False)
axes.set_ylim(0.490, 0.506)
axes.plot(topdata, color='k')
axes.plot(botdata, color='k')
axes.plot(tpdata, linestyle='--', color='r')
# add labels
plt.text(30, 0.5055, 'Ceiling of target range')
plt.text(30, 0.494, 'Floor of target range')
plt.text(10, 0.493, 'Share of bonds')
plt.text(10, 0.4922, 'in government debt')
plt.text(10, 0.4914, 'held by households')
fig.text(0.1, -.15, caption);
```
### Scenario: Shock to the bond price expectations
```
lp2_bond = create_lp2_model()
lp2_bond.set_values(lp2_parameters)
lp2_bond.set_values(lp2_exogenous)
lp2_bond.set_values(lp2_variables)
lp2_bond.set_values({'z1': 'if_true(TP > top)',
'z2': 'if_true(TP < bot)'})
for _ in range(10):
lp2_bond.solve(iterations=100, threshold=1e-5)
# shock the system
lp2_bond.set_values({'add': -3})
lp2_bond.solve(iterations=100, threshold=1e-5)
lp2_bond.set_values({'add': 0})
for _ in range(43):
lp2_bond.solve(iterations=100, threshold=1e-4)
```
###### Figure 5.7
```
caption = '''
Figure 5.7 Evolution of the long-term interest rate, following an anticipated fall in
the price of bonds, as a consequence of the response of the central bank and of the
Treasury, with Model LP2'''
pbldata = [1./s['Pbl'] for s in lp2_bond.solutions[5:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 0.9, 0.9])
axes.tick_params(top=False, right=False)
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.set_ylim(0.0497, 0.0512)
axes.plot(pbldata, linestyle='--', color='k')
# add labels
plt.text(15, 0.0509, 'Long-term interest rate')
fig.text(0.1, -.1, caption);
```
###### Figure 5.8
```
caption = '''
Figure 5.8 Evolution of the expected and actual bond prices, following an anticipated
fall in the price of bonds, as a consequence of the response of the central bank and of
the Treasury, with Model LP2'''
pbldata = [s['Pbl'] for s in lp2_bond.solutions[5:]]
pbledata = [s['Pble'] for s in lp2_bond.solutions[5:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 0.9, 0.9])
axes.tick_params(top=False, right=False)
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.set_ylim(16.5, 21)
axes.plot(pbldata, linestyle='--', color='k')
axes.plot(pbledata, linestyle='-', color='r')
# add labels
plt.text(8, 20, 'Actual price of bonds')
plt.text(10, 19, 'Expected price of bonds')
fig.text(0.1, -.1, caption);
```
###### Figure 5.9
```
caption = '''
Figure 5.9 Evolution of the target proportion (TP), that is the share of bonds in the
government debt held by households, following an anticipated fall in the price of
bonds, as a consequence of the response of the central bank and of the Treasury, with
Model LP2'''
tpdata = [s['TP'] for s in lp2_bond.solutions[5:]]
topdata = [s['top'] for s in lp2_bond.solutions[5:]]
botdata = [s['bot'] for s in lp2_bond.solutions[5:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 0.9, 0.9])
axes.tick_params(top=False, right=False)
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.set_ylim(0.47, 0.52)
axes.plot(tpdata, linestyle='--', color='r')
axes.plot(botdata, linestyle='-', color='k')
axes.plot(topdata, linestyle='-', color='k')
# add labels
plt.text(30, 0.508, 'Ceiling of target range')
plt.text(30, 0.491, 'Floor of target range')
plt.text(10, 0.49, 'Share of bonds in')
plt.text(10, 0.487, 'government debt')
plt.text(10, 0.484, 'held by households')
fig.text(0.1, -.15, caption);
```
### Scenario: Model LP1, propensity to consume shock
```
lp1_alpha = create_lp1_model()
lp1_alpha.set_values(lp1_parameters)
lp1_alpha.set_values(lp1_exogenous)
lp1_alpha.set_values(lp1_variables)
for _ in range(10):
lp1_alpha.solve(iterations=100, threshold=1e-6)
# shock the system
lp1_alpha.set_values({'alpha1': 0.7})
for _ in range(45):
lp1_alpha.solve(iterations=100, threshold=1e-6)
```
### Model LP3
```
def create_lp3_model():
model = Model()
model.set_var_default(0)
model.var('Bcb', desc='Government bills held by the Central Bank')
model.var('Bd', desc='Demand for government bills')
model.var('Bh', desc='Government bills held by households')
model.var('Bs', desc='Government bills supplied by government')
model.var('BLd', desc='Demand for government bonds')
model.var('BLh', desc='Government bonds held by households')
model.var('BLs', desc='Supply of government bonds')
model.var('CG', desc='Capital gains on bonds')
model.var('CGe', desc='Expected capital gains on bonds')
model.var('C', desc='Consumption')
model.var('ERrbl', desc='Expected rate of return on bonds')
model.var('Hd', desc='Demand for cash')
model.var('Hh', desc='Cash held by households')
model.var('Hs', desc='Cash supplied by the central bank')
model.var('Pbl', desc='Price of bonds')
model.var('Pble', desc='Expected price of bonds')
model.var('PSBR', desc='Public sector borrowing requirement (PSBR)')
model.var('Rb', desc='Interest rate on government bills')
model.var('Rbl', desc='Interest rate on government bonds')
model.var('T', desc='Taxes')
model.var('TP', desc='Target proportion in households portfolio')
model.var('V', desc='Household wealth')
model.var('Ve', desc='Expected household wealth')
model.var('Y', desc='Income = GDP')
model.var('YDr', desc='Regular disposable income of households')
model.var('YDre', desc='Expected regular disposable income of households')
model.var('z1', desc='Switch parameter')
model.var('z2', desc='Switch parameter')
model.var('z3', desc='Switch parameter')
model.var('z4', desc='Switch parameter')
# no longer exogenous
model.var('G', desc='Government goods')
model.set_param_default(0)
model.param('add', desc='Random shock to expectations')
model.param('add2', desc='Addition to the government expenditure setting rule')
model.param('alpha1', desc='Propensity to consume out of income')
model.param('alpha2', desc='Propensity to consume out of wealth')
model.param('beta', desc='Adjustment parameter in price of bills')
model.param('betae', desc='Adjustment parameter in expectations')
model.param('bot', desc='Bottom value for TP')
model.param('chi', desc='Weight of conviction in expected bond price')
model.param('lambda10', desc='Parameter in asset demand function')
model.param('lambda12', desc='Parameter in asset demand function')
model.param('lambda13', desc='Parameter in asset demand function')
model.param('lambda14', desc='Parameter in asset demand function')
model.param('lambda20', desc='Parameter in asset demand function')
model.param('lambda22', desc='Parameter in asset demand function')
model.param('lambda23', desc='Parameter in asset demand function')
model.param('lambda24', desc='Parameter in asset demand function')
model.param('lambda30', desc='Parameter in asset demand function')
model.param('lambda32', desc='Parameter in asset demand function')
model.param('lambda33', desc='Parameter in asset demand function')
model.param('lambda34', desc='Parameter in asset demand function')
model.param('theta', desc='Tax rate')
model.param('top', desc='Top value for TP')
model.param('Pblbar', desc='Exogenously set price of bonds')
model.param('Rbar', desc='Exogenously set interest rate on govt bills')
model.add('Y = C + G') # 5.1
model.add('YDr = Y - T + Rb(-1)*Bh(-1) + BLh(-1)') # 5.2
model.add('T = theta *(Y + Rb(-1)*Bh(-1) + BLh(-1))') # 5.3
model.add('V - V(-1) = (YDr - C) + CG') # 5.4
model.add('CG = (Pbl - Pbl(-1))*BLh(-1)')
model.add('C = alpha1*YDre + alpha2*V(-1)')
model.add('Ve = V(-1) + (YDre - C) + CG')
model.add('Hh = V - Bh - Pbl*BLh')
model.add('Hd = Ve - Bd - Pbl*BLd')
model.add('Bd = Ve*lambda20 + Ve*lambda22*Rb' +
'- Ve*lambda23*ERrbl - lambda24*YDre')
model.add('BLd = (Ve*lambda30 - Ve*lambda32*Rb ' +
'+ Ve*lambda33*ERrbl - lambda34*YDre)/Pbl')
model.add('Bh = Bd')
model.add('BLh = BLd')
model.add('Bs - Bs(-1) = (G + Rb(-1)*Bs(-1) + BLs(-1))' +
' - (T + Rb(-1)*Bcb(-1)) - Pbl*(BLs - BLs(-1))')
model.add('Hs - Hs(-1) = Bcb - Bcb(-1)')
model.add('Bcb = Bs - Bh')
model.add('BLs = BLh')
model.add('ERrbl = Rbl + ((chi * (Pble - Pbl))/ Pbl)')
model.add('Rbl = 1./Pbl')
model.add('Pble = Pble(-1) - betae*(Pble(-1) - Pbl) + add')
model.add('CGe = chi * (Pble - Pbl)*BLh')
model.add('YDre = YDr(-1)')
model.add('Rb = Rbar')
model.add('Pbl = (1 + z1*beta - z2*beta)*Pbl(-1)')
model.add('z1 = if_true(TP > top)')
model.add('z2 = if_true(TP < bot)')
model.add('TP = (BLh(-1)*Pbl(-1))/(BLh(-1)*Pbl(-1) + Bh(-1))')
model.add('PSBR = (G + Rb*Bs(-1) + BLs(-1)) - (T + Rb*Bcb(-1))')
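    # Fiscal rule: pure government expenditure is cut when last period's PSBR-to-GDP ratio exceeds +3%, and raised when it falls below -3%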
model.add('z3 = if_true((PSBR(-1)/Y(-1)) > 0.03)')
model.add('z4 = if_true((PSBR(-1)/Y(-1)) < -0.03)')
model.add('G = G(-1) - (z3 + z4)*PSBR(-1) + add2')
return model
lp3_parameters = {'alpha1': 0.8,
'alpha2': 0.2,
'beta': 0.01,
'betae': 0.5,
'chi': 0.1,
'lambda20': 0.44196,
'lambda22': 1.1,
'lambda23': 1,
'lambda24': 0.03,
'lambda30': 0.3997,
'lambda32': 1,
'lambda33': 1.1,
'lambda34': 0.03,
'theta': 0.1938,
'bot': 0.495,
'top': 0.505 }
lp3_exogenous = {'Rbar': 0.03,
'Pblbar': 20,
'add': 0,
'add2': 0}
lp3_variables = {'G': 20,
'V': 95.803,
'Bh': 37.839,
'Bs': 57.964,
'Bcb': 57.964 - 37.839,
'BLh': 1.892,
'BLs': 1.892,
'Hs': 20.125,
'YDr': 95.803,
'Rb': 0.03,
'Pbl': 20,
'Pble': 20,
'PSBR': 0,
'Y': 115.8,
'TP': 1.892*20/(1.892*20+37.839), # BLh*Pbl/(BLh*Pbl+Bh)
'z1': 0,
'z2': 0,
'z3': 0,
'z4': 0}
```
### Scenario: LP3, decrease in propensity to consume
```
lp3_alpha = create_lp3_model()
lp3_alpha.set_values(lp3_parameters)
lp3_alpha.set_values(lp3_exogenous)
lp3_alpha.set_values(lp3_variables)
for _ in range(10):
lp3_alpha.solve(iterations=100, threshold=1e-6)
# shock the system
lp3_alpha.set_values({'alpha1': 0.7})
for _ in range(45):
lp3_alpha.solve(iterations=100, threshold=1e-6)
```
###### Figure 5.10
```
caption = '''
Figure 5.10 Evolution of national income (GDP), following a sharp decrease in the
propensity to consume out of current income, with Model LP1'''
ydata = [s['Y'] for s in lp1_alpha.solutions[5:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 0.9, 0.9])
axes.tick_params(top=False, right=False)
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.set_ylim(90, 128)
axes.plot(ydata, linestyle='--', color='k')
# add labels
plt.text(20, 110, 'Gross Domestic Product')
fig.text(0.1, -.05, caption);
```
###### Figure 5.11
```
caption = '''
Figure 5.11 Evolution of national income (GDP), following a sharp decrease in the
propensity to consume out of current income, with Model LP3'''
ydata = [s['Y'] for s in lp3_alpha.solutions[5:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 0.9, 0.9])
axes.tick_params(top=False, right=False)
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.set_ylim(90, 128)
axes.plot(ydata, linestyle='--', color='k')
# add labels
plt.text(20, 110, 'Gross Domestic Product')
fig.text(0.1, -.05, caption);
```
###### Figure 5.12
```
caption = '''
Figure 5.12 Evolution of pure government expenditures and of the government deficit
to national income ratio (the PSBR to GDP ratio), following a sharp decrease in the
propensity to consume out of current income, with Model LP3'''
gdata = [s['G'] for s in lp3_alpha.solutions[5:]]
ratiodata = [s['PSBR']/s['Y'] for s in lp3_alpha.solutions[5:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 0.9, 0.9])
axes.tick_params(top=False)
axes.spines['top'].set_visible(False)
axes.set_ylim(16, 20.5)
axes.plot(gdata, linestyle='--', color='r')
plt.text(5, 20.4, 'Pure government')
plt.text(5, 20.15, 'expenditures (LHS)')
plt.text(30, 18, 'Deficit to national')
plt.text(30, 17.75, 'income ratio (RHS)')
axes2 = axes.twinx()
axes2.tick_params(top=False)
axes2.spines['top'].set_visible(False)
axes2.set_ylim(-.01, 0.04)
axes2.plot(ratiodata, linestyle='-', color='b')
# add labels
fig.text(0.1, 1.05, 'G')
fig.text(0.9, 1.05, 'PSBR to Y ratio')
fig.text(0.1, -.1, caption);
```
```
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [8, 4]
plt.rcParams['font.size'] = 12
```
# Paths and waveguides
gdsfactory leverages [PHIDL](https://github.com/amccaugh/phidl)'s efficient
module for creating smooth curves, which is particularly useful for waveguide
structures such as those used in photonics. Creating a path device is simple:
- Create a blank `Path`
- Append points to the `Path` either using the built-in functions (`arc()`,
`straight()`, `euler()`, etc) or by providing your own lists of points
- Specify what you want the cross-section (`CrossSection`) to look like
- Combine the `Path` and the `CrossSection` (will output a Device with the path
polygons in it)
## Path creation
The first step is to generate the list of points we want the path to follow.
Let's start out by creating a blank `Path` and using the built-in functions to
make a few smooth turns.
```
from pp import Path, CrossSection, Component, qp
from pp import path as pa
import pp
import numpy as np
P = Path()
P.append( pa.arc(radius = 10, angle = 90) ) # Circular arc
P.append( pa.straight(length = 10) ) # Straight section
P.append( pa.euler(radius = 3, angle = -90) ) # Euler bend (aka "racetrack" curve)
P.append( pa.straight(length = 40) )
P.append( pa.arc(radius = 8, angle = -45) )
P.append( pa.straight(length = 10) )
P.append( pa.arc(radius = 8, angle = 45) )
P.append( pa.straight(length = 10) )
qp(P)
```
We can also modify our Path in the same ways as any other PHIDL object:
- Manipulation with `move()`, `rotate()`, `mirror()`, etc
- Accessing properties like `xmin`, `y`, `center`, `bbox`, etc
```
P.movey(10)
P.xmin = 20
qp(P)
```
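The other manipulations listed above work the same way. Here is a small sketch that operates on a copy (so the path used in the rest of this tutorial is unchanged) and reads back the `bbox` and `center` properties, which are assumed to behave like the usual PHIDL geometry attributes:
```
# Work on a copy so the original Path is left untouched
P2 = P.copy()
P2.rotate(45)      # rotate 45 degrees counter-clockwise about (0, 0)
P2.mirror((1, 0))  # reflect across the X-axis
print(P2.bbox)     # bounding box [[xmin, ymin], [xmax, ymax]]
print(P2.center)   # center of the bounding box
qp(P2)
```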
We can also check the length of the curve with the `length()` method:
```
P.length()
```
## Defining the cross-section
Now that we've got our path defined, the next step is to tell PHIDL what we want
the cross-section of the path to look like. To do this, we create a blank
`CrossSection` and add whatever cross-sections we want to it. We can then
combine the `Path` and the `CrossSection` using the `extrude()` function to
generate our final geometry:
```
# Create a blank CrossSection
X = CrossSection()
# Add a single "section" to the cross-section
X.add(width = 1, offset = 0, layer = 0)
# Combine the Path and the CrossSection
waveguide_device = P.extrude(cross_section = X)
# Quickplot the resulting Component
qp(waveguide_device)
```
Now, what if we want a more complicated waveguide? For instance, in some
photonic applications it's helpful to have a shallow etch that appears on either
side of the waveguide (often called a "sleeve"). Additionally, it might be nice
to have a Port on either end of the center section so we can snap other
geometries to it. Let's try adding something like that in:
```
# Create a blank CrossSection
X = CrossSection()
# Add a few "sections" to the cross-section
X.add(width = 1, offset = 0, layer = 0, ports = ('in','out'))
X.add(width = 3, offset = 2, layer = 2)
X.add(width = 3, offset = -2, layer = 2)
# Combine the Path and the CrossSection
waveguide_device = P.extrude(cross_section = X)
# Quickplot the resulting Component
waveguide_device
```
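Because we named ports on the center section, the extruded Component now carries them. A quick check (a sketch, assuming the extruded Component exposes its ports as a `ports` dictionary, the same attribute accessed on references later in this tutorial):
```
# Inspect the ports created from the named cross-section sections
for name, port in waveguide_device.ports.items():
    print(name, port)
```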
## Building Paths quickly
You can pass `append()` lists of path segments. This makes it easy to combine
paths very quickly. Below we show 3 examples using this functionality:
**Example 1:** Assemble a complex path by making a list of Paths and passing it
to `append()`
```
P = Path()
# Create the basic Path components
left_turn = pa.euler(radius = 4, angle = 90)
right_turn = pa.euler(radius = 4, angle = -90)
straight = pa.straight(length = 10)
# Assemble a complex path by making list of Paths and passing it to `append()`
P.append([
straight,
left_turn,
straight,
right_turn,
straight,
straight,
right_turn,
left_turn,
straight,
])
qp(P)
```
**Example 2:** Create an "S-turn" just by making a list of `[left_turn,
right_turn]`
```
P = Path()
# Create an "S-turn" just by making a list
s_turn = [left_turn, right_turn]
P.append(s_turn)
qp(P)
```
**Example 3:** Repeat the S-turn 3 times by nesting our S-turn list in another
list
```
P = Path()
# Create an "S-turn" using a list
s_turn = [left_turn, right_turn]
# Repeat the S-turn 3 times by nesting our S-turn list 3x times in another list
triple_s_turn = [s_turn, s_turn, s_turn]
P.append(triple_s_turn)
qp(P)
```
Note you can also use the Path() constructor to immediately construct your Path:
```
P = Path([straight, left_turn, straight, right_turn, straight])
qp(P)
```
## Custom curves
Now let's have some fun and try to make a loop-de-loop structure with parallel
waveguides and several Ports.
To create a new type of curve we simply make a function that produces an array
of points. The best way to do that is to create a function which allows you to
specify a large number of points along that curve -- in the case below, the
`looploop()` function outputs 1000 points along a looping path. Later, if we
want to reduce the number of points in our geometry we can trivially `simplify` the
path.
```
def looploop(num_pts = 1000):
""" Simple limacon looping curve """
t = np.linspace(-np.pi,0,num_pts)
r = 20 + 25*np.sin(t)
x = r*np.cos(t)
y = r*np.sin(t)
points = np.array((x,y)).T
return points
# Create the path points
P = Path()
P.append( pa.arc(radius = 10, angle = 90) )
P.append( pa.straight())
P.append( pa.arc(radius = 5, angle = -90) )
P.append( looploop(num_pts = 1000) )
P.rotate(-45)
# Create the cross-section
X = CrossSection()
X.add(width = 0.5, offset = 2, layer = 0, ports = [None,None])
X.add(width = 0.5, offset = 4, layer = 1, ports = [None,'out2'])
X.add(width = 1.5, offset = 0, layer = 2, ports = ['in','out'])
X.add(width = 1, offset = 0, layer = 3)
D = P.extrude(cross_section = X)
qp(D) # quickplot the resulting Component
c = pp.import_phidl_component(component=D)
pp.show(c)
```
You can create Paths from any array of points -- just be sure that they form
smooth curves! If we examine our path `P` we can see that all we've done is
simply create a long list of points:
```
import numpy as np
path_points = P.points # Curve points are stored as a numpy array in P.points
print(np.shape(path_points)) # The shape of the array is Nx2
print(len(P)) # Equivalently, use len(P) to see how many points are inside
```
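As a sketch of building a Path from your own points (using only the `append()` behaviour already shown with `looploop()` above), any smooth Nx2 array of coordinates works, for example a gentle sine wiggle:
```
# Build a custom Nx2 array of points and append it to a blank Path
t = np.linspace(0, 40, 400)                        # x coordinates
wiggle = np.array((t, 2*np.sin(2*np.pi*t/20))).T   # shape (400, 2)
P_custom = Path()
P_custom.append(wiggle)
qp(P_custom)
```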
## Simplifying / reducing point usage
One of the chief concerns of generating smooth curves is that too many points
are generated, inflating file sizes and making boolean operations
computationally expensive. Fortunately, PHIDL has a fast implementation of the
[Ramer–Douglas–Peucker
algorithm](https://en.wikipedia.org/wiki/Ramer%E2%80%93Douglas%E2%80%93Peucker_algorithm)
that lets you reduce the number of points in a curve without changing its shape.
All that needs to be done is when you `extrude()` the Component, you specify the
`simplify` argument.
If we specify `simplify = 1e-3`, the number of points in the line drops from
12,000 to 4,000, and the remaining points form a line that is identical to
within `1e-3` distance from the original (for the default 1 micron unit size,
this corresponds to 1 nanometer resolution):
```
# The remaining points form an identical line to within `1e-3` from the original
D = P.extrude(cross_section = X, simplify = 1e-3)
qp(D) # quickplot the resulting Component
```
Let's say we need fewer points. We can increase the simplify tolerance by
specifying `simplify = 1e-1`. This drops the number of points to ~400; the
remaining points form a line that is identical to within `1e-1` distance from the original:
```
D = P.extrude(cross_section = X, simplify = 1e-1)
qp(D) # quickplot the resulting Component
```
Taken to absurdity, what happens if we set `simplify = 0.3`? Once again, the
~200 remaining points form a line that is within `0.3` units from the original
-- but that line looks pretty bad.
```
D = P.extrude(cross_section = X, simplify = 0.3)
qp(D) # quickplot the resulting Component
```
## Curvature calculation
The `Path` class has a `curvature()` method that computes the curvature `K` of
your smooth path (K = 1/(radius of curvature)). This can be helpful for
verifying that your curves transition smoothly such as in [track-transition
curves](https://en.wikipedia.org/wiki/Track_transition_curve) (also known as
"racetrack", "Euler", or "straight-to-bend" curves in the photonics world).
Note this curvature is numerically computed so areas where the curvature jumps
instantaneously (such as between an arc and a straight segment) will be slightly
interpolated, and sudden changes in point density along the curve can cause
discontinuities.
```
P = Path()
P.append([
pa.straight(length = 10), # Should have a curvature of 0
# Euler straight-to-bend transition with min. bend radius of 3 (max curvature of 1/3)
pa.euler(radius = 3, angle = 90, p = 0.5, use_eff = False),
pa.straight(length = 10), # Should have a curvature of 0
pa.arc(radius = 10, angle = 90), # Should have a curvature of 1/10
pa.arc(radius = 5, angle = -90), # Should have a curvature of -1/5
pa.straight(length = 20), # Should have a curvature of 0
])
s,K = P.curvature()
plt.plot(s,K,'.-')
plt.xlabel('Position along curve (arc length)')
plt.ylabel('Curvature');
```
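As a quick numerical sanity check on the plot above (a sketch only; the values are approximate because, as noted, the curvature is computed numerically):
```
import numpy as np

s, K = P.curvature()
print(np.max(np.abs(K)))  # expect roughly 1/3, set by the Euler bend with radius = 3
print(np.min(K))          # expect roughly -1/5, from the arc with radius = 5
```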
## Transitioning between cross-sections
Often a critical element of building paths is being able to transition between
cross-sections. You can use the `transition()` function to do exactly this: you
simply feed it two `CrossSection`s and it will output a new `CrossSection` that
smoothly transitions between the two.
Let's start off by creating two cross-sections we want to transition between.
Note we give all the cross-sectional elements names by specifying the `name`
argument in the `add()` function -- this is important because the transition
function will try to match names between the two input cross-sections, and any
names not present in both inputs will be skipped.
```
from pp import Path, CrossSection, Component, qp
from pp import path as pa
import numpy as np
import pp
# Create our first CrossSection
X1 = CrossSection()
X1.add(width = 1.2, offset = 0, layer = 2, name = 'wg', ports = ('in1', 'out1'))
X1.add(width = 2.2, offset = 0, layer = 3, name = 'etch')
X1.add(width = 1.1, offset = 3, layer = 1, name = 'wg2')
# Create the second CrossSection that we want to transition to
X2 = CrossSection()
X2.add(width = 1, offset = 0, layer = 2, name = 'wg', ports = ('in2', 'out2'))
X2.add(width = 3.5, offset = 0, layer = 3, name = 'etch')
X2.add(width = 3, offset = 5, layer = 1, name = 'wg2')
# To show the cross-sections, let's create two Paths and
# create Devices by extruding them
P1 = pa.straight(length = 5)
P2 = pa.straight(length = 5)
WG1 = P1.extrude(cross_section = X1)
WG2 = P2.extrude(cross_section = X2)
# Place both cross-section Devices and quickplot them
D = Component()
wg1 = D << WG1
wg2 = D << WG2
wg2.movex(7.5)
qp(D)
```
Now let's create the transitional CrossSection by calling `transition()` with
these two CrossSections as input. If we want the width to vary as a smooth
sinusoid between the sections, we can set `width_type` to `'sine'`
(alternatively we could also use `'linear'`).
```
# Create the transitional CrossSection
Xtrans = pa.transition(cross_section1 = X1,
cross_section2 = X2,
width_type = 'sine')
# Create a Path for the transitional CrossSection to follow
P3 = pa.straight(length = 15)
# Use the transitional CrossSection to create a Component
WG_trans = P3.extrude(Xtrans)
qp(WG_trans)
```
Now that we have all of our components, let's `connect()` everything and see
what it looks like
```
D = Component()
wg1 = D << WG1 # First cross-section Component
wg2 = D << WG2
wgt = D << WG_trans
wgt.connect('in2', wg1.ports['out1'])
wg2.connect('in2', wgt.ports['out1'])
qp(D)
```
Note that since `transition()` outputs a `CrossSection`, we can make the
transition follow an arbitrary path:
```
# Transition along a curving Path
P4 = pa.euler(radius = 25, angle = 45, p = 0.5, use_eff = False)
WG_trans = P4.extrude(Xtrans)
D = Component()
wg1 = D << WG1 # First cross-section Component
wg2 = D << WG2
wgt = D << WG_trans
wgt.connect('in2', wg1.ports['out1'])
wg2.connect('in2', wgt.ports['out1'])
qp(D)
```
## Variable width / offset
In some instances, you may want to vary the width or offset of the path's cross-
section as it travels. This can be accomplished by giving the `CrossSection`
arguments that are functions or lists. Let's say we wanted a width that varies
sinusoidally along the length of the Path. To do this, we need to make a width
function that is parameterized from 0 to 1: for an example function
`my_custom_width_fun(t)`, the width at `t==0` is the width at the beginning of the
Path and the width at `t==1` is the width at the end.
```
def my_custom_width_fun(t):
# Note: Custom width/offset functions MUST be vectorizable--you must be able
# to call them with an array input like my_custom_width_fun([0, 0.1, 0.2, 0.3, 0.4])
num_periods = 5
w = 3 + np.cos(2*np.pi*t * num_periods)
return w
# Create the Path
P = pa.straight(length = 40)
# Create two cross-sections: one fixed width, one modulated by my_custom_width_fun
X = CrossSection()
X.add(width = 3, offset = -6, layer = 0)
X.add(width = my_custom_width_fun, offset = 0, layer = 0)
# Extrude the Path to create the Component
D = P.extrude(cross_section = X)
qp(D)
```
We can do the same thing with the offset argument:
```
def my_custom_offset_fun(t):
# Note: Custom width/offset functions MUST be vectorizable--you must be able
# to call them with an array input like my_custom_offset_fun([0, 0.1, 0.2, 0.3, 0.4])
num_periods = 3
w = 3 + np.cos(2*np.pi*t * num_periods)
return w
# Create the Path
P = pa.straight(length = 40)
# Create two cross-sections: one fixed offset, one modulated by my_custom_offset_fun
X = CrossSection()
X.add(width = 1, offset = my_custom_offset_fun, layer = 0)
X.add(width = 1, offset = 0, layer = 0)
# Extrude the Path to create the Device
D = P.extrude(cross_section = X)
qp(D)
```
## Offsetting a Path
Sometimes it's convenient to start with a simple Path and offset the line it
follows to suit your needs (without using a custom-offset CrossSection). Here,
we start with two copies of a simple straight Path and use the `offset()`
function to directly modify each Path.
```
def my_custom_offset_fun(t):
# Note: Custom width/offset functions MUST be vectorizable--you must be able
# to call them with an array input like my_custom_offset_fun([0, 0.1, 0.2, 0.3, 0.4])
num_periods = 1
w = 2 + np.cos(2*np.pi*t * num_periods)
return w
P1 = pa.straight(length = 40)
P2 = P1.copy() # Make a copy of the Path
P1.offset(offset = my_custom_offset_fun)
P2.offset(offset = my_custom_offset_fun)
P2.mirror((1,0)) # reflect across X-axis
qp([P1, P2])
```
## Modifying a CrossSection
In case you need to modify the CrossSection, it can be done simply by specifying
a `name` argument for the cross-sectional element you want to modify later.
Here is an example where we name two of the cross-sectional elements
`'myelement1'` and `'myelement2'`:
```
# Create the Path
P = pa.arc(radius = 10, angle = 45)
# Create a cross-section with two named elements
X = CrossSection()
X.add(width = 1, offset = 0, layer = 0, ports = (1,2), name = 'myelement1')
X.add(width = 1, offset = 3, layer = 0, ports = (3,4), name = 'myelement2')
# Extrude the Path to create the Device
D = P.extrude(cross_section = X)
qp(D)
```
In case we want to change any of the CrossSection elements, we simply access the
Python dictionary that specifies that element and modify the values
```
# Copy our original CrossSection
Xcopy = X.copy()
# Modify
Xcopy['myelement2']['width'] = 2 # X['myelement2'] is a dictionary
Xcopy['myelement2']['layer'] = 1 # X['myelement2'] is a dictionary
# Extrude the Path to create the Device
D = P.extrude(cross_section = Xcopy)
qp(D)
from pp import path as pa
from pp import CrossSection, Component
import pp
X1 = CrossSection()
X1.add(width = 1.2, offset = 0, layer = 2, name = 'wg', ports = ('in1', 'out1'))
X1.add(width = 2.2, offset = 0, layer = 3, name = 'etch')
X1.add(width = 1.1, offset = 3, layer = 1, name = 'wg2')
# Create the second CrossSection that we want to transition to
X2 = CrossSection()
X2.add(width = 1, offset = 0, layer = 2, name = 'wg', ports = ('in2', 'out2'))
X2.add(width = 3.5, offset = 0, layer = 3, name = 'etch')
X2.add(width = 3, offset = 5, layer = 1, name = 'wg2')
Xtrans = pa.transition(cross_section1 = X1,
cross_section2 = X2,
width_type = 'sine')
P1 = pa.straight(length = 5)
P2 = pa.straight(length = 5)
WG1 = P1.extrude(cross_section = X1)
WG2 = P2.extrude(cross_section = X2)
P4 = pa.euler(radius = 25, angle = 45, p = 0.5, use_eff = False)
WG_trans = P4.extrude(Xtrans)
c = Component()
wg1 = c << WG1
wg2 = c << WG2
wgt = c << WG_trans
wgt.connect('in2', wg1.ports['out1'])
wg2.connect('in2', wgt.ports['out1'])
pp.qp(c)
len(c.references)
```
# Ground state solvers
## Introduction
<img src="aux_files/H2_gs.png" width="200">
In this tutorial we are going to discuss the ground state calculation interface of Qiskit Nature. The goal is to compute the ground state of a molecular Hamiltonian. This Hamiltonian can be electronic or vibrational. To know more about the preparation of the Hamiltonian, check out the Electronic structure and Vibrational structure tutorials.
The first step is to define the molecular system. In the following we ask for the electronic part of a hydrogen molecule.
```
from qiskit import Aer
from qiskit_nature.drivers import PySCFDriver, UnitsType, Molecule
from qiskit_nature.problems.second_quantization import ElectronicStructureProblem
from qiskit_nature.converters.second_quantization import QubitConverter
from qiskit_nature.mappers.second_quantization import JordanWignerMapper
molecule = Molecule(geometry=[['H', [0., 0., 0.]],
['H', [0., 0., 0.735]]],
charge=0, multiplicity=1)
driver = PySCFDriver(molecule = molecule, unit=UnitsType.ANGSTROM, basis='sto3g')
es_problem = ElectronicStructureProblem(driver)
qubit_converter = QubitConverter(JordanWignerMapper())
```
## The Solver
Then we need to define a solver. The solver is the algorithm through which the ground state is computed.
Let's first start with a purely classical example: the NumPy minimum eigensolver. This algorithm exactly diagonalizes the Hamiltonian. Although it scales badly, it can be used on small systems to check the results of the quantum algorithms.
```
from qiskit.algorithms import NumPyMinimumEigensolver
numpy_solver = NumPyMinimumEigensolver()
```
To find the ground state we could also use the Variational Quantum Eigensolver (VQE) algorithm. The VQE algorithm works by exchanging information between a classical and a quantum computer as depicted in the following figure.
<img src="aux_files/vqe.png" width="600">
Let's initialize a VQE solver.
```
from qiskit.providers.aer import StatevectorSimulator
from qiskit import Aer
from qiskit.utils import QuantumInstance
from qiskit_nature.algorithms import VQEUCCFactory
quantum_instance = QuantumInstance(backend = Aer.get_backend('statevector_simulator'))
vqe_solver = VQEUCCFactory(quantum_instance)
```
To define the VQE solver one needs three essential elements:
1. A variational form: here we use the Unitary Coupled Cluster (UCC) ansatz (see for instance [Physical Review A 98.2 (2018): 022322]). Since it is a chemistry standard, a factory is already available allowing a fast initialization of a VQE with UCC. The default is to use all single and double excitations. However, the excitation type (S, D, SD) as well as other parameters can be selected.
2. An initial state: the initial state of the qubits. In the factory used above, the qubits are initialized in the Hartree-Fock initial state (see the electronic structure tutorial): the qubits corresponding to occupied MOs are $|1\rangle$ and those corresponding to virtual MOs are $|0\rangle$.
3. The backend: this is the quantum machine on which the right part of the figure above will be performed. Here we ask for the perfect quantum emulator (```statevector_simulator```).
One could also use any available ansatz / initial state or even define one's own. For instance,
```
from qiskit.algorithms import VQE
from qiskit.circuit.library import TwoLocal
tl_circuit = TwoLocal(rotation_blocks = ['h', 'rx'], entanglement_blocks = 'cz',
entanglement='full', reps=3, parameter_prefix = 'y')
tl_circuit.draw(output='mpl')
another_solver = VQE(ansatz = tl_circuit,
quantum_instance = QuantumInstance(Aer.get_backend('statevector_simulator')))
```
## The calculation and results
We are now ready to run the calculation.
```
from qiskit_nature.algorithms import GroundStateEigensolver
calc = GroundStateEigensolver(qubit_converter, vqe_solver)
res = calc.solve(es_problem)
print(res)
```
We can compare the VQE results to the NumPy exact solver and see that they match.
```
calc = GroundStateEigensolver(qubit_converter, numpy_solver)
res = calc.solve(es_problem)
print(res)
```
## Using a filter function
Sometimes the true ground state of the Hamiltonian is not of interest because it lies in a different symmetry sector of the Hilbert space. In this case the NumPy eigensolver can take a filter function so that it returns only the eigenstates with, for example, the correct number of particles. This is of particular importance for vibrational structure calculations, where the true ground state of the Hamiltonian is the vacuum state. A default filter function that checks the number of particles is implemented in the different transformations and can be used as follows:
```
from qiskit_nature.drivers import GaussianForcesDriver
from qiskit_nature.algorithms import NumPyMinimumEigensolverFactory
from qiskit_nature.problems.second_quantization import VibrationalStructureProblem
from qiskit_nature.mappers.second_quantization import DirectMapper
driver = GaussianForcesDriver(logfile='aux_files/CO2_freq_B3LYP_ccpVDZ.log')
vib_problem = VibrationalStructureProblem(driver, num_modals=2, truncation_order=2)
qubit_converter = QubitConverter(DirectMapper())
solver_without_filter = NumPyMinimumEigensolverFactory(use_default_filter_criterion=False)
solver_with_filter = NumPyMinimumEigensolverFactory(use_default_filter_criterion=True)
gsc_wo = GroundStateEigensolver(qubit_converter, solver_without_filter)
result_wo = gsc_wo.solve(vib_problem)
gsc_w = GroundStateEigensolver(qubit_converter, solver_with_filter)
result_w = gsc_w.solve(vib_problem)
print(result_wo)
print('\n\n')
print(result_w)
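
# One can also pass a custom filter instead of the default criterion.
# This is only a minimal sketch: it assumes that the first auxiliary operator
# evaluated by the problem is the particle-number operator and that two
# particles are wanted -- check the auxiliary-operator ordering of your
# problem before relying on these indices.
import numpy as np
from qiskit.algorithms import NumPyMinimumEigensolver

def custom_filter_criterion(eigenstate, eigenvalue, aux_values):
    if aux_values is None:
        return True
    num_particles = aux_values[0][0]
    return np.isclose(num_particles, 2.0)

# A plain MinimumEigensolver can be passed to GroundStateEigensolver,
# exactly like numpy_solver above.
custom_solver = NumPyMinimumEigensolver(filter_criterion=custom_filter_criterion)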
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
```
# GDAL Test Notebook to check usage of the GDAL library
To avoid incompatibilities between packages at installation time, install everything in one shot:
`conda install -c conda-forge gdal matplotlib scikit-image tqdm tensorflow`
```
from osgeo import gdal
import matplotlib
import tensorflow
file_path = r"C:\Users\VArri\Documents\Rooftop\dataset\dataset\dataset\austin1.tif"
raster = gdal.Open(file_path)
type(raster)
# Projection
print(raster.GetProjection())
# Dimensions
print(raster.RasterXSize)
print(raster.RasterYSize)
# Number of bands
print(raster.RasterCount)
# Metadata for the raster dataset
print(raster.GetMetadata())
ulx, xres, xskew, uly, yskew, yres = raster.GetGeoTransform()
# Xp = padfTransform[0] + P*padfTransform[1] + L*padfTransform[2];
# Yp = padfTransform[3] + P*padfTransform[4] + L*padfTransform[5];
# In a north up image, padfTransform[1] is the pixel width, and padfTransform[5] is the pixel height.
# The upper left corner of the upper left pixel is at position (padfTransform[0],padfTransform[3]).
print(raster.GetGeoTransform())
raster.GetProjection()
!gdalinfo $file_path
from tqdm import tqdm
!gdalwarp -tr 0.6 0.6 $file_path cvt.tif
import numpy
!gdalinfo cvt.tif
def GetExtent(ds):
    """ Return list of corner coordinates from a gdal Dataset """
    xmin, xpixel, _, ymax, _, ypixel = ds.GetGeoTransform()
    width, height = ds.RasterXSize, ds.RasterYSize
    xmax = xmin + width * xpixel
    ymin = ymax + height * ypixel
    return (xmin, ymax), (xmax, ymax), (xmax, ymin), (xmin, ymin)
# https://gis.stackexchange.com/questions/57834/how-to-get-raster-corner-coordinates-using-python-gdal-bindings

ext = GetExtent(raster)
ext
```
```
import json
with open(r'C:\Users\VArri\Documents\Rooftop\dataset\dataset\dataset\colab\val\bellingham1101.json') as json_file:
data = json.load(json_file)
for p in data['shapes']:
print(p['points'])
print(data["version"])
# print(data)
for element in data:
if 'imageData' in element:
print(element['imageData'])
# element.pop('imageData', None)
print(data)
```
## Preprocessing steps for 5000x5000px GeoTiff images
```
import os
from tqdm import tqdm
import json
from osgeo import gdal
dataset_dir = r"C:\Users\VArri\Documents\Rooftop\dataset\dataset\dataset"
train_dir = os.path.join(dataset_dir, 'train', 'images')
test_dir = os.path.join(dataset_dir, 'test', 'images')
file_path = os.path.join(dataset_dir, 'austin1.tif')
res_path = os.path.join(dataset_dir, 'austin1cvt.tif')
final_path = os.path.join(dataset_dir, 'austin1fin.tif')
!gdalwarp -tr 0.6 0.6 $file_path $res_path
!gdalinfo $res_path
def GetExtent(ds):
""" Return list of corner coordinates from a gdal Dataset """
xmin, xpixel, _, ymax, _, ypixel = ds.GetGeoTransform()
width, height = ds.RasterXSize, ds.RasterYSize
xmax = xmin + width * xpixel
ymin = ymax + height * ypixel
return round(xmin,0), round(xmax,0), round(ymin,0), round(ymax, 0)
# https://gis.stackexchange.com/questions/57834/how-to-get-raster-corner-coordinates-using-python-gdal-bindings
raster = gdal.Open(file_path)
ext = GetExtent(raster)
#print(ext)
xmin, xmax, ymin, ymax = [str(i) for i in ext]
print('Tile extent is')
print('Upper Left : ('+ xmin + ', ' + ymax + ') \n'
'Lower Left : (' + xmin + ', ' + ymin + ') \n'
'Upper Right : (' + xmax + ', ' + ymax + ') \n'
'Lower Right : (' + xmax + ', ' + ymin)
nxmin = ext[0]
nxmax = ext[0] + 1024 * 0.6
nymin = ext[3] - 1024 * 0.6
nymax = ext[3]
!gdalwarp -overwrite -te $nxmin $nymin $nxmax $nymax $res_path $final_path
!gdalinfo $res_path
os.getcwd()
def crop(dataset_dir, file_name):
    """Cut a resampled raster into 1024x1024 px (0.6 m/px) sub-tiles with gdalwarp."""
    file_path = os.path.join(dataset_dir, file_name)
    name, suffix = os.path.splitext(os.path.basename(file_path))
    raster = gdal.Open(file_path)
    extent = GetExtent(raster)  # (xmin, xmax, ymin, ymax)
    for i in range(raster.RasterXSize//1024 + 1):
        for j in range(raster.RasterYSize//1024 + 1):
            # Regular tiles are counted from the upper-left corner; the last
            # column/row is anchored to the right/bottom edge so that every
            # sub-tile is exactly 1024x1024 pixels.
            if i == raster.RasterXSize//1024:
                nxmin = extent[1] - 1024 * 0.6
                nxmax = extent[1]
            else:
                nxmin = extent[0] + 1024 * 0.6 * i
                nxmax = extent[0] + 1024 * 0.6 * (i+1)
            if j == raster.RasterYSize//1024:
                nymin = extent[2]
                nymax = extent[2] + 1024 * 0.6
            else:
                nymin = extent[3] - 1024 * 0.6 * (j+1)
                nymax = extent[3] - 1024 * 0.6 * j
            final_path = os.path.join(dataset_dir, name + '_' + str(i) + str(j) + suffix)
            !gdalwarp -overwrite -te $nxmin $nymin $nxmax $nymax $file_path $final_path
crop(dataset_dir, res_path)
final_path = os.path.join(dataset_dir, 'austin1_22.tif')
!gdalinfo $final_path
import json
import os
dataset_dir = r"C:\Users\VArri\Documents\Rooftop\dataset\dataset\dataset"
file_path = os.path.join(dataset_dir, 'austin711.json')
with open(file_path) as json_file:
data = json.load(json_file)
i=0
j=1
pxmin=1024*i
pxmax=1024*(i+1)
pymin=1024*j
pymax=1024*(j+1)
for p in data['shapes']:
    print(p['points'])
    print(p['points'][0][0])

print(data['imageHeight'])
data['imageHeight'] = 2048
print(data['imageHeight'])
with open("to.json", "w") as to:
destination = {}
json.dump(to, destination)
```
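The tiling above launches one `gdalwarp` process per sub-tile. The same cut can also be done directly from the Python bindings with `gdal.Translate` and a pixel window (`srcWin`), which avoids shelling out for every tile. This is only a sketch under the same assumptions as `crop` (1024 px tiles at 0.6 m/px, rasters at least 1024 px wide and tall); the output naming is illustrative.
```
import os
from osgeo import gdal

def crop_with_translate(dataset_dir, file_name, tile_px=1024):
    """Cut a raster into tile_px x tile_px sub-tiles using gdal.Translate."""
    src_path = os.path.join(dataset_dir, file_name)
    name, suffix = os.path.splitext(os.path.basename(src_path))
    ds = gdal.Open(src_path)
    for i in range(ds.RasterXSize // tile_px + 1):
        for j in range(ds.RasterYSize // tile_px + 1):
            # Anchor the last column/row to the raster border so every tile is full size
            xoff = min(i * tile_px, ds.RasterXSize - tile_px)
            yoff = min(j * tile_px, ds.RasterYSize - tile_px)
            dst_path = os.path.join(dataset_dir, f"{name}_{i}{j}{suffix}")
            gdal.Translate(dst_path, ds, srcWin=[xoff, yoff, tile_px, tile_px])

crop_with_translate(dataset_dir, 'austin1cvt.tif')
```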
# From Decision Trees to Random Forests
```
Authors: Alexandre Gramfort
Thomas Moreau
```
## Bagging classifiers
We saw that by increasing the depth of the tree, we are going to get an over-fitted model. A way to bypass the choice of a specific depth is to combine several trees together.
Let's start by training several trees on slightly different data. The slightly different datasets can be generated by randomly sampling with replacement; in statistics, this is called a bootstrap sample. We will use the iris dataset to create such an ensemble and make sure that we keep some data for training and some held-out data for testing.
```
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y)
```
Before training several decision trees, we will run a single tree. However, instead of training this tree on `X_train`, we want to train it on a bootstrap sample. You can use the `np.random.choice` function to sample indices with replacement. The `bootstrap_idx` and `bootstrap_sample` helpers defined below generate such a bootstrap sample of the training data (an equivalent approach would be to pass a `sample_weight` vector of bootstrap counts to the `fit` method of the `DecisionTreeClassifier`).
```
def bootstrap_idx(X):
indices = np.random.choice(
np.arange(X.shape[0]), size=X.shape[0], replace=True
)
return indices
bootstrap_idx(X_train)
from collections import Counter
Counter(bootstrap_idx(X_train))
def bootstrap_sample(X, y):
indices = bootstrap_idx(X)
return X[indices], y[indices]
X_train_bootstrap, y_train_bootstrap = bootstrap_sample(X_train, y_train)
print(f'Classes distribution in the original data: {Counter(y_train)}')
print(f'Classes distribution in the bootstrap: {Counter(y_train_bootstrap)}')
```
<div class="alert alert-success">
<b>EXERCISE: Create a bagging classifier</b>:<br>
<br>
A bagging classifier will train several decision tree classifiers, each of them on a different bootstrap sample.
<ul>
<li>
Create several <code>DecisionTreeClassifier</code> and store them in a Python list;
</li>
<li>
Loop over these trees and <code>fit</code> them by generating a bootstrap sample using <code>bootstrap_sample</code> function;
</li>
<li>
To predict with this ensemble of trees on new data (testing set), you can provide the same set to each tree and call the <code>predict</code> method. Aggregate all predictions in a NumPy array;
</li>
<li>
Once the predictions available, you need to provide a single prediction: you can retain the class which was the most predicted which is called a majority vote;
</li>
<li>
Finally, check the accuracy of your model.
</li>
</ul>
</div>
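One possible sketch for the exercise above (one of many valid solutions; it reuses the `bootstrap_sample` helper and the train/test split defined earlier, and the number of trees is an arbitrary choice):
```
import numpy as np
from sklearn.tree import DecisionTreeClassifier

n_trees = 100
trees = [DecisionTreeClassifier() for _ in range(n_trees)]

# Fit each tree on its own bootstrap sample of the training set
for tree in trees:
    X_boot, y_boot = bootstrap_sample(X_train, y_train)
    tree.fit(X_boot, y_boot)

# Collect the predictions of every tree on the test set: shape (n_trees, n_test_samples)
all_predictions = np.array([tree.predict(X_test) for tree in trees])

# Majority vote: keep the most frequent predicted class for each test sample
# (np.bincount assumes integer-encoded class labels, as in the iris dataset)
y_pred = np.apply_along_axis(
    lambda votes: np.bincount(votes).argmax(), axis=0, arr=all_predictions
)

print(f"Bagging accuracy: {np.mean(y_pred == y_test):.3f}")
```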
<div class="alert alert-success">
<b>EXERCISE: using scikit-learn</b>:
<br>
After implementing your own bagging classifier, use a <code>BaggingClassifier</code> from scikit-learn to fit the above data.
</div>
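A possible way to do the same with scikit-learn (a sketch: the first argument is the base estimator, and bootstrapping is enabled by default):
```
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100)
bagging.fit(X_train, y_train)
print(f"BaggingClassifier accuracy: {bagging.score(X_test, y_test):.3f}")
```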
## Random Forests
A very famous classifier is the random forest classifier. It is similar to the bagging classifier: in addition to the bootstrap, a random forest uses a random subset of the features to find the best split at each node.
<div class="alert alert-success">
<b>EXERCISE: Create a random forest classifier</b>:
<br>
    Use your previous code which generated several <code>DecisionTreeClassifier</code> instances. Check the list of options of this classifier and modify one of its parameters such that only $\sqrt{F}$ features are used for splitting, where $F$ is the number of features in the dataset.
</div>
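One possible sketch for the exercise above: the only change with respect to the bagging sketch is the `max_features` parameter of each tree, which restricts every split to a random subset of $\sqrt{F}$ features.
```
import numpy as np
from sklearn.tree import DecisionTreeClassifier

n_trees = 100
# max_features='sqrt' makes each split consider only sqrt(n_features) randomly chosen features
trees = [DecisionTreeClassifier(max_features='sqrt') for _ in range(n_trees)]

for tree in trees:
    X_boot, y_boot = bootstrap_sample(X_train, y_train)
    tree.fit(X_boot, y_boot)

all_predictions = np.array([tree.predict(X_test) for tree in trees])
y_pred = np.apply_along_axis(
    lambda votes: np.bincount(votes).argmax(), axis=0, arr=all_predictions
)
print(f"Hand-made random forest accuracy: {np.mean(y_pred == y_test):.3f}")
```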
<div class="alert alert-success">
<b>EXERCISE: using scikit-learn</b>:
<br>
After implementing your own random forest classifier, use a <code>RandomForestClassifier</code> from scikit-learn to fit the above data.
</div>
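And a possible scikit-learn equivalent (a sketch; for classification, `RandomForestClassifier` bootstraps each tree and considers $\sqrt{F}$ features per split by default):
```
from sklearn.ensemble import RandomForestClassifier

forest = RandomForestClassifier(n_estimators=100)
forest.fit(X_train, y_train)
print(f"RandomForestClassifier accuracy: {forest.score(X_test, y_test):.3f}")
```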
```
from figures import plot_forest_interactive
plot_forest_interactive()
```