# 1.1 Example: Polynomial Curve Fitting
Suppose we have two real-valued variables $x, t$ related by
$$t = \sin(2\pi x) + \epsilon$$
where $\epsilon$ is a random value drawn from a Gaussian distribution.
Suppose we have `N` observations of $(x, t)$, written $\mathsf x \equiv (x_1, \dots, x_N)^\top, \mathsf t \equiv (t_1, \dots, t_N)^\top$:
```
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
%matplotlib inline
# set N
N = 10
# generate N equally spaced numbers in [0, 1]
x_tr = np.linspace(0, 1, N)
# compute t
t_tr = np.sin(2 * np.pi * x_tr) + 0.25 * np.random.randn(N)
# plot
xx = np.linspace(0, 1, 500)
fig, ax = plt.subplots()
ax.plot(x_tr, t_tr, 'co')
ax.plot(xx, np.sin(2 * np.pi * xx), 'g')
ax.set_xlim(-0.1, 1.1)
ax.set_ylim(-1.5, 1.5)
ax.set_xticks([0, 1])
ax.set_yticks([-1, 0, 1])
ax.set_xlabel("$x$", fontsize="x-large")
ax.set_ylabel("$t$", fontsize="x-large")
plt.show()
```
Using these $N$ data points as a training set, we want to build a model that, given a new input $\hat x$, predicts the corresponding output $\hat t$.
We approach this problem by curve fitting.
Specifically, we fit a polynomial of the form:
$$
y(x,\mathbf w)=w_0+w_1 x + w_2 x^2 + \cdots + w_M x^M = \sum_{j=0}^M w_j x^j
$$
where $M$ is the order of the polynomial, $x^j$ denotes $x$ raised to the power $j$, and $\mathbf w \equiv (w_0, w_1, \dots, w_M)$ are the polynomial coefficients.
The coefficients are determined by fitting the polynomial to the data, i.e. by minimizing, over the training set, an error function between $y(x,\mathbf w)$ and $t$. A commonly used error function is the sum of squared errors:
$$
E(\mathbf w)=\frac{1}{2} \sum_{n=1}^N \left\{y(x_n, \mathbf w) - t_n\right\}^2
$$
The factor $\frac{1}{2}$ is included for later computational convenience.
This error function is non-negative, and it equals zero if and only if the function $y(x, \mathbf w)$ passes through every data point.
Because the error function is quadratic in $\mathbf w$, its gradient with respect to $\mathbf w$ is linear in $\mathbf w$, so the minimizer $\mathbf w^\star$ is unique and can be found in closed form by setting the derivatives to zero:
$$
\frac{\partial E(\mathbf w)}{\partial w_j} = \sum_{n=1}^N \left(\sum_{k=0}^M w_k x_n^k - t_n\right) x_n^j = 0
$$
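As a minimal sketch (an illustrative addition, not part of the original text), the least-squares coefficients can be obtained directly from the design (Vandermonde) matrix; this is essentially what `np.polyfit` does in the cells below:
```
# Minimal sketch: solve the least-squares problem for the polynomial coefficients directly.
# Assumes x_tr and t_tr from the cell above; M = 3 is just an example choice.
M = 3
Phi = x_tr[:, None] ** np.arange(M + 1)          # design matrix, Phi[n, j] = x_n^j
w_star, *_ = np.linalg.lstsq(Phi, t_tr, rcond=None)
print(w_star)                                    # coefficients w_0, ..., w_M
```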
Another modeling choice is the order $M$ of the polynomial. Below we plot the fits for $M = 0,1,3,9$ (red curves).
Among these, $M=3$ appears to be a good choice; $M=9$ fits the training data best (it passes through every training point) but clearly overfits.
```
fig, axes = plt.subplots(2, 2, figsize=(12, 8))
axes = axes.flatten()
Ms = [0, 1, 3, 9]
for ax, M in zip(axes, Ms):
# fit the coefficients
coeff = np.polyfit(x_tr, t_tr, M)
# build the polynomial function y(x, w)
f = np.poly1d(coeff)
# plot
xx = np.linspace(0, 1, 500)
ax.plot(x_tr, t_tr, 'co')
ax.plot(xx, np.sin(2 * np.pi * xx), 'g')
ax.plot(xx, f(xx), 'r')
ax.set_xlim(-0.1, 1.1)
ax.set_ylim(-1.5, 1.5)
ax.set_xticks([0, 1])
ax.set_yticks([-1, 0, 1])
ax.set_xlabel("$x$",fontsize="x-large")
ax.set_ylabel("$t$",fontsize="x-large")
ax.text(0.6, 1, '$M={}$'.format(M), fontsize="x-large")
plt.show()
```
To assess a model we usually evaluate it on a separate test set drawn from the same distribution as the training data, and compute $E(\mathbf w^\star)$ on both the training and test sets for each model choice.
Because the scale of $E(\mathbf w^\star)$ grows with the number of test points, a better quantity is the root-mean-square (RMS) error:
$$
E_{RMS}=\sqrt{2E(\mathbf w^\star) / N}
$$
The RMS error is measured on the same scale, and in the same units, as the target variable $t$.
We generate 100 additional data points in the same way to use as a test set, and compute the RMS error for each value of $M$:
```
x_te = np.random.rand(100)
t_te = np.sin(2 * np.pi * x_te) + 0.25 * np.random.randn(100)
rms_tr, rms_te = [], []
for M in range(10):
# fit the coefficients
coeff = np.polyfit(x_tr, t_tr, M)
# build the polynomial function y(x, w)
f = np.poly1d(coeff)
# RMS
rms_tr.append(np.sqrt(((f(x_tr) - t_tr) ** 2).sum() / x_tr.shape[0]))
rms_te.append(np.sqrt(((f(x_te) - t_te) ** 2).sum() / x_te.shape[0]))
# plot
fig, ax = plt.subplots()
ax.plot(range(10), rms_tr, 'bo-', range(10), rms_te, 'ro-')
ax.set_xlim(-1, 10)
ax.set_ylim(0, 1)
ax.set_xticks(range(0, 10, 3))
ax.set_yticks([0, 0.5, 1])
ax.set_xlabel("$M$",fontsize="x-large")
ax.set_ylabel("$E_{RMS}$",fontsize="x-large")
ax.legend(['Training', 'Test'], loc="best")
plt.show()
```
At $M = 9$ the training error drops to `0`, but the test error is large.
Inspecting the coefficients of the $M = 9$ polynomial shows that they become very large in order to fit the data points exactly:
```
for i, w in enumerate(np.polyfit(x_tr, t_tr, 9)):
print "w_{}, {:.2f}".format(9 - i, w)
```
Another interesting experiment is to see how the $M = 9$ model behaves as the amount of training data $N$ grows:
```
fig, axes = plt.subplots(1, 2, figsize=(12, 4))
axes = axes.flatten()
M = 9  # use the degree-9 polynomial from above
for ax, N in zip(axes, (15, 100)):
# generate N equally spaced numbers in [0, 1]
x_tr_more = np.linspace(0, 1, N)
# compute t
t_tr_more = np.sin(2 * np.pi * x_tr_more) + 0.25 * np.random.randn(N)
# fit the coefficients
coeff = np.polyfit(x_tr_more, t_tr_more, M)
# build the polynomial function y(x, w)
f = np.poly1d(coeff)
# plot
xx = np.linspace(0, 1, 500)
ax.plot(x_tr_more, t_tr_more, 'co')
ax.plot(xx, np.sin(2 * np.pi * xx), 'g')
ax.plot(xx, f(xx), 'r')
ax.set_xlim(-0.1, 1.1)
ax.set_ylim(-1.5, 1.5)
ax.set_xticks([0, 1])
ax.set_yticks([-1, 0, 1])
ax.set_xlabel("$x$", fontsize="x-large")
ax.set_ylabel("$t$", fontsize="x-large")
ax.text(0.6, 1, '$N={}$'.format(N), fontsize="x-large")
plt.show()
```
As $N$ increases, the overfitting becomes less severe: for a fixed model complexity, more data reduces overfitting.
Returning to the earlier problem: if we insist on fitting the $M=9$ model to only $N=10$ data points, a common remedy is to add a regularization term that constrains the parameters and discourages overfitting. The most common choice is a quadratic regularizer, which controls the sum of squares of all the parameters:
$$
\tilde E(\mathbf w) = \frac{1}{2}\sum_{n=1}^N \left\{y(x_n,\mathbf w) - t_n\right\}^2 + \frac{\lambda}{2} \|\mathbf w\|^2
$$
where $\|\mathbf w\|^2 \equiv \mathbf{w^\top w} = w_0^2 + \dots + w_M^2$, and $\lambda$ controls the relative importance of the regularization term and the error term.
If we define the vector $\phi(x)$ by $\phi_i(x) = x^i,\ i = 0,1,\dots,M$, then minimizing over $\mathbf w$ gives a solution satisfying:
$$
\left[\sum_{n=1}^N \phi(x_n) \phi(x_n)^\top + \lambda \mathbf I\right] \mathbf w = \sum_{n=1}^N t_n \phi(x_n)
$$
```
def phi(x, M):
return x[:,None] ** np.arange(M + 1)
# solution with the regularization term
M = 9
lam = 0.0001
phi_x_tr = phi(x_tr, M)
S_0 = phi_x_tr.T.dot(phi_x_tr) + lam * np.eye(M+1)
y_0 = t_tr.dot(phi_x_tr)
coeff = np.linalg.solve(S_0, y_0)[::-1]
f = np.poly1d(coeff)
# plot
xx = np.linspace(0, 1, 500)
fig, ax = plt.subplots()
ax.plot(x_tr, t_tr, 'co')
ax.plot(xx, np.sin(2 * np.pi * xx), 'g')
ax.plot(xx, f(xx), 'r')
ax.set_xlim(-0.1, 1.1)
ax.set_ylim(-1.5, 1.5)
ax.set_xticks([0, 1])
ax.set_yticks([-1, 0, 1])
ax.set_xlabel("$x$", fontsize="x-large")
ax.set_ylabel("$t$", fontsize="x-large")
plt.show()
```
More generally, to choose the complexity parameters of the model ($\lambda, M$), a common approach is to hold out a small portion of the training data as a validation set and use it to compare different complexity settings; the drawback is that less data remains for training.
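A minimal sketch of this idea (the split sizes and candidate $\lambda$ values below are arbitrary illustrative choices; it assumes `x_tr`, `t_tr` and the `phi` helper defined above):
```
# Minimal sketch: hold out part of the training data to choose the regularization strength lambda.
M = 9
n_val = 3
x_fit, t_fit = x_tr[:-n_val], t_tr[:-n_val]      # used for fitting
x_val, t_val = x_tr[-n_val:], t_tr[-n_val:]      # held out for validation
best = None
for lam in [1e-8, 1e-6, 1e-4, 1e-2, 1.0]:
    Phi = phi(x_fit, M)
    w = np.linalg.solve(Phi.T.dot(Phi) + lam * np.eye(M + 1), t_fit.dot(Phi))
    rms_val = np.sqrt(((phi(x_val, M).dot(w) - t_val) ** 2).mean())
    if best is None or rms_val < best[1]:
        best = (lam, rms_val)
print("chosen lambda:", best[0], "validation RMS:", best[1])
```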
Deep Learning
=============
Assignment 2
------------
Previously in `1_notmnist.ipynb`, we created a pickle with formatted datasets for training, development and testing on the [notMNIST dataset](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html).
The goal of this assignment is to progressively train deeper and more accurate models using TensorFlow.
```
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
```
First reload the data we generated in `1_notmnist.ipynb`.
```
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
```
Reformat into a shape that's more adapted to the models we're going to train:
- data as a flat matrix,
- labels as float 1-hot encodings.
```
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
```
We're first going to train a multinomial logistic regression using simple gradient descent.
TensorFlow works like this:
* First you describe the computation that you want to see performed: what the inputs, the variables, and the operations look like. These get created as nodes over a computation graph. This description is all contained within the block below:
with graph.as_default():
...
* Then you can run the operations on this graph as many times as you want by calling `session.run()`, providing it outputs to fetch from the graph that get returned. This runtime operation is all contained in the block below:
with tf.Session(graph=graph) as session:
...
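As a minimal illustration of this two-step pattern (a small sketch using the same TensorFlow 1.x API as the assignment code below):
```
# Minimal sketch of the describe-then-run pattern (TensorFlow 1.x style, as used in this notebook).
graph = tf.Graph()
with graph.as_default():
    a = tf.constant(2.0)
    b = tf.constant(3.0)
    total = a + b                 # just a node in the graph; nothing is computed yet

with tf.Session(graph=graph) as session:
    print(session.run(total))     # the computation runs here and prints 5.0
```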
Let's load all the data into TensorFlow and build the computation graph corresponding to our training:
```
# With gradient descent training, even this much data is prohibitive.
# Subset the training data for faster turnaround.
train_subset = 10000
graph = tf.Graph()
with graph.as_default():
# Input data.
# Load the training, validation and test data into constants that are
# attached to the graph.
tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
tf_train_labels = tf.constant(train_labels[:train_subset])
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
# These are the parameters that we are going to be training. The weight
# matrix will be initialized using random values following a (truncated)
# normal distribution. The biases get initialized to zero.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
# We multiply the inputs with the weight matrix, and add biases. We compute
# the softmax and cross-entropy (it's one operation in TensorFlow, because
# it's very common, and it can be optimized). We take the average of this
# cross-entropy across all training examples: that's our loss.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
# We are going to find the minimum of this loss using gradient descent.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
# These are not part of training, but merely here so that we can report
# accuracy figures as we train.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
```
Let's run this computation and iterate:
```
num_steps = 801
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
with tf.Session(graph=graph) as session:
# This is a one-time operation which ensures the parameters get initialized as
# we described in the graph: random weights for the matrix, zeros for the
# biases.
tf.initialize_all_variables().run()
print('Initialized')
for step in range(num_steps):
# Run the computations. We tell .run() that we want to run the optimizer,
# and get the loss value and the training predictions returned as numpy
# arrays.
_, l, predictions = session.run([optimizer, loss, train_prediction])
if (step % 100 == 0):
print('Loss at step %d: %f' % (step, l))
print('Training accuracy: %.1f%%' % accuracy(predictions, train_labels[:train_subset, :]))
# Calling .eval() on valid_prediction is basically like calling run(), but
# just to get that one numpy array. Note that it recomputes all its graph
# dependencies.
print('Validation accuracy: %.1f%%' % accuracy(valid_prediction.eval(), valid_labels))
print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
```
Let's now switch to stochastic gradient descent training instead, which is much faster.
The graph will be similar, except that instead of holding all the training data in a constant node, we create a `Placeholder` node which will be fed actual data at every call of `session.run()`.
```
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
```
Let's run it:
```
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
```
---
Problem
-------
Turn the logistic regression example with SGD into a 1-hidden layer neural network with rectified linear units (nn.relu()) and 1024 hidden nodes. This model should improve your validation / test accuracy.
---
```
batch_size = 128
num_hidden_nodes = 1024
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights1 = tf.Variable(
tf.truncated_normal([image_size * image_size, num_hidden_nodes]))
biases1 = tf.Variable(tf.zeros([num_hidden_nodes]))
weights2 = tf.Variable(
tf.truncated_normal([num_hidden_nodes, num_labels]))
biases2 = tf.Variable(tf.zeros([num_labels]))
# Training computation.
lay1_train = tf.nn.relu(tf.matmul(tf_train_dataset, weights1) + biases1)
logits = tf.matmul(lay1_train, weights2) + biases2
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
lay1_valid = tf.nn.relu(tf.matmul(tf_valid_dataset, weights1) + biases1)
valid_prediction = tf.nn.softmax(tf.matmul(lay1_valid, weights2) + biases2)
lay1_test = tf.nn.relu(tf.matmul(tf_test_dataset, weights1) + biases1)
test_prediction = tf.nn.softmax(tf.matmul(lay1_test, weights2) + biases2)
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
```
# PLOT Notes
# Matplotlib - generating plots consciously
2019.07.12.
based on https://dev.to/skotaro/artist-in-matplotlib---something-i-wanted-to-know-before-spending-tremendous-hours-on-googling-how-tos--31oo
## Pyplot and object-oriented API
These are two different coding styles for making plots in matplotlib. The object-oriented (OO) API style is officially recommended: we use an instance of axes.Axes to render visualizations on an instance of figure.Figure. The second style is MATLAB-like and uses a state-based interface, encapsulated in the pyplot module. Important things:
* The Figure is the final image that may contain 1 or more Axes.
* The Axes represent an individual plot (don't confuse this with the word "axis", which refers to the x/y axis of a plot).
For more info see:
* pyplot tutorial https://matplotlib.org/tutorials/introductory/pyplot.html
* OO API tutorial https://matplotlib.org/tutorials/introductory/lifecycle.html
## Pyplot interface
* MATLAB-user-friendly style in which everything is done with plt.***
* very fast, but has limited options
* Example 1: Pyplot example - simple plots
* called "stateful interface" - which figure and subplot you are currently in
```
"""
Example 1: Pyplot example - simple plots
"""
import numpy as np
import matplotlib.pyplot as plt
#https://matplotlib.org/tutorials/introductory/pyplot.html
def f(t):
return np.exp(-t) * np.cos(2*np.pi*t)
t1 = np.arange(0.0, 5.0, 0.1)
t2 = np.arange(0.0, 5.0, 0.02)
plt.figure(1)
plt.subplot(211)
plt.plot(t1, f(t1), 'bo', t2, f(t2), 'k')
plt.subplot(212)
plt.plot(t2, np.cos(2*np.pi*t2), 'r--')
plt.show();
```
## OO API style
* fig, ax = plt.subplots(), followed by ax.plot, ax.imshow etc.; fig and ax are Artists
* fig.add_subplot is an alternative starting statement
* fig = plt.gcf() and ax = plt.gca() are used when you switch from the Pyplot interface to the OO interface (see the sketch below)
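A minimal sketch of these three ways to get hold of the Figure and Axes objects (assumes matplotlib is imported as plt):
```
# Minimal sketch: three ways to obtain Figure and Axes objects for the OO API.
import matplotlib.pyplot as plt

# 1) the recommended one-liner
fig, ax = plt.subplots()

# 2) create the Figure first, then add an Axes to it
fig2 = plt.figure()
ax2 = fig2.add_subplot(1, 1, 1)

# 3) switching from the pyplot interface: grab the current Figure/Axes
plt.plot([0, 1], [0, 1])
fig3 = plt.gcf()   # get current Figure
ax3 = plt.gca()    # get current Axes
```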
### The hierarchy in matplotlib
* matplotlib has a hierarchical structure of special Artist elements called "containers"
* Figure - the whole area to display
* Axes - an individual plot
* Axis - the x/y axis used to plot the data
* the 4th container type is the Tick!
* see figure at:https://res.cloudinary.com/practicaldev/image/fetch/s--dNi3F76s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/rr39m52m6peef1drke7m.png
* starting a figure
> fig, ax = plt.subplots() # make Figure and Axes which belong to 'fig'
* or
> fig = plt.figure() # make Figure
> ax = fig.add_subplot(1,1,1) # make Axes belonging to fig
* rules to remember (see the sketch after Example 2 below):
* A Figure can contain multiple Axes, because fig.axes is a list of Axes.
* An Axes can belong to only a single Figure, because ax.figure is not a list.
* An Axes has one XAxis and one YAxis for a similar reason.
* An XAxis and YAxis belong to a single Axes and, accordingly, a single Figure.
> fig.axes
> ax.figure
> ax.xaxis
> ax.xaxis.axes
> ax.xaxis.figure
* Artists
* every single component in a figure is an Artist object
* names of all elements are here: https://res.cloudinary.com/practicaldev/image/fetch/s--1x1epD95--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/b9psb0mtz7yk8qmfe26f.png
* two types of Artist objects:
* CONTAINERS;
Figure, Axes, Axis and Tick
have many "boxes" (Python lists), one for each type of primitive.
e.g. an Axes object (ax) initially has an empty list ax.lines.
a command like ax.plot adds a Line2D object to that list and silently does other accompanying settings.
* PRIMITIVES; placed inside the containers, e.g. a Line2D made by ax.plot, a PathCollection by ax.scatter, or a Text by ax.annotate
see Example 2: Containers and Primitives.
```
"""
Example 2: Containers and Primitives
"""
# data
x = np.linspace(0, 2*np.pi, 100) # 100 numbers, equally distributed
#
fig = plt.figure()
ax = fig.add_subplot(1,1,1) # make a blank plotting area
print('ax.lines before plot:\n', ax.lines) # empty
line1, = ax.plot(x, np.sin(x), label='1st plot') # add Line2D in ax.lines
print('ax.lines after 1st plot:\n', ax.lines)
line2, = ax.plot(x, np.sin(x+np.pi/8), label='2nd plot') # add another Line2D
print('ax.lines after 2nd plot:\n', ax.lines)
ax.legend()
print('line1:', line1)
print('line2:', line2)
```
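Related to the container rules listed above, here is a minimal sketch that checks the Figure/Axes/Axis relationships in code (an illustrative addition, assumes matplotlib):
```
# Minimal sketch: checking the container hierarchy rules.
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
print(fig.axes)                  # a list of Axes -> a Figure can hold many Axes
print(ax.figure is fig)          # True -> an Axes belongs to exactly one Figure
print(ax.xaxis.axes is ax)       # True -> an XAxis belongs to exactly one Axes
print(ax.xaxis.figure is fig)    # True -> ... and therefore to a single Figure
```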
## FIGURE CONTAINER
Important:
* Attributes with a plural name are lists and those with a singular name represent a single object.
* Figure attributes can be changed into Axes or Axis attributes with Transforms
Figure attributes & description:
* fig.axes // A list of Axes instances (includes Subplot)
* fig.patch // The Rectangle background
* fig.images // A list of FigureImages patches - useful for raw pixel display
* fig.legends // A list of Figure Legend instances (different from Axes.legends)
* fig.lines // A list of Figure Line2D instances (rarely used, see Axes.lines)
* fig.patches // A list of Figure patches (rarely used, see Axes.patches)
* fig.texts // A list of Figure Text instances
Legend
* we have ax.legend and fig.legend
* ax.legend only collects labels from Artists belonging to ax
* fig.legend gathers labels from all Axes under fig, e.g. useful for a large number of plots with the same elements
```
"""
Example 3: Combining legends from different sources
"""
x = np.linspace(0, 2*np.pi, 100)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x, np.sin(x), label='sin(x)')
ax1 = ax.twinx() # Create a twin Axes sharing the xaxis, i.e. a second y axis on the right side
ax1.plot(x, 2*np.cos(x), c='C1', label='2*cos(x)')
# cf. 'CN' notation
# https://matplotlib.org/tutorials/colors/colors.html#cn-color-selection
# combined ax.legends
handler, label = ax.get_legend_handles_labels()
handler1, label1 = ax1.get_legend_handles_labels()
ax.legend(handler+handler1, label+label1, loc='upper center', title='ax.legend')
# Legend made by ax1.legend remains
# easy way with fig.legend and all handlers
fig.legend(loc='upper right', bbox_to_anchor=(1,1),
bbox_transform=ax.transAxes, title='fig.legend\nax.transAxes')
plt.show();
"""
Example 3b: Using ax.twinx() to create second y axis with different scale
"""
import numpy as np
import matplotlib.pyplot as plt
# Create some mock data
t = np.arange(0.01, 10.0, 0.01)
data1 = np.exp(t)
data2 = np.sin(2 * np.pi * t)
fig, ax1 = plt.subplots()
color = 'tab:red'
ax1.set_xlabel('time (s)')
ax1.set_ylabel('exp', color=color)
ax1.plot(t, data1, color=color)
ax1.tick_params(axis='y', labelcolor=color)
ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis
color = 'tab:blue'
ax2.set_ylabel('sin', color=color) # we already handled the x-label with ax1
ax2.plot(t, data2, color=color)
ax2.tick_params(axis='y', labelcolor=color)
fig.tight_layout() # otherwise the right y-label is slightly clipped
plt.show()
```
## AXES CONTAINER
The matplotlib.axes.Axes is the center of the matplotlib universe
Has the following objects:
* XAXIS
* YAXIS
* Ticks container
How does it work?
* Frequently-used commands such as ax.plot and ax.scatter are called "helper methods"
* helper methods add the corresponding Artists to the appropriate containers and do other miscellaneous jobs.
* e.g. ax.plot and ax.scatter add Line2D and PathCollection objects to the corresponding lists.
Reusing a plotted object is not recommended
* helper methods do many things other than creating an Artist
ax.set_*** methods
* Used to modify attributes and values of Axis and Tick instances
* static - changes made with them are not updated automatically when something else changes.
* i.e. if you set them for one plot and then add another plot to the same Axes, the old settings remain unless you change them again
Ticker.
* the Ticker module (formatter and locator) automatically updates ticks for each new plot
* ax.xaxis.get_major_formatter()
* ax.xaxis.get_major_locator()
* Tick formatters: https://matplotlib.org/gallery/ticks_and_spines/tick-formatters.html
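A minimal sketch of using these getters to inspect the current formatter and locator (illustrative, assumes matplotlib):
```
# Minimal sketch: inspecting the default major formatter/locator of an Axes.
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])
print(ax.xaxis.get_major_formatter())   # typically a ScalarFormatter
print(ax.xaxis.get_major_locator())     # typically an AutoLocator
```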
## TICK CONTAINER
contains a short line for the tick itself and a text for the tick label.
```
"""
Example 4: Using Ticker for customized ticks & labels
"""
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker # this is required to use `Ticker`
x = np.linspace(0, 2*np.pi, 100)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
line1, = ax.plot(x, np.sin(x), label='') # X range: 0 to 2pi
ax.set_xticks([0, 0.5*np.pi, np.pi, 1.5*np.pi, 2*np.pi])
line2, = ax.plot(1.5*x, np.sin(x), label='') # X range: 0 to 3pi
# locate ticks at every 0.5*pi
ax.xaxis.set_major_locator(ticker.MultipleLocator(0.5*np.pi)) # locate ticks at every 0.5*pi
# custom tick labels
@ticker.FuncFormatter # FuncFormatter can be used as a decorator
def major_formatter_radian(x, pos):
return '{}$\pi$'.format(x/np.pi) # probably not the best way to show radian tick labels
ax.xaxis.set_major_formatter(major_formatter_radian)
plt.show();
"""
Example 5: Tick formatters:
https://matplotlib.org/gallery/ticks_and_spines/tick-formatters.html
"""
# Setup a plot such that only the bottom spine is shown
def setup(ax):
ax.spines['right'].set_color('none')
ax.spines['left'].set_color('none')
ax.yaxis.set_major_locator(ticker.NullLocator())
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.tick_params(which='major', width=1.00, length=5)
ax.tick_params(which='minor', width=0.75, length=2.5, labelsize=10)
ax.set_xlim(0, 5)
ax.set_ylim(0, 1)
ax.patch.set_alpha(0.0)
fig = plt.figure(figsize=(8, 6))
n = 7
# Null formatter
ax = fig.add_subplot(n, 1, 1)
setup(ax)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1.00))
ax.xaxis.set_minor_locator(ticker.MultipleLocator(0.25))
ax.xaxis.set_major_formatter(ticker.NullFormatter())
ax.xaxis.set_minor_formatter(ticker.NullFormatter())
ax.text(0.0, 0.1, "NullFormatter()", fontsize=16, transform=ax.transAxes)
# Fixed formatter
ax = fig.add_subplot(n, 1, 2)
setup(ax)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1.0))
ax.xaxis.set_minor_locator(ticker.MultipleLocator(0.25))
majors = ["", "0", "1", "2", "3", "4", "5"]
ax.xaxis.set_major_formatter(ticker.FixedFormatter(majors))
minors = [""] + ["%.2f" % (x-int(x)) if (x-int(x))
else "" for x in np.arange(0, 5, 0.25)]
ax.xaxis.set_minor_formatter(ticker.FixedFormatter(minors))
ax.text(0.0, 0.1, "FixedFormatter(['', '0', '1', ...])",
fontsize=15, transform=ax.transAxes)
# FuncFormatter can be used as a decorator
@ticker.FuncFormatter
def major_formatter(x, pos):
return "[%.2f]" % x
ax = fig.add_subplot(n, 1, 3)
setup(ax)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1.00))
ax.xaxis.set_minor_locator(ticker.MultipleLocator(0.25))
ax.xaxis.set_major_formatter(major_formatter)
ax.text(0.0, 0.1, 'FuncFormatter(lambda x, pos: "[%.2f]" % x)',
fontsize=15, transform=ax.transAxes)
# FormatStr formatter
ax = fig.add_subplot(n, 1, 4)
setup(ax)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1.00))
ax.xaxis.set_minor_locator(ticker.MultipleLocator(0.25))
ax.xaxis.set_major_formatter(ticker.FormatStrFormatter(">%d<"))
ax.text(0.0, 0.1, "FormatStrFormatter('>%d<')",
fontsize=15, transform=ax.transAxes)
# Scalar formatter
ax = fig.add_subplot(n, 1, 5)
setup(ax)
ax.xaxis.set_major_locator(ticker.AutoLocator())
ax.xaxis.set_minor_locator(ticker.AutoMinorLocator())
ax.xaxis.set_major_formatter(ticker.ScalarFormatter(useMathText=True))
ax.text(0.0, 0.1, "ScalarFormatter()", fontsize=15, transform=ax.transAxes)
# StrMethod formatter
ax = fig.add_subplot(n, 1, 6)
setup(ax)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1.00))
ax.xaxis.set_minor_locator(ticker.MultipleLocator(0.25))
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter("{x}"))
ax.text(0.0, 0.1, "StrMethodFormatter('{x}')",
fontsize=15, transform=ax.transAxes)
# Percent formatter
ax = fig.add_subplot(n, 1, 7)
setup(ax)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1.00))
ax.xaxis.set_minor_locator(ticker.MultipleLocator(0.25))
ax.xaxis.set_major_formatter(ticker.PercentFormatter(xmax=5))
ax.text(0.0, 0.1, "PercentFormatter(xmax=5)",
fontsize=15, transform=ax.transAxes)
# Push the top of the top axes outside the figure because we only show the
# bottom spine.
fig.subplots_adjust(left=0.05, right=0.95, bottom=0.05, top=1.05)
plt.show()
"""
Example 6; Tick Locators
"""
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
# Setup a plot such that only the bottom spine is shown
def setup(ax):
ax.spines['right'].set_color('none')
ax.spines['left'].set_color('none')
ax.yaxis.set_major_locator(ticker.NullLocator())
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.tick_params(which='major', width=1.00)
ax.tick_params(which='major', length=5)
ax.tick_params(which='minor', width=0.75)
ax.tick_params(which='minor', length=2.5)
ax.set_xlim(0, 5)
ax.set_ylim(0, 1)
ax.patch.set_alpha(0.0)
plt.figure(figsize=(8, 6))
n = 8
# Null Locator
ax = plt.subplot(n, 1, 1)
setup(ax)
ax.xaxis.set_major_locator(ticker.NullLocator())
ax.xaxis.set_minor_locator(ticker.NullLocator())
ax.text(0.0, 0.1, "NullLocator()", fontsize=14, transform=ax.transAxes)
# Multiple Locator
ax = plt.subplot(n, 1, 2)
setup(ax)
ax.xaxis.set_major_locator(ticker.MultipleLocator(0.5))
ax.xaxis.set_minor_locator(ticker.MultipleLocator(0.1))
ax.text(0.0, 0.1, "MultipleLocator(0.5)", fontsize=14,
transform=ax.transAxes)
# Fixed Locator
ax = plt.subplot(n, 1, 3)
setup(ax)
majors = [0, 1, 5]
ax.xaxis.set_major_locator(ticker.FixedLocator(majors))
minors = np.linspace(0, 1, 11)[1:-1]
ax.xaxis.set_minor_locator(ticker.FixedLocator(minors))
ax.text(0.0, 0.1, "FixedLocator([0, 1, 5])", fontsize=14,
transform=ax.transAxes)
# Linear Locator
ax = plt.subplot(n, 1, 4)
setup(ax)
ax.xaxis.set_major_locator(ticker.LinearLocator(3))
ax.xaxis.set_minor_locator(ticker.LinearLocator(31))
ax.text(0.0, 0.1, "LinearLocator(numticks=3)",
fontsize=14, transform=ax.transAxes)
# Index Locator
ax = plt.subplot(n, 1, 5)
setup(ax)
ax.plot(range(0, 5), [0]*5, color='white')
ax.xaxis.set_major_locator(ticker.IndexLocator(base=.5, offset=.25))
ax.text(0.0, 0.1, "IndexLocator(base=0.5, offset=0.25)",
fontsize=14, transform=ax.transAxes)
# Auto Locator
ax = plt.subplot(n, 1, 6)
setup(ax)
ax.xaxis.set_major_locator(ticker.AutoLocator())
ax.xaxis.set_minor_locator(ticker.AutoMinorLocator())
ax.text(0.0, 0.1, "AutoLocator()", fontsize=14, transform=ax.transAxes)
# MaxN Locator
ax = plt.subplot(n, 1, 7)
setup(ax)
ax.xaxis.set_major_locator(ticker.MaxNLocator(4))
ax.xaxis.set_minor_locator(ticker.MaxNLocator(40))
ax.text(0.0, 0.1, "MaxNLocator(n=4)", fontsize=14, transform=ax.transAxes)
# Log Locator
ax = plt.subplot(n, 1, 8)
setup(ax)
ax.set_xlim(10**3, 10**10)
ax.set_xscale('log')
ax.xaxis.set_major_locator(ticker.LogLocator(base=10.0, numticks=15))
ax.text(0.0, 0.1, "LogLocator(base=10, numticks=15)",
fontsize=15, transform=ax.transAxes)
# Push the top of the top axes outside the figure because we only show the
# bottom spine.
plt.subplots_adjust(left=0.05, right=0.95, bottom=0.05, top=1.05)
plt.show()
```
# Bring your own data to create a music genre model for AWS DeepComposer
---
This notebook is for the <b>Bring your own data to create a music genre model for AWS DeepComposer</b> blog and is associated with the <b> AWS DeepComposer: Train it Again Maestro </b> web series on the <b>A Cloud Guru</b> platform.
This covers preparing your data to train a custom music genre model for AWS DeepComposer.
---
```
# Create the environment
!conda update --all --y
!pip install numpy==1.16.4
!pip install pretty_midi
!pip install pypianoroll
# IMPORTS
import os
import numpy as np
from numpy import save
import pypianoroll
from pypianoroll import Multitrack, Track
from utils import display_utils
import matplotlib.pyplot as plt
%matplotlib inline
root_dir = './2Experiments'
# Directory to save checkpoints
model_dir = os.path.join(root_dir,'2Reggae') # JSP: 229, Bach: 19199
# Directory to save pianorolls during training
train_dir = os.path.join(model_dir, 'train')
# Location of the original MIDI files used for training; place your MIDI files here
reggae_midi_location = './reggae_midi/'
# Directory to save eval data
dataset_eval_dir = './dataset/'
```
# Prepare Training Data (MIDI files -----> .npy)
---
This section of code demonstrates the process of converting MIDI files to the format needed for training, which is a .npy file. The final shape of the .npy file should be (x, 32, 128, 4), which represents (number of samples, number of time steps per sample, pitch range, instruments).
---
<img src="images/training-image.png" alt="multitrack object" width="600">
```
#helper function that stores the reshaped arrays, per instrument
def store_track(track, collection):
"""
Pull out the 4 selected instrument types based on program number
The program number represents the unique identifier for the instrument (ie. track.program)
https://en.wikipedia.org/wiki/General_MIDI
"""
instrument1_program_numbers = [1,2,3,4,5,6,7,8] #Piano
instrument2_program_numbers = [17,18,19,20,21,22,23,24] #Organ
instrument3_program_numbers = [33,34,35,36,37,38,39,40] #Bass
instrument4_program_numbers = [25,26,27,28,29,30,31,32] #Guitar
if isinstance (collection, dict):
if track.program in instrument1_program_numbers:
collection['Piano'].append(track)
elif track.program in instrument2_program_numbers:
collection['Organ'].append(track)
elif track.program in instrument3_program_numbers:
collection['Bass'].append(track)
elif track.program in instrument4_program_numbers:
collection['Guitar'].append(track)
else:
print("Skipping this instrument------------------->", track.name)
else: #collection will hold chosen tracks
if track.program in instrument1_program_numbers:
collection.append(track)
elif track.program in instrument2_program_numbers:
collection.append(track)
elif track.program in instrument3_program_numbers:
collection.append(track)
elif track.program in instrument4_program_numbers:
collection.append(track)
else:
print("Skipping this instrument------------------->", track.name)
return collection
#helper function that returns the pianorolls merged to 4 tracks for 4 chosen instruments
def get_merged(music_tracks, filename):
chosen_tracks = []
#choose the tracks from the Multitrack object
for index, track in enumerate(music_tracks.tracks):
chosen_tracks = store_track(track, chosen_tracks)
#dictionary to hold reshaped pianorolls for 4 chosen instruments
reshaped_piano_roll_dict = {'Piano': [], 'Organ': [], 'Bass': [], 'Guitar': []}
#loop thru chosen tracks
for index, track in enumerate(chosen_tracks):
fig, ax = track.plot()
plt.show()
try:
#reshape pianoroll to 2 bar (i.e. 32 time step) chunks
track.pianoroll = track.pianoroll.reshape( -1, 32, 128)
#store reshaped pianoroll per instrument
reshaped_piano_roll_dict = store_track(track, reshaped_piano_roll_dict)
except Exception as e:
print("ERROR!!!!!----> Skipping track # ", index, " with error ", e)
#will hold all merged instrument tracks
merge_piano_roll_list = []
for instrument in reshaped_piano_roll_dict:
try:
merged_pianorolls = np.empty(shape=(0,32,128))
#concatenate/stack all tracks for a single instrument
if len(reshaped_piano_roll_dict[instrument]) > 0:
if reshaped_piano_roll_dict[instrument]:
merged_pianorolls = np.stack([track.pianoroll for track in reshaped_piano_roll_dict[instrument]], -1)
merged_pianorolls = merged_pianorolls[:, :, :, 0]
merged_piano_rolls = np.any(merged_pianorolls, axis=0)
merge_piano_roll_list.append(merged_piano_rolls)
except Exception as e:
print("ERROR!!!!!----> Cannot concatenate/merge track for instrument", instrument, " with error ", e)
continue;
merge_piano_roll_list = np.stack([track for track in merge_piano_roll_list], -1)
return merge_piano_roll_list.reshape(-1,32,128,4)
```
<img src="images/multi_track_object.png" alt="multitrack object" width="600">
<img src="images/track_object.png" alt="track object" width="600">
```
#holds final reshaped tracks that will be saved to training .npy file
track_list = np.empty(shape=(0,32,128,4))
#init with beat resolution of 4
music_tracks = pypianoroll.Multitrack(beat_resolution=4)
#loop through all the .mid files
for filename in os.listdir(reggae_midi_location):
print("Starting to process filename---->", reggae_midi_location + filename)
if filename.endswith(".mid"):
try:
#Load MIDI file using parse_midi
#returns Multi-Track object containing Track objects
music_tracks.parse_midi(reggae_midi_location + filename)
#add padding to avoid reshape errors
#pad the pianorolls with zeros making the length a multiple of 32
music_tracks.pad_to_multiple(32)
music_tracks.pad_to_same()
#merge pianoroll objects by instrument
merged_tracks_to_add_to_training_file = get_merged(music_tracks, filename)
#concatenate merged pianoroll objects to final training data track list
track_list = np.concatenate((merged_tracks_to_add_to_training_file, track_list))
print("Successfully processed filename---->", reggae_midi_location + filename)
except Exception as e:
print("**********ERROR**************It's possible that not all 4 instruments exist in this track; at least one is 0")
print("Skipping file---->", filename, e)
print(e)
# binarize data
track_list[track_list == 0] = -1
track_list[track_list >= 0] = 1
#split the data into training and evaluation datasets
training_data, eval_data = np.split(track_list, 2)
#save training data
save(train_dir + '/reggae-train.npy', np.array(training_data))
#save evaluation data
save(dataset_eval_dir + '/eval.npy', np.array(eval_data))
```
# Review Training Data
```
#double check the shape on training data, should be (x, 32, 128, 4), where x represents the amount of records
training_data = np.load(train_dir + '/reggae-train.npy')
print("Testing the training shape: ", training_data.shape)
#view sample of data that will be fed to model, four graphs == four tracks
display_utils.show_pianoroll(training_data)
```
<a href="https://colab.research.google.com/github/dcshapiro/AI-Feynman/blob/master/AI_Feynman_cleared_output.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# AI Feynman 2.0: Learning Regression Equations From Data
### Clone repository and install dependencies
```
!git clone https://github.com/SJ001/AI-Feynman.git
```
Look at what we downloaded
```
!ls /content/AI-Feynman
# %pycat AI-Feynman/requirements.txt if you need to fix the dependencies
```
Fix broken requirements file (may not be needed if later versions fix this).
```
%%writefile AI-Feynman/requirements.txt
torch>=1.4.0
matplotlib
sympy==1.4
pandas
scipy
sortedcontainers
```
Install dependencies not already installed in Google Colab
```
!pip install -r AI-Feynman/requirements.txt
```
Check that fortran is installed
```
!gfortran --version
```
Check the OS version
```
!lsb_release -a
```
Install the csh shell
```
!sudo apt-get install csh
```
Set loose permissions to avoid some reported file permissions issues
```
!chmod +777 /content/AI-Feynman/Code/*
```
### Compile the fortran code
Look at the code directory
```
!ls -l /content/AI-Feynman/Code
```
Compile .f files into .x files
```
!cd /content/AI-Feynman/Code/ && ./compile.sh
```
### Run the first example from the AI-Feynman repository
Change working directory to the Code directory
```
import os
os.chdir("/content/AI-Feynman/Code/")
print(os.getcwd())
!pwd
%%writefile ai_feynman_magic.py
from S_run_aifeynman import run_aifeynman
# Run example 1 as the regression dataset
run_aifeynman("/content/AI-Feynman/example_data/","example1.txt",30,"14ops.txt", polyfit_deg=3, NN_epochs=400)
```
Look at the first line of the example 1 file
```
!head -n 1 /content/AI-Feynman/example_data/example1.txt
# Example 1 has data generated from an equation, where the last column is the regression target, and the rest of the columns are the input data
# The following example shows the relationship between the first line of the file example1.txt and the formula used to make the data
x=[1.6821347439986711,1.1786188905177983,4.749225735259924,1.3238356535004034,3.462199507094163]
x0,x1,x2,x3=x[0],x[1],x[2],x[3]
(x0**2 - 2*x0*x1 + x1**2 + x2**2 - 2*x2*x3 + x3**2)**0.5
```
Run the code. It takes a long time, so go get some coffee.
```
!cd /content/AI-Feynman/Code/ && python3 ai_feynman_magic.py
```
### Assess the results
```
!cat results.dat
```
We found a candidate with an excellent fit, let's see what we got
```
!ls -l /content/AI-Feynman/Code/results/
!ls -l /content/AI-Feynman/Code/results/NN_trained_models/models
!cat /content/AI-Feynman/Code/results/solution_example1.txt
```
Note in the cell above that the solution with the lowest loss is the formula this data was generated from
### Try our own dataset generation and equation learning
Until now we were not storing the results in Google Drive. We might want to keep the data in Drive so that the results don't disappear when this Colab instance gets shut down.
```
from google.colab import drive
drive.mount('/content/gdrive', force_remount=True)
```
Make a directory in the mounted Google Drive where we will do our work
```
!mkdir -p /content/gdrive/My\ Drive/Lemay.ai_research/AI-Feynman
```
Copy over the stuff we did so far, and from now on we work out of Google Drive
```
!cp -r /content/AI-Feynman /content/gdrive/My\ Drive/Lemay.ai_research/
```
The code below generates our regression example dataset
We generate points for 4 columns, where x1 is a copy of x0 and x3 is a copy of x2
The last column is Y
```
import os
import random
os.chdir("/content/gdrive/My Drive/Lemay.ai_research/AI-Feynman/example_data")
def getY(x01,x23):
y = -0.5*x01+0.5*x23+3
return y
def getRow():
[x0,x2]=[random.random() for x in range(2)]
x1=x0
x3=x2
y=getY(x1,x3)
return str(x0)+" "+str(x1)+" "+str(x2)+" "+str(x3)+" "+str(y)+"\n"
with open("duplicateVarsExample.txt", "w") as f:
for _ in range(10000):
f.write(getRow())
f.close()
# switch back to the code directory
os.chdir("/content/gdrive/My Drive/Lemay.ai_research/AI-Feynman/Code")
```
Let's look at our data
```
!head -n 20 ../example_data/duplicateVarsExample.txt
```
Let's also plot the data for x01 and x23 against Y
```
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
plt.style.use('seaborn-whitegrid')
import numpy as np
df=pd.read_csv("../example_data/duplicateVarsExample.txt",sep=" ",header=None)
df.plot.scatter(x=0, y=4)
df.plot.scatter(x=2, y=4)
!pwd
```
Let's write out the runner file for this experiment
```
%%writefile ai_feynman_duplicate_variables.py
from S_run_aifeynman import run_aifeynman
run_aifeynman("/content/gdrive/My Drive/Lemay.ai_research/AI-Feynman/example_data/","duplicateVarsExample.txt",30,"14ops.txt", polyfit_deg=3, NN_epochs=400)
```
Don't forget to loosen the file permissions
```
!chmod 777 /content/gdrive/My\ Drive/Lemay.ai_research/AI-Feynman/Code/*
!chmod +x /content/gdrive/My\ Drive/Lemay.ai_research/AI-Feynman/Code/*.scr
```
Now we run the file, and go get more coffee, because this is not going to be fast...
```
!python3 ai_feynman_duplicate_variables.py
```
Initial models quickly mapped to x0 and x2 (the system realized x1 and x3 are duplicates and so not needed)
Later on the system found 3.000000000000+log(sqrt(exp((x2-x1)))) which is a bit crazy but looks like a plane
We can see on Wolfram alpha that an equivalent form of this equation is:
(x2 - x1)/2 + 3.000000000000
which is what we used to generate the dataset!
Link: https://www.wolframalpha.com/input/?i=3.000000000000%2Blog%28sqrt%28exp%28%28x2-x1%29%29%29%29
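As a quick sanity check (an illustrative addition), the equivalence can also be verified symbolically, e.g. with sympy:
```
# Minimal sketch: verify that log(sqrt(exp(x2 - x1))) simplifies to (x2 - x1)/2.
import sympy
x1, x2 = sympy.symbols('x1 x2', real=True)
expr = sympy.log(sympy.sqrt(sympy.exp(x2 - x1)))
print(sympy.simplify(expr))   # expected: -x1/2 + x2/2
```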
```
!ls -l /content/gdrive/My\ Drive/Lemay.ai_research/AI-Feynman/Code/results/
!cat /content/gdrive/My\ Drive/Lemay.ai_research/AI-Feynman/Code/results/solution_duplicateVarsExample.txt
```
The solver settled on *log(sqrt(exp(-x1 + x3))) + 3.0* which we know is correct
Now, that was a bit of a softball problem as it has an exact solution. Let's now add noise to the dataset and see how the library holds up
### Let's add a small amount of noise to every variable and see the fit quality
We do the same thing as before, but now we add or subtract noise to x0,x1,x2,x3 after generating y
```
import os
import random
import numpy as np
os.chdir("/content/gdrive/My Drive/Lemay.ai_research/AI-Feynman/example_data")
def getY(x01,x23):
y = -0.5*x01+0.5*x23+3
return y
def getRow():
x=[random.random() for x in range(4)]
x[1]=x[0]
x[3]=x[2]
y=getY(x[1],x[3])
mu=0
sigma=0.05
noise=np.random.normal(mu, sigma, 4)
x=x+noise
return str(x[0])+" "+str(x[1])+" "+str(x[2])+" "+str(x[3])+" "+str(y)+"\n"
with open("duplicateVarsWithNoise100k.txt", "w") as f:
for _ in range(100000):
f.write(getRow())
f.close()
# switch back to the code directory
os.chdir("/content/gdrive/My Drive/Lemay.ai_research/AI-Feynman/Code")
```
Let's have a look at the data
```
!head -n 20 ../example_data/duplicateVarsWithNoise100k.txt
```
Now let's plot the data
```
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
plt.style.use('seaborn-whitegrid')
import numpy as np
df=pd.read_csv("../example_data/duplicateVarsWithNoise100k.txt",sep=" ",header=None)
df.plot.scatter(x=0, y=4)
df.plot.scatter(x=1, y=4)
df.plot.scatter(x=2, y=4)
df.plot.scatter(x=3, y=4)
%%writefile ai_feynman_duplicateVarsWithNoise.py
from S_run_aifeynman import run_aifeynman
run_aifeynman("/content/gdrive/My Drive/Lemay.ai_research/AI-Feynman/example_data/","duplicateVarsWithNoise100k.txt",30,"14ops.txt", polyfit_deg=3, NN_epochs=600)
!chmod +777 /content/gdrive/My\ Drive/Lemay.ai_research/AI-Feynman/Code/*
!chmod +777 /content/gdrive/My\ Drive/Lemay.ai_research/AI-Feynman/*
# switch back to the code directory
os.chdir("/content/gdrive/My Drive/Lemay.ai_research/AI-Feynman/Code/")
!pwd
!chmod +x /content/gdrive/My\ Drive/Lemay.ai_research/AI-Feynman/Code/*.scr
!ls -l *.scr
print(os.getcwd())
!sudo python3 ai_feynman_duplicateVarsWithNoise.py
%%writefile ai_feynman_duplicateVarsWithNoise3.py
from S_run_aifeynman import run_aifeynman
run_aifeynman("/content/gdrive/My Drive/Lemay.ai_research/AI-Feynman/example_data/","duplicateVarsWithNoise.txt",30,"19ops.txt", polyfit_deg=3, NN_epochs=1000)
print(os.getcwd())
!sudo python3 ai_feynman_duplicateVarsWithNoise3.py
```
### No duplicate columns but same noise
```
import os
import random
import numpy as np
os.chdir("/content/gdrive/My Drive/Lemay.ai_research/AI-Feynman/example_data")
def getY(x01,x23):
y = -0.5*x01+0.5*x23+3
return y
def getRow():
x=[0 for x in range(4)]
x[1]=random.random()
x[3]=random.random()
y=getY(x[1],x[3])
mu=0
sigma=0.05
noise=np.random.normal(mu, sigma, 4)
x=x+noise
return str(x[1])+" "+str(x[3])+" "+str(y)+"\n"
with open("varsWithNoise.txt", "w") as f:
for _ in range(100000):
f.write(getRow())
f.close()
# switch back to the code directory
os.chdir("/content/gdrive/My Drive/Lemay.ai_research/AI-Feynman/Code")
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
plt.style.use('seaborn-whitegrid')
import numpy as np
df=pd.read_csv("../example_data/varsWithNoise.txt",sep=" ",header=None)
df.plot.scatter(x=0, y=2)
df.plot.scatter(x=1, y=2)
%%writefile ai_feynman_varsWithNoise.py
from S_run_aifeynman import run_aifeynman
run_aifeynman("/content/gdrive/My Drive/Lemay.ai_research/AI-Feynman/example_data/","varsWithNoise.txt",30,"14ops.txt", polyfit_deg=3, NN_epochs=1000)
!sudo python3 ai_feynman_varsWithNoise.py
```
---
```
import healpy as hp
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import astropy.units as u
```
# White noise NET in Radio-astronomy and Cosmology
> Create a white noise map and compare with power spectrum expected from the NET
- categories: [cosmology, python, healpy]
Noise-Equivalent Temperature (NET) is a measure of the sensitivity of a detector. In cosmology it is often quoted
in $\mu K \sqrt{s}$, i.e. it is the sensitivity per unit time; dividing it by the square root of the integration time
gives the actual standard deviation of the white noise of the instrument.
For example, let's consider a white noise NET of $200 \mu K \sqrt{s}$:
it means that if you integrate for 100 seconds on each pixel, the standard deviation will be $20 \mu K$.
```
net = 200 * u.Unit("uK * sqrt(s)")
net
integration_time_per_pixel = 100 * u.s
standard_deviation = net / np.sqrt(integration_time_per_pixel)
```
## Create a white noise map
Now that we have an estimate of the standard deviation per pixel, we can use `numpy` to create a map of gaussian white noise.
```
nside = 128
npix = hp.nside2npix(nside)
m = np.random.normal(scale = standard_deviation.value, size=npix) * standard_deviation.unit
hp.mollview(m, unit=m.unit, title="White noise map")
```
## Power spectrum
Finally we can compute the angular power spectrum with `anafast`, i.e. the power as a function of the angular scales, from low $\ell$ values for large angular scales, to high $\ell$ values for small angular scales.
At low $\ell$ there is not much statistics and the power spectrum estimate is poor, but if we exclude the lowest $\ell$ values, we can get an estimate of the white noise $C_\ell$ coefficients. We can then compare with the theoretical power computed as:
$$ C_\ell = \Omega_{pix}\sigma^2 $$
where $\Omega_{pix}$ is the pixel area in steradians (square radians) and $\sigma^2$ is the variance of the white noise.
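Since the full sky ($4\pi$ steradians) is divided into $N_{pix}$ equal-area pixels, the pixel area is $\Omega_{pix} = 4\pi / N_{pix}$, so the expected white noise level can equivalently be written as:
$$ C_\ell = \frac{4\pi}{N_{pix}}\,\sigma^2 $$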
```
cl = hp.anafast(m)
cl[100:].mean()
pixel_area = hp.nside2pixarea(nside)
white_noise_cl = standard_deviation.value**2 * pixel_area
white_noise_cl
plt.figure(figsize=(6,4))
plt.loglog(cl, label="Map power spectrum", alpha=.7)
plt.hlines(white_noise_cl, 0, len(cl), label="White noise level")
plt.xlabel("$\ell$")
plt.ylabel("$C_\ell [\mu K ^ 2]$");
```
## Masking
In case we are removing some pixels from a map, for example to mask out a strong signal (e.g. the Milky Way), our estimate of the power spectrum on the partial sky is lower.
However we assume that the properties of the noise will be the same also in the masked region.
At first order, for simple masks, we can just correct for the amplitude by dividing the power spectrum by the sky fraction.
```
m.value[len(m)//2-30000:len(m)//2+30000] = hp.UNSEEN
hp.mollview(m, unit=m.unit, title="White noise map")
cl_masked = hp.anafast(m)
plt.figure(figsize=(6,4))
plt.loglog(cl, label="Map power spectrum", alpha=.7)
plt.loglog(cl_masked, label="Map power spectrum (Masked)", alpha=.7)
plt.hlines(white_noise_cl, 0, len(cl), label="White noise level")
plt.xlabel("$\ell$")
plt.ylabel("$C_\ell [\mu K ^ 2]$")
plt.legend();
sky_fraction = hp.mask_good(m).sum() / len(m)
print(sky_fraction)
plt.figure(figsize=(6,4))
plt.loglog(cl, label="Map power spectrum", alpha=.7)
plt.loglog(cl_masked / sky_fraction, label="Map power spectrum (Masked) - corrected", alpha=.7)
plt.hlines(white_noise_cl, 0, len(cl), label="White noise level")
plt.xlabel("$\ell$")
plt.ylabel("$C_\ell [\mu K ^ 2]$")
plt.legend();
```
## Import Libraries and Read Dataset
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
#machine learning libraries
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split, GridSearchCV #cross validation and split dataset
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.neighbors import KNeighborsClassifier, NeighborhoodComponentsAnalysis, LocalOutlierFactor
from sklearn.decomposition import PCA
#warning library
import warnings
warnings.filterwarnings("ignore")
#read dataset into data variable
data = pd.read_csv("data.csv")
```
## Descriptive Statistic
```
# preview dataset
data.head()
# Dataset dimensions - (rows, columns)
data.shape
# Features data-type
data.info()
# Statistical summary
data.describe().T
# Count of null values
data.isnull().sum()
```
## Observations:
1. There are a total of 569 records and 33 features in the dataset.
2. Each feature can be of integer, float or object datatype.
3. There are zero NaN values in the dataset.
4. In the diagnosis column, M represents malignant and B represents benign tumors.
# Data Preprocessing
```
#drop unnecessary columns
data.drop(['Unnamed: 32','id'], inplace=True, axis=1) # axis=1 -> column drop
#rename diagnosis as target feature
data = data.rename(columns={"diagnosis":"target"})
#visualize target feature count
sns.countplot(data["target"])
print(data.target.value_counts()) # B 357, M 212
#set the value of string target feature to integer
data["target"] = [1 if i.strip() == 'M' else 0 for i in data.target]
```
# Exploratory Data Analysis
```
#Correlation Matrix
corr_matrix = data.corr()
sns.clustermap(corr_matrix, annot=True, fmt = ".2f")
plt.title("Correlation Matrix")
plt.show()
#Correlation Matrix with values bigger than 0.75
threshold = 0.75
filtre = np.abs(corr_matrix["target"]) > threshold
corr_features = corr_matrix.columns[filtre].tolist()
sns.clustermap(data[corr_features].corr(), annot=True, fmt = ".2f")
plt.title("Correlation Between features with correlation threshold 0.75")
plt.show()
""" there are correlated features """
#pair plot
sns.pairplot(data[corr_features], diag_kind="kde",markers="+",hue="target")
plt.show()
"""there are skewness"""
```
# Outlier Detection
```
#outlier values
y = data["target"]
X = data.drop(["target"], axis=1) #axis=1 -> cloumn drop
columns = X.columns.tolist()
#LOF<1 inlier values
#LOF>1 outlier values
clf = LocalOutlierFactor()
y_pred = clf.fit_predict(X) # -1 for outliers, 1 for inliers
X_score = clf.negative_outlier_factor_
outlier_score = pd.DataFrame()
outlier_score["score"] = X_score
#scatter plot to detect outlier values
threshold_outlier = -2
filtre_outlier = outlier_score["score"] < threshold_outlier
outlier_index = outlier_score[filtre_outlier].index.tolist()
#visualize the outlier values
plt.figure()
plt.scatter(X.iloc[outlier_index,0], X.iloc[outlier_index,1], color="b", s = 50, label = "Outlier Points")
plt.scatter(X.iloc[:,0], X.iloc[:,1], color="k", s = 3, label = "Data Points")
# circles are drawn around the points to show outliers
radius = (X_score.max() - X_score)/(X_score.max() - X_score.min())
outlier_score["radius"] = radius
plt.scatter(X.iloc[:,0], X.iloc[:,1], s = 1000*radius, edgecolors="r", facecolors = "none" , label = "Outlier Scores")
plt.legend()
plt.show()
#drop outliers
X = X.drop(outlier_index)
y = y.drop(outlier_index).tolist()
```
# Train Test Split
```
#train test split
test_size = 0.2 # 20% test, 80% train
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = test_size, random_state = 42 ) #random_state is set to 42 to ensure consistency
```
# Standardization
```
"""Since there is a big difference between the values in the dataset, we should standardize."""
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
X_train_df = pd.DataFrame(X_train, columns = columns)
X_train_df["target"] = y_train
data_melted = pd.melt(X_train_df, id_vars="target", var_name="features", value_name="value")
plt.figure()
sns.boxplot(x = "features", y = "value", hue = "target", data = data_melted)
plt.xticks(rotation = 90)
plt.show()
```
# Basic KNN Model
```
#model creation
knn = KNeighborsClassifier()
knn.fit(X_train, y_train)
#test the resulting model with X_test
y_predict = knn.predict(X_test)
cm = confusion_matrix(y_test, y_predict) #confusion matrix
acc = accuracy_score(y_test, y_predict) #accuracy value
print("Confusion Matrix:", cm)
print("Basic Knn Accuracy Score:", acc)
def KNN_Best_Params(x_train, x_test, Y_train, Y_test):
#find best k and weight value
k_range = list(range(1,31))
weight_options = ["uniform", "distance"]
print(" ")
param_grid = dict(n_neighbors = k_range, weights = weight_options)
knn = KNeighborsClassifier()
grid = GridSearchCV(knn, param_grid=param_grid, cv = 10, scoring = "accuracy")
grid.fit(x_train, Y_train)
print("Best training score: {} with parameters: {} ".format(grid.best_score_, grid.best_params_))
knn = KNeighborsClassifier(**grid.best_params_)
knn.fit(x_train, Y_train)
y_pred_test = knn.predict(x_test)
y_pred_train = knn.predict(x_train)
cm_test = confusion_matrix(Y_test, y_pred_test )
cm_train = confusion_matrix(Y_train, y_pred_train)
acc_test = accuracy_score(Y_test, y_pred_test)
acc_train = accuracy_score(Y_train, y_pred_train)
print("Test Score: {} , Train Score: {}".format( acc_test,acc_train ))
print()
print("CM test:", cm_test)
print("CM ttrain", cm_train)
return grid
grid = KNN_Best_Params(X_train, X_test, y_train, y_test)
```
# LFD Homework 2
Second week homework for the "Learning from Data" course offered by [Caltech on edX](https://courses.edx.org/courses/course-v1:CaltechX+CS1156x+3T2017). This notebook only contains the simulation / exploration problems.
```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib notebook
```
## P: Hoeffding Inequality
Task: Run a computer simulation that independently flips $1000$ virtual fair coins $10$ times each. Record the following fractions of heads over these $10$ flips:
* $\nu_1$ fraction of heads for the first coin $c_1$
* $\nu_{\mathrm{rand}}$ fraction of heads for a randomly chosen coin $c_{\mathrm{rand}}$
* $\nu_\min$ fraction of heads for the coin with the minimal frequency of heads $c_\min$
This can be implemented as:
```
def hfd_experiment(number_coins=1000, runs=10):
''' Creates one experiment of [number_coins] simultaneously flipped fair coins for [runs].'''
coins = (np.random.rand(number_coins, runs) > .5).astype(float)
coins_sum = coins.sum(axis=1, keepdims=True)
nu_1 = coins_sum[0,0] / runs
nu_rand = coins_sum[np.random.randint(number_coins),0] / runs
nu_min = coins_sum[coins_sum.argmin(),0] / runs
return nu_1, nu_rand, nu_min
```
Now the task is to repeat this experiment $100000$ times in order to get a simulated distribution of $\nu_1, \nu_{\mathrm{rand}}$ and $\nu_\min$ respectively.
```
full_distribution = np.array([hfd_experiment() for i in range(100000)])
```
The distributions look as follows:
```
fig, ax = plt.subplots(1, 3, sharex=True, figsize=(9.75, 4.5))
fig.suptitle('Distributions for $\\nu_1, \\nu_{\mathrm{rand}}$ and $\\nu_\\min$')
sns.distplot(full_distribution[:,0], bins=15, kde_kws={'bw':.075}, ax=ax[0], axlabel='$\\nu_1$')
sns.distplot(full_distribution[:,1], bins=15, kde_kws={'bw':.075}, ax=ax[1], axlabel='$\\nu_{\mathrm{rand}}$')
sns.distplot(full_distribution[:,2], bins=3, kde_kws={'bw':.075}, ax=ax[2], axlabel='$\\nu_\\min$')
for x in ax: x.set_xlim(0., 1.)
```
The average value of the different $\nu$ is:
```
nu_bar = full_distribution.mean(axis=0)
print('nu_1_bar\t= {:.3f}\nnu_rand_bar\t= {:.3f}\nnu_min_bar\t= {:.3f}'.format(*nu_bar))
```
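For reference, the Hoeffding inequality that these simulated distributions can be compared against bounds the deviation between the sample frequency $\nu$ and the true probability $\mu = 0.5$ for a *single, pre-specified* coin flipped $N = 10$ times:
$$
\mathbb{P}\left[\,|\nu - \mu| > \epsilon\,\right] \le 2 e^{-2\epsilon^2 N}
$$
Only $\nu_1$ and $\nu_{\mathrm{rand}}$ refer to a coin chosen independently of the flip outcomes, so they obey this bound; $\nu_\min$ is selected *after* inspecting all $1000$ coins, so the bound does not apply to it.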
## P: Linear Regression
In this problem we use the same target function $f: \mathcal{X} \mapsto \mathcal{Y}$ as in the last homework (Perceptron). Therefore we can re-use its code (with a few cosmetic changes):
```
def generate_data(N = 10, f=None):
''' Generates linear target function f and labeled, linearly separable test data generated by f.'''
if f is None:
# choose two random points p1, p2 and compute a vector p orthogonal to their difference
p1, p2 = (np.random.rand(2,2) - 0.5) * 2.
p = np.array([1, -(p2 - p1)[0]/(p2 - p1)[1]])
p /= np.linalg.norm(p)
f = lambda x: np.sign((x - p1) @ p).reshape(-1,1)
f.db = lambda x: (p2[1] - p1[1])/(p2[0] - p1[0]) * (x - p1[0]) + p1[1]
    # generate uniformly distributed data points and apply classifier to label them
X = (np.random.rand(N, 2) - 0.5) * 2
Y = f(X)
return X,Y,f
def plot_data(X, Y, db = None):
''' Plots two dimensional, linearly separable data from the interval [-1, 1] and the optional decision boundary db.'''
plt.figure()
pos_examples = X[(Y == 1).reshape(-1)]
neg_examples = X[(Y == -1).reshape(-1)]
neu_examples = X[(Y == 0).reshape(-1)]
# plot the three groups of examples
plt.scatter(pos_examples[:,0], pos_examples[:,1], color='steelblue', marker='+')
plt.scatter(neg_examples[:,0], neg_examples[:,1], color='red', marker='o')
plt.scatter(neu_examples[:,0], neu_examples[:,1], color='black', marker='o')
# plot the decision boundary if provided
if db is not None:
x = np.arange(-1., 1., 0.01)
plt.plot(x, db(x), c='red', ls='dashed', lw=1.)
plt.grid(alpha=.3)
plt.gca().set_xlim(-1, 1)
plt.gca().set_ylim(-1, 1)
```
Note that we provide the option to pass in the target function $f$. This will come in handy later. Now we are ready to generate some linearly separable test data for classification with linear regression or perceptrons. For instance, with $N = 100$ our functions generates:
```
X, Y, f = generate_data(100)
plot_data(X, Y, f.db)
```
### Linear Model
The next step is to learn a linear model in the generated data. As demonstrated in the lecture, the weights of the linear model can be computed using the normal equation of the least squares method for linear regression as
$$
\mathbf{w} = \left(\mathbf{X}^\intercal\mathbf{X}\right)^{-1} \mathbf{X}^\intercal \mathbf{y}
$$
The selected hypothesis function $g: \mathcal{X} \mapsto \mathcal{Y}$ then performs binary classification on a single example $\mathbf{x} \in \mathcal{X}$ as $g(\mathbf{x}) = \mathrm{sign}{(\mathbf{w}^\intercal\mathbf{x})}$, which can be computed for all training examples in a single run (batch computation) as
$$
g(\mathbf{X}) = \mathrm{sign}\left(\mathbf{X}\mathbf{w}\right)
$$
```
class LRBClassifier:
''' Simple linear regression based binary classifier.'''
def __init__(self, X, Y, add_intercept=True):
N, d = X.shape
if add_intercept:
X = np.concatenate((np.ones((N, 1)), X), axis=1)
self.w = np.linalg.pinv(X.T @ X) @ (X.T) @ Y
self.E_in = np.sum(self(X, add_intercept=False) != Y)/N
def __call__(self, X, add_intercept=True):
N, d = X.shape
if add_intercept:
X = np.concatenate((np.ones((N, 1)), X), axis=1)
return np.sign(X @ self.w).reshape(-1,1)
```
Let's test this new linear classifier with some generated data and plot what it is doing. Thereby it is particularly interesting to plot decision boundaries for $f$ and $g$. The symbol for classes is based on the actual $f$, but we highlight the decision boundary of $g$ (in red) to quickly spot classification errors. The decision boundary of $f$ is also added for reference (in gray).
```
X, Y, f = generate_data(100)
g = LRBClassifier(X, Y)
# we compute the decision boundary through finding an orthogonal vector to w (10e-5 term avoids division by zero)
g.db = lambda x: (- g.w[1] * x - g.w[0])/ (g.w[2] + 10e-5)
plot_data(X, Y, g.db)
# also, we can plot the actual decision boundary of f
x = np.arange(-1., 1., 0.01)
plt.plot(x, f.db(x), c='gray', ls='dashed', lw=2., alpha=.3)
print('E_in = {:.3f}'.format(g.E_in))
```
Now we can prepare the experiment as required by problems 5 and 6:
```
def experiment_lrbc(N=100, N_test=1000, repeat=1000, f=None, gen=generate_data):
data = []
for i in range(repeat):
# generate test data and function
X, Y, f = gen(N, f)
# train a linear regression based classifier and obtain its E_in
g = LRBClassifier(X, Y)
E_in = g.E_in
# obtain the out of sample error rate using the generated function f
X_test, Y_test, _ = gen(N_test, f)
E_out = np.sum(Y_test != g(X_test)) / float(N_test)
data.append((E_in, E_out))
if i%100 == 0:
print('experiment (run={}): E_in={:.3f} / E_out={:.3f}'.format(i, E_in, E_out))
results = np.array(data)
print('\nAverage Errors\n--------------\nE_in\t= {:.3f}\nE_out\t= {:.3f}'.format(*np.mean(results, axis=0)))
return results
```
And finally run the first experiments:
```
results = experiment_lrbc()
```
### Linear Model and Perceptron
Here we have to train a `LRBClassifier` and use its weights as initialization to the perceptron learning algorithm `pla`. We can recycle the perceptron learning algorithm developed in the last homework:
```
class PerceptronClassifier:
'''Perceptron binary classifier.'''
def __init__(self, X, Y, add_intercept=True, init_w=None, max_iter=10e5):
N, d = X.shape
if add_intercept:
X = np.concatenate((np.ones((N, 1)), X), axis=1)
self.w = np.zeros((d+1, 1)) if init_w is None else init_w
# perceptron learning algorithm
X_prime, Y_prime = X.copy(), Y.copy()
self.iterations = 0
while X_prime.shape[0] > 0:
# randomly select misclassified point
i = np.random.randint(X_prime.shape[0])
x_i, y_i = X_prime[i], Y_prime[i]
# update hypothesis
self.w += y_i * x_i.reshape(-1,1)
# identify misclassified points
idx = (self(X, add_intercept=False) != Y).reshape(-1)
X_prime, Y_prime = X[idx], Y[idx]
self.iterations += 1
# divergence circuit breaker
if self.iterations >= max_iter:
raise StopIteration('maximum of {} iterations reached'.format(max_iter))
def __call__(self, X, add_intercept=True):
N = X.shape[0]
if add_intercept:
X = np.concatenate((np.ones((N, 1)), X), axis=1)
return np.sign(X @ self.w).reshape(-1,1)
```
The experiment requires us to set $N = 10$ and find the weights using linear regression, then run the `pla` on these weights to find a $g$ without in-sample classification errors. In particular, we are interested in the number of iterations it takes the `pla` to converge:
```
def experiment_pbc_w_init(N=10, repeat=1000, f=None):
data = []
for i in range(repeat):
# generate test data and function
X, Y, f = generate_data(N, f)
# train a linear regression based classifier on the data, then use this as
# initializing weights for the pla
g_lrbc = LRBClassifier(X, Y)
g_pbc = PerceptronClassifier(X, Y, init_w=g_lrbc.w)
# obtain the number of iterations until convergence for the pla
iterations = g_pbc.iterations
data.append(iterations)
if i%100 == 0:
print('experiment (run={}): iterations={}'.format(i, iterations))
results = np.array(data)
print('\nAverage Iterations\n------------------\nIterations\t= {}'.format(np.mean(results)))
return results
```
Finally, we can run the experiment for problem 7:
```
results = experiment_pbc_w_init()
```
## P: Nonlinear Transformation
These problems again refer to the linear regression based binary classifier. The nonlinear target function is defined as
$$
f(\mathbf{x}) = \mathrm{sign}\left(x_1^2 + x_2^2 - 0.6\right)
$$
Our data-generation implementation from above was already prepared for passing in a target function. So all we need to do now is implement $f$ and provide a mechanism to add some random label noise to the data:
```
f = lambda X: np.sign(np.sum(X**2, axis=1, keepdims=True) - .6)
def generate_noisy_data(N = 10, f=None, noise_ratio=.1):
'''Generates linear target function f and labeled, linearly separable test data with added noise.'''
X, Y, f = generate_data(N, f)
# add some random noise
n_noise = np.round(noise_ratio * N).astype(int)
idx = np.random.randint(N, size=n_noise)
Y[idx] = -Y[idx]
return X, Y, f
```
Let's plot this to get a feeling of what's going on:
```
X, Y, _ = generate_noisy_data(100, f)
plot_data(X, Y)
```
Now the first task in problem 8 is to apply linear regression without any nonlinear transformation of features on a training set of size $N=1000$ and determine its in-sample error $E_{\mathrm{in}}$. Here we can re-use the experiment from above:
```
results = experiment_lrbc(N=1000, f=f, gen=generate_noisy_data)
```
### Applying the Nonlinear Transformation
Next we transform $\mathbf{X}$ by applying the nonlinear transformation $\Phi: \mathcal{X} \mapsto \mathcal{Z}$ which adds nonlinear features as $\Phi(\mathbf{x}) = (1, x_1, x_2, x_1 x_2, x_1^2, x_2^2)$. In the implementation we will not add the intercept feature $x_0$ as this happens already in the linear regression classifier implementation.
```
def phi(X):
X1, X2 = np.hsplit(X, 2)
Z = np.concatenate((X, X1 * X2, X1**2, X2**2), axis=1)
return Z
```
Armed with this nonlinear transformation, we can finally prepare the last experiments:
```
def experiment_lrbc_transform(N=100, N_test=1000, repeat=1000, f=None, gen=generate_data):
data = []
w_acc = np.zeros((6,1))
for i in range(repeat):
# generate test data and function
X, Y, f = gen(N, f)
Z = phi(X)
# train a linear regression based classifier and obtain its E_in
g = LRBClassifier(Z, Y)
w_acc += g.w
E_in = g.E_in
# obtain the out of sample error rate using the generated function f
X_test, Y_test, _ = gen(N_test, f)
Z_test = phi(X_test)
E_out = np.sum(Y_test != g(Z_test)) / float(N_test)
data.append((E_in, E_out))
if i%100 == 0:
print('experiment (run={}): E_in={:.3f} / E_out={:.3f}'.format(i, E_in, E_out))
results = np.array(data)
print('\nAverage Errors\n--------------\nE_in\t= {:.3f}\nE_out\t= {:.3f}'.format(*np.mean(results, axis=0)))
return results, w_acc / repeat
```
Note that the arithmetic average over the weight vectors gives us a vector capturing the general direction of the weight vectors. This experiment yields:
```
results, w = experiment_lrbc_transform(N=1000, f=f, gen=generate_noisy_data)
print('\n--------------\n{:.3f} + {:.3f}x_1 + {:.3f}x_2 + {:.3f}x_1x_2 + {:.3f}x_1^2 + {:.3f}x_2^2'.format(*w.flat))
```
<h1> Create TensorFlow model </h1>
This notebook illustrates:
<ol>
<li> Creating a model using the high-level Estimator API
</ol>
```
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.1
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
```
<h2> Create TensorFlow model using TensorFlow's Estimator API </h2>
<p>
First, write an input_fn to read the data.
<p>
## Lab Task 1
Verify that the headers match your CSV output
```
import shutil
import numpy as np
import tensorflow as tf
# Determine CSV, label, and key columns
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks,key'.split(',')
LABEL_COLUMN = 'weight_pounds'
KEY_COLUMN = 'key'
# Set default values for each CSV column
DEFAULTS = [[0.0], ['null'], [0.0], ['null'], [0.0], ['nokey']]
TRAIN_STEPS = 1000
```
## Lab Task 2
Fill out the details of the input function below
```
# Create an input function reading a file using the Dataset API
# Then provide the results to the Estimator API
def read_dataset(filename_pattern, mode, batch_size = 512):
def _input_fn():
def decode_csv(line_of_text):
# TODO #1: Use tf.decode_csv to parse the provided line
# TODO #2: Make a Python dict. The keys are the column names, the values are from the parsed data
# TODO #3: Return a tuple of features, label where features is a Python dict and label a float
return features, label
# TODO #4: Use tf.gfile.Glob to create list of files that match pattern
file_list = None
# Create dataset from file list
dataset = (tf.compat.v1.data.TextLineDataset(file_list) # Read text file
.map(decode_csv)) # Transform each elem by applying decode_csv fn
# TODO #5: In training mode, shuffle the dataset and repeat indefinitely
# (Look at the API for tf.data.dataset shuffle)
# The mode input variable will be tf.estimator.ModeKeys.TRAIN if in training mode
# Tell the dataset to provide data in batches of batch_size
# This will now return batches of features, label
return dataset
return _input_fn
```
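One way the TODOs above might be filled in is sketched below. This is not the official lab solution; it simply uses the `CSV_COLUMNS`, `LABEL_COLUMN` and `DEFAULTS` defined earlier, together with the TF 1.x compatibility calls hinted at in the comments.
```
# Sketch: a completed input function (assumes the CSV lines match CSV_COLUMNS/DEFAULTS above)
def read_dataset_sketch(filename_pattern, mode, batch_size = 512):
  def _input_fn():
    def decode_csv(line_of_text):
      # Parse one CSV line into a list of tensors using the column defaults
      columns = tf.compat.v1.decode_csv(line_of_text, record_defaults = DEFAULTS)
      # Build a dict of column name -> tensor, then pop the label off
      features = dict(zip(CSV_COLUMNS, columns))
      label = features.pop(LABEL_COLUMN)
      return features, label

    # Expand the file pattern into a list of matching files
    file_list = tf.compat.v1.gfile.Glob(filename_pattern)

    # Read the text files and parse each line
    dataset = (tf.compat.v1.data.TextLineDataset(file_list)
               .map(decode_csv))

    if mode == tf.estimator.ModeKeys.TRAIN:
      # Shuffle and repeat indefinitely while training
      dataset = dataset.shuffle(buffer_size = 10 * batch_size).repeat(None)
    else:
      # Single pass over the evaluation data
      dataset = dataset.repeat(1)

    # Provide data in batches of batch_size
    return dataset.batch(batch_size)
  return _input_fn
```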
## Lab Task 3
Use the TensorFlow feature column API to define appropriate feature columns for your raw features that come from the CSV.
<b> Bonus: </b> Separate your columns into wide columns (categorical, discrete, etc.) and deep columns (numeric, embedding, etc.)
```
# Define feature columns
```
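A possible set of feature columns is sketched below. The vocabulary lists for `is_male` and `plurality` are assumptions about the values present in the CSV files; adjust them to match your actual data.
```
# Sketch: wide (categorical/one-hot) and deep (numeric) feature columns
def get_wide_deep():
  # Categorical raw features -> one-hot encoded "wide" columns
  is_male = tf.feature_column.categorical_column_with_vocabulary_list(
      'is_male', ['True', 'False', 'Unknown'])
  plurality = tf.feature_column.categorical_column_with_vocabulary_list(
      'plurality', ['Single(1)', 'Twins(2)', 'Triplets(3)',
                    'Quadruplets(4)', 'Quintuplets(5)', 'Multiple(2+)'])
  wide = [tf.feature_column.indicator_column(is_male),
          tf.feature_column.indicator_column(plurality)]

  # Numeric raw features -> "deep" columns
  deep = [tf.feature_column.numeric_column('mother_age'),
          tf.feature_column.numeric_column('gestation_weeks')]
  return wide, deep
```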
## Lab Task 4
To predict with the TensorFlow model, we also need a serving input function (we'll use this in a later lab). We will want all the inputs from our user.
Verify and change the column names and types here as appropriate. These should match your CSV_COLUMNS
```
# Create serving input function to be able to serve predictions later using provided inputs
def serving_input_fn():
feature_placeholders = {
'is_male': tf.compat.v1.placeholder(tf.string, [None]),
'mother_age': tf.compat.v1.placeholder(tf.float32, [None]),
'plurality': tf.compat.v1.placeholder(tf.string, [None]),
'gestation_weeks': tf.compat.v1.placeholder(tf.float32, [None])
}
features = {
key: tf.expand_dims(tensor, -1)
for key, tensor in feature_placeholders.items()
}
return tf.compat.v1.estimator.export.ServingInputReceiver(features, feature_placeholders)
```
## Lab Task 5
Complete the TODOs in this code:
```
# Create estimator to train and evaluate
def train_and_evaluate(output_dir):
EVAL_INTERVAL = 300
run_config = tf.estimator.RunConfig(save_checkpoints_secs = EVAL_INTERVAL,
keep_checkpoint_max = 3)
# TODO #1: Create your estimator
estimator = None
train_spec = tf.estimator.TrainSpec(
# TODO #2: Call read_dataset passing in the training CSV file and the appropriate mode
input_fn = None,
max_steps = TRAIN_STEPS)
exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)
eval_spec = tf.estimator.EvalSpec(
# TODO #3: Call read_dataset passing in the evaluation CSV file and the appropriate mode
input_fn = None,
steps = None,
start_delay_secs = 60, # start evaluating after N seconds
throttle_secs = EVAL_INTERVAL, # evaluate every N seconds
exporters = exporter)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
```
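For reference, the TODOs above could be completed roughly as follows (a sketch, not the official solution). The `train.csv*` / `eval.csv*` file patterns and the hidden-unit sizes are placeholder choices, and the sketch assumes `read_dataset` and the feature columns from the earlier sketches have been filled in.
```
# Sketch: train_and_evaluate with the TODOs filled in
def train_and_evaluate_sketch(output_dir):
  EVAL_INTERVAL = 300
  run_config = tf.estimator.RunConfig(save_checkpoints_secs = EVAL_INTERVAL,
                                      keep_checkpoint_max = 3)
  # TODO #1: wide & deep regressor over the feature columns
  wide, deep = get_wide_deep()
  estimator = tf.estimator.DNNLinearCombinedRegressor(
      model_dir = output_dir,
      linear_feature_columns = wide,
      dnn_feature_columns = deep,
      dnn_hidden_units = [64, 32],
      config = run_config)
  # TODO #2: training input function reads the training CSVs in TRAIN mode
  train_spec = tf.estimator.TrainSpec(
      input_fn = read_dataset('train.csv*', mode = tf.estimator.ModeKeys.TRAIN),
      max_steps = TRAIN_STEPS)
  exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)
  # TODO #3: evaluation input function reads the eval CSVs in EVAL mode
  eval_spec = tf.estimator.EvalSpec(
      input_fn = read_dataset('eval.csv*', mode = tf.estimator.ModeKeys.EVAL),
      steps = None,
      start_delay_secs = 60,
      throttle_secs = EVAL_INTERVAL,
      exporters = exporter)
  tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
```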
Finally, train!
```
# Run the model
shutil.rmtree('babyweight_trained', ignore_errors = True) # start fresh each time
tf.compat.v1.summary.FileWriterCache.clear()
train_and_evaluate('babyweight_trained')
```
The exporter directory contains the final model.
Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/projects/modelingsteps/ModelingSteps_1through4.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Modeling Steps 1 - 4
**By Neuromatch Academy**
__Content creators:__ Marius 't Hart, Megan Peters, Paul Schrater, Gunnar Blohm
__Content reviewers:__ Eric DeWitt, Tara van Viegen, Marius Pachitariu
__Production editors:__ Ella Batty
**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**
<p align='center'><img src='https://github.com/NeuromatchAcademy/widgets/blob/master/sponsors.png?raw=True'/></p>
**Note that this is the same as W1D2 Tutorial 1 - we provide it here as well for ease of access.**
---
# Tutorial objectives
Yesterday you gained some understanding of what models can buy us in neuroscience. But how do you build a model? Today, we will try to clarify the process of computational modeling, by thinking through the logic of modeling based on your project ideas.
We assume that you have a general idea of a project in mind, i.e. a preliminary question, and/or phenomenon you would like to understand. You should have started developing a project idea yesterday with [this brainstorming demo](https://youtu.be/H6rSlZzlrgQ). Maybe you have a goal in mind. We will now work through the 4 first steps of modeling ([Blohm et al., 2019](https://doi.org/10.1523/ENEURO.0352-19.2019)):
**Framing the question**
1. finding a phenomenon and a question to ask about it
2. understanding the state of the art
3. determining the basic ingredients
4. formulating specific, mathematically defined hypotheses
The remaining steps 5-10 will be covered in a second notebook that you can consult throughout the modeling process when you work on your projects.
**Importantly**, we will guide you through Steps 1-4 today. After you do more work on your projects, you will likely have to revisit some or all of these steps *before* you move on to the remaining steps of modeling.
**Note**: there will be no coding today. It's important that you think through the different steps of this how-to-model tutorial to maximize your chance of succeeding in your group projects. **Also**: "Models" here can be data analysis pipelines, not just computational models...
**Think! Sections**: All activities you should perform are labeled with **Think!**. These are discussion based exercises and can be found in the Table of Content on the left side of the notebook. Make sure you complete all within a section before moving on!
### Demos
We will demo the modeling process based on the train illusion. The introductory video explains the phenomenon. Then we will do a roleplay to showcase some common pitfalls, using a computational modeling project around the train illusion as the example. In addition to the computational model, we also provide a data neuroscience project example so you can appreciate the similarities and differences.
Enjoy!
```
# @title Video 1: Introduction to tutorial
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1Mf4y1b7xS", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="GyGNs1fLIYQ", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
# Setup
```
# Imports
import numpy as np
import matplotlib.pyplot as plt
# for random distributions:
from scipy.stats import norm, poisson
# for logistic regression:
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
# @title Plotting Functions
def rasterplot(spikes,movement,trial):
[movements, trials, neurons, timepoints] = np.shape(spikes)
trial_spikes = spikes[movement,trial,:,:]
trial_events = [((trial_spikes[x,:] > 0).nonzero()[0]-150)/100 for x in range(neurons)]
plt.figure()
dt=1/100
plt.eventplot(trial_events, linewidths=1);
plt.title('movement: %d - trial: %d'%(movement, trial))
plt.ylabel('neuron')
plt.xlabel('time [s]')
def plotCrossValAccuracies(accuracies):
f, ax = plt.subplots(figsize=(8, 3))
ax.boxplot(accuracies, vert=False, widths=.7)
ax.scatter(accuracies, np.ones(8))
ax.set(
xlabel="Accuracy",
yticks=[],
title=f"Average test accuracy: {accuracies.mean():.2%}"
)
ax.spines["left"].set_visible(False)
#@title Generate Data
def generateSpikeTrains():
gain = 2
neurons = 50
movements = [0,1,2]
repetitions = 800
np.random.seed(37)
# set up the basic parameters:
dt = 1/100
start, stop = -1.5, 1.5
t = np.arange(start, stop+dt, dt) # a time interval
Velocity_sigma = 0.5 # std dev of the velocity profile
Velocity_Profile = norm.pdf(t,0,Velocity_sigma)/norm.pdf(0,0,Velocity_sigma) # The Gaussian velocity profile, normalized to a peak of 1
# set up the neuron properties:
Gains = np.random.rand(neurons) * gain # random sensitivity between 0 and `gain`
FRs = (np.random.rand(neurons) * 60 ) - 10 # random base firing rate between -10 and 50
# output matrix will have this shape:
target_shape = [len(movements), repetitions, neurons, len(Velocity_Profile)]
# build matrix for spikes, first, they depend on the velocity profile:
Spikes = np.repeat(Velocity_Profile.reshape([1,1,1,len(Velocity_Profile)]),len(movements)*repetitions*neurons,axis=2).reshape(target_shape)
# multiplied by gains:
S_gains = np.repeat(np.repeat(Gains.reshape([1,1,neurons]), len(movements)*repetitions, axis=1).reshape(target_shape[:3]), len(Velocity_Profile)).reshape(target_shape)
Spikes = Spikes * S_gains
# and multiplied by the movement:
S_moves = np.repeat( np.array(movements).reshape([len(movements),1,1,1]), repetitions*neurons*len(Velocity_Profile), axis=3 ).reshape(target_shape)
Spikes = Spikes * S_moves
# on top of a baseline firing rate:
S_FR = np.repeat(np.repeat(FRs.reshape([1,1,neurons]), len(movements)*repetitions, axis=1).reshape(target_shape[:3]), len(Velocity_Profile)).reshape(target_shape)
Spikes = Spikes + S_FR
# can not run the poisson random number generator on input lower than 0:
Spikes = np.where(Spikes < 0, 0, Spikes)
# so far, these were expected firing rates per second, correct for dt:
Spikes = poisson.rvs(Spikes * dt)
return(Spikes)
def subsetPerception(spikes):
movements = [0,1,2]
split = 400
subset = 40
hwin = 3
[num_movements, repetitions, neurons, timepoints] = np.shape(spikes)
decision = np.zeros([num_movements, repetitions])
# ground truth for logistic regression:
y_train = np.repeat([0,1,1],split)
y_test = np.repeat([0,1,1],repetitions-split)
m_train = np.repeat(movements, split)
m_test = np.repeat(movements, split)
# reproduce the time points:
dt = 1/100
start, stop = -1.5, 1.5
t = np.arange(start, stop+dt, dt)
w_idx = list( (abs(t) < (hwin*dt)).nonzero()[0] )
w_0 = min(w_idx)
w_1 = max(w_idx)+1 # python...
# get the total spike counts from stationary and movement trials:
spikes_stat = np.sum( spikes[0,:,:,:], axis=2)
spikes_move = np.sum( spikes[1:,:,:,:], axis=3)
train_spikes_stat = spikes_stat[:split,:]
train_spikes_move = spikes_move[:,:split,:].reshape([-1,neurons])
test_spikes_stat = spikes_stat[split:,:]
test_spikes_move = spikes_move[:,split:,:].reshape([-1,neurons])
# data to use to predict y:
x_train = np.concatenate((train_spikes_stat, train_spikes_move))
x_test = np.concatenate(( test_spikes_stat, test_spikes_move))
# this line creates a logistics regression model object, and immediately fits it:
population_model = LogisticRegression(solver='liblinear', random_state=0).fit(x_train, y_train)
# solver, one of: 'liblinear', 'newton-cg', 'lbfgs', 'sag', and 'saga'
# some of those require certain other options
#print(population_model.coef_) # slope
#print(population_model.intercept_) # intercept
ground_truth = np.array(population_model.predict(x_test))
ground_truth = ground_truth.reshape([3,-1])
output = {}
output['perception'] = ground_truth
output['spikes'] = spikes[:,split:,:subset,:]
return(output)
def getData():
spikes = generateSpikeTrains()
dataset = subsetPerception(spikes=spikes)
return(dataset)
dataset = getData()
perception = dataset['perception']
spikes = dataset['spikes']
```
----
# Step 1: Finding a phenomenon and a question to ask about it
```
# @title Video 2: Asking a question
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1VK4y1M7dc", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="4Gl8X_y_uoA", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 1
from ipywidgets import widgets
from IPython.display import Markdown
markdown1 = '''
## Step 1
<br>
<font size='3pt'>
The train illusion occurs when sitting on a train and viewing another train outside the window. Suddenly, the other train *seems* to move, i.e. you experience visual motion of the other train relative to your train. But which train is actually moving?
Often people have the wrong percept. In particular, they think their own train might be moving when it's the other train that moves; or vice versa. The illusion is usually resolved once you gain vision of the surroundings that lets you disambiguate the relative motion; or if you experience strong vibrations indicating that it is indeed your own train that is in motion.
We asked the following (arbitrary) question for our demo project: "How do noisy vestibular estimates of motion lead to illusory percepts of self motion?"
</font>
'''
markdown2 = '''
## Step 1
<br>
<font size='3pt'>
The train illusion occurs when sitting on a train and viewing another train outside the window. Suddenly, the other train *seems* to move, i.e. you experience visual motion of the other train relative to your train. But which train is actually moving?
Often people mix this up. In particular, they think their own train might be moving when it's the other train that moves; or vice versa. The illusion is usually resolved once you gain vision of the surroundings that lets you disambiguate the relative motion; or if you experience strong vibrations indicating that it is indeed your own train that is in motion.
We assume that we have built the train illusion model (see the other example project colab). That model predicts that accumulated sensory evidence from vestibular signals determines the decision of whether self-motion is experienced or not. We now have vestibular neuron data (simulated in our case, but let's pretend) and would like to see if that prediction holds true.
The data contains *N* neurons and *M* trials for each of 3 motion conditions: no self-motion, slowly accelerating self-motion and faster accelerating self-motion. In our data,
*N* = 40 and *M* = 400.
**So we can ask the following question**: "Does accumulated vestibular neuron activity correlate with self-motion judgements?"
</font>
'''
out2 = widgets.Output()
with out2:
display(Markdown(markdown2))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
```
## Think! 1: Asking your own question
*Please discuss the following for about 25 min*
You should already have a project idea from your brainstorming yesterday. **Write down the phenomenon, question and goal(s) if you have them.**
As a reminder, here is what you should discuss and write down:
* What exact aspect of data needs modeling?
* Answer this question clearly and precisely!
Otherwise you will get lost (almost guaranteed)
* Write everything down!
* Also identify aspects of data that you do not want to address (yet)
* Define an evaluation method!
* How will you know your modeling is good?
* E.g. comparison to specific data (quantitative method of comparison?)
* For computational models: think of an experiment that could test your model
* You essentially want your model to interface with this experiment, i.e. you want to simulate this experiment
You can find interesting questions by looking for phenomena that differ from your expectations. In *what* way does it differ? *How* could that be explained (starting to think about mechanistic questions and structural hypotheses)? *Why* could it be the way it is? What experiment could you design to investigate this phenomenon? What kind of data would you need?
**Make sure to avoid the pitfalls!**
<details>
<summary>Click here for a recap on pitfalls</summary>
Question is too general
<ul>
<li>Remember: science advances one small step at a time. Get the small step right…</li>
</ul>
Precise aspect of phenomenon you want to model is unclear
<ul>
<li>You will fail to ask a meaningful question</li>
</ul>
You have already chosen a toolkit
<ul>
<li>This will prevent you from thinking deeply about the best way to answer your scientific question</li>
</ul>
You don’t have a clear goal
<ul>
<li>What do you want to get out of modeling?</li>
</ul>
You don’t have a potential experiment in mind
<ul>
<li>This will help concretize your objectives and think through the logic behind your goal</li>
</ul>
</details>
**Note**
The hardest part is Step 1. Once that is properly set up, all the other steps should be easier. **BUT**: often you think that Step 1 is done only to figure out in later steps (anywhere, really) that you were not as clear on your question and goal as you thought. Revisiting Step 1 is a frequent necessity. Don't feel bad about it. You can revisit Step 1 later; for now, let's move on to the next step.
----
# Step 2: Understanding the state of the art & background
Here you will do a literature review (**to be done AFTER this tutorial!**).
```
# @title Video 3: Literature Review & Background Knowledge
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1by4y1M7TZ", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="d8zriLaMc14", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 2
from ipywidgets import widgets
from IPython.display import Markdown
markdown1 = '''
## Step 2
<br>
<font size='3pt'>
You have learned all about the vestibular system in the Intro video. This is also where you would do a literature search to learn more about what's known about self-motion perception and vestibular signals. You would also want to examine any attempts to model self-motion, perceptual decision making and vestibular processing.</font>
'''
markdown21 = '''
## Step 2
<br>
<font size='3pt'>
While it seems a well-known fact that vestibular signals are noisy, we should check if we can also find this in the literature.
Let's also see what's in our data, there should be a 4d array called `spikes` that has spike counts (positive integers), a 2d array called `perception` with self-motion judgements (0=no motion or 1=motion). Let's see what this data looks like:
</font><br>
'''
markdown22 = '''
<br>
<font size='3pt'>
In the `spikes` array, we see our 3 acceleration conditions (first dimension), with 400 trials each (second dimensions) and simultaneous recordings from 40 neurons (third dimension), across 3 seconds in 10 ms bins (fourth dimension). The first two dimensions are also there in the `perception` array.
Perfect perception would have looked like [0, 1, 1]. The average judgements are far from correct (lots of self-motion illusions) but they do make some sense: it's closer to 0 in the no-motion condition and closer to 1 in both of the real-motion conditions.
The idea of our project is that the vestibular signals are noisy so that they might be mis-interpreted by the brain. Let's see if we can reproduce the stimuli from the data:
</font>
<br>
'''
markdown23 = '''
<br>
<font size='3pt'>
Blue is the no-motion condition, and produces flat average spike counts across the 3 s time interval. The orange and green lines do show a bell-shaped curve that corresponds to the acceleration profile. But there also seems to be considerable noise: exactly what we need. Let's see what the spike trains for a single trial look like:
</font>
<br>
'''
markdown24 = '''
<br>
<font size='3pt'>
You can change the trial number in the bit of code above to compare what the rasterplots look like in different trials. You'll notice that they all look kind of the same: the 3 conditions are very hard (impossible?) to distinguish by eye-balling.
Now that we have seen the data, let's see if we can extract self-motion judgements from the spike counts.
</font>
<br>
'''
display(Markdown(r""))
out2 = widgets.Output()
with out2:
display(Markdown(markdown21))
print(f'The shape of `spikes` is: {np.shape(spikes)}')
print(f'The shape of `perception` is: {np.shape(perception)}')
print(f'The mean of `perception` is: {np.mean(perception, axis=1)}')
display(Markdown(markdown22))
for move_no in range(3):
plt.plot(np.arange(-1.5,1.5+(1/100),(1/100)),np.mean(np.mean(spikes[move_no,:,:,:], axis=0), axis=0), label=['no motion', '$1 m/s^2$', '$2 m/s^2$'][move_no])
plt.xlabel('time [s]');
plt.ylabel('averaged spike counts');
plt.legend()
plt.show()
display(Markdown(markdown23))
for move in range(3):
rasterplot(spikes = spikes, movement = move, trial = 0)
plt.show()
display(Markdown(markdown24))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
```
Here you will do a literature review (**to be done AFTER this tutorial!**). For the projects, do not spend too much time on this. A thorough literature review could take weeks or months depending on your prior knowledge of the field...
The important thing for your project here is not to exhaustively survey the literature but rather to learn the process of modeling. 1-2 days of digging into the literature should be enough!
**Here is what you should get out of it**:
* Survey the literature
* What’s known?
* What has already been done?
* Previous models as a starting point?
* What hypotheses have been emitted in the field?
* Are there any alternative / complementary modeling approaches?
* What skill sets are required?
* Do I need learn something before I can start?
* Ensure that no important aspect is missed
* Potentially provides specific data sets / alternative modeling approaches for comparison
**Do this AFTER the tutorial**
----
# Step 3: Determining the basic ingredients
```
# @title Video 4: Determining basic ingredients
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1Mq4y1x77s", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="XpEj-p7JkFE", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 3
from ipywidgets import widgets
from IPython.display import Markdown, Math
markdown1 = r'''
## Step 3
<br>
<font size='3pt'>
We determined that we probably needed the following ingredients for our model:
* Vestibular input: *v(t)*
* Binary decision output: *d* - time dependent?
* Decision threshold: θ
* A filter (maybe running average?): *f*
* An integration mechanism to get from vestibular acceleration to sensed velocity: ∫
</font>
'''
markdown2 = '''
## Step 3
<br>
<font size='3pt'>
In order to address our question we need to design an appropriate computational data analysis pipeline. We did some brainstorming and think that we need to somehow extract the self-motion judgements from the spike counts of our neurons. Based on that, our algorithm needs to make a decision: was there self motion or not? This is a classical 2-choice classification problem. We will have to transform the raw spike data into the right input for the algorithm (spike pre-processing).
So we determined that we probably needed the following ingredients:
* spike trains *S* of 3-second trials (10ms spike bins)
* ground truth movement *m<sub>r</sub>* (real) and perceived movement *m<sub>p</sub>*
* some form of classifier *C* giving us a classification *c*
* spike pre-processing
</font>
'''
# No idea why this is necessary but math doesn't render properly without it
display(Markdown(r""))
out2 = widgets.Output()
with out2:
display(Markdown(markdown2))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
```
## Think! 3: Determine your basic ingredients
*Please discuss the following for about 25 min*
This will allow you to think deeper about what your modeling project will need. It's a crucial step before you can formulate hypotheses because you first need to understand what your modeling approach will need. There are 2 aspects you want to think about:
1. What parameters / variables are needed?
* Constants?
* Do they change over space, time, conditions…?
* What details can be omitted?
* Constraints, initial conditions?
* Model inputs / outputs?
2. Variables needed to describe the process to be modelled?
* Brainstorming!
* What can be observed / measured? latent variables?
* Where do these variables come from?
* Do any abstract concepts need to be instantiated as variables?
* E.g. value, utility, uncertainty, cost, salience, goals, strategy, plant, dynamics
* Instantiate them so that they relate to potential measurements!
This is a step where your prior knowledge and intuition is tested. You want to end up with an inventory of *specific* concepts and/or interactions that need to be instantiated.
**Make sure to avoid the pitfalls!**
<details>
<summary>Click here for a recap on pitfalls</summary>
I’m experienced, I don’t need to think about ingredients anymore
<ul>
<li>Or so you think…</li>
</ul>
I can’t think of any ingredients
<ul>
<li>Think about the potential experiment. What are your stimuli? What parameters? What would you control? What do you measure?</li>
</ul>
I have all inputs and outputs
<ul>
<li>Good! But what will link them? Thinking about that will start shaping your model and hypotheses</li>
</ul>
I can’t think of any links (= mechanisms)
<ul>
<li>You will acquire a library of potential mechanisms as you keep modeling and learning</li>
<li>But the literature will often give you hints through hypotheses</li>
<li>If you still can't think of links, then maybe you're missing ingredients?</li>
</ul>
</details>
----
# Step 4: Formulating specific, mathematically defined hypotheses
```
# @title Video 5: Formulating a hypothesis
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1fh411h7aX", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="nHXMSXLcd9A", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# @title Example projects step 4
from ipywidgets import widgets
from IPython.display import Markdown
# Not writing in latex because that didn't render in jupyterbook
markdown1 = r'''
## Step 4
<br>
<font size='3pt'>
Our main hypothesis is that the strength of the illusion has a linear relationship to the amplitude of vestibular noise.
Mathematically, this would write as
<div align="center">
<em>S</em> = <em>k</em> ⋅ <em>N</em>
</div>
where *S* is the illusion strength and *N* is the noise level, and *k* is a free parameter.
>we could simply use the frequency of occurrence across repetitions as the "strength of the illusion"
We would get the noise as the standard deviation of *v(t)*, i.e.
<div align="center">
<em>N</em> = <b>E</b>[<em>v(t)</em><sup>2</sup>],
</div>
where **E** stands for the expected value.
Do we need to take the average across time points?
> doesn't really matter because we have the generative process, so we can just use the σ that we define
</font>
'''
markdown2 = '''
## Step 4
<br>
<font size='3pt'>
We think that noise in the signal drives whether or not people perceive self motion. Maybe the brain uses the strongest signal at peak acceleration to decide on self motion, but we actually think it is better to accumulate evidence over some period of time. We want to test this. The noise idea also means that when the signal-to-noise ratio is higher, the brain does better, and this would be in the faster acceleration condition. We want to test this too.
We came up with the following hypotheses focussing on specific details of our overall research question:
* Hyp 1: Accumulated vestibular spike rates explain self-motion judgements better than average spike rates around peak acceleration.
* Hyp 2: Classification performance should be better for faster vs slower self-motion.
> There are many other hypotheses you could come up with, but for simplicity, let's go with those.
Mathematically, we can write our hypotheses as follows (using our above ingredients):
* Hyp 1: **E**(c<sub>accum</sub>) > **E**(c<sub>win</sub>)
* Hyp 2: **E**(c<sub>fast</sub>) > **E**(c<sub>slow</sub>)
Where **E** denotes taking the expected value (in this case the mean) of its argument: classification outcome in a given trial type.
</font>
'''
# No idea why this is necessary but math doesn't render properly without it
display(Markdown(r""))
out2 = widgets.Output()
with out2:
display(Markdown(markdown2))
out1 = widgets.Output()
with out1:
display(Markdown(markdown1))
out = widgets.Tab([out1, out2])
out.set_title(0, 'Computational Model')
out.set_title(1, 'Data Analysis')
display(out)
```
## Think! 4: Formulating your hypothesis
*Please discuss the following for about 25 min*
Once you have your question and goal lined up, have done a literature review (let's assume so for now), and have thought about the ingredients needed for your model, you're ready to start thinking about *specific* hypotheses.
Formulating hypotheses really consists of two consecutive steps:
1. You think about the hypotheses in words by relating ingredients identified in Step 3
* What is the model mechanism expected to do?
* How are different parameters expected to influence model results?
2. You then express these hypotheses in mathematical language by giving the ingredients identified in Step 3 specific variable names.
* Be explicit, e.g. $y(t)=f(x(t),k)$ but $z(t)$ doesn’t influence $y$
There are also "structural hypotheses" that make assumptions on what model components you hypothesize will be crucial to capture the phenomenon at hand.
**Important**: Formulating the hypotheses is the last step before starting to model. This step determines the model approach and ingredients. It provides a more detailed description of the question / goal from Step 1. The more precise the hypotheses, the easier the model will be to justify.
**Make sure to avoid the pitfalls!**
<details>
<summary>Click here for a recap on pitfalls</summary>
I don’t need hypotheses, I will just play around with the model
<ul>
<li>Hypotheses help determine and specify goals. You can (and should) still play…</li>
</ul>
My hypotheses don’t match my question (or vice versa)
<ul>
<li>This is a normal part of the process!</li>
<li>You need to loop back to Step 1 and revisit your question / phenomenon / goals</li>
</ul>
I can’t write down a math hypothesis
<ul>
<li>Often that means you lack ingredients and/or clarity on the hypothesis</li>
<li>OR: you have a “structural” hypothesis, i.e. you expect a certain model component to be crucial in explaining the phenomenon / answering the question</li>
</ul>
</details>
----
# Summary
In this tutorial, we worked through some steps of the process of modeling.
- We defined a phenomenon and formulated a question (step 1)
- We collected information about the state of the art on the topic (step 2)
- We determined the basic ingredients (step 3), and used these to formulate a specific mathematically defined hypothesis (step 4)
You are now in a position that you could start modeling without getting lost. But remember: you might have to work through steps 1-4 again after doing a literature review and/or if there were other pitfalls you identified along the way (which is totally normal).
----
# Next steps
In [a follow-up notebook](https://compneuro.neuromatch.io/projects/modelingsteps/ModelingSteps_5through10.html), we will continue with the steps 5-10 to guide you through the implementation and completion stages of the projects. You can also find this in the Modeling Steps section of the Project Booklet.
----
# Reading
Blohm G, Kording KP, Schrater PR (2020). _A How-to-Model Guide for Neuroscience_. eNeuro, 7(1) ENEURO.0352-19.2019. https://doi.org/10.1523/ENEURO.0352-19.2019
Kording KP, Blohm G, Schrater P, Kay K (2020). _Appreciating the variety of goals in computational neuroscience_. Neurons, Behavior, Data Analysis, and Theory 3(6). https://nbdt.scholasticahq.com/article/16723-appreciating-the-variety-of-goals-in-computational-neuroscience
Schrater PR, Peters MK, Kording KP, Blohm G (2019). _Modeling in Neuroscience as a Decision Process_. OSF pre-print. https://osf.io/w56vt/
# Artificial Intelligence Nanodegree
## Convolutional Neural Networks
---
In this notebook, we train a CNN to classify images from the CIFAR-10 database.
### 1. Load CIFAR-10 Database
```
import keras
from keras.datasets import cifar10
# load the pre-shuffled train and test data
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
```
### 2. Visualize the First 36 Training Images
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
fig = plt.figure(figsize=(20,5))
for i in range(36):
ax = fig.add_subplot(3, 12, i + 1, xticks=[], yticks=[])
ax.imshow(np.squeeze(x_train[i]))
```
### 3. Rescale the Images by Dividing Every Pixel in Every Image by 255
```
# rescale [0,255] --> [0,1]
x_train = x_train.astype('float32')/255
x_test = x_test.astype('float32')/255
```
### 4. Break Dataset into Training, Testing, and Validation Sets
```
from keras.utils import np_utils
# one-hot encode the labels
num_classes = len(np.unique(y_train))
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
# break training set into training and validation sets
(x_train, x_valid) = x_train[5000:], x_train[:5000]
(y_train, y_valid) = y_train[5000:], y_train[:5000]
# print shape of training set
print('x_train shape:', x_train.shape)
# print number of training, validation, and test images
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
print(x_valid.shape[0], 'validation samples')
```
### 5. Define the Model Architecture
```
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout, Activation
# My modified model
model = Sequential()
model.add(Conv2D(filters=32, kernel_size=4, padding='same', input_shape=x_train.shape[1:]))
model.add(Activation('relu'))
model.add(Conv2D(filters=64, kernel_size=4, padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.15))
model.add(Conv2D(filters=128, kernel_size=4, padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=4))
model.add(Dropout(0.25))
model.add(Conv2D(filters=256, kernel_size=4, padding='same'))
model.add(MaxPooling2D(pool_size=4))
model.add(Dropout(0.35))
# model.add(Conv2D(filters=512, kernel_size=4, padding='same'))
model.add(Dropout(0.45))
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.55))
model.add(Dense(num_classes))
model.add(Activation('softmax'))
# Original model
# model = Sequential()
# model.add(Conv2D(filters=16, kernel_size=2, padding='same', activation='relu',
# input_shape=(32, 32, 3)))
# model.add(MaxPooling2D(pool_size=2))
# model.add(Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'))
# model.add(MaxPooling2D(pool_size=2))
# model.add(Conv2D(filters=64, kernel_size=2, padding='same', activation='relu'))
# model.add(MaxPooling2D(pool_size=2))
# model.add(Dropout(0.3))
# model.add(Flatten())
# model.add(Dense(500, activation='relu'))
# model.add(Dropout(0.4))
# model.add(Dense(10, activation='softmax'))
model.summary()
```
### 6. Compile the Model
```
# compile the model
model.compile(loss='categorical_crossentropy', optimizer='rmsprop',
metrics=['accuracy'])
```
### 7. Train the Model
```
from keras.callbacks import ModelCheckpoint
# train the model
checkpointer = ModelCheckpoint(filepath='model.weights.best.hdf5', verbose=1,
save_best_only=True)
hist = model.fit(x_train, y_train, batch_size=32, epochs=100,
validation_data=(x_valid, y_valid), callbacks=[checkpointer],
verbose=2, shuffle=True)
```
### 8. Load the Model with the Best Validation Accuracy
```
# load the weights that yielded the best validation accuracy
model.load_weights('model.weights.best.hdf5')
```
### 9. Calculate Classification Accuracy on Test Set
```
# evaluate and print test accuracy
score = model.evaluate(x_test, y_test, verbose=0)
print('\n', 'Test accuracy:', score[1])
```
### 10. Visualize Some Predictions
This may give you some insight into why the network is misclassifying certain objects.
```
# get predictions on the test set
y_hat = model.predict(x_test)
# define text labels (source: https://www.cs.toronto.edu/~kriz/cifar.html)
cifar10_labels = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
# plot a random sample of test images, their predicted labels, and ground truth
fig = plt.figure(figsize=(20, 8))
for i, idx in enumerate(np.random.choice(x_test.shape[0], size=32, replace=False)):
ax = fig.add_subplot(4, 8, i + 1, xticks=[], yticks=[])
ax.imshow(np.squeeze(x_test[idx]))
pred_idx = np.argmax(y_hat[idx])
true_idx = np.argmax(y_test[idx])
ax.set_title("{} ({})".format(cifar10_labels[pred_idx], cifar10_labels[true_idx]),
color=("green" if pred_idx == true_idx else "red"))
```
```
from __future__ import division, print_function, absolute_import
```
# Introduction to Visualization:
Density Estimation and Data Exploration
========
##### Version 0.1
There are many flavors of data analysis that fall under the "visualization" umbrella in astronomy. Today, by way of example, we will focus on 2 basic problems.
***
By AA Miller
16 September 2017
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
## Problem 1) Density Estimation
Starting with 2MASS and SDSS and extending through LSST, we are firmly in an era where data and large statistical samples are cheap. With this explosion in data volume comes a problem: we do not know the underlying probability density function (PDF) of the random variables measured via our observations. Hence - density estimation: an attempt to recover the unknown PDF from observations. In some cases theory can guide us to a parametric form for the PDF, but more often than not such guidance is not available.
There is a common, simple, and very familiar tool for density estimation: histograms.
But there is also a problem:
HISTOGRAMS LIE!
We will "prove" this to be the case in a series of examples. For this exercise, we will load the famous Linnerud data set, which tested 20 middle aged men by measuring the number of chinups, situps, and jumps they could do in order to compare these numbers to their weight, pulse, and waist size. To load the data (just chinups for now) we will run the following:
from sklearn.datasets import load_linnerud
linnerud = load_linnerud()
chinups = linnerud.data[:,0]
```
from sklearn.datasets import load_linnerud
linnerud = load_linnerud()
chinups = linnerud.data[:,0]
```
**Problem 1a**
Plot the histogram for the number of chinups using the default settings in pyplot.
```
plt.hist( # complete
```
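One possible completion of the cell above (default `pyplot` settings, nothing tuned):
```
plt.hist(chinups)
plt.xlabel('chinups')
plt.ylabel('N')
```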
Already with this simple plot we see a problem - the choice of bin centers and number of bins suggest that there is a 0% probability that middle aged men can do 10 chinups. Intuitively this seems incorrect, so lets examine how the histogram changes if we change the number of bins or the bin centers.
**Problem 1b**
Using the same data make 2 new histograms: (i) one with 5 bins (`bins = 5`), and (ii) one with the bars centered on the left bin edges (`align = "left"`).
*Hint - if overplotting the results, you may find it helpful to use the `histtype = "step"` option*
```
plt.hist( # complete
# complete
```
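A possible completion, overplotting both variants as step histograms so they can be compared directly:
```
plt.hist(chinups, bins = 5, histtype = "step", lw = 3, label = "5 bins")
plt.hist(chinups, align = "left", histtype = "step", lw = 3, label = "left-aligned bins")
plt.legend()
```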
These small changes significantly change the output PDF. With fewer bins we get something closer to a continuous distribution, while shifting the bin centers reduces the probability to zero at 9 chinups.
What if we instead allow the bin width to vary and require the same number of points in each bin? You can determine the bin edges for bins with 5 sources using the following command:
bins = np.append(np.sort(chinups)[::5], np.max(chinups))
**Problem 1c**
Plot a histogram with variable width bins, each with the same number of points.
*Hint - setting `normed = True` will normalize the bin heights so that the PDF integrates to 1 (in newer matplotlib releases this keyword is called `density = True`).*
```
# complete
plt.hist(# complete
```
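A possible completion using the bin edges suggested above (note the use of `density = True`, the newer matplotlib name for `normed = True`):
```
bins = np.append(np.sort(chinups)[::5], np.max(chinups))
plt.hist(chinups, bins = bins, histtype = "step", density = True, lw = 3)
```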
*Ending the lie*
Earlier I stated that histograms lie. One simple way to combat this lie: show all the data. Displaying the original data points allows viewers to somewhat intuit the effects of the particular bin choices that have been made (though this can also be cumbersome for very large data sets, which these days is essentially all data sets). The standard for showing individual observations relative to a histogram is a "rug plot," which shows a vertical tick (or other symbol) at the location of each source used to estimate the PDF.
**Problem 1d** Execute the cell below to see an example of a rug plot.
```
plt.hist(chinups, histtype = 'step')
# this is the code for the rug plot
plt.plot(chinups, np.zeros_like(chinups), '|', color='k', ms = 25, mew = 4)
```
Of course, even rug plots are not a perfect solution. Many of the chinup measurements are repeated, and those instances cannot be easily isolated above. One (slightly) better solution is to vary the transparency of the rug "whiskers" using `alpha = 0.3` in the whiskers plot call. But this too is far from perfect.
To recap, histograms are not ideal for density estimation for the following reasons:
* They introduce discontinuities that are not present in the data
* They are strongly sensitive to user choices ($N_\mathrm{bins}$, bin centering, bin grouping), without any mathematical guidance to what these choices should be
* They are difficult to visualize in higher dimensions
Histograms are useful for generating a quick representation of univariate data, but for the reasons listed above they should never be used for analysis. Most especially, functions should not be fit to histograms given how greatly the number of bins and bin centering affects the output histogram.
Okay - so if we are going to rail on histograms this much, there must be a better option. There is: [Kernel Density Estimation](https://en.wikipedia.org/wiki/Kernel_density_estimation) (KDE), a nonparametric form of density estimation whereby a normalized kernel function is convolved with the discrete data to obtain a continuous estimate of the underlying PDF. As a rule, the kernel must integrate to 1 over the interval $-\infty$ to $\infty$ and be symmetric. There are many possible kernels (gaussian is highly popular, though Epanechnikov, an inverted parabola, produces the minimal mean square error).
KDE is not completely free of the problems we illustrated for histograms above (in particular, both a kernel and the width of the kernel need to be selected), but it does manage to correct a number of the ills. We will now demonstrate this via a few examples using the `scikit-learn` implementation of KDE: [`KernelDensity`](http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KernelDensity.html#sklearn.neighbors.KernelDensity), which is part of the [`sklearn.neighbors`](http://scikit-learn.org/stable/modules/classes.html#module-sklearn.neighbors) module.
*Note* There are many implementations of KDE in Python, and Jake VanderPlas has put together [an excellent description of the strengths and weaknesses of each](https://jakevdp.github.io/blog/2013/12/01/kernel-density-estimation/). We will use the `scikit-learn` version as it is in many cases the fastest implementation.
To demonstrate the basic idea behind KDE, we will begin by representing each point in the dataset as a block (i.e. we will adopt the tophat kernel). Borrowing some code from Jake, we can estimate the KDE using the following code:
from sklearn.neighbors import KernelDensity
def kde_sklearn(data, grid, bandwidth = 1.0, **kwargs):
kde_skl = KernelDensity(bandwidth = bandwidth, **kwargs)
kde_skl.fit(data[:, np.newaxis])
log_pdf = kde_skl.score_samples(grid[:, np.newaxis]) # sklearn returns log(density)
return np.exp(log_pdf)
The two main options to set are the bandwidth and the kernel.
```
# execute this cell
from sklearn.neighbors import KernelDensity
def kde_sklearn(data, grid, bandwidth = 1.0, **kwargs):
kde_skl = KernelDensity(bandwidth = bandwidth, **kwargs)
kde_skl.fit(data[:, np.newaxis])
log_pdf = kde_skl.score_samples(grid[:, np.newaxis]) # sklearn returns log(density)
return np.exp(log_pdf)
```
**Problem 1e**
Plot the KDE of the PDF for the number of chinups middle aged men can do using a bandwidth of 0.1 and a tophat kernel.
*Hint - as a general rule, the grid spacing should be smaller than the bandwidth when plotting the PDF.*
```
grid = # complete
PDFtophat = kde_sklearn( # complete
plt.plot( # complete
```
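A possible completion (the grid itself is not specified by the problem; the range and spacing below are arbitrary choices, with the spacing kept smaller than the bandwidth):
```
grid = np.arange(0, 20 + 1e-4, 0.01)
PDFtophat = kde_sklearn(chinups, grid, bandwidth = 0.1, kernel = "tophat")
plt.plot(grid, PDFtophat)
```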
In this representation, each "block" has a height of 0.25. The bandwidth is too narrow to provide any overlap between the blocks. This choice of kernel and bandwidth produces an estimate that is essentially a histogram with a large number of bins. It gives no sense of continuity for the distribution. Now, we examine the difference (relative to histograms) upon changing the width (i.e., the bandwidth) of the blocks.
**Problem 1f**
Plot the KDE of the PDF for the number of chinups middle aged men can do using bandwidths of 1 and 5 and a tophat kernel. How do the results differ from the histogram plots above?
```
PDFtophat1 = # complete
# complete
# complete
# complete
```
It turns out blocks are not an ideal representation for continuous data (see discussion on histograms above). Now we will explore the resulting PDF from other kernels.
**Problem 1g** Plot the KDE of the PDF for the number of chinups middle aged men can do using a gaussian and Epanechnikov kernel. How do the results differ from the histogram plots above?
*Hint - you will need to select the bandwidth. The examples above should provide insight into the useful range for bandwidth selection. You may need to adjust the values to get an answer you "like."*
```
PDFgaussian = # complete
PDFepanechnikov = # complete
```
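A possible completion (the bandwidths below are illustrative guesses, not tuned values):
```
grid = np.arange(0, 20 + 1e-4, 0.01)
PDFgaussian = kde_sklearn(chinups, grid, bandwidth = 1, kernel = "gaussian")
PDFepanechnikov = kde_sklearn(chinups, grid, bandwidth = 2, kernel = "epanechnikov")
plt.plot(grid, PDFgaussian, label = "gaussian")
plt.plot(grid, PDFepanechnikov, label = "epanechnikov")
plt.legend(loc = 2)
```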
So, what is the *optimal* choice of bandwidth and kernel? Unfortunately, there is no hard and fast rule, as every problem will likely have a different optimization. Typically, the choice of bandwidth is far more important than the choice of kernel. In the case where the PDF is likely to be gaussian (or close to gaussian), then [Silverman's rule of thumb](https://en.wikipedia.org/wiki/Kernel_density_estimation#A_rule-of-thumb_bandwidth_estimator) can be used:
$$h = 1.059 \sigma n^{-1/5}$$
where $h$ is the bandwidth, $\sigma$ is the standard deviation of the samples, and $n$ is the total number of samples. Note - in situations with bimodal or more complicated distributions, this rule of thumb can lead to woefully inaccurate PDF estimates. The most general way to estimate the choice of bandwidth is via cross validation (we will cover cross-validation later today).
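As a quick illustration, Silverman's rule of thumb applied to the chinups data looks like this (a sketch only; remember the rule assumes the PDF is roughly gaussian):
```
sigma = np.std(chinups)
n = len(chinups)
h = 1.059 * sigma * n**(-0.2)  # Silverman's rule-of-thumb bandwidth
print("Silverman bandwidth: {:.2f}".format(h))
```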
*What about multidimensional PDFs?* It is possible using many of the Python implementations of KDE to estimate multidimensional PDFs, though it is very very important to beware the curse of dimensionality in these circumstances.
## Problem 2) Data Exploration
Now a more open-ended topic: data exploration. In brief, data exploration encompasses a large suite of tools (including those discussed above) to examine data that live in high-dimensional spaces. There is no single best method or optimal direction for data exploration. Instead, today we will introduce some of the tools available via Python.
As an example we will start with a basic line plot - and examine tools beyond `matplotlib`.
```
x = np.arange(0, 6*np.pi, 0.1)
y = np.cos(x)
plt.plot(x,y, lw = 2)
plt.xlabel('X')
plt.ylabel('Y')
plt.xlim(0, 6*np.pi)
```
### Seaborn
[`Seaborn`](https://stanford.edu/~mwaskom/software/seaborn/index.html) is a plotting package that enables many useful features for exploration. In fact, a lot of the functionality that we developed above can readily be handled with `seaborn`.
To begin, we will make the same plot that we created in matplotlib.
```
import seaborn as sns
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x,y, lw = 2)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_xlim(0, 6*np.pi)
```
These plots look identical, but it is possible to change the style with `seaborn`.
`seaborn` has 5 style presets: `darkgrid`, `whitegrid`, `dark`, `white`, and `ticks`. You can change the preset using the following:
sns.set_style("whitegrid")
which will change the output for all subsequent plots. Note - if you want to change the style for only a single plot, that can be accomplished with the following:
with sns.axes_style("dark"):
with all plotting commands inside the `with` statement.
**Problem 3a**
Re-plot the sine curve using each `seaborn` preset to see which you like best - then adopt this for the remainder of the notebook.
```
sns.set_style( # complete
# complete
```
The folks behind `seaborn` have thought a lot about color palettes, which is a good thing. Remember - the choice of color for plots is one of the most essential aspects of visualization. A poor choice of colors can easily mask interesting patterns or suggest structure that is not real. To learn more about what is available, see the [`seaborn` color tutorial](http://stanford.edu/~mwaskom/software/seaborn/tutorial/color_palettes.html).
Here we load the default:
```
# default color palette
current_palette = sns.color_palette()
sns.palplot(current_palette)
```
which we will now change to `colorblind`, a palette that is clearer to those who are colorblind.
```
# set palette to colorblind
sns.set_palette("colorblind")
current_palette = sns.color_palette()
sns.palplot(current_palette)
```
Now that we have covered the basics of `seaborn` (and the above examples truly only scratch the surface of what is possible), we will explore the power of `seaborn` for higher dimension data sets. We will load the famous Iris data set, which measures 4 different features of 3 different types of Iris flowers. There are 150 different flowers in the data set.
*Note - for those familiar with `pandas` `seaborn` is designed to integrate easily and directly with `pandas DataFrame` objects. In the example below the Iris data are loaded into a `DataFrame`. `iPython` notebooks also display the `DataFrame` data in a nice readable format.*
```
iris = sns.load_dataset("iris")
iris
```
Now that we have a sense of the data structure, it is useful to examine the distribution of features. Above, we went to great pains to produce histograms, KDEs, and rug plots. `seaborn` handles all of that effortlessly with the `distplot` function.
**Problem 3b**
Plot the distribution of petal lengths for the Iris data set.
```
# note - hist, kde, and rug all set to True, set to False to turn them off
with sns.axes_style("dark"):
sns.distplot(iris['petal_length'], bins=20, hist=True, kde=True, rug=True)
```
Of course, this data set lives in a 4D space, so plotting more than univariate distributions is important (and as we will see tomorrow this is particularly useful for visualizing classification results). Fortunately, `seaborn` makes it very easy to produce handy summary plots.
At this point, we are familiar with basic scatter plots in matplotlib.
**Problem 3c**
Make a matplotlib scatter plot showing the Iris petal length against the Iris petal width.
```
plt.scatter( # complete
```
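One possible completion (column names follow the `iris` DataFrame loaded above):
```
plt.scatter(iris['petal_length'], iris['petal_width'])
plt.xlabel('petal length')
plt.ylabel('petal width')
```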
Of course, when there are many many data points, scatter plots become difficult to interpret. As in the example below:
```
with sns.axes_style("darkgrid"):
xexample = np.random.normal(loc = 0.2, scale = 1.1, size = 10000)
yexample = np.random.normal(loc = -0.1, scale = 0.9, size = 10000)
plt.scatter(xexample, yexample)
```
Here, we see that there are many points, clustered about the origin, but we have no sense of the underlying density of the distribution. 2D histograms, such as `plt.hist2d()`, can alleviate this problem. I prefer to use `plt.hexbin()` which is a little easier on the eyes (though note - these histograms are just as subject to the same issues discussed above).
```
# hexbin w/ bins = "log" returns the log of counts/bin
# mincnt = 1 displays only hexpix with at least 1 source present
with sns.axes_style("darkgrid"):
plt.hexbin(xexample, yexample, bins = "log", cmap = "viridis", mincnt = 1)
plt.colorbar()
```
While the above plot provides a significant improvement over the scatter plot by providing a better sense of the density near the center of the distribution, the bin-edge effects are clearly present. An even better solution, like before, is a density estimate, which is easily built into `seaborn` via the `kdeplot` function.
```
with sns.axes_style("darkgrid"):
sns.kdeplot(xexample, yexample,shade=False)
```
This plot is much more appealing (and informative) than the previous two. For the first time we can clearly see that the distribution is not actually centered on the origin. Now we will move back to the Iris data set.
Suppose we want to see univariate distributions in addition to the scatter plot? This is certainly possible with `matplotlib` and you can find examples on the web, however, with `seaborn` this is really easy.
```
sns.jointplot(x=iris['petal_length'], y=iris['petal_width'])
```
But! Histograms and scatter plots can be problematic as we have discussed many times before.
**Problem 3d**
Re-create the plot above but set `kind='kde'` to produce density estimates of the distributions.
```
sns.jointplot( # complete
```
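A possible completion:
```
sns.jointplot(x=iris['petal_length'], y=iris['petal_width'], kind='kde')
```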
That is much nicer than what was presented above. However - we still have a problem in that our data live in 4D, but we are (mostly) limited to 2D projections of that data. One way around this is via the `seaborn` version of a `pairplot`, which plots the distribution of every variable in the data set against each other. (Here is where the integration with `pandas DataFrame`s becomes so powerful.)
```
sns.pairplot(iris[["sepal_length", "sepal_width", "petal_length", "petal_width"]])
```
For data sets where we have classification labels, we can even color the various points using the `hue` option, and produce KDEs along the diagonal with `diag_kind = 'kde'`.
```
sns.pairplot(iris, vars = ["sepal_length", "sepal_width", "petal_length", "petal_width"],
hue = "species", diag_kind = 'kde')
```
Even better - there is an option to create a `PairGrid` which allows fine tuned control of the data as displayed above, below, and along the diagonal. In this way it becomes possible to avoid having symmetric redundancy, which is not all that informative. In the example below, we will show scatter plots and contour plots simultaneously.
```
g = sns.PairGrid(iris, vars = ["sepal_length", "sepal_width", "petal_length", "petal_width"],
hue = "species", diag_sharey=False)
g.map_lower(sns.kdeplot)
g.map_upper(plt.scatter, edgecolor='white')
g.map_diag(sns.kdeplot, lw=3)
```
Note - one disadvantage to the plot above is that the contours do not share the same color scheme as the KDE estimates and the scatter plot. I have not been able to figure out how to change this in a satisfactory way. (One potential solution is detailed [here](http://stackoverflow.com/questions/32889590/seaborn-pairgrid-using-kdeplot-with-2-hues), however, it is worth noting that this solution restricts your color choices to a maximum of ~5 unless you are a colormaps wizard, and I am not.)
### GluonTS Callbacks
This notebook illustrates how one can control the training with GluonTS Callbacks. A callback is a function that gets called at one or more specific hook points during training.
You can use predefined GluonTS callbacks like the logging callback TrainingHistory, ModelAveraging or TerminateOnNaN, or you can implement your own callback.
#### 1. Using a single Callback
```
# fetching some data
from gluonts.dataset.repository.datasets import get_dataset
dataset = "m4_hourly"
dataset = get_dataset(dataset)
prediction_length = dataset.metadata.prediction_length
freq = dataset.metadata.freq
from gluonts.model.simple_feedforward import SimpleFeedForwardEstimator
from gluonts.mx.trainer import Trainer
from gluonts.mx.trainer.callback import TrainingHistory
# defining a callback, which will log the training loss for each epoch
history = TrainingHistory()
trainer=Trainer(epochs=20, callbacks=history)
estimator = SimpleFeedForwardEstimator(prediction_length=prediction_length, freq = freq, trainer=trainer)
predictor = estimator.train(dataset.train, num_workers=None)
# print the training loss over the epochs
print(history.loss_history)
# in case you are using a validation dataset you can get the validation loss with
# history.validation_loss_history
```
#### 2. Using multiple Callbacks
To continue the training from a given predictor you can use the WarmStart Callback. When you want to use more than one callback, provide them as a list:
```
from gluonts.mx.trainer.callback import WarmStart
warm_start = WarmStart(predictor=predictor)
trainer=Trainer(epochs=10, callbacks=[history, warm_start])
estimator = SimpleFeedForwardEstimator(prediction_length=prediction_length, freq = freq, trainer=trainer)
predictor = estimator.train(dataset.train, num_workers=None)
print(history.loss_history) # The training loss history of all 20+10 epochs we trained the model
```
#### 3. Default Callbacks
In addition to the Callbacks you specify, the GluonTS Trainer uses the two default Callbacks ModelAveraging and LearningRateReduction. You can turn them off by setting add_default_callbacks=False when initializing the Trainer.
```
trainer=Trainer(epochs=20, callbacks=history) # use the TrainingHistory Callback and the default callbacks.
trainer=Trainer(epochs=20, callbacks=history, add_default_callbacks=False) # use only the TrainingHistory Callback
trainer=Trainer(epochs=20, add_default_callbacks=False) # use no callback at all
```
#### 4. Custom Callbacks
To implement your own Callback you can write a class which inherits from the GluonTS Callback class and overwrite one or more of the hooks.
```
# Have a look at the abstract Callback class, the hooks take different arguments which you can use.
# Hook methods with boolean return value stop the training if False is returned.
from gluonts.mx.trainer.callback import Callback
import inspect
lines = inspect.getsource(Callback)
print(lines)
# Here is an example implementation of a Metric value based early stopping custom callback implementation
# it only implements the hook method "on_epoch_end()"
# which gets called after all batches of one epoch have been processed
from gluonts.evaluation import Evaluator
from gluonts.dataset.common import Dataset
from gluonts.mx.model.predictor import GluonPredictor
from mxnet.gluon import nn
from mxnet import gluon
import numpy as np
import mxnet as mx
from gluonts.support.util import copy_parameters
from typing import List  # needed for the List[float] annotation used below
class MetricInferenceEarlyStopping(Callback):
"""
Early Stopping mechanism based on the prediction network.
Can be used to base the Early Stopping directly on a metric of interest, instead of on the training/validation loss.
In the same way as test datasets are used during model evaluation,
the time series of the validation_dataset can overlap with the train dataset time series,
except for a prediction_length part at the end of each time series.
Parameters
----------
validation_dataset
An out-of-sample dataset which is used to monitor metrics
predictor
A gluon predictor, with a prediction network that matches the training network
evaluator
The Evaluator used to calculate the validation metrics.
metric
The metric on which to base the early stopping on.
patience
Number of epochs to train on given the metric did not improve more than min_delta.
min_delta
Minimum change in the monitored metric counting as an improvement
verbose
Controls, if the validation metric is printed after each epoch.
minimize_metric
The metric objective.
restore_best_network
Controls, if the best model, as assessed by the validation metrics is restored after training.
num_samples
The amount of samples drawn to calculate the inference metrics.
"""
def __init__(
self,
validation_dataset: Dataset,
predictor: GluonPredictor,
evaluator: Evaluator = Evaluator(num_workers=None),
metric: str = "MSE",
patience: int = 10,
min_delta: float = 0.0,
verbose: bool = True,
minimize_metric: bool = True,
restore_best_network: bool = True,
num_samples: int = 100,
):
assert (
patience >= 0
), "EarlyStopping Callback patience needs to be >= 0"
assert (
min_delta >= 0
), "EarlyStopping Callback min_delta needs to be >= 0.0"
assert (
num_samples >= 1
), "EarlyStopping Callback num_samples needs to be >= 1"
self.validation_dataset = list(validation_dataset)
self.predictor = predictor
self.evaluator = evaluator
self.metric = metric
self.patience = patience
self.min_delta = min_delta
self.verbose = verbose
self.restore_best_network = restore_best_network
self.num_samples = num_samples
if minimize_metric:
self.best_metric_value = np.inf
self.is_better = np.less
else:
self.best_metric_value = -np.inf
self.is_better = np.greater
self.validation_metric_history: List[float] = []
self.best_network = None
self.n_stale_epochs = 0
def on_epoch_end(
self,
epoch_no: int,
epoch_loss: float,
training_network: nn.HybridBlock,
trainer: gluon.Trainer,
best_epoch_info: dict,
ctx: mx.Context
) -> bool:
should_continue = True
copy_parameters(training_network, self.predictor.prediction_net)
from gluonts.evaluation.backtest import make_evaluation_predictions
forecast_it, ts_it = make_evaluation_predictions(
dataset=self.validation_dataset,
predictor=self.predictor,
num_samples=self.num_samples,
)
agg_metrics, item_metrics = self.evaluator(ts_it, forecast_it)
current_metric_value = agg_metrics[self.metric]
self.validation_metric_history.append(current_metric_value)
if self.verbose:
print(
f"Validation metric {self.metric}: {current_metric_value}, best: {self.best_metric_value}"
)
if self.is_better(current_metric_value, self.best_metric_value):
self.best_metric_value = current_metric_value
if self.restore_best_network:
training_network.save_parameters("best_network.params")
self.n_stale_epochs = 0
else:
self.n_stale_epochs += 1
if self.n_stale_epochs == self.patience:
should_continue = False
print(
f"EarlyStopping callback initiated stop of training at epoch {epoch_no}."
)
if self.restore_best_network:
print(
f"Restoring best network from epoch {epoch_no - self.patience}."
)
training_network.load_parameters("best_network.params")
return should_continue
# use the custom callback
from gluonts.dataset.repository.datasets import get_dataset
from gluonts.model.simple_feedforward import SimpleFeedForwardEstimator
from gluonts.mx.trainer import Trainer
dataset = "m4_hourly"
dataset = get_dataset(dataset)
prediction_length = dataset.metadata.prediction_length
freq = dataset.metadata.freq
estimator = SimpleFeedForwardEstimator(prediction_length=prediction_length, freq = freq)
training_network = estimator.create_training_network()
transformation = estimator.create_transformation()
predictor = estimator.create_predictor(transformation=transformation, trained_network=training_network)
es_callback = MetricInferenceEarlyStopping(validation_dataset=dataset.test, predictor=predictor, metric="MSE")
trainer = Trainer(epochs=200, callbacks=es_callback)
estimator.trainer = trainer
pred = estimator.train(dataset.train)
```
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
# Tutorial #2: Deploy an image classification model in Azure Container Instance (ACI)
This tutorial is **part two of a two-part tutorial series**. In the [previous tutorial](img-classification-part1-training.ipynb), you trained machine learning models and then registered a model in your workspace on the cloud.
Now, you're ready to deploy the model as a web service in [Azure Container Instances](https://docs.microsoft.com/azure/container-instances/) (ACI). A web service is an image, in this case a Docker image, that encapsulates the scoring logic and the model itself.
In this part of the tutorial, you use Azure Machine Learning service (Preview) to:
> * Set up your testing environment
> * Retrieve the model from your workspace
> * Test the model locally
> * Deploy the model to ACI
> * Test the deployed model
ACI is a great solution for testing and understanding the workflow. For scalable production deployments, consider using Azure Kubernetes Service. For more information, see [how to deploy and where](https://docs.microsoft.com/azure/machine-learning/service/how-to-deploy-and-where).
## Prerequisites
Complete the model training in the [Tutorial #1: Train an image classification model with Azure Machine Learning](train-models.ipynb) notebook.
```
# If you did NOT complete the tutorial, you can instead run this cell
# This will register a model and download the data needed for this tutorial
# These prerequisites are created in the training tutorial
# Feel free to skip this cell if you completed the training tutorial
# register a model
from azureml.core import Workspace
ws = Workspace.from_config()
from azureml.core.model import Model
model_name = "sklearn_mnist"
model = Model.register(model_path="sklearn_mnist_model.pkl",
model_name=model_name,
tags={"data": "mnist", "model": "classification"},
description="Mnist handwriting recognition",
workspace=ws)
from azureml.core.environment import Environment
from azureml.core.conda_dependencies import CondaDependencies
# to install required packages
env = Environment('tutorial-env')
cd = CondaDependencies.create(pip_packages=['azureml-dataprep[pandas,fuse]>=1.1.14', 'azureml-defaults'], conda_packages = ['scikit-learn==0.22.1'])
env.python.conda_dependencies = cd
# Register environment to re-use later
env.register(workspace = ws)
```
## Set up the environment
Start by setting up a testing environment.
### Import packages
Import the Python packages needed for this tutorial.
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import azureml.core
# display the core SDK version number
print("Azure ML SDK Version: ", azureml.core.VERSION)
```
## Deploy as web service
Deploy the model as a web service hosted in ACI.
To build the correct environment for ACI, provide the following:
* A scoring script to show how to use the model
* A configuration file to build the ACI
* The model you trained before
### Create scoring script
Create the scoring script, called score.py, used by the web service call to show how to use the model.
You must include two required functions into the scoring script:
* The `init()` function, which typically loads the model into a global object. This function is run only once when the Docker container is started.
* The `run(input_data)` function uses the model to predict a value based on the input data. Inputs and outputs to the run typically use JSON for serialization and de-serialization, but other formats are supported.
```
%%writefile score.py
import json
import numpy as np
import os
import pickle
import joblib
def init():
global model
# AZUREML_MODEL_DIR is an environment variable created during deployment.
# It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)
# For multiple models, it points to the folder containing all deployed models (./azureml-models)
model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_mnist_model.pkl')
model = joblib.load(model_path)
def run(raw_data):
data = np.array(json.loads(raw_data)['data'])
# make prediction
y_hat = model.predict(data)
# you can return any data type as long as it is JSON-serializable
return y_hat.tolist()
```
### Create configuration file
Create a deployment configuration file and specify the number of CPU cores and gigabytes of RAM needed for your ACI container. While it depends on your model, the default of 1 core and 1 gigabyte of RAM is usually sufficient for many models. If you feel you need more later, you would have to recreate the image and redeploy the service.
```
from azureml.core.webservice import AciWebservice
aciconfig = AciWebservice.deploy_configuration(cpu_cores=1,
memory_gb=1,
tags={"data": "MNIST", "method" : "sklearn"},
description='Predict MNIST with sklearn')
```
### Deploy in ACI
Estimated time to complete: **about 2-5 minutes**
Configure the image and deploy. The following code goes through these steps:
1. Create an environment object containing the dependencies needed by the model (here, the `tutorial-env` environment registered earlier)
1. Create inference configuration necessary to deploy the model as a web service using:
* The scoring file (`score.py`)
    * The environment object created in the previous step
1. Deploy the model to the ACI container.
1. Get the web service HTTP endpoint.
```
%%time
import uuid
from azureml.core.webservice import Webservice
from azureml.core.model import InferenceConfig
from azureml.core.environment import Environment
from azureml.core import Workspace
from azureml.core.model import Model
ws = Workspace.from_config()
model = Model(ws, 'sklearn_mnist')
myenv = Environment.get(workspace=ws, name="tutorial-env", version="1")
inference_config = InferenceConfig(entry_script="score.py", environment=myenv)
service_name = 'sklearn-mnist-svc-' + str(uuid.uuid4())[:4]
service = Model.deploy(workspace=ws,
name=service_name,
models=[model],
inference_config=inference_config,
deployment_config=aciconfig)
service.wait_for_deployment(show_output=True)
```
Get the scoring web service's HTTP endpoint, which accepts REST client calls. This endpoint can be shared with anyone who wants to test the web service or integrate it into an application.
```
print(service.scoring_uri)
```
## Test the model
### Download test data
Download the test data to the **./data/** directory
```
import os
from azureml.core import Dataset
from azureml.opendatasets import MNIST
data_folder = os.path.join(os.getcwd(), 'data')
os.makedirs(data_folder, exist_ok=True)
mnist_file_dataset = MNIST.get_file_dataset()
mnist_file_dataset.download(data_folder, overwrite=True)
```
### Load test data
Load the test data from the **./data/** directory created during the training tutorial.
```
from utils import load_data
import os
import glob
data_folder = os.path.join(os.getcwd(), 'data')
# note we also shrink the intensity values (X) from 0-255 to 0-1. This helps the neural network converge faster
X_test = load_data(glob.glob(os.path.join(data_folder,"**/t10k-images-idx3-ubyte.gz"), recursive=True)[0], False) / 255.0
y_test = load_data(glob.glob(os.path.join(data_folder,"**/t10k-labels-idx1-ubyte.gz"), recursive=True)[0], True).reshape(-1)
```
### Predict test data
Feed the test dataset to the model to get predictions.
The following code goes through these steps:
1. Send the data as a JSON array to the web service hosted in ACI.
1. Use the SDK's `run` API to invoke the service. You can also make raw calls using any HTTP tool such as curl.
```
import json
test = json.dumps({"data": X_test.tolist()})
test = bytes(test, encoding='utf8')
y_hat = service.run(input_data=test)
```
### Examine the confusion matrix
Generate a confusion matrix to see how many samples from the test set are classified correctly. Notice the mis-classified value for the incorrect predictions.
```
from sklearn.metrics import confusion_matrix
conf_mx = confusion_matrix(y_test, y_hat)
print(conf_mx)
print('Overall accuracy:', np.average(y_hat == y_test))
```
Use `matplotlib` to display the confusion matrix as a graph. In this graph, the X axis represents the predicted values, and the Y axis represents the true labels. The color of each grid cell represents the error rate. The lighter the color, the higher the error rate. For example, many 5's are mis-classified as 3's, so you see a bright grid cell at (5,3).
```
# normalize the diagonal cells so that they don't overpower the rest of the cells when visualized
row_sums = conf_mx.sum(axis=1, keepdims=True)
norm_conf_mx = conf_mx / row_sums
np.fill_diagonal(norm_conf_mx, 0)
fig = plt.figure(figsize=(8,5))
ax = fig.add_subplot(111)
cax = ax.matshow(norm_conf_mx, cmap=plt.cm.bone)
ticks = np.arange(0, 10, 1)
ax.set_xticks(ticks)
ax.set_yticks(ticks)
ax.set_xticklabels(ticks)
ax.set_yticklabels(ticks)
fig.colorbar(cax)
plt.ylabel('true labels', fontsize=14)
plt.xlabel('predicted values', fontsize=14)
plt.savefig('conf.png')
plt.show()
```
## Show predictions
Test the deployed model with a random sample of 30 images from the test data.
1. Print the returned predictions and plot them along with the input images. Red font and inverse image (white on black) is used to highlight the misclassified samples.
Since the model accuracy is high, you might have to run the following code a few times before you can see a misclassified sample.
```
import json
# find 30 random samples from test set
n = 30
sample_indices = np.random.permutation(X_test.shape[0])[0:n]
test_samples = json.dumps({"data": X_test[sample_indices].tolist()})
test_samples = bytes(test_samples, encoding='utf8')
# predict using the deployed model
result = service.run(input_data=test_samples)
# compare actual value vs. the predicted values:
i = 0
plt.figure(figsize = (20, 1))
for s in sample_indices:
plt.subplot(1, n, i + 1)
plt.axhline('')
plt.axvline('')
# use different color for misclassified sample
font_color = 'red' if y_test[s] != result[i] else 'black'
clr_map = plt.cm.gray if y_test[s] != result[i] else plt.cm.Greys
plt.text(x=10, y =-10, s=result[i], fontsize=18, color=font_color)
plt.imshow(X_test[s].reshape(28, 28), cmap=clr_map)
i = i + 1
plt.show()
```
You can also send raw HTTP request to test the web service.
```
import requests
# send a random row from the test set to score
random_index = np.random.randint(0, len(X_test)-1)
input_data = "{\"data\": [" + str(list(X_test[random_index])) + "]}"
headers = {'Content-Type':'application/json'}
# for AKS deployment you'd need to include the service key in the header as well
# api_key = service.get_key()
# headers = {'Content-Type':'application/json', 'Authorization':('Bearer '+ api_key)}
resp = requests.post(service.scoring_uri, input_data, headers=headers)
print("POST to url", service.scoring_uri)
#print("input data:", input_data)
print("label:", y_test[random_index])
print("prediction:", resp.text)
```
## Clean up resources
To keep the resource group and workspace for other tutorials and exploration, you can delete only the ACI deployment using this API call:
```
service.delete()
```
If you're not going to use what you've created here, delete the resources you just created with this quickstart so you don't incur any charges. In the Azure portal, select and delete your resource group. You can also keep the resource group, but delete a single workspace by displaying the workspace properties and selecting the Delete button.
## Next steps
In this Azure Machine Learning tutorial, you used Python to:
> * Set up your testing environment
> * Retrieve the model from your workspace
> * Test the model locally
> * Deploy the model to ACI
> * Test the deployed model
You can also try out the [regression tutorial](regression-part1-data-prep.ipynb).

### OkCupid DataSet: Classify using combination of text data and metadata
### Meeting 5, 03- 03- 2020
### Recap last meeting's decisions:
<ol>
<p>Meeting 4, 28- 01- 2020</p>
<li> Approach 1: </li>
<ul>
<li>Merge classs 1, 3 and 5</li>
<li>Under sample class 6 </li>
<li> Merge classes 6, 7, 8</li>
</ul>
<li> Approach 2:</li>
<ul>
<li>Merge classes 1, 3 and 5 as class 1</li>
<li> Merge classes 6, 7, 8 as class 8</li>
<li>Undersample class 8</li>
</ul>
<li> collect metadata: </li>
<ul>
<li> Number of misspelled words </li>
<li> Number of unique words </li>
<li> Average word length </li>
</ul>
</ol>
## Education level summary
<ol>
<p></p>
<img src="rep2_image/count_diag.JPG">
</ol>
<ol>
<p></p>
<img src="rep2_image/count_table.JPG">
</ol>
## Logistic regression after removing minority classes and undersampling
<ol>
<p></p>
<img src="rep2_image/log1.JPG">
</ol>
## Merge levels:
- Merge classes 1, 3 and 5 as class 1
- Merge classes 6, 7, 8 as class 8
- weight classes while classifying using Logistic regression
<ol>
<p></p>
<img src="rep2_image/count_table2.JPG">
</ol>
<ol>
<p></p>
### Logistic regression with undersampling
<img src="rep2_image/log_undersampling.JPG">
</ol>
<ol>
<p></p>
### Logistic regression with weighting
<img src="rep2_image/log_weight.JPG">
</ol>
### Add metadata to the dataset
```
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
import seaborn as sns
from sklearn.metrics import classification_report
from sklearn.preprocessing import StandardScaler
from sklearn import preprocessing
from sklearn.preprocessing import FunctionTransformer
from sklearn.pipeline import Pipeline
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import FeatureUnion
from collections import Counter
from sklearn.naive_bayes import MultinomialNB
import numpy as np
import itertools
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from sklearn.utils import resample
def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
df = pd.read_csv (r'../../../data/processed/stylo_cupid2.csv')
df.columns
# import readability
# from tqdm._tqdm_notebook import tqdm_notebook
# tqdm_notebook.pandas()
# def text_readability(text):
# results = readability.getmeasures(text, lang='en')
# return results['readability grades']['FleschReadingEase']
# df['readability'] = df.progress_apply(lambda x:text_readability(x['text']), axis=1)
df.head()
# Read metadata dataset to dataframe
# df = pd.read_csv (r'../../../data/processed/stylo_cupid2.csv')
df['sex'].mask(df['sex'].isin(['m']) , 0.0, inplace=True)
df['sex'].mask(df['sex'].isin(['f']) , 1.0, inplace=True)
# print(df['sex'].value_counts())
df['isced'].mask(df['isced'].isin([3.0, 5.0]) , 1.0, inplace=True)
df['isced'].mask(df['isced'].isin([6.0, 7.0]) , 8.0, inplace=True)
# # Separate majority and minority classes
# df_majority = df[df.isced==8.0]
# df_minority = df[df.isced==1.0]
# # Downsample majority class
# df_majority_downsampled = resample(df_majority,
# replace=False, # sample without replacement
# n_samples=10985, # to match minority class
# random_state=123) # reproducible results
# # Combine minority class with downsampled majority class
# df = pd.concat([df_majority_downsampled, df_minority])
print(sorted(Counter(df['isced']).items()))
df = df.dropna(subset=['clean_text', 'isced'])
corpus = df[['clean_text', 'count_char','count_word', '#anwps', 'count_punct', 'avg_wordlength', 'count_misspelled', 'word_uniqueness', 'age', 'sex']]
target = df["isced"]
# vectorization
X_train, X_val, y_train, y_val = train_test_split(corpus, target, train_size=0.75, stratify=target,
test_size=0.25, random_state = 0)
get_text_data = FunctionTransformer(lambda x: x['clean_text'], validate=False)
get_numeric_data = FunctionTransformer(lambda x: x[['count_char','count_word', '#anwps', 'count_punct', 'avg_wordlength', 'count_misspelled', 'word_uniqueness', 'age', 'sex']], validate=False)
# Solver = lbfgs
# merge vectorized text data and scaled numeric data
process_and_join_features = Pipeline([
('features', FeatureUnion([
('numeric_features', Pipeline([
('selector', get_numeric_data),
('scaler', preprocessing.StandardScaler())
])),
('text_features', Pipeline([
('selector', get_text_data),
('vec', CountVectorizer(binary=False, ngram_range=(1, 2), lowercase=True))
]))
])),
('clf', LogisticRegression(random_state=0,max_iter=1000, solver='lbfgs', penalty='l2', class_weight='balanced'))
])
# merge vectorized text data and scaled numeric data
process_and_join_features.fit(X_train, y_train)
predictions = process_and_join_features.predict(X_val)
print("Final Accuracy for Logistic: %s"% accuracy_score(y_val, predictions))
cm = confusion_matrix(y_val,predictions)
plt.figure()
plot_confusion_matrix(cm, classes=[0,1], normalize=False,
title='Confusion Matrix')
print(classification_report(y_val, predictions))
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix
# scores = cross_val_score(process_and_join_features, X_train, y_train, cv=5)
# print(scores)
# print(scores.mean())
process_and_join_features.fit(X_train, y_train)
y_pred = cross_val_predict(process_and_join_features, corpus, target, cv=5)
conf_mat = confusion_matrix(target, y_pred)
print(conf_mat)
from sklearn.model_selection import cross_val_score, cross_val_predict
scores = cross_val_score(process_and_join_features, corpus, target, cv=5)
print(scores)
print(scores.mean())
# merge vectorized text data and scaled numeric data
process_and_join_features = Pipeline([
('features', FeatureUnion([
('numeric_features', Pipeline([
('selector', get_numeric_data),
('scaler', preprocessing.StandardScaler())
])),
('text_features', Pipeline([
('selector', get_text_data),
('vec', CountVectorizer(binary=False, ngram_range=(1, 2), lowercase=True))
]))
])),
('clf', LogisticRegression(random_state=0,max_iter=5000, solver='sag', penalty='l2', class_weight='balanced'))
])
#
process_and_join_features.fit(X_train, y_train)
predictions = process_and_join_features.predict(X_val)
print("Final Accuracy for Logistic: %s"% accuracy_score(y_val, predictions))
cm = confusion_matrix(y_val,predictions)
plt.figure()
plot_confusion_matrix(cm, classes=[0,1], normalize=False,
title='Confusion Matrix')
print(classification_report(y_val, predictions))
# merge vectorized text data and scaled numeric data
process_and_join_features = Pipeline([
('features', FeatureUnion([
('numeric_features', Pipeline([
('selector', get_numeric_data),
('scaler', preprocessing.StandardScaler())
])),
('text_features', Pipeline([
('selector', get_text_data),
('vec', CountVectorizer(binary=False, ngram_range=(1, 2), lowercase=True))
]))
])),
('clf', LogisticRegression(n_jobs=-1, random_state=0,max_iter=3000, solver='saga', penalty='l2', class_weight='balanced'))
])
#
process_and_join_features.fit(X_train, y_train)
predictions = process_and_join_features.predict(X_val)
print("Final Accuracy for Logistic: %s"% accuracy_score(y_val, predictions))
cm = confusion_matrix(y_val,predictions)
plt.figure()
plot_confusion_matrix(cm, classes=[0,1], normalize=False,
title='Confusion Matrix')
print(classification_report(y_val, predictions))
```
# Model Selection, Overfitting and Regularization
This tutorial is meant to be a gentle introduction to machine learning concepts. We present a simple polynomial fitting example using a least squares solution, which is a specific case of what is called maximum likelihood, but we will not get into details about this probabilistic view of least squares in this tutorial. We use this example to introduce important machine learning concepts using plain language that should be accessible to undergraduate and graduate students with a minimal background in calculus.
The goals of this tutorial are:
- Explain how to develop an experiment. Split your data into development set (*i.e.*, train and validaion sets) and test set.
- Introduce how to select your model.
- Introduce the concepts of *over-fitting*, *under-fitting*, and *model generalization*.
- Introduce the concept of *regularization* for reducing *over-fitting*.
This tutorial is interactive and it corresponds to an adaptation of the example presented in chapter 1 of the book: **Christopher M. Bishop. 2006. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag New York, Inc., Secaucus, NJ, USA.**
## Designing your experiment
Machine learning builds models by learning from data. When designing your experiment, you need to split your data into a development set and a test set. The development set is split into 2 sets: a train set and a validation set. The train set is used to learn the parameters of the different models you are fitting (training). The validation set is employed to select what is hopefully the best model among the different models you trained; because it is used for selection it is biased and cannot be used as proof of generalization. The test set is used to see if the selected model generalizes well to unseen data.
<img src="../Figures/train_val_test.png" alt="Drawing" style="width: 500px;"/>
## Generating synthetic data
```
# Directive to make plots inline as opposed to having pop-up plots
%matplotlib inline
import numpy as np # Import numpy with nickname np
import matplotlib.pylab as plt # plotting library
from ipywidgets import * # Interaction library
var = 0.2 #Noise variance
#Create data set
N = 25
x = np.linspace(0, 1, N)
y_noiseless = np.sin(2*np.pi*x) # signal
y = y_noiseless + np.random.normal(0, var, N) #signal + noise -> real measurements always come with noise
# Plot entire data set with and without noise
plt.figure()
plt.plot(x,y_noiseless,linewidth = 2.0,label = r'Data without noise: $sin(2 \pi x)$')
plt.scatter(x,y,color ='red', marker = 'x', label = r'Data with noise')
plt.legend(loc = (0.02, 0.18))
plt.xlabel("x")
plt.ylabel("y")
plt.show()
```
## Splitting the data into train, validation, and test sets
```
# Splitting the data in train/validation/test sets - size of each set was choosen arbitrarily
train_size = 10
val_size = 5
test_size = 10
indexes = np.arange(N, dtype =int)
np.random.seed(seed = 2) # Random seed to keep results always the same
np.random.shuffle(indexes) # Shuffling the data before the split
# Train set
aux = indexes[:train_size]
aux = np.sort(aux)
x_train = x[aux]
y_train = y[aux]
#Validation set
aux = indexes[train_size: train_size + val_size]
aux = np.sort(aux)
x_val= x[aux]
y_val = y[aux]
# Test set
aux = indexes[-test_size:]
aux = np.sort(aux)
x_test = x[aux]
y_test = y[aux]
# Plot train/val/test sets
plt.figure()
plt.plot(x,y_noiseless,linewidth = 2.0,label = r'Model no noise: $sin(2 \pi x)$')
plt.scatter(x_train,y_train,color ='red', marker = 'x', label = "Train set")
plt.scatter(x_val,y_val,color = 'blue',marker = '^' , label = "Validation set")
plt.scatter(x_test,y_test,color = 'green', marker = 'o', label = "Test set")
plt.legend(loc = (0.02, 0.18))
plt.xlabel("x")
plt.ylabel("y")
plt.show()
```
## Data
Observations: $$\boldsymbol{X} =[x_1,x_2,...,x_N]^T$$
Target: $$\boldsymbol{T} =[t_1,t_2,...,t_N]^T$$
Estimates: $$\boldsymbol{Y} =[y_1,y_2,...,y_N]^T$$
## Polynomial Model
$$y(x,\boldsymbol{W})= w_0 + w_1x +w_2x^2+...+w_Mx^M = \sum^M_{j=0}w_jx^j$$
Weights (*i.e.*, what our model learns): $$\boldsymbol{W} =[w_0,w_1,...,w_M]^T$$
## Cost Function
Quadratic cost function: $$E(\boldsymbol{W})=\frac{1}{2}\sum_{n=1}^N\{y(x_n,\boldsymbol{W})-t_n\}^2$$
Computing the derivative of the cost function and making it equal to zero, we can find the vector **W*** that minimizes the error:
$$ \boldsymbol{W}^* = (\boldsymbol{A}^T\boldsymbol{A})^{-1}\boldsymbol{A} ^T\boldsymbol{T}$$
Where **A** is defined by:
$$\boldsymbol{A} = \begin{bmatrix}
1 & x_{1} & x_{1}^2 & \dots & x_{1}^M \\
1 & x_{2} & x_{2}^2 & \dots & x_{2}^M \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & x_{N} & x_{N}^2 & \dots & x_{N}^M
\end{bmatrix}$$
```
#Least squares polynomial fitting solution
# Implementation of the equation shown above
def polynomial_fit(X,T,M):
A = np.power(X.reshape(-1,1),np.arange(0,M+1).reshape(1,-1))
T = T.reshape(-1,1)
W = np.dot(np.linalg.pinv(A),T)
return W.ravel()
```
Plotting the least squares result varying the polynomial degree between 0 and 9. **Which model is a good model?** Look at the plots but also the magnitude of the weights resulting from each polynomial fit.
```
def plotmodel(M):
coefs = polynomial_fit(x_train, y_train, M)[::-1]
print("Weights:\n", coefs)
p = np.poly1d(coefs)
plt.figure()
plt.plot(x,y_noiseless,linewidth = 1.5,label = r'Data no noise: $sin(2 \pi x)$')
plt.scatter(x_train,y_train,color='red',label= "Train set")
plt.xlabel("x")
plt.ylabel(r'y')
y_fit = p(x_train)
plt.plot(x_train,y_fit,linewidth = 1.0,label ="Polynomial Fit")
plt.legend(loc=(0.02,0.02))
plt.show()
interact(plotmodel,M=(0,9,1))
```
Depending on the degree, M, of the polynomial we fit to our data, our model falls under one of these categories:
- **Under-fitting**: the model is too inflexible and is not able to capture any patterns in the data.
- **Over-fitting**: the model is too flexible. It ends up tuning to the random noise in the data. The model may have a low error in the train set, but it is not expected to generalize well to new (unseen) data.
- **Good fit**: The model is able to capture patterns in our data, but it does not get tuned to the random noise in the data. Better chances to generalize to new (unseen) data.
A good exercise is to visually determine whether the model is under-fitting, over-fitting or it is a good model based on the polynomial degree in the interactive plot shown above.
## Root mean squared error and Model Selection
Root mean squared error is an error measure commonly employed in regression problems.
$$E_{RMS}=\sqrt{2E(\boldsymbol{W^*})/N}$$
We will analyze the root mean squared error in the validation set to select our model.
```
# Computes RMS error
def rms_error(X,T,W):
p = np.poly1d(W)
T_fit = p(X)
E = np.sqrt(((T - T_fit)**2/T.size).sum())
return E
m = range(10)
train = []
val = []
# Compute RMS error across different polynomial fits
for M in m:
W = polynomial_fit(x_train, y_train, M)[::-1]
error_train = rms_error(x_train,y_train,W)
error_val = rms_error(x_val,y_val,W)
train.append(error_train)
val.append(error_val)
# Plot the errors
plt.figure()
plt.plot(m,train,linewidth = 2.0,marker = 'o',markersize = 12,label = r'$E_{RMS}$ Train')
plt.plot(m,val,linewidth = 2.0,marker = 'x',markersize = 12,label = r'$E_{RMS}$ Validation')
plt.legend(loc = (0.02, 0.05))
plt.xlabel("Polynomial degree")
plt.ylabel(r'$E_{RMS}$')
plt.show()
# Model selection - the model with the lowest error in the validation set is selected. Then, the model
# generalizability is assessed on the test set.
best_M = np.argmin(val)
W = polynomial_fit(x_train, y_train, best_M)[::-1]
test_error = rms_error(x_test,y_test,W)
print("Model selected was a polynomial of degree %d" %best_M)
print("Root mean squared test error: %.3f" %test_error)
```
## Cost function with regularization
Regularization is a technique to avoid overfitting. Do you remember how the values of the estimated weights increased quickly for polynomial fits with high degrees in the example without regularization? That was the model tuning itself to the noise in the data. Regularization consists in adding a penalty term to the cost function. Let's add a quadratic penalty to the weights we are trying to estimate. The quadratic penalty is called **L2 regularization**.
$$E(\boldsymbol{W})=\frac{1}{2}\sum_{n=1}^N\{y(x_n,\boldsymbol{W})-t_n\}^2 +\frac{\lambda}{2}||\boldsymbol{W}||^2$$
The above equation also has a well-defined minimum point. Computing its derivative and making it equal to zero, the solution of the equation is given by:
$$\boldsymbol{W}^* = (\boldsymbol{A}^T\boldsymbol{A} + \lambda n\boldsymbol{I})^{-1}\boldsymbol{A} ^T\boldsymbol{T} $$
Note that our problem now has two hyper-parameters that we need to set. The polynomial degree (M) and the regularization factor ($\lambda$). Hyper-parameters are set by the user (*e.g.*, M and $\lambda$), while parameters are learned by the model (*e.g.*, the weights).
```
#Least square solution with regularization
def polynomial_fit_reg(X,T,M,lamb):
N = X.shape[0]
A = np.power(X.reshape(-1,1),np.arange(0,M+1).reshape(1,-1))
lambda_matrix = lamb*N*np.eye(M+1)
T = T.reshape(-1,1)
aux = np.dot(A.T,A) + lambda_matrix
aux = np.linalg.pinv(aux)
aux2 = np.dot(A.T,T)
W = np.dot(aux,aux2)
return W.ravel()
```
In the demo below, we show the influence of $log(\lambda)$ and $M$ in the polynomial fitting. Note the influence of $\lambda$ in the estimated weights.
```
def plotmodel2(M,log_lamb):
lamb = np.exp(log_lamb)
coefs = polynomial_fit_reg(x_train, y_train, M,lamb)[::-1]
print("Weights:\n",coefs)
print("Lambda\n", lamb)
p = np.poly1d(coefs)
plt.figure()
plt.plot(x,y_noiseless,linewidth = 1.5,label = r'Data no noise: $sin(2 \pi x)$')
plt.scatter(x_train,y_train,color='red',label= "Train set")
plt.xlabel("x")
plt.ylabel(r'y')
y_fit = p(x_train)
plt.plot(x_train,y_fit,linewidth = 1.0,label ="Polynomial Fit")
plt.legend(loc=(0.02,0.02))
plt.show()
interact(plotmodel2,M=(0,9,1),log_lamb = (-40,-9,.1))
```
When we fit our model to the training data, we do a grid search through different polynomial degrees (M) and different regularization values ($\lambda$) to search for the model with the lowest error in the validation set, which again is the model we select. An alternative to the grid search is to perform a random search for the best set of model hyper-parameters.
```
log_lamb = range(-40,-8) # regularization values
M = range(7,10) # different polynomial degrees
train = np.zeros((len(log_lamb), len(M)))
val = np.zeros((len(log_lamb), len(M)))
for (i,m) in enumerate(M):
for (j,l) in enumerate(log_lamb):
lamb = np.exp(l)
coeffs = polynomial_fit_reg(x_train, y_train, m,lamb)[::-1]
train[j,i] = rms_error(x_train,y_train,coeffs)
val[j,i] = rms_error(x_val,y_val,coeffs)
plt.figure(figsize = (24,22), dpi = 300)
for (i,m) in enumerate(M):
plt.subplot(2, 2, i + 1)
plt.plot(log_lamb,train[:,i],linewidth = 1.0,marker = 'o',markersize = 12,label = r'$E_{RMS}$ Train')
plt.plot(log_lamb,val[:,i],linewidth = 1.0,marker = 'x',markersize = 12,label = r'$E_{RMS}$ Validation')
plt.legend(loc = (0.02, 0.075))
plt.xlabel(r'$ln\lambda$')
plt.ylabel(r'$E_{RMS}$')
plt.title("Polynomial degree %d" %m)
plt.show()
# Model selection
best_M_reg = np.unravel_index(val.argmin(), val.shape)
W = polynomial_fit_reg(x_train, y_train, M[best_M_reg[1]], np.exp(log_lamb[best_M_reg[0]]))[::-1]
test_error = rms_error(x_test,y_test,W)
print("Model selected was a polynome of degree %d with lambda = %e" %(M[best_M_reg[1]], np.exp(log_lamb[best_M_reg[0]])))
print("Root mean squared test error: %.3f" %test_error)
```
## Summary
That is all folks. In this tutorial, we presented a gentle introduction to model selection, over-fitting and regularization. We saw how to design our experiment by splitting our dataset into a development set (train + validation sets) and a test set. This method is commonly employed when we have very large datasets that may take days to train. For smaller datasets, a procedure called [cross-validation](https://en.wikipedia.org/wiki/Cross-validation_(statistics)#:~:text=Cross%2Dvalidation%2C%20sometimes%20called%20rotation,to%20an%20independent%20data%20set.) is often employed. We also saw that polynomials with high degrees tended to overfit to the data and by adding a regularization term to the cost function, over-fitting can be potentially mitigated. Another way to avoid over-fitting is by collecting more data (see activity suggestions), which is not always feasible.
The concepts explained in this tutorial are valid not just for polynomial fits, but also across different machine learning models like neural networks and support vector machines.
## Activity suggestions
- Use more data for training your model;
- Change the input signal;
- Change the noise intensity;
## Analysis of A/B Test Results
* analyze an A/B test run on real Yandex users
* confirm or refute the presence of changes in user behavior between the control group (control) and the test group (exp)
* determine the nature of these changes and the practical significance of the change being introduced
* understand which of the user groups loses / gains the most from the tested change (localize the change)
### Task 1
The main metric we focus on in this assignment is the number of user clicks on a web page, depending on the tested change to that page.
Compute how much larger the number of user clicks in the exp group is compared to the control group, as a percentage of the number of clicks in the control group.
Round the resulting percentage to three decimal places.
```
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats
from statsmodels.sandbox.stats.multicomp import multipletests
data = pd.read_csv('ab_browser_test.csv')
data.shape
data.head()
control = sum(data.loc[(data['slot'] == 'control')].n_clicks)
exp = sum(data.loc[(data['slot'] == 'exp')].n_clicks)*100 /control
exp-100
```
### Task 2
Let's take a closer look at the difference between the two groups (control and exp) with respect to the number of user clicks.
To do this, use the bootstrap to build 95% confidence intervals for the mean and the median number of clicks in each of the two groups.
```
def get_bootstrap_samples(data, n_samples):
indices = np.random.randint(0, len(data), (n_samples, len(data)))
samples = data[indices]
return samples
def stat_intervals(stat, alpha):
boundaries = np.percentile(stat, [100 * alpha / 2., 100 * (1 - alpha / 2.)])
return boundaries
data.loc[data.slot == 'exp'].n_clicks
```
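A minimal sketch of how the confidence intervals could be computed with the two helper functions above (the number of resamples `n_boot` is an arbitrary choice; with hundreds of thousands of users the resample index array can be memory-hungry):
```
np.random.seed(0)
n_boot = 100  # arbitrary number of bootstrap resamples
control_clicks = data.loc[data.slot == 'control'].n_clicks.values
exp_clicks = data.loc[data.slot == 'exp'].n_clicks.values
# one row per bootstrap resample
control_samples = get_bootstrap_samples(control_clicks, n_boot)
exp_samples = get_bootstrap_samples(exp_clicks, n_boot)
print("95% CI, mean clicks (control):", stat_intervals(control_samples.mean(axis=1), 0.05))
print("95% CI, mean clicks (exp):", stat_intervals(exp_samples.mean(axis=1), 0.05))
print("95% CI, median clicks (control):", stat_intervals(np.median(control_samples, axis=1), 0.05))
print("95% CI, median clicks (exp):", stat_intervals(np.median(exp_samples, axis=1), 0.05))
```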
### Task 3
Since there is quite a lot of data (on the order of half a million unique users), a difference of a few percent can be not only practically significant but also statistically significant. The latter claim needs additional verification.
Look at the data you were given and select all correct statements regarding testing the hypothesis that the mean number of clicks is equal in the two groups.
```
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.hist(data.loc[data.slot == 'exp'].n_clicks, bins=100, edgecolor='k')
plt.axvline(x=(data.loc[data.slot == 'exp'].n_clicks).mean(), ymin=0, ymax=175000, c='r')
plt.xlim(0, 175)
plt.ylabel('Number of users')
plt.xlabel('Number of clicks')
plt.title('Experimental group')
plt.subplot(122)
plt.hist(data.loc[data.slot == 'control'].n_clicks, bins=100, edgecolor='k')
plt.axvline(x=(data.loc[data.slot == 'control'].n_clicks).mean(), ymin=0, ymax=175000, c='r')
plt.xlim(0, 175)
plt.ylabel('Number of users')
plt.xlabel('Number of clicks')
plt.title('Control group')
plt.show()
```
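As a rough first check (a minimal sketch, not a substitute for the reasoning the task asks for), Welch's two-sample t-test can be applied to the total clicks per user; note that the distributions above are heavily skewed, so the t-test's assumptions are questionable here.
```
# Minimal sketch: Welch's t-test on total clicks per user
# (assumption: summing n_clicks per userID within each slot is the right aggregation).
clicks_exp_user = data.loc[data.slot == 'exp'].groupby('userID').n_clicks.sum()
clicks_control_user = data.loc[data.slot == 'control'].groupby('userID').n_clicks.sum()
stats.ttest_ind(clicks_exp_user, clicks_control_user, equal_var=False)
```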
### Task 4
One possible alternative to the t-test is the Mann-Whitney test. On a fairly broad class of distributions it is asymptotically more efficient than the t-test, while requiring no parametric assumptions about the shape of the distribution.
Split the sample into two parts corresponding to the control and exp groups. Transform the data so that each user corresponds to the total number of their clicks. Use the Mann-Whitney test to check the hypothesis that the means are equal. What can be said about the resulting p-value?
```
control = data.loc[(data['slot'] == 'control')][['userID', 'n_clicks']]
control.head()
exp = data.loc[(data['slot'] == 'exp')][['userID', 'n_clicks']]
exp.head()
# aggregate total clicks per user before applying the test
exp_clicks = exp.groupby('userID').n_clicks.sum()
control_clicks = control.groupby('userID').n_clicks.sum()
stats.mannwhitneyu(exp_clicks, control_clicks, alternative='two-sided')
```
```
#|hide
#|skip
! [ -e /content ] && pip install -Uqq fastai # upgrade fastai on colab
#|all_slow
#|default_exp callback.comet
#|export
from __future__ import annotations
import tempfile
from fastai.basics import *
from fastai.learner import Callback
#|hide
from nbdev.showdoc import *
```
# Comet.ml
> Integration with [Comet.ml](https://www.comet.ml/).
## Registration
1. Create account: [comet.ml/signup](https://www.comet.ml/signup).
2. Export API key to the environment variable (more help [here](https://www.comet.ml/docs/v2/guides/getting-started/quickstart/#get-an-api-key)). In your terminal run:
```
export COMET_API_KEY='YOUR_LONG_API_TOKEN'
```
or include it in your `./comet.config` file (**recommended**). More help is [here](https://www.comet.ml/docs/v2/guides/tracking-ml-training/jupyter-notebooks/#set-your-api-key-and-project-name).
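For illustration only (the linked Comet documentation is the authoritative reference), such a config file is typically an INI-style file along these lines:
```
[comet]
api_key = YOUR_LONG_API_TOKEN
```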
## Installation
1. You need to install `comet_ml`. In your terminal run:
```
pip install comet_ml
```
or (alternative installation using conda). In your terminal run:
```
conda install -c anaconda -c conda-forge -c comet_ml comet_ml
```
## How to use?
The key is to create the `CometCallback` before you create the `Learner()`, like this:
```
from fastai.callback.comet import CometCallback
comet_ml_callback = CometCallback('PROJECT_NAME') # specify project
learn = Learner(dls, model,
cbs=comet_ml_callback
)
learn.fit_one_cycle(1)
```
```
#|export
import comet_ml
#|export
class CometCallback(Callback):
"Log losses, metrics, model weights, model architecture summary to neptune"
order = Recorder.order + 1
def __init__(self, project_name, log_model_weights=True, keep_experiment_running=False):
self.log_model_weights = log_model_weights
self.keep_experiment_running = keep_experiment_running
self.project_name = project_name
self.experiment = None
def before_fit(self):
try:
self.experiment = comet_ml.Experiment(project_name=self.project_name)
except ValueError:
print("No active experiment")
try:
self.experiment.log_parameter("n_epoch", str(self.learn.n_epoch))
self.experiment.log_parameter("model_class", str(type(self.learn.model)))
except:
print(f"Did not log all properties.")
try:
with tempfile.NamedTemporaryFile(mode="w") as f:
with open(f.name, "w") as g:
g.write(repr(self.learn.model))
self.experiment.log_asset(f.name, "model_summary.txt")
except:
print("Did not log model summary. Check if your model is PyTorch model.")
if self.log_model_weights and not hasattr(self.learn, "save_model"):
print(
"Unable to log model to Comet.\n",
)
def after_batch(self):
# log loss and opt.hypers
if self.learn.training:
self.experiment.log_metric("batch__smooth_loss", self.learn.smooth_loss)
self.experiment.log_metric("batch__loss", self.learn.loss)
self.experiment.log_metric("batch__train_iter", self.learn.train_iter)
for i, h in enumerate(self.learn.opt.hypers):
for k, v in h.items():
self.experiment.log_metric(f"batch__opt.hypers.{k}", v)
def after_epoch(self):
# log metrics
for n, v in zip(self.learn.recorder.metric_names, self.learn.recorder.log):
if n not in ["epoch", "time"]:
self.experiment.log_metric(f"epoch__{n}", v)
if n == "time":
self.experiment.log_text(f"epoch__{n}", str(v))
# log model weights
if self.log_model_weights and hasattr(self.learn, "save_model"):
if self.learn.save_model.every_epoch:
_file = join_path_file(
f"{self.learn.save_model.fname}_{self.learn.save_model.epoch}",
self.learn.path / self.learn.model_dir,
ext=".pth",
)
else:
_file = join_path_file(
self.learn.save_model.fname,
self.learn.path / self.learn.model_dir,
ext=".pth",
)
self.experiment.log_asset(_file)
def after_fit(self):
try:
self.experiment.end()
except:
print("No neptune experiment to stop.")
```
### This notebook is used to perform a grid search on the asia dataset
```
%load_ext autoreload
%autoreload 2
import numpy as np
import pandas as pd
from sdgym import benchmark
from sdgym import load_dataset
from xgboost import XGBClassifier
from sklearn.neural_network import MLPClassifier
from synthsonic.models.kde_copula_nn_pdf import KDECopulaNNPdf
from synthsonic.models.categorical_utils import categorical_round, vec_translate, categorical_frequency_mapping, \
categorical_frequency_inverse_mapping, encode_one_hot, decode_one_hot
from pandas_profiling import ProfileReport
%matplotlib inline
```
### EDA
```
df, categorical_columns, ordinal_columns = load_dataset('asia')
explore_df = pd.DataFrame(df)
profile = ProfileReport(explore_df, title="EDA for asia dataset")
profile
```
### Observations:
* All 8 features in this dataset are categorical, so it's worth trying all the categorical encoding strategies
* Consider categorical as ordinal
* One hot encode categorical features
* Frequency mapping (a generic sketch of this idea is shown below)
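To make the last idea concrete, here is a generic illustration of frequency mapping (this is not the `synthsonic.models.categorical_utils` implementation, only a sketch of the concept: each category is replaced by its rank when sorted by frequency):
```
# pandas is already imported above as pd
col = pd.Series(['a', 'b', 'a', 'c', 'a', 'b'])
# rank the categories by how often they occur (most frequent first)
ranks = {cat: rank for rank, cat in enumerate(col.value_counts().index)}
encoded = col.map(ranks)
print(ranks)             # e.g. {'a': 0, 'b': 1, 'c': 2}
print(encoded.tolist())  # the column with categories replaced by their frequency rank
```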
### MLP classifier
```
def KDECopulaNNPdf_RoundCategorical(real_data, categorical_columns, ordinal_columns):
# Max's kde copula model with default parameters
all_features = list(range(real_data.shape[1]))
numerical_features = list(set(all_features) - set(categorical_columns + ordinal_columns))
data = np.float64(real_data)
n_samples = data.shape[0]
n_features = data.shape[1]
#print(data.shape)
kde = KDECopulaNNPdf(use_KDE=False, clf=MLPClassifier(random_state=0, max_iter=500, early_stopping=True))
kde = kde.fit(data)
X_gen, sample_weight = kde.sample(n_samples)
X_gen[:, categorical_columns+ordinal_columns] = np.round(X_gen[:, categorical_columns+ordinal_columns])
X_gen = np.float32(X_gen)
return X_gen
def KDECopulaNNPdf_woKDE_OneHotEncoded(real_data, categorical_columns, ordinal_columns):
all_features = list(range(real_data.shape[1]))
numerical_features = list(set(all_features) - set(categorical_columns+ordinal_columns))
## One hot encode the categorical features
unique_values, ohe = encode_one_hot(real_data, categorical_columns)
categorical_np = np.array(ohe)
n_samples = real_data.shape[0]
n_features = real_data.shape[1]
## Append the categorical one hot encoded data to numerical and ordinal
data = np.float64(np.hstack((real_data[:, numerical_features+ordinal_columns], categorical_np)))
kde = KDECopulaNNPdf(use_KDE=False, clf=MLPClassifier(random_state=0, max_iter=500, early_stopping=True))
kde = kde.fit(data)
X_gen, sample_weight = kde.sample(n_samples)
X_gen = np.float32(X_gen)
X_final = decode_one_hot(X_gen, categorical_columns, unique_values, n_features)
X_final[:, numerical_features+ordinal_columns] = X_gen[:, numerical_features+ordinal_columns]
print(X_final.shape)
return X_final
def KDECopulaNNPdf_woKDE_FreqMapping(real_data, categorical_columns, ordinal_columns):
all_features = list(range(real_data.shape[1]))
numerical_features = list(set(all_features) - set(categorical_columns+ordinal_columns))
data = np.float64(real_data)
n_samples = data.shape[0]
n_features = data.shape[1]
data, inv_mappings = categorical_frequency_mapping(data, categorical_columns)
kde = KDECopulaNNPdf(use_KDE=False, clf=MLPClassifier(random_state=0, max_iter=500, early_stopping=True))
kde = kde.fit(data)
X_gen, sample_weight = kde.sample(n_samples)
X_gen[:, categorical_columns] = np.round(X_gen[:, categorical_columns])
X_final = categorical_frequency_inverse_mapping(X_gen, categorical_columns, inv_mappings)
return X_final
asia_scores_mlp = benchmark(synthesizers=[KDECopulaNNPdf_RoundCategorical,
KDECopulaNNPdf_woKDE_OneHotEncoded,
KDECopulaNNPdf_woKDE_FreqMapping], datasets=['asia'])
asia_scores_mlp
def KDECopulaNNPdf_RoundCategorical(real_data, categorical_columns, ordinal_columns):
# Max's kde copula model with default parameters
all_features = list(range(real_data.shape[1]))
numerical_features = list(set(all_features) - set(categorical_columns + ordinal_columns))
data = np.float64(real_data)
n_samples = data.shape[0]
n_features = data.shape[1]
kde = KDECopulaNNPdf(use_KDE=False, clf=XGBClassifier(random_state=42, max_depth=6, alpha=0.2, subsample=0.5))
kde = kde.fit(data)
X_gen, sample_weight = kde.sample(n_samples)
X_gen[:, categorical_columns+ordinal_columns] = np.round(X_gen[:, categorical_columns+ordinal_columns])
X_gen = np.float32(X_gen)
return X_gen
def KDECopulaNNPdf_woKDE_OneHotEncoded(real_data, categorical_columns, ordinal_columns):
all_features = list(range(real_data.shape[1]))
numerical_features = list(set(all_features) - set(categorical_columns+ordinal_columns))
## One hot encode the categorical features
unique_values, ohe = encode_one_hot(real_data, categorical_columns)
categorical_np = np.array(ohe)
n_samples = real_data.shape[0]
n_features = real_data.shape[1]
## Append the categorical one hot encoded data to numerical and ordinal
data = np.float64(np.hstack((real_data[:, numerical_features+ordinal_columns], categorical_np)))
kde = KDECopulaNNPdf(use_KDE=False, clf=XGBClassifier(random_state=42, max_depth=6, alpha=0.2, subsample=0.5))
kde = kde.fit(data)
X_gen, sample_weight = kde.sample(n_samples)
X_gen = np.float32(X_gen)
X_final = decode_one_hot(X_gen, categorical_columns, unique_values, n_features)
X_final[:, numerical_features+ordinal_columns] = X_gen[:, numerical_features+ordinal_columns]
print(X_final.shape)
return X_final
def KDECopulaNNPdf_woKDE_FreqMapping(real_data, categorical_columns, ordinal_columns):
all_features = list(range(real_data.shape[1]))
numerical_features = list(set(all_features) - set(categorical_columns+ordinal_columns))
data = np.float64(real_data)
n_samples = data.shape[0]
n_features = data.shape[1]
data, inv_mappings = categorical_frequency_mapping(data, categorical_columns)
kde = KDECopulaNNPdf(use_KDE=False, clf=XGBClassifier(random_state=42, max_depth=6, alpha=0.2, subsample=0.5))
kde = kde.fit(data)
X_gen, sample_weight = kde.sample(n_samples)
X_gen[:, categorical_columns] = np.round(X_gen[:, categorical_columns])
X_final = categorical_frequency_inverse_mapping(X_gen, categorical_columns, inv_mappings)
return X_final
asia_scores_xgboost = benchmark(synthesizers=[KDECopulaNNPdf_RoundCategorical,
KDECopulaNNPdf_woKDE_OneHotEncoded,
KDECopulaNNPdf_woKDE_FreqMapping], datasets=['asia'])
asia_scores_xgboost
asia_scores_mlp['Classifier'] = 'MLP'
asia_scores_xgboost['Classifier'] = 'XGBoost'
# use positional assignment to avoid chained-indexing (SettingWithCopy) issues
asia_scores_mlp.iloc[0:9, asia_scores_mlp.columns.get_loc('Classifier')] = 'N/A'
asia_scores = asia_scores_mlp.reset_index().append(asia_scores_xgboost.reset_index().iloc[-3:], ignore_index=True)
asia_scores
```
### Grid search
```
data = np.float64(df)
kde = KDECopulaNNPdf(use_KDE=False, clf=MLPClassifier())
kde.get_params().keys()
# then for the grid search do this, where all classifier options now have a prefix clf__:
from sklearn.model_selection import GridSearchCV
parameters = {
'clf__hidden_layer_sizes': [(10,),(20,),(50,),(100,)],
'clf__activation': ['tanh', 'relu'],
'clf__solver': ['sgd', 'adam'],
'clf__alpha': [0.0001, 0.05],
'clf__learning_rate': ['constant','adaptive'],
}
grid = GridSearchCV(KDECopulaNNPdf(use_KDE=False), parameters, cv=5)
grid.fit(data)
print(grid.best_params_)
def KDECopulaNNPdf_RoundCategorical(real_data, categorical_columns, ordinal_columns):
# Max's kde copula model with default parameters
all_features = list(range(real_data.shape[1]))
numerical_features = list(set(all_features) - set(categorical_columns + ordinal_columns))
data = np.float64(real_data)
n_samples = data.shape[0]
n_features = data.shape[1]
#print(data.shape)
kde = KDECopulaNNPdf(clf=MLPClassifier(hidden_layer_sizes=(100,), alpha=0.05,
max_iter=500, early_stopping=True, random_state=1), use_KDE=False)
kde = kde.fit(data)
X_gen, sample_weight = kde.sample(n_samples)
X_gen[:, categorical_columns+ordinal_columns] = np.round(X_gen[:, categorical_columns+ordinal_columns])
X_gen = np.float32(X_gen)
return X_gen
asia_scores = benchmark(synthesizers=[KDECopulaNNPdf_RoundCategorical], datasets=['asia'])
asia_scores
asia_scores.sort_values('asia/test_likelihood')
```
* With use_KDE=False, modifying the classification model or tuning the hyper-parameters doesn't make a difference.
# Building the Best AND Gate
Let's import everything:
```
from qiskit import *
from qiskit.tools.visualization import plot_histogram
%config InlineBackend.figure_format = 'svg' # Makes the images look nice
from qiskit.providers.aer import noise
import numpy as np
```
In Problem Set 1, you made an AND gate with quantum gates. This time you'll do the same again, but for a real device. Using real devices gives you two major constraints to deal with. One is the connectivity, and the other is noise.
The connectivity tells you which `cx` gates it is possible to perform directly. For example, the device `ibmq_5_tenerife` has five qubits numbered from 0 to 4. It has a connectivity defined by
```
coupling_map = [[1, 0], [2, 0], [2, 1], [3, 2], [3, 4], [4, 2]]
```
Here the `[1,0]` tells us that we can implement a `cx` with qubit 1 as control and qubit 0 as target, the `[2,0]` tells us we can have qubit 2 as control and 0 as target, and so on. These are the `cx` gates that the device can implement directly.
The 'noise' of a device is the collective effects of all the things that shouldn't happen, but nevertheless do happen. Noise results in the output not always having the result we expect. There is noise associated with all processes in a quantum circuit: preparing the initial states, applying gates and measuring the output. For the gates, noise levels can vary between different gates and between different qubits. The `cx` gates are typically more noisy than any single qubit gate.
We can also simulate noise using a noise model. And we can set the noise model based on measurements of the noise for a real device. The following noise model is based on `ibmq_5_tenerife`.
```
noise_dict = {'errors': [{'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0004721766167523067, 0.0004721766167523067, 0.0004721766167523067, 0.9985834701497431], 'gate_qubits': [[0]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0005151090708174488, 0.0005151090708174488, 0.0005151090708174488, 0.9984546727875476], 'gate_qubits': [[1]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0005151090708174488, 0.0005151090708174488, 0.0005151090708174488, 0.9984546727875476], 'gate_qubits': [[2]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.000901556048412383, 0.000901556048412383, 0.000901556048412383, 0.9972953318547628], 'gate_qubits': [[3]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0011592423249461303, 0.0011592423249461303, 0.0011592423249461303, 0.9965222730251616], 'gate_qubits': [[4]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0009443532335046134, 0.0009443532335046134, 0.0009443532335046134, 0.9971669402994862], 'gate_qubits': [[0]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0010302181416348977, 0.0010302181416348977, 0.0010302181416348977, 0.9969093455750953], 'gate_qubits': [[1]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0010302181416348977, 0.0010302181416348977, 0.0010302181416348977, 0.9969093455750953], 'gate_qubits': [[2]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.001803112096824766, 0.001803112096824766, 0.001803112096824766, 0.9945906637095256], 'gate_qubits': [[3]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0023184846498922607, 0.0023184846498922607, 0.0023184846498922607, 0.9930445460503232], 'gate_qubits': [[4]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, 
{'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.9672573379090872], 'gate_qubits': [[1, 0]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.9699888805021712], 'gate_qubits': [[2, 0]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.9627184072576159], 'gate_qubits': [[2, 1]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 
'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.9437457618579164], 'gate_qubits': [[3, 2]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.9339816349935997], 'gate_qubits': [[3, 4]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.9307167621063416], 
'gate_qubits': [[4, 2]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9372499999999999, 0.06275000000000008], [0.06275000000000008, 0.9372499999999999]], 'gate_qubits': [[0]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9345, 0.0655], [0.0655, 0.9345]], 'gate_qubits': [[1]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.97075, 0.029249999999999998], [0.029249999999999998, 0.97075]], 'gate_qubits': [[2]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9742500000000001, 0.02574999999999994], [0.02574999999999994, 0.9742500000000001]], 'gate_qubits': [[3]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.8747499999999999, 0.12525000000000008], [0.12525000000000008, 0.8747499999999999]], 'gate_qubits': [[4]]}], 'x90_gates': []}
noise_model = noise.noise_model.NoiseModel.from_dict( noise_dict )
```
Running directly on the device requires you to have an IBMQ account, and for you to sign in to it within your program. In order to not worry about all this, we'll instead use a simulation of the 5 qubit device defined by the constraints set above.
```
qr = QuantumRegister(5, 'qr')
cr = ClassicalRegister(1, 'cr')
backend = Aer.get_backend('qasm_simulator')
```
We now define the `AND` function. This has a few differences to the version in Exercise 1. Firstly, it is defined on a 5 qubit circuit, so you'll need to decide which of the 5 qubits are used to encode `input1`, `input2` and the output. Secondly, the output is a histogram of the number of times that each output is found when the process is repeated over 10000 samples.
```
def AND (input1,input2, q_1=0,q_2=1,q_out=2):
# The keyword q_1 specifies the qubit used to encode input1
# The keyword q_2 specifies qubit used to encode input2
# The keyword q_out specifies the qubit to be used as output
qc = QuantumCircuit(qr, cr)
# prepare input on qubits q1 and q2
if input1=='1':
qc.x( qr[ q_1 ] )
if input2=='1':
qc.x( qr[ q_2 ] )
qc.ccx(qr[ q_1 ],qr[ q_2 ],qr[ q_out ]) # the AND just needs a ccx
qc.measure(qr[ q_out ],cr[0]) # output from qubit q_out is measured
# the circuit is run on a simulator, but we do it so that the noise and connectivity of Tenerife are also reproduced
job = execute(qc, backend, shots=10000, noise_model=noise_model,
coupling_map=coupling_map,
basis_gates=noise_model.basis_gates)
output = job.result().get_counts()
return output
```
For example, here are the results when both inputs are `0`.
```
result = AND('0','0')
print( result )
plot_histogram( result )
```
We'll compare across all results to find the most unreliable.
```
worst = 1
for input1 in ['0','1']:
for input2 in ['0','1']:
print('\nProbability of correct answer for inputs',input1,input2)
prob = AND(input1,input2, q_1=0,q_2=1,q_out=2)[str(int( input1=='1' and input2=='1' ))]/10000
print( prob )
worst = min(worst,prob)
print('\nThe lowest of these probabilities was',worst)
```
The `AND` function above uses the `ccx` gate to implement the required operation. But you now know how to make your own. Find a way to implement an `AND` for which the lowest of the above probabilities is better than for a simple `ccx`.
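As a starting point (a hedged sketch only, not the intended answer), recall that a `ccx` can be written out by hand from `cx` and single-qubit gates; doing so lets you decide which physical qubits play each role, so that the `cx` gates line up with the coupling map and the least noisy qubits. One standard decomposition is:
```
def my_ccx(qc, a, b, t):
    # Standard 6-CNOT Toffoli decomposition using H, T and Tdg gates;
    # a and b are the controls, t is the target (QuantumRegister elements).
    qc.h(t)
    qc.cx(b, t); qc.tdg(t)
    qc.cx(a, t); qc.t(t)
    qc.cx(b, t); qc.tdg(t)
    qc.cx(a, t); qc.t(t)
    qc.h(t)
    qc.t(b)
    qc.cx(a, b); qc.t(a); qc.tdg(b)
    qc.cx(a, b)
```
You could then call `my_ccx(qc, qr[q_1], qr[q_2], qr[q_out])` in place of `qc.ccx(...)`, keeping in mind that the `cx` between the two controls must also be compatible with the coupling map (or it will be routed by the transpiler at extra cost).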
```
import qiskit
qiskit.__qiskit_version__
```
<img style="float: center;" src="../images/CI_horizontal.png" width="600">
<center>
<span style="font-size: 1.5em;">
<a href='https://www.coleridgeinitiative.org'>Website</a>
</span>
</center>
Ghani, Rayid, Frauke Kreuter, Julia Lane, Adrianne Bradford, Alex Engler, Nicolas Guetta Jeanrenaud, Graham Henke, Daniela Hochfellner, Clayton Hunter, Brian Kim, Avishek Kumar, and Jonathan Morgan.
# Data Preparation for Machine Learning - Creating Labels
----
## Python Setup
- Back to [Table of Contents](#Table-of-Contents)
Before we begin, run the code cell below to initialize the libraries we'll be using in this assignment. We're already familiar with `numpy`, `pandas`, and `psycopg2` from previous tutorials. Here we'll also be using [`scikit-learn`](http://scikit-learn.org) to fit models.
```
%pylab inline
import pandas as pd
import psycopg2
from sqlalchemy import create_engine
db_name = "appliedda"
hostname = "10.10.2.10"
```
## Creating Labels
Labels are the dependent variables, or *Y* variables, that we are trying to predict. In the machine learning framework, your labels are usually *binary*: true or false, encoded as 1 or 0.
In this case, our label is __whether an existing single unit employer in a given year disappears within a given number of years__. By convention, we will flag employers who still exist in the following year as 0, and those who no longer exist as 1.
Single unit employers can be flagged using the `multi_unit_code` (`multi_unit_code = '1'`). We create a unique firm ID using EIN (`ein`), SEIN Unit (`seinunit`) and Employer Number (`empr_no`).
We need to pick the year and quarter of prediction, and the number of years we look forward to see if the employer still exists. Let's use Q1 of 2013 as our date of prediction. Different projects might be interested in looking at short-term or long-term survivability of employers, but for this first example, we evaluate firm survivability within one year of the prediction date.
### Detailed Creation of Labels for a Given Year
For this example, let's use 2013 (Q1) as our reference year (year of prediction).
Let's start by creating the list of unique employers in that quarter:
```
conn = psycopg2.connect(database=db_name, host=hostname)
cursor = conn.cursor()
sql = '''
CREATE TEMP TABLE eins_2013q1 AS
SELECT DISTINCT CONCAT(ein, '-', seinunit, '-', empr_no) AS id, ein, seinunit, empr_no
FROM il_des_kcmo.il_qcew_employers
WHERE multi_unit_code = '1' AND year = 2013 AND quarter = 1;
COMMIT;
'''
cursor.execute(sql)
sql = '''
SELECT *
FROM eins_2013q1
LIMIT 10
'''
pd.read_sql(sql, conn)
```
Now let's create this same table one year later.
```
sql = '''
CREATE TEMP TABLE eins_2014q1 AS
SELECT DISTINCT CONCAT(ein, '-', seinunit, '-', empr_no) AS id,
ein, seinunit, empr_no
FROM il_des_kcmo.il_qcew_employers
WHERE multi_unit_code = '1' AND year = 2014 AND quarter = 1;
COMMIT;
'''
cursor.execute(sql)
sql = '''
SELECT *
FROM eins_2014q1
LIMIT 10
'''
pd.read_sql(sql, conn)
```
In order to assess whether a 2013 employer still exists in 2014, let's merge the 2014 table onto the 2013 list of employers. Notice that we create a `label` variable that takes the value `0` if the 2013 employer still exists in 2014, `1` if the employer disappears.
```
sql = '''
CREATE TABLE IF NOT EXISTS ada_18_uchi.labels_2013q1_2014q1 AS
SELECT a.*, CASE WHEN b.ein IS NULL THEN 1 ELSE 0 END AS label
FROM eins_2013q1 AS a
LEFT JOIN eins_2014q1 AS b
ON a.id = b.id AND a.ein = b.ein AND a.seinunit = b.seinunit AND a.empr_no = b.empr_no;
COMMIT;
ALTER TABLE ada_18_uchi.labels_2013q1_2014q1 OWNER TO ada_18_uchi_admin;
COMMIT;
'''
cursor.execute(sql)
# Load the 2013 Labels into Python Pandas
sql = '''
SELECT *
FROM ada_18_uchi.labels_2013q1_2014q1
'''
df_labels_2013 = pd.read_sql(sql, conn)
df_labels_2013.head(10)
```
Given these first rows, employers who survive seem to be more common than employers who disappear. Let's get an idea of the distribution of our label variable.
```
pd.crosstab(index = df_labels_2013['label'], columns = 'count')
```
### Repeating the Label Creation Process for Another Year
Since we need one training and one test set for our machine learning analysis, let's create the same labels table for the following year.
```
conn = psycopg2.connect(database=db_name, host=hostname)
cursor = conn.cursor()
sql = '''
CREATE TEMP TABLE eins_2014q1 AS
SELECT DISTINCT CONCAT(ein, '-', seinunit, '-', empr_no) AS id, ein, seinunit, empr_no
FROM il_des_kcmo.il_qcew_employers
WHERE multi_unit_code = '1' AND year = 2014 AND quarter = 1;
COMMIT;
CREATE TEMP TABLE eins_2015q1 AS
SELECT DISTINCT CONCAT(ein, '-', seinunit, '-', empr_no) AS id, ein, seinunit, empr_no
FROM il_des_kcmo.il_qcew_employers
WHERE multi_unit_code = '1' AND year = 2015 AND quarter = 1;
COMMIT;
CREATE TABLE IF NOT EXISTS ada_18_uchi.labels_2014q1_2015q1 AS
SELECT a.*, CASE WHEN b.ein IS NULL THEN 1 ELSE 0 END AS label
FROM eins_2014q1 AS a
LEFT JOIN eins_2015q1 AS b
ON a.id = b.id AND a.ein = b.ein AND a.seinunit = b.seinunit AND a.empr_no = b.empr_no;
COMMIT;
ALTER TABLE ada_18_uchi.labels_2014q1_2015q1 OWNER TO ada_18_uchi_admin;
COMMIT;
'''
cursor.execute(sql)
# Load the 2014 Labels into Python Pandas
sql = '''
SELECT *
FROM ada_18_uchi.labels_2014q1_2015q1
'''
df_labels_2014 = pd.read_sql(sql, conn)
df_labels_2014.head()
```
Let's get an idea of the distribution of our label variable.
```
pd.crosstab(index = df_labels_2014['label'], columns = 'count')
```
### Writing a Function to Create Labels
If you feel comfortable with the content we saw above, and expect to be creating labels for several different years as part of your project, the following code defines a Python function that generates the label table for any given year and quarter.
In the above, the whole SQL query was hard coded. In the below, we made a function with parameters for your choice of year and quarter, your choice of prediction horizon, your team's schema, etc. The complete list of parameters is given in parentheses after the `def generate_labels` statement. Some parameters are given a default value (like `delta_t=1`), others (like `year` and `qtr`) are not. More information on the different parameters is given below:
- `year`: The year at which we are doing the prediction.
- `qtr`: The quarter at which we are doing the prediction.
- `delta_t`: The forward-looking window, or number of years over which we are predicting employer survival or failure. The default value is 1, which means we are prediction at a given time whether an employer will still exist one year later.
- `schema`: Your team schema, where the label table will be written. The default value is set to `myschema`, which you define in the cell above the function.
- `db_name`: Database name. This is the name of the SQL database we are using. The default value is set to `db_name`, defined in the [Python Setup](#Python-Setup) section of this notebook.
- `hostname`: Host name. This is the host name for the SQL database we are using. The default value is set to `hostname`, defined in the [Python Setup](#Python-Setup) section of this notebook.
- `overwrite`: Whether you want the function to overwrite tables that already exist. Before writing a table, the function will check whether this table exists, and by default will not overwrite existing tables.
```
# Insert team schema name below:
myschema = 'ada_18_uchi'
def generate_labels(year, qtr, delta_t=1, schema=myschema, db_name=db_name, hostname=hostname, overwrite=False):
conn = psycopg2.connect(database=db_name, host = hostname) #database connection
cursor = conn.cursor()
sql = """
CREATE TEMP TABLE eins_{year}q{qtr} AS
SELECT DISTINCT CONCAT(ein, '-', seinunit, '-', empr_no) AS id, ein, seinunit, empr_no
FROM il_des_kcmo.il_qcew_employers
WHERE multi_unit_code = '1' AND year = {year} AND quarter = {qtr};
COMMIT;
CREATE TEMP TABLE eins_{year_pdelta}q{qtr} AS
SELECT DISTINCT CONCAT(ein, '-', seinunit, '-', empr_no) AS id, ein, seinunit, empr_no
FROM il_des_kcmo.il_qcew_employers
WHERE multi_unit_code = '1' AND year = {year_pdelta} AND quarter = {qtr};
COMMIT;
DROP TABLE IF EXISTS {schema}.labels_{year}q{qtr}_{year_pdelta}q{qtr};
CREATE TABLE {schema}.labels_{year}q{qtr}_{year_pdelta}q{qtr} AS
SELECT a.*, CASE WHEN b.ein IS NULL THEN 1 ELSE 0 END AS label
FROM eins_{year}q{qtr} AS a
LEFT JOIN eins_{year_pdelta}q{qtr} AS b
ON a.id = b.id AND a.ein = b.ein AND a.seinunit = b.seinunit AND a.empr_no = b.empr_no;
COMMIT;
ALTER TABLE {schema}.labels_{year}q{qtr}_{year_pdelta}q{qtr} OWNER TO {schema}_admin;
COMMIT;
""".format(year=year, year_pdelta=year+delta_t, qtr=qtr, schema=schema)
# Let's check if the table already exists:
# This query will return an empty table (with no rows) if the table does not exist
cursor.execute('''
SELECT * FROM information_schema.tables
WHERE table_name = 'labels_{year}q{qtr}_{year_pdelta}q{qtr}'
AND table_schema = '{schema}';
'''.format(year=year, year_pdelta=year+delta_t, qtr=qtr, schema=schema))
# Let's write table if it does not exist (or if overwrite = True)
if not(cursor.rowcount) or overwrite:
print("Creating table")
cursor.execute(sql)
else:
print("Table already exists")
cursor.close()
# Load table into pandas dataframe
sql = '''
SELECT * FROM {schema}.labels_{year}q{qtr}_{year_pdelta}q{qtr}
'''.format(year=year, year_pdelta=year+delta_t, qtr=qtr, schema=schema)
df = pd.read_sql(sql, conn)
return df
```
Let's run the defined function for a few different years:
```
# For 2012 Q1
df_labels_2012 = generate_labels(year=2012, qtr=1)
pd.crosstab(index = df_labels_2012['label'], columns = 'count')
# For 2012 Q1 with a 3 year forward looking window
df_labels_2012 = generate_labels(year=2012, qtr=1, delta_t=3)
pd.crosstab(index = df_labels_2012['label'], columns = 'count')
```
Why is the number of 1's higher in the second case?
```
df_labels_2015 = generate_labels(year=2015, qtr=1)
pd.crosstab(index = df_labels_2015['label'], columns = 'count')
```
Notice the surprising results in 2015. What is the underlying data problem?
```
%config IPCompleter.greedy = True
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
%load_ext tensorboard
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sn
import tensorflow as tf
from datetime import datetime
pd.set_option('mode.chained_assignment', None)
sn.set(rc={'figure.figsize':(9,9)})
sn.set(font_scale=1.4)
# make results reproducible
seed = 0
np.random.seed(seed)
!pip install pydot
!rm -rf ./logs/
```
# TensorFlow Dataset
TensorFlow's [dataset](https://www.tensorflow.org/guide/data) object `tf.data.Dataset` allows us to write descriptive and efficient dataset input pipelines. It allows the following pattern:
1. Create a source dataset from the input data
2. Apply transformations to preprocess the data
3. Iterate over the dataset and process all the elements
The iteration happens in a streaming fashion, which works well with large datasets that don't have to fit completely into memory.
A `tf.data.Dataset` can consume any Python iterable or nested data structure; however, we often use the format that **Keras** expects: `(feature, label)` or `(X, y)` pairs are all that's needed for `tf.keras.Model.fit` and `tf.keras.Model.evaluate`.
Here is an example loading the digits dataset into a `tf.data.Dataset` object using `tf.data.Dataset.from_tensors()`:
```
# Load the digits dataset that we have been using
from sklearn import datasets
from sklearn.model_selection import train_test_split
from tensorflow import keras
digits = datasets.load_digits()
(X, y) = datasets.load_digits(return_X_y=True)
X = X.astype(np.float32)
y = y.astype(np.int32)
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42)
digits_train_ds = tf.data.Dataset.from_tensors((X_train, y_train))
print(list(digits_train_ds))
print('\n', digits_train_ds)
# Lets create a simple Dense Sequential NN and train it to illustrate passing the dataset object
model = keras.Sequential([
keras.layers.Dense(64, activation='relu', input_dim=64),
keras.layers.Dense(64, activation='relu'),
keras.layers.Dense(10)
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer='adam',
loss=loss_fn,
metrics=['accuracy'])
history = model.fit(digits_train_ds, epochs=100, verbose=0)
dir(history)
print('Training accuracy : {:.3%}'.format(history.history['accuracy'][-1]))
```
We can also construct a `Dataset` using `tf.data.Dataset.from_tensor_slices()` or if the input data is stored in a file in the recommended TFRecord file format, we can use the `tf.data.TFRecordDataset()`.
```
digits_train_ds = tf.data.Dataset.from_tensor_slices((X_train, y_train))
```
We can easily transform our `Dataset` object, setting up our data processing pipeline, by chaining method calls on the object since it returns a new `Dataset` object type. As an example we can apply per-element transformations such as `Dataset.map()`, and multi-element transformations such as `Dataset.batch()`. More transforms can be seen [here](https://www.tensorflow.org/api_docs/python/tf/data/Dataset).
The dataset object is also iterable, so we can consume it in a for loop, e.g.
```
for i, elm in enumerate(digits_train_ds):
if i <= 1:
print(elm)
```
We can also build a dataset that doesn't have to exist in memory all at once, by consuming a generator:
```
def random_numbers(stop):
i = 0
while i < stop:
yield np.random.randint(0, 10)
i += 1
print('Testing the generator\n')
for i in random_numbers(7):
print(i)
print('\n\nCreating a Dataset by consuming the generator\n')
ds_random = tf.data.Dataset.from_generator(random_numbers, args=[10], output_types=tf.int32, output_shapes = (), )
for element in ds_random:
print(element)
```
We can also ingest datasets from the following formats with the following [functions](https://www.tensorflow.org/api_docs/python/tf/data):
|Data format|Function|
|-----------|--------|
|`TFRecord`|`tf.data.TFRecordDataset(file_paths)`|
|`Text file`|`tf.data.TextLineDataset(file_paths)`|
|`CSV`|`tf.data.experimental.CsvDataset(file_path)`|
Once we have our dataset, we can process it before using it for training.
#### Batching the dataset
We can turn our `Dataset` into a batched `Dataset`, i.e. stacking $n$ consecutive elements of a dataset into a single element, performed with `Dataset.batch(n)`
```
print('Before batching\n[')
for i in ds_random:
print(i)
print(']')
print('\nAfter batching\n[')
for i in ds_random.batch(3):
print(i)
print(']')
```
#### Repeating the dataset
We can repeat the dataset so that each original value is seen $n$ times
```
dataset = tf.data.Dataset.from_tensor_slices([0, 1, 2])
dataset = dataset.repeat(3)
list(dataset.as_numpy_iterator())
```
#### Randomly shuffling the input data
Randomly shuffle the elements of the dataset. This has a `buffer_size` parameter: the dataset fills a buffer with `buffer_size` elements, then randomly samples elements from this buffer, replacing the selected elements with new ones. Therefore, for perfect shuffling, we need to specify a `buffer_size` greater than or equal to the full size of the dataset.
```
dataset = tf.data.Dataset.from_tensor_slices([0, 1, 2])
dataset = dataset.shuffle(3)
list(dataset.as_numpy_iterator())
```
#### Custom dataset operations
We can easily process the dataset with our own element wise function `f` that we define ourselves. And then call `Dataset.map(f)` to apply the transformation and return a new `Dataset`.
```
def f(x):
return x * 2
dataset = tf.data.Dataset.from_tensor_slices([0, 1, 2])
dataset = dataset.map(f)
list(dataset.as_numpy_iterator())
```
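Putting these pieces together, a typical input pipeline chains several of these transformations. A minimal sketch using the digits arrays loaded earlier (batch size and buffer size are arbitrary choices):
```
train_ds = (tf.data.Dataset.from_tensor_slices((X_train, y_train))
            .shuffle(buffer_size=len(X_train))  # buffer covers the whole dataset, i.e. a full shuffle
            .batch(32)                          # mini-batches of 32 samples
            .prefetch(tf.data.experimental.AUTOTUNE))  # overlap data preparation with training
# inspect one batch; this dataset could be passed directly to model.fit(train_ds, ...)
for features, labels in train_ds.take(1):
    print(features.shape, labels.shape)
```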
# Custom models in Keras
So far we have only used the `tf.keras.Sequential` model, which is a simple stack of layers. However, this cannot represent arbitrary models. We can use **Keras**'s *functional API* to build complex models (usually a directed acyclic graph of layers), which can have multiple inputs, multiple outputs, shared layers (the same layer is called multiple times) and non-sequential data flows (e.g. residual connections).
This is possible with the TensorFlow integration because each layer instance is callable on a tensor and returns a tensor, so we can connect layers up however we want. We use the input tensors and output tensors to define a `tf.keras.Model` instance, which allows us to train it and use all the **Keras** model functionality we have seen so far.
We can create a fully-connected network using the functional API, e.g.
```
# Returns an input placeholder
inputs = tf.keras.Input(shape=(64,))
# A layer instance is callable on a tensor, and returns a tensor.
x = keras.layers.Dense(64, activation='relu')(inputs)
x = keras.layers.Dense(64, activation='relu')(x)
predictions = keras.layers.Dense(10)(x)
# Instantiate the model for the defined input and output tensors
model = tf.keras.Model(inputs=inputs, outputs=predictions, name='FirstCustomModel')
```
Once we have defined our model, we checkout what the model summary looks like by using `tf.keras.Model.summary()`
```
# For a dense layer each MLP unit in that layer is connected to each input layer unit
# plus one parameter per unit for the bias
print('Parameters for a dense layer = {}\n\n'.format(64*64 + 64))
model.summary()
```
We can also plot the model as graph natively
```
keras.utils.plot_model(model)
```
We can also show the input and output shapes for each layer in the graph
```
keras.utils.plot_model(model, show_shapes=True)
```
Once we have our model, we can use it like any other **Keras** model that we have seen, i.e. being able to train, evaluate and save the model simply.
```
# Specify the training configuration.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
# Trains for 5 epochs
history = model.fit(X_train, y_train, batch_size=32, epochs=5)
test_scores = model.evaluate(X_test, y_test, verbose=2)
print('Test loss: {:.4}'.format(test_scores[0]))
print('Test accuracy: {:.3%}'.format(test_scores[1]))
```
### Defining multiple models from the same graph of layers
Since the `tf.keras.Model` is really just a convenience object that encapsulates a connected set of layers, we can form multiple models, or connected sets of layers (sub-graphs) from one defined graph of layers (or computation graph).
To illustrate, let us create an *auto-encoder*, which maps an input to a low-dimensional representation with a neural network and then maps that same low-dimensional representation back to the input dimensions, i.e. it learns an efficient low-dimensional representation (data encoding) of our samples in an unsupervised manner.
Here we can create one large model to encapsulate the entire graph, called the *auto-encoder*, however we may wish to create sub models such as the *encoder* model to map the input sample to the low-dimensional representation and the *decoder* model to map the low-dimensional representation back to the input sample dimensions.
Lets illustrate with an example
```
# Create one auto-encoder graph
encoder_input = keras.Input(shape=(28, 28, 1), name='img')
x = keras.layers.Conv2D(16, 3, activation='relu')(encoder_input)
x = keras.layers.Conv2D(32, 3, activation='relu')(x)
x = keras.layers.MaxPooling2D(3)(x)
x = keras.layers.Conv2D(32, 3, activation='relu')(x)
x = keras.layers.Conv2D(16, 3, activation='relu')(x)
encoder_output = keras.layers.GlobalMaxPooling2D()(x)
x = keras.layers.Reshape((4, 4, 1))(encoder_output)
x = keras.layers.Conv2DTranspose(16, 3, activation='relu')(x)
x = keras.layers.Conv2DTranspose(32, 3, activation='relu')(x)
x = keras.layers.UpSampling2D(3)(x)
x = keras.layers.Conv2DTranspose(16, 3, activation='relu')(x)
decoder_output = keras.layers.Conv2DTranspose(1, 3, activation='relu')(x)
autoencoder = keras.Model(encoder_input, decoder_output, name='autoencoder')
encoder = keras.Model(encoder_input, encoder_output, name='encoder')
print('Auto-encoder')
keras.utils.plot_model(autoencoder, show_shapes=True)
print('Encoder')
keras.utils.plot_model(encoder, show_shapes=True)
```
Due to the *auto-encoder* nature the architecture is symmetrical, since the reverse of a `Conv2D` layer is a `Conv2DTranspose` layer, and the reverse of a `MaxPooling2D` layer is an `UpSampling2D` layer.
We can also compose multiple models, as we can assume a model behaves like a layer, i.e. we can create the same *auto-encoder* architecture by composing the encoder and decoder model together, i.e.
```
# Create encoder graph
x = keras.layers.Conv2D(16, 3, activation='relu')(encoder_input)
x = keras.layers.Conv2D(32, 3, activation='relu')(x)
x = keras.layers.MaxPooling2D(3)(x)
x = keras.layers.Conv2D(32, 3, activation='relu')(x)
x = keras.layers.Conv2D(16, 3, activation='relu')(x)
encoder_output = keras.layers.GlobalMaxPooling2D()(x)
# Create decoder graph
decoder_input = keras.Input(shape=(16,), name='encoded_img')
x = keras.layers.Reshape((4, 4, 1))(decoder_input)
x = keras.layers.Conv2DTranspose(16, 3, activation='relu')(x)
x = keras.layers.Conv2DTranspose(32, 3, activation='relu')(x)
x = keras.layers.UpSampling2D(3)(x)
x = keras.layers.Conv2DTranspose(16, 3, activation='relu')(x)
decoder_output = keras.layers.Conv2DTranspose(1, 3, activation='relu')(x)
# Create models for each graph
encoder = keras.Model(encoder_input, encoder_output, name='encoder')
decoder = keras.Model(decoder_input, decoder_output, name='decoder')
# Connect the two models together
autoencoder_input = keras.Input(shape=(28, 28, 1), name='img')
encoded_img = encoder(autoencoder_input)
decoded_img = decoder(encoded_img)
# Create the auto-encoder model that composes the two encoder and decoder models
autoencoder = keras.Model(autoencoder_input, decoded_img, name='autoencoder')
autoencoder.summary()
```
A common case where we can use model nesting is to create an *ensemble* of models, such as the example below, which combines multiple models and averages their predictions.
```
def get_model():
inputs = keras.Input(shape=(128,))
outputs = keras.layers.Dense(1)(inputs)
return keras.Model(inputs, outputs)
model1 = get_model()
model2 = get_model()
model3 = get_model()
inputs = keras.Input(shape=(128,))
y1 = model1(inputs)
y2 = model2(inputs)
y3 = model3(inputs)
outputs = keras.layers.average([y1, y2, y3])
ensemble_model = keras.Model(inputs=inputs, outputs=outputs)
```
## Multi-Output & Multi-Input models
We may want to create a model that takes multiple inputs and/or produces multiple outputs.
For example, we may want a model that ranks customer emails for a business by priority and routes them to the correct group mailing list for resolution.
This model could have three inputs:
* email subject as text input
* email body as text input
* any optional categorical tags (that the company already has about this email address)
And two outputs:
* priority score between 0 and 1 (scalar sigmoid output)
* the group mailing list email that should resolve the inbound email (a softmax output over the set of departments)
```
amount_tags = 12 # Number of unique tags
amount_words = 10000 # Size of vocabulary obtained when preprocessing text data
amount_mailing_lists = 4 # Number of mailing lists for predictions
# Variable-length sequence of ints
subject_input = keras.Input(shape=(None,), name='subject')
# Variable-length sequence of ints
body_input = keras.Input(shape=(None,), name='body')
# Binary vectors of size `amount_tags`
tags_input = keras.Input(shape=(amount_tags,), name='tags')
# Embed each word in the subject into a 64-dimensional vector
subject_features = keras.layers.Embedding(amount_words, 64)(subject_input)
# Embed each word in the text into a 64-dimensional vector
body_features = keras.layers.Embedding(amount_words, 64)(body_input)
# Reduce sequence of embedded words in the subject into a single 128-dimensional vector
subject_features = keras.layers.LSTM(128)(subject_features)
# Reduce sequence of embedded words in the body into a single 32-dimensional vector
body_features = keras.layers.LSTM(32)(body_features)
# Merge all available features into a single large vector via concatenation
x = keras.layers.concatenate([subject_features, body_features, tags_input])
# Apply a sigmoid (logistic regression) for priority prediction on top of the features
priority_pred = keras.layers.Dense(1, name='priority')(x)
# Apply a mailing_list classifier on top of the features
mailing_list_pred = keras.layers.Dense(
amount_mailing_lists, name='mailing_list')(x)
# Instantiate an end-to-end model predicting both priority and mailing_list
model = keras.Model(inputs=[subject_input, body_input, tags_input],
outputs=[priority_pred, mailing_list_pred])
keras.utils.plot_model(model, show_shapes=True)
```
We can assign a different loss to each output, and we can also assign a different weight to each loss to control its contribution to the total training loss when we compile the model.
```
model.compile(optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.BinaryCrossentropy(from_logits=True),
keras.losses.CategoricalCrossentropy(from_logits=True)],
loss_weights=[1., 0.2])
```
We can also specify the losses based on their names as well
```
model.compile(optimizer=keras.optimizers.RMSprop(1e-3),
loss={'priority':keras.losses.BinaryCrossentropy(from_logits=True),
'mailing_list': keras.losses.CategoricalCrossentropy(from_logits=True)},
loss_weights=[1., 0.2])
```
We can train the model by passing the data (or yielding it from a dataset object) as either of the following (both forms are shown below):
* tuple of lists, e.g. `([X_subject, X_body, X_tags], [y_priority, y_mailing_list])`
* tuple of dictionaries, e.g. `({'subject': X_subject, 'body': X_body, 'tags': X_tags}, {'priority': y_priority, 'mailing_list': y_mailing_list})`
```
# Some random input data (X)
X_subject = np.random.randint(amount_words, size=(1280, 10))
X_body = np.random.randint(amount_words, size=(1280, 100))
X_tags = np.random.randint(2, size=(1280, amount_tags)).astype('float32')
# Some random targets (y)
y_priority = np.random.random(size=(1280, 1))
y_mailing_list = np.random.randint(2, size=(1280, amount_mailing_lists))
model.fit({'subject': X_subject, 'body': X_body, 'tags': X_tags},
{'priority': y_priority, 'mailing_list': y_mailing_list},
epochs=2,
batch_size=32)
```
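For completeness, the same training call can pass the data as positional lists (the same arrays as above, in the order the inputs and outputs were defined):
```
model.fit([X_subject, X_body, X_tags],
          [y_priority, y_mailing_list],
          epochs=2,
          batch_size=32)
```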
## Non-linear networks
We can also create non-linear graphs of layers, i.e. models whose layers are not connected sequentially.
An example of a non-linear model is a *Residual Neural Network* (ResNet), which is a neural network with *skip connections* or *shortcuts* that jump over some layers. These are often implemented as double or triple layer skips that contain nonlinearities (ReLU) and batch normalization in between.
We can merge multiple branches into the same node by using the `keras.layers.add()` layer, to which we pass a list of input tensors to add together. There are also other layers for combining multiple tensors, such as the `subtract`, `average`, `concatenate`, `dot`, `maximum`, `minimum` and `multiply` layers in the `keras.layers` module. A full list can be seen [here](https://www.tensorflow.org/api_docs/python/tf/keras/layers).
To illustrate, let's create an example of a ResNet-style model:
```
inputs = keras.Input(shape=(32, 32, 3), name='img')
x = keras.layers.Conv2D(32, 3, activation='relu')(inputs)
x = keras.layers.Conv2D(64, 3, activation='relu')(x)
block_1_output = keras.layers.MaxPooling2D(3)(x)
x = keras.layers.Conv2D(64, 3, activation='relu', padding='same')(block_1_output)
x = keras.layers.Conv2D(64, 3, activation='relu', padding='same')(x)
block_2_output = keras.layers.add([x, block_1_output])
x = keras.layers.Conv2D(64, 3, activation='relu', padding='same')(block_2_output)
x = keras.layers.Conv2D(64, 3, activation='relu', padding='same')(x)
block_3_output = keras.layers.add([x, block_2_output])
x = keras.layers.Conv2D(64, 3, activation='relu')(block_3_output)
x = keras.layers.GlobalAveragePooling2D()(x)
x = keras.layers.Dense(256, activation='relu')(x)
x = keras.layers.Dropout(0.5)(x)
outputs = keras.layers.Dense(10)(x)
model = keras.Model(inputs, outputs, name='example_resnet')
model.summary()
keras.utils.plot_model(model, show_shapes=True)
```
## Share layers
We can also easily share the same layer in our model, i.e. a single layer instance is reused multiple times in the same model so that it learns a mapping that corresponds to multiple paths in the graph of layers.
A common use case for sharing a layer is creating a shared embedding (to encode inputs) when the inputs come from similar spaces.
For example
```
# Embedding for 10000 unique words mapped to 128-dimensional vectors
shared_embedding = keras.layers.Embedding(10000, 128)
# Variable-length sequence of integers
text_input_a = keras.Input(shape=(None,), dtype='int32')
# Variable-length sequence of integers
text_input_b = keras.Input(shape=(None,), dtype='int32')
# Reuse the same layer to encode both inputs
encoded_input_a = shared_embedding(text_input_a)
encoded_input_b = shared_embedding(text_input_b)
```
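The two encodings can then be combined into a single model as usual. The continuation below is only illustrative; the pooling and output layers are arbitrary choices:
```
# Pool each encoded sequence into a fixed-size vector
encoded_a = keras.layers.GlobalAveragePooling1D()(encoded_input_a)
encoded_b = keras.layers.GlobalAveragePooling1D()(encoded_input_b)
# Merge the two branches and add an output head
merged = keras.layers.concatenate([encoded_a, encoded_b])
outputs = keras.layers.Dense(1)(merged)
shared_model = keras.Model([text_input_a, text_input_b], outputs)
```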
### Extract and reuse nodes
The graph of layers is a static data structure, thus it can be directly accessed and inspected. This means that you can access the outputs from each node in the graph and reuse them elsewhere, which is useful for feature extraction and taking parts of a pre-trained model.
As an example, let's create a model that outputs all of the intermediate node outputs of a pre-trained graph, e.g. the VGG19 model with weights trained on ImageNet:
```
vgg19 = tf.keras.applications.VGG19()
# query the graph data structure
features_list = [layer.output for layer in vgg19.layers]
# Create a new model that outputs all the node values from the intermediate layers
feat_extraction_model = keras.Model(inputs=vgg19.input, outputs=features_list)
img = np.random.random((1, 224, 224, 3)).astype('float32')
extracted_features = feat_extraction_model(img)
```
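In the same way we can take a single intermediate layer by name; `'block5_pool'` is assumed here to be the standard name of VGG19's final pooling layer:
```
block5_pool = vgg19.get_layer('block5_pool').output
single_layer_model = keras.Model(inputs=vgg19.input, outputs=block5_pool)
print(single_layer_model(img).shape)  # (1, 7, 7, 512) for 224x224 inputs
```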
## Custom layers
Although `tf.keras` includes many useful built-in layers, a few of [these](https://www.tensorflow.org/api_docs/python/tf/keras/layers) being:
* Convolutional layers: `Conv1D`, `Conv2D`, `Conv3D`, `Conv2DTranspose`
* Pooling layers: `MaxPooling1D`, `MaxPooling2D`, `MaxPooling3D`, `AveragePooling1D`
* RNN layers: `GRU`, `LSTM`, `ConvLSTM2D`
* `BatchNormalization`, `Dropout`, `Embedding`, etc.
We can simply create our own custom layer by subclassing `tf.keras.layers.Layer` and implementing the following methods:
* `__init__`: Save configuration in member variables
* `build()`: Create the weights of the layer with the `add_weight()` method. It is called once from `__call__`, when the shapes of the inputs and the `dtype` are known.
* `call()`: Define the forward pass, i.e. the actual logic of applying the layer to the input tensors (which are passed as the first argument)
* Optionally, a layer can be serialized by implementing the `get_config()` method and the `from_config()` class method.
Conveniently, the [layer class](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Layer) `tf.keras.layers.Layer` manages the weights, losses, updates and inter-layer connectivity for us.
Here's an example of a custom layer that implements a basic dense layer:
```
class CustomDense(keras.layers.Layer):
def __init__(self, units=32):
super(CustomDense, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(shape=(input_shape[-1], self.units),
initializer='random_normal',
trainable=True)
self.b = self.add_weight(shape=(self.units,),
initializer='random_normal',
trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
def get_config(self):
return {'units': self.units}
inputs = keras.Input((4,))
outputs = CustomDense(10)(inputs)
model = keras.Model(inputs, outputs)
# Example of serializing and deserializing the layer
config = model.get_config()
# deserializing the layer
new_model = keras.Model.from_config(
config, custom_objects={'CustomDense': CustomDense})
```
# Custom models
Another, somewhat less flexible, way to create our own models is to subclass `tf.keras.Model` and define our own forward pass. Here we create layers in the `__init__()` method and use them as attributes of the class instance, and we define the forward pass in the `call()` method. However, this is not the preferred way to create custom models in *Keras*; the functional API described above is.
An example would be:
```
class MyModel(tf.keras.Model):
def __init__(self, num_classes=10):
super(MyModel, self).__init__(name='my_custom_model')
self.num_classes = num_classes
# Define your layers here.
self.dense_1 = keras.layers.Dense(32, activation='relu')
self.dense_2 = keras.layers.Dense(num_classes)
def call(self, inputs):
# Define your forward pass here,
# using layers you previously defined (in `__init__`).
x = self.dense_1(inputs)
return self.dense_2(x)
model = MyModel(num_classes=10)
```
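Like a functional model, the subclassed model can be called on data directly; the input size below (16 features) is arbitrary, since the `Dense` layers infer their input dimension on the first call:
```
import numpy as np
x = np.random.random((3, 16)).astype('float32')
print(model(x).shape)  # (3, 10)
```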
# Keras Callbacks
A `tf.keras.callbacks.Callback` object can be passed to a model to customize its behaviour during training, prediction or testing; it is most often used to customize training. We can write our own custom callbacks to process the current model's state at a [particular step](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/Callback) of training or evaluation, for instance at `on_batch_end`, `on_epoch_end` or `on_test_end`.
Common built-in callbacks in `tf.keras.callbacks` include:
* `tf.keras.callbacks.ModelCheckpoint`: Saves checkpoints of the model at regular intervals
* `tf.keras.callbacks.LearningRateScheduler`: Dynamically changes the learning rate
* `tf.keras.callbacks.EarlyStopping`: Interrupts training when validation performance has stopped improving
* `tf.keras.callbacks.TensorBoard`: Output a log for use in monitoring the model's behaviour using TensorBoard
We can use a `tf.keras.callbacks.Callback` during training by passing it to the model's `fit` method:
```
callbacks = [
# Interrupt training if `val_loss` (Validation loss) stops improving for over 2 epochs
tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
# Write TensorBoard logs to `./tmp_logs` directory
tf.keras.callbacks.TensorBoard(log_dir='./tmp_logs')
]
# Create a simple model to use it in
model = keras.Sequential([
keras.layers.Dense(64, activation='relu', input_dim=64),
keras.layers.Dense(64, activation='relu'),
keras.layers.Dense(10)
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer='adam',
loss=loss_fn,
metrics=['accuracy'])
model.fit(X_train, y_train, epochs=10, validation_split=0.2, callbacks=callbacks)
```
We can also write our own custom callbacks like the following:
```
class LossHistory(keras.callbacks.Callback):
def on_train_begin(self, logs):
self.losses = []
def on_batch_end(self, batch, logs):
self.losses.append(logs.get('loss'))
```
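Such a callback is then used like any built-in one; this snippet assumes the same `model`, `X_train` and `y_train` as in the callbacks example above:
```
history_cb = LossHistory()
model.fit(X_train, y_train, epochs=2, batch_size=32, callbacks=[history_cb])
# Per-batch training losses recorded by the callback
print(history_cb.losses[:5])
```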
[[1](https://www.tensorflow.org/guide/keras/functional)]
# Seminar 15
# Conjugate gradient method
## Reminder
1. Newton method
2. Convergence theorem
3. Comparison with gradient descent
4. Quasi-Newton methods
## Linear system vs. unconstrained minimization problem
Consider the problem
$$
\min_{x \in \mathbb{R}^n} \frac{1}{2}x^{\top}Ax - b^{\top}x,
$$
where $A \in \mathbb{S}^n_{++}$.
From the necessary optimality condition it follows that
$$
Ax^* = b
$$
Also, denote the gradient $f'(x_k) = Ax_k - b$ by $r_k$
## How to solve linear system $Ax = b$?
- Direct methods are based on matrix decompositions:
    - Dense matrix $A$: dimension up to a few thousand
    - Sparse matrix $A$: dimension of the order $10^4 - 10^5$
- Iterative methods: the method of choice in many cases, and the only approach that is appropriate for systems with dimension $ > 10^6$
## Some history...
M. Hestenes and E. Stiefel proposed the *conjugate gradient method* (CG)
in 1952 as a **direct** method for solving linear systems.
For many years CG was considered to be only of theoretical interest, because
- CG does not work with a slide rule
- CG has few advantages over Gaussian elimination when working with a calculator
The CG method should be treated as an **iterative method**, i.e. it is stopped once
the required tolerance is achieved!
For more details, see [here](https://www.siam.org/meetings/la09/talks/oleary.pdf)
## Conjugate directions method
- The descent direction in the gradient descent method is the anti-gradient
- Convergence is very **slow** for convex functions with a poorly conditioned Hessian
**Idea:** move along directions that guarantee convergence in $n$ steps.
**Definition.** Nonzero vectors $\{p_0, \ldots, p_l\}$ are called *conjugate* with respect to a matrix $A \in \mathbb{S}^n_{++}$ if
$$
p^{\top}_iAp_j = 0, \qquad i \neq j
$$
**Claim.** For every initial guess $x_0 \in \mathbb{R}^n$, the sequence $\{x_k\}$ generated by the conjugate directions method converges to the solution of the linear system $Ax = b$ in at most $n$ steps.
```python
def ConjugateDirections(x0, A, b, p):
x = x0
r = A.dot(x) - b
for i in range(len(p)):
alpha = - (r.dot(p[i])) / (p[i].dot(A.dot(p[i])))
x = x + alpha * p[i]
r = A.dot(x) - b
return x
```
### Example of conjugate directions
- Eigenvectors of the matrix $A$ (a quick numerical check follows below)
- For every set of $n$ linearly independent vectors one can perform an analogue of Gram-Schmidt orthogonalization and obtain conjugate directions
**Q:** What is Gram-Schmidt orthogonalization process? :)
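A quick numerical check (illustrative only, not part of the seminar code) that the eigenvectors of a symmetric positive definite $A$ are indeed conjugate with respect to $A$:
```python
import numpy as np

n = 5
B = np.random.randn(n, n)
A = B.T @ B + n * np.eye(n)        # symmetric positive definite matrix
_, P = np.linalg.eigh(A)           # columns of P are eigenvectors of A
G = P.T @ A @ P                    # entries are p_i^T A p_j
off_diag = G - np.diag(np.diag(G))
print(np.abs(off_diag).max())      # ~1e-13: off-diagonal entries vanish
```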
### Geometrical interpretation (Mathematics Stack Exchange)
<center><img src="./cg.png" ></center>
## Conjugate gradient method
**Idea:** new direction $p_k$ is searched in the form $p_k = -r_k + \beta_k p_{k-1}$, where $\beta_k$ is based on the requirement of conjugacy of directions $p_k$ and $p_{k-1}$:
$$
\beta_k = \dfrac{p^{\top}_{k-1}Ar_k}{p^{\top}_{k-1}Ap_{k-1}}
$$
Thus, to get the next conjugate direction $p_k$ it is enough to store the previous conjugate direction $p_{k-1}$ and the current residual $r_k$.
**Q:** how to select step size $\alpha_k$?
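One standard answer, stated here for reference: on the quadratic objective, exact line search along $p_k$ (minimizing $f(x_k + \alpha p_k)$ over $\alpha$) gives
$$
\alpha_k = -\dfrac{r^{\top}_k p_k}{p^{\top}_k A p_k}
$$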
### Convergence theorems
**Theorem 1.** If matrix $A \in \mathbb{S}^n_{++}$ has only $r$ distinct eigenvalues, then conjugate gradient method converges in $r$ iterations.
**Theorem 2.** The following convergence estimate holds
$$
\| x_{k+1} - x^* \|_A \leq \left( \dfrac{\sqrt{\kappa(A)} - 1}{\sqrt{\kappa(A)} + 1} \right)^k \|x_0 - x^*\|_A,
$$
where $\|x\|_A = \sqrt{x^{\top}Ax}$ and $\kappa(A) = \frac{\lambda_n(A)}{\lambda_1(A)}$ is the condition number of the matrix $A$
**Remark:** compare the coefficient of linear convergence with the
corresponding coefficient in the gradient descent method.
### Interpretations of conjugate gradient method
- Gradient descent in the space $y = Sx$, where $S = [p_0, \ldots, p_n]$, in which the matrix $A$ is diagonal (or identity if the conjugate directions are orthonormal)
- Search optimal solution in the [Krylov subspace](https://stanford.edu/class/ee364b/lectures/conj_grad_slides.pdf) $\mathcal{K}(A) = \{b, Ab, A^2b, \ldots \}$
### Improved version of CG method
In practice the following equations for step size $\alpha_k$ and coefficient $\beta_{k}$ are used.
$$
\alpha_k = \dfrac{r^{\top}_k r_k}{p^{\top}_{k}Ap_{k}} \qquad \beta_k = \dfrac{r^{\top}_k r_k}{r^{\top}_{k-1} r_{k-1}}
$$
**Q:** why are they better than the base version?
### Pseudocode of CG method
```python
def ConjugateGradientQuadratic(x0, A, b):
    x = x0
    r = A.dot(x0) - b
p = -r
while np.linalg.norm(r) != 0:
alpha = r.dot(r) / p.dot(A.dot(p))
x = x + alpha * p
r_next = r + alpha * A.dot(p)
beta = r_next.dot(r_next) / r.dot(r)
p = -r_next + beta * p
r = r_next
return x
```
## Using CG method in Newton method
- To find the descent direction in Newton method one has to solve the linear system $H(x_k) h_k = -f'(x_k)$
- If the objective function is strongly convex, then $H(x_k) \in \mathbb{S}^n_{++}$ and this linear system can be solved with CG. In this case the method is called the *inexact Newton method*.
- What's new?
    - Explicit storage of the Hessian is not needed; it is enough to have a function that multiplies the Hessian by a vector (see the sketch after this list)
    - One can control the accuracy of solving the linear system and avoid solving it very accurately far away from the minimizer. **Important**: an inexact solution may not be a descent direction!
    - Convergence is only superlinear if backtracking starts with $\alpha_0 = 1$, similarly to Newton method
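As a rough sketch of this idea (not the seminar's code): solve $H(x_k)h_k = -f'(x_k)$ with CG, using only an assumed user-supplied Hessian-vector product `hess_vec(v)` returning $H(x_k)v$:
```python
import numpy as np

def newton_direction_cg(hess_vec, grad, tol=1e-8, max_iter=100):
    """Approximately solve H h = -grad using only Hessian-vector products."""
    h = np.zeros_like(grad)
    r = grad.copy()          # residual of H h + grad at h = 0
    p = -r
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Hp = hess_vec(p)
        alpha = r.dot(r) / p.dot(Hp)
        h = h + alpha * p
        r_next = r + alpha * Hp
        beta = r_next.dot(r_next) / r.dot(r)
        p = -r_next + beta * p
        r = r_next
    return h
```
In a truncated (inexact) Newton method one stops this loop early (loose `tol` or small `max_iter`) far from the minimizer, checking that the returned $h$ is still a descent direction.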
## CG method for non-quadratic function
**Idea:** use gradients instead of the residuals $r_k$ and a backtracking line search for $\alpha_k$ instead of the analytical expression. This gives the Fletcher-Reeves method.
```python
def ConjugateGradientFR(f, gradf, x0):
x = x0
grad = gradf(x)
p = -grad
while np.linalg.norm(gradf(x)) != 0:
        alpha = StepSearch(x, p, f, gradf, **kwargs)  # line search along direction p
x = x + alpha * p
grad_next = gradf(x)
beta = grad_next.dot(grad_next) / grad.dot(grad)
p = -grad_next + beta * p
grad = grad_next
if restart_condition:
p = -gradf(x)
return x
```
### Convergence theorem
**Theorem.** Assume
- level set $\mathcal{L}$ is bounded
- there exists $\gamma > 0$: $\| f'(x) \|_2 \leq \gamma$ for $x \in \mathcal{L}$
Then
$$
\lim_{j \to \infty} \| f'(x_{k_j}) \|_2 = 0
$$
### Restarts
1. To speed up convergence of CG one can use the *restart* technique: discard the stored history, treat the current point as $x_0$ and run the method from this point
2. There exist different conditions which indicate the necessity of a restart, e.g.
- $k = n$
- $\dfrac{|\langle f'(x_k), f'(x_{k-1}) \rangle |}{\| f'(x_k) \|_2^2} \geq \nu \approx 0.1$
3. It can be shown (see Nocedal, Wright, Numerical Optimization, Ch. 5, p. 125) that the Fletcher-Reeves method without restarts can converge very slowly!
4. The Polak-Ribiere method and its modifications do not have this drawback
### Remarks
- The great notes "An Introduction to the Conjugate Gradient Method Without the Agonizing Pain" are available [here](https://www.cs.cmu.edu/~quake-papers/painless-conjugate-gradient.pdf)
- Besides the Fletcher-Reeves method there exist other ways to compute $\beta_k$: the Polak-Ribiere method, the Hestenes-Stiefel method, ...
- The CG method requires storing 4 vectors. Which vectors?
- The bottleneck is the matrix-vector multiplication
## Experiments
### Quadratic objective function
```
import numpy as np
n = 100
# Random
# A = np.random.randn(n, n)
# A = A.T.dot(A)
# Clustered eigenvalues
A = np.diagflat([np.ones(n//4), 10 * np.ones(n//4), 100*np.ones(n//4), 1000* np.ones(n//4)])
U = np.random.rand(n, n)
Q, _ = np.linalg.qr(U)
A = Q.dot(A).dot(Q.T)
A = (A + A.T) * 0.5
print("A is normal matrix: ||AA* - A*A|| =", np.linalg.norm(A.dot(A.T) - A.T.dot(A)))
b = np.random.randn(n)
# Hilbert matrix
# A = np.array([[1.0 / (i+j - 1) for i in range(1, n+1)] for j in range(1, n+1)])
# b = np.ones(n)
f = lambda x: 0.5 * x.dot(A.dot(x)) - b.dot(x)
grad_f = lambda x: A.dot(x) - b
x0 = np.zeros(n)
```
#### Eigenvalues distribution
```
USE_COLAB = False
%matplotlib inline
import matplotlib.pyplot as plt
if not USE_COLAB:
plt.rc("text", usetex=True)
plt.rc("font", family='serif')
if USE_COLAB:
!pip install git+https://github.com/amkatrutsa/liboptpy
import seaborn as sns
sns.set_context("talk")
eigs = np.linalg.eigvalsh(A)
plt.semilogy(np.unique(eigs))
plt.ylabel("Eigenvalues", fontsize=20)
plt.xticks(fontsize=18)
_ = plt.yticks(fontsize=18)
```
#### Exact solution
```
import scipy.optimize as scopt
def callback(x, array):
array.append(x)
scopt_cg_array = []
scopt_cg_callback = lambda x: callback(x, scopt_cg_array)
x = scopt.minimize(f, x0, method="CG", jac=grad_f, callback=scopt_cg_callback)
x = x.x
print("||f'(x*)|| =", np.linalg.norm(A.dot(x) - b))
print("f* =", f(x))
```
#### Implementation of conjugate gradient method
```
def ConjugateGradientQuadratic(x0, A, b, tol=1e-8, callback=None):
x = x0
r = A.dot(x0) - b
p = -r
while np.linalg.norm(r) > tol:
alpha = r.dot(r) / p.dot(A.dot(p))
x = x + alpha * p
if callback is not None:
callback(x)
r_next = r + alpha * A.dot(p)
beta = r_next.dot(r_next) / r.dot(r)
p = -r_next + beta * p
r = r_next
return x
import liboptpy.unconstr_solvers as methods
import liboptpy.step_size as ss
print("\t CG quadratic")
cg_quad = methods.fo.ConjugateGradientQuad(A, b)
x_cg = cg_quad.solve(x0, tol=1e-7, disp=True)
print("\t Gradient Descent")
gd = methods.fo.GradientDescent(f, grad_f, ss.ExactLineSearch4Quad(A, b))
x_gd = gd.solve(x0, tol=1e-7, disp=True)
print("Condition number of A =", abs(max(eigs)) / abs(min(eigs)))
```
#### Convergence plot
```
plt.figure(figsize=(8,6))
plt.semilogy([np.linalg.norm(grad_f(x)) for x in cg_quad.get_convergence()], label=r"$\|f'(x_k)\|^{CG}_2$", linewidth=2)
plt.semilogy([np.linalg.norm(grad_f(x)) for x in scopt_cg_array[:50]], label=r"$\|f'(x_k)\|^{CG_{PR}}_2$", linewidth=2)
plt.semilogy([np.linalg.norm(grad_f(x)) for x in gd.get_convergence()], label=r"$\|f'(x_k)\|^{G}_2$", linewidth=2)
plt.legend(loc="best", fontsize=20)
plt.xlabel(r"Iteration number, $k$", fontsize=20)
plt.ylabel("Convergence rate", fontsize=20)
plt.xticks(fontsize=18)
_ = plt.yticks(fontsize=18)
print([np.linalg.norm(grad_f(x)) for x in cg_quad.get_convergence()])
plt.figure(figsize=(8,6))
plt.plot([f(x) for x in cg_quad.get_convergence()], label=r"$f(x^{CG}_k)$", linewidth=2)
plt.plot([f(x) for x in scopt_cg_array], label=r"$f(x^{CG_{PR}}_k)$", linewidth=2)
plt.plot([f(x) for x in gd.get_convergence()], label=r"$f(x^{G}_k)$", linewidth=2)
plt.legend(loc="best", fontsize=20)
plt.xlabel(r"Iteration number, $k$", fontsize=20)
plt.ylabel("Function value", fontsize=20)
plt.xticks(fontsize=18)
_ = plt.yticks(fontsize=18)
```
### Non-quadratic function
```
import numpy as np
import sklearn.datasets as skldata
import scipy.special as scspec
n = 300
m = 1000
X, y = skldata.make_classification(n_classes=2, n_features=n, n_samples=m, n_informative=n//3)
C = 1
def f(w):
return np.linalg.norm(w)**2 / 2 + C * np.mean(np.logaddexp(np.zeros(X.shape[0]), -y * X.dot(w)))
def grad_f(w):
denom = scspec.expit(-y * X.dot(w))
return w - C * X.T.dot(y * denom) / X.shape[0]
# f = lambda x: -np.sum(np.log(1 - A.T.dot(x))) - np.sum(np.log(1 - x*x))
# grad_f = lambda x: np.sum(A.dot(np.diagflat(1 / (1 - A.T.dot(x)))), axis=1) + 2 * x / (1 - np.power(x, 2))
x0 = np.zeros(n)
print("Initial function value = {}".format(f(x0)))
print("Initial gradient norm = {}".format(np.linalg.norm(grad_f(x0))))
```
#### Implementation of Fletcher-Reeves method
```
def ConjugateGradientFR(f, gradf, x0, num_iter=100, tol=1e-8, callback=None, restart=False):
x = x0
grad = gradf(x)
p = -grad
it = 0
while np.linalg.norm(gradf(x)) > tol and it < num_iter:
alpha = utils.backtracking(x, p, method="Wolfe", beta1=0.1, beta2=0.4, rho=0.5, f=f, grad_f=gradf)
if alpha < 1e-18:
break
x = x + alpha * p
if callback is not None:
callback(x)
grad_next = gradf(x)
beta = grad_next.dot(grad_next) / grad.dot(grad)
p = -grad_next + beta * p
grad = grad_next.copy()
it += 1
if restart and it % restart == 0:
grad = gradf(x)
p = -grad
return x
```
#### Convergence plot
```
import scipy.optimize as scopt
import liboptpy.restarts as restarts
n_restart = 60
tol = 1e-5
max_iter = 600
scopt_cg_array = []
scopt_cg_callback = lambda x: callback(x, scopt_cg_array)
x = scopt.minimize(f, x0, tol=tol, method="CG", jac=grad_f, callback=scopt_cg_callback, options={"maxiter": max_iter})
x = x.x
print("\t CG by Polak-Rebiere")
print("Norm of garient = {}".format(np.linalg.norm(grad_f(x))))
print("Function value = {}".format(f(x)))
print("\t CG by Fletcher-Reeves")
cg_fr = methods.fo.ConjugateGradientFR(f, grad_f, ss.Backtracking("Wolfe", rho=0.9, beta1=0.1, beta2=0.4, init_alpha=1.))
x = cg_fr.solve(x0, tol=tol, max_iter=max_iter, disp=True)
print("\t CG by Fletcher-Reeves with restart n")
cg_fr_rest = methods.fo.ConjugateGradientFR(f, grad_f, ss.Backtracking("Wolfe", rho=0.9, beta1=0.1, beta2=0.4,
init_alpha=1.), restarts.Restart(n // n_restart))
x = cg_fr_rest.solve(x0, tol=tol, max_iter=max_iter, disp=True)
print("\t Gradient Descent")
gd = methods.fo.GradientDescent(f, grad_f, ss.Backtracking("Wolfe", rho=0.9, beta1=0.1, beta2=0.4, init_alpha=1.))
x = gd.solve(x0, max_iter=max_iter, tol=tol, disp=True)
plt.figure(figsize=(8, 6))
plt.semilogy([np.linalg.norm(grad_f(x)) for x in cg_fr.get_convergence()], label=r"$\|f'(x_k)\|^{CG_{FR}}_2$ no restart", linewidth=2)
plt.semilogy([np.linalg.norm(grad_f(x)) for x in cg_fr_rest.get_convergence()], label=r"$\|f'(x_k)\|^{CG_{FR}}_2$ restart", linewidth=2)
plt.semilogy([np.linalg.norm(grad_f(x)) for x in scopt_cg_array], label=r"$\|f'(x_k)\|^{CG_{PR}}_2$", linewidth=2)
plt.semilogy([np.linalg.norm(grad_f(x)) for x in gd.get_convergence()], label=r"$\|f'(x_k)\|^{G}_2$", linewidth=2)
plt.legend(loc="best", fontsize=16)
plt.xlabel(r"Iteration number, $k$", fontsize=20)
plt.ylabel("Convergence rate", fontsize=20)
plt.xticks(fontsize=18)
_ = plt.yticks(fontsize=18)
```
#### Running time
```
%timeit scopt.minimize(f, x0, method="CG", tol=tol, jac=grad_f, options={"maxiter": max_iter})
%timeit cg_fr.solve(x0, tol=tol, max_iter=max_iter)
%timeit cg_fr_rest.solve(x0, tol=tol, max_iter=max_iter)
%timeit gd.solve(x0, tol=tol, max_iter=max_iter)
```
## Recap
1. Conjugate directions
2. Conjugate gradient method
3. Convergence
4. Experiments
# Modeling and Simulation in Python
Chapter 9
Copyright 2017 Allen Downey
License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
```
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import everything from SymPy.
from sympy import *
# Set up Jupyter notebook to display math.
init_printing()
```
The following displays SymPy expressions and provides the option of showing results in LaTeX format.
```
from sympy.printing import latex
def show(expr, show_latex=False):
"""Display a SymPy expression.
expr: SymPy expression
show_latex: boolean
"""
if show_latex:
print(latex(expr))
return expr
```
### Analysis with SymPy
Create a symbol for time.
```
t = symbols('t')
```
If you combine symbols and numbers, you get symbolic expressions.
```
expr = t + 1
```
The result is an `Add` object, which just represents the sum without trying to compute it.
```
type(expr)
```
`subs` can be used to replace a symbol with a number, which allows the addition to proceed.
```
expr.subs(t, 2)
```
`f` is a special class of symbol that represents a function.
```
f = Function('f')
```
The type of `f` is `UndefinedFunction`
```
type(f)
```
SymPy understands that `f(t)` means `f` evaluated at `t`, but it doesn't try to evaluate it yet.
```
f(t)
```
`diff` returns a `Derivative` object that represents the time derivative of `f`
```
dfdt = diff(f(t), t)
type(dfdt)
```
We need a symbol for `alpha`
```
alpha = symbols('alpha')
```
Now we can write the differential equation for proportional growth.
```
eq1 = Eq(dfdt, alpha*f(t))
```
And use `dsolve` to solve it. The result is the general solution.
```
solution_eq = dsolve(eq1)
```
We can tell it's a general solution because it contains an unspecified constant, `C1`.
In this example, finding the particular solution is easy: we just replace `C1` with `p_0`
```
C1, p_0 = symbols('C1 p_0')
particular = solution_eq.subs(C1, p_0)
```
In the next example, we have to work a little harder to find the particular solution.
### Solving the quadratic growth equation
We'll use the (r, K) parameterization, so we'll need two more symbols:
```
r, K = symbols('r K')
```
Now we can write the differential equation.
```
eq2 = Eq(diff(f(t), t), r * f(t) * (1 - f(t)/K))
```
And solve it.
```
solution_eq = dsolve(eq2)
```
The result, `solution_eq`, contains `rhs`, which is the right-hand side of the solution.
```
general = solution_eq.rhs
```
We can evaluate the right-hand side at $t=0$
```
at_0 = general.subs(t, 0)
```
Now we want to find the value of `C1` that makes `f(0) = p_0`.
So we'll create the equation `at_0 = p_0` and solve for `C1`. Because this is just an algebraic identity, not a differential equation, we use `solve`, not `dsolve`.
The result from `solve` is a list of solutions. In this case, [we have reason to expect only one solution](https://en.wikipedia.org/wiki/Picard%E2%80%93Lindel%C3%B6f_theorem), but we still get a list, so we have to use the bracket operator, `[0]`, to select the first one.
```
solutions = solve(Eq(at_0, p_0), C1)
type(solutions), len(solutions)
value_of_C1 = solutions[0]
```
Now in the general solution, we want to replace `C1` with the value of `C1` we just figured out.
```
particular = general.subs(C1, value_of_C1)
```
The result is complicated, but SymPy provides a method that tries to simplify it.
```
particular = simplify(particular)
```
Often simplicity is in the eye of the beholder, but that's about as simple as this expression gets.
Just to double-check, we can evaluate it at `t=0` and confirm that we get `p_0`
```
particular.subs(t, 0)
```
This solution is called the [logistic function](https://en.wikipedia.org/wiki/Population_growth#Logistic_equation).
In some places you'll see it written in a different form:
$f(t) = \frac{K}{1 + A e^{-rt}}$
where $A = (K - p_0) / p_0$.
We can use SymPy to confirm that these two forms are equivalent. First we represent the alternative version of the logistic function:
```
A = (K - p_0) / p_0
logistic = K / (1 + A * exp(-r*t))
```
To see whether two expressions are equivalent, we can check whether their difference simplifies to 0.
```
simplify(particular - logistic)
```
This test only works one way: if SymPy says the difference reduces to 0, the expressions are definitely equivalent (and not just numerically close).
But if SymPy can't find a way to simplify the result to 0, that doesn't necessarily mean there isn't one. Testing whether two expressions are equivalent is a surprisingly hard problem; in fact, there is no algorithm that can solve it in general.
### Exercises
**Exercise:** Solve the quadratic growth equation using the alternative parameterization
$\frac{df(t)}{dt} = \alpha f(t) + \beta f^2(t) $
```
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
```
**Exercise:** Use [WolframAlpha](https://www.wolframalpha.com/) to solve the quadratic growth model, using either or both forms of parameterization:
df(t) / dt = alpha f(t) + beta f(t)^2
or
df(t) / dt = r f(t) (1 - f(t)/K)
Find the general solution and also the particular solution where `f(0) = p_0`.
```
### MODULE 1
### Basic Modeling in scikit-learn
```
```
### Seen vs. unseen data
# The model is fit using X_train and y_train
model.fit(X_train, y_train)
# Create vectors of predictions
train_predictions = model.predict(X_train)
test_predictions = model.predict(X_test)
# Train/Test Errors
train_error = mae(y_true=y_train, y_pred=train_predictions)
test_error = mae(y_true=y_test, y_pred=test_predictions)
# Print the accuracy for seen and unseen data
print("Model error on seen data: {0:.2f}.".format(train_error))
print("Model error on unseen data: {0:.2f}.".format(test_error))
# Set parameters and fit a model
# Set the number of trees
rfr.n_estimators = 1000
# Add a maximum depth
rfr.max_depth = 6
# Set the random state
rfr.random_state = 11
# Fit the model
rfr.fit(X_train, y_train)
## Feature importances
# Fit the model using X and y
rfr.fit(X_train, y_train)
# Print how important each column is to the model
for i, item in enumerate(rfr.feature_importances_):
# Use i and item to print out the feature importance of each column
print("{0:s}: {1:.2f}".format(X_train.columns[i], item))
### Classification predictions
# Fit the rfc model.
rfc.fit(X_train, y_train)
# Create arrays of predictions
classification_predictions = rfc.predict(X_test)
probability_predictions = rfc.predict_proba(X_test)
# Print out count of binary predictions
print(pd.Series(classification_predictions).value_counts())
# Print the first value from probability_predictions
print('The first predicted probabilities are: {}'.format(probability_predictions[0]))
## Reusing model parameters
rfc = RandomForestClassifier(n_estimators=50, max_depth=6, random_state=1111)
# Print the classification model
print(rfc)
# Print the classification model's random state parameter
print('The random state is: {}'.format(rfc.random_state))
# Print all parameters
print('Printing the parameters dictionary: {}'.format(rfc.get_params()))
## Random forest classifier
from sklearn.ensemble import RandomForestClassifier
# Create a random forest classifier
rfc = RandomForestClassifier(n_estimators=50, max_depth=6, random_state=1111)
# Fit rfc using X_train and y_train
rfc.fit(X_train, y_train)
# Create predictions on X_test
predictions = rfc.predict(X_test)
print(predictions[0:5])
# Print model accuracy using score() and the testing data
print(rfc.score(X_test, y_test))
## MODULE 2
## Validation Basics
```
```
## Create one holdout set
# Create dummy variables using pandas
X = pd.get_dummies(tic_tac_toe.iloc[:,0:9])
y = tic_tac_toe.iloc[:, 9]
# Create training and testing datasets. Use 10% for the test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.1, random_state=1111)
## Create two holdout sets
# Create temporary training and final testing datasets
X_temp, X_test, y_temp, y_test =\
train_test_split(X, y, test_size=.2, random_state=1111)
# Create the final training and validation datasets
X_train, X_val, y_train, y_val = train_test_split(X_temp, y_temp, test_size=.25, random_state=1111)
### Mean absolute error
from sklearn.metrics import mean_absolute_error
# Manually calculate the MAE
n = len(predictions)
mae_one = sum(abs(y_test - predictions)) / n
print('With a manual calculation, the error is {}'.format(mae_one))
# Use scikit-learn to calculate the MAE
mae_two = mean_absolute_error(y_test, predictions)
print('Using scikit-learn, the error is {}'.format(mae_two))
# <script.py> output:
#    With a manual calculation, the error is 5.9
#    Using scikit-learn, the error is 5.9
### Mean squared error
from sklearn.metrics import mean_squared_error
n = len(predictions)
# Finish the manual calculation of the MSE
mse_one = sum(abs(y_test - predictions)**2) / n
print('With a manual calculation, the error is {}'.format(mse_one))
# Use the scikit-learn function to calculate MSE
mse_two = mean_squared_error(y_test, predictions)
print('Using scikit-learn, the error is {}'.format(mse_two))
### Performance on data subsets
# Find the East conference teams
east_teams = labels == "E"
# Create arrays for the true and predicted values
true_east = y_test[east_teams]
preds_east = predictions[east_teams]
# Print the accuracy metrics
print('The MAE for East teams is {}'.format(
mae(true_east, preds_east)))
# Print the West accuracy
print('The MAE for West conference is {}'.format(west_error))
### Confusion matrices
# Calculate and print the accuracy
accuracy = (324 + 491) / (953)
print("The overall accuracy is {0: 0.2f}".format(accuracy))
# Calculate and print the precision
precision = (491) / (491 + 15)
print("The precision is {0: 0.2f}".format(precision))
# Calculate and print the recall
recall = (491) / (491 + 123)
print("The recall is {0: 0.2f}".format(recall))
### Confusion matrices, again
from sklearn.metrics import confusion_matrix
# Create predictions
test_predictions = rfc.predict(X_test)
# Create and print the confusion matrix
cm = confusion_matrix(y_test, test_predictions)
print(cm)
# Print the true positives (actual 1s that were predicted 1s)
print("The number of true positives is: {}".format(cm[1, 1]))
## <script.py> output:
## [[177 123]
## [ 92 471]]
## The number of true positives is: 471
## Row 1, column 1 represents the number of actual 1s that were predicted 1s (the true positives).
## Always make sure you understand the orientation of the confusion matrix before you start using it!
### Precision vs. recall
from sklearn.metrics import precision_score
test_predictions = rfc.predict(X_test)
# Create precision or recall score based on the metric you imported
score = precision_score(y_test, test_predictions)
# Print the final result
print("The precision value is {0:.2f}".format(score))
### Error due to under/over-fitting
# Update the rfr model
rfr = RandomForestRegressor(n_estimators=25,
random_state=1111,
max_features=2)
rfr.fit(X_train, y_train)
# Print the training and testing accuracies
print('The training error is {0:.2f}'.format(
mae(y_train, rfr.predict(X_train))))
print('The testing error is {0:.2f}'.format(
mae(y_test, rfr.predict(X_test))))
## <script.py> output:
## The training error is 3.88
## The testing error is 9.15
# Update the rfr model
rfr = RandomForestRegressor(n_estimators=25,
random_state=1111,
max_features=11)
rfr.fit(X_train, y_train)
# Print the training and testing accuracies
print('The training error is {0:.2f}'.format(
mae(y_train, rfr.predict(X_train))))
print('The testing error is {0:.2f}'.format(
mae(y_test, rfr.predict(X_test))))
## <script.py> output:
## The training error is 3.57
## The testing error is 10.05
# Update the rfr model
rfr = RandomForestRegressor(n_estimators=25,
random_state=1111,
max_features=4)
rfr.fit(X_train, y_train)
# Print the training and testing accuracies
print('The training error is {0:.2f}'.format(
mae(y_train, rfr.predict(X_train))))
print('The testing error is {0:.2f}'.format(
mae(y_test, rfr.predict(X_test))))
## <script.py> output:
## The training error is 3.60
## The testing error is 8.79
### Am I underfitting?
from sklearn.metrics import accuracy_score
test_scores, train_scores = [], []
for i in [1, 2, 3, 4, 5, 10, 20, 50]:
rfc = RandomForestClassifier(n_estimators=i, random_state=1111)
rfc.fit(X_train, y_train)
# Create predictions for the X_train and X_test datasets.
train_predictions = rfc.predict(X_train)
test_predictions = rfc.predict(X_test)
# Append the accuracy score for the test and train predictions.
train_scores.append(round(accuracy_score(y_train, train_predictions), 2))
test_scores.append(round(accuracy_score(y_test, test_predictions), 2))
# Print the train and test scores.
print("The training scores were: {}".format(train_scores))
print("The testing scores were: {}".format(test_scores))
### MODULE 3
### Cross Validation
```
```
### Two samples
# Create two different samples of 200 observations
sample1 = tic_tac_toe.sample(200, random_state=1111)
sample2 = tic_tac_toe.sample(200, random_state=1171)
# Print the number of common observations
print(len([index for index in sample1.index if index in sample2.index]))
# Print the number of observations in the Class column for both samples
print(sample1['Class'].value_counts())
print(sample2['Class'].value_counts())
### scikit-learn's KFold()
from sklearn.model_selection import KFold
# Use KFold
kf = KFold(n_splits=5, shuffle=True, random_state=1111)
# Create splits
splits = kf.split(X)
# Print the number of indices
for train_index, val_index in splits:
print("Number of training indices: %s" % len(train_index))
print("Number of validation indices: %s" % len(val_index))
### Using KFold indices
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
rfc = RandomForestRegressor(n_estimators=25, random_state=1111)
# Access the training and validation indices of splits
for train_index, val_index in splits:
# Setup the training and validation data
X_train, y_train = X[train_index], y[train_index]
X_val, y_val = X[val_index], y[val_index]
# Fit the random forest model
rfc.fit(X_train, y_train)
# Make predictions, and print the accuracy
predictions = rfc.predict(X_val)
print("Split accuracy: " + str(mean_squared_error(y_val, predictions)))
### scikit-learn's methods
# Instruction 1: Load the cross-validation method
from sklearn.model_selection import cross_val_score
# Instruction 2: Load the random forest regression model
from sklearn.ensemble import RandomForestClassifier
# Instruction 3: Load the mean squared error method
# Instruction 4: Load the function for creating a scorer
from sklearn.metrics import mean_squared_error, make_scorer
## It is easy to see how all of the methods can get mixed up, but
## it is important to know the names of the methods you need.
## You can always review the scikit-learn documentation should you need any help
### Implement cross_val_score()
rfc = RandomForestRegressor(n_estimators=25, random_state=1111)
mse = make_scorer(mean_squared_error)
# Set up cross_val_score
cv = cross_val_score(estimator=rfc,
X=X_train,
y=y_train,
cv=10,
scoring=mse)
# Print the mean error
print(cv.mean())
### Leave-one-out-cross-validation
from sklearn.metrics import mean_absolute_error, make_scorer
# Create scorer
mae_scorer = make_scorer(mean_absolute_error)
rfr = RandomForestRegressor(n_estimators=15, random_state=1111)
# Implement LOOCV
scores = cross_val_score(estimator=rfr, X=X, y=y, cv=85, scoring=mae_scorer)
# Print the mean and standard deviation
print("The mean of the errors is: %s." % np.mean(scores))
print("The standard deviation of the errors is: %s." % np.std(scores))
### MODULE 4
### Selecting the best model with Hyperparameter tuning.
```
```
### Creating Hyperparameters
# Review the parameters of rfr
print(rfr.get_params())
# Maximum Depth
max_depth = [4, 8, 12]
# Minimum samples for a split
min_samples_split = [2, 5, 10]
# Max features
max_features = [4, 6, 8, 10]
### Running a model using ranges
from sklearn.ensemble import RandomForestRegressor
# Fill in rfr using your variables
rfr = RandomForestRegressor(
n_estimators=100,
max_depth=random.choice(max_depth),
min_samples_split=random.choice(min_samples_split),
max_features=random.choice(max_features))
# Print out the parameters
print(rfr.get_params())
### Preparing for RandomizedSearch
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import make_scorer, mean_squared_error
# Finish the dictionary by adding the max_depth parameter
param_dist = {"max_depth": [2, 4, 6, 8],
"max_features": [2, 4, 6, 8, 10],
"min_samples_split": [2, 4, 8, 16]}
# Create a random forest regression model
rfr = RandomForestRegressor(n_estimators=10, random_state=1111)
# Create a scorer to use (use the mean squared error)
scorer = make_scorer(mean_squared_error)
# Import the method for random search
from sklearn.model_selection import RandomizedSearchCV
# Build a random search using param_dist, rfr, and scorer
random_search =\
RandomizedSearchCV(
estimator=rfr,
param_distributions=param_dist,
n_iter=10,
cv=5,
scoring=scorer)
### Selecting the best precision model
from sklearn.metrics import precision_score, make_scorer
# Create a precision scorer
precision = make_scorer(precision_score)
# Finalize the random search
rs = RandomizedSearchCV(
estimator=rfc, param_distributions=param_dist,
scoring = precision,
cv=5, n_iter=10, random_state=1111)
rs.fit(X, y)
# print the mean test scores:
print('The accuracy for each run was: {}.'.format(rs.cv_results_['mean_test_score']))
# print the best model score:
print('The best accuracy for a single model was: {}'.format(rs.best_score_))
```
<a href="https://colab.research.google.com/github/PWhiddy/jax-experiments/blob/main/nbody.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import jax.numpy as jnp
from jax import jit
from jax import vmap
import jax
from numpy import random
import matplotlib.pyplot as plt
from tqdm import tqdm
!pip install tensor-canvas
!pip install moviepy
import tensorcanvas as tc
#@title VideoWriter
#VideoWriter from Alexander Mordvintsev
#https://colab.research.google.com/github/znah/notebooks/blob/master/external_colab_snippets.ipynb
import os
import numpy as np
os.environ['FFMPEG_BINARY'] = 'ffmpeg'
import moviepy.editor as mvp
from moviepy.video.io.ffmpeg_writer import FFMPEG_VideoWriter
class VideoWriter:
def __init__(self, filename='_autoplay.mp4', fps=30.0, **kw):
self.writer = None
self.params = dict(filename=filename, fps=fps, **kw)
def add(self, img):
img = np.asarray(img)
if self.writer is None:
h, w = img.shape[:2]
self.writer = FFMPEG_VideoWriter(size=(w, h), **self.params)
if img.dtype in [np.float32, np.float64]:
img = np.uint8(img.clip(0, 1)*255)
if len(img.shape) == 2:
img = np.repeat(img[..., None], 3, -1)
self.writer.write_frame(img)
def close(self):
if self.writer:
self.writer.close()
def __enter__(self):
return self
def __exit__(self, *kw):
self.close()
if self.params['filename'] == '_autoplay.mp4':
self.show()
def show(self, **kw):
self.close()
fn = self.params['filename']
display(mvp.ipython_display(fn, **kw))
def draw_sim(parts_pos, parts_vel, grid_r_x, grid_r_y, opacity=1.0, p_size=4.0, pcol=jnp.array([1.0,0.0,0.0])):
canvas = jnp.zeros((grid_r_y, grid_r_x, 3))
col = opacity*pcol
# would be interesting to use jax.experimental.loops for these
for part_p, part_v in zip(parts_pos, parts_vel):
canvas = tc.draw_circle(part_p[0]*grid_r_y+grid_r_x*0.5-grid_r_y*0.5, part_p[1]*grid_r_y, p_size, col, canvas)
return jnp.clip(canvas, 0.0, 1.0)
def draw_sim_par(parts_pos, parts_vel, grid_r_x, grid_r_y, opacity=1.0, p_size=4.0, pcol=jnp.array([1.0,0.0,0.0])):
col = opacity*pcol
draw_single = lambda part_p, canv: tc.draw_circle(part_p[0]*grid_r_y+grid_r_x*0.5-grid_r_y*0.5, part_p[1]*grid_r_y, p_size, col, canv)
draw_all = vmap(draw_single)
return jnp.clip(draw_all(parts_pos, jnp.zeros((parts_pos.shape[0], grid_r_y, grid_r_x, 3))).sum(0), 0.0, 1.0)
def compute_forces(pos, scale, eps=0.1):
a, b = jnp.expand_dims(pos, 1), jnp.expand_dims(pos, 0)
diff = a - b
dist = (diff * diff).sum(axis=-1) ** 0.5
dist = jnp.expand_dims(dist, 2)
force = diff / ((dist * scale) ** 3 + eps)
return force.sum(0)
fast_compute_forces = jit(compute_forces)
def sim_update_force(parts_pos, parts_vel, t_delta=0.05, scale=5, repel_mag=0.1, center_mag=2.5, steps=10, damp=0.99):
p_p = jnp.array(parts_pos)
p_v = jnp.array(parts_vel)
# jax.experimental.loops
for _ in range(steps):
p_p = p_p + t_delta * p_v
force = fast_compute_forces(p_p, scale)
center_diff = p_p-0.5
centering_force = center_diff / ((center_diff ** 2).sum() ** 0.5)
p_v = damp * p_v - t_delta * (force * repel_mag + centering_force * center_mag)
return p_p, p_v
def make_init_state(p_count):
return random.rand(p_count, 2), random.rand(p_count, 2)-0.5
fast_draw = jit(draw_sim, static_argnums=(2,3))
fast_draw_par = jit(draw_sim_par, static_argnums=(2,3))
fast_sim_update_force = jit(sim_update_force, static_argnames=('steps'))
p_state, v_state = make_init_state(128)
v_state *= 0
grid_res = 384
for i in tqdm(range(1000)):
p_state, v_state = fast_sim_update_force(p_state, v_state, t_delta=0.05, scale=10, center_mag=0.5, repel_mag=0.05, damp=0.996, steps=2)
plt.imshow(fast_draw_par(p_state, v_state, grid_res, grid_res, p_size=4.0))
p_state, v_state = make_init_state(2048)
v_state *= 0
grid_res = 512
for i in tqdm(range(100)):
p_state, v_state = fast_sim_update_force(p_state, v_state, t_delta=0.05, scale=40, center_mag=0.5, repel_mag=0.05, damp=0.997, steps=20)
plt.imshow(fast_draw_par(p_state, v_state, grid_res, grid_res, p_size=3.0))
render_video = False
if render_video:
p_state, v_state = make_init_state(128)
v_state *= 0
grid_res = 384
with VideoWriter(fps=60) as vw:
for i in tqdm(range(1000)):
render = fast_draw_par(p_state, v_state, grid_res, grid_res, p_size=3.0)
vw.add(render)
p_state, v_state = fast_sim_update_force(p_state, v_state, t_delta=0.05, scale=10, center_mag=0.5, repel_mag=0.05, damp=0.996, steps=2)
if render_video:
p_state, v_state = make_init_state(512)
v_state *= 0
grid_res = 256
with VideoWriter(fps=60) as vw:
for i in tqdm(range(1000)):
render = fast_draw_par(p_state, v_state, grid_res, grid_res, opacity=0.5, p_size=3.0)
vw.add(render)
p_state, v_state = fast_sim_update_force(p_state, v_state, t_delta=0.05, scale=20, center_mag=0.5, repel_mag=0.05, damp=0.998, steps=4)
!nvidia-smi
p_test = 50
res_test = 512
%%timeit
draw_sim(*make_init_state(p_test), res_test, res_test)
%%timeit
draw_sim_par(*make_init_state(p_test), res_test, res_test)
%%timeit
fast_draw(*make_init_state(p_test), res_test, res_test)
%%timeit
fast_draw_par(*make_init_state(p_test), res_test, res_test)
import ffmpeg
import logging
import numpy as np
import os
import subprocess
logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)
def start_ffmpeg_process2(key, width, height):
logger.info('Starting ffmpeg process2')
args = f'ffmpeg -re -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 -f rawvideo -s {width}x{height} -pix_fmt rgb24 -i pipe: -c:v libx264 -preset veryfast -b:v 3000k -maxrate 3000k -bufsize 6000k -pix_fmt yuv420p -g 50 -c:a aac -b:a 160k -ac 2 -ar 44100 -f flv rtmp://a.rtmp.youtube.com/live2/{key}'
return subprocess.Popen(args.split(), stdin=subprocess.PIPE)
def write_frame(process2, frame):
logger.debug('Writing frame')
process2.stdin.write(
frame
.astype(np.uint8)
.tobytes()
)
def run(key, process_frame, width, height):
process2 = start_ffmpeg_process2(key, width, height)
while True:
logger.debug('Processing frame')
out_frame = process_frame()#(in_frame)
write_frame(process2, out_frame)
logger.info('Waiting for ffmpeg process2')
process2.stdin.close()
process2.wait()
logger.info('Done')
import json
class SimRunner():
def __init__(self, pcount, grid_x, grid_y):
self.pcount = pcount
self.p_state, self.v_state = make_init_state(pcount)
self.v_state *= 0
self.grid_x = grid_x
self.grid_y = grid_y
self.fcount = 0
def next_frame(self):
with open('test_col.json') as f:
col = jnp.array(json.load(f))
render = fast_draw_par(self.p_state, self.v_state, self.grid_x, self.grid_y, opacity=0.8, p_size=5.0, pcol=col)
if (self.fcount % 800 == 799):
self.v_state += 0.2*(random.rand(self.pcount, 2)-0.5)
self.p_state, self.v_state = fast_sim_update_force(self.p_state, self.v_state, t_delta=0.05, scale=20, center_mag=0.5, repel_mag=0.05, damp=0.995, steps=2)
self.fcount += 1
return render*255
test = SimRunner(256, 512, 512)
test.next_frame().max()
#plt.imshow(test.next_frame())
try:
res_x, res_y = 1280, 720
sr = SimRunner(384, res_x, res_y)
run('gjhh-kvup-9fhh-fbe7-4402', sr.next_frame, res_x, res_y)
except ffmpeg.Error as e:
print('stdout:', e.stdout.decode('utf8'))
print('stderr:', e.stderr.decode('utf8'))
raise e
```
# Training with Features
From notebook 14, we now have radio features. From notebook 13, we now have astronomical features and potential host galaxies. It's now time to put all of these together into a set of vectors and train a classifier.
I'll quickly go over the pipeline up to now. First, make sure you have MongoDB running with the `radio` database containing the Radio Galaxy Zoo data. Then, convert all of the raw RGZ classifications into sanitised and nice-to-work-with classifications:
```bash
python -m crowdastro raw_classifications crowdastro-data/processed.db classifications
```
Next, compile the consensus database. For now, I'm only dealing with ATLAS data, so remember to specify the `--atlas` flag.
```bash
python -m crowdastro consensuses crowdastro-data/processed.db classifications atlas_consensuses_raw --atlas
```
We need to generate the training data. If you don't have a Gator cache, it will be generated.
```bash
python -m crowdastro training_data \
crowdastro-data/processed.db atlas_consensuses_raw \
gator_cache \
crowdastro-data/training.h5 \
--atlas
```
This dumps a file with astronomy features and potential hosts. Then, run 15_cnn to get CNN features (or just use the h5 and json files I already prepared) and run 16_pca to get a PCA matrix.
The pipeline is as follows:
1. Get potential hosts from training.h5.
2. Using the CDFS/ELAIS images, get radio patches around each potential host.
3. Run patches through CNN. Output the second convolutional layer.
4. Run CNN output through PCA.
5. Append astronomy features from training.h5. This is the input data.
```
import itertools
import sys
import bson
import h5py
import keras.layers
import keras.models
import matplotlib.pyplot
import numpy
import pandas
import sklearn.cross_validation
import sklearn.dummy
import sklearn.linear_model
import sklearn.metrics
sys.path.insert(1, '..')
import crowdastro.data
import crowdastro.show
with pandas.HDFStore('../crowdastro-data/training.h5') as store:
data = store['data']
data.head()
```
We'll just look at a small number of potential hosts for now. I'll have to use batches to scale this up, but for now I just want to check that it works.
```
n = 5000
# I'm gathering up the radio patches first so I can run them through the CNN at the same time
# as one big matrix operation. In principle this would run on the GPU.
radio_patches = numpy.zeros((n, 80, 80))
labels = numpy.zeros((n,))
radius = 40
padding = 150
for idx, row in data.head(n).iterrows():
sid = bson.objectid.ObjectId(row['subject_id'][0].decode('ascii'))
x = row['x'][0]
y = row['y'][0]
label = row['is_host'][0]
labels[idx] = label
subject = crowdastro.data.db.radio_subjects.find_one({'_id': sid})
radio = crowdastro.data.get_radio(subject, size='5x5')
patch = radio[x - radius + padding : x + radius + padding, y - radius + padding : y + radius + padding]
radio_patches[idx, :] = patch
# Load the CNN.
with open('../crowdastro-data/cnn_model_2.json', 'r') as f:
cnn = keras.models.model_from_json(f.read())
cnn.load_weights('../crowdastro-data/cnn_weights_2.h5')
cnn.layers = cnn.layers[:5] # Pop the layers after the second convolution's activation.
cnn.add(keras.layers.Flatten())
cnn.compile(optimizer='sgd', loss='mse') # I don't actually care about the optimiser or loss.
# Load the PCA.
with h5py.File('../crowdastro-data/pca.h5') as f:
pca = f['conv_2'][:]
# Find the radio features.
radio_features = cnn.predict(radio_patches.reshape(n, 1, 80, 80)) @ pca.T
# Add on the astronomy features.
features = numpy.hstack([radio_features, data.ix[:n-1, 'flux_ap2_24':'flux_ap2_80'].as_matrix()])
features = numpy.nan_to_num(features)
# Split into training and testing data.
xs_train, xs_test, ts_train, ts_test = sklearn.cross_validation.train_test_split(features, labels, test_size=0.2)
# Classify!
lr = sklearn.linear_model.LogisticRegression(class_weight='balanced')
lr.fit(xs_train, ts_train)
lr.score(xs_test, ts_test)
sklearn.metrics.confusion_matrix(ts_test, lr.predict(xs_test), [0, 1])
```
So we get ~84% accuracy on just predicting labels. Let's compare to a random classifier.
```
dc = sklearn.dummy.DummyClassifier(strategy='stratified')
dc.fit(xs_train, ts_train)
dc.score(xs_test, ts_test)
```
A stratified random classifier gets 88% accuracy, which doesn't look good for our logistic regression!
I am curious as to whether we can do better if we're considering the full problem, i.e. we know that exactly one potential host is the true host. Note that I'm ignoring the problem of multiple radio emitters for now. Let's try that: We'll get a subject, find the potential hosts, get their patches, and use the logistic regression and dummy classifiers to predict all the associated probabilities, and hence find the radio emitter. I'll only look at subjects not in the first `n` potential hosts, else we'd overlap with the training data.
To get a feel for how the predictor works, I'll try colour-coding potential hosts based on how likely they are to be the true host. To do *that*, I'll softmax the scores.
```
def softmax(x):
exp = numpy.exp(x)
return exp / numpy.sum(exp, axis=0)
subject_ids = set()
for idx, row in data.ix[n:n * 2].iterrows():
sid = row['subject_id'][0]
subject_ids.add(sid)
for subject_id in itertools.islice(subject_ids, 0, 10):
# Pandas *really* doesn't like fancy indexing against string comparisons.
indices = (data['subject_id'] == subject_id).as_matrix().reshape(-1)
potential_hosts = numpy.nan_to_num(data.as_matrix()[indices][:, 1:-1].astype(float))
subject = crowdastro.data.db.radio_subjects.find_one({'_id': bson.objectid.ObjectId(subject_id.decode('ascii'))})
radio = crowdastro.data.get_radio(subject, size='5x5')
radio_patches = numpy.zeros((len(potential_hosts), 1, radius * 2, radius * 2))
for index, (x, y, *astro) in enumerate(potential_hosts):
patch = radio[x - radius + padding : x + radius + padding, y - radius + padding : y + radius + padding]
radio_patches[index, 0, :] = patch
radio_features = cnn.predict(radio_patches) @ pca.T
astro_features = potential_hosts[:, 2:]
features = numpy.hstack([radio_features, astro_features])
scores = lr.predict_proba(features)[:, 1].T
probs = softmax(scores)
crowdastro.show.subject(subject)
matplotlib.pyplot.scatter(potential_hosts[:, 0], potential_hosts[:, 1], c=probs)
matplotlib.pyplot.show()
```
This is quite interesting! Lots of points (blue) are not really considered, and sometimes there are a few candidates (red). These usually look pretty reasonable, but it also seems a lot like the predictor is just looking for bright things.
Let's try and get an accuracy out. There is still the problem of multiple radio sources, so I'll just say that if the predictor hits *any* true host, that's a hit.
```
hits = 0
attempts = 0
for subject_id in subject_ids:
indices = (data['subject_id'] == subject_id).as_matrix().reshape(-1)
potential_hosts = numpy.nan_to_num(data.as_matrix()[indices][:, 1:-1].astype(float))
labels = numpy.nan_to_num(data.as_matrix()[indices][:, -1].astype(bool))
subject = crowdastro.data.db.radio_subjects.find_one({'_id': bson.objectid.ObjectId(subject_id.decode('ascii'))})
radio = crowdastro.data.get_radio(subject, size='5x5')
radio_patches = numpy.zeros((len(potential_hosts), 1, radius * 2, radius * 2))
for index, (x, y, *astro) in enumerate(potential_hosts):
patch = radio[x - radius + padding : x + radius + padding, y - radius + padding : y + radius + padding]
radio_patches[index, 0, :] = patch
radio_features = cnn.predict(radio_patches) @ pca.T
astro_features = potential_hosts[:, 2:]
features = numpy.hstack([radio_features, astro_features])
scores = lr.predict_proba(features)[:, 1].reshape(-1)
predicted_host = scores.argmax()
if labels[predicted_host]:
hits += 1
attempts += 1
print('Accuracy: {:.02%}'.format(hits / attempts))
```
Against a random classifier...
```
hits = 0
attempts = 0
for subject_id in subject_ids:
indices = (data['subject_id'] == subject_id).as_matrix().reshape(-1)
potential_hosts = numpy.nan_to_num(data.as_matrix()[indices][:, 1:-1].astype(float))
labels = numpy.nan_to_num(data.as_matrix()[indices][:, -1].astype(bool))
subject = crowdastro.data.db.radio_subjects.find_one({'_id': bson.objectid.ObjectId(subject_id.decode('ascii'))})
radio = crowdastro.data.get_radio(subject, size='5x5')
radio_patches = numpy.zeros((len(potential_hosts), 1, radius * 2, radius * 2))
for index, (x, y, *astro) in enumerate(potential_hosts):
patch = radio[x - radius + padding : x + radius + padding, y - radius + padding : y + radius + padding]
radio_patches[index, 0, :] = patch
radio_features = cnn.predict(radio_patches) @ pca.T
astro_features = potential_hosts[:, 2:]
features = numpy.hstack([radio_features, astro_features])
scores = dc.predict_proba(features)[:, 1].reshape(-1)
predicted_host = scores.argmax()
if labels[predicted_host]:
hits += 1
attempts += 1
print('Accuracy: {:.02%}'.format(hits / attempts))
```
It would also be useful to know what the classifier considers "hard" to classify. I think an entropy approach might work (though there are problems with this...). Let's find the highest-entropy subject.
```
max_entropy = float('-inf')
max_subject = None
for subject_id in subject_ids:
indices = (data['subject_id'] == subject_id).as_matrix().reshape(-1)
potential_hosts = numpy.nan_to_num(data.as_matrix()[indices][:, 1:-1].astype(float))
labels = numpy.nan_to_num(data.as_matrix()[indices][:, -1].astype(bool))
subject = crowdastro.data.db.radio_subjects.find_one({'_id': bson.objectid.ObjectId(subject_id.decode('ascii'))})
radio = crowdastro.data.get_radio(subject, size='5x5')
radio_patches = numpy.zeros((len(potential_hosts), 1, radius * 2, radius * 2))
for index, (x, y, *astro) in enumerate(potential_hosts):
patch = radio[x - radius + padding : x + radius + padding, y - radius + padding : y + radius + padding]
radio_patches[index, 0, :] = patch
radio_features = cnn.predict(radio_patches) @ pca.T
astro_features = potential_hosts[:, 2:]
features = numpy.hstack([radio_features, astro_features])
probabilities = softmax(lr.predict_proba(features)[:, 1].reshape(-1))
entropy = -(probabilities * numpy.log(probabilities)).sum()
if entropy > max_entropy:
max_entropy = entropy
max_subject = subject
crowdastro.show.subject(max_subject)
indices = (data['subject_id'] == str(max_subject['_id']).encode('ascii')).as_matrix().reshape(-1)
potential_hosts = numpy.nan_to_num(data.as_matrix()[indices][:, 1:-1].astype(float))
subject = max_subject
radio = crowdastro.data.get_radio(subject, size='5x5')
radio_patches = numpy.zeros((len(potential_hosts), 1, radius * 2, radius * 2))
for index, (x, y, *astro) in enumerate(potential_hosts):
patch = radio[x - radius + padding : x + radius + padding, y - radius + padding : y + radius + padding]
radio_patches[index, 0, :] = patch
radio_features = cnn.predict(radio_patches) @ pca.T
astro_features = potential_hosts[:, 2:]
features = numpy.hstack([radio_features, astro_features])
scores = lr.predict_proba(features)[:, 1].T
probs = softmax(scores)
crowdastro.show.subject(subject)
matplotlib.pyplot.scatter(potential_hosts[:, 0], potential_hosts[:, 1], c=probs)
matplotlib.pyplot.show()
matplotlib.pyplot.plot(sorted(probs), marker='x')
```
# Self-Driving Car Engineer Nanodegree
## Project: **Finding Lane Lines on the Road**
***
In this project, the lane lines on the road are detected using Canny edge detection and Hough transform line detection. I also use the HSL color space, grayscaling, color selection and Gaussian smoothing to reduce noise in the pictures and videos. To achieve optimal performance, the detection code keeps a memory of the lanes found in previous frames, so the result is smooth. The code is verified on test pictures and videos, and performs well on the challenge video, which has a curved lane and shadows on the ground. All picture results are in the folder 'test_images_output'. Video outputs are in 'test_videos_output'.
Example picture output:
---
<figure>
<img src="test_images/solidWhiteRight.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Original Image </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="test_images_output/solidWhiteRight.png" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Lane Detection Result</p>
</figcaption>
</figure>
## Python Code:
## Import Packages
```
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
from scipy import stats
%matplotlib inline
```
## Read in an Image
```
#reading in an image
image_sWR = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats.
print('This image is:', type(image_sWR), 'with dimensions:', image_sWR.shape)
```
Some important functions:
- `find_hough_lines`: separates the left-lane and right-lane points
- `linear_regression_left` / `linear_regression_right`: use linear regression to extrapolate each lane
- `create_lane_list`: creates a deque to store lanes from previous frames
## Lane finding functions
```
import math
from collections import deque
def find_hough_lines(img,lines):
# Separate left and right lane points based on slope and position
xl = []
yl = []
xr = []
yr = []
middel_x = img.shape[1]/2
for line in lines:
for x1,y1,x2,y2 in line:
if ((y2-y1)/(x2-x1))<0 and ((y2-y1)/(x2-x1))>-math.inf and x1<middel_x and x2<middel_x:
xl.append(x1)
xl.append(x2)
yl.append(y1)
yl.append(y2)
elif ((y2-y1)/(x2-x1))>0 and ((y2-y1)/(x2-x1))<math.inf and x1>middel_x and x2>middel_x:
xr.append(x1)
xr.append(x2)
yr.append(y1)
yr.append(y2)
return xl, yl, xr, yr
def linear_regression_left(xl,yl):
# Extrapolate left lane
slope_l, intercept_l, r_value_l, p_value_l, std_err = stats.linregress(xl, yl)
return slope_l, intercept_l
def linear_regression_right(xr,yr):
# Extrapolate right lane
slope_r, intercept_r, r_value_r, p_value_r, std_err = stats.linregress(xr, yr)
return slope_r, intercept_r
def create_lane_list():
# Use deque to store previous lanes
return deque(maxlen = 15)
def left_lane_mean(left_lane_que):
# Derive mean parameters of left lane based on memory
if len(left_lane_que) == 0:
return 0,0
slope_l_mean , intercept_l_mean = np.mean(left_lane_que,axis=0)
return slope_l_mean, intercept_l_mean
def right_lane_mean(right_lane_que):
# Derive mean parameters of right lane based on memory
if len(right_lane_que) == 0:
return 0,0
slope_r_mean , intercept_r_mean = np.mean(right_lane_que,axis=0)
return slope_r_mean, intercept_r_mean
def left_lane_add(left_lane_que,slope_l, intercept_l):
# Add left lane to memory
left_lane_que.append([slope_l,intercept_l])
return left_lane_que
def right_lane_add(right_lane_que,slope_r, intercept_r):
# Add right lane to memory
right_lane_que.append([slope_r,intercept_r])
return right_lane_que
def grayscale(img):
# Convert image to grayscale
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
def canny(img, low_threshold, high_threshold):
#Applies the Canny transform
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
#Applies a Gaussian Noise kernel
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img):
# Defining a blank mask to start with
mask = np.zeros_like(img)
# Defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
vertices = get_vertices_for_img(img)
# Filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
# Returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, intercept_l, slope_l,intercept_r, slope_r, xl, xr,color=[255, 0, 0], thickness=10):
# Draw lines based on mean intercept and slope
max_y = img.shape[0]
yl_LR = []
yr_LR = []
for x in xl:
yl_LR.append(intercept_l+slope_l*x)
for x in xr:
yr_LR.append(intercept_r+slope_r*x)
x_left_bottom = (max_y - intercept_l)/slope_l
x_right_bottom = (max_y - intercept_r)/slope_r
cv2.line(img, (int(x_left_bottom), int(max_y)), (int(max(xl)), int(min(yl_LR))), color, thickness)
cv2.line(img, (int(x_right_bottom), int(max_y)), (int(min(xr)), int(min(yr_LR))), color, thickness)
return img
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
# Derive Hough lines of the image, this would return the points on the edge
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
return line_img, lines
def weighted_img(img, initial_img, α=0.8, β=1., γ=0.):
# Combine images with weights
return cv2.addWeighted(initial_img, α, img, β, γ)
def isolate_yellow_hsl(img):
# Extract yellow color in the HSL color space.
# We are interested in the yellow lanes on the ground
low_threshold = np.array([15, 38, 115], dtype=np.uint8)
high_threshold = np.array([35, 204, 255], dtype=np.uint8)
yellow_mask = cv2.inRange(img, low_threshold, high_threshold)
return yellow_mask
def isolate_white_hsl(img):
# Extract white color in the HSL color space.
# We are interested in the white lanes on the ground
low_threshold = np.array([0, 200, 0], dtype=np.uint8)
high_threshold = np.array([180, 255, 255], dtype=np.uint8)
white_mask = cv2.inRange(img, low_threshold, high_threshold)
return white_mask
def get_vertices_for_img(img):
# Get the top points of polygon based on the size of image for function 'region_of_interest'
height = img.shape[0]
width = img.shape[1]
if (width, height) == (960, 540):
bottom_left = (130 ,img.shape[0] - 1)
top_left = (410, 330)
top_right = (650, 350)
bottom_right = (img.shape[1] - 30,img.shape[0] - 1)
vert = np.array([[bottom_left , top_left, top_right, bottom_right]], dtype=np.int32)
else:
bottom_left = (200 , 680)
top_left = (600, 450)
top_right = (750, 450)
bottom_right = (1100, 680)
vert = np.array([[bottom_left , top_left, top_right, bottom_right]], dtype=np.int32)
return vert
```
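As a quick illustration of the deque-based memory helpers above (the slope and intercept values below are made up):
```
# Hypothetical usage: add two left-lane fits to the memory and read back the
# mean slope/intercept that the smoothing step would use.
lane_que = create_lane_list()
lane_que = left_lane_add(lane_que, -0.70, 650.0)
lane_que = left_lane_add(lane_que, -0.72, 655.0)
print(left_lane_mean(lane_que))
```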
## Test Images
First, use images to test the lane detection pipeline.
```
import os
# Read in a image list
test_img_dir = 'test_images/'
test_image_names = os.listdir("test_images/")
test_image_names = list(map(lambda name: test_img_dir + name, test_image_names))
```
## Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
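A small helper like the sketch below can make that tuning easier by showing several Canny threshold pairs side by side (hypothetical helper; the threshold pairs are only examples, and `image_sWR` is the image read in earlier):
```
# Hypothetical helper: compare Canny edge maps for several threshold pairs.
def show_canny(image, thresholds=((10, 150), (50, 150), (100, 200))):
    gray = grayscale(image)
    blur = cv2.GaussianBlur(gray, (11, 11), 0)
    fig, axes = plt.subplots(1, len(thresholds), figsize=(15, 5))
    for ax, (low, high) in zip(axes, thresholds):
        ax.imshow(cv2.Canny(blur, low, high), cmap='gray')
        ax.set_title('low={}, high={}'.format(low, high))
        ax.axis('off')
    plt.show()

# Example: show_canny(image_sWR)
```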
```
# Read in images
image_wCLS = mpimg.imread('test_images/whiteCarLaneSwitch.jpg')
image_sYL = mpimg.imread('test_images/solidYellowLeft.jpg')
image_sYC2 = mpimg.imread('test_images/solidYellowCurve2.jpg')
image_sYC = mpimg.imread('test_images/solidYellowCurve.jpg')
image_sWC = mpimg.imread('test_images/solidWhiteCurve.jpg')
image_ch = mpimg.imread('test_images/challenge.jpg')
def Lane_Detect(image):
# Lane detection pipeline
image_hsl = cv2.cvtColor(image, cv2.COLOR_RGB2HLS)
image_yellow = isolate_yellow_hsl(image_hsl)
image_white = isolate_white_hsl(image_hsl)
# Combine white parts and yellow parts in a single pic
image_wy = cv2.bitwise_or(image_yellow,image_white)
# Combine yellow and white masks and original picture to derive the parts we are interested.
# This would reduce the noise and improve the performance if there is shadow on the ground.
image_com = cv2.bitwise_and(image,image,mask=image_wy)
image_gray = grayscale(image_com)
# Smoothing the image
kernal_size = 11
blur_image = cv2.GaussianBlur(image_gray,(kernal_size,kernal_size),0)
# Setup Canny
low_threshold = 10
high_threshold = 150
edges_image = cv2.Canny(blur_image,low_threshold,high_threshold)
# Define range of interest
masked_image = region_of_interest(edges_image)
bland_image, houghLines= hough_lines(masked_image, 1, np.pi/180, 1, 5, 1)
xl,yl,xr,yr = find_hough_lines(bland_image,houghLines)
slope_l, intercept_l = linear_regression_left(xl,yl)
slope_r, intercept_r = linear_regression_right(xr,yr)
hough_image = draw_lines(bland_image, intercept_l, slope_l, intercept_r, slope_r, xl, xr)
Final_image = weighted_img(hough_image,image)
return Final_image
# Process images and save
Final_wCLS = Lane_Detect(image_wCLS)
plt.imsave('test_images_output/whiteCarLaneSwitch.png',Final_wCLS)
Final_sWR = Lane_Detect(image_sWR)
plt.imsave('test_images_output/solidWhiteRight.png',Final_sWR)
Final_sYL = Lane_Detect(image_sYL)
plt.imsave('test_images_output/solidYellowLeft.png',Final_sYL)
Final_sYC2 = Lane_Detect(image_sYC2)
plt.imsave('test_images_output/solidYellowCurve2.png',Final_sYC2)
Final_sYC = Lane_Detect(image_sYC)
plt.imsave('test_images_output/solidYellowCurve.png',Final_sYC)
Final_sWC = Lane_Detect(image_sWC)
plt.imsave('test_images_output/solidWhiteCurve.png',Final_sWC)
Final_ch = Lane_Detect(image_ch)
plt.imsave('test_images_output/challenge.png',Final_ch)
```
## Test on Videos
```
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
# Set thresholds to decide whether a new lane fit should be added to memory
MAXIMUM_SLOPE_DIFF = 0.1
MAXIMUM_INTERCEPT_DIFF = 50.0
class LaneDetectWithMemo:
def __init__(self):
self.left_lane_que = create_lane_list()
self.right_lane_que = create_lane_list()
def LanePipe(self,image):
image_hsl = cv2.cvtColor(image, cv2.COLOR_RGB2HLS)
image_yellow = isolate_yellow_hsl(image_hsl)
image_white = isolate_white_hsl(image_hsl)
# Combine white parts and yellow parts in a single pic
image_wy = cv2.bitwise_or(image_yellow,image_white)
# Combine yellow and white masks and original picture to derive the parts we are interested.
# This would reduce the noise and improve the performance if there is shadow on the ground.
image_com = cv2.bitwise_and(image,image,mask=image_wy)
image_gray = grayscale(image_com)
# Smoothing the image
kernal_size = 11
blur_image = cv2.GaussianBlur(image_gray,(kernal_size,kernal_size),0)
# Setup Canny
low_threshold = 10
high_threshold = 150
edges_image = cv2.Canny(blur_image,low_threshold,high_threshold)
# Define range of interest
masked_image = region_of_interest(edges_image)
bland_image, houghLines= hough_lines(masked_image, 1, np.pi/180, 1, 5, 1)
xl,yl,xr,yr = find_hough_lines(bland_image,houghLines)
slope_l, intercept_l = linear_regression_left(xl,yl)
slope_r, intercept_r = linear_regression_right(xr,yr)
# If the lane diverges too much, then use the mean value in memory to draw the lane
# If the lane is within the threshold, then add it to memory and recalculate the mean value
if len(self.left_lane_que) == 0 and len(self.right_lane_que) == 0:
self.left_lane_que = left_lane_add(self.left_lane_que, slope_l, intercept_l)
self.right_lane_que = right_lane_add(self.right_lane_que, slope_r, intercept_r)
slope_l_mean, intercept_l_mean = left_lane_mean(self.left_lane_que)
slope_r_mean, intercept_r_mean = right_lane_mean(self.right_lane_que)
else:
slope_l_mean, intercept_l_mean = left_lane_mean(self.left_lane_que)
slope_r_mean, intercept_r_mean = right_lane_mean(self.right_lane_que)
slope_l_diff = abs(slope_l-slope_l_mean)
intercept_l_diff = abs(intercept_l-intercept_l_mean)
slope_r_diff = abs(slope_r-slope_r_mean)
intercept_r_diff = abs(intercept_r-intercept_r_mean)
if intercept_l_diff < MAXIMUM_INTERCEPT_DIFF and slope_l_diff < MAXIMUM_SLOPE_DIFF:
self.left_lane_que = left_lane_add(self.left_lane_que, slope_l, intercept_l)
slope_l_mean, intercept_l_mean = left_lane_mean(self.left_lane_que)
if intercept_r_diff < MAXIMUM_INTERCEPT_DIFF and slope_r_diff < MAXIMUM_SLOPE_DIFF:
self.right_lane_que = right_lane_add(self.right_lane_que, slope_r, intercept_r)
slope_r_mean, intercept_r_mean = right_lane_mean(self.right_lane_que)
hough_image = draw_lines(bland_image, intercept_l_mean, slope_l_mean,intercept_r_mean, slope_r_mean, xl, xr)
Final_image = weighted_img(hough_image,image)
return Final_image
# Test on the first video, with a solid white lane on the right
LaneDetect_1 = LaneDetectWithMemo()
white_output = 'test_videos_output/solidWhiteRight.mp4'
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(LaneDetect_1.LanePipe) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
# Now for the one with the solid yellow lane on the left. This one's more tricky!
LaneDetect_2 = LaneDetectWithMemo()
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(LaneDetect_2.LanePipe)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
```
## Optional Challenge
This video has a curved lane and shadows on the ground. In the future I would use a polynomial to represent the lane instead of a single straight line (see the sketch after the code below). The shadow handling is improved by extracting the yellow and white regions of the picture and combining them with the original image, which keeps only the parts we are interested in.
```
LaneDetect_ch = LaneDetectWithMemo()
challenge_output = 'test_videos_output/challenge.mp4'
clip3 = VideoFileClip('test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(LaneDetect_ch.LanePipe)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
```
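As a rough sketch of the polynomial idea mentioned above (assuming the left-lane Hough points `xl`, `yl` from `find_hough_lines` are available), a second-order fit could replace the straight-line regression:
```
# Hypothetical second-order fit for a curved left lane (sketch only).
left_coeffs = np.polyfit(xl, yl, 2)      # coefficients, highest power first
left_curve = np.poly1d(left_coeffs)      # callable polynomial y(x)
xs = np.linspace(min(xl), max(xl), 50)
ys = left_curve(xs)
# The sampled (xs, ys) points could then be drawn with cv2.polylines instead of cv2.line.
```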
## Power analysis for: Reproducibility of cerebellum atrophy involvement in advanced ET.
1. Working with only the MNI dataset will result in underpowered research:
post-hoc power analysis with alpha=0.05, ET=38, NC=32 and effect size 0.61 (the literature median), for both 1-sided and 2-sided tests;
2. Increasing power: the number of matched NC subjects needed to achieve a power of 0.9 with alpha=0.05 and effect size 0.61 (both 1-sided and 2-sided);
3. Effect sizes from literature research;
4. Power achieved with increasing number of matched NC subjects.
```
from statsmodels.stats import power
import math
from numpy import array
import matplotlib.pyplot as plt
from statsmodels.stats.power import TTestIndPower
# 1. calculate the post-hoc power we can achieve with only MNI dataset
effect_size_expected=0.61; #From later literature review;
alpha_expected=0.05;
power_expected=0.9;
n_et=38; n_nc=32; #Number of subjects in each group before QC.
print('Study with only our MNI cohort will also be underpowered:\n')
# 1-sided test
print('1.1: Power achieved with only MNI dataset for 1-sided test @alpha='+str(alpha_expected)+', et='+str(n_et)+', nc='+str(n_nc)+' and expected effect size='+str(effect_size_expected)+': ')
power_1_mni=power.tt_ind_solve_power(effect_size=effect_size_expected, nobs1=n_et, ratio=n_et/n_nc, alpha=alpha_expected, power=None, alternative='larger')
print(power_1_mni)
# 2-sided test
print('1.2: Power achieved with only MNI dataset for 2-sided test @alpha='+str(alpha_expected)+', et='+str(n_et)+', nc='+str(n_nc)+' and expected effect size='+str(effect_size_expected)+': ')
power_2_mni=power.tt_ind_solve_power(effect_size=effect_size_expected, nobs1=n_et, ratio=n_et/n_nc, alpha=alpha_expected, power=None, alternative='two-sided')
print(power_2_mni)
## 2. Number of matched NC subjects needed for high-power (0.9) research
effect_size_expected=0.61; #From later literature review;
alpha_expected=0.05;
power_expected=0.9;
n_et=38; n_nc=32; #Number of subjects in each group before QC.
# 1-sided test
print('1.3: Number of Controls needed for 1-sided test @ alpha='+str(alpha_expected)+', power='+str(power_expected)+' and effect size='+str(effect_size_expected)+': ')
r_expected=power.tt_ind_solve_power(effect_size=effect_size_expected, nobs1=n_et, alpha=alpha_expected, power=power_expected, ratio=None, alternative='larger')
n_nc_needed = math.ceil(r_expected*n_et)
print(n_nc_needed, ', r=', r_expected, 'n_et=', n_et, ', n_nc=', n_nc_needed, ', total=', math.ceil((r_expected+1)*n_et) )
# 2-sided test
print('1.4: Number of Controls needed (from PPMI) 2-sided, for alpha='+str(alpha_expected)+', power='+str(power_expected)+' and effect size='+str(effect_size_expected)+': ')
r_d_expected=power.tt_ind_solve_power(effect_size=effect_size_expected, nobs1=n_et, alpha=alpha_expected, power=power_expected, ratio=None)
n_nc_needed_d = math.ceil(r_d_expected*n_et)
print(n_nc_needed_d, ', r=', r_d_expected, 'n_et=',n_et, ', n_nc=', n_nc_needed_d, ', total=', math.ceil((r_d_expected+1)*n_et) )
```
## Literature power analysis
```
# basic functions for calculating literature standard effect sizes.
from math import sqrt
from statsmodels.stats import power
import pandas as pd
def cohend_from_sts(n1,m1,s1,n2,m2,s2):
# Cohen's d for independent samples with different sample sizes from basic stats
import numpy as np
from math import sqrt
s1 = s1*s1; s2 = s2*s2;
s = sqrt(((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)) # calculate the pooled standard deviation
d_coh_val = (m1 - m2) / s; # calculate the effect size
#print('Cohens d: %.3f' % d_coh_val)
return d_coh_val
def cohend_from_z(z,n):
# Calculate cohend from z value reported for 2 groups with same number of samples.
d_coh_val = z/sqrt(n);
return d_coh_val
def cohend_from_z2(z, n1, n2):
# Calculate cohend from z value reported for 2 groups with different number of samples.
d_coh_val = z*sqrt(1/n1+1/n2);
return d_coh_val
def cohend_from_p(p,n):
# Calculate cohend from p value reported for 2 groups with same number of samples.
from scipy.stats import norm
z=norm.ppf(1-p)
d_coh_val = cohend_from_z(z, n);
return d_coh_val
def cohend_from_p2(p,n1,n2):
# Calculate cohend from p value reported for 2 groups with different number of samples.
from scipy.stats import norm
z=norm.ppf(1-p)
d_coh_val = cohend_from_z2(z, n1, n2);
return d_coh_val
```
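As a quick illustration of these helpers (using the cerebellar gray-matter means and SDs reported for the Cerasa et al. study in section 3 below):
```
# Example: Cohen's d for cerebellar gray matter, arm-ET (n=27, 89.6±11.1)
# versus NC (n=28, 91.9±8.2); values taken from the table in section 3.
d_example = cohend_from_sts(27, 89.6, 11.1, 28, 91.9, 8.2)
print("Cohen's d: %.3f" % d_example)
# Power of a 2-sided t-test at alpha=0.05 with these group sizes:
print(power.tt_ind_solve_power(effect_size=abs(d_example), nobs1=27, ratio=28/27, alpha=0.05))
```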
### 1. [Benito-León, et al. “Brain Structural Changes in Essential Tremor: Voxel-Based Morphometry at 3-Tesla.” Journal of the Neurological Sciences (December 15, 2009)](https://pubmed.ncbi.nlm.nih.gov/19717167/)
- Study type: VBM (peak z-score)
- Multiple comparison correction: No, with P=0.001
- covariates: age, gender and eTIV
- Study groups: **ET** (19=10+9, 69.8±9.4) versus **NC** (20=10+10, 68.9±10.0);\
- Reported ROIs: bilateral cerebellum, bilateral parietal lobes, right frontal lobe, and right insula.
```
### paper1
# only 2/11 has enough power
p1_n_et=19; p1_n_nc=20; p = 0.001;
p1_roi=['wm_Left_medulla', 'wm_Right_cerebellum_anterior_lobe', 'wm_Right_parietal_lobe_postcentral_gyrus', 'wm_Right_limbic_lobe_uncus',
'Right_frontal_lobe_MFG','Right_parietal_lobe_precuneus','Left_parietal_lobe_precuneus', 'Right_insula',
'Left_cerebellum_anterior_lobe', 'Right_cerebellum_anterior_lobe', 'Left_cerebellum_posterior_lobe', 'Left_cerebellum_posterior_lobe'];
p1_z=[3.89, 2.96, 4.36, 4.48, 4.25, 5.09, 4.33, 5.50, 3.31, 4.19, 3.71, 3.72];
p1_cohend = [cohend_from_z2(x, p1_n_et, p1_n_nc) for x in p1_z];
p1_samples_needed = [power.tt_ind_solve_power(effect_size=x, alpha=p, power=power_expected) for x in p1_cohend];
p1_power_achieved = [power.tt_ind_solve_power(effect_size=x, nobs1=p1_n_et, alpha=p, ratio=p1_n_nc/p1_n_et) for x in p1_cohend];
# No alternative='larger' here: the VBM map tests for differences, i.e. a 2-sided test.
p1_res={"VBM_Region":p1_roi,"z-value":p1_z,"Cohen d":p1_cohend, "total n": p1_n_et+p1_n_nc, "Samples needed ("+str(p)+")":p1_samples_needed, "Power achieved with ET/NC("+str(p1_n_et)+"/"+str(p1_n_nc)+")":p1_power_achieved}
p1_df=pd.DataFrame(p1_res)
print("Benito-León paper power analysis with p=0.001 and ET/NC=19/20:\n")
print("The mean effect size of this research is: ")
display(p1_df['Cohen d'].describe())
display(p1_df)
```
### 2. [Bagepally, et al. “Decrease in Cerebral and Cerebellar Gray Matter in Essential Tremor: A Voxel-Based Morphometric Analysis under 3T MRI.” Journal of Neuroimaging (2012)](https://onlinelibrary.wiley.com/doi/full/10.1111/j.1552-6569.2011.00598.x?casa_token=FOs-GPZVoYAAAAAA%3AvQjMw6X0zV0MAnziTsMzUijUvWvH1MwFDb1wMjB_DLsECHUX1G5eJLcSPtmmurrKbxMNQoiGPEXILHY)
**No t or z values reported, skipped.**
Study type: Surface based analysis
Multiple comparison correction: No, with P=0.001
covariates: age, gender, age at onset, and eICV
Study groups: **ET** (19=15+5, 38.2±16.5) versus **NC** (17=14+3, 40.7±16.5); (stating age and sex matched)
Reported ROIs: bilateral cerebellum, bilateral parietal lobes, right frontal lobe, and right insula.
### 3. [Cerasa, A., et al. “Cerebellar Atrophy in Essential Tremor Using an Automated Segmentation Method.” American Journal of Neuroradiology (June 1, 2009)](http://www.ajnr.org/content/30/6/1240)
Study type: FreeSurfer segmentations, subcortical volumes
Multiple comparison correction: Bonferroni corrected but no significant results.
covariates: eTIV
Study groups: **arm-ET** (27=17+10, 65.0±12.8), **head-ET** (19=6+13, 70.7±7.8) and **NC** (28=14+14, 66.5±7.8); (stating age and sex matched for ET and NC but not for sub-group comparison.)
Reported ROIs: Cerebellar gray p<0.02 and white matter p<0.01 (in exploratory analysis without multiple comparison).
```
# paper3
p3_n_arm_et=27; p3_n_head_et=19; p3_n_nc=28; p = 0.05;
p3_roi=['ICV', 'Cortical gray matter', 'Cortical white matter', 'Cerebellar gray matter',
'Cerebellar white matter']
p3_m_arm_et = [1434.7, 413.5, 385.3, 89.6, 23.9];
p3_s_arm_et = [127.5, 49.5, 57.1, 11.1, 3];
p3_m_head_et = [1375.8, 393.8, 358.9, 86, 23.5];
p3_s_head_et = [119.7, 30.5, 41.1, 7.1, 3.3];
p3_m_nc = [1411.9, 404.1, 384.6, 91.9, 25.7];
p3_s_nc = [122.6, 32.6, 41.9, 8.2, 4.2];
p3_g_arm_cohend=[]; p3_g_head_cohend=[]
for i in range(len(p3_roi)):
p3_g_arm_cohend.append(cohend_from_sts(p3_n_arm_et,p3_m_arm_et[i],p3_s_arm_et[i],
p3_n_nc,p3_m_nc[i],p3_s_nc[i]));
p3_g_head_cohend.append(cohend_from_sts(p3_n_head_et,p3_m_head_et[i],p3_s_head_et[i],
p3_n_nc,p3_m_nc[i],p3_s_nc[i]));
p3_g_arm_samples_needed = [power.tt_ind_solve_power(effect_size=x, alpha=p, power=power_expected) for x in p3_g_arm_cohend];
p3_g_arm_power_achieved = [power.tt_ind_solve_power(effect_size=x, nobs1=p3_n_arm_et, alpha=p, ratio=p3_n_nc/p3_n_arm_et, alternative='smaller') for x in p3_g_arm_cohend];
p3_g_head_samples_needed = [power.tt_ind_solve_power(effect_size=x, alpha=p, power=power_expected) for x in p3_g_head_cohend];
p3_g_head_power_achieved = [power.tt_ind_solve_power(effect_size=x, nobs1=p3_n_head_et, alpha=p, ratio=p3_n_nc/p3_n_head_et, alternative='smaller') for x in p3_g_head_cohend];
p3_g_arm_res={"FS_Region":p3_roi,"Cohen d":p3_g_arm_cohend,"total n": p3_n_arm_et+p3_n_nc,"Samples needed ("+str(p)+")":p3_g_arm_samples_needed,
"Power achieved with ET/NC("+str(p3_n_arm_et)+"/"+str(p3_n_nc)+")":p3_g_arm_power_achieved}
p3_g_arm_df=pd.DataFrame(p3_g_arm_res)
print("Cerasa A. paper power analysis with p=0.05 and arm-ET/NC=27/28:\n")
print("The mean cerebellar effect size of this research is: ")
display(p3_g_arm_df['Cohen d'][3:].describe())
display(p3_g_arm_df)
print('\n')
p3_g_head_res={"FS_Region":p3_roi,"Cohen d":p3_g_head_cohend,"total n": p3_n_head_et+p3_n_nc,"Samples needed ("+str(p)+")":p3_g_head_samples_needed,
"Power achieved with ET/NC("+str(p3_n_head_et)+"/"+str(p3_n_nc)+")":p3_g_head_power_achieved}
p3_g_head_df=pd.DataFrame(p3_g_head_res)
print("Cerasa A. paper power analysis with p=0.05 and head-ET/NC=19/28:\n")
print("The mean cerebellar effect size of this research is: ")
display(p3_g_head_df['Cohen d'][3:].describe())
display(p3_g_head_df)
# none of the results shows enough power.
```
### 4. [Bhalsing, K. S., et al. “Association between Cortical Volume Loss and Cognitive Impairments in Essential Tremor.” European Journal of Neurology 21, no. 6 (2014).](https://onlinelibrary.wiley.com/doi/abs/10.1111/ene.12399)
**We have no cognitive impairment data, skipped.**
Study type: VBM
Multiple comparison correction: Bonferroni corrected.
covariates: eTIV
Study groups: **ET** (25=19+6, 45.0±10.7) and **NC** (28=14+14, 45.4±10.7); (stating age and sex matched for ET and NC but not for sub-group comparison.)
Reported ROIs: Cognitive impairments were shown to correlate with GMV in the frontal parietal lobes, cingulate and insular cortices and cerebellum posterior lobe.
### 5. [Quattrone A, Cerasa A, Messina D, Nicoletti G, Hagberg GE, Lemieux L, Novellino F, Lanza P, Arabia G, Salsone M. Essential head tremor is associated with cerebellar vermis atrophy: a volumetric and voxel-based morphometry MR imaging study. American journal of neuroradiology. 2008 Oct 1;29(9):1692-7.](http://www.ajnr.org/content/29/9/1692.short)
Study type: VBM.
Multiple comparison correction: Bonferroni.
covariates: age, sex, eTIV
Study groups: familial **ET** (50=24+26, 65.2±14.3) and **NC** (32=16+16, 66.2±8.1, arm-ET: 18/12, 61.5±16.5; head-ET: 6/14, 70.6±7.6); (stating age and sex matched for ET and NC but not for sub-group comparison.)
Reported ROIs: No significant cerebellar atrophy was found in the whole ET group with respect to healthy subjects with VBM (right cerebellar clusters, right insula, right hippocampus). Vermis lobule IV can distinguish the 3 sub-groups. h-ET showed significant cerebellar atrophy at the level of the **anterior lobe**, with a marked atrophy of the vermis and partially of the paravermal regions with respect to controls.
```
# paper7
p7_n_arm_et=30; p7_n_head_et=20; p7_n_nc=32; p = 0.05;
p7_roi=['Midsagittal vermal area', 'Anterior lobule area', 'Posterior sup. lobule area', 'Posterior inf. lobule area'];
p7_m_arm_et = [849.8, 373.7, 201.1, 274.9];
p7_s_arm_et = [124.6, 53.9, 37.4, 56.6];
p7_m_head_et = [790.3, 343.8, 195.8, 250.6];
p7_s_head_et = [94.5, 37.9, 37.1, 43.1];
p7_m_nc = [898.6, 394.5, 209.7, 294.3];
p7_s_nc = [170.6, 74.6, 47.3, 69.5];
p7_cohend_arm_et=[]; p7_cohend_head_et=[];
for i in range(len(p7_roi)):
p7_cohend_arm_et.append(cohend_from_sts(p7_n_arm_et,p7_m_arm_et[i],p7_s_arm_et[i],p7_n_nc,p7_m_nc[i],p7_s_nc[i]));
p7_cohend_head_et.append(cohend_from_sts(p7_n_head_et,p7_m_head_et[i],p7_s_head_et[i],p7_n_nc,p7_m_nc[i],p7_s_nc[i]));
p7_samples_needed_arm_et = [power.tt_ind_solve_power(effect_size=x, alpha=p, power=power_expected) for x in p7_cohend_arm_et];
p7_power_achieved_arm_et = [power.tt_ind_solve_power(effect_size=x, nobs1=p7_n_arm_et, alpha=p, ratio=p7_n_arm_et/p7_n_nc, alternative='smaller') for x in p7_cohend_arm_et];
p7_samples_needed_head_et = [power.tt_ind_solve_power(effect_size=x, alpha=p, power=power_expected) for x in p7_cohend_head_et];
p7_power_achieved_head_et = [power.tt_ind_solve_power(effect_size=x, nobs1=p7_n_head_et, alpha=p, ratio=p7_n_head_et/p7_n_nc, alternative='smaller') for x in p7_cohend_head_et];
p7_arm_et_res={"ROI_Region":p7_roi,"Cohen d":p7_cohend_arm_et,"total n": p7_n_arm_et+p7_n_nc,"Samples needed ("+str(p)+")":p7_samples_needed_arm_et,
"Power achieved with armET/NC("+str(p7_n_arm_et)+"/"+str(p7_n_nc)+")":p7_power_achieved_arm_et}
p7_arm_et_df=pd.DataFrame(p7_arm_et_res)
print("Quattrone A. paper power analysis with p=0.05 and arm ET/NC=30/32:\n")
print("The mean cerebellar effect size of this research is: ")
display(p7_arm_et_df['Cohen d'].describe())
display(p7_arm_et_df)
print('\n')
p7_head_et_res={"ROI_Region":p7_roi,"Cohen d":p7_cohend_head_et,"total n": p7_n_head_et+p7_n_nc,"Samples needed ("+str(p)+")":p7_samples_needed_head_et,
"Power achieved with headET/NC ("+str(p7_n_head_et)+"/"+str(p7_n_nc)+")":p7_power_achieved_head_et}
p7_head_et_df=pd.DataFrame(p7_head_et_res)
print("Quattrone A. paper power analysis with p=0.05 and head ET/NC=20/32:\n")
print("The mean cerebellar effect size of this research is: ")
display(p7_head_et_df['Cohen d'].describe())
display(p7_head_et_df)
# None of the results shows enough power.
```
### 6. [Shin H, Lee DK, Lee JM, Huh YE, Youn J, Louis ED, Cho JW. Atrophy of the cerebellar vermis in essential tremor: segmental volumetric MRI analysis. The Cerebellum. 2016 Apr 1;15(2):174-81.](https://link.springer.com/content/pdf/10.1007/s12311-015-0682-8.pdf)
Study type: Cerebellar segmentation (28 lobules).
Multiple comparison correction: Bonferroni for groups.
covariates: eTIV
Study groups: **ET** (39=23+16, 63.7±13.0) and **NC** (36=19+17, 65.3±6.8, cerebellar-ET: 12/8, 66.4±13.4; classic-ET: 11/8, 60.9±12.2); (stating age and sex matched for ET and NC but not for sub-group comparison.)
Reported ROIs: volume ratio/eTIV, **vermis VI**, vermis VIIAt.
```
# paper5
p5_n_cere_et=20; p5_n_classic_et=19; p5_n_et=p5_n_cere_et+p5_n_classic_et; p5_n_nc=36; p = 0.05;
p5_roi=['cerebellar volume', 'Vermis VI', 'Vermis VIIAt'];
p5_m_et = [0.0818, 0.0030, 0.0008];
p5_s_et = [0.0071, 0.0006, 0.0004];
p5_m_cere_et = [0.0813, 0.0028, 0.0008];
p5_s_cere_et = [0.0059, 0.0006, 0.0002];
p5_m_classic_et = [0.0824, 0.0032, 0.0010];
p5_s_classic_et = [0.0084, 0.0004, 0.0005];
p5_m_nc = [0.0833, 0.0033, 0.0009];
p5_s_nc = [0.0065, 0.0006, 0.0003];
p5_g_et_cohend=[]; p5_cere_class_cohend=[]
for i in range(len(p5_roi)):
p5_g_et_cohend.append(cohend_from_sts(p5_n_et,p5_m_et[i],p5_s_et[i],p5_n_nc,p5_m_nc[i],p5_s_nc[i]));
p5_cere_class_cohend.append(cohend_from_sts(p5_n_cere_et,p5_m_cere_et[i],p5_s_cere_et[i],
p5_n_classic_et,p5_m_classic_et[i],p5_s_classic_et[i]));
p5_g_et_samples_needed = [power.tt_ind_solve_power(effect_size=x, alpha=p, power=power_expected) for x in p5_g_et_cohend];
p5_g_et_power_achieved = [power.tt_ind_solve_power(effect_size=x, nobs1=p5_n_et, alpha=p, ratio=p5_n_et/p5_n_nc, alternative='smaller') for x in p5_g_et_cohend];
p5_g_cere_samples_needed = [power.tt_ind_solve_power(effect_size=x, alpha=p, power=power_expected) for x in p5_cere_class_cohend];
p5_g_cere_power_achieved = [power.tt_ind_solve_power(effect_size=x, nobs1=p5_n_cere_et, alpha=p, ratio=p5_n_cere_et/p5_n_classic_et, alternative='smaller') for x in p5_cere_class_cohend];
p5_g_et_res={"ROI_Region":p5_roi,"Cohen d":p5_g_et_cohend,"total n": p5_n_et+p5_n_nc,"Samples needed ("+str(p)+")":p5_g_et_samples_needed,
"Power achieved with ET/NC("+str(p5_n_et)+"/"+str(p5_n_nc)+")":p5_g_et_power_achieved}
p5_g_et_df=pd.DataFrame(p5_g_et_res)
print("Shin H. paper power analysis with p=0.05 and ET/NC=39/36:\n")
print("The mean cerebellar effect size of this research is: ")
display(p5_g_et_df['Cohen d'].describe())
display(p5_g_et_df)
print('\n')
p5_g_cere_res={"ROI_Region":p5_roi,"Cohen d":p5_cere_class_cohend,"total n": p5_n_cere_et+p5_n_classic_et,"Samples needed ("+str(p)+")":p5_g_cere_samples_needed,
"Power achieved with cerebellarET/classicET ("+str(p5_n_cere_et)+"/"+str(p5_n_classic_et)+")":p5_g_cere_power_achieved}
p5_g_cere_df=pd.DataFrame(p5_g_cere_res)
print("Shin H. paper power analysis with p=0.05 and cerebellarET/classicET=20/19:\n")
print("The mean cerebellar effect size of this research is: ")
display(p5_g_cere_df['Cohen d'].describe())
display(p5_g_cere_df)
# None of the results show enough power.
```
### 7. [Dyke JP, Cameron E, Hernandez N, Dydak U, Louis ED. Gray matter density loss in essential tremor: a lobule by lobule analysis of the cerebellum. Cerebellum & ataxias. 2017 Dec;4(1):1-7.](https://cerebellumandataxias.biomedcentral.com/articles/10.1186/s40673-017-0069-3)
Study type: Cerebellar segmentation (43 lobules, SUIT).
Multiple comparison correction: Benjamini-Hochberg False Discovery Rate procedure (BH FDR)@ alpha=0.1.
covariates: age, gender, MOCA score and group, no eTIV.
Study groups: **ET** (47=24+23, 76.0±6.8, head ET, voice ET and arm ET) and **NC** (36=10+26, 73.2±6.7); (sex not matched, did not give details of subgroups.)
Reported ROIs: %GM density differences (dpa equivalent). For head ET: Right_IX, Left_V, Left_VIIIa, Left_IX, Vermis_VIIb, Left_VIIb, Left_X, Left_I_IV and Right_V. For voice ET: Right_IX, Vermis_VIIb, Left_IX, Left_V, Left_X, Vermis_CrusII, Vermis_CrusI, Vermis_VI, Left_I_IV, Vermis_VIIIb and Right_V. Severe tremor (TTS ≥ 23; n = 20) showed no significant decreases compared to controls after correcting for multiple comparisons.
```
# paper6
p6_n_head_et=27; p6_n_voice_et=22; p6_n_nc=36; p = 0.05;
p6_roi_head=['Left_IIV', 'Left_V', 'Left_VIIb', 'Left_VIIIa', 'Left_IX', 'Left_X', 'Right_V', 'Right_IX', 'Vermis_VIIb'];
p6_roi_voice=['Left_IIV', 'Left_V', 'Left_IX', 'Left_X', 'Right_V', 'Right_IX', 'Vermis_CrusI', 'Vermis_CrusII', 'Vermis_VI','Vermis_VIIb', 'Vermis_VIIIb'];
p6_p_head_et = [0.018, 0.004, 0.013, 0.009, 0.010, 0.014, 0.021, 0.001, 0.011];
p6_p_voice_et = [0.025, 0.005, 0.005, 0.008, 0.026, 0.001, 0.016, 0.012, 0.019, 0.004, 0.026];
p6_cohend_head_et = [cohend_from_p2(x,p6_n_head_et,p6_n_nc) for x in p6_p_head_et];
p6_cohend_voice_et = [cohend_from_p2(x,p6_n_voice_et,p6_n_nc) for x in p6_p_voice_et];
p6_sample_needed_head_et = [power.tt_ind_solve_power(effect_size=x, alpha=p, power=power_expected) for x in p6_cohend_head_et];
p6_power_achieved_head_et = [power.tt_ind_solve_power(effect_size=x, nobs1=p6_n_head_et, alpha=p, ratio=p6_n_nc/p6_n_head_et, alternative='larger')
for x in p6_cohend_head_et];
p6_sample_needed_voice_et = [power.tt_ind_solve_power(effect_size=x, alpha=p, power=power_expected) for x in p6_cohend_voice_et];
p6_power_achieved_voice_et = [power.tt_ind_solve_power(effect_size=x, nobs1=p6_n_voice_et, alpha=p, ratio=p6_n_nc/p6_n_voice_et, alternative='larger')
for x in p6_cohend_voice_et];
p6_head_et_res={"ROI_Region":p6_roi_head,"Cohen d":p6_cohend_head_et,"total n": p6_n_head_et+p6_n_nc,"Samples needed ("+str(p)+")":p6_sample_needed_head_et,
"Power achieved with headET/NC("+str(p6_n_head_et)+"/"+str(p6_n_nc)+")":p6_power_achieved_head_et}
p6_head_et_df=pd.DataFrame(p6_head_et_res)
print("Dyke JP. paper power analysis with p=0.05 and head ET/NC=27/36:\n")
print("The mean cerebellar effect size of this research is: ")
display(p6_head_et_df['Cohen d'].describe())
display(p6_head_et_df)
print('\n')
p6_voice_et_res={"ROI_Region":p6_roi_voice,"Cohen d":p6_cohend_voice_et,"total n": p6_n_voice_et+p6_n_nc,"Samples needed ("+str(p)+")":p6_sample_needed_voice_et,
"Power achieved with voiceET/NC ("+str(p6_n_voice_et)+"/"+str(p6_n_nc)+")":p6_power_achieved_voice_et}
p6_voice_et_df=pd.DataFrame(p6_voice_et_res)
print("Dyke JP. paper power analysis with p=0.05 and voice ET/NC=22/36:\n")
print("The mean cerebellar effect size of this research is: ")
display(p6_voice_et_df['Cohen d'].describe())
display(p6_voice_et_df)
# None of the results shows enough power. Largest: Right_IX=0.839158
```
## Summary of literature effect sizes and power.
```
###### Number of samples needed to detect the empirical effect sizes with power of 0.9; and actual sample sizes.
# pool data
pd_roi_lit=pd.concat([p3_g_head_df.loc[3:,['Cohen d','total n']], p3_g_arm_df.loc[3:, ['Cohen d','total n']],
p7_arm_et_df.loc[:,['Cohen d','total n']], p7_head_et_df.loc[:, ['Cohen d','total n']],
p5_g_et_df.loc[:,['Cohen d','total n']], p5_g_cere_df.loc[:, ['Cohen d','total n']],
p6_head_et_df.loc[:,['Cohen d','total n']], p6_voice_et_df.loc[:,['Cohen d','total n']]], ignore_index=True)
pd_roi_lit.loc[:,'Cohen d']=abs(pd_roi_lit.loc[:,'Cohen d']);
pd_vbm_lit=p1_df.loc[:,['Cohen d','total n']]; pd_vbm_lit.loc[:,'Cohen d']=abs(pd_vbm_lit.loc[:,'Cohen d']);
pd_lit=pd.concat([pd_roi_lit, pd_vbm_lit]);
es_lit=round(pd_lit.loc[:,'Cohen d'].median(),2);
print(es_lit)
print('4. Samples needed to achieve power='+str(power_expected)+' for literature claims: \n')
print('The median of the effect size is: ', pd_lit.loc[:, 'Cohen d'].median())
print('ROI Cohens d summary:')
print('The median of the effect size is: ', pd_roi_lit.loc[:, 'Cohen d'].median())
display(pd_roi_lit.loc[:, 'Cohen d'].describe())
print('VBM Cohens d summary:')
print('The median of the effect size is: ', pd_vbm_lit.loc[:, 'Cohen d'].median())
display(pd_vbm_lit.loc[:,'Cohen d'].describe())
# Visualize the literature effect sizes vs. sample size, and calculate the power=0.9 line for our dataset with a fixed 38 ETs and an increasing number of matched NCs.
cohend_lit = array(array(range(54, 400))/100)
n_et=38;
alpha_expected=0.05;
power_expected=0.9;
r_d_expected_list=[ power.tt_ind_solve_power(effect_size=x, nobs1=n_et, alpha=alpha_expected, power=power_expected, ratio=None) for x in cohend_lit];
r_d_expected=power.tt_ind_solve_power(effect_size=es_lit, nobs1=n_et, alpha=alpha_expected, power=power_expected, ratio=None)
n_nc_needed = [math.ceil(x*n_et) for x in r_d_expected_list]
n_total = [x+n_et for x in n_nc_needed]
#Power achieved with Number of borrowed subjects
n_matched=array(array(range(1,300))); n_total_power=[x+n_et for x in n_matched]
power_matched_1side=[ power.tt_ind_solve_power(effect_size=es_lit, nobs1=n_et, alpha=alpha_expected, ratio=x/n_et, alternative='larger') for x in n_matched];
power_matched_2side=[ power.tt_ind_solve_power(effect_size=es_lit, nobs1=n_et, alpha=alpha_expected, ratio=x/n_et) for x in n_matched];
# subplot1: literature effect sizes
fig, ax = plt.subplots(1, 2, figsize=(20, 10))
ax[0].plot(n_total, cohend_lit, 'gray')
ax[0].scatter(pd_roi_lit['total n'], pd_roi_lit['Cohen d'], c='b', marker='x')
ax[0].scatter(pd_vbm_lit['total n'], pd_vbm_lit['Cohen d'], c='g', marker='x')
# print and plot the literature effect size
r_d_lit=power.tt_ind_solve_power(effect_size=es_lit, nobs1=n_et, alpha=alpha_expected, power=power_expected, ratio=None)
n_total_aug=math.ceil((r_d_lit+1)*n_et)
print("literature median effect size: ", es_lit, ', the total number of samples needed: ',n_total_aug)
ax[0].vlines(n_total_aug, ymin=0, ymax=2, colors='r', linestyles='--', label='power=0.9')
# customize axes
ax[0].set_xlim([0, 200]); ax[0].set_ylim([0, 2]);
ax[0].set_ylabel('Effect sizes (Cohen\'s d)',fontsize=20)
ax[0].set_xlabel('Total number of subjects',fontsize=20)
ax[0].set_title(r'Literature effect sizes',fontsize=20)
ax[0].legend(['Power='+str(power_expected)+' from '+str(n_et)+' ET and \nincreasing number of controls','ROI Literature','VBM Literature', 'Power=0.9 and effect size\n(Literature median) ='+str(es_lit)], loc='upper right',fontsize='x-large')
ax[0].text(0.025*200, 0.975*2, '(a)', fontsize=20, verticalalignment='top')
# subplot2: power with increasing number of NC subjects
POW_LIM=[0.2, 1.0]
ax[1].plot(n_total_power, power_matched_1side, 'b')
ax[1].plot(n_total_power, power_matched_2side, 'g')
r_9=power.tt_ind_solve_power(effect_size=es_lit, nobs1=n_et, alpha=alpha_expected, ratio=None, power=0.9)
n_nc_needed=math.ceil(r_9*n_et)
ax[1].vlines(n_total_aug, ymin=POW_LIM[0], ymax=POW_LIM[1], colors='r', linestyles='--', label='power=0.9')
ax[1].set_xlim([0, 200]); ax[1].set_ylim(POW_LIM);
ax[1].set_xlabel('Total number of subjects (including 38 ET)',fontsize=20)
ax[1].set_ylabel('Power', fontsize=20)
ax[1].set_title(r'Power ($\alpha=0.05$, effect size='+str(es_lit)+')',fontsize=20)
ax[1].legend(['1-sided test','2-sided test', str(n_nc_needed)+' matched NCs needed\n for 2-sided test with\n Power='+str(power_expected)], loc='right',fontsize='x-large')
ax[1].text(0.025*200, 0.975*1, '(b)', fontsize=20, verticalalignment='top')
fig.savefig("power_analysis.jpg",dpi=300)
```
### 8. [Mavroudis, I., Petrides, F., Karantali, E., Chatzikonstantinou, S., McKenna, J., Ciobica, A., Iordache, A.-C., Dobrin, R., Trus, C., & Kazis, D. (2021). A Voxel-Wise Meta-Analysis on the Cerebellum in Essential Tremor. Medicina, 57(3), 264.](https://www.mdpi.com/1648-9144/57/3/264)
The power of studies mentioned in Mavroudis's meta analysis paper.
Study type: meta analysis.
```
# New meta analysis added.
sample_et = [36, 9, 45, 17, 47, 27, 14, 32, 19, 19, 20, 14, 25, 50, 19, 10]
sample_nc = [30, 9, 39, 17, 36, 27, 20, 12, 18, 20, 17, 23, 25, 32, 19, 12]
# The Fang paper is not included because it is an rs-fMRI study.
import numpy as np
n_et=np.median(sample_et)
n_nc=np.median(sample_nc)
print('Medians of sample sizes of the mentioned studies (ET/NC): ', n_et, '/', n_nc)
from statsmodels.stats import power
import math
from numpy import array
import matplotlib.pyplot as plt
from statsmodels.stats.power import TTestIndPower
# statistical pre-defined values: expected effect size, alpha and target power
effect_size_expected=0.61;
alpha_expected=0.05;
power_expected=0.9;
# pay attention to 1-sided vs. 2-sided tests
print('Power at median sample sizes (a=0.05|a=0.05/10): ', power.tt_ind_solve_power(effect_size=effect_size_expected, alpha=alpha_expected, nobs1=n_et, ratio=n_nc/n_et),
'|', power.tt_ind_solve_power(effect_size=effect_size_expected, alpha=alpha_expected/10, nobs1=n_et, ratio=n_nc/n_et))
pow_a=[]; pow_a_10=[]
for i in range(len(sample_et)):
pow_a.append(power.tt_ind_solve_power(effect_size=effect_size_expected, alpha=alpha_expected, nobs1=sample_et[i], ratio=sample_nc[i]/sample_et[i]))
pow_a_10.append(power.tt_ind_solve_power(effect_size=effect_size_expected, alpha=alpha_expected/10, nobs1=sample_et[i], ratio=sample_nc[i]/sample_et[i]))
#print([round(x, 4) for x in pow_a], '\n' , [round(x,4) for x in pow_a_10])
print('Medians of power of the mentioned studies (a=0.05|a=0.05/10): ', round(np.median(pow_a),4), '|', round(np.median(pow_a_10),4))
```
# Widgets Demonstration
As well as providing working code that readers can experiment with, the textbook also provides a number of widgets to help explain specific concepts. This page contains a selection of these as an index. Run each cell to interact with the widget.
**NOTE:** You will need to enable interactivity by pressing 'Try' in the bottom left corner of a code cell, or by viewing this page in the [IBM Quantum Experience](https://quantum-computing.ibm.com/jupyter/user/qiskit-textbook/content/widgets-index.ipynb).
### Interactive Code
The most important interactive element of the textbook is the ability to change and experiment with the code. This is possible directly on the textbook webpage, but readers can also view the textbook as Jupyter notebooks where they are able to add more cells and save their changes. Interactive Python code also allows for widgets through [ipywidgets](https://ipywidgets.readthedocs.io/en/latest/), and the rest of this page is dedicated to demonstrating some of the widgets provided by the Qiskit Textbook.
```
# Click 'try' then 'run' to see the output
print("This is code works!")
```
### Gate Demo
This widget shows the effects of a number of gates on a qubit, illustrated through the Bloch sphere. It is used a lot in [Single Qubit Gates](https://qiskit.org/textbook/ch-states/single-qubit-gates.html).
```
from qiskit_textbook.widgets import gate_demo
gate_demo()
```
### Binary Demonstration
This simple widget allows the reader to interact with a binary number. It is found in [The Atoms of Computation](https://qiskit.org/textbook/ch-states/atoms-computation.html).
```
from qiskit_textbook.widgets import binary_widget
binary_widget(nbits=5)
```
### Scalable Circuit Widget
When working with circuits such as those in the [Quantum Fourier Transform Chapter](https://qiskit.org/textbook/ch-algorithms/quantum-fourier-transform.html), it's often useful to see how these scale to different numbers of qubits. If our function takes a circuit (`QuantumCircuit`) and a number of qubits (`int`) as positional inputs, we can see how it scales using the widget below. Try changing the code inside these functions and re-run the cell.
```
from qiskit_textbook.widgets import scalable_circuit
from numpy import pi
def qft_rotations(circuit, n):
"""Performs qft on the first n qubits in circuit (without swaps)"""
if n == 0:
return circuit
n -= 1
circuit.h(n)
for qubit in range(n):
circuit.cp(pi/2**(n-qubit), qubit, n)
# At the end of our function, we call the same function again on
# the next qubits (we reduced n by one earlier in the function)
qft_rotations(circuit, n)
def swap_qubits(circuit, n):
"""Reverse the order of qubits"""
for qubit in range(n//2):
circuit.swap(qubit, n-qubit-1)
return circuit
def qft(circuit, n):
"""QFT on the first n qubits in circuit"""
qft_rotations(circuit, n)
swap_qubits(circuit, n)
return circuit
scalable_circuit(qft)
```
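A second, simpler function with the same `(circuit, n)` signature, shown here only as an illustrative sketch, is a GHZ-state preparation; it can be passed to the widget in exactly the same way.
```
# Another function the widget can display: prepare an n-qubit GHZ state.
def ghz(circuit, n):
    circuit.h(0)
    for qubit in range(n - 1):
        circuit.cx(qubit, qubit + 1)
    return circuit

scalable_circuit(ghz)
```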
### Bernstein-Vazirani Widget
Through this widget, the reader can follow the mathematics through an instance of the [Bernstein-Vazirani algorithm](https://qiskit.org/textbook/ch-algorithms/bernstein-vazirani.html). Press the buttons to apply the different steps of the algorithm. The first argument sets the number of qubits and the second sets the hidden binary string; change them and re-run the cell. You can also reveal the contents of the oracle by setting `hide_oracle=False` and re-running the cell.
```
from qiskit_textbook.widgets import bv_widget
bv_widget(2, "11", hide_oracle=True)
```
### Deutsch-Jozsa Widget
Similar to the Bernstein-Vazirani widget, the Deutsch-Jozsa widget lets the reader follow the mathematics through an instance of the [Deutsch-Jozsa algorithm](https://qiskit.org/textbook/ch-algorithms/deutsch-josza.html). Press the buttons to apply the different steps of the algorithm. `case` can be "balanced" or "constant", and `size` can be "small" or "large". Re-run the cell for a randomly selected oracle. You can also reveal the contents of the oracle by setting `hide_oracle=False` and re-running the cell.
```
from qiskit_textbook.widgets import dj_widget
dj_widget(size="large", case="balanced", hide_oracle=True)
```
# CIFAR-10: Part 2
Welcome back! If you have not completed [Part 1](*), please do so before running the code in this notebook.
In Part 2 we will assume you have the training and testing lmdbs, as well as the trained model .pb files from Part 1. As you may recall from Part 1, we created the dataset in the form of lmdbs then trained a model and saved the trained model in the form of a *predict_net.pb* and an *init_net.pb*. In this notebook, we will show how to test that saved model with the test lmdb and how to continue training to increase our test accuracy.
Recall the objectives of the two part CIFAR-10 tutorial:
**Part 1:**
- Download dataset
- Write images to lmdbs
- Define and train a model with checkpoints
- Save the trained model
**Part 2:**
- Load pre-trained model from Part 1
- Run inference on testing lmdb
- Continue training to improve test accuracy
- Test the retrained model
As before, let's start with some necessary imports.
```
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import numpy as np
import os
import shutil
import operator
import glob
from caffe2.python import core,model_helper,optimizer,workspace,brew,utils
from caffe2.proto import caffe2_pb2
import matplotlib.pyplot as plt
from caffe2.python.modeling import initializers
from caffe2.python.modeling.parameter_info import ParameterTags
```
## Check Inputs
Before we get started, let's make sure you have the necessary Part 1 files. We will use the saved model from the most recent run of Part 1.
```
# Train lmdb
TRAIN_LMDB = os.path.join(os.path.expanduser('~'),"caffe2_notebooks/tutorial_data/cifar10/training_lmdb")
# Test lmdb
TEST_LMDB = os.path.join(os.path.expanduser('~'),"caffe2_notebooks/tutorial_data/cifar10/testing_lmdb")
# Extract protobuf files from most recent Part 1 run
part1_runs_path = os.path.join(os.path.expanduser('~'), "caffe2_notebooks", "tutorial_files", "tutorial_cifar10")
runs = sorted(glob.glob(part1_runs_path + "/*"))
# Init net
INIT_NET = os.path.join(runs[-1], "cifar10_init_net.pb")
# Predict net
PREDICT_NET = os.path.join(runs[-1], "cifar10_predict_net.pb")
# Make sure they all exist
if (not os.path.exists(TRAIN_LMDB)) or (not os.path.exists(TEST_LMDB)) or (not os.path.exists(INIT_NET)) or (not os.path.exists(PREDICT_NET)):
print("ERROR: input not found!")
else:
print("Success, you may continue!")
```
### Repeat Helper Functions
If these functions look familiar, you are correct; they have been copied-and-pasted from Part 1. To summarize, we will need the *AddInputLayer* function to connect our models to the lmdbs, and the *Add_Original_CIFAR10_Model* function to provide the architecture of the network.
```
def AddInputLayer(model, batch_size, db, db_type):
# load the data
#data_uint8, label = brew.db_input(
# model,
# blobs_out=["data_uint8", "label"],
# batch_size=batch_size,
# db=db,
# db_type=db_type,
#)
data_uint8, label = model.TensorProtosDBInput([], ["data_uint8", "label"], batch_size=batch_size, db=db, db_type=db_type)
# cast the data to float
data = model.Cast(data_uint8, "data", to=core.DataType.FLOAT)
# scale data from [0,255] down to [0,1]
data = model.Scale(data, data, scale=float(1./256))
# don't need the gradient for the backward pass
data = model.StopGradient(data, data)
return data, label
def update_dims(height, width, kernel, stride, pad):
new_height = ((height - kernel + 2*pad)//stride) + 1
new_width = ((width - kernel + 2*pad)//stride) + 1
return new_height, new_width
def Add_Original_CIFAR10_Model(model, data, num_classes, image_height, image_width, image_channels):
# Convolutional layer 1
conv1 = brew.conv(model, data, 'conv1', dim_in=image_channels, dim_out=32, kernel=5, stride=1, pad=2)
h,w = update_dims(height=image_height, width=image_width, kernel=5, stride=1, pad=2)
# Pooling layer 1
pool1 = brew.max_pool(model, conv1, 'pool1', kernel=3, stride=2)
h,w = update_dims(height=h, width=w, kernel=3, stride=2, pad=0)
# ReLU layer 1
relu1 = brew.relu(model, pool1, 'relu1')
# Convolutional layer 2
conv2 = brew.conv(model, relu1, 'conv2', dim_in=32, dim_out=32, kernel=5, stride=1, pad=2)
h,w = update_dims(height=h, width=w, kernel=5, stride=1, pad=2)
# ReLU layer 2
relu2 = brew.relu(model, conv2, 'relu2')
# Pooling layer 2
pool2 = brew.average_pool(model, relu2, 'pool2', kernel=3, stride=2)
h,w = update_dims(height=h, width=w, kernel=3, stride=2, pad=0)
# Convolutional layer 3
conv3 = brew.conv(model, pool2, 'conv3', dim_in=32, dim_out=64, kernel=5, stride=1, pad=2)
h,w = update_dims(height=h, width=w, kernel=5, stride=1, pad=2)
# ReLU layer 3
relu3 = brew.relu(model, conv3, 'relu3')
# Pooling layer 3
pool3 = brew.average_pool(model, relu3, 'pool3', kernel=3, stride=2)
h,w = update_dims(height=h, width=w, kernel=3, stride=2, pad=0)
# Fully connected layers
fc1 = brew.fc(model, pool3, 'fc1', dim_in=64*h*w, dim_out=64)
fc2 = brew.fc(model, fc1, 'fc2', dim_in=64, dim_out=num_classes)
# Softmax layer
softmax = brew.softmax(model, fc2, 'softmax')
return softmax
```
## Test Saved Model From Part 1
### Construct Model for Testing
The first thing we need is a model helper object that we can attach the lmdb reader to.
```
# Create a ModelHelper object with init_params=False
arg_scope = {"order": "NCHW"}
test_model = model_helper.ModelHelper(name="test_model", arg_scope=arg_scope, init_params=False)
# Add the data input layer to the model, pointing at the TEST_LMDB
data,_ = AddInputLayer(test_model,1,TEST_LMDB,'lmdb')
```
### Populate the Model Helper with Saved Model Params
To format a model for testing, we do not need to create params in the model helper, nor do we need to add gradient operators as we will only be performing forward passes. All we really need to do is populate the *.net* and *.param_init_net* members of the model helper with the contents of the saved *predict_net.pb* and *init_net.pb*, respectively. To accomplish this, we construct *caffe2_pb* objects with the protobuf from the pb files, create *Net* objects with the *caffe2_pb* objects, then **append** the net objects to the *.net* and *.param_init_net* members of the model helper. Appending is very important here! If we do not append, we would wipe out the input data layer stuff that we just added.
Recall from Part 1, the saved model expected an input named *data* and produced an output called *softmax*. Conveniently (but not accidentally), the *AddInputLayer* function reads from the lmdb and puts the information into the workspace in a blob called *data*. It is also important to remember what each of the saved nets that we are appending to our model contains. The *predict_net* contains the structure of the model, including the ops involved in the forward pass. It has the definitions of the convolutional, pooling, and fc layers in the model. The *init_net* contains the weight initializations for the parameters that the ops in the *predict_net* expect. For example, if there is an op in the *predict_net* named 'fc1', the *init_net* will contain the trained weights (*fc1_w*), and biases (*fc1_b*) for that layer.
After we append the nets, we add an accuracy layer to the model which uses the *softmax* output from the saved model and the *label* input from the lmdb. Note that we could manually fetch the softmax blob from the workspace after every iteration and check whether or not the class with the highest softmax score is the true label, but instead we opt for the simpler accuracy layer.
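For reference, the manual version would look something like the hypothetical helper below (not used in the rest of the notebook):
```
# Manual alternative to the accuracy layer: after a forward pass, fetch the
# softmax and label blobs from the workspace and compare the argmax yourself.
def manual_batch_accuracy():
    softmax_val = workspace.FetchBlob('softmax')
    label_val = workspace.FetchBlob('label')
    return np.mean(np.argmax(softmax_val, axis=1) == label_val)
```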
```
# Populate the model helper obj with the init net stuff, which provides the
# weight initializations for the model
init_net_proto = caffe2_pb2.NetDef()
with open(INIT_NET, "rb") as f:
init_net_proto.ParseFromString(f.read())
test_model.param_init_net = test_model.param_init_net.AppendNet(core.Net(init_net_proto))
# Populate the model helper obj with the predict net stuff, which defines
# the structure of the model
predict_net_proto = caffe2_pb2.NetDef()
with open(PREDICT_NET, "rb") as f:
predict_net_proto.ParseFromString(f.read())
test_model.net = test_model.net.AppendNet(core.Net(predict_net_proto))
# Add an accuracy feature to the model for convenient reporting during testing
accuracy = brew.accuracy(test_model, ['softmax', 'label' ], 'accuracy')
```
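As an optional sanity check (not part of the original workflow), you can peek at the ops contained in the loaded protobufs. The sketch below only assumes the `init_net_proto` and `predict_net_proto` objects constructed above; each op in the init net should produce a trained parameter blob (e.g. *fc1_w*, *fc1_b*), while each op in the predict net corresponds to a forward-pass layer.
```
# Hypothetical inspection of the loaded nets (first few ops only)
for op in init_net_proto.op[:5]:
    print("init net op:    {} -> {}".format(op.type, list(op.output)))
for op in predict_net_proto.op[:5]:
    print("predict net op: {} -> {}".format(op.type, list(op.output)))
```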
### Run Testing
At this point, our model is initialized as the saved model from Part 1. We can now run the testing loop and check the accuracy.
```
# Run the param init net to put the trained model info into the workspace
workspace.RunNetOnce(test_model.param_init_net)
workspace.CreateNet(test_model.net, overwrite=True)
# Stat keeper
avg_accuracy = 0.0
# Number of test iterations to run here, since the full test set is 10k images and the
# batch size is 1, we will run 10000 test batches to cover the entire test set
test_iters = 10000
# Main testing loop
for i in range(test_iters):
workspace.RunNet(test_model.net)
acc = workspace.FetchBlob('accuracy')
avg_accuracy += acc
if (i % 500 == 0) and (i > 0):
print("Iter: {}, Current Accuracy: {}".format(i, avg_accuracy/float(i)))
# Report final test accuracy score as the number of correct predictions divided by 10,000
print("*********************************************")
print("Final Test Accuracy: ",avg_accuracy/float(test_iters))
```
## Continue Training
Our model is performing significantly better than random guessing, but I think we can do a little better with more training. To do this we will:
- create a new model helper
- specify that the train data will come from the training lmdb
- re-define the model architecture with the Add_Original_CIFAR10_Model function
- grab the trained weights and biases from the saved init_net.pb
- resume training
### Construct Model for Re-Training
Here we create a new model helper object for training. Nothing here should look new but take notice that we set **init_params=False**. This is important, as we do not want brew (in *Add_Original_CIFAR10_Model* function) to automatically initialize the params, rather we want to set them ourselves. Once we construct the model helper, we add the input layer and point it to the training lmdb, brew in the model architecture, and finally initialize the parameters by appending the contents of the saved *init_net.pb* to the *.param_init_net* member of the train model.
```
# Number of iterations to train for here
training_iters = 3000
# Reset workspace to clear all of the information from the testing stage
workspace.ResetWorkspace()
# Create new model
arg_scope = {"order": "NCHW"}
train_model = model_helper.ModelHelper(name="cifar10_train", arg_scope=arg_scope, init_params=False)
# Add the data layer to the model
data,_ = AddInputLayer(train_model,100,TRAIN_LMDB,'lmdb')
softmax = Add_Original_CIFAR10_Model(train_model, data, 10, 32, 32, 3)
# Populate the param_init_net of the model obj with the contents of the init net
init_net_proto = caffe2_pb2.NetDef()
with open(INIT_NET, "rb") as f:
init_net_proto.ParseFromString(f.read())
tmp_init_net = core.Net(init_net_proto)
train_model.param_init_net = train_model.param_init_net.AppendNet(tmp_init_net)
```
### Specify Loss Function and Optimizer
We can now proceed as normal by specifying the loss function, adding the gradient operators, and building the optimizer. Here, we opt for the same loss function and optimizer that we used in Part 1.
```
# Add the "training operators" to the model
xent = train_model.LabelCrossEntropy([softmax, 'label'], 'xent')
# compute the expected loss
loss = train_model.AveragedLoss(xent, "loss")
# track the accuracy of the model
accuracy = brew.accuracy(train_model, [softmax, 'label'], "accuracy")
# use the average loss we just computed to add gradient operators to the model
train_model.AddGradientOperators([loss])
# Specify Optimization Algorithm
optimizer.build_sgd(
train_model,
base_learning_rate=0.01,
policy="fixed",
momentum=0.9,
weight_decay=0.004
)
```
**Important Note**
Check out the results of the *GetOptimizationParamInfo* function. The *params* that this function returns are the parameters that will be optimized by the optimization function. If you are attempting to retrain a model in a different way and your model doesn't seem to be learning, check the return value of this function. If it returns nothing, look no further for your problem! This is exactly the reason that we brewed in the layers of the train model with the *Add_Original_CIFAR10_Model* function: it creates the params in the model automatically. If we had appended the *.net* member of the Model Helper as we did for the test model, this function would return nothing, meaning no parameters would get optimized. A workaround if you appended the net would be to manually create the params with the *create_param* function, which feels like a bit of a hack, especially if you have the add-model code on hand.
```
for param in train_model.GetOptimizationParamInfo():
print("Param to be optimized: ",param)
```
### Run Training
**This step will take a while!**
With our model helper setup we can now run the training as normal. Note, the accuracy and loss reported here is as measured on the *training* batches. Recall that the accuracy reported in Part 1 was the validation accuracy. Be careful how you interpret this number!
```
# Prime the workspace
workspace.RunNetOnce(train_model.param_init_net)
workspace.CreateNet(train_model.net, overwrite=True)
# Run the training loop
for i in range(training_iters):
workspace.RunNet(train_model.net)
acc = workspace.FetchBlob('accuracy')
loss = workspace.FetchBlob('loss')
if i % 100 == 0:
print ("Iter: {}, Loss: {}, Accuracy: {}".format(i,loss,acc))
```
## Test the Retrained Model
We will test the retrained model, just as we did in the first part of this notebook. However, since the params already exist in the workspace from the retraining step, we do not need to set the *.param_init_net*. Rather, we set **init_params=False** and brew in the model architecture with *Add_Original_CIFAR10_Model*. When we create the net, the model will find that the required blobs are already in the workspace. Then, we can run the main testing loop, which will report a final test accuracy score (which is hopefully higher).
```
arg_scope = {"order": "NCHW"}
# Construct the model
test_model = model_helper.ModelHelper(name="test_model", arg_scope=arg_scope, init_params=False)
# Set the input as the test lmdb
data,_ = AddInputLayer(test_model,1,TEST_LMDB,'lmdb')
# brew in the model architecture
softmax = Add_Original_CIFAR10_Model(test_model, data, 10, 32, 32, 3)
accuracy = brew.accuracy(test_model, ['softmax', 'label' ], 'accuracy')
# Prime the net
workspace.RunNetOnce(test_model.param_init_net)
workspace.CreateNet(test_model.net, overwrite=True)
# Confusion Matrix for CIFAR-10
cmat = np.zeros((10,10))
# Stat keepers
avg_accuracy = 0.0
test_iters = 10000
# Main testing loop
for i in range(test_iters):
workspace.RunNet(test_model.net)
acc = workspace.FetchBlob('accuracy')
avg_accuracy += acc
if (i % 500 == 0) and (i > 0):
print("Iter: {}, Current Accuracy: {}".format(i, avg_accuracy/float(i)))
# Get the top-1 prediction
results = workspace.FetchBlob('softmax')[0]
label = workspace.FetchBlob('label')[0]
max_index, max_value = max(enumerate(results), key=operator.itemgetter(1))
# Update confusion matrix
cmat[label,max_index] += 1
# Report final testing results
print("*********************************************")
print("Final Test Accuracy: ",avg_accuracy/float(test_iters))
```
### Check Results
Notice, the result from testing the re-trained model is better than the original test accuracy. If you wish, you can save the new model as .pb files just as in Part 1, but we will leave that to you. The last thing we will do is attempt to visualize the performance of our classifier by plotting a confusion matrix and looking for a **strong diagonal** trend.
```
# Plot confusion matrix
fig = plt.figure(figsize=(10,10))
plt.tight_layout()
ax = fig.add_subplot(111)
res = ax.imshow(cmat, cmap=plt.cm.rainbow,interpolation='nearest')
width, height = cmat.shape
for x in range(width):
for y in range(height):
ax.annotate(str(cmat[x,y]), xy=(y, x),horizontalalignment='center',verticalalignment='center')
classes = ['Airplane','Automobile','Bird','Cat','Deer','Dog','Frog','Horse','Ship','Truck']
plt.xticks(range(width), classes, rotation=0)
plt.yticks(range(height), classes, rotation=0)
ax.set_xlabel('Predicted Class')
ax.set_ylabel('True Class')
plt.title('CIFAR-10 Confusion Matrix')
plt.show()
```
# Analyze Order Book Data
## Imports & Settings
```
import pandas as pd
from pathlib import Path
import numpy as np
from collections import Counter
from datetime import datetime, timedelta, time
import seaborn as sns
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter
from math import pi
from bokeh.plotting import figure, show, output_file, output_notebook
from scipy.stats import normaltest
%matplotlib inline
pd.set_option('display.float_format', lambda x: '%.2f' % x)
plt.style.use('fivethirtyeight')
data_path = Path('data')
itch_store = str(data_path / 'itch.h5')
order_book_store = str(data_path / 'order_book.h5')
stock = 'AAPL'
date = '20190327'
title = '{} | {}'.format(stock, pd.to_datetime(date).date())
```
## Load system event data
```
with pd.HDFStore(itch_store) as store:
sys_events = store['S'].set_index('event_code').drop_duplicates()
sys_events.timestamp = sys_events.timestamp.add(pd.to_datetime(date)).dt.time
market_open = sys_events.loc['Q', 'timestamp']
market_close = sys_events.loc['M', 'timestamp']
```
## Trade Summary
We will combine the messages that refer to actual trades to learn about the volumes for each asset.
```
with pd.HDFStore(itch_store) as store:
stocks = store['R']
stocks.info()
```
As expected, a small number of the over 8,500 equity securities traded on this day account for most trades.
```
with pd.HDFStore(itch_store) as store:
stocks = store['R'].loc[:, ['stock_locate', 'stock']]
trades = store['P'].append(store['Q'].rename(columns={'cross_price': 'price'}), sort=False).merge(stocks)
trades['value'] = trades.shares.mul(trades.price)
trades['value_share'] = trades.value.div(trades.value.sum())
trade_summary = trades.groupby('stock').value_share.sum().sort_values(ascending=False)
trade_summary.iloc[:50].plot.bar(figsize=(14, 6), color='darkblue', title='% of Traded Value')
plt.gca().yaxis.set_major_formatter(FuncFormatter(lambda y, _: '{:.0%}'.format(y)))
```
## AAPL Trade Summary
```
with pd.HDFStore(order_book_store) as store:
trades = store['{}/trades'.format(stock)]
trades.price = trades.price.mul(1e-4)
trades = trades[trades.cross == 0]
trades = trades.between_time(market_open, market_close).drop('cross', axis=1)
trades.info()
```
## Tick Bars
The trade data is indexed by nanoseconds and is very noisy. The bid-ask bounce, for instance, causes the price to oscillate between the bid and ask prices when trade initiation alternates between buy and sell market orders. To improve the signal-to-noise ratio and the statistical properties, we need to resample and regularize the tick data by aggregating the trading activity.
We typically collect the open (first), low, high, and closing (last) price for the aggregated period, alongside the volume-weighted average price (VWAP), the number of shares traded, and the timestamp associated with the data.
```
tick_bars = trades.copy()
tick_bars.index = tick_bars.index.time
tick_bars.price.plot(figsize=(10, 5), title='{} | {}'.format(stock, pd.to_datetime(date).date()), lw=1)
plt.xlabel('')
plt.tight_layout();
```
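The cell above only plots the raw tick prices. To actually build tick bars, we can group every fixed number of trades and aggregate each group. The following is a minimal sketch that assumes the `trades` frame loaded above (with `price` and `shares` columns) and an arbitrary bar size of 1,000 trades:
```
bar_size = 1000  # trades per bar (an illustrative choice)
groups = np.arange(len(trades)) // bar_size
by_tick = trades.groupby(groups)
tick_ohlc = by_tick.price.agg(['first', 'max', 'min', 'last'])
tick_ohlc.columns = ['open', 'high', 'low', 'close']
tick_ohlc['vwap'] = by_tick.apply(lambda x: np.average(x.price, weights=x.shares))
tick_ohlc['vol'] = by_tick.shares.sum()
tick_ohlc.head()
```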
### Test for Normality of tick returns
```
normaltest(tick_bars.price.pct_change().dropna())
```
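`scipy.stats.normaltest` returns a test statistic and a p-value for the null hypothesis that the sample comes from a normal distribution. A small sketch of how the result can be read, reusing the returns computed above:
```
stat, pvalue = normaltest(tick_bars.price.pct_change().dropna())
print('statistic: {:,.1f} | p-value: {:.3g}'.format(stat, pvalue))
print('normality rejected at the 5% level' if pvalue < 0.05 else 'no evidence against normality')
```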
## Regularizing Tick Data
### Price-Volume Chart
We will use the `price_volume` function to compare the price-volume relation for various regularization methods.
```
def price_volume(df, price='vwap', vol='vol', suptitle=title):
fig, axes = plt.subplots(nrows=2, sharex=True, figsize=(15,8))
axes[0].plot(df.index, df[price])
axes[1].bar(df.index, df[vol], width=1/(len(df.index)), color='r')
# formatting
xfmt = mpl.dates.DateFormatter('%H:%M')
axes[1].xaxis.set_major_locator(mpl.dates.HourLocator(interval=3))
axes[1].xaxis.set_major_formatter(xfmt)
axes[1].get_xaxis().set_tick_params(which='major', pad=25)
axes[0].set_title('Price', fontsize=14)
axes[1].set_title('Volume', fontsize=14)
fig.autofmt_xdate()
fig.suptitle(suptitle)
fig.tight_layout()
plt.subplots_adjust(top=0.9)
```
### Time Bars
Time bars involve trade aggregation by period.
```
def get_bar_stats(agg_trades):
vwap = agg_trades.apply(lambda x: np.average(x.price, weights=x.shares)).to_frame('vwap')
ohlc = agg_trades.price.ohlc()
vol = agg_trades.shares.sum().to_frame('vol')
txn = agg_trades.shares.size().to_frame('txn')
return pd.concat([ohlc, vwap, vol, txn], axis=1)
```
We create time bars using the `.resample()` method with the desired period.
```
resampled = trades.resample('1Min')
time_bars = get_bar_stats(resampled)
normaltest(time_bars.vwap.pct_change().dropna())
price_volume(time_bars)
```
### Bokeh Candlestick Chart
Alternative visualization using the [bokeh](https://bokeh.pydata.org/en/latest/) library:
```
resampled = trades.resample('5Min') # 5 Min bars for better print
df = get_bar_stats(resampled)
increase = df.close > df.open
decrease = df.open > df.close
w = 2.5 * 60 * 1000 # 2.5 min in ms
WIDGETS = "pan, wheel_zoom, box_zoom, reset, save"
p = figure(x_axis_type='datetime', tools=WIDGETS, plot_width=1500, title = "AAPL Candlestick")
p.xaxis.major_label_orientation = pi/4
p.grid.grid_line_alpha=0.4
p.segment(df.index, df.high, df.index, df.low, color="black")
p.vbar(df.index[increase], w, df.open[increase], df.close[increase], fill_color="#D5E1DD", line_color="black")
p.vbar(df.index[decrease], w, df.open[decrease], df.close[decrease], fill_color="#F2583E", line_color="black")
show(p)
```

### Volume Bars
Time bars smooth some of the noise contained in the raw tick data but may fail to account for the fragmentation of orders. Execution-focused algorithmic trading may aim to match the volume weighted average price (VWAP) over a given period, and will divide a single order into multiple trades and place orders according to historical patterns. Time bars would treat the same order differently, even though no new information has arrived in the market.
Volume bars offer an alternative by aggregating trade data according to volume. We can accomplish this as follows:
```
with pd.HDFStore(order_book_store) as store:
trades = store['{}/trades'.format(stock)]
trades.price = trades.price.mul(1e-4)
trades = trades[trades.cross == 0]
trades = trades.between_time(market_open, market_close).drop('cross', axis=1)
trades.info()
trades_per_min = trades.shares.sum()/(60*7.5) # 60*7.5 = minutes in the 7.5-hour trading session
trades['cumul_vol'] = trades.shares.cumsum()
df = trades.reset_index()
by_vol = df.groupby(df.cumul_vol.div(trades_per_min).round().astype(int))
vol_bars = pd.concat([by_vol.timestamp.last().to_frame('timestamp'), get_bar_stats(by_vol)], axis=1)
vol_bars.head()
price_volume(vol_bars.set_index('timestamp'))
normaltest(vol_bars.vwap.dropna())
```
# Fitness Club Chain Customer Churn Analysis
The fitness center chain «Культурист-датасаентист» is developing a data-driven customer interaction strategy.
Customer churn is a common problem for fitness clubs and other services.
For a fitness center, a customer can be considered churned if they have not visited the gym at all during the last month.
We need to carry out an analysis and prepare an action plan for customer retention.
Our main tasks are to:
- learn to predict the probability of churn (over the next month) for each customer;
- build typical customer profiles: identify the most distinctive groups and describe their main properties;
- analyze the features that influence churn the most;
- formulate key conclusions and develop recommendations for improving customer service:
1) identify target customer groups;
2) propose measures to reduce churn;
3) identify other notable aspects of customer interaction.
The dataset includes the following fields:
- `Churn` - whether the customer churned in the current month.
The remaining fields in the dataset:
Customer data for the month preceding the churn check:
* `gender` - gender;
* `Near_Location` - whether the customer lives or works in the neighborhood of the fitness center;
* `Partner` - whether the customer is an employee of a partner company (the club partners with companies whose employees get membership discounts; in that case the fitness center stores information about the customer's employer);
* `Promo_friends` - whether the customer originally signed up through the "bring a friend" promotion (used a friend's promo code when paying for the first membership);
* `Phone` - whether a contact phone number is on file;
* `Age` - age;
* `Lifetime` - time since the customer first came to the fitness center (in months).
Information based on the visit and purchase logs and the current membership status:
* `Contract_period` - length of the current membership (1 month, 3 months, 6 months, or a year);
* `Month_to_end_contract` - months remaining until the current membership expires;
* `Group_visits` - whether the customer attends group classes;
* `Avg_class_frequency_total` - average visit frequency per week over the whole membership period;
* `Avg_class_frequency_current_month` - average visit frequency per week over the previous month;
* `Avg_additional_charges_total` - total revenue from other fitness center services: cafe, sporting goods, beauty and massage salon.
## Step 1. Load the Data
```
import pandas as pd
pd.set_option('display.max_columns', None)
pd.set_option('max_colwidth', 120)
pd.set_option('display.width', 500)
import numpy as np
import matplotlib.pyplot as plt
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
import seaborn as sns
sns.set()
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from scipy.cluster.hierarchy import dendrogram, linkage
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score, precision_score, recall_score, roc_auc_score
import warnings
warnings.simplefilter('ignore')
RANDOM_SEED = 0
df = pd.read_csv("../datasets/gym_churn.csv")
```
Let's display a few random rows of the table to get a feel for the data.
```
display(df.sample(5, random_state=RANDOM_SEED).reset_index(drop=True))
```
Let's convert the column names to lowercase.
```
df.columns = map(str.lower, df.columns)
df.info()
```
There are no missing values; the table has 4,000 rows and 14 columns, and no expected features are absent.
We can downcast the numeric types to reduce the memory footprint.
```
signed_features = df.select_dtypes(include='int64').columns
float_features = df.select_dtypes(include='float64').columns
df[signed_features] = df[signed_features].apply(pd.to_numeric, downcast='signed')
df[float_features] = df[float_features].apply(pd.to_numeric, downcast='float')
df.info()
```
After this step the memory footprint is reduced almost fourfold.
### Conclusions
In this section we assessed the size of the dataframe: 4,000 rows and 14 columns. There are no missing values and no expected features are absent. We converted the column names to lowercase and reduced the memory footprint almost fourfold by downcasting the numeric types.
## Step 2. Exploratory Data Analysis (EDA)
```
df.describe().T
```
The table shows that `avg_additional_charges_total` (total revenue from additional fitness center services) has the largest spread, with a standard deviation of 96.35 and a mean of 146.9. Almost 85% of customers live or work near the fitness center, about 41% attend group classes, and 31% came through a friend's recommendation. The average customer age is 29, visitors range from 18 to 41 years old, and the gender split is nearly even. Almost half of the customers are employees of partner companies. Churn in the current month was recorded for 26% of customers.
```
df.groupby('churn').mean().reset_index()
```
Churn in the current month was evenly split between men and women, and the remaining customers also show an even gender ratio. Location plays an interesting role: almost 76% of those who stopped visiting the gym either work or live near the fitness center. People tend to drop out around the first month, while the more enthusiastic customers keep visiting for about 5 months on average. Customers who are still active spend more, on average, on additional services.
**Plot histograms and feature distributions for those who churned and those who stayed.**
```
WIDTH = 3
plot_amount = len(df.columns)
height = plot_amount//WIDTH + 1
fig, axs = plt.subplots(height, WIDTH, figsize=(15, 25))
fig.suptitle('Feature histograms', y=1.003, size=14)
for item, ax in zip(df.columns, np.ravel(axs)):
sns.histplot(data = df, x=item, hue='churn', ax=ax, kde=True)
ax.set_title(item.capitalize().replace('_', ' '), size=12)
plt.tight_layout()
plt.show()
```
The age distribution is the closest to normal, both for customers who keep visiting the fitness center and for those who churned. Slightly more than 200 non-churned customers used additional fitness center services and brought in around 200 monetary units per person. The histograms show that those who leave do so within the first months of attendance. The most common membership is the one-month contract, and this category shows the highest churn, while customers who buy 12-month memberships are the least likely to leave.
```
corr_matrix = df.corr()
plt.figure(figsize = (13, 10))
plt.title('Correlation matrix heatmap', size = 15)
sns_plot = sns.heatmap(corr_matrix, annot=True, fmt='.2f',
linewidth=1, linecolor='black', vmax=1, center=0, cmap='ocean')
fig = sns_plot.get_figure()
plt.xlabel('Features')
plt.ylabel('Features')
plt.show()
```
There is a noticeable correlation between `month_to_end_contract` and `contract_period`, as well as between `avg_class_frequency_total` and `avg_class_frequency_current_month`, which is not surprising since these pairs are interdependent.
### Conclusions
* The largest spread is observed for avg_additional_charges_total, the total revenue from additional fitness center services (standard deviation 96.35, mean 146.9).
* Almost 85% of customers live or work near the fitness center, about 41% attend group classes, and 31% came through a friend's recommendation.
* The average customer age is 29, visitors range from 18 to 41 years old, and the gender split is nearly even.
* Almost half of the customers are employees of partner companies. Churn in the current month was recorded for 26% of customers.
* Churn in the current month was evenly split between men and women, and the remaining customers also show an even gender ratio. Location plays an interesting role: almost 76% of those who stopped visiting the gym either work or live near the fitness center.
* People tend to drop out around the first month, while the more enthusiastic customers keep visiting for about 5 months on average. Customers who are still active spend more, on average, on additional services.
* The age distribution is the closest to normal, both for customers who keep visiting and for those who churned.
* Slightly more than 200 non-churned customers used additional fitness center services and brought in around 200 monetary units per person.
* Those who leave do so within the first months of attendance. The most common membership is the one-month contract, and this category shows the highest churn, while customers who buy 12-month memberships are the least likely to leave.
* We found correlations between month_to_end_contract and contract_period, and between avg_class_frequency_total and avg_class_frequency_current_month, which is not surprising since these pairs are interdependent.
## Step 3. Build a Customer Churn Prediction Model
Let's split the data into features (matrix X) and the target variable (y).
```
X = df.drop('churn', axis=1)
y = df['churn']
```
Now we split the data into training and validation sets.
```
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state = RANDOM_SEED)
# fit the StandardScaler on the training set
scaler = StandardScaler()
scaler.fit(X_train)
# transform the training and validation sets
X_train_st = scaler.transform(X_train)
X_test_st = scaler.transform(X_test)
# define the logistic regression model
lr_model = LogisticRegression(solver = 'lbfgs', random_state=RANDOM_SEED)
# train the model
lr_model.fit(X_train_st, y_train)
# use the trained model to make predictions
lr_predictions = lr_model.predict(X_test_st)
lr_probabilities = lr_model.predict_proba(X_test_st)[:, 1]
# define a second model based on the random forest algorithm
rf_model = RandomForestClassifier(n_estimators = 100, random_state=RANDOM_SEED)
# train the random forest model
rf_model.fit(X_train, y_train)
# use the trained model to make predictions
rf_predictions = rf_model.predict(X_test)
rf_probabilities = rf_model.predict_proba(X_test)[:,1]
def print_all_metrics(y_true, y_pred, y_proba, title='Classification metrics'):
    '''
    y_true  - target values of the validation set
    y_pred  - predictions of the trained model
    y_proba - predicted probabilities of the positive class
    '''
    print(title)
    print('\tAccuracy: {:.2f}'.format(accuracy_score(y_true, y_pred)))
    print('\tPrecision: {:.2f}'.format(precision_score(y_true, y_pred)))
    print('\tRecall: {:.2f}'.format(recall_score(y_true, y_pred)))
    # use the probabilities (otherwise unused) to also report ROC AUC
    print('\tROC AUC: {:.2f}'.format(roc_auc_score(y_true, y_proba)))
print_all_metrics(
    y_test,
    lr_predictions,
    lr_probabilities,
    title='Logistic regression metrics:',
)
print_all_metrics(
    y_test,
    rf_predictions,
    rf_probabilities,
    title='Random forest metrics:'
)
```
Accuracy is the same for both models, 0.92, which is a decent result: it is the share of correct predictions among all predictions, and the closer it is to 100% the better. Precision measures the share of correct answers among objects predicted as the target class; the logistic regression model is better here, at 0.85. Recall shows how many of the actual class "1" objects the model managed to find, and here logistic regression is also better.
Therefore, ***based on these metrics, the logistic regression model performed better.***
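One of the stated tasks is to analyze which features influence churn the most. As a quick, hedged sketch (reusing the `rf_model` and the feature matrix `X` fitted above), the random forest's impurity-based feature importances give a rough ranking:
```
# Rank features by the random forest's impurity-based importance
feature_importances = pd.Series(rf_model.feature_importances_, index=X.columns)
print(feature_importances.sort_values(ascending=False))
```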
### Conclusions
We built two churn prediction models: a logistic regression model and a random forest model.
Accuracy is the same for both models, 0.92, which is a decent result: it is the share of correct predictions among all predictions, and the closer it is to 100% the better. Precision measures the share of correct answers among objects predicted as the target class; the logistic regression model is better here, at 0.85. Recall shows how many of the actual class "1" objects the model managed to find, and here logistic regression is also better.
Therefore, based on these metrics, the logistic regression model performed better.
## Step 4. Customer Clustering
```
# standardize the data
sc = StandardScaler()
X_sc = sc.fit_transform(X)
# build the linkage matrix for hierarchical (Ward) clustering
linked = linkage(X_sc, method = 'ward')
plt.figure(figsize=(15, 10))
dendrogram(linked, orientation='top')
plt.title('Hierarchical (Ward) clustering dendrogram', size=18)
plt.show()
```
Based on the resulting plot, four clusters can be distinguished.
Next we train a clustering model based on the K-Means algorithm and predict customer clusters. For this project we agree to use n=5 clusters.
```
km = KMeans(n_clusters = 5, random_state=RANDOM_SEED) # set the number of clusters to 5
labels = km.fit_predict(X_sc) # fit the algorithm and get a cluster label for each customer
# store the cluster labels as a column of the dataset
df['cluster_km'] = labels
# show the mean feature values for each cluster
df.groupby('cluster_km').mean()
```
All clusters have a similar male/female ratio except cluster 4, which has the highest mean value, 0.56. All customers in cluster 3 either live near the gym or work nearby; in contrast, customers in cluster 0 live far from the fitness center.
The mean of the `partner` feature (employee of a partner company) varies strongly across clusters: the lowest value is in cluster 3 (0.35) and the highest in cluster 0 (0.78). The friend-referral feature also varies strongly from cluster to cluster: for cluster 2 the mean is 0.08, while for cluster 0 it is as high as 0.57. The mean contract length is highest in cluster 0 (10.88). The mean rate of group class attendance is lowest in cluster 2 (0.22). Customer age varies little across clusters, staying around 30. The mean number of months until the contract ends is lowest in cluster 3 (1.8) and highest in cluster 0 (almost 9.95).
```
WIDTH = 3
height = 5
fig, axs = plt.subplots(height, WIDTH, figsize=(15, 25))
fig.suptitle('Feature histograms by cluster', y=1.003, size=14)
for item, ax in zip(df.columns, np.ravel(axs)):
sns.histplot(data = df, x=item, hue='cluster_km', ax=ax, kde=True, palette='plasma', multiple='dodge')
ax.set_title(item.capitalize().replace('_', ' '), size=12)
plt.tight_layout()
plt.show()
```
* Cluster 0 contains the largest share of partner-company employees and many customers with the longest contracts (12 months). Its customers attend group classes more than the others, and in most cases about 12 months remain on their contracts. This cluster has the lowest churn rate.
* Cluster 1 consists of customers with no phone number on file; its mean feature values are lower than in the other clusters, and it is the smallest group.
* Cluster 2 has the most customers whose gym is far from home or work; the group contains about 500 customers.
* Cluster 3 is the largest cluster, and all of its customers have the gym near home or work. Many came through a friend's recommendation, but many also hold short contracts of 1-3 months.
* Cluster 4 consists of customers who all have phone numbers on file, yet its feature values lag well behind the other clusters, and almost all of its customers churned.
Let's compute the churn rate for each cluster.
```
df.groupby('cluster_km').agg({'churn':'mean'}).reset_index().rename(columns={'churn':'churn_rate'})
```
The most promising clusters are 2 and 3; clusters 0 and 4 are prone to churn.
### Conclusions
Based on the dendrogram we determined that four clusters could be distinguished.
We trained a clustering model based on the K-Means algorithm, predicted the customer clusters, and found that:
* Cluster 0 contains the largest share of partner-company employees and many customers with the longest contracts (12 months). Its customers attend group classes more than the others, and in most cases about 12 months remain on their contracts. This cluster has the lowest churn rate.
* Cluster 1 consists of customers with no phone number on file; its mean feature values are lower than in the other clusters, and it is the smallest group.
* Cluster 2 has the most customers whose gym is far from home or work; the group contains about 500 customers.
* Cluster 3 is the largest cluster, and all of its customers have the gym near home or work. Many came through a friend's recommendation, but many also hold short contracts of 1-3 months.
* Cluster 4 consists of customers who all have phone numbers on file, yet its feature values lag well behind the other clusters, and almost all of its customers churned.
All clusters have a similar male/female ratio except cluster 4, which has the highest mean value, 0.56. All customers in cluster 3 either live near the gym or work nearby; in contrast, customers in cluster 0 live far from the fitness center.
The mean of the `partner` feature (employee of a partner company) varies strongly across clusters: the lowest value is in cluster 3 (0.35) and the highest in cluster 0 (0.78). The friend-referral feature also varies strongly from cluster to cluster: for cluster 2 the mean is 0.08, while for cluster 0 it is as high as 0.57. The mean contract length is highest in cluster 0 (10.88). The mean rate of group class attendance is lowest in cluster 2 (0.22). Customer age varies little across clusters, staying around 30. The mean number of months until the contract ends is lowest in cluster 3 (1.8) and highest in cluster 0 (almost 9.95).
The most promising clusters are 2 and 3; clusters 0 and 4 are prone to churn.
## Step 5. Conclusions and Basic Recommendations for Working with Customers
* Almost 85% of customers live or work near the fitness center, about 41% attend group classes, and 31% came through a friend's recommendation.
* The average customer age is 29, visitors range from 18 to 41 years old, and the gender split is nearly even.
* Almost half of the customers are employees of partner companies. Churn in the current month was recorded for 26% of customers.
* Churn in the current month was evenly split between men and women, and the remaining customers also show an even gender ratio. Location plays an interesting role: almost 76% of those who stopped visiting the gym either work or live near the fitness center.
* People tend to drop out around the first month, while the more enthusiastic customers keep visiting for about 5 months on average. Customers who are still active spend more, on average, on additional services.
* Slightly more than 200 non-churned customers used additional fitness center services and brought in around 200 monetary units per person.
* Those who leave do so within the first months of attendance. The most common membership is the one-month contract, and this category shows the highest churn, while customers who buy 12-month memberships are the least likely to leave.
* We found correlations between month_to_end_contract and contract_period, and between avg_class_frequency_total and avg_class_frequency_current_month, which is not surprising since these pairs are interdependent.
***Recommendations for the customer interaction and retention strategy:***
* Based on the metrics, the logistic regression model performed better, so it is the preferred model for predicting customer churn.
* Having the customer's phone number in the fitness center's database helps prevent churn: an administrator can, for example, occasionally call to remind the customer of the membership benefits or offer promotions and additional services.
* Since people most often quit during the first month, the club could devise incentives to keep customers coming back, for example offering a free massage after 10 visits to the fitness center.
* Since one-month memberships are the most common and show the highest churn, promotions or giveaways could be run, such as a 3-month membership for the price of a 1-month one.
```
%matplotlib inline
```
# Compressive sensing: tomography reconstruction with L1 prior (Lasso)
This example shows the reconstruction of an image from a set of parallel
projections, acquired along different angles. Such a dataset is acquired in
**computed tomography** (CT).
Without any prior information on the sample, the number of projections
required to reconstruct the image is of the order of the linear size
``l`` of the image (in pixels). For simplicity we consider here a sparse
image, where only pixels on the boundary of objects have a non-zero
value. Such data could correspond for example to a cellular material.
Note however that most images are sparse in a different basis, such as
the Haar wavelets. Only ``l/7`` projections are acquired, therefore it is
necessary to use prior information available on the sample (its
sparsity): this is an example of **compressive sensing**.
The tomography projection operation is a linear transformation. In
addition to the data-fidelity term corresponding to a linear regression,
we penalize the L1 norm of the image to account for its sparsity. The
resulting optimization problem is called the `lasso`. We use the
class :class:`sklearn.linear_model.Lasso`, that uses the coordinate descent
algorithm. Importantly, this implementation is more computationally efficient
when operating on a sparse matrix, such as the sparse projection operator used here.
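For reference, the objective minimized by `sklearn.linear_model.Lasso` (in the form documented by scikit-learn, with design matrix $X$, targets $y$, coefficients $w$, and regularization strength $\alpha$) is
$$
\min_w \; \frac{1}{2 n_{\text{samples}}} \|y - Xw\|_2^2 + \alpha \|w\|_1 .
$$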
The reconstruction with L1 penalization gives a result with zero error
(all pixels are successfully labeled with 0 or 1), even if noise was
added to the projections. In comparison, an L2 penalization
(:class:`sklearn.linear_model.Ridge`) produces a large number of labeling
errors for the pixels. Important artifacts are observed on the
reconstructed image, contrary to the L1 penalization. Note in particular
the circular artifact separating the pixels in the corners, that have
contributed to fewer projections than the central disk.
```
print(__doc__)
# Author: Emmanuelle Gouillart <emmanuelle.gouillart@nsup.org>
# License: BSD 3 clause
import numpy as np
from scipy import sparse
from scipy import ndimage
from sklearn.linear_model import Lasso
from sklearn.linear_model import Ridge
import matplotlib.pyplot as plt
def _weights(x, dx=1, orig=0):
x = np.ravel(x)
floor_x = np.floor((x - orig) / dx)
alpha = (x - orig - floor_x * dx) / dx
return np.hstack((floor_x, floor_x + 1)), np.hstack((1 - alpha, alpha))
def _generate_center_coordinates(l_x):
X, Y = np.mgrid[:l_x, :l_x].astype(np.float64)
center = l_x / 2.
X += 0.5 - center
Y += 0.5 - center
return X, Y
def build_projection_operator(l_x, n_dir):
""" Compute the tomography design matrix.
Parameters
----------
l_x : int
linear size of image array
n_dir : int
number of angles at which projections are acquired.
Returns
-------
p : sparse matrix of shape (n_dir l_x, l_x**2)
"""
X, Y = _generate_center_coordinates(l_x)
angles = np.linspace(0, np.pi, n_dir, endpoint=False)
data_inds, weights, camera_inds = [], [], []
data_unravel_indices = np.arange(l_x ** 2)
data_unravel_indices = np.hstack((data_unravel_indices,
data_unravel_indices))
for i, angle in enumerate(angles):
Xrot = np.cos(angle) * X - np.sin(angle) * Y
inds, w = _weights(Xrot, dx=1, orig=X.min())
mask = np.logical_and(inds >= 0, inds < l_x)
weights += list(w[mask])
camera_inds += list(inds[mask] + i * l_x)
data_inds += list(data_unravel_indices[mask])
proj_operator = sparse.coo_matrix((weights, (camera_inds, data_inds)))
return proj_operator
def generate_synthetic_data():
""" Synthetic binary data """
rs = np.random.RandomState(0)
n_pts = 36
x, y = np.ogrid[0:l, 0:l]
mask_outer = (x - l / 2.) ** 2 + (y - l / 2.) ** 2 < (l / 2.) ** 2
mask = np.zeros((l, l))
points = l * rs.rand(2, n_pts)
mask[(points[0]).astype(int), (points[1]).astype(int)] = 1
mask = ndimage.gaussian_filter(mask, sigma=l / n_pts)
res = np.logical_and(mask > mask.mean(), mask_outer)
return np.logical_xor(res, ndimage.binary_erosion(res))
# Generate synthetic images, and projections
l = 128
proj_operator = build_projection_operator(l, l // 7)
data = generate_synthetic_data()
proj = proj_operator * data.ravel()[:, np.newaxis]
proj += 0.15 * np.random.randn(*proj.shape)
# Reconstruction with L2 (Ridge) penalization
rgr_ridge = Ridge(alpha=0.2)
rgr_ridge.fit(proj_operator, proj.ravel())
rec_l2 = rgr_ridge.coef_.reshape(l, l)
# Reconstruction with L1 (Lasso) penalization
# the best value of alpha was determined using cross validation
# with LassoCV
rgr_lasso = Lasso(alpha=0.001)
rgr_lasso.fit(proj_operator, proj.ravel())
rec_l1 = rgr_lasso.coef_.reshape(l, l)
plt.figure(figsize=(8, 3.3))
plt.subplot(131)
plt.imshow(data, cmap=plt.cm.gray, interpolation='nearest')
plt.axis('off')
plt.title('original image')
plt.subplot(132)
plt.imshow(rec_l2, cmap=plt.cm.gray, interpolation='nearest')
plt.title('L2 penalization')
plt.axis('off')
plt.subplot(133)
plt.imshow(rec_l1, cmap=plt.cm.gray, interpolation='nearest')
plt.title('L1 penalization')
plt.axis('off')
plt.subplots_adjust(hspace=0.01, wspace=0.01, top=1, bottom=0, left=0,
right=1)
plt.show()
```
# Neural networks with PyTorch
Deep learning networks tend to be massive with dozens or hundreds of layers, that's where the term "deep" comes from. You can build one of these deep networks using only weight matrices as we did in the previous notebook, but in general it's very cumbersome and difficult to implement. PyTorch has a nice module `nn` that provides a nice way to efficiently build large neural networks.
```
# Import necessary packages
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import torch
import helper
import matplotlib.pyplot as plt
```
Now we're going to build a larger network that can solve a (formerly) difficult problem, identifying text in an image. Here we'll use the MNIST dataset which consists of greyscale handwritten digits. Each image is 28x28 pixels, you can see a sample below
<img src='assets/mnist.png'>
Our goal is to build a neural network that can take one of these images and predict the digit in the image.
First up, we need to get our dataset. This is provided through the `torchvision` package. The code below will download the MNIST dataset, then create training and test datasets for us. Don't worry too much about the details here, you'll learn more about this later.
```
### Run this cell
from torchvision import datasets, transforms
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,)),
])
# Download and load the training data
trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
```
We have the training data loaded into `trainloader` and we make that an iterator with `iter(trainloader)`. Later, we'll use this to loop through the dataset for training, like
```python
for image, label in trainloader:
## do things with images and labels
```
You'll notice I created the `trainloader` with a batch size of 64, and `shuffle=True`. The batch size is the number of images we get in one iteration from the data loader and pass through our network, often called a *batch*. And `shuffle=True` tells it to shuffle the dataset every time we start going through the data loader again. But here I'm just grabbing the first batch so we can check out the data. We can see below that `images` is just a tensor with size `(64, 1, 28, 28)`. So, 64 images per batch, 1 color channel, and 28x28 images.
```
dataiter = iter(trainloader)
images, labels = next(dataiter)
print(type(images))
print(images.shape)
print(labels.shape)
```
This is what one of the images looks like.
```
plt.imshow(images[1].numpy().squeeze(), cmap='Greys_r');
```
First, let's try to build a simple network for this dataset using weight matrices and matrix multiplications. Then, we'll see how to do it using PyTorch's `nn` module which provides a much more convenient and powerful method for defining network architectures.
The networks you've seen so far are called *fully-connected* or *dense* networks. Each unit in one layer is connected to each unit in the next layer. In fully-connected networks, the input to each layer must be a one-dimensional vector (which can be stacked into a 2D tensor as a batch of multiple examples). However, our images are 28x28 2D tensors, so we need to convert them into 1D vectors. Thinking about sizes, we need to convert the batch of images with shape `(64, 1, 28, 28)` to have a shape of `(64, 784)`, where 784 is 28 times 28. This is typically called *flattening*: we flatten the 2D images into 1D vectors.
Previously you built a network with one output unit. Here we need 10 output units, one for each digit. We want our network to predict the digit shown in an image, so what we'll do is calculate probabilities that the image is of any one digit or class. This ends up being a discrete probability distribution over the classes (digits) that tells us the most likely class for the image. That means we need 10 output units for the 10 classes (digits). We'll see how to convert the network output into a probability distribution next.
> **Exercise:** Flatten the batch of images `images`. Then build a multi-layer network with 784 input units, 256 hidden units, and 10 output units using random tensors for the weights and biases. For now, use a sigmoid activation for the hidden layer. Leave the output layer without an activation, we'll add one that gives us a probability distribution next.
```
## Your solution
def sigmoid(x):
return 1/ (1 + torch.exp(-x))
# Flatten the batch of images images
batch_size = images.shape[0]
inputs = images.view((batch_size,28*28))
# Then build a multi-layer network with 784 input units, 256 hidden units, and 10 output units
#using random tensors for the weights and biases
input_units = 28*28
hidden_units = 256
output_units = 10
wi = torch.randn(input_units,hidden_units)
bi = torch.randn(hidden_units)
wh = torch.randn(hidden_units,output_units)
bh = torch.randn(output_units)
# For now, use a sigmoid activation for the hidden layer.
h = sigmoid(torch.mm(inputs, wi)+ bi)
out = torch.mm(h,wh) + bh
print(out.shape)
```
Now we have 10 outputs for our network. We want to pass in an image to our network and get out a probability distribution over the classes that tells us the likely class(es) the image belongs to. Something that looks like this:
<img src='assets/image_distribution.png' width=500px>
Here we see that the probability for each class is roughly the same. This is representing an untrained network, it hasn't seen any data yet so it just returns a uniform distribution with equal probabilities for each class.
To calculate this probability distribution, we often use the [**softmax** function](https://en.wikipedia.org/wiki/Softmax_function). Mathematically this looks like
$$
\Large \sigma(x_i) = \cfrac{e^{x_i}}{\sum_k^K{e^{x_k}}}
$$
What this does is squish each input $x_i$ between 0 and 1 and normalize the values to give you a proper probability distribution where the probabilities sum up to one.
> **Exercise:** Implement a function `softmax` that performs the softmax calculation and returns probability distributions for each example in the batch. Note that you'll need to pay attention to the shapes when doing this. If you have a tensor `a` with shape `(64, 10)` and a tensor `b` with shape `(64,)`, doing `a/b` will give you an error because PyTorch will try to do the division across the columns (called broadcasting) but you'll get a size mismatch. The way to think about this is for each of the 64 examples, you only want to divide by one value, the sum in the denominator. So you need `b` to have a shape of `(64, 1)`. This way PyTorch will divide the 10 values in each row of `a` by the one value in each row of `b`. Pay attention to how you take the sum as well. You'll need to define the `dim` keyword in `torch.sum`. Setting `dim=0` takes the sum across the rows while `dim=1` takes the sum across the columns.
```
def softmax(x):
e_x = torch.exp(x)
return e_x / torch.sum(e_x, dim=1).view(-1,1)
# Here, out should be the output of the network in the previous excercise with shape (64,10)
probabilities = softmax(out)
# Does it have the right shape? Should be (64, 10)
print(probabilities.shape)
# Does it sum to 1?
print(probabilities.sum(dim=1))
```
## Building networks with PyTorch
PyTorch provides a module `nn` that makes building networks much simpler. Here I'll show you how to build the same one as above with 784 inputs, 256 hidden units, 10 output units and a softmax output.
```
from torch import nn
class Network(nn.Module):
def __init__(self):
super().__init__()
# Inputs to hidden layer linear transformation
self.hidden = nn.Linear(784, 256)
# Output layer, 10 units - one for each digit
self.output = nn.Linear(256, 10)
# Define sigmoid activation and softmax output
self.sigmoid = nn.Sigmoid()
self.softmax = nn.Softmax(dim=1)
def forward(self, x):
# Pass the input tensor through each of our operations
x = self.hidden(x)
x = self.sigmoid(x)
x = self.output(x)
x = self.softmax(x)
return x
```
Let's go through this bit by bit.
```python
class Network(nn.Module):
```
Here we're inheriting from `nn.Module`. Combined with `super().__init__()` this creates a class that tracks the architecture and provides a lot of useful methods and attributes. It is mandatory to inherit from `nn.Module` when you're creating a class for your network. The name of the class itself can be anything.
```python
self.hidden = nn.Linear(784, 256)
```
This line creates a module for a linear transformation, $x\mathbf{W} + b$, with 784 inputs and 256 outputs and assigns it to `self.hidden`. The module automatically creates the weight and bias tensors which we'll use in the `forward` method. You can access the weight and bias tensors once the network (`net`) is created with `net.hidden.weight` and `net.hidden.bias`.
```python
self.output = nn.Linear(256, 10)
```
Similarly, this creates another linear transformation with 256 inputs and 10 outputs.
```python
self.sigmoid = nn.Sigmoid()
self.softmax = nn.Softmax(dim=1)
```
Here I defined operations for the sigmoid activation and softmax output. Setting `dim=1` in `nn.Softmax(dim=1)` calculates softmax across the columns.
```python
def forward(self, x):
```
PyTorch networks created with `nn.Module` must have a `forward` method defined. It takes in a tensor `x` and passes it through the operations you defined in the `__init__` method.
```python
x = self.hidden(x)
x = self.sigmoid(x)
x = self.output(x)
x = self.softmax(x)
```
Here the input tensor `x` is passed through each operation and reassigned to `x`. We can see that the input tensor goes through the hidden layer, then a sigmoid function, then the output layer, and finally the softmax function. It doesn't matter what you name the variables here, as long as the inputs and outputs of the operations match the network architecture you want to build. The order in which you define things in the `__init__` method doesn't matter, but you'll need to sequence the operations correctly in the `forward` method.
Now we can create a `Network` object.
```
# Create the network and look at its text representation
model = Network()
model
```
You can define the network somewhat more concisely and clearly using the `torch.nn.functional` module. This is the most common way you'll see networks defined as many operations are simple element-wise functions. We normally import this module as `F`, `import torch.nn.functional as F`.
```
import torch.nn.functional as F
class Network(nn.Module):
def __init__(self):
super().__init__()
# Inputs to hidden layer linear transformation
self.hidden = nn.Linear(784, 256)
# Output layer, 10 units - one for each digit
self.output = nn.Linear(256, 10)
def forward(self, x):
# Hidden layer with sigmoid activation
x = F.sigmoid(self.hidden(x))
# Output layer with softmax activation
x = F.softmax(self.output(x), dim=1)
return x
```
### Activation functions
So far we've only been looking at the softmax activation, but in general any function can be used as an activation function. The only requirement is that for a network to approximate a non-linear function, the activation functions must be non-linear. Here are a few more examples of common activation functions: Tanh (hyperbolic tangent), and ReLU (rectified linear unit).
<img src="assets/activation.png" width=700px>
In practice, the ReLU function is used almost exclusively as the activation function for hidden layers.
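As a quick illustration (not part of the original notebook), the built-in `torch.sigmoid`, `torch.tanh`, and `torch.relu` functions can be applied directly to a tensor to compare these activations:
```python
x = torch.linspace(-3, 3, steps=7)
print(torch.sigmoid(x))  # values squashed into (0, 1)
print(torch.tanh(x))     # values squashed into (-1, 1)
print(torch.relu(x))     # negative values clipped to 0
```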
### Your Turn to Build a Network
<img src="assets/mlp_mnist.png" width=600px>
> **Exercise:** Create a network with 784 input units, a hidden layer with 128 units and a ReLU activation, then a hidden layer with 64 units and a ReLU activation, and finally an output layer with a softmax activation as shown above. You can use a ReLU activation with the `nn.ReLU` module or `F.relu` function.
```
## Your solution here
from torch import nn
import torch.nn.functional as F
class ReLUNetwork(nn.Module):
def __init__(self):
super().__init__()
self.hidden1 = nn.Linear(784,128)
self.hidden2 = nn.Linear(128,64)
self.output = nn.Linear(64,10)
def forward(self, x):
x = F.relu(self.hidden1(x))
x = F.relu(self.hidden2(x))
x = F.softmax(self.output(x), dim=1)
return x
model = ReLUNetwork()
model
```
### Initializing weights and biases
The weights and biases are automatically initialized for you, but it's possible to customize how they are initialized. They are tensors attached to the layer you defined; you can get them with `model.hidden1.weight`, for instance.
```
print(model.hidden1.weight)
print(model.hidden1.bias)
```
For custom initialization, we want to modify these tensors in place. These are actually autograd *Variables*, so we need to get back the actual tensors with `model.hidden1.weight.data`. Once we have the tensors, we can fill them with zeros (for biases) or random normal values.
```
# Set biases to all zeros
model.hidden1.bias.data.fill_(0)
# sample from random normal with standard dev = 0.01
model.hidden1.weight.data.normal_(std=0.01)
```
### Forward pass
Now that we have a network, let's see what happens when we pass in an image.
```
# Grab some data
dataiter = iter(trainloader)
images, labels = dataiter.next()
# Resize images into a 1D vector, new shape is (batch size, color channels, image pixels)
images.resize_(64, 1, 784)
# or images.resize_(images.shape[0], 1, 784) to automatically get batch size
# Forward pass through the network
img_idx = 0
ps = model.forward(images[img_idx,:])
img = images[img_idx]
helper.view_classify(img.view(1, 28, 28), ps)
```
As you can see above, our network has basically no idea what this digit is. It's because we haven't trained it yet, all the weights are random!
### Using `nn.Sequential`
PyTorch provides a convenient way to build networks like this where a tensor is passed sequentially through operations, `nn.Sequential` ([documentation](https://pytorch.org/docs/master/nn.html#torch.nn.Sequential)). Using this to build the equivalent network:
```
# Hyperparameters for our network
input_size = 784
hidden_sizes = [128, 64]
output_size = 10
# Build a feed-forward network
model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]),
nn.ReLU(),
nn.Linear(hidden_sizes[0], hidden_sizes[1]),
nn.ReLU(),
nn.Linear(hidden_sizes[1], output_size),
nn.Softmax(dim=1))
print(model)
# Forward pass through the network and display output
images, labels = next(iter(trainloader))
images.resize_(images.shape[0], 1, 784)
ps = model.forward(images[0,:])
helper.view_classify(images[0].view(1, 28, 28), ps)
```
Here our model is the same as before: 784 input units, a hidden layer with 128 units, ReLU activation, 64 unit hidden layer, another ReLU, then the output layer with 10 units, and the softmax output.
The operations are available by passing in the appropriate index. For example, if you want to get the first Linear operation and look at its weights, you'd use `model[0]`.
```
print(model[0])
model[0].weight
```
You can also pass in an `OrderedDict` to name the individual layers and operations, instead of using incremental integers. Note that dictionary keys must be unique, so _each operation must have a different name_.
```
from collections import OrderedDict
model = nn.Sequential(OrderedDict([
('fc1', nn.Linear(input_size, hidden_sizes[0])),
('relu1', nn.ReLU()),
('fc2', nn.Linear(hidden_sizes[0], hidden_sizes[1])),
('relu2', nn.ReLU()),
('output', nn.Linear(hidden_sizes[1], output_size)),
('softmax', nn.Softmax(dim=1))]))
model
```
Now you can access layers either by integer index or by name:
```
print(model[0])
print(model.fc1)
```
In the next notebook, we'll see how we can train a neural network to accurately predict the numbers appearing in the MNIST images.
# Graph Coloring with QAOA using PyQuil and Grove
We are going to color a graph using the near-term algorithm QAOA. The canonical example of QAOA is the MaxCut problem, and graph coloring can be seen as a generalization of it: MaxCut is really graph coloring with only k = 2 colors.
## Sample problem: Graph with n = 4 nodes and e = 5 edges, k = 3 colors
First let's make some imports:
```
# pyquil and grove imports
from grove.pyqaoa.qaoa import QAOA
from pyquil.api import QVMConnection, get_qc, WavefunctionSimulator
from pyquil.paulis import PauliTerm, PauliSum
from pyquil import Program
from pyquil.gates import CZ, H, RY, CNOT, X
# useful additional packages
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
```
### Generate a graph
Un-colored graph with 4 nodes, 5 edges, 3 colors:
```
# generate graph, specify nodes and edges
G = nx.Graph()
edges = [(0, 3), (3, 6), (6, 9), (3, 9), (0, 9)]
nodes = [0, 3, 6, 9]
G.add_nodes_from(nodes)
G.add_edges_from(edges)
# Let's draw this thing
colors = ['beige' for node in G.nodes()]
pos = nx.spring_layout(G)
default_axes = plt.axes(frameon=True)
nx.draw_networkx(G, node_color=colors, node_size=600, alpha=.8, ax=default_axes, pos=pos)
```
### Hamiltonians
To use QAOA, we need to consider two Hamiltonians:
* the Hamiltonian that best describes the cost function of our problem (cost Hamiltonian)
* and the Hamiltonian whose eigenstates span the solution space (mixer Hamiltonian)
Luckily for us, the cost Hamiltonian for graph coloring is the same as that for MaxCut:
$$H_{cost} = \sum_{i, j} \frac{1}{2}(\mathbb{1} - \sigma^z_i \sigma^z_j)$$
The mixer Hamiltonian must span the solution space, i.e. only those states that make physical sense. If we use $k=3$ qubits per node (one qubit per color), and accept only W-states as solutions (100, 010, 001), then we can use the following mixer Hamiltonian:
$$H_{mixer} = \sum_{v, c, c'}\sigma^x_{v, c} \sigma^x_{v,c'} + \sigma^y_{v, c} \sigma^y_{v, c'}$$
Let's just create these Hamiltonians:
```
# define hamiltonians
def graph_coloring_cost_ham(graph, colors):
cost_operators = []
for k in range(len(colors)):
for i, j in graph.edges():
cost_operators.append(PauliTerm("Z", i + k, 0.5)*PauliTerm("Z", j + k) + PauliTerm("I", 0, -0.5))
return cost_operators
def graph_coloring_mixer_ham(graph, colors):
mixer_operators = []
for k in range(0, len(graph.nodes())*len(colors), len(colors)):
for i, j in colors:
mixer_operators.append(PauliTerm("X", i + k, -1.0)*PauliTerm("X", j + k) + PauliTerm("Y", i + k)*PauliTerm("Y", j + k, -1.0))
return mixer_operators
# above, note we've switched the sign of the Hamiltonians from those in the above equations
# this is because we use a classical minimizer, but we are actually trying to maximize the cost function
# instantiate mixer and cost
k = 3 # number of colors
colors = []
import itertools
for u, v in itertools.combinations(list(range(k)), 2):
colors.append((u, v))
cost = graph_coloring_cost_ham(G, colors)
mixer = graph_coloring_mixer_ham(G, colors)
print('Mixer Hamiltonian: ∑ XX + YY')
for operator in mixer:
print(operator)
print('\n')
print('Cost Hamiltonian: ∑ 1/2(I - ZZ)')
for operator in cost:
print(operator)
```
### Initial state
We must feed an initial reference state to QAOA that we will evolve to the ground state of the cost Hamiltonian. This initial state should ideally span the solution space, i.e. all physically relevant states. For our purposes, these would be the W-states.
First let's make functions that can create W-States:
```
# Define a F_gate
def F_gate(prog, i, j, n, k):
theta = np.arccos(np.sqrt(1/(n-k+1)))
prog += [RY(-theta, j),
CZ(i, j),
RY(theta, j)]
# Generate W-states
def wstategenerate(prog, q0, q1, q2):
prog += X(q2)
F_gate(prog, q2, q1, 3, 1)
F_gate(prog, q1, q0, 3, 2)
prog += CNOT(q1, q2)
prog += CNOT(q0, q1)
return prog
```
Now let's initialize W-states to feed our QAOA for the above graph:
```
# initialize state
initial_state = wstategenerate(Program(), 0, 1, 2) + wstategenerate(Program(), 3, 4, 5) + wstategenerate(Program(), 6, 7, 8) + wstategenerate(Program(), 9, 10, 11)
```
Quick test to make sure we are actually making W-states...
```
# qvm instantiation to run W-state generation
qvm_connection = QVMConnection()
# makes it easier to count up results
from collections import Counter
# get results with their counts
tests = qvm_connection.run_and_measure(initial_state, [9, 10, 11], trials=1000)
tests = [tuple(test) for test in tests]
tests_counter_tuples = Counter(tests)
most_common = tests_counter_tuples.most_common()
tests_counter = {}
for element in most_common:
result = element[0]
total = element[1]
result_string = ''
for bit in result:
result_string += str(bit)
tests_counter[result_string] = total
tests_counter
# import for histogram plotting
from qiskit.tools.visualization import plot_histogram
# plot the results with their counts
plot_histogram(tests_counter)
```
We only see the results 001, 010, and 100, so we're good!
### Use QAOA with specified parameters
Now let's instantiate QAOA with the specified cost, mixer, and number of steps:
```
# number of Trotterized steps for QAOA (I recommend two)
p = 2
# set initial beta and gamma angle values (you could try others, I find these work well)
initial_beta = [0, np.pi]
initial_gamma = [0, np.pi*2]
# arguments for the classical optimizer
minimizer_kwargs = {'method': 'Nelder-Mead',
'options': {'ftol': 1.0e-2, 'xtol': 1.0e-2,
'disp': False}}
# list of qubit ids on instantiated qvm we'll be using
num_qubits = len(colors)*len(G.nodes())
qubit_ids = list(range(num_qubits))
# instantiation of QAOA with requisite parameters
QAOA_inst = QAOA(qvm_connection, qubit_ids,
steps=p,
cost_ham=cost,
ref_ham=mixer,
driver_ref=initial_state,
init_betas=initial_beta,
init_gammas=initial_gamma,
minimizer_kwargs=minimizer_kwargs)
```
Solve for betas and gammas. All of the optimization happens here:
```
betas, gammas = QAOA_inst.get_angles()
print("Values of betas:", betas)
print("Values of gammas:", gammas)
print("And the most common measurement is... ")
most_common_result, _ = QAOA_inst.get_string(betas, gammas)
print(most_common_result)
```
### Reconstruct Program
Now that we've used QAOA to solve for the optimal beta and gamma values, we can reconstruct the ground state solution by initializing a new `Program()` object with these values.
```
angles = np.hstack((betas, gammas))
# We take a template for quil program
param_prog = QAOA_inst.get_parameterized_program()
# We initialize this program with the angles we have found
prog = param_prog(angles)
```
### Run and Measure Program
Now that we've reconstructed the program with the proper angles, we can run and measure this program on the QVM many times to get statistics on the outcome.
```
# Here we connect to the Forest API and run our program there.
# We do that 10000 times and after each one we measure the output.
measurements = qvm_connection.run_and_measure(prog, qubit_ids, trials=10000)
```
Just reformatting results into a dictionary...
```
# This is just a hack - we can't use Counter on a list of lists but we can on a list of tuples.
measurements = [tuple(measurement) for measurement in measurements]
measurements_counter = Counter(measurements)
# This line gives us the results in the diminishing order
most_common = measurements_counter.most_common()
most_common
measurements_counter = {}
for element in most_common:
result = element[0]
total = element[1]
result_string = ''
for bit in result:
result_string += str(bit)
measurements_counter[result_string] = total
measurements_counter
```
And now reformat bit strings into colors...
```
# Reformat these bit strings into colors
# Choose which state refers to red ('r'), blue ('b') or green ('g')
colors_totals = {}
for bitstring, total in measurements_counter.items():
node_0 = bitstring[0:3]
node_1 = bitstring[3:6]
node_2 = bitstring[6:9]
node_3 = bitstring[9:12]
nodes_list = [node_0, node_1, node_2, node_3]
node_colors_string = ''
for node in nodes_list:
if node == '100':
node = 'r'
elif node == '010':
node = 'b'
elif node == '001':
node = 'g'
else:
raise Exception('Invalid!')
node_colors_string += node
colors_totals[node_colors_string] = total
print(colors_totals)
```
### Visualize results
First let's plot the results as a histogram. There are tons of possible solutions ($k^n = 3^4 = 81$), but we should expect that 6 of them occur most often, so we're looking for 6 larger peaks. This is because for this particular graph and number of colors, there are 6 colorings that maximize the cost function.
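As a quick classical sanity check (a brute-force sketch, separate from the QAOA workflow), we can enumerate all $3^4$ color assignments and count the proper ones, i.e. those where no edge joins two nodes of the same color. Positions 0-3 below correspond to the graph nodes 0, 3, 6, 9, in the same order as the color strings above:
```
import itertools

# edges of the graph expressed in terms of node positions 0-3 (nodes 0, 3, 6, 9)
position_edges = [(0, 1), (1, 2), (2, 3), (1, 3), (0, 3)]
proper_colorings = [c for c in itertools.product('rbg', repeat=4)
                    if all(c[u] != c[v] for u, v in position_edges)]
print(len(proper_colorings))                  # 6 colorings maximize the cost function
print([''.join(c) for c in proper_colorings])
```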
```
plot_histogram(colors_totals, figsize=(25, 15))
```
Finally, let's color the graph using these solutions. The colorings and their totals have already been ordered from most frequent to least, so we should expect that the first 6 (i.e. indices 0-5) are proper colorings, with no two adjacent nodes sharing a color.
```
# make graph
Graph = nx.Graph()
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
nodes = range(4)
Graph.add_nodes_from(nodes)
Graph.add_edges_from(edges)
# Let's draw this thing
# can increment the index at the end to get the max value and the next max totals
# i.e. try [0], [1], ... , [5]
colors = list(colors_totals.keys())[0]
# draw colored graph
pos = nx.spring_layout(Graph)
default_axes = plt.axes(frameon=True)
nx.draw_networkx(Graph, node_color=colors, node_size=600, alpha=.8, ax=default_axes, pos=pos)
```
```
# from https://en.wikipedia.org/wiki/Inflation
document_text = """
In economics, inflation (or less frequently, price inflation) is a general rise in the price level of an economy
over a period of time.[1][2][3][4] When the general price level rises, each unit of currency buys fewer goods and
services; consequently, inflation reflects a reduction in the purchasing power per unit of money – a loss of real
value in the medium of exchange and unit of account within the economy.[5][6] The opposite of inflation is
deflation, a sustained decrease in the general price level of goods and services. The common measure of inflation
is the inflation rate, the annualised percentage change in a general price index, usually the consumer price
index, over time.[7]
Economists believe that very high rates of inflation and hyperinflation are harmful, and are caused by excessive
growth of the money supply.[8] Views on which factors determine low to moderate rates of inflation are more
varied. Low or moderate inflation may be attributed to fluctuations in real demand for goods and services, or
changes in available supplies such as during scarcities.[9] However, the consensus view is that a long sustained
period of inflation is caused by money supply growing faster than the rate of economic growth.[10][11]
Inflation affects economies in various positive and negative ways. The negative effects of inflation include an
increase in the opportunity cost of holding money, uncertainty over future inflation which may discourage
investment and savings, and if inflation were rapid enough, shortages of goods as consumers begin hoarding out
of concern that prices will increase in the future. Positive effects include reducing unemployment due to nominal
wage rigidity,[12] allowing the central bank greater freedom in carrying out monetary policy, encouraging loans
and investment instead of money hoarding, and avoiding the inefficiencies associated with deflation.
Today, most economists favour a low and steady rate of inflation.[13] Low (as opposed to zero or negative)
inflation reduces the severity of economic recessions by enabling the labor market to adjust more quickly in a
downturn, and reduces the risk that a liquidity trap prevents monetary policy from stabilising the economy.[14]
The task of keeping the rate of inflation low and stable is usually given to monetary authorities. Generally,
these monetary authorities are the central banks that control monetary policy through the setting of interest
rates, through open market operations, and through the setting of banking reserve requirements.[15]
"""
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform([document_text])
X_train
vectorizer.get_feature_names()
# look at how related word forms are treated:
## 'transmission', 'transmissions', 'transmit'
# stemming with the Porter stemmer
from nltk.stem.porter import PorterStemmer
stemmer = PorterStemmer()
# lemmatization with WordNet
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
print(stemmer.stem("transmission"))
print(stemmer.stem("transmissions"))
print(stemmer.stem("transmit"))
print(lemmatizer.lemmatize("transmission"))
print(lemmatizer.lemmatize("transmissions"))
print(lemmatizer.lemmatize("transmit"))
lemma_text = ' '.join(lemmatizer.lemmatize(w) for w in document_text.split())
lemma_text
stem_text = ' '.join(stemmer.stem(w) for w in document_text.split())
stem_text
stem_vectorizer = CountVectorizer()
stem_vectorizer.fit_transform([stem_text])
stem_vectorizer.get_feature_names()
# generate for lemmatized as well
# only alpha
import re
regex = re.compile('[^a-zA-Z]')
alpha_text = regex.sub(' ', document_text)
alpha_text = ' '.join(alpha_text.split())
alpha_text
# remove stop words
from nltk.corpus import stopwords
stop = stopwords.words('english')
'or' in stop
nostop_text = ' '.join(word.lower() for word in alpha_text.lower().split() if word not in stop)
print(nostop_text)
# what happens if you use the original document_text. does it catch all the stop words?
# generate a stemmed, alpha, no stop word list
stem_text = ' '.join(stemmer.stem(w) for w in nostop_text.split())
stem_text
lemma_text = ' '.join(lemmatizer.lemmatize(w) for w in nostop_text.split())
lemma_text
stem_vectorizer = CountVectorizer()
stem_vectorizer.fit_transform([stem_text])
stem_vectorizer.get_feature_names()
```
# Detect sequential data
> Marcos Duarte
> Laboratory of Biomechanics and Motor Control ([http://demotu.org/](http://demotu.org/))
> Federal University of ABC, Brazil
The function `detect_seq.py` detects initial and final indices of sequential data identical to parameter `value` (default = 0) in the 1D numpy array_like `x`.
Use parameter `min_seq` to set the minimum number of sequential values to detect (default = 1).
The signature of `detect_seq.py` is:
```python
inds = detect_seq(x, value=0, min_seq=1, show=False, ax=None)
```
Let's see how `detect_seq.py` works; first let's import the necessary Python libraries and configure the environment:
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import sys
sys.path.insert(1, r'./../functions') # add to pythonpath
from detect_seq import detect_seq
```
Let's run the function examples:
```
>>> x = [1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0]
>>> detect_seq(x)
```
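Working through the definition, the zeros in this `x` form three runs (indices 1-3, 7, and 9-11), so the call above should return their initial and final indices:
```
array([[ 1,  3],
       [ 7,  7],
       [ 9, 11]])
```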
There is an option to plot the results:
```
>>> detect_seq(x, value=0, min_seq=2, show=True)
```
## Function `detect_seq.py`
```
# %load ./../functions/detect_seq.py
"""Detect initial and final indices of sequential data identical to value."""
import numpy as np
__author__ = 'Marcos Duarte, https://github.com/demotu/BMC'
__version__ = "1.0.0"
__license__ = "MIT"
def detect_seq(x, value=0, min_seq=1, show=False, ax=None):
"""Detect initial and final indices of sequential data identical to value.
Detects initial and final indices of sequential data identical to
parameter value (default = 0) in a 1D numpy array_like.
Use parameter min_seq to set the minimum number of sequential values to
detect (default = 1).
There is an option to plot the results.
Parameters
----------
x : 1D numpy array_like
array to search for sequential data
value : number, optional (default = 0)
Value to detect as sequential data
min_seq : integer, optional (default = 1)
Minimum number of sequential values to detect
show : bool, optional (default = False)
Show plot (True) or not (False).
ax : matplotlib object, optional (default = None)
Matplotlib axis object where to plot.
Returns
-------
inds : 2D numpy array [indi, indf]
Initial and final indices of sequential data identical to value
References
----------
.. [1] http://nbviewer.jupyter.org/github/demotu/BMC/blob/master/notebooks/detect_seq.ipynb
Examples
--------
>>> import numpy as np
>>> x = [1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0]
>>> detect_seq(x)
>>> inds = detect_seq(x, value=0, min_seq=2, show=True)
"""
isvalue = np.concatenate(([0], np.equal(x, value), [0]))
inds = np.where(np.abs(np.diff(isvalue)) == 1)[0].reshape(-1, 2)
if min_seq > 1:
inds = inds[np.where(np.diff(inds, axis=1) >= min_seq)[0]]
inds[:, 1] = inds[:, 1] - 1
if show:
_plot(x, value, min_seq, ax, inds)
return inds
def _plot(x, value, min_seq, ax, inds):
"""Plot results of the detect_seq function, see its help."""
try:
import matplotlib.pyplot as plt
except ImportError:
print('matplotlib is not available.')
else:
x = np.asarray(x)
if ax is None:
_, ax = plt.subplots(1, 1, figsize=(8, 4))
if inds.size:
for (indi, indf) in inds:
if indi == indf:
ax.plot(indf, x[indf], 'ro', mec='r', ms=6)
else:
ax.plot(range(indi, indf+1), x[indi:indf+1], 'r', lw=1)
ax.axvline(x=indi, color='b', lw=1, ls='--')
ax.axvline(x=indf, color='b', lw=1, ls='--')
inds = np.vstack((np.hstack((0, inds[:, 1])),
np.hstack((inds[:, 0], x.size-1)))).T
for (indi, indf) in inds:
ax.plot(range(indi, indf+1), x[indi:indf+1], 'k', lw=1)
else:
ax.plot(x, 'k', lw=1)
ax.set_xlim(-.02*x.size, x.size*1.02-1)
ymin, ymax = x[np.isfinite(x)].min(), x[np.isfinite(x)].max()
yrange = ymax - ymin if ymax > ymin else 1
ax.set_ylim(ymin - 0.1*yrange, ymax + 0.1*yrange)
text = 'Value=%.3g, minimum number=%d'
ax.set_title(text % (value, min_seq))
plt.show()
```
**Chapter 4 – Training Linear Models**
_This notebook contains all the sample code and solutions to the exercises in chapter 4._
# Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:
```
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "training_linear_models"
def save_fig(fig_id, tight_layout=True):
path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format='png', dpi=300)
```
# Linear regression using the Normal Equation
```
import numpy as np
X = 2 * np.random.rand(100, 1)
y = 4 + 3 * X + np.random.randn(100, 1)
plt.plot(X, y, "b.")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.axis([0, 2, 0, 15])
save_fig("generated_data_plot")
plt.show()
X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance
theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y)
theta_best
X_new = np.array([[0], [2]])
X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance
y_predict = X_new_b.dot(theta_best)
y_predict
plt.plot(X_new, y_predict, "r-")
plt.plot(X, y, "b.")
plt.axis([0, 2, 0, 15])
plt.show()
```
The figure in the book actually corresponds to the following code, with a legend and axis labels:
```
plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions")
plt.plot(X, y, "b.")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.legend(loc="upper left", fontsize=14)
plt.axis([0, 2, 0, 15])
save_fig("linear_model_predictions")
plt.show()
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X, y)
lin_reg.intercept_, lin_reg.coef_
lin_reg.predict(X_new)
```
# Linear regression using batch gradient descent
```
eta = 0.1
n_iterations = 1000
m = 100
theta = np.random.randn(2,1)
for iteration in range(n_iterations):
gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y)
theta = theta - eta * gradients
theta
X_new_b.dot(theta)
theta_path_bgd = []
def plot_gradient_descent(theta, eta, theta_path=None):
m = len(X_b)
plt.plot(X, y, "b.")
n_iterations = 1000
for iteration in range(n_iterations):
if iteration < 10:
y_predict = X_new_b.dot(theta)
style = "b-" if iteration > 0 else "r--"
plt.plot(X_new, y_predict, style)
gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y)
theta = theta - eta * gradients
if theta_path is not None:
theta_path.append(theta)
plt.xlabel("$x_1$", fontsize=18)
plt.axis([0, 2, 0, 15])
plt.title(r"$\eta = {}$".format(eta), fontsize=16)
np.random.seed(42)
theta = np.random.randn(2,1) # random initialization
plt.figure(figsize=(10,4))
plt.subplot(131); plot_gradient_descent(theta, eta=0.02)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd)
plt.subplot(133); plot_gradient_descent(theta, eta=0.5)
save_fig("gradient_descent_plot")
plt.show()
```
# Stochastic Gradient Descent
```
theta_path_sgd = []
m = len(X_b)
np.random.seed(42)
n_epochs = 50
t0, t1 = 5, 50 # learning schedule hyperparameters
def learning_schedule(t):
return t0 / (t + t1)
theta = np.random.randn(2,1) # random initialization
for epoch in range(n_epochs):
for i in range(m):
if epoch == 0 and i < 20: # not shown in the book
y_predict = X_new_b.dot(theta) # not shown
style = "b-" if i > 0 else "r--" # not shown
plt.plot(X_new, y_predict, style) # not shown
random_index = np.random.randint(m)
xi = X_b[random_index:random_index+1]
yi = y[random_index:random_index+1]
gradients = 2 * xi.T.dot(xi.dot(theta) - yi)
eta = learning_schedule(epoch * m + i)
theta = theta - eta * gradients
theta_path_sgd.append(theta) # not shown
plt.plot(X, y, "b.") # not shown
plt.xlabel("$x_1$", fontsize=18) # not shown
plt.ylabel("$y$", rotation=0, fontsize=18) # not shown
plt.axis([0, 2, 0, 15]) # not shown
save_fig("sgd_plot") # not shown
plt.show() # not shown
theta
from sklearn.linear_model import SGDRegressor
sgd_reg = SGDRegressor(n_iter=50, penalty=None, eta0=0.1, random_state=42)
sgd_reg.fit(X, y.ravel())
sgd_reg.intercept_, sgd_reg.coef_
```
# Mini-batch gradient descent
```
theta_path_mgd = []
n_iterations = 50
minibatch_size = 20
np.random.seed(42)
theta = np.random.randn(2,1) # random initialization
t0, t1 = 10, 1000
def learning_schedule(t):
return t0 / (t + t1)
t = 0
for epoch in range(n_iterations):
shuffled_indices = np.random.permutation(m)
X_b_shuffled = X_b[shuffled_indices]
y_shuffled = y[shuffled_indices]
for i in range(0, m, minibatch_size):
t += 1
xi = X_b_shuffled[i:i+minibatch_size]
yi = y_shuffled[i:i+minibatch_size]
gradients = 2 * xi.T.dot(xi.dot(theta) - yi)
eta = learning_schedule(t)
theta = theta - eta * gradients
theta_path_mgd.append(theta)
theta
theta_path_bgd = np.array(theta_path_bgd)
theta_path_sgd = np.array(theta_path_sgd)
theta_path_mgd = np.array(theta_path_mgd)
plt.figure(figsize=(7,4))
plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic")
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch")
plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch")
plt.legend(loc="upper left", fontsize=16)
plt.xlabel(r"$\theta_0$", fontsize=20)
plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0)
plt.axis([2.5, 4.5, 2.3, 3.9])
save_fig("gradient_descent_paths_plot")
plt.show()
```
# Polynomial regression
```
import numpy as np
import numpy.random as rnd
np.random.seed(42)
m = 100
X = 6 * np.random.rand(m, 1) - 3
y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1)
plt.plot(X, y, "b.")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.axis([-3, 3, 0, 10])
save_fig("quadratic_data_plot")
plt.show()
from sklearn.preprocessing import PolynomialFeatures
poly_features = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly_features.fit_transform(X)
X[0]
X_poly[0]
lin_reg = LinearRegression()
lin_reg.fit(X_poly, y)
lin_reg.intercept_, lin_reg.coef_
X_new=np.linspace(-3, 3, 100).reshape(100, 1)
X_new_poly = poly_features.transform(X_new)
y_new = lin_reg.predict(X_new_poly)
plt.plot(X, y, "b.")
plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.legend(loc="upper left", fontsize=14)
plt.axis([-3, 3, 0, 10])
save_fig("quadratic_predictions_plot")
plt.show()
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)):
polybig_features = PolynomialFeatures(degree=degree, include_bias=False)
std_scaler = StandardScaler()
lin_reg = LinearRegression()
polynomial_regression = Pipeline((
("poly_features", polybig_features),
("std_scaler", std_scaler),
("lin_reg", lin_reg),
))
polynomial_regression.fit(X, y)
y_newbig = polynomial_regression.predict(X_new)
plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width)
plt.plot(X, y, "b.", linewidth=3)
plt.legend(loc="upper left")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.axis([-3, 3, 0, 10])
save_fig("high_degree_polynomials_plot")
plt.show()
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
def plot_learning_curves(model, X, y):
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10)
train_errors, val_errors = [], []
for m in range(1, len(X_train)):
model.fit(X_train[:m], y_train[:m])
y_train_predict = model.predict(X_train[:m])
y_val_predict = model.predict(X_val)
train_errors.append(mean_squared_error(y_train_predict, y_train[:m]))
val_errors.append(mean_squared_error(y_val_predict, y_val))
plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train")
plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val")
plt.legend(loc="upper right", fontsize=14) # not shown in the book
plt.xlabel("Training set size", fontsize=14) # not shown
plt.ylabel("RMSE", fontsize=14) # not shown
lin_reg = LinearRegression()
plot_learning_curves(lin_reg, X, y)
plt.axis([0, 80, 0, 3]) # not shown in the book
save_fig("underfitting_learning_curves_plot") # not shown
plt.show() # not shown
from sklearn.pipeline import Pipeline
polynomial_regression = Pipeline((
("poly_features", PolynomialFeatures(degree=10, include_bias=False)),
("lin_reg", LinearRegression()),
))
plot_learning_curves(polynomial_regression, X, y)
plt.axis([0, 80, 0, 3]) # not shown
save_fig("learning_curves_plot") # not shown
plt.show() # not shown
```
# Regularized models
```
from sklearn.linear_model import Ridge
np.random.seed(42)
m = 20
X = 3 * np.random.rand(m, 1)
y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5
X_new = np.linspace(0, 3, 100).reshape(100, 1)
def plot_model(model_class, polynomial, alphas, **model_kargs):
for alpha, style in zip(alphas, ("b-", "g--", "r:")):
model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression()
if polynomial:
model = Pipeline((
("poly_features", PolynomialFeatures(degree=10, include_bias=False)),
("std_scaler", StandardScaler()),
("regul_reg", model),
))
model.fit(X, y)
y_new_regul = model.predict(X_new)
lw = 2 if alpha > 0 else 1
plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha))
plt.plot(X, y, "b.", linewidth=3)
plt.legend(loc="upper left", fontsize=15)
plt.xlabel("$x_1$", fontsize=18)
plt.axis([0, 3, 0, 4])
plt.figure(figsize=(8,4))
plt.subplot(121)
plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.subplot(122)
plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42)
save_fig("ridge_regression_plot")
plt.show()
from sklearn.linear_model import Ridge
ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42)
ridge_reg.fit(X, y)
ridge_reg.predict([[1.5]])
sgd_reg = SGDRegressor(penalty="l2", random_state=42)
sgd_reg.fit(X, y.ravel())
sgd_reg.predict([[1.5]])
ridge_reg = Ridge(alpha=1, solver="sag", random_state=42)
ridge_reg.fit(X, y)
ridge_reg.predict([[1.5]])
from sklearn.linear_model import Lasso
plt.figure(figsize=(8,4))
plt.subplot(121)
plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.subplot(122)
plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42)
save_fig("lasso_regression_plot")
plt.show()
from sklearn.linear_model import Lasso
lasso_reg = Lasso(alpha=0.1)
lasso_reg.fit(X, y)
lasso_reg.predict([[1.5]])
from sklearn.linear_model import ElasticNet
elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42)
elastic_net.fit(X, y)
elastic_net.predict([[1.5]])
np.random.seed(42)
m = 100
X = 6 * np.random.rand(m, 1) - 3
y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1)
X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10)
poly_scaler = Pipeline((
("poly_features", PolynomialFeatures(degree=90, include_bias=False)),
("std_scaler", StandardScaler()),
))
X_train_poly_scaled = poly_scaler.fit_transform(X_train)
X_val_poly_scaled = poly_scaler.transform(X_val)
sgd_reg = SGDRegressor(n_iter=1,
penalty=None,
eta0=0.0005,
warm_start=True,
learning_rate="constant",
random_state=42)
n_epochs = 500
train_errors, val_errors = [], []
for epoch in range(n_epochs):
sgd_reg.fit(X_train_poly_scaled, y_train)
y_train_predict = sgd_reg.predict(X_train_poly_scaled)
y_val_predict = sgd_reg.predict(X_val_poly_scaled)
train_errors.append(mean_squared_error(y_train_predict, y_train))
val_errors.append(mean_squared_error(y_val_predict, y_val))
best_epoch = np.argmin(val_errors)
best_val_rmse = np.sqrt(val_errors[best_epoch])
plt.annotate('Best model',
xy=(best_epoch, best_val_rmse),
xytext=(best_epoch, best_val_rmse + 1),
ha="center",
arrowprops=dict(facecolor='black', shrink=0.05),
fontsize=16,
)
best_val_rmse -= 0.03 # just to make the graph look better
plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2)
plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set")
plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set")
plt.legend(loc="upper right", fontsize=14)
plt.xlabel("Epoch", fontsize=14)
plt.ylabel("RMSE", fontsize=14)
save_fig("early_stopping_plot")
plt.show()
from sklearn.base import clone
sgd_reg = SGDRegressor(n_iter=1, warm_start=True, penalty=None,
learning_rate="constant", eta0=0.0005, random_state=42)
minimum_val_error = float("inf")
best_epoch = None
best_model = None
for epoch in range(1000):
sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off
y_val_predict = sgd_reg.predict(X_val_poly_scaled)
val_error = mean_squared_error(y_val_predict, y_val)
if val_error < minimum_val_error:
minimum_val_error = val_error
best_epoch = epoch
best_model = clone(sgd_reg)
best_epoch, best_model
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5
# ignoring bias term
t1s = np.linspace(t1a, t1b, 500)
t2s = np.linspace(t2a, t2b, 500)
t1, t2 = np.meshgrid(t1s, t2s)
T = np.c_[t1.ravel(), t2.ravel()]
Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]])
yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:]
J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape)
N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape)
N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape)
t_min_idx = np.unravel_index(np.argmin(J), J.shape)
t1_min, t2_min = t1[t_min_idx], t2[t_min_idx]
t_init = np.array([[0.25], [-1]])
def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50):
path = [theta]
for iteration in range(n_iterations):
gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta
theta = theta - eta * gradients
path.append(theta)
return np.array(path)
plt.figure(figsize=(12, 8))
for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")):
JR = J + l1 * N1 + l2 * N2**2
tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape)
t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx]
levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J)
levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR)
levelsN=np.linspace(0, np.max(N), 10)
path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0)
path_JR = bgd_path(t_init, Xr, yr, l1, l2)
path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0)
plt.subplot(221 + i * 2)
plt.grid(True)
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9)
plt.contour(t1, t2, N, levels=levelsN)
plt.plot(path_J[:, 0], path_J[:, 1], "w-o")
plt.plot(path_N[:, 0], path_N[:, 1], "y-^")
plt.plot(t1_min, t2_min, "rs")
plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16)
plt.axis([t1a, t1b, t2a, t2b])
plt.subplot(222 + i * 2)
plt.grid(True)
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9)
plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o")
plt.plot(t1r_min, t2r_min, "rs")
plt.title(title, fontsize=16)
plt.axis([t1a, t1b, t2a, t2b])
for subplot in (221, 223):
plt.subplot(subplot)
plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0)
for subplot in (223, 224):
plt.subplot(subplot)
plt.xlabel(r"$\theta_1$", fontsize=20)
save_fig("lasso_vs_ridge_plot")
plt.show()
```
# Logistic regression
```
t = np.linspace(-10, 10, 100)
sig = 1 / (1 + np.exp(-t))
plt.figure(figsize=(9, 3))
plt.plot([-10, 10], [0, 0], "k-")
plt.plot([-10, 10], [0.5, 0.5], "k:")
plt.plot([-10, 10], [1, 1], "k:")
plt.plot([0, 0], [-1.1, 1.1], "k-")
plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$")
plt.xlabel("t")
plt.legend(loc="upper left", fontsize=20)
plt.axis([-10, 10, -0.1, 1.1])
save_fig("logistic_function_plot")
plt.show()
from sklearn import datasets
iris = datasets.load_iris()
list(iris.keys())
print(iris.DESCR)
X = iris["data"][:, 3:] # petal width
y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0
from sklearn.linear_model import LogisticRegression
log_reg = LogisticRegression(random_state=42)
log_reg.fit(X, y)
X_new = np.linspace(0, 3, 1000).reshape(-1, 1)
y_proba = log_reg.predict_proba(X_new)
plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica")
plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica")
```
The figure in the book is actually a bit fancier:
```
X_new = np.linspace(0, 3, 1000).reshape(-1, 1)
y_proba = log_reg.predict_proba(X_new)
decision_boundary = X_new[y_proba[:, 1] >= 0.5][0]
plt.figure(figsize=(8, 3))
plt.plot(X[y==0], y[y==0], "bs")
plt.plot(X[y==1], y[y==1], "g^")
plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2)
plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica")
plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica")
plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center")
plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b')
plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g')
plt.xlabel("Petal width (cm)", fontsize=14)
plt.ylabel("Probability", fontsize=14)
plt.legend(loc="center left", fontsize=14)
plt.axis([0, 3, -0.02, 1.02])
save_fig("logistic_regression_plot")
plt.show()
decision_boundary
log_reg.predict([[1.7], [1.5]])
from sklearn.linear_model import LogisticRegression
X = iris["data"][:, (2, 3)] # petal length, petal width
y = (iris["target"] == 2).astype(np.int)
log_reg = LogisticRegression(C=10**10, random_state=42)
log_reg.fit(X, y)
x0, x1 = np.meshgrid(
np.linspace(2.9, 7, 500).reshape(-1, 1),
np.linspace(0.8, 2.7, 200).reshape(-1, 1),
)
X_new = np.c_[x0.ravel(), x1.ravel()]
y_proba = log_reg.predict_proba(X_new)
plt.figure(figsize=(10, 4))
plt.plot(X[y==0, 0], X[y==0, 1], "bs")
plt.plot(X[y==1, 0], X[y==1, 1], "g^")
zz = y_proba[:, 1].reshape(x0.shape)
contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg)
left_right = np.array([2.9, 7])
boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1]
plt.clabel(contour, inline=1, fontsize=12)
plt.plot(left_right, boundary, "k--", linewidth=3)
plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center")
plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center")
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.axis([2.9, 7, 0.8, 2.7])
save_fig("logistic_regression_contour_plot")
plt.show()
X = iris["data"][:, (2, 3)] # petal length, petal width
y = iris["target"]
softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42)
softmax_reg.fit(X, y)
x0, x1 = np.meshgrid(
np.linspace(0, 8, 500).reshape(-1, 1),
np.linspace(0, 3.5, 200).reshape(-1, 1),
)
X_new = np.c_[x0.ravel(), x1.ravel()]
y_proba = softmax_reg.predict_proba(X_new)
y_predict = softmax_reg.predict(X_new)
zz1 = y_proba[:, 1].reshape(x0.shape)
zz = y_predict.reshape(x0.shape)
plt.figure(figsize=(10, 4))
plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica")
plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor")
plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa")
from matplotlib.colors import ListedColormap
custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0'])
plt.contourf(x0, x1, zz, cmap=custom_cmap, linewidth=5)
contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg)
plt.clabel(contour, inline=1, fontsize=12)
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.legend(loc="center left", fontsize=14)
plt.axis([0, 7, 0, 3.5])
save_fig("softmax_regression_contour_plot")
plt.show()
softmax_reg.predict([[5, 2]])
softmax_reg.predict_proba([[5, 2]])
```
# Exercise solutions
## 1. to 11.
See appendix A.
## 12. Batch Gradient Descent with early stopping for Softmax Regression
(without using Scikit-Learn)
Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier.
```
X = iris["data"][:, (2, 3)] # petal length, petal width
y = iris["target"]
```
We need to add the bias term for every instance ($x_0 = 1$):
```
X_with_bias = np.c_[np.ones([len(X), 1]), X]
```
And let's set the random seed so the output of this exercise solution is reproducible:
```
np.random.seed(2042)
```
The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try to understand the algorithms by implementing them manually. So here is one possible implementation:
```
test_ratio = 0.2
validation_ratio = 0.2
total_size = len(X_with_bias)
test_size = int(total_size * test_ratio)
validation_size = int(total_size * validation_ratio)
train_size = total_size - test_size - validation_size
rnd_indices = np.random.permutation(total_size)
X_train = X_with_bias[rnd_indices[:train_size]]
y_train = y[rnd_indices[:train_size]]
X_valid = X_with_bias[rnd_indices[train_size:-test_size]]
y_valid = y[rnd_indices[train_size:-test_size]]
X_test = X_with_bias[rnd_indices[-test_size:]]
y_test = y[rnd_indices[-test_size:]]
```
The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance:
```
def to_one_hot(y):
n_classes = y.max() + 1
m = len(y)
Y_one_hot = np.zeros((m, n_classes))
Y_one_hot[np.arange(m), y] = 1
return Y_one_hot
```
Let's test this function on the first 10 instances:
```
y_train[:10]
to_one_hot(y_train[:10])
```
Looks good, so let's create the target class probabilities matrix for the training set, the validation set and the test set:
```
Y_train_one_hot = to_one_hot(y_train)
Y_valid_one_hot = to_one_hot(y_valid)
Y_test_one_hot = to_one_hot(y_test)
```
Now let's implement the Softmax function. Recall that it is defined by the following equation:
$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$
```
def softmax(logits):
exps = np.exp(logits)
exp_sums = np.sum(exps, axis=1, keepdims=True)
return exps / exp_sums
```
We are almost ready to start training. Let's define the number of inputs and outputs:
```
n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term)
n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes)
```
Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news is that you won't have to do this every day, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.
So the equations we will need are the cost function:
$J(\mathbf{\Theta}) =
- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$
And the equation for the gradients:
$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$
Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values.
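Following the advice above about checking shapes, here is a minimal sketch of a sanity check you could run once the training cell below has defined `Theta` (the comments give the expected shapes):
```
# Hedged sketch: run after Theta has been defined by the training cell below
logits = X_train.dot(Theta)                         # (m, n_outputs)
Y_proba = softmax(logits)                           # (m, n_outputs)
error = Y_proba - Y_train_one_hot                   # (m, n_outputs)
gradients = 1/len(X_train) * X_train.T.dot(error)   # (n_inputs, n_outputs)
assert gradients.shape == Theta.shape               # so Theta - eta * gradients is well defined
```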
```
eta = 0.01
n_iterations = 5001
m = len(X_train)
epsilon = 1e-7
Theta = np.random.randn(n_inputs, n_outputs)
for iteration in range(n_iterations):
logits = X_train.dot(Theta)
Y_proba = softmax(logits)
loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1))
error = Y_proba - Y_train_one_hot
if iteration % 500 == 0:
print(iteration, loss)
gradients = 1/m * X_train.T.dot(error)
Theta = Theta - eta * gradients
```
And that's it! The Softmax model is trained. Let's look at the model parameters:
```
Theta
```
Let's make predictions for the validation set and check the accuracy score:
```
logits = X_valid.dot(Theta)
Y_proba = softmax(logits)
y_predict = np.argmax(Y_proba, axis=1)
accuracy_score = np.mean(y_predict == y_valid)
accuracy_score
```
Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`.
```
eta = 0.1
n_iterations = 5001
m = len(X_train)
epsilon = 1e-7
alpha = 0.1 # regularization hyperparameter
Theta = np.random.randn(n_inputs, n_outputs)
for iteration in range(n_iterations):
logits = X_train.dot(Theta)
Y_proba = softmax(logits)
xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1))
l2_loss = 1/2 * np.sum(np.square(Theta[1:]))
loss = xentropy_loss + alpha * l2_loss
error = Y_proba - Y_train_one_hot
if iteration % 500 == 0:
print(iteration, loss)
gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_inputs]), alpha * Theta[1:]]
Theta = Theta - eta * gradients
```
Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out:
```
logits = X_valid.dot(Theta)
Y_proba = softmax(logits)
y_predict = np.argmax(Y_proba, axis=1)
accuracy_score = np.mean(y_predict == y_valid)
accuracy_score
```
Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant.
Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing.
```
eta = 0.1
n_iterations = 5001
m = len(X_train)
epsilon = 1e-7
alpha = 0.1 # regularization hyperparameter
best_loss = np.infty
Theta = np.random.randn(n_inputs, n_outputs)
for iteration in range(n_iterations):
logits = X_train.dot(Theta)
Y_proba = softmax(logits)
xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1))
l2_loss = 1/2 * np.sum(np.square(Theta[1:]))
loss = xentropy_loss + alpha * l2_loss
error = Y_proba - Y_train_one_hot
gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_inputs]), alpha * Theta[1:]]
Theta = Theta - eta * gradients
logits = X_valid.dot(Theta)
Y_proba = softmax(logits)
xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1))
l2_loss = 1/2 * np.sum(np.square(Theta[1:]))
loss = xentropy_loss + alpha * l2_loss
if iteration % 500 == 0:
print(iteration, loss)
if loss < best_loss:
best_loss = loss
else:
print(iteration - 1, best_loss)
print(iteration, loss, "early stopping!")
break
logits = X_valid.dot(Theta)
Y_proba = softmax(logits)
y_predict = np.argmax(Y_proba, axis=1)
accuracy_score = np.mean(y_predict == y_valid)
accuracy_score
```
Still perfect, but faster.
Now let's plot the model's predictions on the whole dataset:
```
x0, x1 = np.meshgrid(
np.linspace(0, 8, 500).reshape(-1, 1),
np.linspace(0, 3.5, 200).reshape(-1, 1),
)
X_new = np.c_[x0.ravel(), x1.ravel()]
X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new]
logits = X_new_with_bias.dot(Theta)
Y_proba = softmax(logits)
y_predict = np.argmax(Y_proba, axis=1)
zz1 = Y_proba[:, 1].reshape(x0.shape)
zz = y_predict.reshape(x0.shape)
plt.figure(figsize=(10, 4))
plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica")
plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor")
plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa")
from matplotlib.colors import ListedColormap
custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0'])
plt.contourf(x0, x1, zz, cmap=custom_cmap, linewidth=5)
contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg)
plt.clabel(contour, inline=1, fontsize=12)
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.legend(loc="upper left", fontsize=14)
plt.axis([0, 7, 0, 3.5])
plt.show()
```
And now let's measure the final model's accuracy on the test set:
```
logits = X_test.dot(Theta)
Y_proba = softmax(logits)
y_predict = np.argmax(Y_proba, axis=1)
accuracy_score = np.mean(y_predict == y_test)
accuracy_score
```
Our perfect model turns out to have slight imperfections. This variability is likely due to the very small size of the dataset: depending on how you sample the training set, validation set and the test set, you can get quite different results. Try changing the random seed and running the code again a few times, you will see that the results will vary.
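For example, a minimal sketch of that experiment (reusing the helper functions and arrays defined above, without regularization or early stopping) could look like this:
```
# Sketch: re-split and re-train with a few different seeds to see how the accuracy varies
for seed in (2042, 0, 1):
    np.random.seed(seed)
    rnd_indices = np.random.permutation(total_size)
    X_tr, y_tr = X_with_bias[rnd_indices[:train_size]], y[rnd_indices[:train_size]]
    X_te, y_te = X_with_bias[rnd_indices[-test_size:]], y[rnd_indices[-test_size:]]
    Y_tr_one_hot = to_one_hot(y_tr)
    Theta = np.random.randn(n_inputs, n_outputs)
    for iteration in range(5001):
        error = softmax(X_tr.dot(Theta)) - Y_tr_one_hot
        Theta = Theta - 0.1 * (1/len(X_tr)) * X_tr.T.dot(error)
    acc = np.mean(np.argmax(softmax(X_te.dot(Theta)), axis=1) == y_te)
    print("seed {}: test accuracy = {:.2f}".format(seed, acc))
```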
# TensorBoard
TensorBoard is TensorFlow's visualization tool, which can be used to visualize the computation graph. It can also be used to plot various quantitative metrics and the results of several intermediate calculations. Using TensorBoard, we can easily visualize complex models, which is useful for debugging and sharing.
Now let us build a basic computation graph and visualize it in TensorBoard.
First, let us import the library
```
import tensorflow as tf
```
Next, we define the constant nodes:
```
a = tf.constant(5)
b = tf.constant(4)
c = tf.multiply(a,b)
d = tf.constant(2)
e = tf.constant(3)
f = tf.multiply(d,e)
g = tf.add(c,f)
```
Now we will create a TensorFlow session and write the graph to an event file using `tf.summary.FileWriter()`:
```
with tf.Session() as sess:
writer = tf.summary.FileWriter("logs", sess.graph)
print(sess.run(g))
writer.close()
```
In order to run TensorBoard, go to your terminal, locate the working directory, and type `tensorboard --logdir=logs --port=6003`.
# Adding Scope
Scoping is used to reduce complexity and helps us better understand the model by grouping related nodes together. For instance, in the above example, we can break our graph down into two groups called computation and result. If you look at the previous example, we can see that nodes a to f perform the computation and node g calculates the result, so we can group them separately using scopes for easier understanding. Scopes can be created using the `tf.name_scope()` function.
```
with tf.name_scope("Computation"):
a = tf.constant(5)
b = tf.constant(4)
c = tf.multiply(a,b)
d = tf.constant(2)
e = tf.constant(3)
f = tf.multiply(d,e)
with tf.name_scope("Result"):
g = tf.add(c,f)
```
Looking at the computation scope, we can break it down further into separate parts for an even better understanding. Say we create a scope called Part1, which has nodes a to c, and a scope called Part2, which has nodes d to f, since the two parts are independent of each other.
```
with tf.name_scope("Computation"):
with tf.name_scope("Part1"):
a = tf.constant(5)
b = tf.constant(4)
c = tf.multiply(a,b)
with tf.name_scope("Part2"):
d = tf.constant(2)
e = tf.constant(3)
f = tf.multiply(d,e)
```
Scoping can be better understood by visualizing it in TensorBoard. The complete code looks as follows:
```
with tf.name_scope("Computation"):
with tf.name_scope("Part1"):
a = tf.constant(5)
b = tf.constant(4)
c = tf.multiply(a,b)
with tf.name_scope("Part2"):
d = tf.constant(2)
e = tf.constant(3)
f = tf.multiply(d,e)
with tf.name_scope("Result"):
g = tf.add(c,f)
with tf.Session() as sess:
writer = tf.summary.FileWriter("logs", sess.graph)
print(sess.run(g))
writer.close()
```
In order to run TensorBoard, go to your terminal, locate the working directory, and type `tensorboard --logdir=logs --port=6003`.
If you look at TensorBoard, you can easily see how scoping reduces complexity by grouping similar nodes together. Scoping is widely used when working on complex projects to better understand the functionality and dependencies of nodes.
# LassoLars Regression
This Code template is for the regression analysis using a simple LassoLars Regression. It is a lasso model implemented using the LARS algorithm.
### Required Packages
```
import warnings
import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
from sklearn.linear_model import LassoLars
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ""
```
List of features which are required for model training.
```
#x_values
features=[]
```
Target feature for prediction.
```
#y_value
target=''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file using its storage path, and use the `head` function to display the first few rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
Feature selection is the process of reducing the number of input variables when developing a predictive model. It is used both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X=df[features]
Y=df[target]
```
### Data Preprocessing
Since the majority of the machine learning models in the sklearn library don't handle string categorical data or null values, we have to handle them explicitly. The snippet below has functions which replace null values if any exist, and encode string categorical columns as dummy (one-hot) variables.
```
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
```
Calling preprocessing functions on the feature and target set.
```
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
```
### Model
LassoLars is a lasso model implemented using the LARS algorithm, and unlike the implementation based on coordinate descent, this yields the exact solution, which is piecewise linear as a function of the norm of its coefficients.
### Tuning parameters
> **fit_intercept** -> whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations
> **alpha** -> Constant that multiplies the penalty term. Defaults to 1.0. alpha = 0 is equivalent to an ordinary least square, solved by LinearRegression. For numerical reasons, using alpha = 0 with the LassoLars object is not advised and you should prefer the LinearRegression object.
> **eps** -> The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the tol parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization.
> **max_iter** -> Maximum number of iterations to perform.
> **positive** -> Restrict coefficients to be >= 0. Be aware that you might want to remove fit_intercept which is set True by default. Under the positive restriction the model coefficients will not converge to the ordinary-least-squares solution for small values of alpha. Only coefficients up to the smallest alpha value (alphas_[alphas_ > 0.].min() when fit_path=True) reached by the stepwise Lars-Lasso algorithm are typically in congruence with the solution of the coordinate descent Lasso estimator.
> **precompute** -> Whether to use a precomputed Gram matrix to speed up calculations.
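For illustration, these parameters can be passed explicitly when constructing the estimator. The values below are placeholders that mirror the scikit-learn defaults rather than tuned choices; the notebook itself keeps the defaults:
```
# Illustrative only: explicit (default-like) hyperparameters, not tuned for any dataset
example_model = LassoLars(alpha=1.0, fit_intercept=True, max_iter=500,
                          eps=np.finfo(float).eps, positive=False, precompute='auto')
```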
```
model = LassoLars(random_state=123)
model.fit(x_train,y_train)
```
#### Model Accuracy
We will use the trained model to make predictions on the test set, and then use the predicted values to measure the accuracy of our model.
score: The score function returns the coefficient of determination R2 of the prediction.
```
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
> **r2_score**: The **r2_score** function computes the coefficient of determination, i.e. the proportion of the variance in the target that is explained by our model.
> **mae**: The **mean absolute error** function calculates the average absolute distance between the real and the predicted values.
> **mse**: The **mean squared error** function averages the squared errors, which penalizes the model more heavily for large errors.
```
y_pred=model.predict(x_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred)))
```
#### Prediction Plot
First, we plot the actual values of the first 20 test-set records against their record number (in green).
Then we overlay the model's predictions for the same records (in red) so the two can be compared visually.
```
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(x_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
```
#### Creator: Thilakraj Devadiga , Github: [Profile](https://github.com/Thilakraj1998)
```
#default_exp data
#export
from timeseries_fastai.imports import *
from timeseries_fastai.core import *
from fastai.basics import *
from fastai.torch_core import *
from fastai.vision.data import get_grid
```
# Data
> DataBlock API to construct the DataLoaders
```
#hide
from nbdev.showdoc import show_doc
```
We will create a DataBlock to process our UCR datasets
```
ucr_path = untar_data(URLs.UCR)
df_train, df_test = load_df_ucr(ucr_path, 'StarLightCurves')
df_train.head()
x_cols = df_train.columns[slice(0,-1)].to_list()
x_cols[0:5], x_cols[-1]
#export
def TSBlock(cls=TSeries):
"A TimeSeries Block to process one timeseries"
return TransformBlock(type_tfms=cls.create)
dblock = DataBlock(blocks=(TSBlock, CategoryBlock),
get_x=lambda o: o[x_cols].values.astype(np.float32),
get_y=ColReader('target'),
splitter=RandomSplitter(0.2))
```
A good way to debug the Block is using summary:
```
dblock.summary(df_train)
dls = dblock.dataloaders(df_train, bs=4)
```
The `show_batch` method is not very practical; let's redefine it on the `DataLoader` class.
```
dls.show_batch()
```
A handy function to stack `df_train` and `df_valid` together; it adds a `valid_col` column so we know which rows belong to which split.
```
#export
def stack_train_valid(df_train, df_valid):
"Stack df_train and df_valid, adds `valid_col`=True/False for df_valid/df_train"
return pd.concat([df_train.assign(valid_col=False), df_valid.assign(valid_col=True)]).reset_index(drop=True)
```
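As a quick sanity check (not part of the original notebook), the helper can be exercised directly on the two frames loaded above:
```
stacked = stack_train_valid(df_train, df_test)
# valid_col is False for the training rows and True for the validation rows
stacked['valid_col'].value_counts()
```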
## DataLoaders
> A custom TSeries DataLoaders class
```
#export
class TSDataLoaders(DataLoaders):
"A TimeSeries DataLoader"
@classmethod
@delegates(DataLoaders.from_dblock)
def from_df(cls, df, path='.', valid_pct=0.2, seed=None, x_cols=None, label_col=None,
y_block=None, valid_col=None, item_tfms=None, batch_tfms=None, **kwargs):
"Create a DataLoader from a pandas DataFrame"
y_block = ifnone(y_block, CategoryBlock)
splitter = RandomSplitter(valid_pct, seed=seed) if valid_col is None else ColSplitter(valid_col)
dblock = DataBlock(blocks=(TSBlock, y_block),
get_x=lambda o: o[x_cols].values.astype(np.float32),
get_y=ColReader(label_col),
splitter=splitter,
item_tfms=item_tfms,
batch_tfms=batch_tfms)
return cls.from_dblock(dblock, df, path=path, **kwargs)
@classmethod
@delegates(DataLoaders.from_dblock)
def from_dfs(cls, df_train, df_valid, path='.', x_cols=None, label_col=None,
y_block=None, item_tfms=None, batch_tfms=None, **kwargs):
"Create a DataLoader from a df_train and df_valid"
df = stack_train_valid(df_train, df_valid)
return cls.from_df(df, path, x_cols=x_cols, valid_col='valid_col', label_col=label_col,
y_block=y_block, item_tfms=item_tfms, batch_tfms=batch_tfms,**kwargs)
```
Overriding the `show_batch` function to add grid spacing.
```
#export
@typedispatch
def show_batch(x: TSeries, y, samples, ctxs=None, max_n=10,rows=None, cols=None, figsize=None, **kwargs):
"Show batch for TSeries objects"
if ctxs is None: ctxs = get_grid(min(len(samples), max_n), nrows=rows, ncols=cols, add_vert=1, figsize=figsize)
ctxs = show_batch[object](x, y, samples=samples, ctxs=ctxs, max_n=max_n, **kwargs)
return ctxs
```
Let's test the DataLoader
```
show_doc(TSDataLoaders.from_dfs)
dls = TSDataLoaders.from_dfs(df_train, df_test, x_cols=x_cols, label_col='target', bs=16, val_bs=64)
dls.show_batch()
```
## Profiling the DataLoader
```
len(dls.valid_ds)
def cycle_dl(dl):
for x,y in iter(dl):
pass
```
It is pretty slow
```
#slow
%time cycle_dl(dls.valid)
```
# Export -
```
# hide
from nbdev.export import *
notebook2script()
```
# Plotly Visualization
The aim of this notebook is to provide guidelines on how to achieve parity with Pandas' visualization methods as explained in http://pandas.pydata.org/pandas-docs/stable/visualization.html with the use of **Plotly** and **Cufflinks**
```
import pandas as pd
import cufflinks as cf
import numpy as np
from IPython.display import display,HTML
%reload_ext autoreload
%autoreload 2
```
## Theme
Cufflinks can set a global theme (style) to be used.
In this case we will use Matplotlib's `ggplot` style.
```
cf.set_config_file(theme='ggplot',sharing='public',offline=False)
```
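If you want to see which themes are available before picking one, cufflinks exposes a helper for that (assuming your cufflinks version provides it):
```
# List the themes bundled with cufflinks, e.g. 'ggplot', 'pearl', 'solar', ...
cf.getThemes()
```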
## Basic Plotting
The `iplot` method on Series and DataFrame is a wrapper around Plotly's `plot` method
```
# Cufflinks can generate random data for different shapes
# Let's generate a single line with 1000 points
cf.datagen.lines(1,1000).iplot()
# Generating 4 timeseries
df=cf.datagen.lines(4,1000)
df.iplot()
```
You can plot one column versus another using the *x* and *y* keywords in `iplot`
```
df3 = pd.DataFrame(np.random.randn(1000, 2), columns=['B', 'C']).cumsum()
df3['A'] = pd.Series(list(range(len(df3))))
df3.iplot(x='A', y='B')
```
## Bar Plots
```
df.iloc[3].iplot(kind='bar',bargap=.5)
```
Calling a DataFrame's `iplot()` method with `kind='bar'` produces a multiple bar plot:
```
df=pd.DataFrame(np.random.rand(10, 4), columns=['a', 'b', 'c', 'd'])
df.iplot(kind='bar')
```
To produce a stacked bar plot, use `barmode=stack`
```
df.iplot(kind='bar',barmode='stack')
```
To get horizontal bar plots, pass `kind='barh'`
```
df.iplot(kind='barh',barmode='stack',bargap=.1)
```
## Histograms
Histograms can be drawn with `kind='histogram'`
```
df = pd.DataFrame({'a': np.random.randn(1000) + 1, 'b': np.random.randn(1000),
'c': np.random.randn(1000) - 1}, columns=['a', 'b', 'c'])
df.iplot(kind='histogram')
```
Histograms can be stacked by using `barmode=stack`. The number of bins can be changed with the `bins` keyword.
```
df.iplot(kind='histogram',barmode='stack',bins=20)
```
Orientation and normalization can also be set for histograms by using `orientation='h'` and `histnorm=probability`.
```
df.iplot(kind='histogram',columns=['a'],orientation='h',histnorm='probability')
```
Histograms (and any other kind of plot) can be set in a multiple layout by using `subplots=True`
```
df_h=cf.datagen.histogram(4)
df_h.iplot(kind='histogram',subplots=True,bins=50)
```
## Box Plots
Boxplots can be drawn by calling `iplot` on a `Series` or `DataFrame` with `kind='box'`
```
df = pd.DataFrame(np.random.rand(10, 5), columns=['A', 'B', 'C', 'D', 'E'])
df.iplot(kind='box')
```
### Grouping values
```
df = pd.DataFrame(np.random.rand(10,2), columns=['Col1', 'Col2'] )
df['X'] = pd.Series(['A','A','A','A','A','B','B','B','B','B'])
```
Grouping values by generating a list of figures
```
figs=[df[df['X']==d][['Col1','Col2']].iplot(kind='box',asFigure=True) for d in pd.unique(df['X']) ]
cf.iplot(cf.subplots(figs))
```
Grouping values and amending the keys
```
def by(df,category):
l=[]
for cat in pd.unique(df[category]):
_df=df[df[category]==cat]
del _df[category]
_df=_df.rename(columns=dict([(k,'{0}_{1}'.format(cat,k)) for k in _df.columns]))
l.append(_df.iplot(kind='box',asFigure=True))
return l
cf.iplot(cf.subplots(by(df,'X')))
```
## Area Plots
You can create area plots with `Series.iplot` and `DataFrame.iplot` by passing `kind='area'`. To produce a stacked area plot, each column must contain either all positive or all negative values.
When input data contains NaN, it will be automatically filled by 0. If you want to drop or fill by different values, use dataframe.dropna() or dataframe.fillna() before calling plot.
To fill the area you can use `fill=True`
```
df = pd.DataFrame(np.random.rand(10, 4), columns=['a', 'b', 'c', 'd'])
df.iplot(kind='area',fill=True,opacity=1)
```
For non-stacked charts you can use `kind=scatter` with `fill=True`. Alpha value is set to 0.3 unless otherwise specified:
```
df.iplot(fill=True)
```
## Scatter Plot
You can create scatter plots with `DataFrame.iplot` by passing `kind='scatter'`. Scatter plots require numeric columns for the x and y axes. These can be specified with the *x* and *y* keywords; otherwise the DataFrame index will be used as `x`
```
df = pd.DataFrame(np.random.rand(50, 4), columns=['a', 'b', 'c', 'd'])
df.iplot(kind='scatter',x='a',y='b',mode='markers')
```
Colors can be assigned as either a list or a dictionary by using `color`.
The marker symbol can be defined by using `symbol`
```
df.iplot(kind='scatter',mode='markers',symbol='dot',colors=['orange','teal','blue','yellow'],size=10)
```
Bubble charts can be used with `kind=bubble` and by assigning one column as the `size`
```
df.iplot(kind='bubble',x='a',y='b',size='c')
```
## Scatter Matrix
You can create a scatter plot matrix using the function `scatter_matrix`
```
df = pd.DataFrame(np.random.randn(1000, 4), columns=['a', 'b', 'c', 'd'])
df.scatter_matrix()
```
## Subplots
Subplots can be defined with `subplots=True`. The shape of the output can also be determined with `shape=(rows,cols)`. If omitted, the subplot shape will be automatically defined.
Axes can be shared across plots with `shared_xaxes=True` as well as `shared_yaxes=True`
```
df=cf.datagen.lines(4)
df.iplot(subplots=True,shape=(4,1),shared_xaxes=True,vertical_spacing=.02,fill=True)
```
Subplot titles can be set with `subplot_titles`. If set to `True` then the column names will be used. Otherwise a list of strings can be passed.
```
df.iplot(subplots=True,subplot_titles=True,legend=False)
```
Irregular subplots can also be drawn using `specs`.
For example, to get a chart that spans across 2 rows we can use `specs=[[{'rowspan':2},{}],[None,{}]]`.
For the full set of advanced layout options see `help(cufflinks.subplots)`
```
df=cf.datagen.bubble(10,50,mode='stocks')
figs=cf.figures(df,[dict(kind='histogram',keys='x',color='blue'),
dict(kind='scatter',mode='markers',x='x',y='y',size=5),
dict(kind='scatter',mode='markers',x='x',y='y',size=5,color='teal')],asList=True)
figs.append(cf.datagen.lines(1).figure(bestfit=True,colors=['blue'],bestfit_colors=['pink']))
base_layout=cf.tools.get_base_layout(figs)
sp=cf.subplots(figs,shape=(3,2),base_layout=base_layout,vertical_spacing=.15,horizontal_spacing=.03,
specs=[[{'rowspan':2},{}],[None,{}],[{'colspan':2},None]],
subplot_titles=['Histogram','Scatter 1','Scatter 2','Bestfit Line'])
sp['layout'].update(showlegend=False)
cf.iplot(sp)
```
### Shapes
Lines can be added with `hline` and `vline` for horizontal and vertical lines respectively.
These can be either a list of values (relative to the axis) or a dictionary.
```
df=cf.datagen.lines(3,columns=['a','b','c'])
df.iplot(hline=[2,4],vline=['2015-02-10'])
```
More advanced parameters can be passed in the form of a dictionary, including `width` and `color` and `dash` for the line dash type.
```
df.iplot(hline=[dict(y=-1,color='blue',width=3),dict(y=1,color='pink',dash='dash')])
```
Shaded areas can be plotted using `hspan` and `vspan` for horizontal and vertical areas respectively.
These can be set with a list of paired tuples (v0,v1) or a list of dictionaries with further parameters.
```
df.iplot(hspan=[(-1,1),(2,5)])
```
Extra parameters can be passed in the form of dictionaries, `width`, `fill`, `color`, `fillcolor`, `opacity`
```
df.iplot(vspan={'x0':'2015-02-15','x1':'2015-03-15','color':'teal','fill':True,'opacity':.4})
# Plotting resistance lines
max_vals=df.max().values.tolist()
resistance=[dict(kind='line',y=i,color=j,width=2) for i,j in zip(max_vals,['red','blue','pink'])]
df.iplot(hline=resistance)
```
Different shapes can also be drawn with `shapes` by identifying the `kind`, which can be either *line*, *rect* or *circle*
```
# Get min to max values
df_a=df['a']
max_val=df_a.max()
min_val=df_a.min()
max_date=df_a[df_a==max_val].index[0].strftime('%Y-%m-%d')
min_date=df_a[df_a==min_val].index[0].strftime('%Y-%m-%d')
shape1=dict(kind='line',x0=max_date,y0=max_val,x1=min_date,y1=min_val,color='blue',width=2)
shape2=dict(kind='rect',x0=max_date,x1=min_date,fill=True,color='gray',opacity=.3)
df_a.iplot(shapes=[shape1,shape2])
```
#### Other Shapes
```
x0 = np.random.normal(2, 0.45, 300)
y0 = np.random.normal(2, 0.45, 300)
x1 = np.random.normal(6, 0.4, 200)
y1 = np.random.normal(6, 0.4, 200)
x2 = np.random.normal(4, 0.3, 200)
y2 = np.random.normal(4, 0.3, 200)
distributions = [(x0,y0),(x1,y1),(x2,y2)]
dfs=[pd.DataFrame(dict(x=i,y=j)) for i,j in distributions]
d=cf.Data()
gen=cf.colorgen(scale='ggplot')
for df in dfs:
    d_=df.figure(kind='scatter',mode='markers',x='x',y='y',size=5,colors=next(gen))['data']
for _ in d_:
d.append(_)
gen=cf.colorgen(scale='ggplot')
shapes=[cf.tools.get_shape(kind='circle',x0=min(x),x1=max(x),
                           y0=min(y),y1=max(y),color=next(gen),fill=True,
                           opacity=.3,width=.4) for x,y in distributions]
fig=cf.Figure(data=d)
fig['layout']=cf.getLayout(shapes=shapes,legend=False,title='Distribution Comparison')
cf.iplot(fig,validate=False)
```
# 1. 2D Linear Convection
We consider the 2D linear convection equation with an additional diffusion term, under a constant velocity
$$
\partial_t u + \mathbf{a} \cdot \nabla u - \nu \nabla^2 u = 0
$$
```
# needed imports
from numpy import zeros, ones, linspace, zeros_like
from matplotlib.pyplot import plot, contourf, show, colorbar
%matplotlib inline
# Initial condition
import numpy as np
u0 = lambda x,y: np.exp(-(x-.3)**2/.05**2)*np.exp(-(y-.3)**2/.05**2)
ts = linspace(0., 1., 401)
x,y = np.meshgrid(ts,ts)
u = u0(x,y)
contourf(x,y, u); colorbar() ; show()
```
### Time scheme
$$\frac{u^{n+1}-u^n}{\Delta t} + \mathbf{a} \cdot \nabla u^{n+1} - \nu \nabla^2 u^{n+1} = 0 $$
$$ \left(I + \Delta t \mathbf{a} \cdot \nabla - \nu \nabla^2 \right) u^{n+1} = u^n $$
### Weak formulation
$$
\langle v, u^{n+1} \rangle + \Delta t ~ \langle v, \mathbf{a} \cdot \nabla u^{n+1} \rangle + \nu ~\Delta t~ \langle \nabla v, \nabla u^{n+1} \rangle = \langle v, u^n \rangle
$$
if we assume $\mathbf{a} = \left( a_1, a_2 \right)^T$ is constant and integrate the convection term by parts (the boundary term vanishes for homogeneous boundary conditions), then our weak formulation becomes
$$
\langle v, u^{n+1} \rangle - \Delta t ~ \langle \mathbf{a} \cdot \nabla v , u^{n+1} \rangle + \nu ~ \Delta t~\langle \nabla v, \nabla u^{n+1} \rangle = \langle v, u^n \rangle
$$
expanding the solution over the FEM basis, we get the linear system
$$A U^{n+1} = M U^n$$
where
$$
M_{ij} = \langle b_i, b_j \rangle
$$
$$
A_{ij} = \langle b_i, b_j \rangle - \Delta t ~ \langle \mathbf{a} \cdot \nabla b_i, b_j \rangle + \nu ~\Delta t~ \langle \nabla b_i , \nabla b_j \rangle
$$
## Abstract Model using SymPDE
```
from sympde.core import Constant
from sympde.expr import BilinearForm, LinearForm, integral
from sympde.topology import ScalarFunctionSpace, Square, element_of
from sympde.calculus import grad, dot
from sympy import Tuple
# ... abstract model
domain = Square()
V = ScalarFunctionSpace('V', domain)
x,y = domain.coordinates
u,v = [element_of(V, name=i) for i in ['u', 'v']]
a1 = Constant('a1')
a2 = Constant('a2')
dt = Constant('dt')
nu = Constant('nu')
a = Tuple(a1,a2)
# bilinear form
expr = v*u + dt* dot(a, grad(u))*v + nu*dt*dot(grad(u), grad(v))
a = BilinearForm((u,v), integral(domain , expr))
# bilinear form for the mass matrix
expr = u*v
m = BilinearForm((u,v), integral(domain , expr))
# linear form for initial condition
from sympy import exp
expr = exp(-(x-.3)**2/.05**2)*exp(-(y-.3)**2/.05**2)*v
l = LinearForm(v, integral(domain, expr))
```
## Discretization using Psydac
```
from psydac.api.discretization import discretize
a1 = 1. ; a2 = 0. # wavespeed
nu = .03 # viscosity
T = 0.2 # T final time
dt = 0.001
niter = int(T / dt)
degree = [3,3] # spline degree
ncells = [64,64] # number of elements
# Create computational domain from topological domain
domain_h = discretize(domain, ncells=ncells, comm=None)
# Discrete spaces
Vh = discretize(V, domain_h, degree=degree)
# Discretize the bilinear forms
ah = discretize(a, domain_h, [Vh, Vh])
mh = discretize(m, domain_h, [Vh, Vh])
# Discretize the linear form for the initial condition
lh = discretize(l, domain_h, Vh)
# assemble matrices and convert them to scipy
M = mh.assemble().tosparse()
A = ah.assemble(a1=a1, a2=a2, nu=nu, dt=dt).tosparse()
# assemble the rhs and convert it to numpy array
rhs = lh.assemble().toarray()
from scipy.sparse.linalg import gmres
# L2 projection of the initial condition
un, status = gmres(M, rhs, tol=1.e-8, maxiter=5000)
from utilities.plot import plot_field_2d
nbasis = [W.nbasis for W in Vh.spaces]
plot_field_2d(Vh.knots, Vh.degree, un.reshape(nbasis)) ; colorbar() ; show()
for i in range(0, niter):
b = M.dot(un)
un, status = gmres(A, b, tol=1.e-8, maxiter=5000)
nbasis = [W.nbasis for W in Vh.spaces]
plot_field_2d(Vh.knots, Vh.degree, un.reshape(nbasis)) ; colorbar() ; show()
```
# Time Series Forecasting
## Libraries
```
import os
import re
import sys
import json
import math

import numpy as np
import pandas as pd
import seaborn as sns

import scipy
import scipy as sp
from scipy import optimize
from scipy.special import softmax
from scipy.interpolate import interp1d

import matplotlib.pyplot as plt
from matplotlib import gridspec
from mpl_toolkits import mplot3d
from mpl_toolkits.mplot3d import Axes3D

from sklearn.cluster import AgglomerativeClustering
from sklearn.manifold import MDS
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

from tqdm import tqdm_notebook as tqdm
```
### Helper functions
```
def RandomFunction(x, n=2):
N = np.arange(1, n + 1, 1)
A = np.random.randn(n)
B = np.random.randn(n)
A0 = np.random.randn(1)
y = 0.5*np.ones_like(x)*A0
for n, a, b in zip(N, A, B):
y += a*np.sin(n*x) + b*np.cos(n*x)
return y
def GenerateImpulses(n = 20, T = 2, k = 2, function = np.sin):
t = int(T)//2
x = np.linspace(start = 0, stop = T*np.pi, num = n)
List_y = []
for i in range(k):
y_temp = 5*np.random.randn()*function(x + np.random.rand()*2*np.pi)
List_y.append(y_temp)
y = np.array(List_y[0])
y2 = List_y[np.random.randint(0, k)]
for i in range(0, t):
if np.random.rand() < 0.1:
y2 = List_y[np.random.randint(0, k)]
ind = np.where(x <= 2*(i + 1)*np.pi)
ind = np.where(x[ind] > 2*i*np.pi)
y[ind] = y2[ind]
return y
def GeneratorOfTimeSeries(n = 100, m = 16384, k = 20):
T1 = []
T2 = []
T3 = []
for _ in range(m):
numPi = 80 + np.random.randint(0, 20)
numPi = n//k
function = np.sin
if np.random.rand() < -4*0.5:
function = RandomFunction
series = GenerateImpulses(n = n, T = numPi, k = np.random.randint(K, K+1), function=function)
T1.append(series + 0.5*np.random.randn(n))
T1 = np.asarray(T1)
return np.reshape(T1, [T1.shape[0], T1.shape[1], 1])
def return_h(input, i, l = 10):
return np.sum(input[:, i:i+l, :], axis = -1)
def return_phase_track(input, l = 10):
"""
input has a shape [batch_size, time_len, 1]
"""
phase_track = np.zeros([input.shape[0], input.shape[1] - l, l])
for i in range(0, input.shape[1] - l):
phase_track[:, i, :] = return_h(input, i, l)
return phase_track[0]
def local_basis(phase_track, m = 2, T = 20):
result_pca_1 = phase_track
List_of_basis_vector = []
List_of_basis_vector_s = []
List_of_basis_vector_c = []
model_pca = PCA(n_components=2)
for n in range(T, result_pca_1.shape[0] - T, 1):
if n-T >- 0:
arr = result_pca_1[n-T:n+T]
else:
arr = result_pca_1[:n]
model_pca_answ = model_pca.fit_transform(arr)
List_of_basis_vector_s.append(model_pca.singular_values_)
List_of_basis_vector_c.append(model_pca_answ[-1])
List_of_basis_vector.append(model_pca.components_)
List_of_basis_vector = np.array(List_of_basis_vector)
List_of_basis_vector_s = np.array(List_of_basis_vector_s)
List_of_basis_vector_c = np.array(List_of_basis_vector_c)
return List_of_basis_vector, List_of_basis_vector_s, List_of_basis_vector_c
def get_pairwise_matrix(List_of_basis_vector, List_of_basis_vector_s, List_of_basis_vector_c):
Volum = np.zeros([2, List_of_basis_vector.shape[0], List_of_basis_vector.shape[0]])
cos_beta = np.abs(List_of_basis_vector[:, 0, :]@List_of_basis_vector[:, 1, :].T)
cos_alpha = np.array(np.diagonal(cos_beta))
cos_gamma = np.abs(List_of_basis_vector[:, 1, :]@List_of_basis_vector[:, 1, :].T)
cos_beta[np.where(cos_beta > 1-10**(-10))] = 1-10**(-10)
cos_alpha[np.where(cos_alpha > 1-10**(-10))] = 1-10**(-10)
cos_gamma[np.where(cos_gamma > 1-10**(-10))] = 1-10**(-10)
cos_beta[np.where(cos_beta < 10**(-10))] = 0
cos_alpha[np.where(cos_alpha < 10**(-10))] = 0
cos_gamma[np.where(cos_gamma < 10**(-10))] = 0
temp_a = np.sqrt(1-cos_beta**2)
cos_A = np.abs((cos_alpha.reshape([-1,1]) - cos_gamma*cos_beta)/(np.sqrt(1-cos_gamma**2)*np.sqrt(1-cos_beta**2)))
h = temp_a*np.sqrt(1-cos_A**2)
Volum[0] = h* np.sqrt(1-cos_gamma**2)
cos_beta = np.abs(List_of_basis_vector[:, 0, :]@List_of_basis_vector[:, 0, :].T)
cos_gamma = np.abs(List_of_basis_vector[:, 1, :]@List_of_basis_vector[:, 0, :].T)
cos_alpha = np.array(np.diagonal(cos_gamma))
cos_beta[np.where(cos_beta > 1-10**(-10))] = 1-10**(-10)
cos_alpha[np.where(cos_alpha > 1-10**(-10))] = 1-10**(-10)
cos_gamma[np.where(cos_gamma > 1-10**(-10))] = 1-10**(-10)
cos_beta[np.where(cos_beta < 10**(-10))] = 0
cos_alpha[np.where(cos_alpha < 10**(-10))] = 0
cos_gamma[np.where(cos_gamma < 10**(-10))] = 0
temp_a = np.sqrt(1-cos_beta**2)
cos_A = (cos_alpha.reshape([-1,1]) - cos_gamma*cos_beta)/(np.sqrt(1-cos_gamma**2)*np.sqrt(1-cos_beta**2))
h = temp_a*np.sqrt(1-cos_A**2)
Volum[1] = h* np.sqrt(1-cos_gamma**2)
vol = np.max(Volum, axis = 0)
for i in range(vol.shape[0]):
for j in range(vol.shape[0]):
vol[i,j] = max(vol[i,j], vol[j,i])
dist = np.sqrt((List_of_basis_vector_s[:, :1] - List_of_basis_vector_s[:, :1].T)**2 + (List_of_basis_vector_s[:, 1:2] - List_of_basis_vector_s[:, 1:2].T)**2)
dist = dist/np.max(dist)
full_dist = np.sqrt(vol**2+dist**2)
return full_dist
def find_points(points, line_point):
"""
points have a shape [N x 2]
line_point has a shape [2 x 1]
"""
List_of_points_plus = []
List_of_points_minus = []
List_of_t_plus = []
List_of_t_minus = []
for i in range(len(points) - 1):
if (line_point[1]*points[i][0] - line_point[0]*points[i][1] < 0) and(line_point[1]*points[i+1][0] - line_point[0]*points[i+1][1] > 0):
List_of_points_plus.append(points[i])
List_of_t_plus.append(i)
if (line_point[1]*points[i][0] - line_point[0]*points[i][1] > 0) and(line_point[1]*points[i+1][0] - line_point[0]*points[i+1][1] < 0):
List_of_points_minus.append(points[i])
List_of_t_minus.append(i)
return np.array(List_of_points_plus), np.array(List_of_points_minus), np.array(List_of_t_plus), np.array(List_of_t_minus)
def find_distance(points, line_point):
"""
points have a shape [N x 2]
line_point has a shape [2 x 1]
"""
sum_distance = 0
normal = np.array([line_point[1], -line_point[0]])
normal = normal/np.sqrt((normal*normal).sum())
for p in points:
sum_distance += ((normal*p).sum())
return sum_distance
def find_segment(X, T):
phase_track = return_phase_track(X, T)
model = PCA(n_components=2)
ress = model.fit_transform(phase_track)
ress[:, 0] = ress[:, 0]/np.sqrt(((ress[:, 0]**2).mean()))
ress[:, 1] = ress[:, 1]/np.sqrt(((ress[:, 1]**2).mean()))
Phi = np.linspace(-np.pi, np.pi, 200)
All_List = np.array(list(map(lambda phi: find_points(ress, np.array([np.sin(phi), np.cos(phi)])), Phi)))
List_of_std = []
for l, phi in zip(All_List, Phi):
List_of_std.append(find_distance(np.vstack([l[0], l[1]]), np.array([np.sin(phi), np.cos(phi)])))
List_of_std = np.array(List_of_std)
phi = Phi[np.argmin(List_of_std)]
line_point = np.array([np.sin(phi), np.cos(phi)])
List_of_points_plus, List_of_points_minus, List_of_t_plus, List_of_t_minus = find_points(ress, line_point)
return List_of_points_plus, List_of_points_minus, List_of_t_plus, List_of_t_minus, line_point, ress
def segmentation(X_all, prediction_vector, T):
List_of_point = []
List_of_All = []
for t in np.unique(prediction_vector):
ind = np.where(prediction_vector == t)[0]
X = X_all[:, ind, :]
List_of_t = np.arange(0, X.shape[1], 1)
List_of_points_plus, List_of_points_minus, List_of_t_plus, List_of_t_minus, line_point, ress = find_segment(X, T)
List_of_All.append([X, List_of_t, List_of_points_plus, List_of_points_minus, List_of_t_plus, List_of_t_minus, line_point, ress])
List_of_point.append((np.where(prediction_vector == t)[0])[List_of_t_minus])
return List_of_All, List_of_point
def normalizer(x, t, n = None):
if n == None:
t_new = np.arange(np.min(t), np.max(t), 0.01)
else:
t_new = np.linspace(np.min(t), np.max(t), n, endpoint=True)
f = interp1d(t, x, kind='cubic')
return f(t_new)
def sort_prediction(prediction_vector):
prediction_vector += 1000
iterator = 0
need = np.where(prediction_vector >= 1000)[0]
while len(need) > 0:
prediction_vector[np.where(prediction_vector == prediction_vector[need[0]])] = iterator
iterator += 1
need = np.where(prediction_vector >= 1000)[0]
return prediction_vector
```
## Autoregression
A time series is given:
$$
\mathbf{y} = [y_0, y_1, \cdots, y_t],
$$
where $t$ is the number of known points of the time series.
We want to build a predictive model:
$$
\hat{y}_{t+d} = f_{t, d}\bigl(\mathbf{y}, \mathbf{w}\bigr)
$$
The simplest solution is a linear autoregressive model:
$$
\hat{y}_{t+1}\bigl(\mathbf{w}\bigr) = \sum_{j=0}^{n-1}w_jy_{t-j}
$$
What are the advantages? It is easy to solve! It is a plain linear regression problem, with the feature matrix:
$$
\mathbf{X} = \begin{bmatrix}
y_{t-1} & \cdots & y_{t-n}\\
\cdots & \cdots & \cdots\\
y_{n-1} & \cdots & y_{0}\\
\end{bmatrix}
$$
and the target variable:
$$
\mathbf{y} = \begin{bmatrix}
y_{t}\\
\cdots\\
y_{n}\\
\end{bmatrix}
$$
This is the classical optimization problem:
$$
||\mathbf{X}\mathbf{w} - \mathbf{y}|| \to \min_{\mathbf{w}}
$$
```
fig = plt.figure(figsize=(20,4))
np.random.seed(0)
points = np.arange(400)
series = RandomFunction(points, n=100)
plt.plot(series, '-o')
plt.show()
n = 100
X = np.zeros(shape=(len(series)-n, n))
y = np.zeros(shape=(len(series)-n, 1))
y = series[n:len(series)]
for j in range(len(series)-n):
X[j] = series[j:j+n]
X_train = X[:-100]
y_train = y[:-100]
X_test = X[-100:]
y_test = y[-100:]
w = np.linalg.inv(X_train.T@X_train)@X_train.T@y_train
fig = plt.figure(figsize=(20,4))
plt.plot(points, series, '-.')
plt.plot(points[n:len(series)-100], X_train@w, '-o')
plt.plot(points[-100:], X_test@w, '-o')
plt.show()
```
#### Experiment: varying $n$
```
for n in [30, 50, 100, 150, 200]:
X = np.zeros(shape=(len(series)-n, n))
y = np.zeros(shape=(len(series)-n, 1))
y = series[n:len(series)]
for j in range(len(series)-n):
X[j] = series[j:j+n]
X_train = X[:-100]
y_train = y[:-100]
X_test = X[-100:]
y_test = y[-100:]
w = np.linalg.inv(X_train.T@X_train)@X_train.T@y_train
fig = plt.figure(figsize=(20,4))
plt.plot(points, series, '-.')
plt.plot(points[n:len(series)-100], X_train@w, '-o')
plt.plot(points[-100:], X_test@w, '-o')
plt.title('$n={}$'.format(n))
plt.show()
```
Hmm, why does this happen? The answer is multicollinearity!
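As an illustrative check (not part of the original notebook), the condition number of $\mathbf{X}^\top\mathbf{X}$ shows how ill-conditioned the normal equations become for large $n$; a least-squares solver such as `np.linalg.lstsq` is a more stable way to fit the same model:
```
n = 200
X = np.zeros(shape=(len(series) - n, n))
for j in range(len(series) - n):
    X[j] = series[j:j + n]
# A huge condition number means the normal-equation solution is numerically unstable
print(np.linalg.cond(X.T @ X))
# A least-squares solver handles the near rank deficiency more gracefully
w, *_ = np.linalg.lstsq(X, series[n:len(series)], rcond=None)
```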
## Exponential smoothing
The model:
$$
\hat{y}_{t+1} = \hat{y}_{t} + \alpha_t\left(y_t - \hat{y}_t\right).
$$
```
np.random.seed(0)
points = np.arange(400)
series = RandomFunction(points, n=100)
fig = plt.figure(figsize=(20,4))
plt.plot(series, '-o')
plt.show()
series_hat = [series[0]]
alpha = 0.3
for t in range(len(series)):
series_hat.append(series_hat[t] + alpha*(series[t]- series_hat[t]))
fig = plt.figure(figsize=(20,4))
plt.plot(series, '-')
plt.plot(series_hat, '-o')
plt.show()
for alpha in [0.1, 0.3, 0.5, 0.7, 1.0]:
series_hat = [series[0]]
for t in range(len(series)):
series_hat.append(series_hat[t] + alpha*(series[t]- series_hat[t]))
fig = plt.figure(figsize=(20,4))
plt.plot(series, '-')
plt.plot(series_hat, '-o')
    plt.title(r'$\alpha={}$'.format(alpha))
plt.show()
```
## Time series clustering
**Grabovoy A.V., Strijov V.V. Quasi-periodic time series clustering for human activity recognition // Lobachevskii Journal of Mathematics, 2020, 41 : 333-339**
A time series is given
$$
\textbf{x} \in \mathbb{R}^{N},
$$
where $N$ is the number of points of the time series. It consists of a sequence of segments:
$$
\textbf{x} = [\textbf{v}_1, \textbf{v}_2, \cdots, \textbf{v}_M],
$$
where $\textbf{v}_i$ is a segment from the set of segments $\mathbf{V}$ that occur in this series.
Moreover, for every $i$ either $[\textbf{v}_{i-1},\textbf{v}_{i}]$ or $[\textbf{v}_{i},\textbf{v}_{i+1}]$ forms a chain of actions. Let the set $\mathbf{V}$ satisfy the following properties:
$$
\left|\mathbf{V}\right| = K, \quad \textbf{v} \in \mathbf{V}~\left|\textbf{v}\right| \leq T,
$$
where $\left|\mathbf{V}\right|$ is the number of distinct actions in the set of segments $\mathbf{V}$, $\left|\textbf{v}\right|$ is the length of a segment, and $K$ and $T$ are the number of distinct actions in the time series and the maximal segment length, respectively.
Consider the mapping
$$
a : t \to \mathbb{Y} = \{1,\cdots, K\},
$$
where $t \in \{1,\cdots, N\}$ is a time moment at which the time series is defined.
The mapping $a$ is required to satisfy the following properties:
$$
\begin{cases}
a\left(t_1\right) = a\left(t_2\right), & \text{if the same type of action is performed at times } t_1, t_2\\
a\left(t_1\right) \not= a\left(t_2\right), & \text{if different types of actions are performed at times } t_1, t_2
\end{cases}
$$
### Visualizing the main idea
```
data = pd.read_csv('https://raw.githubusercontent.com/andriygav/TimeSeriesClustering/master/code/SyntheticData/2_patern/1.csv')
X_intro = (data.values[1150:1600]).reshape([1,-1,1])
List_of_point = [np.array([15, 54, 95, 135, 175]), np.array([219, 259, 299, 339, 379, 419])]
phase_track_intro = return_phase_track(X_intro[:, 0:100, :], 20)
model = PCA(n_components=2)
basis_a = model.fit(phase_track_intro).components_
res_a = model.transform(phase_track_intro)
phase_track_intro = return_phase_track(X_intro[:, 300:400, :], 20)
model = PCA(n_components=2)
basis_b = model.fit(phase_track_intro).components_
res_b = model.transform(phase_track_intro)
alpha_1 = (basis_a[0]*basis_b[0]).sum()
alpha_2 = (basis_a[1]*basis_b[0]).sum()
beta_1 = (basis_a[0]*basis_b[1]).sum()
beta_2 = (basis_a[1]*basis_b[1]).sum()
a_1 = np.array([1, 0, 0])
a_2 = np.array([0, 1, 0])
b_1 = np.array([alpha_1, alpha_2, np.sqrt(1- alpha_1**2- alpha_2**2)])
b_1 = b_1/np.sqrt((b_1**2).sum())
b_2 = np.array([beta_1, beta_2, (-alpha_1*beta_1-alpha_2*beta_2)/np.sqrt(1- alpha_1**2- alpha_2**2)])
b_2 = b_2/np.sqrt((b_2**2).sum())
normal_1 = np.array([0,0,1])
normal_2 = np.cross(b_1, b_2)
point = np.array([1, 1, 1])
ress_a = res_a[:,0].reshape([-1,1])*a_1.reshape([1,-1]) + res_a[:,1].reshape([-1,1])*a_2.reshape([1,-1])
ress_b = res_b[:,0].reshape([-1,1])*b_1.reshape([1,-1]) + res_b[:,1].reshape([-1,1])*b_2.reshape([1,-1])
fig = plt.figure(figsize=(10,10));
marker = ['^', 's', 'v', 'D', 'P']
color = ['orange', 'green', 'red', 'yellow', 'blue']
gs = gridspec.GridSpec(1, 1)
ax = fig.add_subplot(gs[0], projection='3d');
xx, yy = np.meshgrid(range(-100, 100), range(-100, 100))
z_1 = (-normal_1[0] * xx - normal_1[1] * yy + point.dot(normal_1)) * 1./normal_1[2]
ax.plot_surface(xx, yy, z_1, alpha = 0.2, color = color[0])
ax.plot(ress_a[:,0] , ress_a[:,1] , ress_a[:,2], "-", marker = marker[0], color=color[0], label = 'Phase trajectory for type 1')
z_2 = (-normal_2[0] * xx - normal_2[1] * yy +point.dot(normal_2)) * 1./normal_2[2]
ax.plot_surface(xx, yy, z_2, alpha = 0.2, color = color[1])
ax.plot(ress_b[:, 0] , ress_b[:, 1] , ress_b[:, 2], "-", marker = marker[1], color=color[1], label = 'Phase trajectory for type 2')
ax.view_init(15, -50)
ax.legend(loc = 'best')
ax.xaxis.set_ticks(np.arange(-100, 101, 50))
ax.yaxis.set_ticks(np.arange(-100, 101, 50))
ax.zaxis.set_ticks(np.arange(-40, 41, 20))
ax.set_title('(b)', y=-0.25)
plt.subplots_adjust(wspace=0.05, hspace=0.2)
plt.show()
```
### Segmentation experiment
```
data = pd.read_csv('https://raw.githubusercontent.com/andriygav/TimeSeriesClustering/master/code/RealData/2.csv')
T = 40
K = 2
X_test = data.values[100:1000, 2:3].reshape([1,-1,1])
List_of_x = np.arange(T, X_test[0].shape[0] - 2*T)
plt.figure(figsize=(12, 6))
_ = plt.plot(X_test[0], '-o')
plt.xlabel('Time $t$, $sec$')
plt.ylabel('Acceleration $x$, $m/sec^2$')
plt.grid()
plt.show()
phase_track = return_phase_track(X_test, T)
List_of_basis_vector, List_of_basis_vector_s, List_of_basis_vector_c = local_basis(phase_track, T = T)
M_pairwise = get_pairwise_matrix(List_of_basis_vector, List_of_basis_vector_s, List_of_basis_vector_c)
_ = plt.imshow(M_pairwise)
_ = plt.colorbar()
plt.xlabel('Time $t$, $sec$')
plt.ylabel('Time $t$, $sec$')
plt.show()
model = AgglomerativeClustering(n_clusters=K, affinity='precomputed', linkage='complete')
fitted = model.fit(M_pairwise)
prediction_vector = fitted.fit_predict(M_pairwise)
color = ['orange', 'green', 'red', 'yellow', 'blue']
plt.figure(figsize=(12, 6))
_ = plt.plot(X_test[0], '-')
for t in np.unique(prediction_vector):
ind = np.where(prediction_vector == t)
_ = plt.plot(List_of_x[ind]+T, X_test[0][2*T:X_test[0].shape[0]-T][ind], 'o', color = color[t], label = 'Type ' + str(t + 1))
plt.grid()
plt.legend(loc = 'best')
plt.xlabel('Time $t$, $sec$')
plt.ylabel('Acceleration $x$, $m/sec^2$')
plt.show()
List_of_All, List_of_point = segmentation(X_test[:, 2*T:X_test[0].shape[0]-T, :], prediction_vector, T)
color = ['orange', 'green', 'red', 'yellow', 'blue']
plt.figure(figsize=(12, 6))
_ = plt.plot(X_test[0], '-')
for t in np.unique(prediction_vector):
# _ = plt.plot(List_of_x[0] + T, 0, color = color[t], label = 'Type ' + str(t + 1))
ind = List_of_point[t] + T
for x in (List_of_x + T)[ind]:
_ = plt.axvline(x = x, color = color[t])
for t in np.unique(prediction_vector):
ind = np.where(prediction_vector == t)
_ = plt.plot(List_of_x[ind]+T, X_test[0][2*T:X_test[0].shape[0]-T][ind], 'o', color = color[t], label = 'Type ' + str(t + 1))
plt.grid()
plt.legend(loc = 'best')
plt.xlabel('Time $t$, $sec$')
plt.ylabel('Acceleration $x$, $m/sec^2$')
plt.show()
index = 1
_, _, List_of_points_plus, List_of_points_minus, List_of_t_plus, List_of_t_minus, line_point, ress = List_of_All[index]
_ = plt.plot(ress[:, 0], ress[:, 1], '-o', color = 'blue')
for point in List_of_points_plus:
_ = plt.plot(point[0], point[1], '*', color = 'orange')
for point in List_of_points_minus:
_ = plt.plot(point[0], point[1], '*', color = 'red')
x_line = np.array([-0.25, 0.25])
k = line_point[1]/line_point[0]
y_line = k*x_line
_ = plt.plot(x_line, y_line, '--', color = 'black')
plt.show()
```
## LSTM for predicting time series points
```
class TimeSeriesPrediction(torch.nn.Module):
def __init__(self, emb_dim=2):
super(TimeSeriesPrediction, self).__init__()
self.lstm = torch.nn.LSTM(input_size=1,
hidden_size=emb_dim,
num_layers=1,
batch_first=True)
self.linear = torch.nn.Linear(emb_dim, 1)
def forward(self, x_batch, hidden=None):
r'''
        :param x_batch: tensor of shape batch_size x 1 x 1 (time step t)
        :return: tensor of shape batch_size x 1 x 1 (time step t+1)
'''
if hidden is None:
act, (h, c) = self.lstm(x_batch)
else:
act, (h, c) = self.lstm(x_batch, hidden)
return self.linear(act), (h, c)
data = torch.randn(8, 1, 1)
model = TimeSeriesPrediction()
res, (h, c) = model(data)
series = GeneratorOfTimeSeries()
X_train = torch.tensor(series).float()
loss_function = torch.nn.MSELoss()
optimiser = torch.optim.Adam(model.parameters())
for i in tqdm(range(len(X_train))):
optimiser.zero_grad()
x_batch_seq = X_train[i:i+64]
y_batch_seq = torch.zeros_like(x_batch_seq)
hidden = None
for j in range(x_batch_seq.shape[1]):
y_batch_seq[:, j:j+1, :], hidden = model(x_batch_seq[:, j:j+1, :], hidden)
loss = loss_function(y_batch_seq[:, :-1, :], x_batch_seq[:, 1:, :])
loss.backward()
optimiser.step()
```
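As an illustrative follow-up (not part of the original notebook), the trained model can be rolled out autoregressively: warm up the hidden state on a known series, then feed the model its own predictions to forecast several steps ahead:
```
with torch.no_grad():
    seq = X_train[0:1]                 # one series, shape [1, time_len, 1]
    hidden = None
    # warm up the hidden state on the observed part of the series
    for j in range(seq.shape[1]):
        out, hidden = model(seq[:, j:j+1, :], hidden)
    # forecast 50 steps ahead by feeding predictions back in
    preds = []
    x_t = out
    for _ in range(50):
        x_t, hidden = model(x_t, hidden)
        preds.append(x_t.item())
plt.plot(preds, '-o')
plt.show()
```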
- [Lab 1: Principal Component Analysis](#Lab-1:-Principal-Component-Analysis)
- [Lab 2: K-Means Clustering](#Lab-2:-Clustering)
- [Lab 2: Hierarchical Clustering](#10.5.3-Hierarchical-Clustering)
- [Lab 3: NCI60 Data Example](#Lab-3:-NCI60-Data-Example)
# Chapter 10 - Unsupervised Learning
```
# %load ../standard_import.txt
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import scale
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from scipy.cluster import hierarchy
pd.set_option('display.notebook_repr_html', False)
%matplotlib inline
plt.style.use('seaborn-white')
```
## Lab 1: Principal Component Analysis
```
# In R, I exported the dataset to a csv file. It is part of the base R distribution.
df = pd.read_csv('Data/USArrests.csv', index_col=0)
df.info()
df.mean()
df.var()
X = pd.DataFrame(scale(df), index=df.index, columns=df.columns)
# The loading vectors
pca_loadings = pd.DataFrame(PCA().fit(X).components_.T, index=df.columns, columns=['V1', 'V2', 'V3', 'V4'])
pca_loadings
# Fit the PCA model and transform X to get the principal components
pca = PCA()
df_plot = pd.DataFrame(pca.fit_transform(X), columns=['PC1', 'PC2', 'PC3', 'PC4'], index=X.index)
df_plot
fig , ax1 = plt.subplots(figsize=(9,7))
ax1.set_xlim(-3.5,3.5)
ax1.set_ylim(-3.5,3.5)
# Plot Principal Components 1 and 2
for i in df_plot.index:
ax1.annotate(i, (-df_plot.PC1.loc[i], -df_plot.PC2.loc[i]), ha='center')
# Plot reference lines
ax1.hlines(0,-3.5,3.5, linestyles='dotted', colors='grey')
ax1.vlines(0,-3.5,3.5, linestyles='dotted', colors='grey')
ax1.set_xlabel('First Principal Component')
ax1.set_ylabel('Second Principal Component')
# Plot Principal Component loading vectors, using a second y-axis.
ax2 = ax1.twinx().twiny()
ax2.set_ylim(-1,1)
ax2.set_xlim(-1,1)
ax2.tick_params(axis='y', colors='orange')
ax2.set_xlabel('Principal Component loading vectors', color='orange')
# Plot labels for vectors. Variable 'a' is a small offset parameter to separate arrow tip and text.
a = 1.07
for i in pca_loadings[['V1', 'V2']].index:
ax2.annotate(i, (-pca_loadings.V1.loc[i]*a, -pca_loadings.V2.loc[i]*a), color='orange')
# Plot vectors
ax2.arrow(0,0,-pca_loadings.V1[0], -pca_loadings.V2[0])
ax2.arrow(0,0,-pca_loadings.V1[1], -pca_loadings.V2[1])
ax2.arrow(0,0,-pca_loadings.V1[2], -pca_loadings.V2[2])
ax2.arrow(0,0,-pca_loadings.V1[3], -pca_loadings.V2[3]);
# Standard deviation of the four principal components
np.sqrt(pca.explained_variance_)
pca.explained_variance_
pca.explained_variance_ratio_
plt.figure(figsize=(7,5))
plt.plot([1,2,3,4], pca.explained_variance_ratio_, '-o', label='Individual component')
plt.plot([1,2,3,4], np.cumsum(pca.explained_variance_ratio_), '-s', label='Cumulative')
plt.ylabel('Proportion of Variance Explained')
plt.xlabel('Principal Component')
plt.xlim(0.75,4.25)
plt.ylim(0,1.05)
plt.xticks([1,2,3,4])
plt.legend(loc=2);
```
## Lab 2: Clustering
### 10.5.1 K-Means Clustering
```
# Generate data
np.random.seed(2)
X = np.random.standard_normal((50,2))
X[:25,0] = X[:25,0]+3
X[:25,1] = X[:25,1]-4
```
#### K = 2
```
km1 = KMeans(n_clusters=2, n_init=20)
km1.fit(X)
km1.labels_
```
See plot for K=2 below.
#### K = 3
```
np.random.seed(4)
km2 = KMeans(n_clusters=3, n_init=20)
km2.fit(X)
pd.Series(km2.labels_).value_counts()
km2.cluster_centers_
km2.labels_
# Sum of distances of samples to their closest cluster center.
km2.inertia_
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(14,5))
ax1.scatter(X[:,0], X[:,1], s=40, c=km1.labels_, cmap=plt.cm.prism)
ax1.set_title('K-Means Clustering Results with K=2')
ax1.scatter(km1.cluster_centers_[:,0], km1.cluster_centers_[:,1], marker='+', s=100, c='k', linewidth=2)
ax2.scatter(X[:,0], X[:,1], s=40, c=km2.labels_, cmap=plt.cm.prism)
ax2.set_title('K-Means Clustering Results with K=3')
ax2.scatter(km2.cluster_centers_[:,0], km2.cluster_centers_[:,1], marker='+', s=100, c='k', linewidth=2);
```
### 10.5.3 Hierarchical Clustering
#### scipy
```
fig, (ax1,ax2,ax3) = plt.subplots(3,1, figsize=(15,18))
for linkage, cluster, ax in zip([hierarchy.complete(X), hierarchy.average(X), hierarchy.single(X)], ['c1','c2','c3'],
[ax1,ax2,ax3]):
cluster = hierarchy.dendrogram(linkage, ax=ax, color_threshold=0)
ax1.set_title('Complete Linkage')
ax2.set_title('Average Linkage')
ax3.set_title('Single Linkage');
```
## Lab 3: NCI60 Data Example
### § 10.6.1 PCA
```
# In R, I exported the two elements of this ISLR dataset to csv files.
# There is one file for the features and another file for the classes/types.
df2 = pd.read_csv('Data/NCI60_X.csv').drop('Unnamed: 0', axis=1)
df2.columns = np.arange(df2.columns.size)
df2.info()
X = pd.DataFrame(scale(df2))
X.shape
y = pd.read_csv('Data/NCI60_y.csv', usecols=[1], skiprows=1, names=['type'])
y.shape
y.type.value_counts()
# Fit the PCA model and transform X to get the principal components
pca2 = PCA()
df2_plot = pd.DataFrame(pca2.fit_transform(X))
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(15,6))
color_idx = pd.factorize(y.type)[0]
cmap = plt.cm.hsv
# Left plot
ax1.scatter(df2_plot.iloc[:,0], df2_plot.iloc[:,1], c=color_idx, cmap=cmap, alpha=0.5, s=50)
ax1.set_ylabel('Principal Component 2')
# Right plot
ax2.scatter(df2_plot.iloc[:,0], df2_plot.iloc[:,2], c=color_idx, cmap=cmap, alpha=0.5, s=50)
ax2.set_ylabel('Principal Component 3')
# Custom legend for the classes (y) since we do not create scatter plots per class (which could have their own labels).
handles = []
labels = pd.factorize(y.type.unique())
norm = mpl.colors.Normalize(vmin=0.0, vmax=14.0)
for i, v in zip(labels[0], labels[1]):
handles.append(mpl.patches.Patch(color=cmap(norm(i)), label=v, alpha=0.5))
ax2.legend(handles=handles, bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
# xlabel for both plots
for ax in fig.axes:
ax.set_xlabel('Principal Component 1')
pd.DataFrame([df2_plot.iloc[:,:5].std(axis=0, ddof=0).values,
pca2.explained_variance_ratio_[:5],
np.cumsum(pca2.explained_variance_ratio_[:5])],
index=['Standard Deviation', 'Proportion of Variance', 'Cumulative Proportion'],
columns=['PC1', 'PC2', 'PC3', 'PC4', 'PC5'])
df2_plot.iloc[:,:10].var(axis=0, ddof=0).plot(kind='bar', rot=0)
plt.ylabel('Variances');
fig , (ax1,ax2) = plt.subplots(1,2, figsize=(15,5))
# Left plot
ax1.plot(pca2.explained_variance_ratio_, '-o')
ax1.set_ylabel('Proportion of Variance Explained')
ax1.set_ylim(ymin=-0.01)
# Right plot
ax2.plot(np.cumsum(pca2.explained_variance_ratio_), '-ro')
ax2.set_ylabel('Cumulative Proportion of Variance Explained')
ax2.set_ylim(ymax=1.05)
for ax in fig.axes:
ax.set_xlabel('Principal Component')
ax.set_xlim(-1,65)
```
### § 10.6.2 Clustering
```
X= pd.DataFrame(scale(df2), index=y.type, columns=df2.columns)
fig, (ax1,ax2,ax3) = plt.subplots(1,3, figsize=(20,20))
for linkage, cluster, ax in zip([hierarchy.complete(X), hierarchy.average(X), hierarchy.single(X)],
['c1','c2','c3'],
[ax1,ax2,ax3]):
cluster = hierarchy.dendrogram(linkage, labels=X.index, orientation='right', color_threshold=0, leaf_font_size=10, ax=ax)
ax1.set_title('Complete Linkage')
ax2.set_title('Average Linkage')
ax3.set_title('Single Linkage');
plt.figure(figsize=(10,20))
cut4 = hierarchy.dendrogram(hierarchy.complete(X),
labels=X.index, orientation='right', color_threshold=140, leaf_font_size=10)
plt.vlines(140,0,plt.gca().yaxis.get_data_interval()[1], colors='r', linestyles='dashed');
```
##### KMeans
```
np.random.seed(2)
km4 = KMeans(n_clusters=4, n_init=50)
km4.fit(X)
km4.labels_
# Observations per KMeans cluster
pd.Series(km4.labels_).value_counts().sort_index()
```
##### Hierarchical
```
# Observations per Hierarchical cluster
cut4b = hierarchy.dendrogram(hierarchy.complete(X), truncate_mode='lastp', p=4, show_leaf_counts=True)
# Hierarchy based on Principal Components 1 to 5
plt.figure(figsize=(10,20))
pca_cluster = hierarchy.dendrogram(hierarchy.complete(df2_plot.iloc[:,:5]), labels=y.type.values, orientation='right', color_threshold=100, leaf_font_size=10)
cut4c = hierarchy.dendrogram(hierarchy.complete(df2_plot), truncate_mode='lastp', p=4,
show_leaf_counts=True)
# See also color coding in plot above.
```
# Analysing model capacity
Author: Alexandre Gramfort, based on materials from Jake Vanderplas
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
```
The issues associated with validation and
cross-validation are some of the most important
aspects of the practice of machine learning. Selecting the optimal model
for your data is vital, and is a piece of the problem that is not often
appreciated by machine learning practitioners.
Of core importance is the following question:
**If our estimator is underperforming, how should we move forward?**
- Use simpler or more complicated model?
- Add more features to each observed data point?
- Add more training samples?
The answer is often counter-intuitive. In particular, **sometimes using a
more complicated model will give _worse_ results.** Also, **sometimes adding
training data will not improve your results.** The ability to determine
what steps will improve your model is what separates the successful machine
learning practitioners from the unsuccessful.
## Learning Curves and Validation Curves
One way to address this issue is to use what are often called **Learning Curves**.
The idea is to test the performance of your prediction pipeline as a function of the number of training samples.
To get a "robust" estimate of the model performance, the performance is evaluated multiple times over multiple random splits of the data.
What the right model for a dataset is depends critically on **how much data we have**. More data allows us to be more confident about building a complex model. Let's build some intuition about why that is. Look at the following datasets:
```
from sklearn import model_selection
rng = np.random.RandomState(0)
n_samples = 100
def f(X):
return X ** 3
X = np.sort(2. * (rng.rand(n_samples) - .5))
y = f(X) + .01 * rng.randn(n_samples)
X = X[:, None]
y = y
def plot(clf=None):
fig, axarr = plt.subplots(1, 3, figsize=(12, 4))
for ax, decim in zip(axarr, [45, 10, 1]):
ax.scatter(X[::decim], y[::decim])
ax.set_xlim((-3 * .5, 3 * .5))
ax.set_ylim((-1, 1))
xx = np.linspace(X.min(), X.max(), 100)
ax.plot(xx, f(xx), 'r--')
if clf is not None:
xx = np.linspace(-1.5, 1.5, 100)
clf.fit(X[::decim], y[::decim])
y_pred = clf.predict(xx[:, None])
ax.plot(xx, y_pred, 'g')
plt.show()
plot()
```
They all come from the same underlying process. But if you were asked to make a prediction, you would be more likely to draw a straight line for the left-most one, as there are only very few datapoints, and no real rule is apparent. For the dataset in the middle, some structure is recognizable, though the exact shape of the true function is maybe not obvious. With even more data on the right hand side, you would probably be very comfortable with drawing a curved line with a lot of certainty.
```
from sklearn.linear_model import LinearRegression
plot(LinearRegression())
from sklearn.svm import SVR
plot(SVR(kernel='rbf', gamma='auto'))
```
A great way to explore how a model fit evolves with different dataset sizes are learning curves.
A learning curve plots the validation error for a given model against different training set sizes.
But first, take a moment to think about what we're going to see:
**Questions:**
- **As the number of training samples are increased, what do you expect to see for the training error? For the validation error?**
- **Would you expect the training error to be higher or lower than the validation error? Would you ever expect this to change?**
We can run the following code to plot the learning curve for a ``kernel = linear`` model:
```
from sklearn.model_selection import learning_curve
from sklearn.svm import SVR
scoring = 'neg_mean_squared_error'
training_sizes, train_scores, test_scores = learning_curve(SVR(kernel='linear'), X, y, cv=10,
scoring=scoring,
train_sizes=[.6, .7, .8, .9, 1.])
# Use the negative because we want to maximize score
print(train_scores.mean(axis=1))
plt.plot(training_sizes, train_scores.mean(axis=1), label="training scores")
plt.plot(training_sizes, test_scores.mean(axis=1), label="test scores")
plt.legend(loc='best');
```
You can see that for the model with ``kernel = linear``, the validation score doesn't really improve as more data is given.
Notice that the validation error *generally improves* with a growing training set,
while the training error *generally gets worse* with a growing training set. From
this we can infer that as the training size increases, they will converge to a single
value.
From the above discussion, we know that `kernel = linear`
underfits the data. This is indicated by the fact that both the
training and validation errors are very poor. When confronted with this type of learning curve,
we can expect that adding more training data will not help matters: both
lines will converge to a relatively high error.
**When the learning curves have converged to a poor error, we have an underfitting model.**
An underfitting model can be improved by:
- Using a more sophisticated model (i.e. in this case, increase complexity of the ``kernel`` parameter)
- Gather more features for each sample.
- Decrease regularization in a regularized model.
An underfitting model cannot be improved, however, by increasing the number of training
samples (do you see why?)
Now let's look at an overfit model:
```
from sklearn.svm import SVR
training_sizes, train_scores, test_scores = learning_curve(SVR(kernel='rbf', gamma='auto'), X, y, cv=10,
scoring=scoring,
train_sizes=[.6, .7, .8, .9, 1.])
plt.plot(training_sizes, train_scores.mean(axis=1), label="training scores")
plt.plot(training_sizes, test_scores.mean(axis=1), label="test scores")
plt.legend(loc='best')
```
Here we show the learning curve for `kernel = rbf`. From the above
discussion, we know that `kernel = rbf` is an estimator
which mildly **overfits** the data. This is indicated by the fact that the
training error is **much** better than the validation error. As
we add more samples to this training set, the training error will
continue to worsen, while the cross-validation error will continue
to improve, until they meet in the middle. We can infer that adding more
data will allow the estimator to very closely match the best
possible cross-validation error.
**When the learning curves have not yet converged with our full training set, it indicates an overfit model.**
An overfitting model can be improved by:
- Gathering more training samples.
- Using a less-sophisticated model (i.e. in this case, make ``kernel`` less complex with ``kernel = poly``)
- Increasing regularization (parameter ``C`` for SVM/SVR).
In particular, gathering more features for each sample will not help the results.
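To complement the learning curves, a validation curve over the SVR regularization parameter ``C`` makes the same trade-off visible directly. This is an illustrative sketch, not part of the original notebook:
```
from sklearn.model_selection import validation_curve

Cs = np.logspace(-2, 2, 10)
train_scores, test_scores = validation_curve(
    SVR(kernel='rbf', gamma='auto'), X, y,
    param_name='C', param_range=Cs,
    cv=10, scoring=scoring)

plt.semilogx(Cs, train_scores.mean(axis=1), label="training scores")
plt.semilogx(Cs, test_scores.mean(axis=1), label="test scores")
plt.xlabel('C')
plt.legend(loc='best');
```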
## Summary
We’ve seen above that an under-performing algorithm can be due
to two possible situations: underfitting and overfitting.
Using the technique of learning curves, we can train on progressively
larger subsets of the data, evaluating the training error and
cross-validation error to determine whether our algorithm is overfitting or underfitting. But what do we do with this information?
### Underfitting
If our algorithm is **underfitting**, the following actions might help:
- **Add more features**. It may be helpful to make use of information as
additional features. For example, to predict housing prices
features such as the neighborhood
the house is in, the year the house was built, the size of the lot, etc.
can help the model by giving new dimensions to help differentiate
houses. Adding these features to the training and test sets can improve
the fit.
- **Use a more sophisticated model**. Adding complexity to the model can
help improve the fit. For a SVR fit, this can be accomplished
by increasing the kernel complexity (generally ``linear`` << ``poly`` << ``rbf``).
Each learning technique has its own methods of adding complexity.
- **Use fewer samples**. Though this will not improve the model's accuracy,
an underfitting algorithm can attain nearly the same error with a smaller
training sample. For algorithms which are computationally expensive,
reducing the training sample size can lead to very large improvements
in speed.
- **Decrease regularization**. Regularization is a technique used to impose
simplicity in some machine learning models, by adding a penalty term that
depends on the characteristics of the parameters. If a model is underfitting,
decreasing the regularization can lead to better results.
### Overfitting
If our algorithm shows signs of **overfitting**, the following actions might help:
- **Use fewer features**. Using a feature selection technique may be
useful, and decrease the overfitting of the estimator.
- **Use a simpler model**. Model complexity and overfitting go hand-in-hand.
For example, models like random forests tend to overfit
much more than linear models and SVMs.
- **Use more training samples**. Adding training samples can reduce
the effect of overfitting.
- **Increase Regularization**. Regularization is designed to prevent
overfitting. So increasing regularization
can lead to better results for overfitting models.
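As an illustration of the feature-selection remedy above, here is a minimal sketch (again assuming the same `X` and `y`; `SelectKBest` with `f_regression` is just one possible selector, and `k=5` is an arbitrary choice that must not exceed the number of available features):
```
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

# Keep only the k features most associated with the target,
# then fit the same SVR on the reduced feature set.
reduced_model = make_pipeline(
    SelectKBest(score_func=f_regression, k=5),
    SVR(kernel='rbf', gamma='auto'))
print(cross_val_score(reduced_model, X, y, cv=10).mean())
```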
These choices become very important in real-world situations, as data collection usually
costs time and energy. If the model is underfitting, then spending weeks or months collecting
more data could be a colossal waste of time! However, more data (usually) gives us a better view
of the true nature of the problem, so these issues should always be carefully considered before
going on a "data foraging expedition".
# Feature: POS/NER Tag Similarity
Derive bag-of-POS-tag and bag-of-NER-tag vectors from each question and calculate their vector distances.
## Imports
This utility package imports `numpy`, `pandas`, `matplotlib` and a helper `kg` module into the root namespace.
```
from pygoose import *
import os
import warnings
from collections import Counter
from scipy.spatial.distance import cosine, euclidean, jaccard
import spacy
```
## Config
Automatically discover the paths to various data folders and compose the project structure.
```
project = kg.Project.discover()
```
Identifier for storing these features on disk and referring to them later.
```
feature_list_id = 'nlp_tags'
```
## Read Data
Original question datasets.
```
df_train = pd.read_csv(project.data_dir + 'train.csv').fillna('')
df_test = pd.read_csv(project.data_dir + 'test.csv').fillna('')
```
Preprocessed and tokenized questions.
We should not use lowercased tokens here because that would harm the named entity recognition process.
```
tokens_train = kg.io.load(project.preprocessed_data_dir + 'tokens_spellcheck_train.pickle')
tokens_test = kg.io.load(project.preprocessed_data_dir + 'tokens_spellcheck_test.pickle')
df_all_texts = pd.DataFrame(
[[' '.join(pair[0]), ' '.join(pair[1])] for pair in tokens_train + tokens_test],
columns=['question1', 'question2'],
)
```
Dependency parsing takes a lot of time and we don't use any features from it. Let's disable it in the pipeline.
If model loading fails, run `python -m spacy download en`
```
nlp = spacy.load('en', parser=False)
```
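As a quick, illustrative sanity check of the earlier note about casing (the exact entities found will depend on the model version), NER output can be compared on cased vs. lowercased text:
```
# NER relies heavily on capitalization cues, which is why the tokens were not lowercased.
doc_cased = nlp('Where is New York located?')
doc_lower = nlp('where is new york located?')
print([ent.text for ent in doc_cased.ents])  # typically ['New York']
print([ent.text for ent in doc_lower.ents])  # often empty
```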
## Build Features
```
pos_tags_whitelist = ['ADJ', 'ADV', 'NOUN', 'PROPN', 'NUM', 'VERB']
ner_tags_whitelist = ['GPE', 'LOC', 'ORG', 'NORP', 'PERSON', 'PRODUCT', 'DATE', 'TIME', 'QUANTITY', 'CARDINAL']
num_raw_features = len(pos_tags_whitelist) + len(ner_tags_whitelist)
X1 = np.zeros((len(df_all_texts), num_raw_features))
X2 = np.zeros((len(df_all_texts), num_raw_features))
X1.shape, X2.shape
```
### Collect POS and NER tags
```
pipe_q1 = nlp.pipe(df_all_texts['question1'].values, n_threads=os.cpu_count())
pipe_q2 = nlp.pipe(df_all_texts['question2'].values, n_threads=os.cpu_count())
for i, doc in progressbar(enumerate(pipe_q1), total=len(df_all_texts)):
pos_counter = Counter(token.pos_ for token in doc)
ner_counter = Counter(ent.label_ for ent in doc.ents)
X1[i, :] = np.array(
[pos_counter[pos_tag] for pos_tag in pos_tags_whitelist] +
[ner_counter[ner_tag] for ner_tag in ner_tags_whitelist]
)
for i, doc in progressbar(enumerate(pipe_q2), total=len(df_all_texts)):
pos_counter = Counter(token.pos_ for token in doc)
ner_counter = Counter(ent.label_ for ent in doc.ents)
X2[i, :] = np.array(
[pos_counter[pos_tag] for pos_tag in pos_tags_whitelist] +
[ner_counter[ner_tag] for ner_tag in ner_tags_whitelist]
)
```
### Create tag feature sets
```
df_pos_q1 = pd.DataFrame(
X1[:, 0:len(pos_tags_whitelist)],
columns=['pos_q1_' + pos_tag.lower() for pos_tag in pos_tags_whitelist]
)
df_pos_q2 = pd.DataFrame(
X2[:, 0:len(pos_tags_whitelist)],
columns=['pos_q2_' + pos_tag.lower() for pos_tag in pos_tags_whitelist]
)
df_ner_q1 = pd.DataFrame(
X1[:, -len(ner_tags_whitelist):],
columns=['ner_q1_' + ner_tag.lower() for ner_tag in ner_tags_whitelist]
)
df_ner_q2 = pd.DataFrame(
X2[:, -len(ner_tags_whitelist):],
columns=['ner_q2_' + ner_tag.lower() for ner_tag in ner_tags_whitelist]
)
```
### Compute pairwise distances
```
def get_vector_distances(i):
return [
# POS distances.
cosine(X1[i, 0:len(pos_tags_whitelist)], X2[i, 0:len(pos_tags_whitelist)]),
euclidean(X1[i, 0:len(pos_tags_whitelist)], X2[i, 0:len(pos_tags_whitelist)]),
# NER distances.
euclidean(X1[i, -len(ner_tags_whitelist):], X2[i, -len(ner_tags_whitelist):]),
np.abs(np.sum(X1[i, -len(ner_tags_whitelist):]) - np.sum(X2[i, -len(ner_tags_whitelist):])),
]
warnings.filterwarnings('ignore')
X_distances = kg.jobs.map_batch_parallel(
list(range(len(df_all_texts))),
item_mapper=get_vector_distances,
batch_size=1000,
)
X_distances = np.array(X_distances)
df_distances = pd.DataFrame(
X_distances,
columns=[
'pos_tag_cosine',
'pos_tag_euclidean',
'ner_tag_euclidean',
'ner_tag_count_diff',
]
)
```
### Build master feature list
```
df_master = pd.concat(
[df_pos_q1, df_ner_q1, df_pos_q2, df_ner_q2, df_distances],
axis=1,
ignore_index=True,
)
df_master.columns = list(df_pos_q1.columns) + \
list(df_ner_q1.columns) + \
list(df_pos_q2.columns) + \
list(df_ner_q2.columns) + \
list(df_distances.columns)
df_master.describe().T
X_train = df_master[:len(tokens_train)].values
X_test = df_master[len(tokens_train):].values
print('X train:', X_train.shape)
print('X test: ', X_test.shape)
```
## Save Features
```
feature_names = list(df_master.columns)
project.save_features(X_train, X_test, feature_names, feature_list_id)
```
#### - Merge Cell painting & L1000 Level-4 data
- Merge the CP and L1000 data based on the compounds present in both assays, and make sure the number of replicates per compound and treatment dose is the same in both assays, so that the two datasets stay aligned.
#### - Train/Test split the merged Level-4 data
```
import os
import pathlib
import pandas as pd
import numpy as np
import re
from os import walk
from collections import Counter
import random
# Load common compounds
common_file = pathlib.Path(
"..", "..", "6.paper_figures", "data", "significant_compounds_by_threshold_both_assays.tsv.gz"
)
common_df = pd.read_csv(common_file, sep="\t")
common_compounds = common_df.compound.unique()
print(len(common_compounds))
print(common_df.shape)
common_df.head(2)
data_path = '../0.download_cellpainting_L1000_data/data/'
cpd_split_path = '../1.compound_split_train_test/data'
data_path = '../../1.Data-exploration/Profiles_level4/cell_painting/cellpainting_lvl4_cpd_replicate_datasets/'
df_level4_cp = pd.read_csv(
os.path.join(data_path, 'cp_level4_cpd_replicates.csv.gz'),
compression='gzip',
low_memory = False
)
data_path = '../../1.Data-exploration/Profiles_level4/L1000/L1000_lvl4_cpd_replicate_datasets/'
df_level4_L1 = pd.read_csv(
os.path.join(data_path, 'L1000_level4_cpd_replicates.csv.gz'),
compression='gzip',
low_memory = False
)
df_cpds_moas_lincs = pd.read_csv(os.path.join(cpd_split_path, 'split_moas_cpds.csv'))
all_cpds = df_cpds_moas_lincs['pert_iname'].unique()
df_level4_cp = df_level4_cp.loc[df_level4_cp['pert_iname'].isin(all_cpds)].reset_index(drop=True)
df_level4_L1 = df_level4_L1.loc[df_level4_L1['pert_iname'].isin(all_cpds)].reset_index(drop=True)
df_level4_cp['moa'] = df_level4_cp['moa'].apply(lambda x: x.lower())
df_level4_L1['moa'] = df_level4_L1['moa'].apply(lambda x: x.lower())
##sanity check
for cpd in df_level4_cp['pert_iname'].unique():
if cpd not in df_level4_L1['pert_iname'].unique():
print('Some compounds in CP are not found in L1000!!')
len(df_level4_cp['pert_iname'].unique())
len(df_level4_L1['pert_iname'].unique())
df_level4_cp.rename({'Metadata_dose_recode':'dose'}, axis = 1, inplace = True)
##the same columns in Cell painting and L1000;
for col in df_level4_L1.columns:
if col in df_level4_cp.columns.tolist():
print(col)
df_level4_cp.shape
df_level4_L1.shape
def merge_cp_L1000_df(df_cp, df_L1000, all_cpds):
"""
This function merge Cell painting and L1000 level-4 data to one dataframe based on their compounds
args
df_cp: Cell painting Level-4 dataFrame
df_L1: L1000 Level-4 dataFrame
all_cpds: Compounds found in both Cell painting and L1000
return
df_lvl4: merged CP & L1000 dataframe
"""
df_level4_cp_rand = pd.DataFrame(columns = df_cp.columns)
df_level4_L1_rand = pd.DataFrame(columns = df_L1000.columns)
for idx, cpd in enumerate(all_cpds):
df_cpd = df_L1000[df_L1000['pert_iname'] == cpd]
for dose in df_cpd['dose'].unique():
df_dose = df_cpd[df_cpd['dose'] == dose].copy()
df_cpd_cp = df_cp[(df_cp['pert_iname'] == cpd) & (df_cp['dose'] == dose)]
if df_cpd_cp.shape[0] >= df_dose.shape[0]:
df_level4_cp_rand = pd.concat([df_level4_cp_rand,df_cpd_cp.sample(df_dose.shape[0])], ignore_index = True)
df_level4_L1_rand = pd.concat([df_level4_L1_rand,df_dose], ignore_index = True)
else:
df_level4_cp_rand = pd.concat([df_level4_cp_rand,df_cpd_cp], ignore_index = True)
df_level4_L1_rand = pd.concat([df_level4_L1_rand,df_dose.sample(df_cpd_cp.shape[0])], ignore_index = True)
df_level4_cp_rand.rename({'broad_id':'pert_id'}, axis = 1, inplace = True)
df_level4_cp_rand.drop(['dose', 'pert_iname', 'moa', 'pert_id', 'Metadata_broad_sample'], axis = 1, inplace = True)
df_lvl4 = pd.concat([df_level4_cp_rand,df_level4_L1_rand], axis = 1)
return df_lvl4
df_level4 = merge_cp_L1000_df(df_level4_cp, df_level4_L1, all_cpds)
df_level4.shape
def create_moa_targets(df):
"""Create the binary multi-label MOA targets for each compound"""
df['val'] = 1
df_moas_targets = pd.pivot_table(df, values=['val'], index='pert_iname',columns=['moa'], fill_value=0)
df_moas_targets.columns.names = (None,None)
df_moas_targets.columns = df_moas_targets.columns.droplevel(0)
df_moas_targets = df_moas_targets.reset_index().rename({'index':'pert_iname'}, axis = 1)
return df_moas_targets
df_cpds_moas = df_cpds_moas_lincs.copy()
df_moa_targets = create_moa_targets(df_cpds_moas)
df_level4 = df_level4.merge(df_moa_targets, on='pert_iname')
df_level4.shape
```
### - compounds split (80/20) based on MOAs -- based on split_moas_cpds
```
train_cpds = df_cpds_moas_lincs[df_cpds_moas_lincs['train']]['pert_iname'].unique()
test_cpds = df_cpds_moas_lincs[df_cpds_moas_lincs['test']]['pert_iname'].unique()
def train_test_split(train_cpds, test_cpds, df):
df_trn = df.loc[df['pert_iname'].isin(train_cpds)].reset_index(drop=True)
df_tst = df.loc[df['pert_iname'].isin(test_cpds)].reset_index(drop=True)
return df_trn, df_tst
df_level4_trn, df_level4_tst = train_test_split(train_cpds, test_cpds, df_level4)
df_level4_trn.shape
df_level4_tst.shape
```
### - Shuffle train data - 2nd train data
#### - Shuffle the target labels in the train data so that replicates of the same compound/MOA have different MOA labels
```
def create_shuffle_data(df_trn, target_cols):
"""Create shuffled train data where the replicates of each compound are given wrong target labels"""
df_trn_cpy = df_trn.copy()
df_trn_tgts = df_trn_cpy[target_cols].copy()
rand_df = pd.DataFrame(np.random.permutation(df_trn_tgts), columns =df_trn_tgts.columns.tolist())
df_trn_cpy.drop(target_cols, axis = 1, inplace = True)
df_trn_cpy = pd.concat([df_trn_cpy, rand_df], axis = 1)
return df_trn_cpy
target_cols = df_moa_targets.columns[1:]
df_lvl4_trn_shuf = create_shuffle_data(df_level4_trn, target_cols)
df_lvl4_trn_shuf.shape
def save_to_csv(df, path, file_name, compress=None):
"""saves dataframes to csv"""
os.makedirs(path, exist_ok=True)  # creates nested directories (e.g. 'model_data/merged/') if needed
df.to_csv(os.path.join(path, file_name), index=False, compression=compress)
L1_cp_level4_path = 'model_data/merged/'
save_to_csv(df_level4, L1_cp_level4_path, 'cp_L1000_lvl4_data.csv.gz', compress="gzip")
save_to_csv(df_level4_trn, L1_cp_level4_path, 'train_lvl4_data.csv.gz', compress="gzip")
save_to_csv(df_level4_tst, L1_cp_level4_path, 'test_lvl4_data.csv.gz', compress="gzip")
save_to_csv(df_lvl4_trn_shuf, L1_cp_level4_path, 'train_shuffle_lvl4_data.csv.gz', compress="gzip")
save_to_csv(df_moa_targets, L1_cp_level4_path, 'target_labels.csv')
```
# Initializing a fiber with custom spectroscopy
This short example demonstrates how you can initialize a fiber with your own absorption and emission cross section data. In practice, this example uses the same spectroscopy files for Yb germano-silicate as the demonstration classes YbDopedFiber and YbDopedDoubleCladFiber. You can find the sample files in "pyfiberamp/spectroscopies/fiber_spectra/", which will be in your Python installation's site-packages folder if you have installed PyFiberAmp. It might be easier to locate the files on the [GitHub page](https://github.com/Jomiri/pyfiberamp) instead. Your own files should follow the same format, i.e. be readable with [numpy.loadtxt](https://docs.scipy.org/doc/numpy/reference/generated/numpy.loadtxt.html) when default parameters are used. The first column in the files should contain wavelength in nanometers and the second column the cross section in m^2. Both "," and "." are accepted as decimal separators.
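For illustration, a file of the expected two-column shape could be generated as in the sketch below (the numbers are made up and are *not* real Yb cross sections — substitute your own measured data):
```
import numpy as np

# Hypothetical example file: wavelength (nm) in the first column,
# cross section (m^2) in the second, one pair per row.
wavelengths_nm = np.array([900, 920, 940, 960, 976, 1000, 1030])
cross_sections_m2 = np.array([0.5, 0.8, 1.5, 2.0, 25.0, 1.0, 0.6]) * 1e-25
np.savetxt('my_absorption_cs.txt', np.column_stack([wavelengths_nm, cross_sections_m2]))
```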
## Imports
```
from pyfiberamp.spectroscopies import Spectroscopy
from pyfiberamp.fibers import ActiveFiber, DoubleCladFiber
from pyfiberamp.parameters import YB_ABSORPTION_CS_FILE, YB_EMISSION_CS_FILE
```
## 1) Creating a spectroscopy object
```
path_to_absorption_cross_section_file = YB_ABSORPTION_CS_FILE # replace with your own file
path_to_emission_cross_section_file = YB_EMISSION_CS_FILE # replace with your own file
upper_state_lifetime = 1e-3
yb_spectroscopy = Spectroscopy.from_files(
absorption_cross_section_file=path_to_absorption_cross_section_file,
emission_cross_section_file=path_to_emission_cross_section_file,
upper_state_lifetime=upper_state_lifetime,
interpolate='spline') # alternatively: interpolate='linear'
```
## 2) Checking that the spectra and especially the interpolates look correct
Spline interpolation is smoother but does not work well with large gaps in the data. If the interpolates look bad, you can switch to linear interpolation or try to add more data points in the spectrum files. In the case of the sample cross section files, the spline interpolates the data points really well.
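To see this trade-off outside PyFiberAmp, here is a small generic illustration (using scipy directly rather than the package's internals) of how a cubic spline can oscillate across a large gap in sparse data while linear interpolation stays bounded:
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import interp1d

# Sparse data with a large gap between x=2 and x=8.
x = np.array([0, 1, 2, 8, 9, 10])
y = np.array([0.0, 1.0, 0.0, 0.0, 1.0, 0.0])
x_fine = np.linspace(0, 10, 500)

plt.plot(x, y, 'ko', label='data')
plt.plot(x_fine, interp1d(x, y, kind='linear')(x_fine), label='linear')
plt.plot(x_fine, interp1d(x, y, kind='cubic')(x_fine), label='cubic spline')
plt.legend()
```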
```
yb_spectroscopy.plot_gain_and_absorption_spectrum()
```
## 3 a) Initializing a core-pumped fiber with this spectroscopy
```
fiber = ActiveFiber(spectroscopy=yb_spectroscopy,
ion_number_density=1e25,
length=1,
core_radius=3e-6,
core_na=0.10,
background_loss=0)
```
## 3 b) Initializing a double-clad fiber with this spectroscopy
```
double_clad_fiber = DoubleCladFiber(spectroscopy=yb_spectroscopy,
ion_number_density=1e25,
length=1,
core_radius=3e-6,
core_na=0.10,
background_loss=0,
ratio_of_core_and_cladding_diameters=1/10)
```
### The fibers are now ready for use in simulations!
# Machine Learning GridSearch Pipeline
```
# Import libraries
import os
import sys
# cpu_count returns the number of CPUs in the system.
from multiprocessing import cpu_count
import numpy as np
import pandas as pd
# Import metrics
from sklearn.metrics import accuracy_score
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Import preprocessing methods from sklearn
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import RobustScaler
from sklearn.preprocessing import MinMaxScaler
# Import PCA
from sklearn.decomposition import PCA
# Import feature_selection tools
from sklearn.feature_selection import VarianceThreshold
# Import models from sklearn
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
# Import XGBClassifier
from xgboost.sklearn import XGBClassifier
# Import from sklearn
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.externals import joblib  # note: newer scikit-learn versions removed sklearn.externals; use `import joblib` there
from sklearn.base import TransformerMixin
from sklearn.base import BaseEstimator
# Import plotting libraries
import matplotlib.pyplot as plt
# Modify notebook settings
pd.options.display.max_columns = 150
pd.options.display.max_rows = 150
%matplotlib inline
plt.style.use('ggplot')
```
### Create paths to data file, append `src` directory to sys.path
```
# Create a variable for the project root directory
proj_root = os.path.join(os.pardir)
# Save path to the processed data file
# "dataset_processed.csv"
processed_data_file = os.path.join(proj_root,
"data",
"processed",
"dataset_processed.csv")
# add the 'src' directory as one where we can import modules
src_dir = os.path.join(proj_root, "src")
sys.path.append(src_dir)
```
### Create path to the `reports/figures` directory
```
# Save the path to the folder that will contain
# the figures for the final report:
# /reports/figures
figures_dir = os.path.join(proj_root,
"reports",
"figures")
```
### Read in the processed data
```
# Read in the processed credit card client default data set.
df = pd.read_csv(processed_data_file,
index_col=0)
df.head()
```
### Train test split
```
# Extract X and y from df
X = df.drop('y', axis=1).values
#y = df[['y']].values
y = df['y'].values
# Train test split
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.3, random_state=42)
# Define a function`namestr` to access the name of a variable
def namestr(obj, namespace):
return [name for name in namespace if namespace[name] is obj][0]
# Print the shape of X, y, X_train, X_test, y_train, and y_test
for var in [X, y, X_train, X_test, y_train, y_test]:
print(namestr(var, globals()),
'shape:\t',
var.shape)
```
### Make pipeline
```
df_X = df.drop('y', axis=1)
def create_binary_feature_list(df=df_X,
return_binary_features=True):
"""
Docstring ...
"""
# Create boolean maskDrop the column with the target values
binary_mask = df.isin([0, 1]).all()
# If return_binary_features=True,
# create a list of the binary features.
# If return_binary_features=False,
# create a list of the nonbinary features.
features_list = list(binary_mask[binary_mask == \
return_binary_features].index)
return features_list
def binary_feature_index_list(df=df_X,
features_list=None):
"""
Docstring ...
"""
feature_index_list = [df.columns.get_loc(c) for c \
in df.columns if c in features_list]
return feature_index_list
binary_features = create_binary_feature_list(df=df_X,
return_binary_features=True)
non_binary_features = create_binary_feature_list(df=df_X,
return_binary_features=False)
binary_index_list = \
binary_feature_index_list(df=df_X,
features_list=binary_features)
non_binary_index_list = \
binary_feature_index_list(df=df_X,
features_list=non_binary_features)
print('Binary features:\n')
print(''.join('{:2s}: {:40s}'.format(str(i), col) \
for i, col in zip(binary_index_list,
binary_features)))
print('\n')
print('Non-binary features:\n')
print(''.join('{:2s}: {:40s}'.format(str(i), col) \
for i, col in zip(non_binary_index_list,
non_binary_features)))
```
#### User defined preprocessors
```
class NonBinary_PCA(BaseEstimator, TransformerMixin):
def __init__(self):
self.scaler = PCA(n_components=None, random_state=42)
# Fit PCA only on the non-binary features
def fit(self, X, y):
self.scaler.fit(X[:, non_binary_index_list], y)
return self
# Transform only the non-binary features with PCA
def transform(self, X):
X_non_binary = \
self.scaler.transform(X[:, non_binary_index_list])
X_recombined = X_non_binary
binary_index_list.sort()
for col in binary_index_list:
X_recombined = np.insert(X_recombined,
col,
X[:, col],
1)
return X_recombined
class NonBinary_RobustScaler(BaseEstimator, TransformerMixin):
def __init__(self):
self.scaler = RobustScaler()
# Fit RobustScaler only on the non-binary features
def fit(self, X, y):
self.scaler.fit(X[:, non_binary_index_list], y)
return self
# Transform only the non-binary features with RobustScaler
def transform(self, X):
X_non_binary = \
self.scaler.transform(X[:, non_binary_index_list])
X_recombined = X_non_binary
binary_index_list.sort()
for col in binary_index_list:
X_recombined = np.insert(X_recombined,
col,
X[:, col],
1)
return X_recombined
class NonBinary_StandardScaler(BaseEstimator, TransformerMixin):
def __init__(self):
self.scaler = StandardScaler()
# Fit StandardScaler only on the non-binary features
def fit(self, X, y):
self.scaler.fit(X[:, non_binary_index_list], y)
return self
# Transform only the non-binary features with StandardScaler
def transform(self, X):
X_non_binary = \
self.scaler.transform(X[:, non_binary_index_list])
X_recombined = X_non_binary
binary_index_list.sort()
for col in binary_index_list:
X_recombined = np.insert(X_recombined,
col,
X[:, col],
1)
return X_recombined
class NonBinary_MinMaxScaler(BaseEstimator, TransformerMixin):
def __init__(self):
self.scaler = MinMaxScaler()
# Fit MinMaxScaler only on the non-binary features
def fit(self, X, y):
self.scaler.fit(X[:, non_binary_index_list], y)
return self
# Transform only the non-binary features with MinMaxScaler
def transform(self, X):
X_non_binary = \
self.scaler.transform(X[:, non_binary_index_list])
X_recombined = X_non_binary
binary_index_list.sort()
for col in binary_index_list:
X_recombined = np.insert(X_recombined,
col,
X[:, col],
1)
return X_recombined
```
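These four wrapper classes repeat the same fit/transform pattern over the non-binary columns. In newer scikit-learn versions much of this can be expressed with `ColumnTransformer`; a minimal sketch (not part of the original pipeline) is shown below. One reason to keep the hand-written classes is that they re-insert the binary columns at their original positions, whereas `ColumnTransformer` places the transformed columns first.
```
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import RobustScaler

# Scale only the non-binary columns; pass the binary columns through untouched.
nonbinary_scaler = ColumnTransformer(
    [('scale', RobustScaler(), non_binary_index_list)],
    remainder='passthrough')
```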
#### Define the pipeline
```
# Set a high threshold for removing near-zero variance features
#thresh_prob = 0.999
thresh_prob = 0.99
threshold = (thresh_prob * (1 - thresh_prob))
# Create pipeline
pipe = Pipeline([('preprocessing_1', VarianceThreshold(threshold)),
('preprocessing_2', None),
('preprocessing_3', None),
('classifier', DummyClassifier(strategy='most_frequent',
random_state=42))])
# Create parameter grid
param_grid = [
{'classifier': [LogisticRegression(random_state=42)],
'preprocessing_1': [None, NonBinary_RobustScaler()],
'preprocessing_2': [None, NonBinary_PCA()],
'preprocessing_3': [None, VarianceThreshold(threshold)],
'classifier__C': [0.01, 0.1],
'classifier__penalty': ['l1','l2']},
{'classifier': [XGBClassifier(objective='binary:logistic', n_estimators=1000)],
'preprocessing_1': [None, VarianceThreshold(threshold)],
'preprocessing_2': [None],
'preprocessing_3': [None],
'classifier__n_estimators': [1000],
'classifier__learning_rate': [0.01, 0.1],
'classifier__gamma': [0.01, 0.1],
'classifier__max_depth': [3, 4],
'classifier__min_child_weight': [1, 3],
'classifier__subsample': [0.8],
# 'classifier__colsample_bytree': [0.8, 1.0],
'classifier__reg_lambda': [0.1, 1.0],
'classifier__reg_alpha': [0, 0.1]}]
# Set the number of cores to be used
cores_used = cpu_count() - 1
cores_used
cores_used = 1
# Set verbosity
verbosity = 1
# Execute Grid search
grid = GridSearchCV(pipe, param_grid, cv=5, scoring='roc_auc',
verbose=verbosity, n_jobs=cores_used)
grid.fit(X_train, y_train)
print("Best params:\n{}\n".format(grid.best_params_))
print("Best cross-validation score: {:.2f}".format(grid.best_score_))
```
#### Save the grid search object as a pickle file
```
# Save path to the `models` folder
models_folder = os.path.join(proj_root,
"models")
# full_gridsearch_file_name = 'gridsearch_pickle_20171029.pkl'
full_gridsearch_file_name = 'gridsearch_pickle.pkl'
full_gridsearch_path = os.path.join(models_folder,
full_gridsearch_file_name)
joblib.dump(grid, full_gridsearch_path)
# best_pipeline_file_name = 'pipeline_pickle_20171029.pkl'
best_pipeline_file_name = 'pipeline_pickle.pkl'
best_pipeline_path = os.path.join(models_folder,
best_pipeline_file_name)
joblib.dump(grid.best_estimator_, best_pipeline_path)
```
### Grid search for best *logistic regression* model
```
# Create parameter grid
param_grid = [
{'classifier': [LogisticRegression(random_state=42)],
'preprocessing_1': [None], # [VarianceThreshold(threshold)],
'preprocessing_2': [NonBinary_RobustScaler()],
'preprocessing_3': [None, NonBinary_PCA(), VarianceThreshold(threshold)],
'classifier__C': [0.001, 0.01, 0.1, 1, 10, 100],
'classifier__penalty': ['l1','l2']}]
# Set the number of cores to be used
cores_used = cpu_count() - 1
cores_used
cores_used = 1
# Set verbosity
verbosity = 1
# Execute Grid search
logreg_grid = GridSearchCV(pipe, param_grid, cv=5, scoring='roc_auc',
verbose=verbosity, n_jobs=cores_used)
logreg_grid.fit(X_train, y_train)
print("Best logistic regression params:\n{}\n".format(logreg_grid.best_params_))
print("Best cross-validated logistic regression score: {:.2f}".format(logreg_grid.best_score_))
# Save the grid search object as a pickle file
models_folder = os.path.join(proj_root,
"models")
logreg_gridsearch_file_name = 'logreg_gridsearch_pickle.pkl'
logreg_gridsearch_path = os.path.join(models_folder,logreg_gridsearch_file_name)
joblib.dump(logreg_grid, logreg_gridsearch_path)
best_logreg_pipeline_file_name = 'best_logreg_pipeline_pickle.pkl'
best_logreg_pipeline_path = os.path.join(models_folder,
best_logreg_pipeline_file_name)
joblib.dump(logreg_grid.best_estimator_, best_logreg_pipeline_path)
```
#### Read in the best pipeline
```
# best_pipeline_file_name = 'pipeline_pickle_20171029.pkl'
best_pipeline_file_name = 'pipeline_pickle.pkl'
best_pipeline_path = os.path.join(models_folder,
best_pipeline_file_name)
clf = joblib.load(best_pipeline_path)
```
#### Read in the best logistic regression pipeline
```
best_logreg_pipeline_file_name = 'best_logreg_pipeline_pickle.pkl'
best_logreg_pipeline_path = os.path.join(models_folder,
best_logreg_pipeline_file_name)
logreg_clf = joblib.load(best_logreg_pipeline_path)
```
#### Check AUC scores
```
cross_val_results = cross_val_score(clf,
X_train,
y_train,
scoring="roc_auc",
cv=5,
n_jobs=1)
results_mean = np.mean(cross_val_results)
print("Best pipeline:")
print("Mean Cross validation AUC:\n{:.3f}\n".format(results_mean))
cross_val_results_logreg = cross_val_score(logreg_clf,
X_train,
y_train,
scoring="roc_auc",
cv=5,
n_jobs=1)
results_mean_logreg = np.mean(cross_val_results_logreg)
print("Best logistic regression pipeline:")
print("Mean Cross validation AUC:\n{:.3f}\n".format(results_mean_logreg))
```
Best logistic regression pipeline:
Mean Cross validation AUC:
0.771
```
clf.fit(X_train, y_train)
auc_train = roc_auc_score(y_train, clf.predict_proba(X_train)[:,1])
print("Train AUC:\n{:.3f}\n".format(auc_train))
auc_test = roc_auc_score(y_test, clf.predict_proba(X_test)[:,1])
print("Test AUC:\n{:.3f}\n".format(auc_test))
dummy_clf = DummyClassifier(strategy='most_frequent',
random_state=42)
dummy_clf.fit(X_train, y_train)
dummy_auc_train = roc_auc_score(y_train,
dummy_clf.predict_proba(X_train)[:,1])
print("Dummy Train AUC:\n{:.3f}\n".format(dummy_auc_train))
dummy_auc_test = roc_auc_score(y_test,
dummy_clf.predict_proba(X_test)[:,1])
print("Dummy Test AUC:\n{:.3f}\n".format(dummy_auc_test))
```
#### Plot the Receiver Operating Characteristic Curves
```
probs = clf.predict_proba(X_test)
preds = probs[:,1]
fpr, tpr, threshold = roc_curve(y_test, preds)
#roc_auc = auc(fpr, tpr)
roc_auc = roc_auc_score(y_test, preds)
plt.plot(fpr, tpr, 'b', label = 'XGBoost Test AUC = %0.3f' % roc_auc)
probs = logreg_clf.predict_proba(X_test)
preds = probs[:,1]
fpr, tpr, threshold = roc_curve(y_test, preds)
#roc_auc = auc(fpr, tpr)
roc_auc = roc_auc_score(y_test, preds)
plt.plot(fpr, tpr, 'g', label = 'Logistic Regression\nTest AUC = %0.3f' % roc_auc)
#plt.plot([0, 1], [0, 1],'k', label = 'Baseline AUC = 0.500' )
probs = dummy_clf.predict_proba(X_test)
preds = probs[:,1]
fpr, tpr, threshold = roc_curve(y_test, preds)
#roc_auc = auc(fpr, tpr)
roc_auc = roc_auc_score(y_test, preds)
plt.plot(fpr, tpr, 'r--', label = 'Dummy Model AUC = %0.3f' % roc_auc)
plt.title('Receiver Operating Characteristic')
plt.legend(loc = 'lower right')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
# figure file_name
fig_file_name = 'roc_curve'
# figure file_path
fig_path = os.path.join(figures_dir,
fig_file_name)
# Save the figure
plt.savefig(fig_path, dpi = 300)
plt.plot([0, 1], [0, 1],'k', label = 'Baseline AUC = 0.50' )
probs = clf.predict_proba(X_train)
preds = probs[:,1]
fpr, tpr, threshold = roc_curve(y_train, preds)
#roc_auc = auc(fpr, tpr)
roc_auc = roc_auc_score(y_train, preds)
plt.plot(fpr, tpr, 'b', label = 'Train AUC = %0.2f' % roc_auc)
probs = clf.predict_proba(X_test)
preds = probs[:,1]
fpr, tpr, threshold = roc_curve(y_test, preds)
#roc_auc = auc(fpr, tpr)
roc_auc = roc_auc_score(y_test, preds)
plt.plot(fpr, tpr, 'g', label = 'Test AUC = %0.2f' % roc_auc)
probs = dummy_clf.predict_proba(X_test)
preds = probs[:,1]
fpr, tpr, threshold = roc_curve(y_test, preds)
#roc_auc = auc(fpr, tpr)
roc_auc = roc_auc_score(y_test, preds)
plt.plot(fpr, tpr, 'r--', label = 'Dummy Model AUC = %0.2f' % roc_auc)
plt.title('Receiver Operating Characteristic')
plt.legend(loc = 'lower right')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.savefig('roc_curve.png', dpi = 300)
```
#### Check accuracy scores
```
cross_val_accuracy = cross_val_score(clf,
X_train,
y_train,
scoring="accuracy",
cv=5,
n_jobs=1,
verbose=1)
accuracy_mean = np.mean(cross_val_accuracy)
print("Mean Cross validation accuracy:\n{:.3f}\n".format(accuracy_mean))
dummy_cross_val_accuracy = cross_val_score(dummy_clf,
X_train,
y_train,
scoring="accuracy",
cv=5,
n_jobs=1)
dummy_accuracy_mean = np.mean(dummy_cross_val_accuracy)
print("Baseline accuracy:\n{:.3f}\n".format(dummy_accuracy_mean))
accuracy_train = accuracy_score(y_train,
clf.predict(X_train))
print("Train Accuracy:\n{:.3f}\n".format(accuracy_train))
print("Train Error Rate:\n{:.3f}\n".format(1 - accuracy_train))
accuracy_test = accuracy_score(y_test,
clf.predict(X_test))
print("Test Accuracy:\n{:.3f}\n".format(accuracy_test))
print("Test Error Rate:\n{:.3f}\n".format(1 - accuracy_test))
```
### Save the trained model object as a pickle file
```
clf.fit(X_train, y_train)
# trained_model = 'trained_model_20171029.pkl'
trained_model = 'trained_model.pkl'
trained_model_path = os.path.join(models_folder,
trained_model)
joblib.dump(clf, trained_model_path)
```
# Load trained model
```
# Save path to the `models` folder
models_folder = os.path.join(proj_root,
"models")
trained_model = 'trained_model.pkl'
trained_model_path = os.path.join(models_folder,
trained_model)
clf = joblib.load(trained_model_path)
clf
```
# Lift Charts
```
def lift_chart_area_ratio(clf, X, y):
"""
"""
# Create an array of classification thresholds
# ranging from 0 to 1.
thresholds = np.arange(0.0, 1.0001, 0.0001)[np.newaxis, :]
true_actual = (y == 1)[:, np.newaxis]
false_actual = (y != 1)[:, np.newaxis]
predicted_probabilities = clf.predict_proba(X)[:,1][:, np.newaxis]
predicted_true = np.greater(predicted_probabilities, thresholds)
tp = true_actual * predicted_true
fp = false_actual * predicted_true
true_positive_count = np.sum(tp, axis=0)
false_positive_count = np.sum(fp, axis=0)
total = true_positive_count + false_positive_count
# Theoretically best curve
tp_best = np.clip(total, 0, np.max(true_positive_count))
#Calculate area ratio
area_best = np.abs(np.trapz(tp_best, total))
area_model = np.abs(np.trapz(true_positive_count, total))
area_baseline = np.max(total) * np.max(true_positive_count) / 2
area_ratio = (area_model - area_baseline) / \
(area_best - area_baseline)
return area_ratio, true_positive_count, total, tp_best
def plot_lift_chart(total,
true_positive_count,
tp_best,
title,
fname):
"""
"""
plt.plot(total,
true_positive_count,
'r',
label = 'Model Curve')
plt.plot(total,
tp_best,
'b',
label = 'Theoretically Best Curve')
plt.plot([0, np.max(total)],
[0, np.max(true_positive_count)],
'k',
label = 'Baseline Curve' )
plt.title(title)
plt.legend(loc = 'lower right')
plt.xlim(xmin=0)
plt.ylim(ymin=0)
plt.ylabel('True Positives')
plt.xlabel('True Positives + False Positives')
plt.savefig(fname, dpi = 300)
```
#### Train Set Lift Chart Area Ratio
```
area_ratio_train, true_positive_count_train, \
total_train, tp_best_train = \
lift_chart_area_ratio(clf, X_train, y_train)
title = 'Lift Chart - Training Set\n' + \
'(Area Ratio = {:.3f})'.format(area_ratio_train)
# figure file_name
fig_file_name = 'lift_chart_train'
# figure file_path
fig_path = os.path.join(figures_dir,
fig_file_name)
plot_lift_chart(total_train,
true_positive_count_train,
tp_best_train,
title,
fig_path)
print("Area ratio:\t",
"{:.3f}".format(area_ratio_train))
area_ratio_test, true_positive_count_test, \
total_test, tp_best_test = \
lift_chart_area_ratio(clf, X_test, y_test)
title = 'Lift Chart - Test Set\n' + \
'(Area Ratio = {:.3f})'.format(area_ratio_test)
# figure file_name
fig_file_name = 'lift_chart_test'
# figure file_path
fig_path = os.path.join(figures_dir,
fig_file_name)
plot_lift_chart(total_test,
true_positive_count_test,
tp_best_test,
title,
fig_path)
print("Area ratio:\t",
"{:.3f}".format(area_ratio_test))
```
# Part 5 - Intro to Encrypted Programs
Believe it or not, it is possible to compute with encrypted data. In other words, it's possible to run a program where ALL of the variables in the program are encrypted!
In this tutorial, we're going to walk through very basic tools of encrypted computation. In particular, we're going to focus on one popular approach called Secure Multi-Party Computation. In this lesson, we'll learn how to build an encrypted calculator which can perform calculations on encrypted numbers.
Authors:
- Andrew Trask - Twitter: [@iamtrask](https://twitter.com/iamtrask)
References:
- Morten Dahl - [Blog](https://mortendahl.github.io) - Twitter: [@mortendahlcs](https://twitter.com/mortendahlcs)
# Step 1: Encryption Using Secure Multi-Party Computation
SMPC is at first glance a rather strange form of "encryption". Instead of using a public/private key to encrypt a variable, each value is split into multiple "shares", each of which operates like a private key. Typically, these "shares" will be distributed amongst 2 or more "owners". Thus, in order to decrypt the variable, all owners must agree to allow the decryption. In essence, everyone has a private key.
### Encrypt()
So, let's say we wanted to "encrypt" a variable "x". We could do so in the following way.
```
Q = 1234567891011
x = 25
import random
def encrypt(x):
share_a = random.randint(0,Q)
share_b = random.randint(0,Q)
share_c = (x - share_a - share_b) % Q
return (share_a, share_b, share_c)
encrypt(x)
```
As you can see here, we have split our variable "x" into 3 different shares, which could be sent to 3 different owners.
### Decrypt()
If we wanted to decrypt these 3 shares, we could simply sum them together and take the modulus of the result (mod Q).
```
def decrypt(*shares):
return sum(shares) % Q
a,b,c = encrypt(25)
decrypt(a, b, c)
```
Importantly, notice that if we try to decrypt with only two shares, the decryption does not work!
```
decrypt(a, b)
```
Thus, we need all of the owners to participate in order to decrypt the value. It is in this way that the "shares" act like private keys, all of which must be present in order to decrypt a value.
# Step 2: Basic Arithmetic Using SMPC
However, the truly extraordinary property of Secure Multi-Party Computation is the ability to perform computation **while the variables are still encrypted**. Let's demonstrate simple addition below.
```
x = encrypt(25)
y = encrypt(5)
def add(x, y):
z = list()
# the first worker adds their shares together
z.append((x[0] + y[0]) % Q)
# the second worker adds their shares together
z.append((x[1] + y[1]) % Q)
# the third worker adds their shares together
z.append((x[2] + y[2]) % Q)
return z
decrypt(*add(x,y))
```
### Success!!!
And there you have it! If each worker (separately) adds their shares together, then the resulting shares will decrypt to the correct value (25 + 5 == 30).
As it turns out, SMPC protocols exist which can allow this encrypted computation for the following operations:
- addition (which we've just seen)
- multiplication
- comparison
and using these basic underlying primitives, we can perform arbitrary computation!!!
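One of these extra primitives is easy to see with the toy scheme above: multiplying an encrypted value by a *public* constant only requires each owner to scale their own share. (Multiplying two encrypted values together needs a more involved protocol — e.g. the multiplication triples used by SPDZ — which PySyft handles for us.) A minimal sketch using the `encrypt`/`decrypt` helpers defined earlier:
```
def public_mul(x, k):
    # each owner multiplies their own share by the public constant k
    return [(share * k) % Q for share in x]

x = encrypt(25)
decrypt(*public_mul(x, 3))  # returns 75
```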
In the next section, we're going to learn how to use the PySyft library to perform these operations!
# Step 3: SMPC Using PySyft
In the previous sections, we outlined some basic intuitions around how SMPC is supposed to work. However, in practice we don't want to have to hand-write all of the primitive operations ourselves when writing our encrypted programs. So, in this section we're going to walk through the basics of how to do encrypted computation using PySyft. In particular, we're going to focus on how to do the 3 primitives previously mentioned: addition, multiplication, and comparison.
First, we need to create a few Virtual Workers (which hopefully you're now familiar with given our previous tutorials).
```
import syft as sy
hook = sy.TorchHook()
bob = sy.VirtualWorker(id="bob")
alice = sy.VirtualWorker(id="alice")
bill = sy.VirtualWorker(id="bill")
```
### Basic Encryption/Decryption
Encryption is as simple as taking any PySyft tensor and calling .share(). Decryption is as simple as calling .get() on the shared variable
```
x = sy.LongTensor([25])
encrypted_x = x.share(bob, alice, bill)
encrypted_x.get()
```
### Introspecting the Encrypted Values
If we look closer at Bob, Alice, and Bill's workers, we can see the shares that get created!
```
bob._objects
x = sy.LongTensor([25]).share(bob, alice,bill)
bob._objects
# Bob's share
bobs_share = list(bob._objects.values())[0].parent[0]
bobs_share
# Alice's share
alices_share = list(alice._objects.values())[0].parent[0]
alices_share
# Bill's share
bills_share = list(bill._objects.values())[0].parent[0]
bills_share
```
And if we wanted to, we could decrypt these values using the SAME approach we talked about earlier!!!
```
Q = sy.spdz.spdz.field
(bobs_share + alices_share + bills_share) % Q
```
As you can see, when we called .share() it simply split the value into 3 shares and sent one share to each of the parties!
# Encrypted Arithmetic
And now you see that we can perform arithmetic on the underlying values! The API is constructed so that we can simply perform arithmetic like we would with regular PyTorch tensors.
Note: for comparison, it returns boolean outputs (True/False) in the form of Integers (1/0). 1 corresponds to True. 0 corresponds to False.
```
x = sy.LongTensor([25]).share(bob,alice)
y = sy.LongTensor([5]).share(bob,alice)
z = x + y
z.get()
z = x * y
z.get()
z = x > y
z.get()
z = x < y
z.get()
z = x == y
z.get()
z = x == y + 20
z.get()
```
# Congratulations!!! - Time to Join the Community!
Congratulations on completing this notebook tutorial! If you enjoyed this and would like to join the movement toward privacy preserving, decentralized ownership of AI and the AI supply chain (data), you can do so in the following ways!
### Star PySyft on Github
The easiest way to help our community is just by starring the Repos! This helps raise awareness of the cool tools we're building.
- [Star PySyft](https://github.com/OpenMined/PySyft)
### Join our Slack!
The best way to keep up to date on the latest advancements is to join our community! You can do so by filling out the form at [http://slack.openmined.org](http://slack.openmined.org)
### Join a Code Project!
The best way to contribute to our community is to become a code contributor! At any time you can go to PySyft Github Issues page and filter for "Projects". This will show you all the top level Tickets giving an overview of what projects you can join! If you don't want to join a project, but you would like to do a bit of coding, you can also look for more "one off" mini-projects by searching for github issues marked "good first issue".
- [PySyft Projects](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3AProject)
- [Good First Issue Tickets](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
### Donate
If you don't have time to contribute to our codebase, but would still like to lend support, you can also become a Backer on our Open Collective. All donations go toward our web hosting and other community expenses such as hackathons and meetups!
[OpenMined's Open Collective Page](https://opencollective.com/openmined)
# <font color="Red"><h3 align="center">Table of Contents</h3></font>
1. Introduction and Installation
2. DataFrame Basics
3. Read Write Excel CSV File
4. Different Ways Of Creating DataFrame
5. Handle Missing Data: fillna, dropna, interpolate
6. Handle Missing Data: replace function
7. Concat Dataframes
8. Pivot table
9. Pandas Crosstab
10. Read Write Database (SQL) using DataFrame
# <font color="Blue"><h3 align="center">1.Introduction and Installation</h3></font>
```
from IPython.display import Image
Image(filename='pandas.png')
```
> [Pandas](https://pandas.pydata.org/pandas-docs/stable/) is the typical tool a data scientist grabs first. It is based around a lot of the [NumPy package](https://docs.scipy.org/doc/numpy/reference/) so a familiarity with NumPy will help understand how to use Pandas. However, Pandas has a lot of specific extras that can be very useful to a data scientist!
>
>Pandas is also a software library written for the Python programming language for data manipulation and analysis. In particular, it offers data structures and operations for manipulating numerical tables and time series. It is free software released under the three-clause BSD license.
```
!pip install pandas
```
# <font color="Green"><h3 align="center">2.DataFrame Basics</h3></font>
> The DataFrame is the most commonly used object in pandas. It is a table-like data structure containing rows and columns, similar to an Excel spreadsheet.
```
import pandas as pd
weather_data = {
'day': ['1/1/2017','1/2/2017','1/3/2017','1/4/2017','1/5/2017','1/6/2017'],
'temperature': [32,35,28,24,32,31],
'windspeed': [6,7,2,7,4,2],
'event': ['Rain', 'Sunny', 'Snow','Snow','Rain', 'Sunny']
}
df = pd.DataFrame(weather_data)
df
df.shape # rows, columns shape
```
## <font color='blue'>Rows</font>
```
df.head()
df.tail()
df[1:3]
```
## <font color='blue'>Columns</font>
```
df.columns
df['day']
type(df['day'])
df[['day','temperature']]
```
## <font color='blue'>Operations On DataFrame</font>
```
df['temperature'].max()
df.temperature.max()
df[df['temperature']>32]
df['day'][df['temperature'] == df['temperature'].max()] # doing SQL in pandas
df[df['temperature'] == df['temperature'].max()] # doing SQL in pandas
df['temperature'].std()
df['event'].max() # But mean() won't work since data type is string
df.describe()
```
## <font color='blue'>set_index</font>
```
df.set_index('day')
df.set_index('day', inplace=True)
df
df.index
df.loc['1/6/2017']
df.reset_index(inplace=True)
df.head()
df.set_index('event',inplace=True) # this is kind of building a hash map using event as a key
df
df.loc['Snow']
df.reset_index(inplace=True)
df.head()
```
# <font color="TEAL"><h3 align="center">3.Read Write Excle CSV File</h3></font>
### <font color="blue">Write to CSV</color>
```
df.to_csv("new.csv", index=False)
```
### <font color="blue">Read CSV</color>
```
df = pd.read_csv("new.csv")
df
df = pd.read_csv("new.csv", header=None, names = ["ticker","eps","revenue","people"])
df
df = pd.read_csv("new.csv", nrows=5)
df
df.head(2)
```
### <font color="blue">Write to Excel</color>
```
df.to_excel("new.xlsx", sheet_name="weather", index=False, startrow=2)
df = pd.read_excel("new.xlsx",'weather')
df
df_stocks = pd.DataFrame({
'tickers': ['GOOGL', 'WMT', 'MSFT'],
'price': [845, 65, 64 ],
'pe': [30.37, 14.26, 30.97],
'eps': [27.82, 4.61, 2.12]
})
df_weather = pd.DataFrame({
'day': ['1/1/2017','1/2/2017','1/3/2017'],
'temperature': [32,35,28],
'event': ['Rain', 'Sunny', 'Snow']
})
with pd.ExcelWriter('stocks_weather.xlsx') as writer:
df_stocks.to_excel(writer, sheet_name="stocks")
df_weather.to_excel(writer, sheet_name="weather")
```
### <font color="blue">Read Excel</color>
```
df = pd.read_excel("new.xlsx","weather")
df
```
Replacing Excel cell values during import using converter **functions**
```
def convert_people_cell(cell):
if cell=="n.a.":
return 'Sam Walton'
return cell
def convert_price_cell(cell):
if cell=="n.a.":
return 50
return cell
df = pd.read_excel("new.xlsx","weather", converters= {
'people': convert_people_cell,
'price': convert_price_cell
})
df
```
### <font color="blue">Write to JSON</color>
```
df.to_json('new.json')
```
### <font color="blue">Read JSON</color>
```
weather_df = pd.read_json('new.json')
weather_df.head()
```
# <font color="purple"><h3 align="center">4.Different Ways Of Creating Dataframe</h3></font>
## <font color="green">Using csv</h3></font>
```
df = pd.read_csv("weather_data.csv")
df
```
## <font color="green">Using excel</h3></font>
```
df=pd.read_excel("new.xlsx","weather")
df
```
## <font color="green">Using dictionary</h3></font>
```
import pandas as pd
weather_data = {
'day': ['1/1/2017','1/2/2017','1/3/2017'],
'temperature': [32,35,28],
'windspeed': [6,7,2],
'event': ['Rain', 'Sunny', 'Snow']
}
df = pd.DataFrame(weather_data)
df
```
## <font color="green">Using tuples list</h3></font>
```
weather_data = [
('1/1/2017',32,6,'Rain'),
('1/2/2017',35,7,'Sunny'),
('1/3/2017',28,2,'Snow')
]
df = pd.DataFrame(data=weather_data, columns=['day','temperature','windspeed','event'])
df
```
## <font color="green">Using list of dictionaries</h3></font>
```
weather_data = [
{'day': '1/1/2017', 'temperature': 32, 'windspeed': 6, 'event': 'Rain'},
{'day': '1/2/2017', 'temperature': 35, 'windspeed': 7, 'event': 'Sunny'},
{'day': '1/3/2017', 'temperature': 28, 'windspeed': 2, 'event': 'Snow'},
]
df = pd.DataFrame(data=weather_data, columns=['day','temperature','windspeed','event'])
df
```
## <font color="green">Using JSON</h3></font>
```
df.to_json('weather_data.json')
weather_df = pd.read_json('weather_data.json')
weather_df.head()
```
## <font color="maroon"><h4 align="center">5.Handling Missing Data - fillna, interpolate, dropna</font>
```
import pandas as pd
df = pd.read_csv("weather_data.csv",parse_dates=['day'])
type(df.day[0])
df
df.isnull().sum()
df.set_index('day',inplace=True)
df
```
## <font color="blue">fillna</font>
<font color="purple">**Fill all NaN with one specific value**</font>
```
new_df = df.fillna(0)
new_df
```
<font color="purple">**Fill na using column names and dict**</font>
```
new_df = df.fillna({
'temperature': 0,
'windspeed': 0,
'event': 'No Event'
})
new_df
```
<font color="purple">**Use method to determine how to fill na values**</font>
```
new_df = df.fillna(method="ffill")
new_df
new_df = df.fillna(method="bfill")
new_df
```
<font color="purple">**Use of axis**</font>
```
new_df = df.fillna(method="bfill", axis="columns") # axis is either "index" or "columns"
new_df
```
<font color="purple">**limit parameter**</font>
```
new_df = df.fillna(method="ffill",limit=1)
new_df
```
### <font color="blue">interpolate</font>
```
new_df = df.interpolate()
new_df
```
### <font color="blue">dropna</font>
```
new_df = df.dropna()
new_df
new_df = df.dropna(how='all')
new_df
```
### <font color="blue">Inserting Missing Dates</font>
```
dt = pd.date_range("01-01-2017","01-11-2017")
idx = pd.DatetimeIndex(dt)
df = df.reindex(idx)
df
```
## <font color="NAVY"><h4 align="center">6.Handling Missing Data - replace method</font>
**Replacing single value**
```
import numpy as np
new_df = df.replace(-99999, value = np.NaN)
new_df
```
**Replacing per column**
```
new_df = df.replace({
'temperature': -99999,
'windspeed': -99999,
'event': '0'
}, np.nan)
new_df
```
**Replacing by using mapping**
```
new_df = df.replace({
-99999: np.nan,
'no event': 'Sunny',
})
new_df
```
**Replacing list with another list**
```
df = pd.DataFrame({
'score': ['exceptional','average', 'good', 'poor', 'average', 'exceptional'],
'student': ['rob', 'maya', 'parthiv', 'tom', 'julian', 'erica']
})
df
df.replace(['poor', 'average', 'good', 'exceptional'], [1,2,3,4])
```
# <font color="purple"><h3 align="center">7.Pandas Concatenate</h3></font>
## <font color='blue'>Basic Concatenation</font>
```
import pandas as pd
india_weather = pd.DataFrame({
"city": ["mumbai","delhi","banglore"],
"temperature": [32,45,30],
"humidity": [80, 60, 78]
})
india_weather
us_weather = pd.DataFrame({
"city": ["new york","chicago","orlando"],
"temperature": [21,14,35],
"humidity": [68, 65, 75]
})
us_weather
df = pd.concat([india_weather, us_weather])
df
```
## <font color='blue'>Ignore Index</font>
```
df = pd.concat([india_weather, us_weather], ignore_index=True)
df
```
## <font color='blue'>Concatenation And Keys</font>
```
df = pd.concat([india_weather, us_weather], keys=["india", "us"])
df
df.loc["us"]
df.loc["india"]
```
## <font color='blue'>Concatenation Using Index</font>
```
temperature_df = pd.DataFrame({
"city": ["mumbai","delhi","banglore"],
"temperature": [32,45,30],
}, index=[0,1,2])
temperature_df
windspeed_df = pd.DataFrame({
"city": ["delhi","mumbai"],
"windspeed": [7,12],
}, index=[1,0])
windspeed_df
df = pd.concat([temperature_df,windspeed_df],axis=1)
df
```
## <font color='blue'>Concatenate dataframe with series</font>
```
s = pd.Series(["Humid","Dry","Rain"], name="event")
s
df = pd.concat([temperature_df,s],axis=1)
df
```
# <font color="OLIVE"><h3 align="center">8.Pandas Pivot table</h3></font>
<h1 style="color:blue">Pivot basics</h1>
```
import pandas as pd
import numpy as np
df = pd.read_csv("weather.csv")
df
df.pivot(index='city',columns='date')
df.pivot(index='city',columns='date',values="humidity")
df.pivot(index='date',columns='city',values='humidity')
df.pivot(index='humidity',columns='city')
```
<h1 style="color:blue">Pivot Table</h1>
```
df.pivot_table(index="city",columns="date")
```
<h2 style="color:brown">Grouper</h2>
```
df['date'] = pd.to_datetime(df['date'])
df.pivot_table(index=pd.Grouper(freq='M',key='date'),columns='city')
```
# <font color="PURPLE"><h3 align="center">9.Pandas Crosstab </h3></font>
```
import pandas as pd
df = pd.read_excel("survey.xls")
df
pd.crosstab(df.Nationality,df.Handedness)
pd.crosstab(df.Sex,df.Handedness)
```
<h2 style="color:purple">Margins</h2>
```
pd.crosstab(df.Sex,df.Handedness, margins=True)
```
<h2 style="color:purple">Multi Index Column and Rows</h2>
```
pd.crosstab(df.Sex, [df.Handedness,df.Nationality], margins=True)
```
### <font color="OLIVE"><h3 align="center">Read Write Database(Sql) using DataFrame</h3></font>
```
import pandas as pd
Cars = {'Brand': ['Honda Civic','Toyota Corolla','Ford Focus','Audi A4'],
'Price': [22000,25000,27000,35000]
}
df = pd.DataFrame(Cars, columns= ['Brand', 'Price'])
print (df)
import sqlite3
conn = sqlite3.connect('TestDB1.db')
c = conn.cursor()
c.execute('CREATE TABLE IF NOT EXISTS CARS (Brand text, Price number)')
conn.commit()
df.to_sql('CARS', conn, if_exists='replace', index = False)
database = 'TestDB1.db'
conn = sqlite3.connect(database)
tables = pd.read_sql("""SELECT *
FROM sqlite_master
WHERE type='table';""", conn)
print("Conection SuccessFull",conn)
df = pd.read_sql_query("SELECT * FROM CARS", conn)
df
```
# Working with Projections
This section of the tutorial discusses [map projections](https://en.wikipedia.org/wiki/Map_projection). If you don't know what a projection is, or are looking to learn more about how they work in `geoplot`, this page is for you!
I recommend following along with this tutorial interactively using [Binder](https://mybinder.org/v2/gh/ResidentMario/geoplot/master?filepath=notebooks/tutorials/Working_with_Projections.ipynb).
## Projection and unprojection
```
import geopandas as gpd
import geoplot as gplt
%matplotlib inline
# load the example data
contiguous_usa = gpd.read_file(gplt.datasets.get_path('contiguous_usa'))
gplt.polyplot(contiguous_usa)
```
This map is an example of an unprojected plot: it reproduces our coordinates as if they were on a flat Cartesian plane. But remember, the Earth is not a flat surface; it's a sphere. This isn't a map of the United States that you'd see in print anywhere because it badly distorts both of the [two criteria](http://www.geo.hunter.cuny.edu/~jochen/gtech201/lectures/lec6concepts/Map%20coordinate%20systems/How%20to%20choose%20a%20projection.htm) most projections are evaluated on: *shape* and *area*.
For sufficiently small areas, the amount of distortion is very small. This map of New York City, for example, is reasonably accurate:
```
boroughs = gpd.read_file(gplt.datasets.get_path('nyc_boroughs'))
gplt.polyplot(boroughs)
```
But there is a better way: use a **projection**.
A projection is a way of mapping points on the surface of the Earth into two dimensions (like a piece of paper or a computer screen). Because moving from three dimensions to two is intrinsically lossy, no projection is perfect, but some will definitely work better in certain case than others.
The most common projection used for the contiguous United States is the [Albers Equal Area projection](https://en.wikipedia.org/wiki/Albers_projection). This projection works by wrapping the Earth around a cone, one that's particularly well optimized for locations near the middle of the Northern Hemisphere (and particularly poorly for locations at the poles).
To add a projection to a map in `geoplot`, pass a `geoplot.crs` object to the `projection` parameter on the plot. For instance, here's what we get when we try `Albers` out on the contiguous United States:
```
import geoplot.crs as gcrs
gplt.polyplot(contiguous_usa, projection=gcrs.AlbersEqualArea())
```
For a list of projections implemented in `geoplot`, refer to [the projections reference](https://scitools.org.uk/cartopy/docs/latest/crs/projections.html) in the `cartopy` documentation (`cartopy` is the library `geoplot` relies on for its projections).
## Stacking projected plots
A key feature of `geoplot` is the ability to stack plots on top of one another.
```
cities = gpd.read_file(gplt.datasets.get_path('usa_cities'))
ax = gplt.polyplot(
contiguous_usa,
projection=gcrs.AlbersEqualArea()
)
gplt.pointplot(cities, ax=ax)
```
By default, `geoplot` will set the [extent](Customizing_Plots.ipynb#extent) (the area covered by the plot) to the [total_bounds](https://geopandas.org/reference.html#geopandas.GeoSeries.total_bounds) of the last plot stacked onto the map.
However, suppose that even though we have data for the entire United States (plus Puerto Rico), we actually want to display just data for the contiguous United States. An easy way to get this is setting the `extent` parameter using `total_bounds`.
```
ax = gplt.polyplot(
contiguous_usa,
projection=gcrs.AlbersEqualArea()
)
gplt.pointplot(cities, ax=ax, extent=contiguous_usa.total_bounds)
```
The section of the tutorial on [Customizing Plots](Customizing_Plots.ipynb#Extent) explains the `extent` parameter in more detail.
## Projections on subplots
It is possible to compose multiple axes together into a single panel figure in `matplotlib` using the `subplots` feature. This feature is highly useful for creating side-by-side comparisons of your plots, or for stacking your plots together into a single more informative display.
```
import matplotlib.pyplot as plt
import geoplot as gplt
f, axarr = plt.subplots(1, 2, figsize=(12, 4))
gplt.polyplot(contiguous_usa, ax=axarr[0])
gplt.polyplot(contiguous_usa, ax=axarr[1])
```
`matplotlib` supports subplotting projected maps using the `projection` argument to `subplot_kw`.
```
proj = gcrs.AlbersEqualArea(central_longitude=-98, central_latitude=39.5)
f, axarr = plt.subplots(1, 2, figsize=(12, 4), subplot_kw={
'projection': proj
})
gplt.polyplot(contiguous_usa, projection=proj, ax=axarr[0])
gplt.polyplot(contiguous_usa, projection=proj, ax=axarr[1])
```
The [Gallery](../gallery/index.rst) includes several demos, like the [Pointplot Scale Functions](../gallery/plot_usa_city_elevations.rst) demo, that use this feature to good effect.
Notice that in this code sample we specified some additional parameters for our projection. The `central_longitude=-98` and `central_latitude=39.5` parameters set the "center point" around which the points and shapes on the map are reprojected (in this case we use the [geographic center of the contiguous United States](https://en.wikipedia.org/wiki/Geographic_center_of_the_contiguous_United_States)).
When you pass a projection to a `geoplot` function, `geoplot` will infer these values for you. But when passing the projection directly to `matplotlib` you must set them yourself.
# Overview
This colab demonstrates the steps to use the DeepLab model to perform semantic segmentation on a sample input image. Expected outputs are semantic labels overlayed on the sample image.
### About DeepLab
The models used in this colab perform semantic segmentation. Semantic segmentation models focus on assigning semantic labels, such as sky, person, or car, to every pixel of an image, covering both discrete objects and background regions.
# Instructions
<h3><a href="https://cloud.google.com/tpu/"><img valign="middle" src="https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/tpu-hexagon.png" width="50"></a> Use a free TPU device</h3>
1. On the main menu, click Runtime and select **Change runtime type**. Set "TPU" as the hardware accelerator.
1. Click Runtime again and select **Runtime > Run All**. You can also run the cells manually with Shift-ENTER.
## Import Libraries
```
import os
from io import BytesIO
import tarfile
import tempfile
from six.moves import urllib
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
from PIL import Image
import tensorflow as tf
```
## Import helper methods
These methods help us perform the following tasks:
* Load the latest version of the pretrained DeepLab model
* Load the colormap from the PASCAL VOC dataset
* Add colors to various labels, such as "pink" for people, "green" for bicycle and more
* Visualize an image, and add an overlay of colors on various regions
```
class DeepLabModel(object):
"""Class to load deeplab model and run inference."""
INPUT_TENSOR_NAME = 'ImageTensor:0'
OUTPUT_TENSOR_NAME = 'SemanticPredictions:0'
INPUT_SIZE = 513
FROZEN_GRAPH_NAME = 'frozen_inference_graph'
def __init__(self, tarball_path):
"""Creates and loads pretrained deeplab model."""
self.graph = tf.Graph()
graph_def = None
# Extract frozen graph from tar archive.
tar_file = tarfile.open(tarball_path)
for tar_info in tar_file.getmembers():
if self.FROZEN_GRAPH_NAME in os.path.basename(tar_info.name):
file_handle = tar_file.extractfile(tar_info)
graph_def = tf.GraphDef.FromString(file_handle.read())
break
tar_file.close()
if graph_def is None:
raise RuntimeError('Cannot find inference graph in tar archive.')
with self.graph.as_default():
tf.import_graph_def(graph_def, name='')
self.sess = tf.Session(graph=self.graph)
def run(self, image):
"""Runs inference on a single image.
Args:
image: A PIL.Image object, raw input image.
Returns:
resized_image: RGB image resized from original input image.
seg_map: Segmentation map of `resized_image`.
"""
width, height = image.size
resize_ratio = 1.0 * self.INPUT_SIZE / max(width, height)
target_size = (int(resize_ratio * width), int(resize_ratio * height))
resized_image = image.convert('RGB').resize(target_size, Image.ANTIALIAS)
batch_seg_map = self.sess.run(
self.OUTPUT_TENSOR_NAME,
feed_dict={self.INPUT_TENSOR_NAME: [np.asarray(resized_image)]})
seg_map = batch_seg_map[0]
return resized_image, seg_map
def create_pascal_label_colormap():
"""Creates a label colormap used in PASCAL VOC segmentation benchmark.
Returns:
A Colormap for visualizing segmentation results.
"""
colormap = np.zeros((256, 3), dtype=int)
ind = np.arange(256, dtype=int)
for shift in reversed(range(8)):
for channel in range(3):
colormap[:, channel] |= ((ind >> channel) & 1) << shift
ind >>= 3
return colormap
def label_to_color_image(label):
"""Adds color defined by the dataset colormap to the label.
Args:
label: A 2D array with integer type, storing the segmentation label.
Returns:
result: A 2D array with floating type. The element of the array
is the color indexed by the corresponding element in the input label
to the PASCAL color map.
Raises:
ValueError: If label is not of rank 2 or its value is larger than color
map maximum entry.
"""
if label.ndim != 2:
raise ValueError('Expect 2-D input label')
colormap = create_pascal_label_colormap()
if np.max(label) >= len(colormap):
raise ValueError('label value too large.')
return colormap[label]
def vis_segmentation(image, seg_map):
"""Visualizes input image, segmentation map and overlay view."""
plt.figure(figsize=(15, 5))
grid_spec = gridspec.GridSpec(1, 4, width_ratios=[6, 6, 6, 1])
plt.subplot(grid_spec[0])
plt.imshow(image)
plt.axis('off')
plt.title('input image')
plt.subplot(grid_spec[1])
seg_image = label_to_color_image(seg_map).astype(np.uint8)
plt.imshow(seg_image)
plt.axis('off')
plt.title('segmentation map')
plt.subplot(grid_spec[2])
plt.imshow(image)
plt.imshow(seg_image, alpha=0.7)
plt.axis('off')
plt.title('segmentation overlay')
unique_labels = np.unique(seg_map)
ax = plt.subplot(grid_spec[3])
plt.imshow(
FULL_COLOR_MAP[unique_labels].astype(np.uint8), interpolation='nearest')
ax.yaxis.tick_right()
plt.yticks(range(len(unique_labels)), LABEL_NAMES[unique_labels])
plt.xticks([], [])
ax.tick_params(width=0.0)
plt.grid('off')
plt.show()
LABEL_NAMES = np.asarray([
'background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus',
'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike',
'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tv'
])
FULL_LABEL_MAP = np.arange(len(LABEL_NAMES)).reshape(len(LABEL_NAMES), 1)
FULL_COLOR_MAP = label_to_color_image(FULL_LABEL_MAP)
```
## Select a pretrained model
We have trained the DeepLab model using various backbone networks. Select one from the MODEL_NAME list.
```
MODEL_NAME = 'mobilenetv2_coco_voctrainaug' # @param ['mobilenetv2_coco_voctrainaug', 'mobilenetv2_coco_voctrainval', 'xception_coco_voctrainaug', 'xception_coco_voctrainval']
_DOWNLOAD_URL_PREFIX = 'http://download.tensorflow.org/models/'
_MODEL_URLS = {
'mobilenetv2_coco_voctrainaug':
'deeplabv3_mnv2_pascal_train_aug_2018_01_29.tar.gz',
'mobilenetv2_coco_voctrainval':
'deeplabv3_mnv2_pascal_trainval_2018_01_29.tar.gz',
'xception_coco_voctrainaug':
'deeplabv3_pascal_train_aug_2018_01_04.tar.gz',
'xception_coco_voctrainval':
'deeplabv3_pascal_trainval_2018_01_04.tar.gz',
}
_TARBALL_NAME = 'deeplab_model.tar.gz'
model_dir = tempfile.mkdtemp()
tf.gfile.MakeDirs(model_dir)
download_path = os.path.join(model_dir, _TARBALL_NAME)
print('downloading model, this might take a while...')
urllib.request.urlretrieve(_DOWNLOAD_URL_PREFIX + _MODEL_URLS[MODEL_NAME],
download_path)
print('download completed! loading DeepLab model...')
MODEL = DeepLabModel(download_path)
print('model loaded successfully!')
```
## Run on sample images
Select one of the sample images (leave `IMAGE_URL` empty) or provide any internet image URL for inference.
Note that this colab uses single scale inference for fast computation,
so the results may slightly differ from the visualizations in the
[README](https://github.com/tensorflow/models/blob/master/research/deeplab/README.md) file,
which uses multi-scale and left-right flipped inputs.
```
SAMPLE_IMAGE = 'image1' # @param ['image1', 'image2', 'image3']
IMAGE_URL = '' #@param {type:"string"}
_SAMPLE_URL = ('https://github.com/tensorflow/models/blob/master/research/'
'deeplab/g3doc/img/%s.jpg?raw=true')
def run_visualization(url):
"""Inferences DeepLab model and visualizes result."""
try:
f = urllib.request.urlopen(url)
jpeg_str = f.read()
original_im = Image.open(BytesIO(jpeg_str))
except IOError:
print('Cannot retrieve image. Please check url: ' + url)
return
print('running deeplab on image %s...' % url)
resized_im, seg_map = MODEL.run(original_im)
vis_segmentation(resized_im, seg_map)
image_url = IMAGE_URL or _SAMPLE_URL % SAMPLE_IMAGE
run_visualization(image_url)
```
## What's next
* Learn about [Cloud TPUs](https://cloud.google.com/tpu/docs) that Google designed and optimized specifically to speed up and scale up ML workloads for training and inference and to enable ML engineers and researchers to iterate more quickly.
* Explore the range of [Cloud TPU tutorials and Colabs](https://cloud.google.com/tpu/docs/tutorials) to find other examples that can be used when implementing your ML project.
* For more information on running the DeepLab model on Cloud TPUs, see the [DeepLab tutorial](https://cloud.google.com/tpu/docs/tutorials/deeplab).
## Environment:
Open AI gym [CartPole v0](https://github.com/openai/gym/wiki/CartPole-v0)
### Observation
Type: Box(4)
| Num | Observation | Min | Max |
| ---- | -------------------- | -------- | ------- |
| 0 | Cart Position | -2.4 | 2.4 |
| 1 | Cart Velocity | -Inf | Inf |
| 2    | Pole Angle           | ~ -0.418 rad (~ -24°) | ~ 0.418 rad (~ 24°) |
| 3 | Pole Velocity At Tip | -Inf | Inf |
### Actions
Type: Discrete(2)
| Num | Action |
| ---- | ---------------------- |
| 0 | Push cart to the left |
| 1 | Push cart to the right |
Note: The amount the velocity is reduced or increased is not fixed as it depends on the angle the pole is pointing. This is because the center of gravity of the pole increases the amount of energy needed to move the cart underneath it
### Reward
Reward is 1 for every step taken, including the termination step
### Starting State
All observations are assigned a uniform random value between ±0.05
### Episode Termination
1. Pole Angle is more than ±12°
2. Cart Position is more than ±2.4 (center of the cart reaches the edge of the display)
3. Episode length is greater than 200
### Solved Requirements
Considered solved when the average reward is greater than or equal to 195.0 over 100 consecutive trials
## 1. Gym environment setup
```
import gym
import numpy as np
import matplotlib.pyplot as plt
env = gym.make("CartPole-v0")
env.reset()
```
## 2. Q Table setup
```
LEARNING_RATE = 0.5
DISCOUNT = 0.95
EPISODES = 50000
SHOW_EVERY = 1000
Q_TABLE_LEN = 150
def sigmoid(x):
return 1 / (1 + np.exp(-x))
DISCRETE_OS_SIZE = [Q_TABLE_LEN] * (len(env.observation_space.high))
observation_high = np.array([env.observation_space.high[0],
Q_TABLE_LEN*sigmoid(env.observation_space.high[1]),
env.observation_space.high[2],
Q_TABLE_LEN*sigmoid(env.observation_space.high[3])])
observation_low = np.array([env.observation_space.low[0],
Q_TABLE_LEN*sigmoid(env.observation_space.low[1]),
env.observation_space.low[2],
Q_TABLE_LEN*sigmoid(env.observation_space.low[3])])
discrete_os_win_size = (observation_high - observation_low) / DISCRETE_OS_SIZE
# q_table = np.random.uniform(low=0, high=1,
# size=(DISCRETE_OS_SIZE + [env.action_space.n]))
q_table = np.zeros((DISCRETE_OS_SIZE + [env.action_space.n]))
q_table.shape
```
### Decay epsilon
```
epsilon = 1  # not a constant, going to be decayed
START_EPSILON_DECAYING = 1
END_EPSILON_DECAYING = EPISODES//2
epsilon_decay_value = epsilon/(END_EPSILON_DECAYING - START_EPSILON_DECAYING)
```
## 3. Help functions
```
def get_discrete_state (state):
discrete_state = (state - observation_low) // discrete_os_win_size
return tuple(discrete_state.astype(int))
def take_epsilon_greedy_action(state, epsilon):
discrete_state = get_discrete_state(state)
if np.random.random() < epsilon:
action = np.random.randint(0,env.action_space.n)
else:
action = np.argmax(q_table[discrete_state])
return action
```
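As a quick sanity check, we can map a raw observation to its Q-table indices and sample an action from the epsilon-greedy policy (a short sketch using the helpers defined above):
```
# Map a raw observation to discrete Q-table indices and pick an action
state = env.reset()
print("raw state:     ", state)
print("discrete state:", get_discrete_state(state))
print("action:        ", take_epsilon_greedy_action(state, epsilon))
```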
## 4. Rewards Recorder setup
```
ep_rewards = []
aggr_ep_rewards = {'ep':[],'avg':[],'min':[],'max':[]}
```
## 5. Train the Agent
```
for episode in range(EPISODES):
# initiate reward every episode
ep_reward = 0
if episode % SHOW_EVERY == 0:
print("episode: {}".format(episode))
render = True
else:
render = False
state = env.reset()
done = False
while not done:
        action = take_epsilon_greedy_action(state, epsilon)
next_state, reward, done, _ = env.step(action)
ep_reward += reward
# if render:
# env.render()
if not done:
td_target = reward + DISCOUNT * np.max(q_table[get_discrete_state(next_state)])
q_table[get_discrete_state(state)][action] += LEARNING_RATE * (td_target - q_table[get_discrete_state(state)][action])
state = next_state
# Decaying is being done every episode if episode number is within decaying range
if END_EPSILON_DECAYING >= episode >= START_EPSILON_DECAYING:
epsilon -= epsilon_decay_value
    # record aggregated rewards for each episode
ep_rewards.append(ep_reward)
    # every SHOW_EVERY episodes, calculate the average reward
if episode % SHOW_EVERY == 0:
avg_reward = sum(ep_rewards[-SHOW_EVERY:]) / len(ep_rewards[-SHOW_EVERY:])
aggr_ep_rewards['ep'].append(episode)
aggr_ep_rewards['avg'].append(avg_reward)
aggr_ep_rewards['min'].append(min(ep_rewards[-SHOW_EVERY:]))
aggr_ep_rewards['max'].append(max(ep_rewards[-SHOW_EVERY:]))
plt.plot(aggr_ep_rewards['ep'], aggr_ep_rewards['avg'], label = 'avg')
plt.plot(aggr_ep_rewards['ep'], aggr_ep_rewards['min'], label = 'min')
plt.plot(aggr_ep_rewards['ep'], aggr_ep_rewards['max'], label = 'max')
plt.legend(loc='upper left')
plt.xlabel('Episodes')
plt.ylabel('Rewards')
```
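Since CartPole-v0 counts as solved at an average reward of 195.0 over 100 consecutive episodes, we can scan the recorded `ep_rewards` for such a window (a simple sketch):
```
# Find the first window of 100 consecutive episodes with average reward >= 195
solved_at = None
for i in range(len(ep_rewards) - 99):
    if sum(ep_rewards[i:i + 100]) / 100 >= 195.0:
        solved_at = i + 100
        break
print("Solved after {} episodes".format(solved_at) if solved_at else "Not solved yet")
```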
### 6. Rendering Test
```
done = False
state = env.reset()
while not done:
action = np.argmax(q_table[get_discrete_state(state)])
next_state, _, done, _ = env.step(action)
state = next_state
env.render()
env.close()
```
# Exercise 11 - Trading Environment Setup
22.01.2022, Thomas Iten
**Content**
0. Setup
1. Load S&P 500 Dataset
2. Define Trading Environment
3. Create Trading Environment and visualize some state values
4. Test some random actions and visualize the rewards
## 0. Setup
```
import random
import numpy as np
import gym
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
from dataset import SP500DataSet
from env import TradingEnv
```
## 1. Load S&P 500 Dataset
```
ds = SP500DataSet()
df_train, df_test = ds.get_train_test()
```
### Head of train data
```
df_train.head()
```
### Head of test data
```
df_test.head()
```
## 2. Define Trading Environment
```
class TradingEnv(gym.Env):
"""The S&P 500 Trading Environment."""
INITIAL_IDX = 0
INITIAL_CASH = 10_000
INITIAL_PORTFOLIO_VALUE = 0
ACTIONS = ["sell", "hold", "buy"]
def __init__(self, df_train, df_test, play=False):
# df and starting index
self.df = df_test if play else df_train
self.current_idx = TradingEnv.INITIAL_IDX
# cash and portfolio
self.cash = TradingEnv.INITIAL_CASH
self.portfolio_value = TradingEnv.INITIAL_PORTFOLIO_VALUE
# target stocks and stock values
self.stocks = ['AAPL', 'MSFT', 'AMZN', 'NFLX', 'XOM', 'JPM', 'T'] # target stocks
self.stock_values = np.zeros(len(self.stocks))
# number, states and rewards
self.n = len(self.df)
self.states = self.df.loc[:, ~self.df.columns.isin(self.stocks)].to_numpy()
self.rewards = self.df[self.stocks].to_numpy()
# last step data
self.last_step = None
def reset(self):
self.current_idx = TradingEnv.INITIAL_IDX
self.cash = TradingEnv.INITIAL_CASH
self.portfolio_value = TradingEnv.INITIAL_PORTFOLIO_VALUE
self.stock_values = np.zeros(len(self.stocks))
state = self.states[self.current_idx]
state = np.array(state).reshape(1, -1)
self.last_step = None
return state
def step(self, action):
"""
        Run the given action, take a step forward, and return the next state along with the reward and done flag.
The actions calculates the difference between the mean value of the next states and the current states.
The reward is then calculated according the following table:
Action Difference Rise Reward
sell positive True -10
sell negative False +10
buy positive True +20
buy negative False -10
hold n/a n/a 0
:param action: ["sell", "hold", "buy"]
:return: next_state, reward, done
"""
# check valid state
if self.current_idx >= self.n:
raise Exception("Episode already done")
# check valid actions
if action not in TradingEnv.ACTIONS:
raise Exception("Invalid action: " + action)
# apply action and calculate mean values before and after
mean = np.mean(self.states[self.current_idx])
self.current_idx += 1 # apply action
next_mean = np.mean(self.states[self.current_idx])
# calculate done
done = (self.current_idx == self.n - 1)
if done:
next_state = None
reward = 0
else:
# calculate reward
reward = 0
rise = (next_mean - mean) > 0
if action == "sell":
reward = -10 if rise else +10
elif action == "buy":
reward = +20 if rise else -10
# calculate next step
next_state = self.states[self.current_idx]
next_state = np.array(next_state).reshape(1, -1)
# save last step data
self.last_step = {"action": action, "rise": rise, "reward": reward, "done": done}
# return results
return next_state, reward, done
def render(self):
# Currently we just render the data of the last step
print(self.last_step["action"] + ":",
"vaules rised=" + str(self.last_step["rise"]),
"reward=" + str(self.last_step["reward"]),
"done=" + str(self.last_step["done"]))
def render_state_mean_values(self, start=0, n=None):
means = []
steps = []
stop = len(self.states) if n is None else n
for i in range(start, stop):
mean = np.mean(self.states[i])
means.append(mean)
steps.append(i)
self.plot(steps, means, title="State values", xlabel="Step", ylabel="Mean")
def plot(self, x, y, title="", xlabel="", ylabel=""):
"""Simple plot function.
Further details see: https://jakevdp.github.io/PythonDataScienceHandbook/04.01-simple-line-plots.html
"""
fig = plt.figure()
ax = plt.axes()
ax.plot(x, y);
plt.title(title)
plt.xlabel(xlabel)
plt.ylabel(ylabel)
plt.show()
```
## 3. Create Trading Environment and visualize some state values
### Create and reset
```
env = TradingEnv(df_train, df_test)
env.reset()
```
### Visualize the first 100 state mean values
```
env.render_state_mean_values(n=100)
```
## 4. Test some random actions and visualize the rewards
### Test some random actions
```
# test some actions
n=24
rewards = []
print("Actions:")
for _ in range(n):
action = TradingEnv.ACTIONS[random.randint(0,2)]
next_state, reward, done = env.step(action)
rewards.append(reward)
env.render()
```
### Visualize the rewards
```
env.plot(range(0,n), rewards, title="Rewards", xlabel="Step", ylabel="Reward")
```
---
__The end.__
```
from IPython.display import Markdown as md
### change to reflect your notebook
_nb_loc = "05_create_dataset/05_audio.ipynb"
_nb_title = "Vision ML on Audio, Video, Text, etc."
### no need to change any of this
_nb_safeloc = _nb_loc.replace('/', '%2F')
_nb_safetitle = _nb_title.replace(' ', '+')
md("""
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://console.cloud.google.com/ai-platform/notebooks/deploy-notebook?name={1}&url=https%3A%2F%2Fgithub.com%2FGoogleCloudPlatform%2Fpractical-ml-vision-book%2Fblob%2Fmaster%2F{2}&download_url=https%3A%2F%2Fgithub.com%2FGoogleCloudPlatform%2Fpractical-ml-vision-book%2Fraw%2Fmaster%2F{2}">
<img src="https://raw.githubusercontent.com/GoogleCloudPlatform/practical-ml-vision-book/master/logo-cloud.png"/> Run in AI Platform Notebook</a>
</td>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/GoogleCloudPlatform/practical-ml-vision-book/blob/master/{0}">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/GoogleCloudPlatform/practical-ml-vision-book/blob/master/{0}">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://raw.githubusercontent.com/GoogleCloudPlatform/practical-ml-vision-book/master/{0}">
<img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
""".format(_nb_loc, _nb_safetitle, _nb_safeloc))
```
# Vision ML on Audio, Video, Text, etc.
This notebook shows you how to use the spectrogram
of an audio file as a grayscale image input to an
ML model.
```
!gsutil cp gs://ml-design-patterns/audio_train/00353774.wav cello.wav
!gsutil cp gs://ml-design-patterns/audio_train/001ca53d.wav sax.wav
import matplotlib.pyplot as plt
from scipy import signal
from scipy.io import wavfile
import numpy as np
fig, ax = plt.subplots(2, 2, figsize=(15, 10))
for idx, instr in enumerate(['sax', 'cello']):
sample_rate, samples = wavfile.read(instr + '.wav')
ax[idx][0].plot(samples)
_, _, spectro = signal.spectrogram(samples, sample_rate)
img = np.log(spectro)
ax[idx][1].imshow(img, cmap='gray', aspect='auto');
ax[idx][1].set_title(instr)
print(img.shape)
```
## Vision ML on video
Video consists of frames, each of which is an image.
```
!curl -O https://www.engr.colostate.edu/me/facil/dynamics/files/flame.avi
## Frame-by-frame
import cv2
import numpy as np
import matplotlib.pyplot as plt
cap = cv2.VideoCapture('flame.avi')
num_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
print(num_frames)
fig, ax = plt.subplots(1, 4, figsize=(20, 10))
for i in range(num_frames):
ret, frame = cap.read()
if ret:
img = np.asarray(frame)
if i%30 == 0:
ax[i//30].imshow(img)
## Rolling average of 25 frames at a time
def rolling_average(cap, N):
img = None;
n = 0
for i in range(N):
ret, frame = cap.read()
if ret:
frame = np.asarray(frame)
if n > 0:
img = frame + img
else:
img = frame
n += 1
if n > 0:
return img / n
return img
cap = cv2.VideoCapture('flame.avi')
fig, ax = plt.subplots(1, 4, figsize=(20, 10))
for i in range(4):
img = rolling_average(cap, 25)
ax[i].imshow(img)
# read into a 4D shape
import tensorflow as tf
def read_video(filename):
cap = cv2.VideoCapture(filename)
num_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
frames = []
for i in range(num_frames):
ret, frame = cap.read()
if ret:
frames.append(np.asarray(frame))
return tf.convert_to_tensor(frames)
img4d = read_video('flame.avi')
print(img4d.shape)
```
## Text
We can break down a paragraph into sentences.
And we can do sentence-embedding to get a numeric representation of each sentence.
A paragraph now becomes an image!
```
import tensorflow_hub as hub
paragraph = """
Siddhartha gave his clothes to a poor Brahmin on the road and
only retained his loincloth and earth-colored unstitched cloak.
He only ate once a day and never cooked food. He fasted fourteen
days. He fasted twenty-eight days. The flesh disappeared from
his legs and cheeks. Strange dreams were reflected in his enlarged
eyes. The nails grew long on his thin fingers and a dry, bristly
beard appeared on his chin. His glance became icy when he
encountered women; his lips curled with contempt when he passed
through a town of well-dressed people. He saw businessmen trading,
princes going to the hunt, mourners weeping over their dead,
prostitutes offering themselves, doctors attending the sick,
priests deciding the day for sowing, lovers making love, mothers
soothing their children -and all were not worth a passing glance,
everything lied, stank of lies; they were all illusions of sense,
happiness and beauty. All were doomed to decay. The world tasted
bitter. Life was pain.
"""
print(paragraph.split('.'))
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")
embeddings = embed(paragraph.split('.'))
import matplotlib.pyplot as plt
plt.figure(figsize=(5,10))
plt.imshow(embeddings.numpy(), aspect=25.0, cmap='gray');
```
Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
# Example: CanvasXpress violin Chart No. 14
This example page demonstrates how to use the Python package to create a chart that matches the CanvasXpress online example located at:
https://www.canvasxpress.org/examples/violin-14.html
This example is generated using the reproducible JSON obtained from the above page and the `canvasxpress.util.generator.generate_canvasxpress_code_from_json_file()` function.
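For reference, that generation step might look roughly like the sketch below; the local JSON file name is an assumption, so consult the package documentation for the exact call signature:
```
from canvasxpress.util.generator import generate_canvasxpress_code_from_json_file

# Assumes the reproducible JSON from the example page was saved locally as violin-14.json
print(generate_canvasxpress_code_from_json_file("violin-14.json"))
```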
Everything required for the chart to render is included in the code below. Simply run the code block.
```
from canvasxpress.canvas import CanvasXpress
from canvasxpress.js.collection import CXEvents
from canvasxpress.render.jupyter import CXNoteBook
cx = CanvasXpress(
render_to="violin14",
data={
"y": {
"smps": [
"Var1",
"Var2",
"Var3",
"Var4",
"Var5",
"Var6",
"Var7",
"Var8",
"Var9",
"Var10",
"Var11",
"Var12",
"Var13",
"Var14",
"Var15",
"Var16",
"Var17",
"Var18",
"Var19",
"Var20",
"Var21",
"Var22",
"Var23",
"Var24",
"Var25",
"Var26",
"Var27",
"Var28",
"Var29",
"Var30",
"Var31",
"Var32",
"Var33",
"Var34",
"Var35",
"Var36",
"Var37",
"Var38",
"Var39",
"Var40",
"Var41",
"Var42",
"Var43",
"Var44",
"Var45",
"Var46",
"Var47",
"Var48",
"Var49",
"Var50",
"Var51",
"Var52",
"Var53",
"Var54",
"Var55",
"Var56",
"Var57",
"Var58",
"Var59",
"Var60"
],
"data": [
[
4.2,
11.5,
7.3,
5.8,
6.4,
10,
11.2,
11.2,
5.2,
7,
16.5,
16.5,
15.2,
17.3,
22.5,
17.3,
13.6,
14.5,
18.8,
15.5,
23.6,
18.5,
33.9,
25.5,
26.4,
32.5,
26.7,
21.5,
23.3,
29.5,
15.2,
21.5,
17.6,
9.7,
14.5,
10,
8.2,
9.4,
16.5,
9.7,
19.7,
23.3,
23.6,
26.4,
20,
25.2,
25.8,
21.2,
14.5,
27.3,
25.5,
26.4,
22.4,
24.5,
24.8,
30.9,
26.4,
27.3,
29.4,
23
]
],
"vars": [
"len"
]
},
"x": {
"supp": [
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"VC",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ",
"OJ"
],
"order": [
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10
],
"dose": [
0.5,
0.5,
0.5,
0.5,
0.5,
0.5,
0.5,
0.5,
0.5,
0.5,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2,
0.5,
0.5,
0.5,
0.5,
0.5,
0.5,
0.5,
0.5,
0.5,
0.5,
1,
1,
1,
1,
1,
1,
1,
1,
1,
1,
2,
2,
2,
2,
2,
2,
2,
2,
2,
2
]
}
},
config={
"axisAlgorithm": "rPretty",
"axisTickScaleFontFactor": 1.8,
"axisTitleFontStyle": "bold",
"axisTitleScaleFontFactor": 1.8,
"background": "white",
"backgroundType": "window",
"backgroundWindow": "#E5E5E5",
"colorBy": "dose",
"colorScheme": "GGPlot",
"graphOrientation": "vertical",
"graphType": "Boxplot",
"groupingFactors": [
"dose",
"supp"
],
"guides": "solid",
"guidesColor": "white",
"legendScaleFontFactor": 1.8,
"showBoxplotIfViolin": True,
"showLegend": True,
"showViolinBoxplot": True,
"smpLabelRotate": 90,
"smpLabelScaleFontFactor": 1.8,
"smpTitle": "dose",
"smpTitleFontStyle": "bold",
"smpTitleScaleFontFactor": 1.8,
"stringSampleFactors": [
"dose"
],
"title": "The Effect of Vitamin C on Tooth Growth in Guinea Pigs",
"xAxis2Show": False,
"xAxisMinorTicks": False,
"xAxisTickColor": "white",
"xAxisTitle": "len"
},
width=613,
height=613,
events=CXEvents(),
after_render=[
[
"switchNumericToString",
[
"dose",
True
]
]
],
other_init_params={
"version": 35,
"events": False,
"info": False,
"afterRenderInit": False,
"noValidate": True
}
)
display = CXNoteBook(cx)
display.render(output_file="violin_14.html")
```
<a href="https://colab.research.google.com/github/IEwaspbusters/KopuruVespaCompetitionIE/blob/main/Competition_subs/2021-04-28_submit/batch_LARVAE/HEX.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# XGBoost Years: Prediction with Cluster Variables and selected Weather Variables (according to Feature importance)
## Import the Data & Modules
```
# Base packages -----------------------------------
import pandas as pd
import numpy as np
# Data Viz -----------------------------------
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (15, 10) # to set figure size when ploting feature_importance
# XGBoost -------------------------------
import xgboost as xgb
from xgboost import XGBRegressor
from xgboost import plot_importance # built-in function to plot features ordered by their importance
# SKLearn -----------------------------------------
from sklearn import preprocessing # scaling data
#Cluster
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from plotnine import *
# Function that checks if final Output is ready for submission or needs revision
def check_data(HEX):
def template_checker(HEX):
submission_df = (HEX["CODIGO MUNICIPIO"].astype("string")+HEX["NOMBRE MUNICIPIO"]).sort_values().reset_index(drop=True)
template_df = (template["CODIGO MUNICIPIO"].astype("string")+template["NOMBRE MUNICIPIO"]).sort_values().reset_index(drop=True)
check_df = pd.DataFrame({"submission_df":submission_df,"template_df":template_df})
check_df["check"] = check_df.submission_df == check_df.template_df
if (check_df.check == False).any():
pd.options.display.max_rows = 112
return check_df.loc[check_df.check == False,:]
else:
return "All Municipality Names and Codes to be submitted match the Template"
print("Submission form Shape is", HEX.shape)
print("Number of Municipalities is", HEX["CODIGO MUNICIPIO"].nunique())
print("The Total 2020 Nests' Prediction is", int(HEX["NIDOS 2020"].sum()))
assert HEX.shape == (112, 3), "Error: Shape is incorrect."
assert HEX["CODIGO MUNICIPIO"].nunique() == 112, "Error: Number of unique municipalities is correct."
return template_checker(HEX)
# Importing datasets from GitHub as Pandas Dataframes
queen_train = pd.read_csv("../Feeder_years/WBds03_QUEENtrainYEARS.csv", encoding="utf-8") #2018+2019 test df
queen_predict = pd.read_csv("../Feeder_years/WBds03_QUEENpredictYEARS.csv", encoding="utf-8") #2020 prediction df
template = pd.read_csv("../../../Input_open_data/ds01_PLANTILLA-RETO-AVISPAS-KOPURU.csv",sep=";", encoding="utf-8")
den_com = pd.read_excel("../../../Other_open_data/densidad comercial.xlsx")
```
## Further Clean the Data
```
# Remove the Municipalities to which we did not assign a Cluster, since there was no reliable data for us to predict
queen_train = queen_train.loc[~queen_train.municip_code.isin([48071, 48074, 48022, 48088, 48051, 48020]),:]
queen_predict = queen_predict.loc[~queen_predict.municip_code.isin([48071, 48074, 48022, 48088, 48051, 48020]),:]
```
# New queen Train dataset
```
den_com_18= den_com.loc[:,['Código municipio','2018']]
den_com_18.rename({'2018': 'dens_com','Código municipio':'municip_code'}, axis=1, inplace=True)
den_com_18['year_offset']='2018'
den_com_17= den_com.loc[:,['Código municipio','2017']]
den_com_17.rename({'2017': 'dens_com','Código municipio':'municip_code'}, axis=1, inplace=True)
den_com_17['year_offset']='2017'
den_com_19= den_com.loc[:,['Código municipio','2019']]
den_com_19.rename({'2019': 'dens_com','Código municipio':'municip_code'}, axis=1, inplace=True)
den_com_19['year_offset']='2019'
densidad_comercial= den_com_18.append(den_com_17).append(den_com_19)
densidad_comercial['cod_aux']=densidad_comercial.apply(lambda x:'%s_%s' % (x['municip_code'],x['year_offset']),axis=1)
aux_train= queen_train.copy()
aux_train['cod_aux']=aux_train.apply(lambda x:'%s_%s' % (x['municip_code'],x['year_offset']),axis=1)
queen_train_mischief= aux_train.loc[:,
['municip_code','municip_name','weath_meanTemp',
'population','cod_aux','NESTS']].merge(densidad_comercial, how='left', on='cod_aux')
queen_train_mischief.drop(['cod_aux','municip_code_y'], axis=1, inplace=True)
queen_train_mischief.rename({'municip_code_x': 'municip_code'}, axis=1, inplace=True)
queen_train_mischief["dens_com"] = queen_train_mischief["dens_com"].apply(lambda x: x.replace(",", "."))
```
# New queen predict dataset
```
queen_predict_mischief= queen_predict.loc[:,['municip_code','municip_name','weath_meanTemp','year_offset','population']]
queen_predict_mischief['cod_aux']= queen_predict_mischief.apply(lambda x:'%s_%s' % (x['municip_code'],x['year_offset']),axis=1)
queen_predict_mischief= queen_predict_mischief.merge(densidad_comercial, how='left', on='cod_aux')
queen_predict_mischief.drop(['cod_aux','municip_code_y','year_offset_x','year_offset_y'], axis=1, inplace=True)
queen_predict_mischief.rename({'municip_code_x': 'municip_code'}, axis=1, inplace=True)
queen_predict_mischief["dens_com"] = queen_predict_mischief["dens_com"].apply(lambda x: x.replace(",", "."))
predict_20=queen_predict_mischief.loc[:,['weath_meanTemp', 'population', 'dens_com']]
```
## Get the Prediction
### Arrange data into a features matrix and target vector
```
# selecting the train X & y variables
# Y will be the response variable (filter for the number of wasp nests - waspbust_id)
y = queen_train_mischief.NESTS
# X will be the explanatory variables. Remove response variable and non desired categorical columns such as (municip code, year, etc...)
X = queen_train_mischief.loc[:,['weath_meanTemp', 'population', 'dens_com']]
```
### Scale the Data in order to filter the relevant variables using Feature Importance
#### Arrange data into a features matrix and target vector
```
# Scale the datasets using MinMaxScaler
X_scaled = preprocessing.minmax_scale(X) # this creates a numpy array
```
#### Choose a class of model by importing the appropriate estimator class
```
# selecting the XGBoost model and fitting with the train data
model = XGBRegressor()
```
#### Fit the model to your data by calling the `.fit()` method of the model instance
```
# selecting the XGBoost model and fitting with the train data for each cluster
model.fit(X_scaled, y)
```
#### Selecting the Relevant Variables and filtering according to the results
```
# Plot the Relevant Variables in order to filter the relevant ones per Cluster
plot_importance(model,height=0.5,xlabel="F-Score",ylabel="Feature Importance",grid=False)
plt.show()
```
### Apply the model to new data:
- For supervised learning, predict labels for unknown data using the `.predict()` method
```
# make a prediction
X_scaled_pred = preprocessing.minmax_scale(predict_20)
queen_predict_mischief['nests_2020'] = model.predict(X_scaled_pred)
```
## Add Each Cluster Predictions to the original DataFrame and Save it as a `.csv file`
```
# Create a new DataFrame with the Municipalities to insert manually
HEX_aux = pd.DataFrame({"CODIGO MUNICIPIO":[48022, 48071, 48088, 48074, 48051, 48020],\
"NOMBRE MUNICIPIO":["Karrantza Harana/Valle de Carranza","Muskiz","Ubide","Urduña/Orduña","Lanestosa","Bilbao"],\
"NIDOS 2020":[0,0,1,0,1,0]})
HEX = queen_predict_mischief.loc[:,["municip_code","municip_name","nests_2020"]].round() # create a new Dataframe for Kopuru submission
HEX.columns = ["CODIGO MUNICIPIO","NOMBRE MUNICIPIO","NIDOS 2020"] # change column names to Spanish (Decidata template)
HEX = HEX.append(HEX_aux, ignore_index=True) # Add rows of municipalities to add manually
# Final check
check_data(HEX)
# reset max_rows to default values (used in function to see which rows did not match template)
pd.reset_option("max_rows")
# Save the new dataFrame as a .csv in the current working directory on Windows
HEX.to_csv("WaspBusters_20210519_XGyears_NOcluster_PC2.csv", index=False)
```
# Basic training functionality
```
from fastai.basic_train import *
from fastai.gen_doc.nbdoc import *
from fastai.vision import *
from fastai.distributed import *
```
[`basic_train`](/basic_train.html#basic_train) wraps together the data (in a [`DataBunch`](/basic_data.html#DataBunch) object) with a pytorch model to define a [`Learner`](/basic_train.html#Learner) object. This is where the basic training loop is defined for the [`fit`](/basic_train.html#fit) function. The [`Learner`](/basic_train.html#Learner) object is the entry point of most of the [`Callback`](/callback.html#Callback) functions that will customize this training loop in different ways (and made available through the [`train`](/train.html#train) module), notably:
- [`Learner.lr_find`](/train.html#lr_find) will launch an LR range test that will help you select a good learning rate
- [`Learner.fit_one_cycle`](/train.html#fit_one_cycle) will launch a training using the 1cycle policy, to help you train your model fast.
- [`Learner.to_fp16`](/train.html#to_fp16) will convert your model in half precision and help you launch a training in mixed precision.
```
show_doc(Learner, title_level=2)
```
The main purpose of [`Learner`](/basic_train.html#Learner) is to train `model` using [`Learner.fit`](/basic_train.html#Learner.fit). After every epoch, all *metrics* will be printed, and will also be available to callbacks.
The default weight decay will be `wd`, which will be handled using the method from [Fixing Weight Decay Regularization in Adam](https://arxiv.org/abs/1711.05101) if `true_wd` is set (otherwise it's L2 regularization). If `bn_wd` is False then weight decay will be removed from batchnorm layers, as recommended in [Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour](https://arxiv.org/abs/1706.02677). You can ensure that batchnorm layer learnable params are trained even for frozen layer groups, by enabling `train_bn`.
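For example, a `Learner` using decoupled weight decay everywhere except batchnorm layers, with batchnorm kept trainable, could be created roughly like the sketch below (`data` and `model` are assumed to already exist):
```
learn = Learner(data, model,
                wd=1e-2,        # default weight decay
                true_wd=True,   # decoupled weight decay (AdamW-style) rather than L2
                bn_wd=False,    # no weight decay on batchnorm layers
                train_bn=True)  # batchnorm params stay trainable in frozen groups
```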
To use [discriminative layer training](#Discriminative-layer-training) pass an [`nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) for each layer group to be optimized with different settings.
Any model files created will be saved in `path`/`model_dir`.
You can pass a list of [`callbacks`](/callbacks.html#callbacks) that you have already created, or (more commonly) simply pass a list of callback functions to `callback_fns` and each function will be called (passing `self`) on object initialization, with the results stored as callback objects. For a walk-through, see the [training overview](/training.html) page. You may also want to use an `application` to fit your model, e.g. using the [`create_cnn`](/vision.learner.html#create_cnn) method:
```
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
learn = create_cnn(data, models.resnet18, metrics=accuracy)
learn.fit(1)
```
### Model fitting methods
```
show_doc(Learner.fit)
```
Uses [discriminative layer training](#Discriminative-layer-training) if multiple learning rates or weight decay values are passed. To control training behaviour, use the [`callback`](/callback.html#callback) system or one or more of the pre-defined [`callbacks`](/callbacks.html#callbacks).
```
show_doc(Learner.fit_one_cycle)
```
Uses the [`OneCycleScheduler`](/callbacks.one_cycle.html#OneCycleScheduler) callback.
```
show_doc(Learner.lr_find)
```
Runs the learning rate finder defined in [`LRFinder`](/callbacks.lr_finder.html#LRFinder), as discussed in [Cyclical Learning Rates for Training Neural Networks](https://arxiv.org/abs/1506.01186).
### See results
```
show_doc(Learner.get_preds)
show_doc(Learner.validate)
show_doc(Learner.show_results)
show_doc(Learner.predict)
show_doc(Learner.pred_batch)
show_doc(Learner.interpret, full_name='interpret')
jekyll_note('This function only works in the vision application.')
```
### Model summary
```
show_doc(Learner.summary)
```
### Test time augmentation
```
show_doc(Learner.TTA, full_name = 'TTA')
```
Applies Test Time Augmentation to `learn` on the dataset `ds_type`. We take the average of our regular predictions (with a weight `beta`) with the average of predictions obtained through augmented versions of the training set (with a weight `1-beta`). The transforms chosen for the training set are applied with a few changes: `scale` controls the scale for zoom (which isn't random), and the cropping isn't random but we make sure to get the four corners of the image. Flipping isn't random but is applied once on each of those corner images (so that makes 8 augmented versions total).
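A typical call might look like this sketch (assuming the standard fastai imports above):
```
# Average regular and augmented predictions on the validation set
preds, targets = learn.TTA(ds_type=DatasetType.Valid)
```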
### Gradient clipping
```
show_doc(Learner.clip_grad)
```
### Mixed precision training
```
show_doc(Learner.to_fp16)
```
Uses the [`MixedPrecision`](/callbacks.fp16.html#MixedPrecision) callback to train in mixed precision (i.e. forward and backward passes using fp16, with weight updates using fp32), using all [NVIDIA recommendations](https://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/index.html) for ensuring speed and accuracy.
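In practice this is a one-line change (a sketch, assuming a CUDA-capable GPU):
```
learn = create_cnn(data, models.resnet18, metrics=accuracy).to_fp16()
learn.fit_one_cycle(1)
```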
```
show_doc(Learner.to_fp32)
```
### Distributed training
```
show_doc(Learner.distributed, full_name='distributed')
```
### Discriminative layer training
When fitting a model you can pass a list of learning rates (and/or weight decay amounts), which will apply a different rate to each *layer group* (i.e. the parameters of each module in `self.layer_groups`). See the [Universal Language Model Fine-tuning for Text Classification](https://arxiv.org/abs/1801.06146) paper for details and experimental results in NLP (we also frequently use them successfully in computer vision, but have not published a paper on this topic yet). When working with a [`Learner`](/basic_train.html#Learner) on which you've called `split`, you can set hyperparameters in four ways:
1. `param = [val1, val2 ..., valn]` (n = number of layer groups)
2. `param = val`
3. `param = slice(start,end)`
4. `param = slice(end)`
If we chose to set it in way 1, we must specify a number of values exactly equal to the number of layer groups. If we chose to set it in way 2, the chosen value will be repeated for all layer groups. See [`Learner.lr_range`](/basic_train.html#Learner.lr_range) for an explanation of the `slice` syntax used in ways 3 and 4.
Here's an example of how to use discriminative learning rates (note that you don't actually need to manually call [`Learner.split`](/basic_train.html#Learner.split) in this case, since fastai uses this exact function as the default split for `resnet18`; this is just to show how to customize it):
```
# creates 3 layer groups
learn.split(lambda m: (m[0][6], m[1]))
# only randomly initialized head now trainable
learn.freeze()
learn.fit_one_cycle(1)
# all layers now trainable
learn.unfreeze()
# optionally, separate LR and WD for each group
learn.fit_one_cycle(1, max_lr=(1e-4, 1e-3, 1e-2), wd=(1e-4,1e-4,1e-1))
show_doc(Learner.lr_range)
```
Rather than manually setting an LR for every group, it's often easier to use [`Learner.lr_range`](/basic_train.html#Learner.lr_range). This is a convenience method that returns one learning rate for each layer group. If you pass `slice(start,end)` then the first group's learning rate is `start`, the last is `end`, and the remaining are evenly geometrically spaced.
If you pass just `slice(end)` then the last group's learning rate is `end`, and all the other groups are `end/10`. For instance (for our learner that has 3 layer groups):
```
learn.lr_range(slice(1e-5,1e-3)), learn.lr_range(slice(1e-3))
show_doc(Learner.unfreeze)
```
Sets every layer group to *trainable* (i.e. `requires_grad=True`).
```
show_doc(Learner.freeze)
```
Sets every layer group except the last to *untrainable* (i.e. `requires_grad=False`).
```
show_doc(Learner.freeze_to)
show_doc(Learner.split)
```
A convenience method that sets `layer_groups` based on the result of [`split_model`](/torch_core.html#split_model). If `split_on` is a function, it calls that function and passes the result to [`split_model`](/torch_core.html#split_model) (see above for example).
### Saving and loading models
Simply call [`Learner.save`](/basic_train.html#Learner.save) and [`Learner.load`](/basic_train.html#Learner.load) to save and load models. Only the parameters are saved, not the actual architecture (so you'll need to create your model in the same way before loading weights back in). Models are saved to the `path`/`model_dir` directory.
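For example (a sketch):
```
learn.save('stage-1')   # saved under learn.path/learn.model_dir/stage-1.pth
learn.load('stage-1')   # loads the weights back into the same architecture
```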
```
show_doc(Learner.load)
show_doc(Learner.save)
```
### Deploying your model
When you are ready to put your model in production, export the minimal state of your [`Learner`](/basic_train.html#Learner) with
```
show_doc(Learner.export)
```
Then you can load it with the following function.
```
show_doc(load_learner)
```
You can find more information and multiple examples in [this tutorial](/tutorial.inference.html).
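Putting the two together, a minimal inference workflow might look like this sketch:
```
learn.export()                    # writes export.pkl to learn.path
learn = load_learner(learn.path)  # re-creates a Learner ready for Learner.predict
```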
### Other methods
```
show_doc(Learner.init)
```
Initializes all weights (except batchnorm) using function `init`, which will often be from PyTorch's [`nn.init`](https://pytorch.org/docs/stable/nn.html#torch-nn-init) module.
```
show_doc(Learner.mixup)
```
Uses [`MixUpCallback`](/callbacks.mixup.html#MixUpCallback).
```
show_doc(Learner.backward)
show_doc(Learner.create_opt)
```
You generally won't need to call this yourself - it's used to create the [`optim`](https://pytorch.org/docs/stable/optim.html#module-torch.optim) optimizer before fitting the model.
```
show_doc(Learner.dl)
show_doc(Recorder, title_level=2)
```
A [`Learner`](/basic_train.html#Learner) creates a [`Recorder`](/basic_train.html#Recorder) object automatically - you do not need to explicitly pass it to `callback_fns` - because other callbacks rely on it being available. It stores the smoothed loss, hyperparameter values, and metrics for each batch, and provides plotting methods for each. Note that [`Learner`](/basic_train.html#Learner) automatically sets an attribute with the snake-cased name of each callback, so you can access this through `Learner.recorder`, as shown below.
### Plotting methods
```
show_doc(Recorder.plot)
```
This is mainly used with the learning rate finder, since it shows a scatterplot of loss vs learning rate.
```
learn = create_cnn(data, models.resnet18, metrics=accuracy)
learn.lr_find()
learn.recorder.plot()
show_doc(Recorder.plot_losses)
```
Note that validation losses are only calculated once per epoch, whereas training losses are calculated after every batch.
```
learn.fit_one_cycle(2)
learn.recorder.plot_losses()
show_doc(Recorder.plot_lr)
learn.recorder.plot_lr(show_moms=True)
show_doc(Recorder.plot_metrics)
```
Note that metrics are only collected at the end of each epoch, so you'll need to train at least two epochs to have anything to show here.
```
learn.recorder.plot_metrics()
```
### Callback methods
You don't call these yourself - they're called by fastai's [`Callback`](/callback.html#Callback) system automatically to enable the class's functionality.
```
show_doc(Recorder.on_backward_begin)
show_doc(Recorder.on_batch_begin)
show_doc(Recorder.on_epoch_end)
show_doc(Recorder.on_train_begin)
```
### Inner functions
The following functions are used along the way by the [`Recorder`](/basic_train.html#Recorder) or can be called by other callbacks.
```
show_doc(Recorder.add_metrics)
show_doc(Recorder.add_metric_names)
show_doc(Recorder.format_stats)
```
## Module functions
Generally you'll want to use a [`Learner`](/basic_train.html#Learner) to train your model, since they provide a lot of functionality and make things easier. However, for ultimate flexibility, you can call the same underlying functions that [`Learner`](/basic_train.html#Learner) calls behind the scenes:
```
show_doc(fit)
```
Note that you have to create the `Optimizer` yourself if you call this function, whereas [`Learn.fit`](/basic_train.html#fit) creates it for you automatically.
```
show_doc(train_epoch)
```
You won't generally need to call this yourself - it's what [`fit`](/basic_train.html#fit) calls for each epoch.
```
show_doc(validate)
```
This is what [`fit`](/basic_train.html#fit) calls after each epoch. You can call it if you want to run inference on a [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader) manually.
```
show_doc(get_preds)
show_doc(loss_batch)
```
You won't generally need to call this yourself - it's what [`fit`](/basic_train.html#fit) and [`validate`](/basic_train.html#validate) call for each batch. It only does a backward pass if you set `opt`.
## Other classes
```
show_doc(LearnerCallback, title_level=3)
show_doc(RecordOnCPU, title_level=3)
```
## Undocumented Methods - Methods moved below this line will intentionally be hidden
```
show_doc(Learner.tta_only)
show_doc(Learner.TTA)
show_doc(RecordOnCPU.on_batch_begin)
```
## New Methods - Please document or move to the undocumented section
# Python Developers Survey 2017
## Exploratory Data Analysis
Data source: [Python Developers Survey 2017](https://www.jetbrains.com/research/python-developers-survey-2017/)
This notebook demonstrates how the simple summary techniques we've learned in the [workshop](https://jenfly.github.io/pydata-intro-workshop/) can help you navigate and analyze a large CSV file. In this example, we will analyze responses to the survey questions "What do you use Python for?" and "What do you use Python for *the most*?"
- The DataFrame attributes `shape`, `dtypes` and `columns` will help us quickly find and extract the columns of interest from a CSV file with a whopping **162 columns**!
This example also demonstrates other handy techniques we learned in the workshop, such as:
- Applying string methods to parse information from text data
- Counting the unique values in a column with the `value_counts` method
- Using a filter Series to extract a subset of data
- Computing sums along rows and columns of a DataFrame
### Initial Setup
```
# Import libraries
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# Display graphs inline
%matplotlib inline
# Use styles from the Seaborn library to make graphs look nicer
sns.set()
```
### Load Data
#### Read the CSV file into a DataFrame
```
survey = pd.read_csv('data/pythondevsurvey2017_raw_data.csv')
survey.head()
```
#### How big is our DataFrame?
```
survey.shape
```
Yikes! The data has 9506 rows (i.e. 9506 total respondents to the survey) and 162 columns. Trying to make sense of such a huge number of columns in Excel would likely be an unwieldy and unpleasant task.
Let's save the number of respondents to a variable so we can use it later:
```
n_respondents = survey.shape[0]
n_respondents
```
### Data Columns
Let's figure out what the heck is in all these columns. We can use the `dtypes` attribute to get the name and data type of each column. If we simply display `survey.dtypes` in the cell below, it will only show the first 30 rows and last 30 rows, with `...` in between.
- We could tinker with some settings by running the code `pd.set_option('display.max_rows', 1000)` or using other solutions described [here](https://stackoverflow.com/questions/19124601/is-there-a-way-to-pretty-print-the-entire-pandas-series-dataframe), or we could use a `for` loop to print the items in `dtypes` in a slightly more compact way. I'll use the latter approach.
- I'm also using [f-strings](https://realpython.com/python-f-strings/), a really neat feature of Python 3 (version 3.6 and later) as a shortcut to construct a string containing the values of variables.
```
# Iterate over the items in survey.dtypes
for column, dtype in survey.dtypes.items():
# Print the data type and the column name, separated by a tab (\t)
# -- The syntax below is equivalent to: print(str(dtype) + '\t' + column)
print(f'{dtype} \t {column}')
```
Skimming through the column names, we can see there are a bunch of them ending with the string `'What do you use Python for?'`, for example:
- `'Educational purposes: What do you use Python for?'`
- `'Data analysis: What do you use Python for?'`
- etc.
These represent different multiple choice answers to the survey question "What do you use Python for?", where respondents were able to select more than one answer to the question. This kind of structure is quite common in the raw data you get from various online survey apps.
After these columns, the next column name is: `'What do you use Python for the most?'`. We'll analyze the data in this column too.
### Answers to the question: "What do you use Python for?"
#### (multiple choice survey question)
We want to extract the subset of data containing answers to this question. To do this, we need to find all the columns whose name contains the phrase 'What do you use Python for?'
- First we create a filter Series using the string method `endswith` on the column names in `survey.columns`
```
usage_filter = survey.columns.str.endswith('What do you use Python for?')
usage_filter
```
- Next, we use the filter to find the names of the relevant columns:
```
cols_usage = survey.columns[usage_filter]
cols_usage
```
What values are in each of these columns?
```
for col in cols_usage:
print(survey[col].value_counts(dropna=False))
print('\n')
```
The output above shows that the values in these columns are such that they can be stacked together into one giant Series, and then we can simply count the values in that Series using the `value_counts` method:
```
# Stack the columns into a giant Series
usages = survey[cols_usage].stack()
# Total number of answers to this survey question
n_answers = len(usages)
print(f'We have {n_answers} answers to this question, from the {n_respondents} respondents')
# Display the first 20 items in the Series
# (converting to a list first so that the output displays more nicely)
print('The first 20 answers are:')
list(usages.head(20))
# Get the counts for each value
usage_counts = usages.value_counts()
usage_counts
```
Let's calculate these totals as a fraction of the total number of respondents who answered the question "What do you use Python for?". This is a bit complicated, because the information is spread over multiple columns and we need to exclude rows where the respondent didn't answer the question at all.
First, we calculate how many answers each respondent gave to this survey question:
```
# Use the `notnull` method and sum across columns to calculate
# how many answers each respondent provided for this question
num_answers = survey[cols_usage].notnull().sum(axis=1)
num_answers.head(10)
len(num_answers)
```
Each value in the Series `num_answers` corresponds to one respondent (9506 respondents in total). A value of 0 means the respondent didn't answer this survey question. Any value greater than 0 means that the respondent provided one or more answers to the question.
```
num_answers.value_counts()
```
We can see that respondents provided up to 16 answers to the question, and the most common number of answers was 3.
Next, create a filter identifying whether or not a respondent answered the question:
```
# Create a filter to identify which respondents had more than 0 answers to this question
answered_ques = num_answers > 0
answered_ques.head()
```
To find the number of respondents who answered the question, we sum the values in the `answered_ques` Series:
```
n_ques_respondents = answered_ques.sum()
n_ques_respondents
```
Finally, we can compute the totals for each Python usage as a fraction of the number of respondents who answered the survey question:
```
usage_frac = usage_counts / n_ques_respondents
usage_frac
```
### Visualizing the Results
Let's plot the results as a horizontal bar chart. First we'll define a function to create our plot, so that we can re-use it for other plots.
```
def plot_barh(series, title=None, figsize=(7, 7)):
"""Plot a horizontal bar chart with tick labels in percent format"""
# Sort the series in ascending order, so that in the horizontal bar chart,
# the largest values are on top and smallest at the bottom
series_sorted = series.sort_values(ascending=True)
# We will use dark blue from the Seaborn default colour palette
blue = sns.color_palette()[0]
# Create a horizontal bar chart and customize the labels and formatting
ax = series_sorted.plot(kind='barh', color=blue, figsize=figsize)
ax.set_xticklabels([f'{val:.0%}' for val in ax.get_xticks()])
ax.tick_params(labelsize='large')
if title is not None:
ax.set_title(title, fontsize='large', fontweight='bold');
```
#### Create the bar chart
```
plot_barh(usage_frac, title='What do you use Python for?\n(multiple answers)')
```
### Answers to the question: "What do you use Python for *the most*?"
#### (single choice survey question)
This data is easier to analyze because all the answers to the question are in a single column. All we need to do is extract the column and use the `value_counts` method to tally up the answers. For convenience, we can also use the keyword argument `normalize=True` to compute the totals directly as a fraction of the number of respondents. *(This approach wouldn't have worked in the previous section because the data was spread over multiple columns and there were multiple choices per respondent.)*
```
column = 'What do you use Python for the most?'
primary_usage_frac = survey[column].value_counts(normalize=True)
primary_usage_frac
plot_barh(primary_usage_frac, title='What do you use Python for the most?\n(single answer)')
```
## Conclusions
Web development and data analysis are clear frontrunners as the most popular types of Python development.
- While **26% of developers indicated web development as their primary usage for Python**, 18% chose data analysis and 9% chose machine learning (a field of data science).
- Consolidating data analysis and machine learning together as a "data science" category reveals that **27% of developers use Python primarily for data science**.
- Many developers use Python for more than one type of development. 50% of all respondents indicated that they use Python for data analysis, and 49% of all respondents use Python for web development.
We can compare our results with: https://www.jetbrains.com/research/python-developers-survey-2017/#types-of-development.
- Perfect match—hurray!
# Human Activity Recognition using a smartphone dataset
## Random Forest with classification and clustering - Human activity predictor
Contoso Behavior Systems is developing an AI tool that will try to recognize human activity (1-Walking, 2-Walking upstairs, 3-Walking downstairs, 4-Sitting, 5-Standing or 6-Laying) using the smartphone's sensors. This means that, using the methods below, the smartphone can detect what we are doing at any given moment.
The Human Activity Recognition database was built from recordings of 30 study participants performing activities of daily living (ADL) while carrying a waist-mounted smartphone with embedded inertial sensors. The goal is to classify each recording into one of the six activities performed.
### Descrição do experimento
Os experimentos foram realizados com um grupo de 30 voluntários em uma faixa etária de 19 a 48 anos. Cada pessoa realizou seis atividades (1-Walking, 2-Walking upstairs, 3-Walking downstairs, 4-Sentado, 5-Standing ou 6-Laying) usando um smartphone (Samsung Galaxy S II) na cintura. Usando seu acelerômetro e giroscópio embutidos, capturamos a aceleração linear 3-axial e a velocidade angular 3-axial a uma taxa constante de 50Hz. Os experimentos foram gravados em vídeo para rotular os dados manualmente. O conjunto de dados obtido foi particionado aleatoriamente em dois conjuntos, onde 70% dos voluntários foram selecionados para gerar os dados de treinamento e 30% os dados de teste.
Os sinais do sensor (acelerômetro e giroscópio) foram pré-processados pela aplicação de filtros de ruído e então amostrados em janelas deslizantes de largura fixa de 2,56 seg e sobreposição de 50% (128 leituras/janela). O sinal de aceleração do sensor, que possui componentes gravitacionais e de movimento corporal, foi separado por meio de um filtro passa-baixo Butterworth em gravidade e aceleração do corpo. A força gravitacional é considerada como tendo apenas componentes de baixa frequência, portanto, um filtro com frequência de corte de 0,3 Hz foi usado. De cada janela, um vetor de features foi obtido pelo cálculo de variáveis no domínio do tempo e da frequência.
### Informações de atributo
* Para cada registro no conjunto de dados, o seguinte é fornecido:
* Aceleração triaxial do acelerômetro (aceleração total) e a aceleração corporal estimada.
* Velocidade angular triaxial do giroscópio.
* Um vetor de 561 features com variáveis de domínio de tempo e frequência.
* Seu rótulo de atividade.
* Um identificador do sujeito que realizou o experimento.
### Planejamento
Quando temos um problema que sabemos resolver, podemos criar uma lista de etapas para nos guiar pelo experimento.
1. Importe as bibliotecas Python necessárias
2. Carregue e analise os dados
3. Encontre correlações entre as features
4. Divida os dados em dados de treinamento e teste (dados de validação)
5. Preveja a atividade usando Regressão Logística
### 1. Importe as bibliotecas Python necessárias
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_fscore_support as error_metric
from sklearn.metrics import confusion_matrix, accuracy_score
from sklearn.feature_selection import VarianceThreshold
```
### 2. Load and analyze the data
```
#train = pd.read_csv("https://github.com/microsoft/Reactors/raw/main/workshop-resources/data-science-and-machine-learning/Data_Science_2/human-behavior-project/Data/train.csv")
train = pd.read_csv("Data/train.csv")
#test = pd.read_csv("https://github.com/microsoft/Reactors/raw/main/workshop-resources/data-science-and-machine-learning/Data_Science_2/human-behavior-project/Data/test.csv")
test = pd.read_csv("Data/test.csv")
print("--------- Training Data ---------")
print(train.head())
print("--------- Test Data ---------")
print(test.head())
```
#### Check for null values in the data
```
print("Training Data:",train.isnull().values.any())
print("Testing Data:",test.isnull().values.any())
```
No null values, so let's proceed.
```
train.info()
test.info()
```
#### Removing data we won't use; in this case we don't care who the "subject" was
Looking at the first few rows, we can see that the subject column won't be useful here, so let's drop it from both datasets. Since there are many columns in the data you may not have noticed it, but it is just a number that was used arbitrarily to identify the individuals.
### Dropping data
Remember that we can use the built-in .drop method to remove columns from the dataset. We can use the interactive help to make sure we know what all the parameters are - to access the help, type train.drop? The question mark lets the interpreter know we want help.
In the [pandas.DataFrame.drop](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop.html) method, the 'axis' parameter means we are dropping a column, and the 'inplace' parameter means exactly that: perform the operation in place and return None.
```
# We can do these both at once:
train.drop('subject', axis =1, inplace=True)
test.drop('subject', axis =1, inplace=True)
# Verify that the column was dropped
print("--------- Training Data ---------")
print(train.head())
print("--------- Test Data ---------")
print(test.head())
# Let's create a list of all the column labels.
rem_cols2 = test.columns.tolist()
rem_cols2
# We should also verify the different datatypes in our data, in this case we can see we have
# 561 float type data dimensions and 1 object dimension.
print('----------TRAIN------------')
print(train.dtypes.value_counts())
print('----------TEST------------')
print(test.dtypes.value_counts())
train.info()
test.info()
```
### Checkpoint
Should we rescale the data? Rescaling a dataset often produces a better dataset and more accurate predictions. First, we check the range (the minimum and maximum) of each of the datasets. Let's try the .describe() method and exclude the activity column, which is the last column.
### Rescaling data
When we 'rescale' data, we add or subtract a constant and multiply or divide the original values by a constant. A good example of this is when we convert or transform temperature data from Celsius to Fahrenheit.
### Standardizing and normalizing our data
When we standardize and normalize data, we are essentially trying to create data that is easily comparable - turning an "apples to oranges" comparison into an "apples to apples" comparison. Standardizing features around a mean of 0 with a standard deviation of 1 is important when we compare measurements that have different units. Variables measured on different scales do not contribute equally to the analysis and can end up creating bias.
Likewise, the goal of normalization is to change the values of the numeric columns in the dataset to a common scale, without distorting the differences in the ranges of values. For machine learning, not every dataset requires normalization. It is only needed when the features have different ranges.
### When should we normalize or standardize?
**Normalization** is a good technique to use when you do not know the distribution of your data or when you know the distribution is not Gaussian (a bell curve). Normalization is useful when your data has varying scales and the algorithm you are using does not make assumptions about the distribution of your data, such as k-nearest neighbors and artificial neural networks.
**Standardization** assumes your data has a Gaussian distribution. This does not have to be strictly true, but the technique is more effective if the distribution of your attributes is Gaussian. Standardization is useful when your data has varying scales and the algorithm you are using makes assumptions about your data having a Gaussian distribution, such as linear regression, logistic regression and linear discriminant analysis.
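We don't actually rescale anything in this notebook (the features are already bounded, as we confirm below), but as a minimal sketch of the difference between the two techniques, assuming a generic numeric feature matrix `X`, scikit-learn provides both transformations:
```
import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler

# A toy feature matrix whose two columns live on very different scales
X = np.array([[1.0, 200.0],
              [2.0, 300.0],
              [3.0, 400.0]])

# Standardization: each column is centered on mean 0 with standard deviation 1
X_standardized = StandardScaler().fit_transform(X)

# Normalization: each column is rescaled to the [0, 1] range
X_normalized = MinMaxScaler().fit_transform(X)

print(X_standardized)
print(X_normalized)
```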
```
print('----------TRAIN------------')
print(train.describe())
print('----------TEST------------')
print(test.describe())
```
### Scaling
Now that we understand why we scale data, should we do it here?
Here it is not necessary. We see that the min = -1 and the max = +1, so there is no need to scale this data. There does not appear to be any strange or outlying data. In other words, all the data is within a range that makes sense. Let's continue.
```
# Notice how we can use .tail() to also examine the datatypes of the last few columns
train.dtypes.tail()
```
They have the same data types. That is, most features are floats and one is of object type. Let's see what is in the 'Activity' feature, of object type, and separate it from the rest.
```
object_feature = train.dtypes == object
object_feature = train.columns[object_feature]
object_feature
```
As we can see, the only object data type in the training and test datasets is the Activity feature. Let's take a look at it ...
```
train.Activity.value_counts()
```
We need to encode the Activity column because sklearn will not accept categorical data as our column labels. We will use [LabelEncoder](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html) to encode the 'Activity' column.
#### LabelEncoder in sklearn
The LabelEncoder function can do a few things for us. It can normalize label column names, or it can convert categorical labels into numeric values, similar to the ["one-hot-encoding" process in Azure ML Studio](https://docs.microsoft.com/en-us/azure/machine-learning/studio-module-reference/convert-to-indicator-values), which essentially lets us create a binary system for converting categorical data into numbers.
Let's do that here with LabelEncoder:
```
le = LabelEncoder()
for x in [train, test]:
x['Activity'] = le.fit_transform(x.Activity)
train.Activity.sample(5)
test.Activity.sample(5)
```
### 3. Finding the correlation / relationships between the features
Correlation refers to the mutual relationship and association between quantities, and is generally used to express one quantity in terms of its relationship with other quantities. Correlation can be positive (the variables change in the same direction), negative (the variables change in opposite directions) or neutral (no correlation).
The variables in a dataset can be related in several ways and for several reasons:
- They may depend on the values of other variables
- They may be associated with each other
- They may both depend on a third variable.
For this project, we will use the pandas .corr() method to compute the correlation between the dataframe columns.
```
# Exclude the Activity column
feature_cols = train.columns[: -1]
# Calculate the correlation values
correlated_values = train[feature_cols].corr()
# Stack the data and convert to a dataframe
correlated_values = (correlated_values.stack().to_frame().reset_index()
.rename(columns={'level_0': 'Feature_1', 'level_1': 'Feature_2', 0:'Correlations'}))
correlated_values.head()
# Create an abs_correlation column
correlated_values['abs_correlation'] = correlated_values.Correlations.abs()
correlated_values.head()
# Picking most correlated features
train_fields = correlated_values.sort_values('Correlations', ascending = False).query('abs_correlation>0.8')
train_fields.sample(5)
```
### 4. Splitting the data into training and validation DataFrames
If we train a model and test it with the same data, we will see something very interesting - most likely nothing but perfect scores, and the model will fail to predict anything useful on new data.
When this situation arises it is called overfitting, which is something we will discuss further in our machine learning workshops. For this reason it is common practice, when running a (supervised) machine learning experiment, to hold out part of the available data as a test set x_test, y_test. We will also perform a cross-validation step in the next section.
Learning the parameters of a prediction function and testing it on the same data is a methodological mistake: a model that simply repeated the labels of the samples it had just seen would get a perfect score, but it would fail to predict anything useful on yet-unseen data. This situation is called overfitting.
What we can do is perform a cross-validation process when training the model. The best parameters can be determined by grid search techniques. In our example below, we will use [sklearn.model_selection.StratifiedShuffleSplit](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedShuffleSplit.html) to perform our cross-validation steps.
```
#Getting the split indexes
split_data = StratifiedShuffleSplit(n_splits = 3, test_size = 0.3, random_state = 42)
train_idx, val_idx = next(split_data.split(train[feature_cols], train.Activity))
#creating the dataframes
x_train = train.loc[train_idx, feature_cols]
y_train = train.loc[train_idx, 'Activity']
x_val = train.loc[val_idx, feature_cols]
y_val = train.loc[val_idx, 'Activity']
y_train.value_counts(normalize = True)
y_val.value_counts(normalize = True)
```
#### Same proportion of classes in the training and validation data thanks to StratifiedShuffleSplit
StratifiedShuffleSplit is a cross-validator that provides train/test indices to split data into train/test sets.
This cross-validation object is a merge of StratifiedKFold and ShuffleSplit, which returns stratified randomized folds. The folds are made by preserving the percentage of samples for each class.
Notice that the proportions in the y_train dataframe and in the y_val dataframe are almost equal. This tells us that the performance of our model will be consistent across the three splits we created. We will cover cross-validation concepts in more depth in our next machine learning workshop.
### 5. Predictive Modeling
Predictive modeling uses statistics to predict outcomes. Most often the event we want to predict is in the future, but predictive modeling can be applied to any type of unknown event, regardless of when it occurred.
Logistic regression, despite its name, is a linear model for classification rather than regression. Logistic regression is also known in the literature as logit regression, maximum-entropy classification (MaxEnt) or the log-linear classifier. In this model, the probabilities describing the possible outcomes of a single trial are modeled using a logistic function.
Logistic regression is implemented in LogisticRegression. This implementation can fit binary, One-vs-Rest, or multinomial logistic regression with optional l1, l2 or Elastic-Net regularization. [Learn more about logistic regression by visiting the user guide.](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegressionCV.html?highlight=logisticregressioncv#sklearn.linear_model.LogisticRegressionCV)
```
# Our standard Logistic Regression algorithm
lr = LogisticRegression()
# We'll also use the Logistic Regression CV (Cross-Validation), with 4 folds.
# Here are the parameters and what each one of them does.
# Cs - List of ints of floats, default value is 10
# Each of the values in Cs describes the inverse of regularization strength.
# If Cs is as an int, then a grid of Cs values are chosen in a logarithmic scale between
# 1e-4 and 1e4. Like in support vector machines, smaller values specify stronger regularization.
# cv - Cross-validation generator
# penalty - Used to specify the norm used in the penalization. The ‘newton-cg’, ‘sag’ and ‘lbfgs’
# solvers support only l2 penalties. ‘elasticnet’ is only supported by the ‘saga’ solver.
# max-iter - Maximum number of iterations of the optimization algorithm.
lr_l2 = LogisticRegressionCV(Cs=10, cv=4, penalty='l2', max_iter=120)
# RandomForestClassifier - n_estimators =The number of trees in the forest. Default is 100
rf = RandomForestClassifier(n_estimators = 10)
lr = lr.fit(x_train, y_train)
rf = rf.fit(x_train, y_train)
lr_l2 = lr_l2.fit(x_train, y_train)
#predict the classes and probability for each
y_predict = list()
y_proba = list()
labels = ['lr', 'lr_l2', 'rf']
models = [lr, lr_l2, rf]
for lab, mod in zip(labels, models):
y_predict.append(pd.Series(mod.predict(x_val), name = lab))
y_proba.append(pd.Series(mod.predict_proba(x_val).max(axis=1), name = lab))
#.max(axis = 1) for a 1 dimensional dataframe
y_predict = pd.concat(y_predict, axis = 1)
y_proba = pd.concat(y_proba, axis = 1)
y_predict['true'] = y_val.values
y_proba['true'] = y_val.values
y_predict.head()
(y_predict['rf'] == y_predict['true']).mean()
y_proba.head(10)
```
### The results are good - Conclusion
Looking at the results of the logistic regression, the logistic regression with L2 regularization, and the random forest classifier, we are seeing good accuracy from our models. The best results appear to come from the lr_l2 model, which makes sense since that is where we perform regularization (to deal with outliers) and where the cross-validation step also takes place. For all intents and purposes our experiment is complete - however, we will also learn about error metrics and what they are for.
### 6. Calculating the Error Metrics
Regressions are one of the most commonly used tools in a data scientist’s kit. When you learn Python, you gain the ability to create regressions in single lines of code without having to deal with the underlying mathematical theory.
This ease can cause us to forget to evaluate our regressions to ensure that they are a sufficiently good representation of our data. We can plug our data back into our regression equation to see if the predicted output matches the corresponding observed value seen in the data.
The quality of a regression model is how well its predictions match up against actual values, but how do we actually evaluate quality? Luckily, smart statisticians have developed error metrics to judge the quality of a model and enable us to compare regressions against other regressions with different parameters. These metrics are short and useful summaries of the quality of our data.
We will look at Precision, Recall, F-Score, and Accuracy as our error metrics. We are trying to figure out whether our model gives us more false/true positives (FP/TP) or more false/true negatives (FN/TN). First of all, let's make sure we understand that true positives and true negatives are usually good scores, depending on the type of data. Let's go over the error metrics in this project.
#### Accuracy
Accuracy is the most straightforward metric; it simply answers the question: how many times did the model accurately predict the phone user's behavior?
**Accuracy = (TP+TN)/(TP+FP+FN+TN)**
#### Recall (aka Sensitivity)
Recall is the ratio of the correctly predicted activities to the activities that were actually observed in real life. Recall answers the following question: of all the times the subject was actually performing a given activity, how many of them did we correctly identify?
**Recall = TP/(TP+FN)**
#### Precision
Precision is the ratio of the correctly labeled activities by our program to all the activities labeled.
Precision answers the following: How many of those who we labeled doing an activity were actually doing the activity?
**Precision = TP/(TP+FP)**
#### F1-score (aka F-Score / F-Measure)
F1 Score considers both precision and recall. It is the harmonic mean (average) of the precision and recall.
The F1 Score is highest when there is some balance between precision (p) and recall (r) in the system. Conversely, the F1 Score is not as high if one measure is improved at the expense of the other.
For example, if P is 1 and R is 0, the F1 score is 0.
**F1 Score = 2 * (Recall * Precision) / (Recall + Precision)**
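As a quick numeric illustration of these formulas (a small made-up binary example rather than the activity data), all four metrics can be computed directly from the TP/TN/FP/FN counts:
```
import numpy as np

# Toy binary labels: 1 = doing the activity, 0 = not doing it
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])

tp = np.sum((y_pred == 1) & (y_true == 1))
tn = np.sum((y_pred == 0) & (y_true == 0))
fp = np.sum((y_pred == 1) & (y_true == 0))
fn = np.sum((y_pred == 0) & (y_true == 1))

accuracy = (tp + tn) / (tp + fp + fn + tn)
recall = tp / (tp + fn)
precision = tp / (tp + fp)
f1_score = 2 * (recall * precision) / (recall + precision)

print(accuracy, recall, precision, f1_score)  # all 0.75 for this toy example
```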
```
# Let's calculate the error metrics here,
# We will also use a confusion matrix to better see where our results are coming from.
metrics = list()
confusion_m = dict()
for lab in labels:
precision, recall, f_score, _ = error_metric(y_val, y_predict[lab], average = 'weighted')
accuracy = accuracy_score(y_val, y_predict[lab])
confusion_m[lab] = confusion_matrix(y_val, y_predict[lab])
metrics.append(pd.Series({'Precision': precision, 'Recall': recall,
'F_score': f_score, 'Accuracy': accuracy}, name = lab))
metrics= pd.concat(metrics, axis =1)
metrics
```
### Confusion Matrix
One great tool for evaluating the behavior and understanding the effectiveness of a binary or categorical classifier is the Confusion Matrix.
You can see that all of the metrics are giving us very high marks. This tells us that our model is performing very well. Let's plot the lr (logistic regression), lr_l2 (L2 regularization), and rf (random forests) results as confusion matrices.
We've already fit a logistic regression model; the confusion matrix can be calculated manually, or, since we are talented data scientists, we can just use the confusion_matrix function from sklearn.
The code below takes the confusion matrices computed above and plots one heatmap per model. The 'lab' variable is loaded with the label of each of our sets of predictions. Be sure to use the interactive help to figure out what each line of code does.
```
fig, axList = plt.subplots(nrows=2, ncols=2)
axList = axList.flatten()
fig.set_size_inches(12, 10)
axList[-1].axis('off')
for ax,lab in zip(axList[:-1], labels):
sns.heatmap(confusion_m[lab], ax=ax, annot=True, fmt='d');
ax.set(title=lab);
plt.tight_layout()
```
### Observations about Error Metrics and Ridge Regression
We can see that the logistic regression with L2 regularization gives slightly better error metrics than the other models. The question we ask here is: what happens when we discard the most correlated features? Will we get a better model? The answer is typically yes: we will get better results when we remove highly correlated features (a sketch of this follow-up experiment is given below). We are addressing the 'curse of dimensionality' and the idea that too much correlated data in our experiment can cause it to be 'overfit', so it will not be very effective on similar data.
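As a hedged sketch of that follow-up experiment (reusing the `train_fields`, `feature_cols`, `train_idx`/`val_idx` and `y_train`/`y_val` objects defined earlier; the greedy choice of which feature in each pair to drop is just one possible heuristic):
```
# Drop one member of every feature pair with |correlation| > 0.8
pairs = train_fields.query('Feature_1 != Feature_2')
to_drop = set()
for a, b in zip(pairs['Feature_1'], pairs['Feature_2']):
    if a not in to_drop and b not in to_drop:
        to_drop.add(b)  # keep the first member of the pair, drop the second

reduced_cols = [c for c in feature_cols if c not in to_drop]

# Refit on the reduced feature set and compare validation accuracy
lr_reduced = LogisticRegression().fit(train.loc[train_idx, reduced_cols], y_train)
reduced_accuracy = accuracy_score(y_val, lr_reduced.predict(train.loc[val_idx, reduced_cols]))
print('Features kept:', len(reduced_cols), 'Validation accuracy:', reduced_accuracy)
```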
In following workshops, we'll learn more about cross-validation and using confusion matrices to check the performance and accuracy of our models.
# GAN
```
from keras.datasets import mnist
from keras.layers import Input, Dense, Reshape, Flatten, Dropout
from keras.layers import BatchNormalization, Activation, ZeroPadding2D
from keras.layers.advanced_activations import LeakyReLU
from keras.layers.convolutional import UpSampling2D, Conv2D
from keras.models import Sequential, Model
from keras.optimizers import Adam
import matplotlib.pyplot as plt
import sys
import numpy as np
```
In a GAN we want to construct a distribution $\mathbb{P}_{g}$, called a generative distribution, which mimics the real distribution $\mathbb{P}_{r}$.
For that we use a neural network $G=g_{\theta}$, a noise distribution $p(z)$ such that $x'=g_{\theta}(z),\ z \sim p(z)$, and a discriminator $D$:
$$\underset{g_{\theta}}{\text{min}} \ \underset{D}{\text{max}} \ L(D,g_{\theta}) = \underset{x \sim \mathbb{P}_{r}}{\mathbb{E}}[\log(D(x))]+ \underset{z \sim p(z)}{\mathbb{E}}[\log(1-D(g_{\theta}(z)))]$$
<img src="imgs/gan.png" alt="Drawing" style="width: 500px;"/>
Load the MNIST dataset
```
from keras.datasets import mnist
import numpy as np
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# Rescale -1 to 1
X_train = x_train / 127.5 - 1.
X_train = np.expand_dims(X_train, axis=3)
X_train.shape
x_test.shape
img_rows = 28
img_cols = 28
channels = 1
img_shape = (img_rows, img_cols, channels)
plt.imshow(X_train[46].reshape(28,28))
plt.gray()
plt.show()
```
##### Question 1. Create a generator model
The generator has the following layers:
- A dense layer of width 256 that takes as input the dimension of the latent space (this is a parameter that must be configurable)
- A LeakyReLU activation with parameter alpha=0.2: what does it correspond to?
- We use batch normalization with momentum 0.8: what does it correspond to?
- A second dense layer of width 512
- We use the same LeakyReLU and batch normalization for this layer
- A third dense layer of width 1024 with the same batch normalization and LeakyReLU activation
- A last dense layer with width equal to the size of the flattened output image
- The activation is tanh: what does it correspond to?
The function must take as input a vector whose dimension is that of the latent space (representing the noise) and return a Model
```
def build_generator(img_shape,latent_dim=100):
model = Sequential()
model.add(Dense(256, input_dim=latent_dim))
model.add(LeakyReLU(alpha=0.2))
model.add(BatchNormalization(momentum=0.8))
model.add(Dense(512))
model.add(LeakyReLU(alpha=0.2))
model.add(BatchNormalization(momentum=0.8))
model.add(Dense(1024))
model.add(LeakyReLU(alpha=0.2))
model.add(BatchNormalization(momentum=0.8))
model.add(Dense(np.prod(img_shape), activation='tanh'))
model.add(Reshape(img_shape))
noise = Input(shape=(latent_dim,))
img = model(noise)
return Model(noise, img)
gen=build_generator(img_shape=img_shape)
gen.summary()
```
##### Question 2. Build the discriminator
The discriminator has the following layers:
- A Dense layer of width 512 with a LeakyReLU activation with parameter alpha=0.2
- A second Dense layer of width 256 with a LeakyReLU activation with parameter alpha=0.2
- A last Dense layer for the binary classification
The model must take as input an image and output the classification result
```
def build_discriminator(img_shape):
model = Sequential()
model.add(Flatten(input_shape=img_shape))
model.add(Dense(512))
model.add(LeakyReLU(alpha=0.2))
model.add(Dense(256))
model.add(LeakyReLU(alpha=0.2))
model.add(Dense(1, activation='sigmoid'))
img = Input(shape=img_shape)
validity = model(img)
return Model(img, validity)
```
##### Question 3. Build the two neural networks with the MNIST configuration and print their properties. We will use 100 as the dimension of the latent space.
```
latent_dim=100
generator=build_generator(img_shape,latent_dim=latent_dim)
generator.layers[1].summary()
discriminator=build_discriminator(img_shape)
discriminator.layers[1].summary()
```
##### 4. Compile the model
The optimizer chosen is 'Adam' with parameters 0.0002 and 0.5: what do these correspond to?
First compile the discriminator
```
optimizer = Adam(0.0002, 0.5)
# Build and compile the discriminator
discriminator.compile(loss='binary_crossentropy',optimizer=optimizer,metrics=['accuracy'])
```
Compiling the generator is a bit trickier: (live explanations)
```
# The generator takes noise as input and generates imgs
z = Input(shape=(100,))
img = generator(z)
# For the combined model we will only train the generator
discriminator.trainable = False
# The discriminator takes generated images as input and determines validity
validity = discriminator(img)
# The combined model (stacked generator and discriminator)
# Trains the generator to fool the discriminator
combined = Model(z, validity)
combined.compile(loss='binary_crossentropy', optimizer=optimizer)
```
##### Question 5. Write a function that samples 25 images from normal noise $\mathcal{N}(0,I_{d})$ with $d$ configurable. It should be configurable whether we save or plot the images
```
def sample_images(generator,latent_dim=100,toplot=False,epoch=None):
r, c = 5, 5
noise = np.random.normal(0, 1, (r * c, latent_dim))
gen_imgs = generator.predict(noise)
# Rescale images 0 - 1
gen_imgs = 0.5 * gen_imgs + 0.5
fig, axs = plt.subplots(r, c,figsize=(10,10))
cnt = 0
for i in range(r):
for j in range(c):
axs[i,j].imshow(gen_imgs[cnt, :,:,0], cmap='gray')
axs[i,j].axis('off')
cnt += 1
if toplot:
plt.show()
else:
fig.savefig("./results/%d.png" % epoch)
plt.close()
```
##### Question 6. What is the definition of the binary cross entropy? Train the model using a batch size of 32 and 10000 epochs.
What can you say about the results?
The binary cross entropy is defined as $L(y,p)=-(y\log(p)+(1-y)\log(1-p))$ where $p$ is the predicted probability and $y$ the true class
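As a small numeric sketch of this loss (plain NumPy, just to make the formula concrete; Keras' `binary_crossentropy` is what the models above actually use):
```
import numpy as np

def binary_cross_entropy(y, p, eps=1e-12):
    # Clip predicted probabilities to avoid log(0)
    p = np.clip(p, eps, 1.0 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

y = np.array([1.0, 0.0, 1.0])   # true classes
p = np.array([0.9, 0.2, 0.6])   # predicted probabilities
print(binary_cross_entropy(y, p))  # about 0.28
```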
```
batch_size=32
epochs=10000
valid = np.ones((batch_size, 1))
fake = np.zeros((batch_size, 1))
dloss_onreal=[]
dloss_onfake=[]
total_dloss=[]
gloss=[]
for epoch in range(epochs):
# ---------------------
# Train Discriminator
# ---------------------
# Select a random batch of images
idx = np.random.randint(0, X_train.shape[0], batch_size)
imgs = X_train[idx]
noise = np.random.normal(0, 1, (batch_size, latent_dim)) #z
# Generate a batch of new images
gen_imgs = generator.predict(noise) #g_theta(z)
# Train the discriminator
d_loss_real = discriminator.train_on_batch(imgs, valid) #train on D(x)
d_loss_fake = discriminator.train_on_batch(gen_imgs, fake) #train on D(g_theta(z))
d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)
# ---------------------
# Train Generator
# ---------------------
noise = np.random.normal(0, 1, (batch_size, latent_dim))
# Train the generator (to have the discriminator label samples as valid)
g_loss = combined.train_on_batch(noise, valid)
dloss_onreal.append(d_loss_real)
    dloss_onfake.append(d_loss_fake)
total_dloss.append(d_loss)
gloss.append(g_loss)
if epoch % 200 ==0:
print('epoch number : ',str(epoch))
sample_images(generator,epoch=epoch)
```
We can clearly see the mode collapse issue: the generator ends up producing only a few similar-looking digit shapes instead of covering all ten classes.
##### 7. Sample 25 generated images and plot the loss curves
```
sample_images(generator,toplot=True)
plt.figure(figsize=(10,10))
plt.plot(np.array(total_dloss)[:,0])
plt.plot(gloss)
plt.legend(['Total D loss','G loss'])
plt.show()
```
```
import matplotlib.pyplot as plt
import numpy as np
def evaluate_h(w, X):
assert len(w.shape) == 1
assert len(X.shape) == 2
assert w.shape[0] == X.shape[0]
return np.sign(w @ X)
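# Perceptron learning algorithm (PLA): repeatedly evaluate the current hypothesis,
# pick the first misclassified training point x_i and update the weights with
#     w <- w + y_i * x_i
# The loop stops once every training point is classified correctly, which is
# guaranteed to happen here because the labels below are generated from a linear
# separator (w_actual), so the data is linearly separable.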
def run_perceptron(w_initial, X_training, y_training, iteration_callback=None):
w = w_initial.copy()
n = 0
while True:
y = evaluate_h(w, X_training)
if iteration_callback:
iteration_callback(n, w)
correct = y == y_training
if np.all(correct):
return w
else:
i = np.argmax(~correct) # indice of first misclassified point
w = w + y_training[i]*X_training[:, i]
n = n + 1
def plot_hypothesis(x_coordinates, w, *plot_args, **plot_kwargs):
m = -w[1]/w[2] if w[2] != 0 else 0
b = -w[0]/w[2] if w[2] != 0 else 0
y_coordinates = m*x_coordinates + b
plt.plot(x_coordinates, y_coordinates, *plot_args, **plot_kwargs)
w_actual = np.array([1, 1, 1])
w_initial = np.array([3, -50, 0])
num_dimensions = 2
num_training_samples = 5
training_data_range = 10
X_training = np.vstack([
np.ones(num_training_samples),
np.random.uniform(
-training_data_range,
training_data_range,
(num_dimensions, num_training_samples),
),
])
y_training = evaluate_h(w_actual, X_training)
x_coordinates = X_training[1, :]
y_coordinates = X_training[2, :]
colors = ['r' if y > 0 else 'b' for y in y_training]
plt.scatter(x_coordinates, y_coordinates, c=colors)
x_coordinates_hypothesis = np.array([-training_data_range, training_data_range])
# created dummy scattered plot with 5 points for verification
def plot_iteration(n, w):
if n % 5 == 0:
label = 'iteration {}'.format(n)
plot_hypothesis(x_coordinates_hypothesis, w, 'k:', label=label)
w_final = run_perceptron(w_initial, X_training, y_training, plot_iteration)
plt.legend()
plt.show()
# created dummy scattered plot with 5 points for verification
w_actual = np.array([1, 1, 1])
w_initial = np.array([3, -50, 0])
num_dimensions = 2
num_training_samples = 20
training_data_range = 10
X_training = np.vstack([
np.ones(num_training_samples),
np.random.uniform(
-training_data_range,
training_data_range,
(num_dimensions, num_training_samples),
),
])
y_training = evaluate_h(w_actual, X_training)
x_coordinates = X_training[1, :]
y_coordinates = X_training[2, :]
colors = ['r' if y > 0 else 'b' for y in y_training]
plt.scatter(x_coordinates, y_coordinates, c=colors)
x_coordinates_hypothesis = np.array([-training_data_range, training_data_range])
w_actual = np.array([1, 1, 1])
w_initial = np.array([3, -50, 0])
num_dimensions = 2
num_training_samples = 20
training_data_range = 10
X_training = np.vstack([
np.ones(num_training_samples),
np.random.uniform(
-training_data_range,
training_data_range,
(num_dimensions, num_training_samples),
),
])
y_training = evaluate_h(w_actual, X_training)
x_coordinates = X_training[1, :]
y_coordinates = X_training[2, :]
colors = ['r' if y > 0 else 'b' for y in y_training]
plt.scatter(x_coordinates, y_coordinates, c=colors)
x_coordinates_hypothesis = np.array([-training_data_range, training_data_range])
def plot_iteration(n, w):
if n % 5 == 0:
label = 'iteration {}'.format(n)
plot_hypothesis(x_coordinates_hypothesis, w, 'k:', label=label)
w_final = run_perceptron(w_initial, X_training, y_training, plot_iteration)
plot_hypothesis(x_coordinates_hypothesis, w_final, 'k', label='final')
plot_hypothesis(x_coordinates_hypothesis, w_actual, 'y', label='actual')
plt.legend()
plt.xlim(-training_data_range, training_data_range)
plt.ylim(-training_data_range, training_data_range)
plt.show()
# data size 20
w_actual = np.array([1, 1, 1])
w_initial = np.array([3, -50, 0])
num_dimensions = 2
num_training_samples = 100
training_data_range = 10
X_training = np.vstack([
np.ones(num_training_samples),
np.random.uniform(
-training_data_range,
training_data_range,
(num_dimensions, num_training_samples),
),
])
y_training = evaluate_h(w_actual, X_training)
x_coordinates = X_training[1, :]
y_coordinates = X_training[2, :]
colors = ['r' if y > 0 else 'b' for y in y_training]
plt.scatter(x_coordinates, y_coordinates, c=colors)
x_coordinates_hypothesis = np.array([-training_data_range, training_data_range])
def plot_iteration(n, w):
if n % 5 == 0:
label = 'iteration {}'.format(n)
plot_hypothesis(x_coordinates_hypothesis, w, 'k:', label=label)
w_final = run_perceptron(w_initial, X_training, y_training, plot_iteration)
plot_hypothesis(x_coordinates_hypothesis, w_final, 'k', label='final')
plot_hypothesis(x_coordinates_hypothesis, w_actual, 'y', label='actual')
plt.legend()
plt.xlim(-training_data_range, training_data_range)
plt.ylim(-training_data_range, training_data_range)
plt.show()
# generated for data set with size 100
w_actual = np.array([1, 1, 1])
w_initial = np.array([3, -50, 0])
num_dimensions = 2
num_training_samples = 1000
training_data_range = 10
X_training = np.vstack([
np.ones(num_training_samples),
np.random.uniform(
-training_data_range,
training_data_range,
(num_dimensions, num_training_samples),
),
])
y_training = evaluate_h(w_actual, X_training)
x_coordinates = X_training[1, :]
y_coordinates = X_training[2, :]
colors = ['r' if y > 0 else 'b' for y in y_training]
plt.scatter(x_coordinates, y_coordinates, c=colors)
x_coordinates_hypothesis = np.array([-training_data_range, training_data_range])
def plot_iteration(n, w):
if n % 5 == 0:
label = 'iteration {}'.format(n)
plot_hypothesis(x_coordinates_hypothesis, w, 'k:', label=label)
w_final = run_perceptron(w_initial, X_training, y_training, plot_iteration)
plot_hypothesis(x_coordinates_hypothesis, w_final, 'k', label='final')
plot_hypothesis(x_coordinates_hypothesis, w_actual, 'y', label='actual')
plt.legend()
plt.xlim(-training_data_range, training_data_range)
plt.ylim(-training_data_range, training_data_range)
plt.show()
#data set size d =1000
```
# Pawnee Fire analysis
The Pawnee Fire was a large wildfire that burned in Lake County, California. The fire started on June 23, 2018 and burned a total of 15,185 acres (61 km2) before it was fully contained on July 8, 2018.

## Remote Sensing using Sentinel-2 layer
```
from arcgis import GIS
gis = GIS(profile='plenary_deldev_profile')
```
For this analysis, we will be using Sentinel-2 imagery from the Living Atlas.
Sentinel-2 is an Earth observation mission developed by ESA as part of the Copernicus Programme to perform terrestrial observations in support of services such as forest monitoring, land cover changes detection, and natural disaster management.
```
sentinel_item = gis.content.search('Sentinel-2 Views', outside_org=True)[0]
sentinel_item
```
### Select before and after rasters
```
import arcgis
sentinel = sentinel_item.layers[0]
aoi = {'spatialReference': {'latestWkid': 3857, 'wkid': 102100},
'xmax': -13643017.100720055,
'xmin': -13652113.10708598,
'ymax': 4739654.477447927,
'ymin': 4731284.622850712}
arcgis.env.analysis_extent = aoi
sentinel.extent = aoi
import pandas as pd
from datetime import datetime
selected = sentinel.filter_by(where="acquisitiondate BETWEEN timestamp '2018-06-15 00:00:00' AND timestamp '2018-06-24 19:59:59'",
geometry=arcgis.geometry.filters.intersects(aoi))
df = selected.query(out_fields="AcquisitionDate, Tile_ID, CloudCover", order_by_fields="AcquisitionDate").df
df['acquisitiondate'] = pd.to_datetime(df['acquisitiondate'], unit='ms')
df.tail(40)
prefire = sentinel.filter_by('OBJECTID=2750251') # scene acquired before the fire started
midfire = sentinel.filter_by('OBJECTID=2800097') # scene acquired while the fire was burning
```
## Visual Assessment
```
from arcgis.raster.functions import *
truecolor = apply(midfire, 'Natural Color with DRA')
truecolor
```
### Visualize Burn Scars
Extract the [12, 11, 4] bands to improve visibility of fire and burn scars. This band combination pushes further into the SWIR range of the electromagnetic spectrum, where there is less susceptibility to smoke and haze generated by a burning fire.
```
extract_band(midfire, [12,11,4])
```
For comparison, the same area before the fire started shows no burn scar.
```
extract_band(prefire, [12,11,4])
```
## Quantitative Assessment
The **Normalized Burn Ratio (NBR)** can be used to delineate the burnt areas and identify the severity of the fire.
The formula for the NBR is very similar to that of NDVI except that it uses near-infrared band 9 and the short-wave infrared band 13:
\begin{align}
{\mathbf{NBR}} = \frac{\mathbf{B9} - \mathbf{B13}}{\mathbf{B9} + \mathbf{B13} + \mathbf{WS}} \\
\end{align}
The NBR equation was designed to be calculated from reflectance, but it can be calculated from radiance and digital number (DN) with changes to the burn severity table below. The WS parameter is used for water suppression and is typically 2000.
For a given area, NBR is calculated from an image just prior to the burn and a second NBR is calculated for an image immediately following the burn. Burn extent and severity is judged by taking the difference between these two index layers:
\begin{align}
{\Delta \mathbf{NBR}} = \mathbf{NBR_{prefire}} - \mathbf{NBR_{postfire}} \\
\end{align}
The meaning of the ∆NBR values can vary by scene, and interpretation in specific instances should always be based on some field assessment. However, the following table from the USGS FireMon program can be useful as a first approximation for interpreting the NBR difference:
| \begin{align}{\Delta \mathbf{NBR}} \end{align} | Burn Severity |
| ------------- |:-------------:|
| 0.1 to 0.27 | Low severity burn |
| 0.27 to 0.44 | Medium severity burn |
| 0.44 to 0.66 | Moderate severity burn |
| > 0.66 | High severity burn |
[Source: http://wiki.landscapetoolbox.org/doku.php/remote_sensing_methods:normalized_burn_ratio]
### Use Band Arithmetic and Map Algebra
```
nbr_prefire = band_arithmetic(prefire, "(b9 - b13) / (b9 + b13 + 2000)")
nbr_postfire = band_arithmetic(midfire, "(b9 - b13) / (b9 + b13 + 2000)")
nbr_diff = nbr_prefire - nbr_postfire
burnt_areas = colormap(remap(nbr_diff,
input_ranges=[0.1, 0.27, # low severity
0.27, 0.44, # medium severity
0.44, 0.66, # moderate severity
0.66, 1.00], # high severity burn
output_values=[1, 2, 3, 4],
no_data_ranges=[-1, 0.1], astype='u8'),
colormap=[[4, 0xFF, 0xC3, 0], [3, 0xFA, 0x8E, 0], [2, 0xF2, 0x55, 0],
[1, 0xE6, 0, 0]])
burnt_areas.draw_graph()
```
<img src="./img/pawnee-fire-graph.jpg" />
### Area calculation
```
pixx = (aoi['xmax'] - aoi['xmin']) / 1200.0
pixy = (aoi['ymax'] - aoi['ymin']) / 450.0
res = burnt_areas.compute_histograms(aoi, pixel_size={'x':pixx, 'y':pixy})
numpix = 0
histogram = res['histograms'][0]['counts'][1:]
for i in histogram:
numpix += i
```
### Report burnt area
```
from IPython.display import HTML
sqmarea = numpix * pixx * pixy # in sq. m
acres = 0.00024711 * sqmarea # in acres
HTML('<h3>Fire has consumed <font color="red">{:,} acres</font> till {}</h3>.' \
.format(int(acres), df.iloc[-1]['acquisitiondate'].date()))
import matplotlib.pyplot as plt
%matplotlib inline
plt.title('Distribution by severity', y=-0.1)
plt.pie(histogram, labels=['Low Severity', 'Medium Severity',
'Moderate Severity', 'High Severity'])
plt.axis('equal')
```
### Visualize burnt areas
```
firemap = gis.map()
firemap.extent = aoi
firemap
```
<img src="./img/pawnee-fire-inmem-raster.jpg" width=100% />
```
firemap.add_layer([truecolor, burnt_areas])
```
## Raster to Feature layer conversion
Use Raster Analytics and Geoanalytics to convert the burnt area raster to a feature layer. The `to_features()` method converts the raster to a feature layer and `create_buffers()` fills holes in the features and dissolves them to output one feature that covers the extent of the Pawnee Fire.
#### Access Portal
```
#gis = GIS("https://datasciencedev.esri.com/portal", "rjackson", "admin123", verify_cert=False)
portal = GIS("https://python.playground.esri.com/portal", "arcgis_python", "amazing_arcgis_123")
fire = portal.content.search('PawneeFireArea', 'Feature Layer')[0]
fire
```
### Convert and Buffer Data
```
from arcgis.geoanalytics.use_proximity import create_buffers
fire_item = burnt_areas.to_features(output_name='Pawnee_Fire_boundary_output_tst')
fire_layer = fire_item.layers[0]
fire = create_buffers(fire_layer, 100, 'Meters', dissolve_option='All',
multipart=True, output_name='PawneeFireAreaTst')
```
## Visualize feature layer
```
vectormap = gis.map()
vectormap.basemap = 'dark-gray'
vectormap.extent = aoi
vectormap.add_layer(fire)
vectormap
```
<img src="./img/pawnee-fire-vectormap.jpg"/>
## Impact Assessment
### Assess human impact
```
from arcgis.geoenrichment import enrich
from arcgis.geometry import filters
from arcgis.features import GeoAccessor, GeoSeriesAccessor
import pandas as pd
sdf = pd.DataFrame.spatial.from_layer(fire.layers[0])
gis = GIS(profile='deldev')
fire_geometry = sdf.iloc[0].SHAPE
sa_filter = filters.intersects(geometry=fire_geometry,
sr=fire_geometry['spatialReference'])
def age_pyramid(df):
import warnings
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
warnings.simplefilter(action='ignore', category=FutureWarning)
pd.options.mode.chained_assignment = None
plt.style.use('ggplot')
df = df[[x for x in impacted_people.columns if 'MALE' in x or 'FEM' in x]]
sf = pd.DataFrame(df.sum())
age = sf.index.str.extract('(\d+)').astype('int64')
f = sf[sf.index.str.startswith('FEM')]
m = sf[sf.index.str.startswith('MALE')]
sf = sf.reset_index(drop = True)
f = f.reset_index(drop = True)
m = m.reset_index(drop = True)
sf['age'] = age
f["age"] = age
m["age"] = age
f = f.sort_values(by='age', ascending=False).set_index('age')
m = m.sort_values(by='age', ascending=False).set_index('age')
popdf = pd.concat([f, m], axis=1)
popdf.columns = ['F', 'M']
popdf['agelabel'] = popdf.index.map(str) + ' - ' + (popdf.index+4).map(str)
popdf.M = -popdf.M
sns.barplot(x="F", y="agelabel", color="#CC6699", label="Female", data=popdf, edgecolor='none')
sns.barplot(x="M", y="agelabel", color="#008AB8", label="Male", data=popdf, edgecolor='none')
plt.ylabel('Age group')
plt.xlabel('Number of people');
return plt;
```
### Age Pyramid of affected population
```
from arcgis.geoenrichment import enrich
impacted_people = enrich(sdf, 'Age')
age_pyramid(impacted_people)
```

<a href="https://githubtocolab.com/giswqs/geemap/blob/master/examples/notebooks/50_cartoee_projections.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"/></a>
Uncomment the following line to install [geemap](https://geemap.org) and [cartopy](https://scitools.org.uk/cartopy/docs/latest/installing.html#installing) if needed. Keep in mind that cartopy can be challenging to install. If you are unable to install cartopy on your computer, you can try Google Colab with this [notebook example](https://colab.research.google.com/github/giswqs/geemap/blob/master/examples/notebooks/cartoee_colab.ipynb).
See below the commands to install cartopy and geemap using conda/mamba:
```
conda create -n carto python=3.8
conda activate carto
conda install mamba -c conda-forge
mamba install cartopy scipy -c conda-forge
mamba install geemap -c conda-forge
jupyter notebook
```
```
# !pip install cartopy scipy
# !pip install geemap
```
# Working with projections in cartoee
`cartoee` is a lightweight module to aid in creating publication-quality maps from Earth Engine processing results without having to download data. The `cartoee` package does this by requesting png images from EE results (which are usually good enough for visualization) and `cartopy` is used to create the plots. Utility functions are available to create plot aesthetics such as gridlines or color bars. **The notebook and the geemap cartoee module ([cartoee.py](https://geemap.org/cartoee)) were contributed by [Kel Markert](https://github.com/KMarkert). A huge thank you to him.**
```
import ee
import geemap
from geemap import cartoee
import cartopy.crs as ccrs
%pylab inline
geemap.ee_initialize()
```
## Plotting an image on a map
Here we are going to show another example of creating a map with EE results. We will use global sea surface temperature data for Jan-Mar 2018.
```
# get an earth engine image of ocean data for Jan-Mar 2018
ocean = (
ee.ImageCollection('NASA/OCEANDATA/MODIS-Terra/L3SMI')
.filter(ee.Filter.date('2018-01-01', '2018-03-01'))
.median()
.select(["sst"], ["SST"])
)
# set parameters for plotting
# will plot the Sea Surface Temp with specific range and colormap
visualization = {'bands':"SST", 'min':-2, 'max':30}
# specify region to focus on
bbox = [-180, -88, 180, 88]
fig = plt.figure(figsize=(15,10))
# plot the result with cartoee using a PlateCarre projection (default)
ax = cartoee.get_map(ocean, cmap='plasma', vis_params=visualization, region=bbox)
cb = cartoee.add_colorbar(ax, vis_params=visualization, loc='right', cmap='plasma')
ax.set_title(label = 'Sea Surface Temperature', fontsize = 15)
ax.coastlines()
plt.show()
```
### Mapping with different projections
You can specify what ever projection is available within `cartopy` to display the results from Earth Engine. Here are a couple examples of global and regions maps using the sea surface temperature example. Please refer to the [`cartopy` projection documentation](https://scitools.org.uk/cartopy/docs/latest/crs/projections.html) for more examples with different projections.
```
fig = plt.figure(figsize=(15,10))
# create a new Mollweide projection centered on the Pacific
projection = ccrs.Mollweide(central_longitude=-180)
# plot the result with cartoee using the Mollweide projection
ax = cartoee.get_map(ocean, vis_params=visualization, region=bbox,
cmap='plasma', proj=projection)
cb = cartoee.add_colorbar(ax,vis_params=visualization, loc='bottom', cmap='plasma',
orientation='horizontal')
ax.set_title("Mollweide projection")
ax.coastlines()
plt.show()
fig = plt.figure(figsize=(15,10))
# create a new Robinson projection centered on the Pacific
projection = ccrs.Robinson(central_longitude=-180)
# plot the result with cartoee using the Robinson projection
ax = cartoee.get_map(ocean, vis_params=visualization, region=bbox,
cmap='plasma', proj=projection)
cb = cartoee.add_colorbar(ax, vis_params=visualization, loc='bottom', cmap='plasma',
orientation='horizontal')
ax.set_title("Robinson projection")
ax.coastlines()
plt.show()
fig = plt.figure(figsize=(15,10))
# create a new Goode homolosine projection centered on the Pacific
projection = ccrs.InterruptedGoodeHomolosine(central_longitude=-180)
# plot the result with cartoee using the Goode homolosine projection
ax = cartoee.get_map(ocean, vis_params=visualization, region=bbox,
cmap='plasma', proj=projection)
cb = cartoee.add_colorbar(ax, vis_params=visualization, loc='bottom', cmap='plasma',
orientation='horizontal')
ax.set_title("Goode homolosine projection")
ax.coastlines()
plt.show()
fig = plt.figure(figsize=(15,10))
# create a new Equal Earth projection centered on the Pacific
projection = ccrs.EqualEarth(central_longitude=-180)
# plot the result with cartoee using the Equal Earth projection
ax = cartoee.get_map(ocean, vis_params=visualization, region=bbox,
cmap='plasma', proj=projection)
cb = cartoee.add_colorbar(ax, vis_params=visualization, loc='right', cmap='plasma',
orientation='vertical')
ax.set_title("Equal Earth projection")
ax.coastlines()
plt.show()
fig = plt.figure(figsize=(15,10))
# create a new orthographic projection focused on the Pacific
projection = ccrs.Orthographic(-130,-10)
# plot the result with cartoee using the orthographic projection
ax = cartoee.get_map(ocean, vis_params=visualization, region=bbox,
cmap='plasma', proj=projection)
cb = cartoee.add_colorbar(ax, vis_params=visualization, loc='right', cmap='plasma',
orientation='vertical')
ax.set_title("Orographic projection")
ax.coastlines()
plt.show()
```
### Warping artifacts
Oftentimes global projections are not needed, so we use a specific projection that provides the best view of the geographic region of interest. When we do this, image warping effects can sometimes occur. This is because `cartoee` only requests data for the region of interest, and when mapping with `cartopy` the pixels get warped to fit the view extent as well as possible. Consider the following example where we want to map SST over the South Pole:
```
fig = plt.figure(figsize=(15, 10))
# Create a new region to focus on
spole = [-180, -88, 180,0]
projection = ccrs.SouthPolarStereo()
# plot the result with cartoee focusing on the south pole
ax = cartoee.get_map(ocean, cmap='plasma', vis_params=visualization, region=spole, proj=projection)
cb = cartoee.add_colorbar(ax, vis_params=visualization, loc='right', cmap='plasma')
ax.coastlines()
ax.set_title('The South Pole')
plt.show()
```
As you can see from the result, there are warping effects on the plotted image. There is really no way of getting around this (other than requesting a larger extent of data, which may not always be possible).
So, what we can do is set the extent of the map to a more realistic view after plotting the image as in the following example:
```
fig = plt.figure(figsize=(15,10))
# plot the result with cartoee focusing on the south pole
ax = cartoee.get_map(ocean, cmap='plasma', vis_params=visualization, region=spole, proj=projection)
cb = cartoee.add_colorbar(ax, vis_params=visualization, loc='right', cmap='plasma')
ax.coastlines()
ax.set_title('The South Pole')
# get bounding box coordinates of a zoom area
zoom = spole
zoom[-1] = -20
# convert bbox coordinate from [W,S,E,N] to [W,E,S,N] as matplotlib expects
zoom_extent = cartoee.bbox_to_extent(zoom)
# set the extent of the map to the zoom area
ax.set_extent(zoom_extent,ccrs.PlateCarree())
plt.show()
```
<a href="https://colab.research.google.com/github/TheGupta2012/qctrl-qhack-Hostages-of-the-Entangled-Dungeons/blob/master/Robust_control_x_gate.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
**Creating Robust Control for Single qubit gates**
Here we introduce the essential concepts behind robust control. We create a model of noise on a quantum computer and simulate its performance. Then we show how to create controls that are robust to this noise process. We demonstrate the control's robustness with a simulation.
## Imports and initialization
```
import matplotlib.pyplot as plt
import numpy as np
import qctrlvisualizer as qv
from attr import asdict
from qctrl import Qctrl
# Starting a session with the API
qctrl = Qctrl(email = 'harshit.co19@nsut.ac.in', password = 'HARSHITcontrol')
# Define standard matrices
identity = np.array([[1.0, 0.0], [0.0, 1.0]], dtype=complex)
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
sigma_y = np.array([[0.0, -1j], [1j, 0.0]], dtype=complex)
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
sigma_m = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)
sigmas = [sigma_x, sigma_y, sigma_z]
sigma_names = ["X", "Y", "Z"]
not_gate = np.array([[0.0, -1.0], [1.0, 0.0]])
# Plotting and formatting methods
plt.style.use(qv.get_qctrl_style())
def plot_simulation_trajectories(figure, times, coherent_samples, noisy_trajectories):
ideal_bloch_sphere_coords = np.array(
[
[
np.real(
np.dot(
sample.state_vector.conj(),
np.matmul(sigma, sample.state_vector),
)
)
for sigma in sigmas
]
for sample in coherent_samples
]
)
noisy_bloch_sphere_coords = np.array(
[
[
[
np.real(
np.dot(
sample.state_vector.conj(),
np.matmul(sigma, sample.state_vector),
)
)
for sigma in sigmas
]
for sample in trajectory.samples
]
for trajectory in noisy_trajectories
]
)
figure.set_figheight(6.0)
figure.set_figwidth(7.0)
axes = figure.subplots(nrows=3, ncols=1, sharex=True, sharey=False, squeeze=False)[
:, 0
]
for a in range(3):
axes[a].set_ylabel(sigma_names[a])
axes[a].set_ylim([-1.1, 1.1])
for t in range(noisy_bloch_sphere_coords.shape[0]):
axes[a].plot(
times * 1e6,
noisy_bloch_sphere_coords[t, :, a],
"--",
color="#680CE9",
alpha=0.25,
)
axes[a].plot(times * 1e6, ideal_bloch_sphere_coords[:, a], "-", color="#680CE9")
axes[2].set_xlabel("Time ($\mu$s)")
axes[0].set_title("Bloch sphere coordinates")
def plot_simulation_noise_directions(figure, times, coherent_samples):
figure.set_figheight(6.0)
figure.set_figwidth(7.0)
noise_operator_directions = np.array(
[
[
0.5
* np.real(
np.trace(
np.matmul(
sigma,
np.matmul(
sample.evolution_operator.conj().T,
np.matmul(sigma_z, sample.evolution_operator),
),
)
)
)
for sigma in sigmas
]
for sample in coherent_samples
]
)
axes = figure.subplots(nrows=3, ncols=1, sharex=True, sharey=False, squeeze=False)[
:, 0
]
for a in range(3):
axes[a].set_ylabel(sigma_names[a])
axes[a].set_ylim([-1.1, 1.1])
axes[a].plot(
robust_point_times * 1e6,
noise_operator_directions[:, a],
"-",
color="#680CE9",
)
axes[a].fill_between(
robust_point_times * 1e6,
0,
noise_operator_directions[:, a],
color="#680CE9",
alpha=0.25,
)
axes[2].set_xlabel("Time ($\mu$s)")
axes[0].set_title("Bloch sphere directions")
def plot_noise_spectral_density(figure, nsd_samples):
frequencies = np.array([sample["frequency"] for sample in nsd_samples])
powers = np.array([sample["power"] for sample in nsd_samples])
axes = figure.subplots(nrows=1, ncols=1, sharex=True, sharey=False, squeeze=False)[
0, 0
]
axes.plot(frequencies / 1e6, powers * 1e6)
axes.fill_between(frequencies / 1e6, 0, powers * 1e6, alpha=0.25)
axes.set_xlabel("Frequency (MHz)")
axes.set_ylabel("Power density (1/MHz)")
axes.set_title("Dephasing noise spectral density")
def pm_format(average, std):
return "{:.4f}".format(average) + "+/-" + "{:.4f}".format(std)
# The main signal which is sent to the qubit in the cloud
def bandwidth_limited_pwc_signal(
name, duration, segment_count, max_rabi_rate, cutoff_frequency
):
# create a raw pwc_signal where the amplitude of each segment is an optimization variables
raw_signal = qctrl.operations.pwc_signal(
values=qctrl.operations.bounded_optimization_variable(
count=segment_count, lower_bound=-max_rabi_rate, upper_bound=max_rabi_rate
),
duration=duration,
)
# pass the signal through a bandwidth limited filter
filtered_signal = qctrl.operations.convolve_pwc(
raw_signal, qctrl.operations.sinc_integral_function(cutoff_frequency)
)
# resample the smooth filtered signal as a pwc_signal
final_signal = qctrl.operations.discretize_stf(
stf=filtered_signal,
duration=robust_duration,
segments_count=segment_count,
name=name,
)
return final_signal
```
## Single qubit with dephasing noise
To better understand how noise affects a quantum computer we are going to create a simulation.
To start we write down a Hamiltonian, which will mathematically describe this physical system:
\begin{align*}
H_{\rm total}(t) = & H_{\rm control}(t) + H_{\rm noise}(t).
\end{align*}
### Control: Standard microwave pulse that creates a NOT Gate
The control part of the Hamiltonian is:
\begin{align}
H_{\rm control}(t) = \Omega_{\rm I}(t) \sigma_{x}/2 + \Omega_{\rm Q}(t) \sigma_{y}/2.
\end{align}
Where $\Omega_I(t)$ and $\Omega_Q(t)$ are the time-dependent Rabi rate created by the IQ modulated microwave pulse applied to control the qubits state, which couples to the qubit state through the $\sigma_k$ operators.
We are trying to apply a NOT gate to the qubit. The simplest way to do this is to apply a Q modulated microwave pulse at the maximum Rabi rate $\Omega_{\rm Q}(t) = \Omega_{\rm max}$ for a duration of $\pi/\Omega_{\rm max}$, while the I modulated microwave pulse is set to zero $\Omega_{\rm I}(t) = 0$. We will call this the standard NOT gate.
```
omega_max = 2 * np.pi * 1e6 # Hz
standard_duration = np.pi / omega_max # s
standard_pulse_segments = [
qctrl.types.ComplexSegmentInput(duration=standard_duration, value=omega_max),
]
plot_segments = {
"$\Omega_Q$": [
{"duration": segment.duration, "value": segment.value}
for segment in standard_pulse_segments
]
}
qv.plot_controls(plt.figure(), plot_segments)
plt.show()
```
### Noise: Magnetic field with a 1/f spectrum
The noise part of the Hamiltonian is:
\begin{align}
H_{\rm noise}(t) = \eta(t) \sigma_z / 2.
\end{align}
We treat the noisy magnetic field environment as a classical noise process $\eta(t)$ coupled to the quantum system with a noise operator $\sigma_z$. This approximate model is often reasonable for real quantum computing hardware when the decoherence time (T2) is the limiting factor, being much shorter than the relaxation time (T1) of the qubits.
The noise process $\eta(t)$ is sampled from a noise spectral density that follows a power law:
\begin{align}
S_{\eta}(\omega) = \frac{\omega_{\rm cutoff}^{a-1}}{\omega^a + \omega_{\rm cutoff}^a},
\end{align}
where $\omega_{\rm cutoff}$ is the cutoff frequency and $a$ is the order of the power law. It is common for magnetic field environments to follow a 1/f power law ($a=1$), where low-frequency noise dominates.
Different physical processes will couple to the quantum computer through different noise operators. The key to getting a good simulation is to identify the noises that most significantly affect our qubits.
```
def power_spectrum(frequencies, frequency_cutoff, power):
return frequency_cutoff ** (power - 1) / (
frequencies ** power + frequency_cutoff ** power
)
frequencies = np.linspace(0, 2.0e4, 1000)
power_densities = 4e10 * power_spectrum(frequencies, 1.0e2, 1.0)
nsd_sampled_points = [
{"frequency": f, "power": p, "power_uncertainty": 0.0, "weight": 0.0}
for f, p in zip(frequencies, power_densities)
]
plot_noise_spectral_density(plt.figure(), nsd_sampled_points)
```
## Simulation of standard NOT Gate
Now that we have a Hamiltonian we can create a simulation. The control we have is a `shift` with $\sigma_y$ as the `operator` and $\Omega_{\rm Q}(t)$ as the `pulse`. The noise we have is an `additive noise` with $\sigma_z$ as the `operator` and $S_\eta(\omega)$ as the `linear_piecewise_noise_spectral_density`.
```
standard_control = qctrl.types.colored_noise_simulation.Shift(
control=standard_pulse_segments, operator=sigma_y / 2
)
noise_drift = qctrl.types.colored_noise_simulation.Drift(
operator=sigma_z / 2.0,
noise=qctrl.types.colored_noise_simulation.Noise(
power_densities=power_densities,
frequency_step=frequencies[1],
time_domain_sample_count=1000,
),
)
target = qctrl.types.TargetInput(operator=not_gate)
```
Now we can create a simulation of the qubit in a noisy environment.
*See also:* The [simulation user guide](https://docs.q-ctrl.com/boulder-opal/user-guides/simulation) explains how to create multiple types of simulations.
```
standard_point_times = np.linspace(0, standard_duration, 100)
standard_noisy_simulation_result = qctrl.functions.calculate_colored_noise_simulation(
duration=standard_duration,
sample_times=standard_point_times,
shifts=[standard_control],
drifts=[noise_drift],
trajectory_count=5,
initial_state_vector=np.array([1.0, 0.0]),
target=target,
)
```
For comparison, we can also create a simulation of a system with no noise.
```
standard_ideal_simulation_result = qctrl.functions.calculate_coherent_simulation(
duration=standard_duration,
sample_times=standard_point_times,
shifts=[standard_control],
initial_state_vector=np.array([1.0, 0.0]),
target=target,
)
```
### Noisy trajectories of the qubit state
We can display the noisy trajectories of the qubit using the coordinates of the [Bloch sphere](https://en.wikipedia.org/wiki/Bloch_sphere) as a representation of the state. We can see that the noisy trajectories, shown with dotted lines, take us away from the ideal simulation path, shown with the solid line. Most importantly, the final state of noisy trajectories diverges from the ideal final state. This indicates that the noise will introduce errors into our calculation and affect the outcomes of an algorithm that we want to run.
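For reference, the Bloch-sphere coordinates of a pure state are simply the expectation values of the Pauli operators. A minimal sketch of that conversion, assuming the `sigmas` list of Pauli matrices used in the plotting helpers above:
```
# Hypothetical helper (not part of the plotting functions above): the Bloch
# coordinates (x, y, z) of a pure state |psi> are the expectation values <psi|sigma_k|psi>.
def bloch_coordinates(state_vector):
    return np.array(
        [np.real(state_vector.conj() @ (sigma @ state_vector)) for sigma in sigmas]
    )
```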
```
plot_simulation_trajectories(
plt.figure(),
standard_point_times,
standard_ideal_simulation_result.samples,
standard_noisy_simulation_result.trajectories,
)
plt.show()
```
### Average gate infidelity of standard NOT gate
The results above are specific to a particular initial state. We can quantify the *average* performance of the gate under noise by looking at the average gate infidelity, defined as:
\begin{align}
\mathcal{I}_{\rm gate} = 1 - \mathbb{E}[ \rm{Tr}[ U_{\rm target}^\dagger(T) U_{\rm total}(T) ] ],
\end{align}
where $U_{k}(T)$ is the solution to $\dot{U}_{k}(t) = -i H_{k} U_{k}(t)$, $U_{\rm target}$ is the target unitary, in this case a NOT gate, and $\mathbb{E}[ \cdot ]$ is the classical stochastic average. An estimate of this number is automatically calculated when you provide a target to a stochastic simulation in BOULDER OPAL.
```
standard_final_sample = standard_noisy_simulation_result.average_samples[-1]
print("Average gate infidelity:")
print(
pm_format(
standard_final_sample.average_infidelity,
standard_final_sample.average_infidelity_uncertainty,
)
)
print(standard_noisy_simulation_result.average_samples[-1])
```
## Robust control design
The filter function framework can be used to design robust controls. We treat the design problem as a multi-objective optimization problem. First we assume the control field is parametrized by a set of variables $\Omega_{\rm candidate}(\underline{v},t)$.
The first target of our optimization is to ensure that our optimized pulse performs the correct operation. To do this we need to minimize the infidelity of the control:
\begin{align}
\mathcal{I}_{\rm control} = 1 - \left|\frac{{\rm Tr}\left[U_{\rm target}^\dagger U_{\rm control}(T)\right]}{{\rm Tr}\left[U_{\rm target}^\dagger U_{\rm target}\right]}\right|^2,
\end{align}
This quantifies how close the control is to the target operation if there is no noise.
The second target of our optimization is to ensure that our optimized pulse is robust to the noise. Physically relevant noise processes are commonly dominated by low-frequency noise, in which case it simplifies the numerical calculation to minimize just the zero-frequency part of the filter function. We call this the infidelity of the noise:
\begin{align}
\mathcal{I}_{\rm noise} = w^2 \left|\left| \int dt H_{\rm noise}^{\rm (control)}(t) \right|\right|_2^2,
\end{align}
where $w$ is a relative weight of the filter cost compared to the operation; a good value for additive noises is $w=1/T$.
The multi-objective optimization problem can be represented as minimizing the cost
\begin{align}
\mathcal{I}_{\rm robust}(\underline{v}) = \mathcal{I}_{\rm control}(\underline{v}) + \mathcal{I}_{\rm noise}(\underline{v}).
\end{align}
If we can find a control where $\mathcal{I}_{\rm robust}(\underline{v})$ is very close to zero, we can be sure that it will both complete the correct operation and be robust to low frequency noise.
### Optimizing a robust NOT gate
We can create a robust NOT gate using the BOULDER OPAL optimizer. The [optimization feature](https://docs.q-ctrl.com/boulder-opal/user-guides/optimization) allows the user to define an optimization with arbitrary pulse constraints.
We are going to construct two control pulses $\Omega_I(\underline{v},t)$ and $\Omega_Q(\underline{v},t)$ which have a maximum Rabi rate $\Omega_{\rm max}$ and a bandwidth limit defined by a cutoff frequency $\Omega_{\rm cutoff}$.
The optimizer requires that you define the quantum system as a `graph` that represents how a set of `optimization_variables` maps to an `infidelity` you want to minimize. A series of convenience methods makes
creating this `graph` straightforward once you have mathematically written down the total Hamiltonian ($H_{\rm total}$). Below we show how to create a `graph` for optimizing a qubit with dephasing noise. On each line, we write down what the current variable represents in the mathematical equation of the total Hamiltonian.
We restate the entire Hamiltonian below so we can easily refer to it:
\begin{align}
H_{\rm total}(t) = & H_{\rm control}(t) + H_{\rm noise}(t), \\
H_{\rm control}(t) = & \Omega_{\rm I}(t) \sigma_{x}/2 + \Omega_{\rm Q}(t) \sigma_{y}/2, \\
H_{\rm noise}(t) = & \eta(t) \sigma_z / 2.
\end{align}
```
robust_duration = 3.0 * standard_duration
omega_cutoff = 1e7
segment_count = 100
with qctrl.create_graph() as graph:
# Omega_I(v,t)
pulse_i = bandwidth_limited_pwc_signal(
name="I",
duration=robust_duration,
segment_count=segment_count,
max_rabi_rate=omega_max,
cutoff_frequency=omega_cutoff,
)
# Omega_Q(v,t)
pulse_q = bandwidth_limited_pwc_signal(
name="Q",
duration=robust_duration,
segment_count=segment_count,
max_rabi_rate=omega_max,
cutoff_frequency=omega_cutoff,
)
# Omega_I(t) sigma_x/2
robust_control_i = qctrl.operations.pwc_operator(
signal=pulse_i, operator=sigma_x / 2.0
)
# Omega_Q(t) sigma_y/2
robust_control_q = qctrl.operations.pwc_operator(
signal=pulse_q, operator=sigma_y / 2.0
)
# H_control = Omega_I(t) sigma_x/2 + Omega_Q(t) sigma_y/2
control_hamiltonian = qctrl.operations.pwc_sum([robust_control_i, robust_control_q])
    # w * sigma_z / 2, with w = 1/T (the relative weight discussed above)
noise_operator = qctrl.operations.constant_pwc_operator(
robust_duration, sigma_z / 2.0 / robust_duration
)
# create U_target
target_unitary = qctrl.operations.target(operator=not_gate)
# create I_robust(v) = I_control(v) + I_noise(v)
infidelity = qctrl.operations.infidelity_pwc(
hamiltonian=control_hamiltonian,
noise_operators=[
noise_operator,
],
target_operator=target_unitary,
name="infidelity",
)
```
When you run an optimization, a series of searches is performed and the pulse with the smallest cost is returned. The optimization is stochastic, so a different result will be returned each time, but every result will satisfy the constraints.
A pulse that is both robust and performs the correct operation will have a cost very close to zero. If the returned cost does not satisfy this condition, you may need to relax your constraints; increasing the total duration and/or the number of segments will often help.
```
optimization_result = qctrl.functions.calculate_optimization(
cost_node_name="infidelity",
output_node_names=["infidelity", "I", "Q"],
graph=graph,
)
optimization_result
print("Best cost:")
print(optimization_result.cost)
```
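Because each call launches an independent stochastic search, a simple refinement (a sketch, not part of the original workflow) is to repeat the optimization a few times and keep the lowest-cost result:
```
# Hypothetical convenience loop: rerun the same optimization and keep the best pulse.
best_result = None
for _ in range(3):
    candidate = qctrl.functions.calculate_optimization(
        cost_node_name="infidelity",
        output_node_names=["infidelity", "I", "Q"],
        graph=graph,
    )
    if best_result is None or candidate.cost < best_result.cost:
        best_result = candidate
print("Best cost over repeated runs:", best_result.cost)
```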
Once you have completed an optimization with a good cost you can export the segments of the pulse to your device.
```
qv.plot_controls(
plt.figure(),
{
"$\Omega_I$": optimization_result.output["I"],
"$\Omega_Q$": optimization_result.output["Q"],
},
)
plt.show()
```
## Sending the pulse to the cloud to realize the NOT gate
```
Ivals, Qvals = optimization_result.output["I"], optimization_result.output["Q"]
Qvals
control_count = 1
segment_count = len(Ivals)
duration = Ivals[0]['duration']*1e9
shot_count = 512
values = []
R, C = [], []
for RE, COM in zip(Ivals, Qvals):
r = RE['value']
c = COM['value']
R.append(r)
C.append(c)
# R = (np.array(R) - np.mean(R))/ np.std(R)
# C = (np.array(C) - np.mean(C))/ np.std(C)
for r, c in zip(R, C):
values.append(r + 1j * c)
# values = np.array(values)
# values = (values - np.mean(values)) / np.std(values)
norm = np.linalg.norm(values)
values = values/norm
controls = []
controls.append({"duration":duration, "values": np.array(values)})
controls
experiment_results = qctrl.functions.calculate_qchack_measurements(
controls=controls,
shot_count=shot_count,
)
measurements = experiment_results.measurements
for k, measurement_counts in enumerate(measurements):
    print(f"control {k + 1}: {measurement_counts}")
for k, measurement_counts in enumerate(measurements):
    p0 = measurement_counts.count(0) / shot_count
    p1 = measurement_counts.count(1) / shot_count
    p2 = measurement_counts.count(2) / shot_count
    print(f"control {k + 1}: P(|0>) = {p0:.2f}, P(|1>) = {p1:.2f}, P(|2>) = {p2:.2f}")
```
## Image网 Submission `128x128`
This contains a submission for the Image网 leaderboard in the `128x128` category.
In this notebook we:
1. Train on 1 pretext task:
    - Train a network with a self-supervised contrastive objective on Image网's `/train`, `/unsup` and `/val` images.
2. Train on 4 downstream tasks:
- We load the pretext weights and train for `5` epochs.
- We load the pretext weights and train for `20` epochs.
- We load the pretext weights and train for `80` epochs.
- We load the pretext weights and train for `200` epochs.
Our leaderboard submissions are the accuracies we get on each of the downstream tasks.
```
import os
os.chdir('..')
import json
import torch
import numpy as np
from functools import partial
from fastai2.basics import *
from fastai2.vision.all import *
torch.cuda.set_device(6)
# Chosen parameters
lr=2e-2
sqrmom=0.99
mom=0.95
beta=0.
eps=1e-4
bs=64
sa=1
m = xresnet34
act_fn = Mish
pool = MaxPool
nc=20
source = untar_data(URLs.IMAGEWANG_160)
len(get_image_files(source/'unsup')), len(get_image_files(source/'train')), len(get_image_files(source/'val'))
# Use the Ranger optimizer
opt_func = partial(ranger, mom=mom, sqr_mom=sqrmom, eps=eps, beta=beta)
m_part = partial(m, c_out=nc, act_cls=torch.nn.ReLU, sa=sa, pool=pool)
model_meta[m_part] = model_meta[xresnet34]
save_name = 'models/imagewang_contrast_kornia_80ep_loweraug_temp5'
```
## Pretext Task: Contrastive Learning
```
#export
from pytorch_metric_learning import losses
class XentLoss(losses.NTXentLoss):
def forward(self, output1, output2):
stacked = torch.cat((output1, output2), dim=0)
labels = torch.arange(output1.shape[0]).repeat(2)
return super().forward(stacked, labels, None)
class ContrastCallback(Callback):
run_before=Recorder
def __init__(self, size=256, aug_targ=None, aug_pos=None, temperature=0.1):
self.aug_targ = ifnone(aug_targ, get_aug_pipe(size))
self.aug_pos = ifnone(aug_pos, get_aug_pipe(size))
self.temperature = temperature
def update_size(self, size):
pipe_update_size(self.aug_targ, size)
pipe_update_size(self.aug_pos, size)
def begin_fit(self):
self.old_lf = self.learn.loss_func
self.old_met = self.learn.metrics
self.learn.metrics = []
self.learn.loss_func = losses.NTXentLoss(self.temperature)
def after_fit(self):
        self.learn.loss_func = self.old_lf
self.learn.metrics = self.old_met
def begin_batch(self):
xb, = self.learn.xb
xb_targ = self.aug_targ(xb)
xb_pos = self.aug_pos(xb)
self.learn.xb = torch.cat((xb_targ, xb_pos), dim=0),
self.learn.yb = torch.arange(xb_targ.shape[0]).repeat(2),
#export
def pipe_update_size(pipe, size):
for tf in pipe.fs:
if isinstance(tf, RandomResizedCropGPU):
tf.size = size
def get_dbunch(size, bs, workers=8, dogs_only=False):
path = URLs.IMAGEWANG_160 if size <= 160 else URLs.IMAGEWANG
source = untar_data(path)
folders = ['unsup', 'val'] if dogs_only else None
files = get_image_files(source, folders=folders)
tfms = [[PILImage.create, ToTensor, RandomResizedCrop(size, min_scale=0.9)],
[parent_label, Categorize()]]
# dsets = Datasets(files, tfms=tfms, splits=GrandparentSplitter(train_name='unsup', valid_name='val')(files))
dsets = Datasets(files, tfms=tfms, splits=RandomSplitter(valid_pct=0.1)(files))
# batch_tfms = [IntToFloatTensor, *aug_transforms(p_lighting=1.0, max_lighting=0.9)]
batch_tfms = [IntToFloatTensor]
dls = dsets.dataloaders(bs=bs, num_workers=workers, after_batch=batch_tfms)
dls.path = source
return dls
size = 128
bs = 256
dbunch = get_dbunch(160, bs)
len(dbunch.train.dataset)
dbunch.show_batch()
# # xb = TensorImage(torch.randn(1, 3,128,128))
# afn_tfm, lght_tfm = aug_transforms(p_lighting=1.0, max_lighting=0.8, p_affine=1.0)
# # lght_tfm.split_idx = None
# xb.allclose(afn_tfm(xb)), xb.allclose(lght_tfm(xb, split_idx=0))
import kornia
#export
def get_aug_pipe(size, stats=None, s=.6):
stats = ifnone(stats, imagenet_stats)
rrc = kornia.augmentation.RandomResizedCrop((size,size), scale=(0.2, 1.0), ratio=(3/4, 4/3))
rhf = kornia.augmentation.RandomHorizontalFlip()
rcj = kornia.augmentation.ColorJitter(0.8*s, 0.8*s, 0.8*s, 0.2*s)
tfms = [rrc, rhf, rcj, Normalize.from_stats(*stats)]
pipe = Pipeline(tfms)
pipe.split_idx = 0
return pipe
aug = get_aug_pipe(size)
aug2 = get_aug_pipe(size)
cbs = ContrastCallback(size=size, aug_targ=aug, aug_pos=aug2, temperature=0.5)
xb,yb = dbunch.one_batch()
nrm = Normalize.from_stats(*imagenet_stats)
xb_dec = nrm.decodes(aug(xb))
show_images([xb_dec[0], xb[0]])
ch = nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten(), nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 128))
learn = cnn_learner(dbunch, m_part, opt_func=opt_func,
metrics=[], loss_func=CrossEntropyLossFlat(), cbs=cbs, pretrained=False,
config={'custom_head':ch}
).to_fp16()
learn.unfreeze()
learn.fit_flat_cos(80, 2e-2, wd=1e-2, pct_start=0.5)
torch.save(learn.model[0].state_dict(), f'{save_name}.pth')
# learn.save(save_name)
```
## Downstream Task: Image Classification
```
def get_dbunch(size, bs, workers=8, dogs_only=False):
path = URLs.IMAGEWANG_160 if size <= 160 else URLs.IMAGEWANG
source = untar_data(path)
if dogs_only:
dog_categories = [f.name for f in (source/'val').ls()]
dog_train = get_image_files(source/'train', folders=dog_categories)
valid = get_image_files(source/'val')
files = dog_train + valid
splits = [range(len(dog_train)), range(len(dog_train), len(dog_train)+len(valid))]
else:
files = get_image_files(source)
splits = GrandparentSplitter(valid_name='val')(files)
item_aug = [RandomResizedCrop(size, min_scale=0.35), FlipItem(0.5)]
tfms = [[PILImage.create, ToTensor, *item_aug],
[parent_label, Categorize()]]
dsets = Datasets(files, tfms=tfms, splits=splits)
batch_tfms = [IntToFloatTensor, Normalize.from_stats(*imagenet_stats)]
dls = dsets.dataloaders(bs=bs, num_workers=workers, after_batch=batch_tfms)
dls.path = source
return dls
def do_train(size=128, bs=64, lr=1e-2, epochs=5, runs=5, dogs_only=False, save_name=None, ch=None):
dbunch = get_dbunch(size, bs, dogs_only=dogs_only)
for run in range(runs):
print(f'Run: {run}')
ch = ifnone(ch, nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten(), nn.Linear(512, 20)))
learn = cnn_learner(dbunch, m_part, opt_func=opt_func, normalize=False,
metrics=[accuracy,top_k_accuracy], loss_func=LabelSmoothingCrossEntropy(),
# metrics=[accuracy,top_k_accuracy], loss_func=CrossEntropyLossFlat(),
pretrained=False,
config={'custom_head':ch})
if save_name is not None:
state_dict = torch.load(f'{save_name}.pth')
learn.model[0].load_state_dict(state_dict)
learn.unfreeze()
learn.fit_flat_cos(epochs, lr, wd=1e-2)
```
### 5 Epochs
```
epochs = 5
runs = 5
do_train(epochs=epochs, runs=runs, lr=2e-2, dogs_only=False, save_name=save_name)
```
### 20 Epochs
```
epochs = 20
runs = 1
# LATEST
do_train(epochs=epochs, runs=runs, lr=2e-2, dogs_only=False, save_name=save_name)
```
## Larger HEAD
```
ch = create_head(512, 20, concat_pool=False)
do_train(epochs=epochs, runs=runs, lr=2e-2, dogs_only=False, save_name=save_name, ch=ch)
ch = create_head(1024, 20, concat_pool=True, ps=0.25)
do_train(epochs=epochs, runs=runs, lr=2e-2, dogs_only=False, save_name=save_name, ch=ch)
```
### 80 Epochs
```
epochs = 80
runs = 1
do_train(epochs=epochs, runs=runs, dogs_only=False, save_name=save_name)
```
Accuracy: **62.18%**
### 200 epochs
```
epochs = 200
runs = 1
do_train(epochs=epochs, runs=runs, dogs_only=False, save_name=save_name)
```
Accuracy: **62.03%**
<div class="alert alert-block alert-info">
Section of the book chapter: <b>5.2.2 Active Learning</b>
</div>
# 4. Active learning
**Table of Contents**
* [4.1 Active Learning Setup](#4.1-Active-Learning-Setup)
* [4.2 Initial Estimation](#4.2-Initial-Estimation)
* [4.3 Including Active Learning](#4.3-Including-Active-Learning)
**Learnings:**
- how to implement basic active learning approaches,
- how active learning can improve estimations.
### Packages
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib as mpl
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import WhiteKernel, RBF
from modAL.models import ActiveLearner
import utils
```
### Read in Data
**Dataset:** Felix M. Riese and Sina Keller, "Hyperspectral benchmark dataset on soil moisture", Dataset, Zenodo, 2018. [DOI:10.5281/zenodo.1227836](http://doi.org/10.5281/zenodo.1227836) and [GitHub](https://github.com/felixriese/hyperspectral-soilmoisture-dataset)
**Introducing paper:** Felix M. Riese and Sina Keller, “Introducing a Framework of Self-Organizing Maps for Regression of Soil Moisture with Hyperspectral Data,” in IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 2018, pp. 6151-6154. [DOI:10.1109/IGARSS.2018.8517812](https://doi.org/10.1109/IGARSS.2018.8517812)
```
X_train, X_test, y_train, y_test, y_train_full = utils.get_xy_split(missing_rate=0.8)
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape, y_train_full.shape)
print(y_train[y_train>=0.].shape, y_test[y_test>=0.].shape, y_train_full[y_train>=0.].shape)
index_initial = np.where(y_train>=0.)[0]
```
***
## 4.1 Active Learning Setup
Source: [modAL/active_regression.py](https://github.com/modAL-python/modAL/blob/master/examples/active_regression.py)
```
# defining the kernel for the Gaussian process
kernel = RBF(length_scale=1.0, length_scale_bounds=(1e-2, 1e3)) \
+ WhiteKernel(noise_level=1, noise_level_bounds=(1e-10, 1e+1))
# query strategy for regression
def regression_std(regressor, X):
_, std = regressor.predict(X, return_std=True)
query_idx = np.argmax(std)
return query_idx, X[query_idx]
# initializing the active learner
regressor = ActiveLearner(
estimator=GaussianProcessRegressor(kernel=kernel),
query_strategy=regression_std,
X_training=X_train[index_initial],
y_training=y_train[index_initial])
print(X_train[index_initial].shape, y_train[index_initial].shape)
```
***
## 4.2 Initial Estimation
```
# plot initial estimation
plt.figure(figsize=(6,6))
pred, std = regressor.predict(X_train, return_std=True)
# plot prediction of supervised samples
plt.scatter(y_train_full[index_initial], pred[index_initial], alpha=0.5)
# plot prediction of unsupervised samples
not_initial = [i for i in range(y_train.shape[0]) if i not in index_initial]
plt.scatter(y_train_full[not_initial], pred[not_initial], alpha=0.5)
# plot std
plt.fill_between(np.linspace(22, 45, 339), pred-std, pred+std, alpha=0.2)
plt.xlim(22.0, 45.0)
plt.ylim(22.0, 45.0)
plt.xlabel("Soil Moisture (Ground Truth) in %")
plt.ylabel("Soil Moisture (Prediction) in %")
plt.show()
```
***
## 4.3 Including Active Learning
```
n_queries = 150
for idx in range(n_queries):
query_idx, query_instance = regressor.query(X_train)
# print(query_idx, query_instance)
# print(X_train[query_idx].reshape(1,125).shape)
# print(y_train_full[query_idx].reshape(-1, ).shape)
regressor.teach(X_train[query_idx].reshape(1, 125), y_train_full[query_idx].reshape(-1, ))
# plot estimation after active learning
plt.figure(figsize=(6,6))
pred, std = regressor.predict(X_train, return_std=True)
# plot prediction of supervised samples
plt.scatter(y_train_full[index_initial], pred[index_initial], alpha=0.5)
# plot prediction of unsupervised samples
not_initial = [i for i in range(y_train.shape[0]) if i not in index_initial]
plt.scatter(y_train_full[not_initial], pred[not_initial], alpha=0.5)
# plot std
plt.fill_between(np.linspace(22, 45, 339), pred-std, pred+std, alpha=0.2)
plt.xlim(22.0, 45.0)
plt.ylim(22.0, 45.0)
plt.xlabel("Soil Moisture (Ground Truth) in %")
plt.ylabel("Soil Moisture (Prediction) in %")
plt.show()
```
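One way to quantify what the queried labels bought us (not part of the original notebook) is to evaluate the learner on the held-out test set, for example with the RMSE:
```
# Hypothetical evaluation step: RMSE of the actively trained regressor on the test set.
pred_test = regressor.predict(X_test)
rmse_test = np.sqrt(np.mean((pred_test - y_test) ** 2))
print("Test RMSE after {} queries: {:.2f}".format(n_queries, rmse_test))
```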
<a href="https://colab.research.google.com/github/simecek/dspracticum2020/blob/master/lecture_02/01_one_neuron_and_MPG_dataset.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing
print(tf.__version__)
```
## Data
```
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data'
column_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepower', 'Weight',
'Acceleration', 'Model Year', 'Origin']
raw_dataset = pd.read_csv(url, names=column_names,
na_values='?', comment='\t',
sep=' ', skipinitialspace=True)
dataset = raw_dataset.copy()
# because of missing values in the Horsepower column
dataset = dataset.dropna()
dataset.tail()
# split the dataset into two parts (train & test)
train_dataset = dataset.sample(frac=0.8, random_state=42)
test_dataset = dataset.drop(train_dataset.index)
train_dataset.shape, test_dataset.shape
# separate label column from the data
train_features = train_dataset.copy()
test_features = test_dataset.copy()
train_labels = train_features.pop('MPG')
test_labels = test_features.pop('MPG')
train_features.shape, test_features.shape, train_labels.shape, test_labels.shape
```
## Predict MPG (miles per gallon) from Horsepower
```
sns.scatterplot(data=dataset, x="Horsepower", y="MPG");
horsepower = np.array(train_features['Horsepower'])
# we will use train data to estimate average and SD of horsepower and
# get transformation to zero mean and unit variance
horsepower_normalizer = preprocessing.Normalization(input_shape=[1,])
horsepower_normalizer.adapt(horsepower)
normalized_horsepower = np.array(horsepower_normalizer(horsepower))
normalized_horsepower.mean(), normalized_horsepower.std()
# model
horsepower_model = tf.keras.Sequential([
horsepower_normalizer,
layers.Dense(units=1)
])
horsepower_model.summary()
# model compilation
horsepower_model.compile(
optimizer=tf.optimizers.Adam(learning_rate=0.1),
loss='mean_absolute_error')
%%time
history = horsepower_model.fit(
train_features['Horsepower'], train_labels,
epochs=100,
# suppress logging
verbose=0,
# Calculate validation results on 20% of the training data
validation_split = 0.2)
print(history.history['val_loss'][-1:])
def plot_loss(history):
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['val_loss'], label='val_loss')
plt.ylim([0, 10])
plt.xlabel('Epoch')
plt.ylabel('Error [MPG]')
plt.legend()
plt.grid(True)
plot_loss(history)
# prediction on test data
test_data_predictions = horsepower_model.predict(test_features['Horsepower'])
def plot_horsepower(preds):
plt.scatter(test_features['Horsepower'], test_labels, label='Data')
plt.plot(test_features['Horsepower'], preds, color='k', label='Predictions')
plt.xlabel('Horsepower')
plt.ylabel('MPG')
plt.legend()
plot_horsepower(test_data_predictions)
# evaluation on test data
test_evaluation = {}
test_evaluation['horsepower_model'] = horsepower_model.evaluate(
test_features['Horsepower'],
test_labels, verbose=0)
test_evaluation
```
## Predict MPG (miles per gallon) from Other Features
```
sns.pairplot(train_dataset[['MPG', 'Cylinders', 'Displacement', 'Weight']], diag_kind='kde');
train_dataset.describe().transpose()
# normalizer for all feature columns
normalizer = preprocessing.Normalization()
normalizer.adapt(np.array(train_features))
# model definition
linear_model = tf.keras.Sequential([
normalizer,
layers.Dense(units=1)
])
# you can even make predictions from the (untrained) model or look at its weights
print(linear_model.predict(train_features[:10]))
# parameters (weights and bias)
linear_model.layers[1].kernel, linear_model.layers[1].bias
linear_model.compile(
optimizer=tf.optimizers.Adam(learning_rate=0.1),
loss='mean_absolute_error')
%%time
history = linear_model.fit(
train_features, train_labels,
epochs=100,
# suppress logging
verbose=0,
# Calculate validation results on 20% of the training data
validation_split = 0.2)
plot_loss(history)
```
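To compare this multi-feature model with the horsepower-only model, we can evaluate it on the test data in the same way as before (a small addition, mirroring the earlier cell):
```
# evaluation on test data, stored alongside the horsepower-only result
test_evaluation['linear_model'] = linear_model.evaluate(
    test_features, test_labels, verbose=0)
pd.DataFrame(test_evaluation, index=['Mean absolute error [MPG]']).T
```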
<!--BOOK_INFORMATION-->
<img align="left" style="padding-right:10px;" src="figures/PDSH-cover-small.png">
*This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).*
*The text is released under the [CC-BY-NC-ND license](https://creativecommons.org/licenses/by-nc-nd/3.0/us/legalcode), and code is released under the [MIT license](https://opensource.org/licenses/MIT). If you find this content useful, please consider supporting the work by [buying the book](http://shop.oreilly.com/product/0636920034919.do)!*
<!--NAVIGATION-->
< [Feature Engineering](05.04-Feature-Engineering.ipynb) | [Contents](Index.ipynb) | [In Depth: Linear Regression](05.06-Linear-Regression.ipynb) >
<a href="https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/05.05-Naive-Bayes.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a>
# In Depth: Naive Bayes Classification
The previous four sections have given a general overview of the concepts of machine learning.
In this section and the ones that follow, we will be taking a closer look at several specific algorithms for supervised and unsupervised learning, starting here with naive Bayes classification.
Naive Bayes models are a group of extremely fast and simple classification algorithms that are often suitable for very high-dimensional datasets.
Because they are so fast and have so few tunable parameters, they end up being very useful as a quick-and-dirty baseline for a classification problem.
This section will focus on an intuitive explanation of how naive Bayes classifiers work, followed by a couple examples of them in action on some datasets.
## Bayesian Classification
Naive Bayes classifiers are built on Bayesian classification methods.
These rely on Bayes's theorem, which is an equation describing the relationship of conditional probabilities of statistical quantities.
In Bayesian classification, we're interested in finding the probability of a label given some observed features, which we can write as $P(L~|~{\rm features})$.
Bayes's theorem tells us how to express this in terms of quantities we can compute more directly:
$$
P(L~|~{\rm features}) = \frac{P({\rm features}~|~L)P(L)}{P({\rm features})}
$$
If we are trying to decide between two labels—let's call them $L_1$ and $L_2$—then one way to make this decision is to compute the ratio of the posterior probabilities for each label:
$$
\frac{P(L_1~|~{\rm features})}{P(L_2~|~{\rm features})} = \frac{P({\rm features}~|~L_1)}{P({\rm features}~|~L_2)}\frac{P(L_1)}{P(L_2)}
$$
All we need now is some model by which we can compute $P({\rm features}~|~L_i)$ for each label.
Such a model is called a *generative model* because it specifies the hypothetical random process that generates the data.
Specifying this generative model for each label is the main piece of the training of such a Bayesian classifier.
The general version of such a training step is a very difficult task, but we can make it simpler through the use of some simplifying assumptions about the form of this model.
This is where the "naive" in "naive Bayes" comes in: if we make very naive assumptions about the generative model for each label, we can find a rough approximation of the generative model for each class, and then proceed with the Bayesian classification.
Different types of naive Bayes classifiers rest on different naive assumptions about the data, and we will examine a few of these in the following sections.
We begin with the standard imports:
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
```
## Gaussian Naive Bayes
Perhaps the easiest naive Bayes classifier to understand is Gaussian naive Bayes.
In this classifier, the assumption is that *data from each label is drawn from a simple Gaussian distribution*.
Imagine that you have the following data:
```
from sklearn.datasets import make_blobs
X, y = make_blobs(100, 2, centers=2, random_state=2, cluster_std=1.5)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='RdBu');
```
One extremely fast way to create a simple model is to assume that the data is described by a Gaussian distribution with no covariance between dimensions.
This model can be fit by simply finding the mean and standard deviation of the points within each label, which is all you need to define such a distribution.
The result of this naive Gaussian assumption is shown in the following figure:

[figure source in Appendix](06.00-Figure-Code.ipynb#Gaussian-Naive-Bayes)
The ellipses here represent the Gaussian generative model for each label, with larger probability toward the center of the ellipses.
With this generative model in place for each class, we have a simple recipe to compute the likelihood $P({\rm features}~|~L_1)$ for any data point, and thus we can quickly compute the posterior ratio and determine which label is the most probable for a given point.
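To make this recipe concrete, here is a rough hand-rolled sketch of the computation (for illustration only; it is not how the estimator below is implemented internally):
```
# Per-class Gaussian parameters estimated from the labeled points
means = np.array([X[y == k].mean(axis=0) for k in (0, 1)])
stds = np.array([X[y == k].std(axis=0) for k in (0, 1)])
priors = np.array([np.mean(y == k) for k in (0, 1)])

def log_posterior_unnorm(x):
    # log P(features | L_k) + log P(L_k), with independent Gaussians per feature
    log_likelihood = -0.5 * np.sum(
        np.log(2 * np.pi * stds ** 2) + ((x - means) / stds) ** 2, axis=1)
    return log_likelihood + np.log(priors)

# The label with the larger value is the naive Bayes prediction for that point
log_posterior_unnorm(X[0]), y[0]
```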
This procedure is implemented in Scikit-Learn's ``sklearn.naive_bayes.GaussianNB`` estimator:
```
from sklearn.naive_bayes import GaussianNB
model = GaussianNB()
model.fit(X, y);
```
Now let's generate some new data and predict the label:
```
rng = np.random.RandomState(0)
Xnew = [-6, -14] + [14, 18] * rng.rand(2000, 2)
ynew = model.predict(Xnew)
```
Now we can plot this new data to get an idea of where the decision boundary is:
```
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='RdBu')
lim = plt.axis()
plt.scatter(Xnew[:, 0], Xnew[:, 1], c=ynew, s=20, cmap='RdBu', alpha=0.1)
plt.axis(lim);
```
We see a slightly curved boundary in the classifications—in general, the boundary in Gaussian naive Bayes is quadratic.
A nice piece of this Bayesian formalism is that it naturally allows for probabilistic classification, which we can compute using the ``predict_proba`` method:
```
yprob = model.predict_proba(Xnew)
yprob[-8:].round(2)
```
The columns give the posterior probabilities of the first and second label, respectively.
If you are looking for estimates of uncertainty in your classification, Bayesian approaches like this can be a useful approach.
Of course, the final classification will only be as good as the model assumptions that lead to it, which is why Gaussian naive Bayes often does not produce very good results.
Still, in many cases—especially as the number of features becomes large—this assumption is not detrimental enough to prevent Gaussian naive Bayes from being a useful method.
## Multinomial Naive Bayes
The Gaussian assumption just described is by no means the only simple assumption that could be used to specify the generative distribution for each label.
Another useful example is multinomial naive Bayes, where the features are assumed to be generated from a simple multinomial distribution.
The multinomial distribution describes the probability of observing counts among a number of categories, and thus multinomial naive Bayes is most appropriate for features that represent counts or count rates.
The idea is precisely the same as before, except that instead of modeling the data distribution with the best-fit Gaussian, we model the data distribution with a best-fit multinomial distribution.
### Example: Classifying Text
One place where multinomial naive Bayes is often used is in text classification, where the features are related to word counts or frequencies within the documents to be classified.
We discussed the extraction of such features from text in [Feature Engineering](05.04-Feature-Engineering.ipynb); here we will use the sparse word count features from the 20 Newsgroups corpus to show how we might classify these short documents into categories.
Let's download the data and take a look at the target names:
```
from sklearn.datasets import fetch_20newsgroups
data = fetch_20newsgroups()
data.target_names
```
For simplicity here, we will select just a few of these categories, and download the training and testing set:
```
categories = ['talk.religion.misc', 'soc.religion.christian',
'sci.space', 'comp.graphics']
train = fetch_20newsgroups(subset='train', categories=categories)
test = fetch_20newsgroups(subset='test', categories=categories)
```
Here is a representative entry from the data:
```
print(train.data[5])
```
In order to use this data for machine learning, we need to be able to convert the content of each string into a vector of numbers.
For this we will use the TF-IDF vectorizer (discussed in [Feature Engineering](05.04-Feature-Engineering.ipynb)), and create a pipeline that attaches it to a multinomial naive Bayes classifier:
```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
```
With this pipeline, we can apply the model to the training data, and predict labels for the test data:
```
model.fit(train.data, train.target)
labels = model.predict(test.data)
```
Now that we have predicted the labels for the test data, we can evaluate them to learn about the performance of the estimator.
For example, here is the confusion matrix between the true and predicted labels for the test data:
```
from sklearn.metrics import confusion_matrix
mat = confusion_matrix(test.target, labels)
sns.heatmap(mat.T, square=True, annot=True, fmt='d', cbar=False,
xticklabels=train.target_names, yticklabels=train.target_names)
plt.xlabel('true label')
plt.ylabel('predicted label');
```
Evidently, even this very simple classifier can successfully separate space talk from computer talk, but it gets confused between talk about religion and talk about Christianity.
This is perhaps an expected area of confusion!
The very cool thing here is that we now have the tools to determine the category for *any* string, using the ``predict()`` method of this pipeline.
Here's a quick utility function that will return the prediction for a single string:
```
def predict_category(s, train=train, model=model):
pred = model.predict([s])
return train.target_names[pred[0]]
```
Let's try it out:
```
predict_category('sending a payload to the ISS')
predict_category('discussing islam vs atheism')
predict_category('determining the screen resolution')
```
Remember that this is nothing more sophisticated than a simple probability model for the (weighted) frequency of each word in the string; nevertheless, the result is striking.
Even a very naive algorithm, when used carefully and trained on a large set of high-dimensional data, can be surprisingly effective.
## When to Use Naive Bayes
Because naive Bayesian classifiers make such stringent assumptions about data, they will generally not perform as well as a more complicated model.
That said, they have several advantages:
- They are extremely fast for both training and prediction
- They provide straightforward probabilistic prediction
- They are often very easily interpretable
- They have very few (if any) tunable parameters
These advantages mean a naive Bayesian classifier is often a good choice as an initial baseline classification.
If it performs suitably, then congratulations: you have a very fast, very interpretable classifier for your problem.
If it does not perform well, then you can begin exploring more sophisticated models, with some baseline knowledge of how well they should perform.
Naive Bayes classifiers tend to perform especially well in one of the following situations:
- When the naive assumptions actually match the data (very rare in practice)
- For very well-separated categories, when model complexity is less important
- For very high-dimensional data, when model complexity is less important
The last two points seem distinct, but they actually are related: as the dimension of a dataset grows, it is much less likely for any two points to be found close together (after all, they must be close in *every single dimension* to be close overall).
This means that clusters in high dimensions tend to be more separated, on average, than clusters in low dimensions, assuming the new dimensions actually add information.
For this reason, simplistic classifiers like naive Bayes tend to work as well or better than more complicated classifiers as the dimensionality grows: once you have enough data, even a simple model can be very powerful.
<!--NAVIGATION-->
< [Feature Engineering](05.04-Feature-Engineering.ipynb) | [Contents](Index.ipynb) | [In Depth: Linear Regression](05.06-Linear-Regression.ipynb) >
<a href="https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/05.05-Naive-Bayes.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a>
# "Statistical Thinking in Python (Part 1)"
> "Building the foundation you need to think statistically, speak the language of your data, and understand what your data is telling you."
- toc: true
- comments: true
- author: Victor Omondi
- categories: [statistical-thinking, eda, data-science]
- image: images/statistical-thinking-1.png
# Graphical exploratory data analysis
Before diving into sophisticated statistical inference techniques, we should first explore our data by plotting them and computing simple summary statistics. This process, called **exploratory data analysis**, is a crucial first step in statistical analysis of data.
## Introduction to Exploratory Data Analysis
Exploratory Data Analysis is the process of organizing, plotting, and summarizing a data set
> “Exploratory data analysis can never be the whole story, but nothing else can serve as the foundation stone.”
>
> ~ John Tukey
### Tukey's comments on EDA
* Exploratory data analysis is detective work.
* There is no excuse for failing to plot and look.
* The greatest value of a picture is that it forces us to notice what we never expected to see.
* It is important to understand what you can do before you learn how to measure how well you seem to have done it.
> If you don't have time to do EDA, you really don't have time to do hypothesis tests. And you should always do EDA first.
### Advantages of graphical EDA
* It often involves converting tabular data into graphical form.
* If done well, graphical representations can allow for more rapid interpretation of data.
* There is no excuse for neglecting to do graphical EDA.
> While a good, informative plot can sometimes be the end point of an analysis, it is more like a beginning: it helps guide you in the quantitative statistical analyses that come next.
## Plotting a histogram
### Plotting a histogram of iris data
We will use a classic data set collected by botanist Edward Anderson and made famous by Ronald Fisher, one of the most prolific statisticians in history. Anderson carefully measured the anatomical properties of samples of three different species of iris, Iris setosa, Iris versicolor, and Iris virginica. The full data set is [available as part of scikit-learn](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_iris.html). Here, you will work with his measurements of petal length.
We will plot a histogram of the petal lengths of his 50 samples of Iris versicolor using matplotlib/seaborn's default settings.
The subset of the data set containing the Iris versicolor petal lengths in units of centimeters (cm) is stored in the NumPy array `versicolor_petal_length`.
# Libraries
```
# Import plotting modules
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
# Set default Seaborn style
sns.set()
%matplotlib inline
versicolor_petal_length = np.array([4.7, 4.5, 4.9, 4. , 4.6, 4.5, 4.7, 3.3, 4.6, 3.9, 3.5, 4.2, 4. ,
4.7, 3.6, 4.4, 4.5, 4.1, 4.5, 3.9, 4.8, 4. , 4.9, 4.7, 4.3, 4.4,
4.8, 5. , 4.5, 3.5, 3.8, 3.7, 3.9, 5.1, 4.5, 4.5, 4.7, 4.4, 4.1,
4. , 4.4, 4.6, 4. , 3.3, 4.2, 4.2, 4.2, 4.3, 3. , 4.1])
# Plot histogram of versicolor petal lengths
plt.hist(versicolor_petal_length)
plt.ylabel("count")
plt.xlabel("petal length (cm)")
plt.show()
```
### Adjusting the number of bins in a histogram
The histogram we just made had ten bins. This is the default of matplotlib.
>Tip: The "square root rule" is a commonly-used rule of thumb for choosing number of bins: choose the number of bins to be the square root of the number of samples.
We will plot the histogram of _Iris versicolor petal lengths_ again, this time using the square root rule for the number of bins. You specify the number of bins using the `bins` keyword argument of `plt.hist()`.
```
# Compute number of data points: n_data
n_data = len(versicolor_petal_length)
# Number of bins is the square root of number of data points: n_bins
n_bins = np.sqrt(n_data)
# Convert number of bins to integer: n_bins
n_bins = int(n_bins)
# Plot the histogram
_ = plt.hist(versicolor_petal_length, bins=n_bins)
# Label axes
_ = plt.xlabel('petal length (cm)')
_ = plt.ylabel('count')
# Show histogram
plt.show()
```
## Plot all data: Bee swarm plots
### Bee swarm plot
We will make a bee swarm plot of the iris petal lengths. The x-axis will contain each of the three species, and the y-axis the petal lengths.
```
iris_petal_lengths = pd.read_csv("../datasets/iris_petal_lengths.csv")
iris_petal_lengths.head()
iris_petal_lengths.shape
iris_petal_lengths.tail()
# Create bee swarm plot with Seaborn's default settings
_ = sns.swarmplot(data=iris_petal_lengths, x="species", y="petal length (cm)")
# Label the axes
_ = plt.xlabel("species")
_ = plt.ylabel("petal length (cm)")
# Show the plot
plt.show()
```
### Interpreting a bee swarm plot
* _I. virginica_ petals tend to be the longest, and _I. setosa_ petals tend to be the shortest of the three species.
> Note: Notice that we said **"tend to be."** Some individual _I. virginica_ flowers may be shorter than individual _I. versicolor_ flowers. It is also possible that an individual _I. setosa_ flower may have longer petals than an individual _I. versicolor_ flower, though this is highly unlikely, and was not observed by Anderson.
## Plot all data: ECDFs
> Note: Empirical cumulative distribution function (ECDF)
### Computing the ECDF
We will write a function that takes as input a 1D array of data and then returns the `x` and `y` values of the ECDF.
> Important: ECDFs are among the most important plots in statistical analysis.
```
def ecdf(data):
"""Compute ECDF for a one-dimensional array of measurements."""
# Number of data points: n
n = len(data)
# x-data for the ECDF: x
x = np.sort(data)
# y-data for the ECDF: y
y = np.arange(1, n+1) / n
return x, y
```
### Plotting the ECDF
We will now use `ecdf()` function to compute the ECDF for the petal lengths of Anderson's _Iris versicolor_ flowers. We will then plot the ECDF.
> Warning: `ecdf()` function returns two arrays so we will need to unpack them. An example of such unpacking is `x, y = foo(data)`, for some function `foo()`.
```
# Compute ECDF for versicolor data: x_vers, y_vers
x_vers, y_vers = ecdf(versicolor_petal_length)
# Generate plot
_ = plt.plot(x_vers, y_vers, marker=".", linestyle="none")
# Label the axes
_ = plt.xlabel("versicolor petal length, (cm)")
_ = plt.ylabel("ECDF")
# Display the plot
plt.show()
```
### Comparison of ECDFs
ECDFs also allow us to compare two or more distributions ***(though plots get cluttered if you have too many)***. Here, we will plot ECDFs for the petal lengths of all three iris species.
> Important: we already wrote a function to generate ECDFs so we can put it to good use!
```
setosa_petal_length = iris_petal_lengths["petal length (cm)"][iris_petal_lengths.species == "setosa"]
versicolor_petal_length = iris_petal_lengths["petal length (cm)"][iris_petal_lengths.species == "versicolor"]
virginica_petal_length = iris_petal_lengths["petal length (cm)"][iris_petal_lengths.species == "virginica"]
setosa_petal_length.head()
# Compute ECDFs
x_set, y_set = ecdf(setosa_petal_length)
x_vers, y_vers = ecdf(versicolor_petal_length)
x_virg, y_virg = ecdf(virginica_petal_length)
# Plot all ECDFs on the same plot
_ = plt.plot(x_set, y_set, marker=".", linestyle="none")
_ = plt.plot(x_vers, y_vers, marker=".", linestyle="none")
_ = plt.plot(x_virg, y_virg, marker=".", linestyle="none")
# Annotate the plot
plt.legend(('setosa', 'versicolor', 'virginica'), loc='lower right')
_ = plt.xlabel('petal length (cm)')
_ = plt.ylabel('ECDF')
# Display the plot
plt.show()
```
> Note: The ECDFs expose clear differences among the species. Setosa is much shorter, also with less absolute variability in petal length than versicolor and virginica.
## Onward toward the whole story!
> Important: “Exploratory data analysis can never be the
whole story, but nothing else can serve as the
foundation stone.”
—John Tukey
# Quantitative exploratory data analysis
We will compute useful summary statistics, which serve to concisely describe salient features of a dataset with a few numbers.
## Introduction to summary statistics: The sample mean and median
$$
mean = \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i
$$
> ### Outliers
● Data points whose value is far greater or less than
most of the rest of the data
> ### The median
● The middle value of a data set
> Note: An outlier can significantly affect the value of the mean, but not the median
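A quick numerical illustration of this point (a toy example, not one of the course exercises):
```
# mean vs. median with and without an outlier (hypothetical toy data)
data = np.array([2.0, 3.0, 3.5, 4.0, 4.5])
data_outlier = np.array([2.0, 3.0, 3.5, 4.0, 100.0])  # last value replaced by an outlier
print(np.mean(data), np.median(data))                  # 3.4 3.5
print(np.mean(data_outlier), np.median(data_outlier))  # 22.5 3.5
```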
### Computing means
The mean of all measurements gives an indication of the typical magnitude of a measurement. It is computed using `np.mean()`.
```
# Compute the mean: mean_length_vers
mean_length_vers = np.mean(versicolor_petal_length)
# Print the result with some nice formatting
print('I. versicolor:', mean_length_vers, 'cm')
```
## Percentiles, outliers, and box plots
### Computing percentiles
We will compute the percentiles of petal length of _Iris versicolor_.
```
# Specify array of percentiles: percentiles
percentiles = np.array([2.5, 25, 50, 75, 97.5])
# Compute percentiles: ptiles_vers
ptiles_vers = np.percentile(versicolor_petal_length, percentiles)
# Print the result
ptiles_vers
```
### Comparing percentiles to ECDF
To see how the percentiles relate to the ECDF, we will plot the percentiles of _Iris versicolor_ petal lengths on the ECDF plot.
```
# Plot the ECDF
_ = plt.plot(x_vers, y_vers, '.')
_ = plt.xlabel('petal length (cm)')
_ = plt.ylabel('ECDF')
# Overlay percentiles as red diamonds.
_ = plt.plot(ptiles_vers, percentiles/100, marker='D', color='red',
linestyle="none")
# Show the plot
plt.show()
```
### Box-and-whisker plot
> Warning: Making a box plot for the petal lengths is unnecessary because the iris data set is not too large and the bee swarm plot works fine.
We will Make a box plot of the iris petal lengths.
```
# Create box plot with Seaborn's default settings
_ = sns.boxplot(data=iris_petal_lengths, x="species", y="petal length (cm)")
# Label the axes
_ = plt.xlabel("species")
_ = plt.ylabel("petal length (cm)")
# Show the plot
plt.show()
```
## Variance and standard deviation
> ### Variance
● The mean squared distance of the data from their
mean
> Tip: Variance: informally, a measure of the spread of data
> $$
variance = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2
$$
> ### standard Deviation
$$
std = \sqrt {\frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2}
$$
### Computing the variance
we will explicitly compute the variance of the petal length of _Iris veriscolor_, we will then use `np.var()` to compute it.
```
# Array of differences to mean: differences
differences = versicolor_petal_length-np.mean(versicolor_petal_length)
# Square the differences: diff_sq
diff_sq = differences**2
# Compute the mean square difference: variance_explicit
variance_explicit = np.mean(diff_sq)
# Compute the variance using NumPy: variance_np
variance_np = np.var(versicolor_petal_length)
# Print the results
print(variance_explicit, variance_np)
```
### The standard deviation and the variance
the standard deviation is the square root of the variance.
```
# Compute the variance: variance
variance = np.var(versicolor_petal_length)
# Print the square root of the variance
print(np.sqrt(variance))
# Print the standard deviation
print(np.std(versicolor_petal_length))
```
## Covariance and the Pearson correlation coefficient
> ### Covariance
● A measure of how two quantities vary together
> $$
covariance = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})\,(y_i - \bar{y})
$$
> ### Pearson correlation coefficient
> $$
\rho = Pearson\ correlation = \frac{covariance}{(std\ of\ x)\ (std\ of\ y)} = \frac{variability\ due\ to\ codependence}{independent\ variability}
$$
### Scatter plots
When we made bee swarm plots, box plots, and ECDF plots in previous exercises, we compared the petal lengths of different species of _iris_. But what if we want to compare two properties of a single species? This is exactly what we will do, we will make a **scatter plot** of the petal length and width measurements of Anderson's _Iris versicolor_ flowers.
> Important: If the flower scales (that is, it preserves its proportion as it grows), we would expect the length and width to be correlated.
```
versicolor_petal_width = np.array([1.4, 1.5, 1.5, 1.3, 1.5, 1.3, 1.6, 1. , 1.3, 1.4, 1. , 1.5, 1. ,
1.4, 1.3, 1.4, 1.5, 1. , 1.5, 1.1, 1.8, 1.3, 1.5, 1.2, 1.3, 1.4,
1.4, 1.7, 1.5, 1. , 1.1, 1. , 1.2, 1.6, 1.5, 1.6, 1.5, 1.3, 1.3,
1.3, 1.2, 1.4, 1.2, 1. , 1.3, 1.2, 1.3, 1.3, 1.1, 1.3])
# Make a scatter plot
_ = plt.plot(versicolor_petal_length, versicolor_petal_width, marker=".", linestyle="none")
# Label the axes
_ = plt.xlabel("petal length (cm)")
_ = plt.ylabel("petal width (cm)")
# Show the result
plt.show()
```
> Tip: we see some correlation. Longer petals also tend to be wider.
### Computing the covariance
The covariance may be computed using the NumPy function `np.cov()`. For example, if we have two sets of data $x$ and $y$, `np.cov(x, y)` returns a 2D array where entries `[0,1]` and `[1,0]` are the covariances. Entry `[0,0]` is the variance of the data in `x`, and entry `[1,1]` is the variance of the data in `y`. This 2D output array is called the **covariance matrix**, since it organizes the variances and covariances.
```
# Compute the covariance matrix: covariance_matrix
covariance_matrix = np.cov(versicolor_petal_length, versicolor_petal_width)
# Print covariance matrix
print(covariance_matrix)
# Extract covariance of length and width of petals: petal_cov
petal_cov = covariance_matrix[0,1]
# Print the length/width covariance
print(petal_cov)
```
### Computing the Pearson correlation coefficient
The Pearson correlation coefficient, also called **Pearson's r**, is often easier to interpret than the covariance. It is computed using the `np.corrcoef()` function. Like `np.cov()`, it takes two arrays as arguments and returns a 2D array. Entries `[0,0]` and `[1,1]` are necessarily equal to `1`, and the value we are after is entry `[0,1]`.
We will write a function, `pearson_r(x, y)` that takes in two arrays and returns the Pearson correlation coefficient. We will then use this function to compute it for the petal lengths and widths of $I.\ versicolor$.
```
def pearson_r(x, y):
"""Compute Pearson correlation coefficient between two arrays."""
# Compute correlation matrix: corr_mat
corr_mat = np.corrcoef(x,y)
# Return entry [0,1]
return corr_mat[0,1]
# Compute Pearson correlation coefficient for I. versicolor: r
r = pearson_r(versicolor_petal_length, versicolor_petal_width)
# Print the result
print(r)
```
# Thinking probabilistically-- Discrete variables
Statistical inference rests upon probability. Because we can very rarely say anything meaningful with absolute certainty from data, we use probabilistic language to make quantitative statements about data. We will think probabilistically about discrete quantities: those that can only take certain values, like integers.
## Probabilistic logic and statistical inference
### the goal of statistical inference
* To draw probabilistic conclusions about what we might expect if we collected the same data again.
* To draw actionable conclusions from data.
* To draw more general conclusions from relatively few data or observations.
> Note: Statistical inference involves taking your data to probabilistic conclusions about what you would expect if you took even more data, and you can make decisions based on these conclusions.
### Why we use the probabilistic language in statistical inference
* Probability provides a measure of uncertainty and this is crucial because we can quantify what we might expect if the data were acquired again.
* Data are almost never exactly the same when acquired again, and probability allows us to say how much we expect them to vary. We need probability to say how data might vary if acquired again.
> Note: Probabilistic language is in fact very precise. It precisely describes uncertainty.
## Random number generators and hacker statistics
> ### Hacker statistics
- Uses simulated repeated measurements to compute probabilities.
> ### The np.random module
- Suite of functions based on random number generation
- `np.random.random()`: draw a number between $0$ and $1$
> ### Bernoulli trial
- An experiment that has two options, "success" (True) and "failure" (False).
> ### Random number seed
- Integer fed into random number generating algorithm
- Manually seed random number generator if you need reproducibility
- Specified using `np.random.seed()`
> ### Hacker stats probabilities
- Determine how to simulate data
- Simulate many many times
- Probability is approximately the fraction of trials with the outcome of interest (a minimal sketch follows below)
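Here is that workflow in a minimal sketch (an illustrative example using only NumPy, not one of the exercises): estimate the probability that a fair coin gives at least 7 heads in 10 flips by simulating the experiment many times and taking the fraction of trials with that outcome.
```
np.random.seed(42)
n_trials = 10000
at_least_7 = 0
for _ in range(n_trials):
    # one experiment: flip a fair coin 10 times and count the heads
    heads = np.sum(np.random.random(10) < 0.5)
    if heads >= 7:
        at_least_7 += 1
# probability is approximately the fraction of trials with the outcome of interest
print(at_least_7 / n_trials)  # close to the exact value of about 0.17
```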
### Generating random numbers using the np.random module
We'll generate lots of random numbers between zero and one, and then plot a histogram of the results. If the numbers are truly random, all bars in the histogram should be of (close to) equal height.
```
# Seed the random number generator
np.random.seed(42)
# Initialize random numbers: random_numbers
random_numbers = np.empty(100000)
# Generate random numbers by looping over range(100000)
for i in range(100000):
random_numbers[i] = np.random.random()
# Plot a histogram
_ = plt.hist(random_numbers, bins=316, histtype="step", density=True)
_ = plt.xlabel("random numbers")
_ = plt.ylabel("counts")
# Show the plot
plt.show()
```
> Note: The histogram is almost exactly flat across the top, indicating that there is equal chance that a randomly-generated number is in any of the bins of the histogram.
### The np.random module and Bernoulli trials
> Tip: You can think of a Bernoulli trial as a flip of a possibly biased coin. Each coin flip has a probability $p$ of landing heads (success) and probability $1−p$ of landing tails (failure).
We will write a function to perform `n` Bernoulli trials, `perform_bernoulli_trials(n, p)`, which returns the number of successes out of `n` Bernoulli trials, each of which has probability $p$ of success. To perform each Bernoulli trial, we will use the `np.random.random()` function, which returns a random number between zero and one.
```
def perform_bernoulli_trials(n, p):
"""Perform n Bernoulli trials with success probability p
and return number of successes."""
# Initialize number of successes: n_success
    n_success = 0
# Perform trials
for i in range(n):
# Choose random number between zero and one: random_number
random_number = np.random.random()
# If less than p, it's a success so add one to n_success
if random_number < p:
n_success += 1
return n_success
```
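A quick sanity check of this function (a usage sketch, not part of the original exercise): with `p = 0.5`, the number of successes should land near `n / 2`.
```
np.random.seed(42)
# 1000 fair-coin Bernoulli trials; expect roughly 500 successes
print(perform_bernoulli_trials(1000, 0.5))
```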
### How many defaults might we expect?
Let's say a bank made 100 mortgage loans. It is possible that anywhere between $0$ and $100$ of the loans will be defaulted upon. We would like to know the probability of getting a given number of defaults, given that the probability of a default is $p = 0.05$. To investigate this, we will do a simulation. We will perform 100 Bernoulli trials using the `perform_bernoulli_trials()` function and record how many defaults we get. Here, a success is a default.
> Important: Remember that the word "success" just means that the Bernoulli trial evaluates to True, i.e., did the loan recipient default?
We will do this for another $100$ Bernoulli trials, and again and again, until we have repeated the experiment $1000$ times. Then, we will plot a histogram describing the probability of the number of defaults.
```
# Seed random number generator
np.random.seed(42)
# Initialize the number of defaults: n_defaults
n_defaults = np.empty(1000)
# Compute the number of defaults
for i in range(1000):
n_defaults[i] = perform_bernoulli_trials(100, 0.05)
# Plot the histogram with default number of bins; label your axes
_ = plt.hist(n_defaults, density=True)
_ = plt.xlabel('number of defaults out of 100 loans')
_ = plt.ylabel('probability')
# Show the plot
plt.show()
```
> Warning: This is actually not an optimal way to plot a histogram when the results are known to be integers. We will revisit this below when we plot the Binomial PMF with bins centered on the integers.
### Will the bank fail?
If interest rates are such that the bank will lose money if 10 or more of its loans are defaulted upon, what is the probability that the bank will lose money?
```
# Compute ECDF: x, y
x,y = ecdf(n_defaults)
# Plot the ECDF with labeled axes
_ = plt.plot(x,y, marker=".", linestyle="none")
_ = plt.xlabel("number of defaults")
_ = plt.ylabel("ECDF")
# Show the plot
plt.show()
# Compute the number of 100-loan simulations with 10 or more defaults: n_lose_money
n_lose_money = np.sum(n_defaults >= 10)
# Compute and print probability of losing money
print('Probability of losing money =', n_lose_money / len(n_defaults))
```
> Note: we most likely get 5/100 defaults. But we still have about a 2% chance of getting 10 or more defaults out of 100 loans.
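If `scipy` is available (an assumption; it is not required by the code above), the simulated estimate can be compared against the exact Binomial tail probability:
```
from scipy.stats import binom
# exact probability of 10 or more defaults out of 100 loans with p = 0.05
print(binom.sf(9, 100, 0.05))  # survival function: P(X > 9) = P(X >= 10)
```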
## Probability distributions and stories: The Binomial distribution
> ### Probability mass function (PMF)
- The set of probabilities of discrete outcomes
> ### Probability distribution
- A mathematical description of outcomes
> ### Discrete Uniform distribution: the story
- The outcome of rolling a single fair die is Discrete Uniformly distributed.
> ### Binomial distribution: the story
- The number $r$ of successes in $n$ Bernoulli trials with probability $p$ of success is Binomially distributed
- The number $r$ of heads in $4$ coin flips with probability $0.5$ of heads is Binomially distributed (see the short check below)
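To connect the story to numbers, here is a short check (assuming `scipy` is available) of the Binomial PMF for the coin-flip example above:
```
from scipy.stats import binom
# P(r heads in 4 flips of a fair coin) for r = 0, ..., 4
for r in range(5):
    print(r, binom.pmf(r, 4, 0.5))
# 0.0625, 0.25, 0.375, 0.25, 0.0625
```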
### Sampling out of the Binomial distribution
We will compute the probability mass function for the number of defaults we would expect for $100$ loans as in the last section, but instead of simulating all of the Bernoulli trials, we will perform the sampling using `np.random.binomial()`{% fn 1 %}.
> Note: This is identical to the calculation we did in the last set of exercises using our custom-written `perform_bernoulli_trials()` function, but far more computationally efficient.
Given this extra efficiency, we will take $10,000$ samples instead of $1000$. After taking the samples, we will plot the CDF. This CDF that we are plotting is that of the Binomial distribution.
```
# Take 10,000 samples out of the binomial distribution: n_defaults
n_defaults = np.random.binomial(100, 0.05, size=10000)
# Compute CDF: x, y
x,y = ecdf(n_defaults)
# Plot the CDF with axis labels
_ = plt.plot(x,y, marker=".", linestyle="-")
_ = plt.xlabel("number of defaults out of 100 loans")
_ = plt.ylabel("CDF")
# Show the plot
plt.show()
```
> Tip: If you know the story, using built-in algorithms to directly sample out of the distribution is ***much*** faster.
### Plotting the Binomial PMF
> Warning: plotting a nice looking PMF requires a bit of matplotlib trickery that we will not go into here.
We will plot the PMF of the Binomial distribution as a histogram. The trick is setting up the edges of the bins to pass to `plt.hist()` via the `bins` keyword argument. We want the bins centered on the integers. So, the edges of the bins should be $-0.5, 0.5, 1.5, 2.5, \ldots$ up to `max(n_defaults) + 1.5`. We can generate an array like this using `np.arange()` and then subtracting `0.5` from the array.
```
# Compute bin edges: bins
bins = np.arange(0, max(n_defaults) + 1.5) - 0.5
# Generate histogram
_ = plt.hist(n_defaults, density=True, bins=bins)
# Label axes
_ = plt.xlabel("number of defaults out of 100 loans")
_ = plt.ylabel("probability")
# Show the plot
plt.show()
```
## Poisson processes and the Poisson distribution
> ### Poisson process
- The timing of the next event is completely independent of when the previous event happened
> ### Examples of Poisson processes
- Natural births in a given hospital
- Hits on a website during a given hour
- Meteor strikes
- Molecular collisions in a gas
- Aviation incidents
- Buses in Poissonville
> ### Poisson distribution
- The number $r$ of arrivals of a Poisson process in a given time interval, with an average rate of $\lambda$ arrivals per interval, is Poisson distributed.
- The number $r$ of hits on a website in one hour, with an average hit rate of 6 hits per hour, is Poisson distributed.
- Limit of the Binomial distribution for low probability of success and large number of trials; that is, for rare events.
### Relationship between Binomial and Poisson distributions
> Important: Poisson distribution is a limit of the Binomial distribution for rare events.
> Tip: Poisson distribution with arrival rate equal to $np$ approximates a Binomial distribution for $n$ Bernoulli trials with probability $p$ of success (with $n$ large and $p$ small). Importantly, the Poisson distribution is often simpler to work with because it has only one parameter instead of two for the Binomial distribution.
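As a concrete illustration of this limit (a sketch assuming `scipy` is available), we can compare the two PMFs directly for $n=1000$ and $p=0.01$, so that $np=10$:
```
from scipy.stats import binom, poisson
n, p = 1000, 0.01
for r in [5, 10, 15]:
    # Binomial PMF vs. the Poisson PMF with rate n*p
    print(r, binom.pmf(r, n, p), poisson.pmf(r, n * p))
# the two columns agree closely, as expected for small p and large n
```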
Let's explore these two distributions computationally. We will compute the mean and standard deviation of samples from a Poisson distribution with an arrival rate of $10$. Then, we will compute the mean and standard deviation of samples from a Binomial distribution with parameters $n$ and $p$ such that $np = 10$.
```
# Draw 10,000 samples out of Poisson distribution: samples_poisson
samples_poisson = np.random.poisson(10, size=10000)
# Print the mean and standard deviation
print('Poisson: ', np.mean(samples_poisson),
np.std(samples_poisson))
# Specify values of n and p to consider for Binomial: n, p
n = [20, 100, 1000]
p = [.5, .1, .01]
# Draw 10,000 samples for each n,p pair: samples_binomial
for i in range(3):
samples_binomial = np.random.binomial(n[i],p[i], size=10000)
# Print results
print('n =', n[i], 'Binom:', np.mean(samples_binomial),
np.std(samples_binomial))
```
> Note: The means are all about the same, which can be shown to be true by doing some pen-and-paper work. The standard deviation of the Binomial distribution gets closer and closer to that of the Poisson distribution as the probability $p$ gets lower and lower.
### Was 2015 anomalous?
In baseball, a no-hitter is a game in which a pitcher does not allow the other team to get a hit. This is a rare event: since the beginning of the so-called modern era of baseball (starting in 1901), there have only been 251 of them through the 2015 season, in over 200,000 games. Which probability distribution would be appropriate to describe the number of no-hitters we would expect in a given season? Both the Binomial and the Poisson, though the Poisson is easier to model and compute.
> Important: When we have rare events (low $p$, high $n$), the Binomial distribution approaches the Poisson distribution. The Poisson has a single parameter, the mean number of successes per time interval; in our case, the mean number of no-hitters per season.
1990 and 2015 featured the most no-hitters of any season of baseball (there were seven). Given that there are on average $\frac{251}{115}$ no-hitters per season, what is the probability of having seven or more in a season? Let's find out.
```
# Draw 10,000 samples out of Poisson distribution: n_nohitters
n_nohitters = np.random.poisson(251/115, size=10000)
# Compute number of samples that are seven or greater: n_large
n_large = np.sum(n_nohitters >= 7)
# Compute probability of getting seven or more: p_large
p_large = n_large/10000
# Print the result
print('Probability of seven or more no-hitters:', p_large)
```
> Note: The result is about $0.007$. This means that it is not that improbable to see a 7-or-more no-hitter season in a century. We have seen two in a century and a half, so it is not unreasonable.
# Thinking probabilistically-- Continuous variables
It’s time to move onto continuous variables, such as those that can take on any fractional value. Many of the principles are the same, but there are some subtleties. We will be speaking the probabilistic language needed to launch into the inference techniques.
## Probability density functions
> ### Continuous variables
- Quantities that can take any value, not just discrete values
> ### Probability density function (PDF)
- Continuous analog to the PMF
- Mathematical description of the relative likelihood of observing a value of a continuous variable (a short illustration follows below)
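Because a PDF describes relative likelihood, probabilities come from areas under the curve rather than from its height. A minimal illustration (assuming `scipy` is available): for a standard Normal variable, the probability of falling within one standard deviation of the mean is the area under the PDF between $-1$ and $1$.
```
from scipy.stats import norm
# area under the standard Normal PDF between -1 and 1, via the difference of CDF values
print(norm.cdf(1) - norm.cdf(-1))  # about 0.68
```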
## Introduction to the Normal distribution
> ### Normal distribution
- Describes a continuous variable whose PDF has a single symmetric peak.
>|Parameter| |Calculated from data|
>|---|---|---|
>|mean of a Normal distribution|≠|mean computed from data|
>|st. dev. of a Normal distribution|≠|standard deviation computed from data|
### The Normal PDF
```
# Draw 100000 samples from Normal distribution with stds of interest: samples_std1, samples_std3, samples_std10
samples_std1 = np.random.normal(20,1,size=100000)
samples_std3 = np.random.normal(20, 3, size=100000)
samples_std10 = np.random.normal(20, 10, size=100000)
# Make histograms
_ = plt.hist(samples_std1, density=True, histtype="step", bins=100)
_ = plt.hist(samples_std3, density=True, histtype="step", bins=100)
_ = plt.hist(samples_std10, density=True, histtype="step", bins=100)
# Make a legend, set limits and show plot
_ = plt.legend(('std = 1', 'std = 3', 'std = 10'))
plt.ylim(-0.01, 0.42)
plt.show()
```
> Note: You can see how the different standard deviations result in PDFs of different widths. The peaks are all centered at the mean of 20.
### The Normal CDF
```
# Generate CDFs
x_std1, y_std1 = ecdf(samples_std1)
x_std3, y_std3 = ecdf(samples_std3)
x_std10, y_std10 = ecdf(samples_std10)
# Plot CDFs
_ = plt.plot(x_std1, y_std1, marker=".", linestyle="none")
_ = plt.plot(x_std3, y_std3, marker=".", linestyle="none")
_ = plt.plot(x_std10, y_std10, marker=".", linestyle="none")
# Make a legend and show the plot
_ = plt.legend(('std = 1', 'std = 3', 'std = 10'), loc='lower right')
plt.show()
```
> Note: The CDFs all pass through the mean at the 50th percentile; the mean and median of a Normal distribution are equal. The width of the CDF varies with the standard deviation.
## The Normal distribution: Properties and warnings
### Are the Belmont Stakes results Normally distributed?
Since 1926, the Belmont Stakes has been a $1.5$-mile-long race of 3-year-old thoroughbred horses. <a href="https://en.wikipedia.org/wiki/Secretariat_(horse)">Secretariat</a> ran the fastest Belmont Stakes in history in $1973$. While that was the fastest year, 1970 was the slowest because of unusually wet and sloppy conditions. With these two outliers removed from the data set, we will compute the mean and standard deviation of the Belmont winners' times. We will then sample out of a Normal distribution with this mean and standard deviation using the `np.random.normal()` function, plot its CDF, and overlay the ECDF of the actual winning Belmont times {% fn 2 %}.
```
belmont_no_outliers = np.array([148.51, 146.65, 148.52, 150.7 , 150.42, 150.88, 151.57, 147.54,
149.65, 148.74, 147.86, 148.75, 147.5 , 148.26, 149.71, 146.56,
151.19, 147.88, 149.16, 148.82, 148.96, 152.02, 146.82, 149.97,
146.13, 148.1 , 147.2 , 146. , 146.4 , 148.2 , 149.8 , 147. ,
147.2 , 147.8 , 148.2 , 149. , 149.8 , 148.6 , 146.8 , 149.6 ,
149. , 148.2 , 149.2 , 148. , 150.4 , 148.8 , 147.2 , 148.8 ,
149.6 , 148.4 , 148.4 , 150.2 , 148.8 , 149.2 , 149.2 , 148.4 ,
150.2 , 146.6 , 149.8 , 149. , 150.8 , 148.6 , 150.2 , 149. ,
148.6 , 150.2 , 148.2 , 149.4 , 150.8 , 150.2 , 152.2 , 148.2 ,
149.2 , 151. , 149.6 , 149.6 , 149.4 , 148.6 , 150. , 150.6 ,
149.2 , 152.6 , 152.8 , 149.6 , 151.6 , 152.8 , 153.2 , 152.4 ,
152.2 ])
# Compute mean and standard deviation: mu, sigma
mu = np.mean(belmont_no_outliers)
sigma = np.std(belmont_no_outliers)
# Sample out of a normal distribution with this mu and sigma: samples
samples = np.random.normal(mu, sigma, size=10000)
# Get the CDF of the samples and of the data
x_theor, y_theor = ecdf(samples)
x,y = ecdf(belmont_no_outliers)
# Plot the CDFs and show the plot
_ = plt.plot(x_theor, y_theor)
_ = plt.plot(x, y, marker='.', linestyle='none')
_ = plt.xlabel('Belmont winning time (sec.)')
_ = plt.ylabel('CDF')
plt.show()
```
> Note: The theoretical CDF and the ECDF of the data suggest that the winning Belmont times are, indeed, Normally distributed. This also suggests that in the last 100 years or so, there have not been major technological or training advances that have significantly affected the speed at which horses can run this race.
### What are the chances of a horse matching or beating Secretariat's record?
Let's compute the probability that the winner of a given Belmont Stakes will run it as fast or faster than Secretariat, assuming that the Belmont winners' times are Normally distributed (with the 1970 and 1973 outliers removed).
```
# Take a million samples out of the Normal distribution: samples
samples = np.random.normal(mu, sigma, size=1000000)
# Compute the fraction that are faster than 144 seconds: prob
prob = np.sum(samples<=144)/len(samples)
# Print the result
print('Probability of besting Secretariat:', prob)
```
> Note: We had to take a million samples because the probability of a fast time is very low and we had to be sure to sample enough. We get that there is only a 0.06% chance of a horse running the Belmont as fast as Secretariat.
## The Exponential distribution
The waiting time between arrivals of a Poisson process is Exponentially distributed.
> ### Possible Poisson process
- Nuclear incidents: the timing of one is independent of all others
The Exponential PDF, written in terms of the mean waiting time $\beta$, is
$$f(x; 1/\beta) = \frac{1}{\beta} \exp\left(-\frac{x}{\beta}\right)$$
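A quick numerical check of this parameterization (a sketch using only NumPy): samples drawn with `np.random.exponential(beta)` should have a mean close to the mean waiting time $\beta$.
```
np.random.seed(42)
samples = np.random.exponential(10, size=100000)
# the sample mean should be close to the mean waiting time of 10
print(np.mean(samples))
```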
### If you have a story, you can simulate it!
Sometimes, the story describing our probability distribution does not have a named distribution to go along with it. In these cases, fear not! You can always simulate it.
Earlier, we looked at the rare event of no-hitters in Major League Baseball. _Hitting the cycle_ is another rare baseball event. When a batter hits the cycle, he gets all four kinds of hits (a single, double, triple, and home run) in a single game. Like no-hitters, this can be modeled as a Poisson process, so the time between hits of the cycle is also Exponentially distributed.
How long must we wait to see both a no-hitter and then a batter hit the cycle? The idea is that we have to wait some time for the no-hitter, and then after the no-hitter, we have to wait for hitting the cycle. Stated another way, what is the total waiting time for the arrival of two different Poisson processes? The total waiting time is the time waited for the no-hitter, plus the time waited for the hitting the cycle.
> Important: We will write a function to sample out of the distribution described by this story.
```
def successive_poisson(tau1, tau2, size=1):
"""Compute time for arrival of 2 successive Poisson processes."""
# Draw samples out of first exponential distribution: t1
t1 = np.random.exponential(tau1, size=size)
# Draw samples out of second exponential distribution: t2
t2 = np.random.exponential(tau2, size=size)
return t1 + t2
```
### Distribution of no-hitters and cycles
We'll use the sampling function to compute the waiting time to observe a no-hitter and hitting of the cycle. The mean waiting time for a no-hitter is $764$ games, and the mean waiting time for hitting the cycle is $715$ games.
```
# Draw samples of waiting times: waiting_times
waiting_times = successive_poisson(764, 715, size=100000)
# Make the histogram
_ = plt.hist(waiting_times, bins=100, density=True, histtype="step")
# Label axes
_ = plt.xlabel("Waiting times")
_ = plt.ylabel("probability")
# Show the plot
plt.show()
```
Notice that the PDF is peaked, unlike the Exponential waiting-time distribution for a single Poisson process. For fun (and enlightenment), let's also plot the CDF.
```
x,y = ecdf(waiting_times)
_ = plt.plot(x,y)
_ = plt.plot(x,y, marker=".", linestyle="none")
_ = plt.xlabel("Waiting times")
_ = plt.ylabel("CDF")
plt.show()
```
{{'For this exercise and all going forward, the random number generator is pre-seeded for you (with `np.random.seed(42)`) to save you typing that each time.' | fndetail: 1 }}
{{'we scraped the data concerning the Belmont Stakes from the [Belmont Wikipedia page](https://en.wikipedia.org/wiki/Belmont_Stakes).' | fndetail: 2 }}
# TorchDyn Quickstart
**TorchDyn is the toolkit for continuous models in PyTorch. Play with state-of-the-art architectures or use its powerful libraries to create your own.**
Central to the `torchdyn` approach are continuous neural networks, where *width*, *depth* (or both) are taken to their infinite limit. On the optimization front, we consider continuous "data-stream" regimes and gradient flow methods, where the dataset represents a time-evolving signal processed by the neural network to adapt its parameters.
By providing a centralized, easy-to-access collection of model templates, tutorial and application notebooks, we hope to speed-up research in this area and ultimately contribute to turning neural differential equations into an effective tool for control, system identification and common machine learning tasks.
```
from torchdyn.models import *
from torchdyn.data_utils import *
from torchdyn import *
```
## Generate data from a static toy dataset
We’ll be generating data from toy datasets. We provide a wide range of datasets often used to benchmark and understand neural ODEs. Here we will use the classic moons dataset and train a neural ODE for binary classification.
```
d = ToyDataset()
X, yn = d.generate(n_samples=520, dataset_type='moons')
import matplotlib.pyplot as plt
colors = ['orange', 'blue']
fig = plt.figure(figsize=(3,3))
ax = fig.add_subplot(111)
for i in range(len(X)):
ax.scatter(X[i,0], X[i,1], color=colors[yn[i].int()])
```
Generated data can be easily loaded in the dataloader with standard `PyTorch` calls
```
import torch
import torch.utils.data as data
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
X_train = torch.Tensor(X).to(device)
y_train = torch.LongTensor(yn.long()).to(device)
train = data.TensorDataset(X_train, y_train)
trainloader = data.DataLoader(train, batch_size=len(X), shuffle=False)
```
The learner is defined as follows, as a `pytorch_lightning.LightningModule` wrapping the model:
```
import torch.nn as nn
import pytorch_lightning as pl
class Learner(pl.LightningModule):
def __init__(self, model:nn.Module, settings:dict={}):
        super().__init__()
        defaults = {}  # default settings; updated with any user-supplied settings
        defaults.update(settings)
        self.settings = defaults
self.model = model
self.c = 0
def forward(self, x):
return self.model(x)
def training_step(self, batch, batch_idx):
x, y = batch
y_hat = self.model(x)
loss = nn.CrossEntropyLoss()(y_hat, y)
logs = {'train_loss': loss}
return {'loss': loss, 'log': logs}
def configure_optimizers(self):
return torch.optim.Adam(self.model.parameters(), lr=0.01)
def train_dataloader(self):
return trainloader
```
## Define a Neural ODE
Analogously to most forward neural models we want to realize a map
$$
x \mapsto \hat y
$$
where $\hat y$ becomes the best approximation of a true output $y$ given an input $x$.\
In torchdyn you can define very simple neural ODE models of the form
$$ \left\{
\begin{aligned}
\dot{h}(s) &= f(h(s), \theta)\\
h(0) &= x\\
\hat y & = h(1)
\end{aligned}
\right. \quad s\in[0,1]
$$
by just specifying a neural network $f$ and giving some simple settings.
**Note:** This neural ODE model is of the *depth-invariant* type, since neither $f$ explicitly depends on $s$ nor are the parameters $\theta$ depth-varying. This model, together with its *depth-variant* counterpart in which $s$ is concatenated into the vector field, was first proposed and implemented by [[Chen T. Q. et al, 2018]](https://arxiv.org/abs/1806.07366).
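To make the *depth-variant* idea concrete, here is a hedged, generic PyTorch sketch (this is **not** the `torchdyn` API; the module name and the way $s$ is passed in are hypothetical) of a vector field that concatenates the depth $s$ to the state before the MLP:
```
import torch
import torch.nn as nn

class DepthVariantField(nn.Module):
    """Toy depth-variant vector field f(s, h): concatenate the depth s to the state h."""
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden),  # +1 input feature for the depth s
            nn.Tanh(),
            nn.Linear(hidden, dim),
        )

    def forward(self, s, h):
        # broadcast the scalar depth s into one extra column appended to h
        s_col = s * torch.ones(h.shape[0], 1, device=h.device)
        return self.net(torch.cat([h, s_col], dim=1))

# usage sketch on a random batch of 8 two-dimensional states
f_var = DepthVariantField()
h = torch.randn(8, 2)
print(f_var(torch.tensor(0.5), h).shape)  # torch.Size([8, 2])
```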
### Define the vector field (DEFunc)
The first step is to define a `torch.nn.Sequential` object and wrap it with the `DEFunc` class from torchdyn. This automatically defines the vector field $f(h,\theta)$ of the neural ODE
```
f = DEFunc(nn.Sequential(
nn.Linear(2, 64),
nn.Tanh(),
nn.Linear(64,2)
)
)
```
In this case we chose $f$ to be a simple MLP with one hidden layer and a $\tanh$ activation.
### Define the NeuralDE
The final step to define a neural ODE object is to instantiate an object of the torchdyn's class `NeuralDE` passing some preferences and `f`.
In this case with `settings` we just specify that:
* we want a `'classic'` neural ODE;
* we will use the `'dopri5'` (Dormand-Prince) ODE solver from `torchdiffeq`;
* we compute backward gradients with the `'adjoint'` method.
```
settings = {'type':'classic', 'solver':'dopri5', 'backprop_style':'adjoint'}
model = NeuralDE(f, settings).to(device)
```
## Train the Model
```
learn = Learner(model)
trainer = pl.Trainer(min_nb_epochs=200, max_nb_epochs=300)
trainer.fit(learn)
```
With the method `trajectory` of `NeuralDE` objects you can quickly evaluate the entire trajectory of each data point in `X_train` on an interval `s_span`
```
s_span = torch.linspace(0,1,100)
trajectory = model.trajectory(X_train, s_span).detach().cpu()
```
### Plot the Training Results
We can first plot the trajectories of the data points in the depth domain $s$
```
color=['orange', 'blue']
fig = plt.figure(figsize=(8,2))
ax0 = fig.add_subplot(121)
ax1 = fig.add_subplot(122)
for i in range(500):
ax0.plot(s_span, trajectory[:,i,0], color=color[int(yn[i])], alpha=.1);
ax1.plot(s_span, trajectory[:,i,1], color=color[int(yn[i])], alpha=.1);
ax0.set_xlabel(r"$s$ [Depth]")
ax0.set_ylabel(r"$h_0(s)$")
ax0.set_title("Dimension 0")
ax1.set_xlabel(r"$s$ [Depth]")
ax1.set_ylabel(r"$h_1(s)$")
ax1.set_title("Dimension 1")
```
Then the trajectory in the *state-space*
```
fig = plt.figure(figsize=(3,3))
ax = fig.add_subplot(111)
for i in range(500):
ax.plot(trajectory[:,i,0], trajectory[:,i,1], color=color[int(yn[i])], alpha=.1);
ax.set_xlabel(r"$h_0$")
ax.set_ylabel(r"$h_1$")
ax.set_title("Flows in the state-space")
```
As you can see, the neural ODE steers the data points into regions of null loss with a continuous flow in the depth domain. Finally, we can also plot the learned vector field $f$.
```
plot_static_vector_field(model, trajectory)
```
**Sweet! You trained your first neural ODE! Now go on and learn more advanced models with the next tutorials**
# Setup
```
# Python 3 compatability
from __future__ import division, print_function
# system functions that are always useful to have
import time, sys, os
# basic numeric setup
import numpy as np
import math
from numpy import linalg
import scipy
from scipy import stats
# plotting
import matplotlib
from matplotlib import pyplot as plt
# fits data
from astropy.io import fits
# inline plotting
%matplotlib inline
# re-defining plotting defaults
from matplotlib import rcParams
rcParams.update({'xtick.major.pad': '7.0'})
rcParams.update({'xtick.major.size': '7.5'})
rcParams.update({'xtick.major.width': '1.5'})
rcParams.update({'xtick.minor.pad': '7.0'})
rcParams.update({'xtick.minor.size': '3.5'})
rcParams.update({'xtick.minor.width': '1.0'})
rcParams.update({'ytick.major.pad': '7.0'})
rcParams.update({'ytick.major.size': '7.5'})
rcParams.update({'ytick.major.width': '1.5'})
rcParams.update({'ytick.minor.pad': '7.0'})
rcParams.update({'ytick.minor.size': '3.5'})
rcParams.update({'ytick.minor.width': '1.0'})
rcParams.update({'axes.titlepad': '15.0'})
rcParams.update({'axes.labelpad': '15.0'})
rcParams.update({'font.size': 30})
```
# HSC SynPipe
Plot comparisons to data from HSC Synpipe.
```
# convert from magnitudes to fluxes
def inv_magnitude(mag, err, zeropoints=1.):
phot = 10**(-0.4 * mag) * zeropoints
phot_err = err * 0.4 * np.log(10.) * phot
return phot, phot_err
bands = ['G', 'R', 'I', 'Z', 'Y']
cpivot, mrange = 1e-4, 2. # pivot point & mag range used to shift offsets
nmin = 10 # minimum number of objects required to plot results
boxcar = 6 # bins used for boxcar used to determine variance for plotting
mgrid = np.arange(18, 28., 0.15) # magnitude bins
dmgrid = np.arange(-0.1, 0.1, 0.0075) # dmag bins
dmpgrid = np.arange(-0.02, 0.02, 1e-5) # dmag (predicted) bins
plt.figure(figsize=(50, 14))
plt.suptitle('Tract 8764 (Good Seeing)', y=1.02, fontsize=40)
for count, i in enumerate(bands):
# load data
data = fits.open('data/synpipe/star1_HSC-{0}_good.fits'.format(i))[1].data
# top panel: dmag distribution (shifted)
plt.subplot(2, 5, 1 + count)
moff = np.median(data['mag'] - data['mag.psf'])
n, bx, by, _ = plt.hist2d(data['mag'],
data['mag.psf'] - data['mag'] + moff,
[mgrid, dmgrid])
xc, yc = 0.5 * (bx[1:] + bx[:-1]), 0.5 * (by[1:] + by[:-1]) # bin centers
nmag = np.sum(n, axis=1) # counts per magnitude bin
nmean = np.sum(yc * n, axis=1) / np.sum(n, axis=1) # mean
nstd = np.sqrt(np.sum((yc[None, :] - nmean[:, None])**2 * n, axis=1) / np.sum(n, axis=1)) # error
# compute SNR as a function of magnitude
mconst = 2.5 / np.log(10)
fout, fe = inv_magnitude(data['mag.psf'], data['mag.psf.err'])
fin, fe2 = inv_magnitude(data['mag'], data['mag.psf.apcorr.err'])
snr = fout/np.sqrt(fe**2 + (0.02 * fout)**2)
# first order
n, bx, by = np.histogram2d(data['mag'], -mconst * snr**-2,
[mgrid, dmpgrid])
xc, yc = 0.5 * (bx[1:] + bx[:-1]), 0.5 * (by[1:] + by[:-1])
cmean = np.sum(yc * n, axis=1) / np.sum(n, axis=1)
# prettify
plt.xlabel('{0}-band PSF Mag'.format(i))
plt.ylabel(r'$\Delta\,$mag')
    try:
        midx = np.where(cmean < -0.012)[0][0]
    except IndexError:  # no bins fall below the threshold
        midx = -1
plt.xlim([mgrid[0], mgrid[midx]])
plt.tight_layout()
# bottom panel: computed mean offsets vs predicted mean offsets
plt.subplot(2, 5, 6 + count)
mhigh = xc[np.abs(cmean - np.nanmax(cmean)) > cpivot][0]
mlow = mhigh - mrange
nsel = nmag > nmin
offset = np.nanmedian(nmean[(xc >= mlow) & (xc <= mhigh) & nsel])
nmean_err = nstd/np.sqrt(nmag)
nmean_serr = np.array([np.std(nmean[i:i+boxcar]) for i in range(len(nmean) - boxcar)])
nmean_err[boxcar//2:-boxcar//2] = np.sqrt(nmean_err[boxcar//2:-boxcar//2]**2 + nmean_serr**2)
plt.fill_between(xc[nsel], (nmean - offset - nmean_err)[nsel],
(nmean - offset + nmean_err)[nsel], color='gray', alpha=0.8)
# apply linear bias correction
lin_coeff = np.polyfit(xc[(xc >= mlow - 1) & (xc <= mhigh + 1) & nsel],
nmean[(xc >= mlow - 1) & (xc <= mhigh + 1) & nsel], 1)
lin_off = np.poly1d(lin_coeff)(xc)
nmean -= lin_off
offset = np.nanmedian(nmean[(xc >= mlow) & (xc <= mhigh) & nsel])
plt.fill_between(xc[nsel], (nmean - offset - nmean_err)[nsel],
(nmean - offset + nmean_err)[nsel], color='orange', alpha=0.4)
# plot prediction
plt.plot(xc[nsel], cmean[nsel], lw=6, color='red', alpha=0.7)
# prettify
plt.xlabel('{0}-band PSF Mag'.format(i))
plt.ylabel(r'Mean $\Delta\,$mag')
    try:
        midx = np.where(cmean < -0.012)[0][0]
    except IndexError:  # no bins fall below the threshold
        midx = -1
plt.xlim([mgrid[0], mgrid[midx]])
plt.ylim([-0.02, 0.01])
plt.tight_layout()
plt.legend(['Prediction', 'No Corr.', 'Linear Corr.'])
# save figure
plt.savefig('plots/hsc_synpipe_goodseeing.png', bbox_inches='tight')
plt.figure(figsize=(50, 14))
plt.suptitle('Tract 9699 (Poor Seeing)', y=1.02, fontsize=40)
for count, i in enumerate(bands):
# load data
data = fits.open('data/synpipe/star2_HSC-{0}_good.fits'.format(i))[1].data
# top panel: dmag distribution (shifted)
plt.subplot(2, 5, 1 + count)
moff = np.median(data['mag'] - data['mag.psf'])
n, bx, by, _ = plt.hist2d(data['mag'],
data['mag.psf'] - data['mag'] + moff,
[mgrid, dmgrid])
xc, yc = 0.5 * (bx[1:] + bx[:-1]), 0.5 * (by[1:] + by[:-1]) # bin centers
nmag = np.sum(n, axis=1) # counts per magnitude bin
nmean = np.sum(yc * n, axis=1) / np.sum(n, axis=1) # mean
nstd = np.sqrt(np.sum((yc[None, :] - nmean[:, None])**2 * n, axis=1) / np.sum(n, axis=1)) # error
# compute SNR as a function of magnitude
mconst = 2.5 / np.log(10)
fout, fe = inv_magnitude(data['mag.psf'], data['mag.psf.err'])
fin, fe2 = inv_magnitude(data['mag'], data['mag.psf.apcorr.err'])
snr = fout/np.sqrt(fe**2 + (0.02 * fout)**2)
# first order
n, bx, by = np.histogram2d(data['mag'], -mconst * snr**-2,
[mgrid, dmpgrid])
xc, yc = 0.5 * (bx[1:] + bx[:-1]), 0.5 * (by[1:] + by[:-1])
cmean = np.sum(yc * n, axis=1) / np.sum(n, axis=1)
# prettify
plt.xlabel('{0}-band PSF Mag'.format(i))
plt.ylabel(r'$\Delta\,$mag')
    try:
        midx = np.where(cmean < -0.012)[0][0]
    except IndexError:  # no bins fall below the threshold
        midx = -1
plt.xlim([mgrid[0], mgrid[midx]])
plt.tight_layout()
# bottom panel: computed mean offsets vs predicted mean offsets
plt.subplot(2, 5, 6 + count)
mhigh = xc[np.abs(cmean - np.nanmax(cmean)) > cpivot][0]
mlow = mhigh - mrange
nsel = nmag > nmin
offset = np.nanmedian(nmean[(xc >= mlow) & (xc <= mhigh) & nsel])
nmean_err = nstd/np.sqrt(nmag)
nmean_serr = np.array([np.std(nmean[i:i+boxcar]) for i in range(len(nmean) - boxcar)])
nmean_err[boxcar//2:-boxcar//2] = np.sqrt(nmean_err[boxcar//2:-boxcar//2]**2 + nmean_serr**2)
plt.fill_between(xc[nsel], (nmean - offset - nmean_err)[nsel],
(nmean - offset + nmean_err)[nsel], color='gray', alpha=0.8)
# apply linear bias correction
lin_coeff = np.polyfit(xc[(xc >= mlow - 1) & (xc <= mhigh + 1) & nsel],
nmean[(xc >= mlow - 1) & (xc <= mhigh + 1) & nsel], 1)
lin_off = np.poly1d(lin_coeff)(xc)
nmean -= lin_off
offset = np.nanmedian(nmean[(xc >= mlow) & (xc <= mhigh) & nsel])
plt.fill_between(xc[nsel], (nmean - offset - nmean_err)[nsel],
(nmean - offset + nmean_err)[nsel], color='orange', alpha=0.4)
# plot prediction
plt.plot(xc[nsel], cmean[nsel], lw=6, color='red', alpha=0.7)
# prettify
plt.xlabel('{0}-band PSF Mag'.format(i))
plt.ylabel(r'Mean $\Delta\,$mag')
    try:
        midx = np.where(cmean < -0.012)[0][0]
    except IndexError:  # no bins fall below the threshold
        midx = -1
plt.xlim([mgrid[0], mgrid[midx]])
plt.ylim([-0.02, 0.01])
plt.tight_layout()
plt.legend(['Prediction', 'No Corr.', 'Linear Corr.'])
# save figure
plt.savefig('plots/hsc_synpipe_poorseeing.png', bbox_inches='tight')
```
# Polynomial Regression
This section covers polynomial regression. First, we will grab the dataset and clean it a little bit.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
nyc_pumpkins = pd.read_csv("./new-york_9-24-2016_9-30-2017.csv")
cat_map = {
'sml': 0,
'med': 1,
'med-lge': 2,
'lge': 3,
'xlge': 4,
'exjbo': 5
}
nyc_pumpkins = nyc_pumpkins.assign(
size=nyc_pumpkins['Item Size'].map(cat_map),
    price=(nyc_pumpkins['High Price'] + nyc_pumpkins['Low Price']) / 2,  # midpoint of the high and low prices
size_class=(nyc_pumpkins['Item Size'].map(cat_map) >= 2).astype(int)
)
nyc_pumpkins = nyc_pumpkins.drop([c for c in nyc_pumpkins.columns if c not in ['size', 'price', 'size_class']],
axis='columns')
nyc_pumpkins = nyc_pumpkins.dropna()
nyc_pumpkins.head(10)
nyc_pumpkins.shape
```
Now we will split into train and test sets with the useful `train_test_split` method from sklearn. We will test with polynomials of degree 1, 2 and 4. Remember, a polynomial of degree 1 is just linear regression!
In this case, since we want to predict the size, which is an integer, we will round the predicted value and check how often we land within a distance of 1 class of the true size.
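Before fitting, it may help to see what `PolynomialFeatures` actually produces. A minimal sketch on a made-up toy array (not the pumpkin data): degree 2 expands a single feature $x$ into the columns $1, x, x^2$.
```
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

x = np.array([[1.0], [2.0], [3.0]])
# columns are 1, x, x^2
print(PolynomialFeatures(degree=2, include_bias=True).fit_transform(x))
# [[1. 1. 1.]
#  [1. 2. 4.]
#  [1. 3. 9.]]
```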
```
X_train, X_test, y_train, y_test = train_test_split(nyc_pumpkins['price'], nyc_pumpkins['size'], test_size=0.20, random_state=42, shuffle=True) #split 20% into test set
degrees = [1, 2, 4]
fig = plt.figure(figsize=(20,10))
for i in range(len(degrees)):
ax = plt.subplot(1, len(degrees)+1, i + 2)
plt.setp(ax, xticks=(), yticks=())
polynomial_features = PolynomialFeatures(degree=degrees[i],
include_bias=True)
linear_regression = LinearRegression()
pipeline = Pipeline([("polynomial_features", polynomial_features),
("linear_regression", linear_regression)])
    pipeline.fit(X_train.values[:, np.newaxis], y_train)
    # Evaluate on the held-out test set, rounding predictions to the nearest size class
    predicted_sizes = np.round(pipeline.predict(X_test.values[:, np.newaxis]))
    plt.style.use('fivethirtyeight')
pd.Series(
np.abs((np.array(y_test) - predicted_sizes).flatten()) <= 1
).value_counts().plot.bar(title='Accuracy Within 1 Class \n for degree {}'.format(degrees[i]))
```
We did pretty well with polynomial regression! Let's analyze how well it can generalize. One problem we will have with degree 4 is correlation between the polynomial features; let's check it!
```
correlations = pd.DataFrame(PolynomialFeatures(degree=4, include_bias=False).fit_transform(np.array(nyc_pumpkins['price']).reshape(-1,1))).corr()
# plot correlation matrix
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
cax = ax.matshow(correlations, vmin=-1, vmax=1)
fig.colorbar(cax)
ticks = np.arange(0,4,1)
ax.set_xticks(ticks)
ax.set_yticks(ticks)
ax.set_xticklabels(['X', 'X^2', 'X^3', 'X^4'])
ax.set_yticklabels(['X', 'X^2', 'X^3', 'X^4'])
plt.show()
```
That's bad! Finally, to see the real issue with polynomial regression, let's add just *ONE* outlier!
```
X_train, X_test, y_train, y_test = train_test_split(nyc_pumpkins['price'], nyc_pumpkins['size'], test_size=0.20, random_state=42, shuffle=True) #split 20% into test set
X_train[20]= -10
y_train[20] = 30
degrees = [1, 2, 4]
fig = plt.figure(figsize=(20,10))
for i in range(len(degrees)):
ax = plt.subplot(1, len(degrees)+1, i + 2)
plt.setp(ax, xticks=(), yticks=())
polynomial_features = PolynomialFeatures(degree=degrees[i],
include_bias=True)
linear_regression = LinearRegression()
pipeline = Pipeline([("polynomial_features", polynomial_features),
("linear_regression", linear_regression)])
    pipeline.fit(X_train.values[:, np.newaxis], y_train)
    # Evaluate on the held-out test set, rounding predictions to the nearest size class
    predicted_sizes = np.round(pipeline.predict(X_test.values[:, np.newaxis]))
    plt.style.use('fivethirtyeight')
pd.Series(
np.abs((np.array(y_test) - predicted_sizes).flatten()) <= 1
).value_counts().plot.bar(title='Accuracy Within 1 Class \n for degree {}'.format(degrees[i]))
```
Polynomial regression is really not robust to outliers!
### Cleaning data associated with bills (utterances and summaries) so they are ready for input to the pointer-gen model - this is the new cleaning method implementation
There are 6541 BIDs which overlap between the utterances and summaries datasets (using all the summary data). There are 359 instances in which the summaries are greater than 100 tokens in length, and 41 instances in which the summaries are greater than 201 tokens in length. In these instances, the summaries with fewer than 201 tokens were cut to their first 100 tokens (anything over 201 tokens is cut entirely). There are 374 instances in which the utterances are less than 70 tokens in length. In the final dataset (old) of 6000 examples, there are 865 examples of resolutions.
There are 374+127=501 instances in which the utterances are less than 100 tokens in length.
```
import json
import numpy as np
import ast
import re
import spacy
from collections import Counter,defaultdict
import warnings
warnings.filterwarnings('ignore')
nlp = spacy.load("en_core_web_sm")
with open("../data/bill_summaries.json") as summaries_file: # loading in the data
bill_summaries = json.load(summaries_file)
with open("../data/bill_utterances.json") as utterances_file:
bill_utterances = json.load(utterances_file)
ca_bill_utterances = bill_utterances['CA']
```
### Cleaning data before the processing to format which is accepted by pointer-gen model
```
def clean_bill_summaries(bill_summaries,max_summary_length=201,ignore_resolutions=False):
""" post-processing to remove bill summary entries with certain critera:
1) if the summary does not start with "This" (probable encoding error)
2) if "page 1" occurs in the text (indicates improper encoding)
3) if the text is over max_summary_length tokens in length (very long summaries indicate probable encoding error)
-for bill summaries which have ordering (" 1)"," 2)","(1)","(2)"," a)","(a)"), removes the implicit ordering
    args:
        bill_summaries: dict mapping BIDs to bill summary entries
        max_summary_length: max length (in tokens) of summaries to keep
        ignore_resolutions (bool): whether to ignore resolutions and only output bills
"""
num_cutoff_counter=0 # counts the number of summaries ignored due to being too long
bill_summary_info = defaultdict(dict) # stores both summaries and utterances for each CA bill
for bid,summary in bill_summaries.items():
text = summary['text']
if "page 1" in text: # ignore this instance, indicator of encoding error
continue
if text[0:4] != "This": # relatively strong indicator that there was error in encoding
continue
if ignore_resolutions and "R" in bid: # ignore this instance if wanting to ignore resolutions
continue
tokens = [str(token) for token in nlp(text)]
if len(tokens)>max_summary_length: # ignore this instance, includes many errors in pdf encoding in which end state not reached
num_cutoff_counter += 1
continue
# removing the implicit ordering for all instances
if " 1)" in text or " 2)" in text or "(1)" in text or "(2)" in text or " a)" in text or " b)" in text or "(a)" in text or "(b)" in text:
text = re.sub(" \([0-9]\)","",text)
text = re.sub(" [0-9]\)","",text)
text = re.sub(" \([a-j]\)","",text)
text = re.sub(" [a-j]\)","",text)
tokens = [str(token) for token in nlp(text)]
bill_summary_info[bid]['summary'] = summary
bill_summary_info[bid]['summary']['text']=text # text is occasionally updated (when ordering removed)
bill_summary_info[bid]['summary_tokens'] = tokens
return bill_summary_info,num_cutoff_counter
bill_summary_info,_ = clean_bill_summaries(bill_summaries,max_summary_length=650,ignore_resolutions=False)
len(bill_summary_info)
def clean_bill_utterances(bill_summary_info,ca_bill_utterances,minimum_utterance_tokens=99,token_cutoff=1000):
""" cleans and combines the summary and utterance data
args:
bill_summary_info: holds cleaned information about bill summaries
token_cutoff: max number of tokens to consider for utterances
minimum_utterance_tokens: minimum number of utterance tokens allowable
"""
num_utterance_counter=0 # counts num. examples ignored due to utterances being too short
all_bill_info = {}
all_tokens_dict = {} # stores all tokens for a given bid (ignoring token_cutoff)
for bid in ca_bill_utterances:
if bid in bill_summary_info: # there is a summary assigned to this bill
all_utterances = [] # combining all discussions (did) for this bid together
for utterance_list in ca_bill_utterances[bid]['utterances']:
all_utterances+=utterance_list
all_token_lists = [[str(token) for token in nlp(utterance)] for utterance in all_utterances]
all_tokens = [] # getting a single stream of tokens
multitask_y = [] # 0 if not end of utterance, 1 if end of utterance (multitask component)
for token_list in all_token_lists:
multitask_y += [0 for _ in range(len(token_list)-1)]+[1]
all_tokens += token_list
multitask_loss_mask = [1 for _ in range(len(multitask_y))] # getting multitask components to correct shape
if len(multitask_loss_mask)<token_cutoff:
amount_to_pad = token_cutoff-len(multitask_loss_mask)
multitask_loss_mask += [0 for _ in range(amount_to_pad)]
multitask_y += [0 for _ in range(amount_to_pad)]
multitask_loss_mask = multitask_loss_mask[:token_cutoff]
multitask_y = multitask_y[:token_cutoff]
assert(len(multitask_loss_mask)==token_cutoff and len(multitask_y)==token_cutoff)
if len(all_tokens)-len(all_token_lists)>=minimum_utterance_tokens: # ignore bids which don't have enough utterance tokens
all_tokens_dict[bid]=[token.lower() for token in all_tokens] # adding all utterance tokens
all_tokens_dict[bid]+=[token.lower() for token in bill_summary_info[bid]['summary_tokens']] # adding all summary tokens
all_bill_info[bid] = bill_summary_info[bid]
all_tokens = all_tokens[:token_cutoff] # taking up to max number of tokens
all_bill_info[bid]['utterances']=all_utterances
all_bill_info[bid]['utterance_tokens']=all_tokens
all_bill_info[bid]['resolution'] = "R" in bid
all_bill_info[bid]['multitask_y'] = multitask_y
all_bill_info[bid]['multitask_loss_mask'] = multitask_loss_mask
else:
num_utterance_counter += 1
return all_bill_info,all_tokens_dict,num_utterance_counter
all_bill_info,all_tokens_dict,_ = clean_bill_utterances(bill_summary_info,ca_bill_utterances,token_cutoff=500)
len(all_bill_info)
```
### Processing data to get to format which is accepted by pointer-gen model
```
### using pretrained Glove vectors
word_to_embedding = {}
with open("../data/glove.6B/glove.6B.100d.txt") as glove_file:
for line in glove_file.readlines():
values = line.split()
word = values[0]
coefs = np.asarray(values[1:],dtype='float32')
word_to_embedding[word] = coefs
print(len(word_to_embedding))
# getting all unique tokens used to get words which will be part of the fixed vocabulary
## specifically specifying that I want a vocabulary size of 30k (adding less common words up to that threshold)
all_tokens = []
for bid in all_tokens_dict:
all_tokens += all_tokens_dict[bid]
word_freq = Counter(all_tokens)
words_by_freq = (list(word_freq.items()))
words_by_freq.sort(key=lambda tup: tup[1],reverse=True) # sorting by occurance freq.
most_freq_words = [word_tup[0] for word_tup in words_by_freq if word_tup[1] >= 3]
most_freq_words += [word_tup[0] for word_tup in words_by_freq if word_tup[1] == 2 and word_tup[0] in word_to_embedding][:30000-3-len(most_freq_words)]
less_freq_words = [word_tup[0] for word_tup in words_by_freq if word_tup[1] < 2]
print(most_freq_words[0:10])
print(less_freq_words[0:10])
print(len(most_freq_words),len(less_freq_words))
## new addition to this where I store the word embeddings for the vocabulary
# assigning indices for all words, and adding <PAD>,<SENT>,<UNK> symbols
fixed_vocab_word_to_index = {"<PAD>":0,"<SENT>":1,"<UNK>":2} # for words assigned to the fixed_vocabulary
fixed_vocab_index_to_word = {0:"<PAD>",1:"<SENT>",2:"<UNK>"}
word_embeddings = [np.random.uniform(low=-0.05,high=0.05,size=100).astype("float32") for _ in range(3)]
index = 3 # starting index for all words
# assigning indices to most common words:
for word in most_freq_words:
fixed_vocab_word_to_index[word]=index
fixed_vocab_index_to_word[index]=word
index += 1
if word in word_to_embedding: # use pre-trained embedding
word_embeddings.append(word_to_embedding[word])
else: # initialize a trainable embedding
word_embeddings.append(np.random.uniform(low=-0.05,high=0.05,size=100).astype("float32"))
word_embeddings = np.stack(word_embeddings)
print(len(fixed_vocab_word_to_index),word_embeddings.shape)
## saving all of the vocabulary related information
np.save("../data/len_500_data/word_embeddings.npy",word_embeddings)
with open("../data/len_500_data/word_to_index.json","w+") as out_file:
json.dump(fixed_vocab_word_to_index,out_file)
with open("../data/len_500_data/index_to_word.json","w+") as out_file:
json.dump(fixed_vocab_index_to_word,out_file)
num_fixed_words = len(fixed_vocab_word_to_index)
token_cutoff=500 # this is the amount to pad up to for the input representation
# creating the input data representations for the model - input is padded up to a length of 500
x = [] # stores the integer/index representation for all input
x_indices = [] # stores the joint probability vector indices for all words in the input
x_indices_dicts = [] # stores the dicts for assigning words which are not in the fixed_vocabulary
att_mask = [] # stores the attention masks (0 for valid words, -np.inf for padding)
multitask_y = [] # stores labels for the multitask component
multitask_loss_mask = [] # stores loss mask for the multitask component
## data stores for debugging/error analysis
bill_information_dict = {} # stores summary(text),utterances(2d list of utterances),resolution(boolean) for each BID
bids = [] # stores the BIDs in sequential order
for bid in all_bill_info:
# creating representations for data store
bill_information_dict[bid] = {"summary":all_bill_info[bid]["summary"]["text"],"utterances":all_bill_info[bid]["utterances"],"resolution":all_bill_info[bid]["resolution"]}
bids.append(bid)
# getting the multitask data representations
this_multitask_y = all_bill_info[bid]['multitask_y']
this_multitask_loss_mask = all_bill_info[bid]['multitask_loss_mask']
multitask_y.append(this_multitask_y)
multitask_loss_mask.append(this_multitask_loss_mask)
# creating the standard input representation:
utterance_tokens = [token.lower() for token in all_bill_info[bid]["utterance_tokens"]]
x_rep = [] # assigning indices to words, if input word not part of fixed_vocab, assign to <UNK>
for token in utterance_tokens:
if token in fixed_vocab_word_to_index:
x_rep.append(fixed_vocab_word_to_index[token])
else:
x_rep.append(fixed_vocab_word_to_index['<UNK>'])
att_mask_rep = [0 for i in range(len(x_rep))]
amount_to_pad = token_cutoff-len(x_rep)
x_rep += [0 for i in range(amount_to_pad)] # padding the input
att_mask_rep += [-np.inf for i in range(amount_to_pad)]
x.append(x_rep)
att_mask.append(att_mask_rep)
# creating the joint probability representation for the input:
## (the index in joint prob vector that each input word probability should be assigned to)
index=num_fixed_words # start index for assignment to joint_probability vector, length of fixed_vocab_size
non_vocab_dict = {} # stores all OOV words for this bid
this_x_indices = [] # joint prob vector indices for this bid
for token in utterance_tokens:
if token in fixed_vocab_word_to_index:
this_x_indices.append(fixed_vocab_word_to_index[token])
else:
if token in non_vocab_dict: # this word is OOV but has been seen before
this_x_indices.append(non_vocab_dict[token])
else: # this word is OOV and has never been seen before
non_vocab_dict[token]=index
this_x_indices.append(index)
index += 1
x_indices_dicts.append(non_vocab_dict)
this_x_indices += [0 for i in range(amount_to_pad)] # padding will be masked out in att calculation, so padding with 0 here is valid
x_indices.append(this_x_indices)
# this is the largest number of OOV words for a given bid utterances
max([len(dic) for dic in x_indices_dicts])
# creating the output representations for the model - output is padded up to a length of 101
## the last index is for <SENT> to indicate the end of decoding (assuming representation is shorter than 100 tokens)
## if the summary is greater than 100 tokens in length, we simply keep only the first 101 tokens
### when we do this cutoff, we do NOT include the <SENT> token as the 102nd token
## all words in the output that are not in the input utterances or in the fixed vocabulary are assigned 2:<UNK>
y = [] # stores the index representations for all words in the summaries (this is never used)
loss_mask = [] # 1 for valid words, 0 for padding
decoder_x = [] # starts with 1:<SENT>, followed by y[0:len(summary)-1] (this is the input for teacher-forcing)(101x1)
y_indices = [] # index for the correct decoder prediction, in the joint-probability vector
total_oov_words = 0
resolution_bools = [] # bool, whether a given example is a resolution (False=bill); used for train_test_split
for bid_i,bid in enumerate(all_bill_info.keys()):
# creating standard output representation:
summary_tokens = [token.lower() for token in all_bill_info[bid]["summary_tokens"]]
y_rep = [] # not used in the model, stores indices using only fixed_vocab_vector
for token in summary_tokens:
if token in fixed_vocab_word_to_index:
y_rep.append(fixed_vocab_word_to_index[token])
else:
y_rep.append(fixed_vocab_word_to_index['<UNK>'])
resolution_bools.append(all_bill_info[bid]['resolution'])
## this is a new addition from before, including longer summaries, but just cutting off the text
if len(y_rep) > 100: # simply cutoff to the first 101 tokens
y_rep = y_rep[:101]
else: # append a end-of-sentence indicator
y_rep.append(fixed_vocab_word_to_index['<SENT>'])
loss_mask_rep = [1 for i in range(len(y_rep))]
decoder_x_rep = [1]+y_rep[0:len(y_rep)-1] # embedding word in input but not in fixed_vocab is currently set to <UNK>
amount_to_pad = 101-len(y_rep) # 100+1 represents final <SENT> prediction
y_rep += [0 for i in range(amount_to_pad)]
loss_mask_rep += [0 for i in range(amount_to_pad)] # cancels out loss contribution from padding
decoder_x_rep += [0 for i in range(amount_to_pad)]
# creating joint-probability representation of output:
non_vocab_dict = x_indices_dicts[bid_i]
y_indices_rep = []
for token in summary_tokens:
if token in fixed_vocab_word_to_index: # word is in fixed_vocabulary
y_indices_rep.append(fixed_vocab_word_to_index[token])
elif token in non_vocab_dict: # word is OOV but in the input utterances, use the index assigned to this word in x_indices
y_indices_rep.append(non_vocab_dict[token])
else: # word is OOV and not in input utterances
y_indices_rep.append(fixed_vocab_word_to_index["<UNK>"])
total_oov_words += 1
if len(y_indices_rep) > 100: # simply cutoff to the first 101 tokens
y_indices_rep = y_indices_rep[:101]
else: # if len <= 100, last prediction should be <SENT>
y_indices_rep.append(fixed_vocab_word_to_index['<SENT>'])
y_indices_rep += [0 for i in range(amount_to_pad)] # padding will be ignored by loss_mask
y.append(y_rep)
loss_mask.append(loss_mask_rep)
decoder_x.append(decoder_x_rep)
y_indices.append(y_indices_rep)
x = np.array(x).astype("int32")
x_indices = np.array(x_indices).astype("int32")
att_mask = np.array(att_mask).astype("float32")
loss_mask = np.array(loss_mask).astype("float32")
decoder_x = np.array(decoder_x).astype("int32")
y_indices = np.array(y_indices).astype("int32")
multitask_y = np.array(multitask_y).astype("float32")
multitask_loss_mask = np.array(multitask_loss_mask).astype("float32")
print(x.shape,x_indices.shape,att_mask.shape)
print(loss_mask.shape,decoder_x.shape,y_indices.shape)
print(multitask_y.shape,multitask_loss_mask.shape)
bids = np.array(bids)
print(bids.shape,len(bill_information_dict))
```
#### Shuffling the data so that only bills are in the validation and test sets
```
from sklearn.utils import shuffle
x_resolution = x[resolution_bools]
x_indices_resolution = x_indices[resolution_bools]
att_mask_resolution = att_mask[resolution_bools]
loss_mask_resolution = loss_mask[resolution_bools]
decoder_x_resolution = decoder_x[resolution_bools]
y_indices_resolution = y_indices[resolution_bools]
bids_resolution = bids[resolution_bools]
multitask_y_resolution = multitask_y[resolution_bools]
multitask_loss_mask_resolution = multitask_loss_mask[resolution_bools]
bill_bools = [not res_bool for res_bool in resolution_bools] # reversal
x_bill = x[bill_bools]
x_indices_bill = x_indices[bill_bools]
att_mask_bill = att_mask[bill_bools]
loss_mask_bill = loss_mask[bill_bools]
decoder_x_bill = decoder_x[bill_bools]
y_indices_bill = y_indices[bill_bools]
bids_bill = bids[bill_bools]
multitask_y_bill = multitask_y[bill_bools]
multitask_loss_mask_bill = multitask_loss_mask[bill_bools]
print(x_resolution.shape,loss_mask_resolution.shape,bids_resolution.shape,multitask_y_resolution.shape)
print(x_bill.shape,loss_mask_bill.shape,bids_bill.shape,multitask_y_bill.shape)
# shuffling only the bill data - in order to get the validation and val set data
x_bill,x_indices_bill,att_mask_bill,loss_mask_bill,decoder_x_bill,y_indices_bill,bids_bill,multitask_y_bill,multitask_loss_mask_bill = shuffle(x_bill,x_indices_bill,att_mask_bill,loss_mask_bill,decoder_x_bill,y_indices_bill,bids_bill,multitask_y_bill,multitask_loss_mask_bill,random_state=1)
x_bill_val,x_indices_bill_val,att_mask_bill_val,loss_mask_bill_val,decoder_x_bill_val,y_indices_bill_val,bids_bill_val,multitask_y_bill_val,multitask_loss_mask_bill_val = x_bill[:400],x_indices_bill[:400],att_mask_bill[:400],loss_mask_bill[:400],decoder_x_bill[:400],y_indices_bill[:400],bids_bill[:400],multitask_y_bill[:400],multitask_loss_mask_bill[:400]
x_bill_train,x_indices_bill_train,att_mask_bill_train,loss_mask_bill_train,decoder_x_bill_train,y_indices_bill_train,bids_bill_train,multitask_y_bill_train,multitask_loss_mask_bill_train = x_bill[400:],x_indices_bill[400:],att_mask_bill[400:],loss_mask_bill[400:],decoder_x_bill[400:],y_indices_bill[400:],bids_bill[400:],multitask_y_bill[400:],multitask_loss_mask_bill[400:]
print(x_bill_val.shape,loss_mask_bill_val.shape,bids_bill_val.shape,multitask_y_bill_val.shape)
print(x_bill_train.shape,loss_mask_bill_train.shape,bids_bill_train.shape,multitask_y_bill_train.shape)
## to remove resolutions, simply don't include them here
# shuffling the training set - which is a combination of bill and resolution data
x_train = np.vstack([x_bill_train,x_resolution])
x_indices_train = np.vstack([x_indices_bill_train,x_indices_resolution])
att_mask_train = np.vstack([att_mask_bill_train,att_mask_resolution])
loss_mask_train = np.vstack([loss_mask_bill_train,loss_mask_resolution])
decoder_x_train = np.vstack([decoder_x_bill_train,decoder_x_resolution])
y_indices_train = np.vstack([y_indices_bill_train,y_indices_resolution])
bids_train = np.concatenate([bids_bill_train,bids_resolution])
multitask_y_train = np.vstack([multitask_y_bill_train,multitask_y_resolution])
multitask_loss_mask_train = np.vstack([multitask_loss_mask_bill_train,multitask_loss_mask_resolution])
x_train,x_indices_train,att_mask_train,loss_mask_train,decoder_x_train,y_indices_train,multitask_y_train,multitask_loss_mask_train = shuffle(x_train,x_indices_train,att_mask_train,loss_mask_train,decoder_x_train,y_indices_train,multitask_y_train,multitask_loss_mask_train,random_state=2)
print(x_train.shape,loss_mask_train.shape,bids_train.shape,multitask_y_train.shape)
# adding all the data together, with the final 400 instances being the val and test sets
x_final = np.vstack([x_train,x_bill_val])
x_indices_final = np.vstack([x_indices_train,x_indices_bill_val])
att_mask_final = np.vstack([att_mask_train,att_mask_bill_val])
loss_mask_final = np.vstack([loss_mask_train,loss_mask_bill_val])
decoder_x_final = np.vstack([decoder_x_train,decoder_x_bill_val])
y_indices_final = np.vstack([y_indices_train,y_indices_bill_val])
bids_final = np.concatenate([bids_train,bids_bill_val])
multitask_y_final = np.vstack([multitask_y_train,multitask_y_bill_val])
multitask_loss_mask_final = np.vstack([multitask_loss_mask_train,multitask_loss_mask_bill_val])
print(x_final.shape,loss_mask_final.shape,bids_final.shape,multitask_y_final.shape)
## there is no final shuffling, as the last 400 datapoints represent the validation/test sets
subdir = "len_500_data"
np.save("../data/{}/x_500.npy".format(subdir),x_final)
np.save("../data/{}/x_indices_500.npy".format(subdir),x_indices_final)
np.save("../data/{}/att_mask_500.npy".format(subdir),att_mask_final)
np.save("../data/{}/loss_mask_500.npy".format(subdir),loss_mask_final)
np.save("../data/{}/decoder_x_500.npy".format(subdir),decoder_x_final)
np.save("../data/{}/y_indices_500.npy".format(subdir),y_indices_final)
np.save("../data/{}/bids_500.npy".format(subdir),bids_final)
np.save("../data/{}/multitask_y_500.npy".format(subdir),multitask_y_final)
np.save("../data/{}/multitask_loss_mask_500.npy".format(subdir),multitask_loss_mask_final)
with open("../data/len_500_data/bill_information.json","w+") as out_file:
json.dump(bill_information_dict,out_file)
```
# End-to-End Incremental Training Image Classification Example
1. [Introduction](#Introduction)
2. [Prerequisites and Preprocessing](#Prerequisites-and-Preprocessing)
1. [Permissions and environment variables](#Permissions-and-environment-variables)
2. [Prepare the data](#Prepare-the-data)
3. [Training the model](#Training-the-model)
1. [Training parameters](#Training-parameters)
2. [Start the training](#Start-the-training)
4. [Inference](#Inference)
## Introduction
Welcome to our end-to-end example of incremental training using the Amazon SageMaker image classification algorithm. In this demo, we will use the Amazon SageMaker image classification algorithm to train on the [caltech-256 dataset](http://www.vision.caltech.edu/Image_Datasets/Caltech256/). First, we will run the training for a few epochs. Then, we will use the model generated by that training to start another training job that improves accuracy further without training from scratch.
To get started, we need to set up the environment with a few prerequisite steps, for permissions, configurations, and so on.
## Prerequisites and Preprocessing
### Permissions and environment variables
Here we set up the linkage and authentication to AWS services. There are three parts to this:
* The roles used to give learning and hosting access to your data. This will automatically be obtained from the role used to start the notebook
* The S3 bucket that you want to use for training and model data
* The Amazon SageMaker image classification Docker image, which need not be changed
```
%%time
import sagemaker
from sagemaker import get_execution_role
role = get_execution_role()
print(role)
sess = sagemaker.Session()
bucket = sess.default_bucket()
prefix = "ic-fulltraining"
from sagemaker import image_uris
training_image = image_uris.retrieve(region=sess.boto_region_name, framework="image-classification")
```
### Prepare the data
Download the data and transfer to S3 for use in training. In this demo, we are using the [Caltech-256](http://www.vision.caltech.edu/Image_Datasets/Caltech256/) dataset, which contains 30608 images of 256 objects. For the training and validation data, we follow the splitting scheme in this MXNet [example](https://github.com/apache/incubator-mxnet/blob/master/example/image-classification/data/caltech256.sh). In particular, it randomly selects 60 images per class for training, and uses the remaining data for validation. The algorithm takes a `RecordIO` file as input. The user can also provide the image files as input, which will be converted into `RecordIO` format using MXNet's [im2rec](https://mxnet.incubator.apache.org/how_to/recordio.html?highlight=im2rec) tool. It takes around 50 seconds to convert the entire Caltech-256 dataset (~1.2GB) on a p2.xlarge instance. For this demo, however, we will use the prebuilt `RecordIO` files.
```
import boto3
s3_client = boto3.client("s3")
def upload_to_s3(channel, file):
s3 = boto3.resource("s3")
data = open(file, "rb")
key = channel + "/" + file
s3.Bucket(bucket).put_object(Key=key, Body=data)
# caltech-256
s3_client.download_file(
"sagemaker-sample-files",
"datasets/image/caltech-256/caltech-256-60-train.rec",
"caltech-256-60-train.rec",
)
s3_client.download_file(
"sagemaker-sample-files",
"datasets/image/caltech-256/caltech-256-60-val.rec",
"caltech-256-60-val.rec",
)
# Two channels: train, validation
s3train = "s3://{}/{}/train/".format(bucket, prefix)
s3validation = "s3://{}/{}/validation/".format(bucket, prefix)
# upload the rec files to train and validation channels
!aws s3 cp caltech-256-60-train.rec $s3train --quiet
!aws s3 cp caltech-256-60-val.rec $s3validation --quiet
```
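As an alternative to the `aws s3 cp` commands above, the `upload_to_s3` helper defined in this cell can perform the same uploads; a minimal sketch is shown below (the destination keys mirror the `s3train`/`s3validation` prefixes):
```
# Optional alternative to the CLI copy above, using the helper defined in this notebook
upload_to_s3("{}/train".format(prefix), "caltech-256-60-train.rec")
upload_to_s3("{}/validation".format(prefix), "caltech-256-60-val.rec")
```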
Once we have the data available in the correct format for training, the next step is to actually train the model using the data. After setting training parameters, we kick off training, and poll for status until training is completed.
## Training the model
Now that we are done with all the setup that is needed, we are ready to train our image classifier. To begin, let us create a ``sagemaker.estimator.Estimator`` object. This estimator will launch the training job.
### Training parameters
There are two kinds of parameters that need to be set for training. The first kind is the parameters for the training job. These include:
* **Training instance count**: This is the number of instances on which to run the training. When the number of instances is greater than one, the image classification algorithm will run in distributed settings.
* **Training instance type**: This indicates the type of machine on which to run the training. Typically, we use GPU instances for this training.
* **Output path**: This is the S3 folder in which the training output is stored
```
s3_output_location = "s3://{}/{}/output".format(bucket, prefix)
ic = sagemaker.estimator.Estimator(
training_image,
role,
train_instance_count=1,
train_instance_type="ml.p2.xlarge",
train_volume_size=50,
train_max_run=360000,
input_mode="File",
output_path=s3_output_location,
sagemaker_session=sess,
)
```
Apart from the above set of parameters, there are hyperparameters that are specific to the algorithm. These are:
* **num_layers**: The number of layers (depth) for the network. We use 18 in this sample, but other values such as 50 or 152 can be used.
* **image_shape**: The input image dimensions,'num_channels, height, width', for the network. It should be no larger than the actual image size. The number of channels should be same as the actual image.
* **num_classes**: This is the number of output classes for the new dataset. Imagenet was trained with 1000 output classes but the number of output classes can be changed for fine-tuning. For caltech, we use 257 because it has 256 object categories + 1 clutter class.
* **num_training_samples**: This is the total number of training samples. It is set to 15420 for the Caltech-256 dataset with the current split.
* **mini_batch_size**: The number of training samples used for each mini batch. In distributed training, the number of training samples used per batch will be N * mini_batch_size where N is the number of hosts on which training is run.
* **epochs**: Number of training epochs.
* **learning_rate**: Learning rate for training.
* **top_k**: Report the top-k accuracy during training.
```
ic.set_hyperparameters(
num_layers=18,
image_shape="3,224,224",
num_classes=257,
num_training_samples=15420,
mini_batch_size=256,
epochs=10,
learning_rate=0.1,
top_k=2,
)
```
## Input data specification
Set the data type and channels used for training
```
train_data = sagemaker.session.s3_input(
s3train,
distribution="FullyReplicated",
content_type="application/x-recordio",
s3_data_type="S3Prefix",
)
validation_data = sagemaker.session.s3_input(
s3validation,
distribution="FullyReplicated",
content_type="application/x-recordio",
s3_data_type="S3Prefix",
)
data_channels = {"train": train_data, "validation": validation_data}
```
## Start the training
Start training by calling the fit method in the estimator
```
ic.fit(inputs=data_channels, logs=True)
```
## Prepare for incremental training
Now, we will use the model generated in the previous training to start another training with the same dataset. This new training will start with higher accuracy as it uses the model generated in the previous training.
```
# Print the location of the model data from previous training
print(ic.model_data)
# Prepare model channel in addition to train and validation
model_data = sagemaker.session.s3_input(
ic.model_data,
distribution="FullyReplicated",
s3_data_type="S3Prefix",
content_type="application/x-sagemaker-model",
)
data_channels = {"train": train_data, "validation": validation_data, "model": model_data}
```
## Start another training
We use the same hyperparameters as before. When the model channel is present, the use_pretrained_model parameter is ignored. The number of classes, input image shape and number of layers should be the same as the previous training since we are starting with the same model. Other parameters, such as learning_rate, mini_batch_size, etc., can be varied.
```
incr_ic = sagemaker.estimator.Estimator(
training_image,
role,
train_instance_count=1,
train_instance_type="ml.p2.xlarge",
train_volume_size=50,
train_max_run=360000,
input_mode="File",
output_path=s3_output_location,
sagemaker_session=sess,
)
incr_ic.set_hyperparameters(
num_layers=18,
image_shape="3,224,224",
num_classes=257,
num_training_samples=15420,
mini_batch_size=128,
epochs=2,
learning_rate=0.01,
top_k=2,
)
incr_ic.fit(inputs=data_channels, logs=True)
```
As you can see from the logs, the training starts with the previous model and hence the accuracy for the first epoch itself is higher.
# Inference
***
We can now use the trained model to perform inference. You can deploy the created model by using the deploy method in the estimator
```
ic_classifier = incr_ic.deploy(initial_instance_count=1, instance_type="ml.m4.xlarge")
```
### Download test image
```
file_name = "/tmp/test.jpg"
s3_client.download_file(
"sagemaker-sample-files",
"datasets/image/caltech-256/256_ObjectCategories/008.bathtub/008_0007.jpg",
file_name,
)
# test image
from IPython.display import Image
Image(file_name)
```
### Evaluation
Evaluate the image through the network for inference. The network outputs class probabilities and typically, one selects the class with the maximum probability as the final class output.
```
import json
import numpy as np
from sagemaker.serializers import IdentitySerializer
with open(file_name, "rb") as f:
payload = f.read()
ic_classifier.serializer = IdentitySerializer("image/jpeg")
result = json.loads(ic_classifier.predict(payload))
# the result will output the probabilities for all classes
# find the class with maximum probability and print the class index
index = np.argmax(result)
object_categories = [
"ak47",
"american-flag",
"backpack",
"baseball-bat",
"baseball-glove",
"basketball-hoop",
"bat",
"bathtub",
"bear",
"beer-mug",
"billiards",
"binoculars",
"birdbath",
"blimp",
"bonsai-101",
"boom-box",
"bowling-ball",
"bowling-pin",
"boxing-glove",
"brain-101",
"breadmaker",
"buddha-101",
"bulldozer",
"butterfly",
"cactus",
"cake",
"calculator",
"camel",
"cannon",
"canoe",
"car-tire",
"cartman",
"cd",
"centipede",
"cereal-box",
"chandelier-101",
"chess-board",
"chimp",
"chopsticks",
"cockroach",
"coffee-mug",
"coffin",
"coin",
"comet",
"computer-keyboard",
"computer-monitor",
"computer-mouse",
"conch",
"cormorant",
"covered-wagon",
"cowboy-hat",
"crab-101",
"desk-globe",
"diamond-ring",
"dice",
"dog",
"dolphin-101",
"doorknob",
"drinking-straw",
"duck",
"dumb-bell",
"eiffel-tower",
"electric-guitar-101",
"elephant-101",
"elk",
"ewer-101",
"eyeglasses",
"fern",
"fighter-jet",
"fire-extinguisher",
"fire-hydrant",
"fire-truck",
"fireworks",
"flashlight",
"floppy-disk",
"football-helmet",
"french-horn",
"fried-egg",
"frisbee",
"frog",
"frying-pan",
"galaxy",
"gas-pump",
"giraffe",
"goat",
"golden-gate-bridge",
"goldfish",
"golf-ball",
"goose",
"gorilla",
"grand-piano-101",
"grapes",
"grasshopper",
"guitar-pick",
"hamburger",
"hammock",
"harmonica",
"harp",
"harpsichord",
"hawksbill-101",
"head-phones",
"helicopter-101",
"hibiscus",
"homer-simpson",
"horse",
"horseshoe-crab",
"hot-air-balloon",
"hot-dog",
"hot-tub",
"hourglass",
"house-fly",
"human-skeleton",
"hummingbird",
"ibis-101",
"ice-cream-cone",
"iguana",
"ipod",
"iris",
"jesus-christ",
"joy-stick",
"kangaroo-101",
"kayak",
"ketch-101",
"killer-whale",
"knife",
"ladder",
"laptop-101",
"lathe",
"leopards-101",
"license-plate",
"lightbulb",
"light-house",
"lightning",
"llama-101",
"mailbox",
"mandolin",
"mars",
"mattress",
"megaphone",
"menorah-101",
"microscope",
"microwave",
"minaret",
"minotaur",
"motorbikes-101",
"mountain-bike",
"mushroom",
"mussels",
"necktie",
"octopus",
"ostrich",
"owl",
"palm-pilot",
"palm-tree",
"paperclip",
"paper-shredder",
"pci-card",
"penguin",
"people",
"pez-dispenser",
"photocopier",
"picnic-table",
"playing-card",
"porcupine",
"pram",
"praying-mantis",
"pyramid",
"raccoon",
"radio-telescope",
"rainbow",
"refrigerator",
"revolver-101",
"rifle",
"rotary-phone",
"roulette-wheel",
"saddle",
"saturn",
"school-bus",
"scorpion-101",
"screwdriver",
"segway",
"self-propelled-lawn-mower",
"sextant",
"sheet-music",
"skateboard",
"skunk",
"skyscraper",
"smokestack",
"snail",
"snake",
"sneaker",
"snowmobile",
"soccer-ball",
"socks",
"soda-can",
"spaghetti",
"speed-boat",
"spider",
"spoon",
"stained-glass",
"starfish-101",
"steering-wheel",
"stirrups",
"sunflower-101",
"superman",
"sushi",
"swan",
"swiss-army-knife",
"sword",
"syringe",
"tambourine",
"teapot",
"teddy-bear",
"teepee",
"telephone-box",
"tennis-ball",
"tennis-court",
"tennis-racket",
"theodolite",
"toaster",
"tomato",
"tombstone",
"top-hat",
"touring-bike",
"tower-pisa",
"traffic-light",
"treadmill",
"triceratops",
"tricycle",
"trilobite-101",
"tripod",
"t-shirt",
"tuning-fork",
"tweezer",
"umbrella-101",
"unicorn",
"vcr",
"video-projector",
"washing-machine",
"watch-101",
"waterfall",
"watermelon",
"welding-mask",
"wheelbarrow",
"windmill",
"wine-bottle",
"xylophone",
"yarmulke",
"yo-yo",
"zebra",
"airplanes-101",
"car-side-101",
"faces-easy-101",
"greyhound",
"tennis-shoes",
"toad",
"clutter",
]
print("Result: label - " + object_categories[index] + ", probability - " + str(result[index]))
```
### Clean up
When we're done with the endpoint, we can just delete it and the backing instances will be released. Run the following cell to delete the endpoint.
```
ic_classifier.delete_endpoint()
```
## Preprocessing Tabular Data
The purpose of this notebook is to demonstrate how to preprocess tabular data for training a machine learning model via Amazon SageMaker. In this notebook we focus on preprocessing our tabular data and in a sequel notebook, [training_model_on_tabular_data.ipynb](training_model_on_tabular_data.ipynb) we use our preprocessed tabular data to train a machine learning model. We showcase how to preprocess 3 different tabular data sets.
#### Notes
In this notebook, we use the sklearn framework for data partitioning and storemagic to share dataframes in [training_model_on_tabular_data.ipynb](training_model_on_tabular_data.ipynb). While we load data into memory here, we note that it is possible to skip this step and load your partitioned data directly into an S3 bucket.
#### Tabular Data Sets
* [boston house data](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html)
* [california house data](https://www.dcc.fc.up.pt/~ltorgo/Regression/cal_housing.html)
* [diabetes data ](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html)
#### Library Dependencies:
* sagemaker >= 2.0.0
* numpy
* pandas
* plotly
* sklearn
* matplotlib
* seaborn
### Setting up the notebook
```
import os
import sys
import subprocess
import pkg_resources
def get_sagemaker_version():
"Return the version of 'sagemaker' in your kernel or -1 if 'sagemaker' is not installed"
for i in pkg_resources.working_set:
if i.key == "sagemaker":
return "%s==%s" % (i.key, i.version)
return -1
# Store original 'sagemaker' version
sagemaker_version = get_sagemaker_version()
# Install any missing dependencies
!{sys.executable} -m pip install -qU 'plotly' 'sagemaker>=2.0.0'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.datasets import *
import sklearn.model_selection
# SageMaker dependencies
import sagemaker
from sagemaker import get_execution_role
from sagemaker.image_uris import retrieve
# This instantiates a SageMaker session that we will be operating in.
session = sagemaker.Session()
# This object represents the IAM role that we are assigned.
role = sagemaker.get_execution_role()
print(role)
```
### Step 1: Select and Download Data
Here you can select the tabular data set of your choice to preprocess.
```
data_sets = {'diabetes': load_diabetes, 'california': fetch_california_housing, 'boston': load_boston}
```
To select a particular dataset, assign **choosen_data_set** below to one of 'diabetes', 'california', or 'boston', where each name corresponds to its respective dataset.
* 'boston' : [boston house data](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html)
* 'california' : [california house data](https://www.dcc.fc.up.pt/~ltorgo/Regression/cal_housing.html)
* 'diabetes' : [diabetes data ](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html)
```
# Change choosen_data_set variable to one of the data sets above.
choosen_data_set = 'california'
assert choosen_data_set in data_sets.keys()
print("I selected the '{}' dataset!".format(choosen_data_set))
```
### Step 2: Describe Feature Information
Here we load the chosen dataset and describe its features and labels.
```
data_set = data_sets[choosen_data_set]()
X = pd.DataFrame(data_set.data, columns=data_set.feature_names)
Y = pd.DataFrame(data_set.target)
print("Features:", list(X.columns))
print("Dataset shape:", X.shape)
print("Dataset Type:", type(X))
print("Label set shape:", Y.shape)
print("Label set Type:", type(X))
```
#### We describe both our training data inputs X and outputs Y by computing the count, mean, std, min, percentiles.
```
display(X.describe())
display(Y.describe())
```
### Step 3: Plot on Feature Correlation
Here we show a heatmap and clustergrid across all our features. These visualizations help us analyze correlated features and are particularly important if we want to remove redundant features. The heatmap computes a similarity score across each feature and colors like features using this score. The clustergrid is similar, however it presents feature correlations hierarchically.
**Note**: For the purposes of this notebook we do not remove any features but by gathering the findings from these plots one may choose to and can do so at this point.
```
plt.figure(figsize=(14,12))
cor = X.corr()
sns.heatmap(cor, annot=True, cmap=sns.diverging_palette(20, 220, n=200))
plt.show()
cluster_map = sns.clustermap(cor, cmap =sns.diverging_palette(20, 220, n=200), linewidths = 0.1);
plt.setp(cluster_map.ax_heatmap.yaxis.get_majorticklabels(), rotation = 0)
cluster_map
```
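Following up on the note above, here is a minimal sketch of how one could drop redundant features based on these correlations; the 0.9 threshold and the `X_reduced` name are illustrative assumptions, not part of the original workflow:
```
# Sketch (assumption: drop one feature from every pair with |correlation| > 0.9)
corr_abs = X.corr().abs()
# Keep only the upper triangle so each pair is considered once
upper = corr_abs.where(np.triu(np.ones(corr_abs.shape, dtype=bool), k=1))
to_drop = [col for col in upper.columns if (upper[col] > 0.9).any()]
X_reduced = X.drop(columns=to_drop)
print("Candidate features to drop:", to_drop)
```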
### Step 4: Partition Dataset into Train, Test, Validation Splits
Here using the sklearn framework we partition our selected dataset into Train, Test and Validation splits. We choose a partition size of 1/3 and then further split the training set into 2/3 training and 1/3 validation set.
```
# We partition the dataset into 2/3 training and 1/3 test set.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X, Y, test_size=0.33)
# We further split the training set into a validation set i.e., 2/3 training set, and 1/3 validation set
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
```
### Step 5: Store Variables using storemagic
We use storemagic to persist all relevant variables so they can be reused in our sequel notebook, [training_model_on_tabular_data.ipynb](training_model_on_tabular_data.ipynb).
Alternatively, it is possible to upload your partitioned data to an S3 bucket and point to it during the model training phase. We note that this is beyond the scope of this notebook hence why we omit it.
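For reference, a minimal sketch of that alternative is shown below; the CSV file names and the `tabular-data` key prefix are illustrative assumptions:
```
# Hypothetical alternative (not executed here): write the splits to CSV and upload them
# to the session's default S3 bucket, then point a training job at the returned URIs.
X_train.to_csv("train_features.csv", index=False)
Y_train.to_csv("train_labels.csv", index=False)
train_features_uri = session.upload_data("train_features.csv", key_prefix="tabular-data")
train_labels_uri = session.upload_data("train_labels.csv", key_prefix="tabular-data")
print(train_features_uri, train_labels_uri)
```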
```
# Using storemagic we persist the variables below so we can access them in the training_model_on_tabular_data.ipynb
%store X_train
%store X_test
%store X_val
%store Y_train
%store Y_test
%store Y_val
%store choosen_data_set
%store sagemaker_version
```
```
from keras.datasets import fashion_mnist
(train_X,train_Y), (test_X,test_Y) = fashion_mnist.load_data()
import numpy as np
from keras.utils import to_categorical
import matplotlib.pyplot as plt
%matplotlib inline
print('Training data shape: ', train_X.shape, train_Y.shape)
print('Testing data shape: ', test_X.shape, test_Y.shape)
classes = np.unique(train_Y)
nclasses = len(classes)
print('Total number of outputs: ', nclasses)
print('Output classes: ', classes)
plt.figure(figsize=[5,5])
# Display the first image in training data
plt.subplot(121)
plt.imshow(train_X[0,:,:], cmap='gray')
plt.title("Ground Truth : {}".format(train_Y[0]))
# Display the first image in testing data
plt.subplot(122)
plt.imshow(test_X[0,:,:], cmap='gray')
plt.title("Ground Truth : {}".format(test_Y[0]))
tr_x = train_X
ts_x = test_X
tr_y = train_Y
ts_y = test_Y
train_X = train_X.reshape(-1, 28, 28, 1)
test_X = test_X.reshape(-1, 28, 28, 1)
train_X.shape, test_X.shape
train_X = train_X.astype('float32')
test_X = test_X.astype('float32')
train_X = train_X / 255
test_X = test_X / 255
train_Y_one_hot = to_categorical(train_Y)
test_Y_one_hot = to_categorical(test_Y)
print('Original label:', train_Y[0])
print('After conversion to one-hot', train_Y_one_hot[0])
from sklearn.model_selection import train_test_split
train_X, valid_X, train_label, valid_label = train_test_split(train_X, train_Y_one_hot, test_size = 0.2, random_state = 13)
train_X.shape, valid_X.shape, train_label.shape, valid_label.shape
import keras
from keras.models import Sequential, Input, Model
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.layers.normalization import BatchNormalization
from keras.layers.advanced_activations import LeakyReLU
batch_size = 64
epochs = 20
num_classes = 10
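# Baseline CNN: three Conv2D + LeakyReLU + MaxPooling blocks, followed by a dense layer and a softmax output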
fashion_model = Sequential()
fashion_model.add(Conv2D(32, kernel_size = (3, 3), activation = 'linear', input_shape=(28,28,1), padding = 'same'))
fashion_model.add(LeakyReLU(alpha = 0.1))
fashion_model.add(MaxPooling2D((2,2), padding = 'same'))
fashion_model.add(Conv2D(64, (3, 3), activation = 'linear', padding = 'same'))
fashion_model.add(LeakyReLU(alpha = 0.1))
fashion_model.add(MaxPooling2D(pool_size = (2,2), padding = 'same'))
fashion_model.add(Conv2D(128, (3, 3), activation = 'linear', padding = 'same'))
fashion_model.add(LeakyReLU(alpha = 0.1))
fashion_model.add(MaxPooling2D(pool_size=(2, 2), padding = 'same'))
fashion_model.add(Flatten())
fashion_model.add(Dense(128, activation = 'linear'))
fashion_model.add(LeakyReLU(alpha = 0.1))
fashion_model.add(Dense(num_classes, activation = 'softmax'))
fashion_model.compile(loss = keras.losses.categorical_crossentropy, optimizer = keras.optimizers.Adam(), metrics = ['accuracy'])
fashion_model.summary()
fashion_train = fashion_model.fit(train_X, train_label, batch_size=batch_size,epochs=epochs,verbose=1,validation_data=(valid_X, valid_label))
test_eval = fashion_model.evaluate(test_X, test_Y_one_hot, verbose = 0)
print('Test loss: ', test_eval[0])
print('Test accuracy: ', test_eval[1])
accuracy = fashion_train.history['accuracy']
val_accuracy = fashion_train.history['val_accuracy']
loss = fashion_train.history['loss']
val_loss = fashion_train.history['val_loss']
epochs = range(len(accuracy))
plt.plot(epochs, accuracy, 'bo', label = 'Training accuracy')
plt.plot(epochs, val_accuracy, 'b', label = 'Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label = 'Training loss')
plt.plot(epochs, val_loss, 'b', label = 'Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
batch_size = 64
epochs = 20
num_classes = 10
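# Same architecture as above, but with Dropout layers after each pooling block and the dense layer to reduce overfitting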
fashion_model = Sequential()
fashion_model.add(Conv2D(32, kernel_size = (3, 3), activation = 'linear', input_shape=(28,28,1), padding = 'same'))
fashion_model.add(LeakyReLU(alpha = 0.1))
fashion_model.add(MaxPooling2D((2,2), padding = 'same'))
fashion_model.add(Dropout(0.25))
fashion_model.add(Conv2D(64, (3, 3), activation = 'linear', padding = 'same'))
fashion_model.add(LeakyReLU(alpha = 0.1))
fashion_model.add(MaxPooling2D(pool_size = (2,2), padding = 'same'))
fashion_model.add(Dropout(0.25))
fashion_model.add(Conv2D(128, (3, 3), activation = 'linear', padding = 'same'))
fashion_model.add(LeakyReLU(alpha = 0.1))
fashion_model.add(MaxPooling2D(pool_size=(2, 2), padding = 'same'))
fashion_model.add(Dropout(0.4))
fashion_model.add(Flatten())
fashion_model.add(Dense(128, activation = 'linear'))
fashion_model.add(LeakyReLU(alpha = 0.1))
fashion_model.add(Dropout(0.3))
fashion_model.add(Dense(num_classes, activation = 'softmax'))
fashion_model.summary()
fashion_model.compile(loss = keras.losses.categorical_crossentropy, optimizer = keras.optimizers.Adam(), metrics = ['accuracy'])
fashion_train_dropout = fashion_model.fit(train_X, train_label, batch_size = batch_size, epochs = epochs, verbose = 1, validation_data = (valid_X, valid_label))
fashion_model.save("fashion_model_dropout.h5py")
test_eval = fashion_model.evaluate(test_X, test_Y_one_hot, verbose = 1)
print('Test loss: ', test_eval[0])
print('Test accuracy: ', test_eval[1])
accuracy = fashion_train_dropout.history['accuracy']
val_accuracy = fashion_train_dropout.history['val_accuracy']
loss = fashion_train_dropout.history['loss']
val_loss = fashion_train_dropout.history['val_loss']
epochs = range(len(accuracy))
plt.plot(epochs, accuracy, 'bo', label='Training accuracy')
plt.plot(epochs, val_accuracy, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
predicted_probs = fashion_model.predict(test_X)
predicted_classes = np.argmax(np.round(predicted_probs), axis = 1)
predicted_classes.shape, test_Y.shape
correct = np.where(predicted_classes == test_Y)[0]
# print("Found ", len(correct), " correct labels")
print ("Found %d correct labels" % len(correct))
for i, correct in enumerate(correct[:9]):
plt.subplot(3, 3, i + 1)
plt.imshow(test_X[correct].reshape(28, 28), cmap = 'gray', interpolation = 'none')
plt.title("Predicted {}, Class {}".format(predicted_classes[correct], test_Y[correct]))
plt.tight_layout()
incorrect = np.where(predicted_classes!=test_Y)[0]
print ("Found %d incorrect labels" % len(incorrect))
for i, incorrect in enumerate(incorrect[:9]):
plt.subplot(3,3,i+1)
plt.imshow(test_X[incorrect].reshape(28,28), cmap='gray', interpolation='none')
plt.title("Predicted {}, Class {}".format(predicted_classes[incorrect], test_Y[incorrect]))
plt.tight_layout()
from sklearn.metrics import classification_report
target_names = ["Class {}".format(i) for i in range(num_classes)]
print(classification_report(test_Y, predicted_classes, target_names=target_names))
```
# Clustering of Social Groups Using Census Demographic Variables
#### Purpose of this notebook
- 1) Use scikit-learn K-Means to create social groups across Toronto, Vancouver, Montreal
#### Data Sources
- Census Variables: https://www12.statcan.gc.ca/census-recensement/2016/dp-pd/prof/details/download-telecharger/comp/page_dl-tc.cfm?Lang=E
- Census Geographies: https://www12.statcan.gc.ca/census-recensement/2011/geo/bound-limit/bound-limit-2016-eng.cfm
```
import pandas as pd
import geopandas as gpd
import os
import numpy as np
import matplotlib.pyplot as plt
os.chdir('C:/Users/Leonardo/OneDrive/Documents/MetroWork/RealEstateData')
# make dataframe with variables of interest
import re
variables = open("CensusData2016/Census/Test2Variables.txt")
var_list = []
for line in variables:
var_list.append(line)
var_df = pd.DataFrame({'census_variable': var_list})
var_df = var_df.census_variable.str.split(pat = ".", expand=True)[[0,1]]
#var_df = var_df['1'].str.split(pat = "/n", expand=True)[[0,1]]
var_df = var_df[[0,1]]
var_df.columns = ['Member ID: Profile of Dissemination Areas (2247)', 'DIM: Profile of Dissemination Areas (2247)']
# Read Canada Census data by dissemination area
Canada_census_2016 = gpd.read_file('CensusData2016_MA/CanadaWide/CanadaDAs_Census2016_vars.shp')
#function for extracting metropolitan census blocks
def get_metrocensus(canada=None, NAMES=[]):
    '''filters the Canada-wide census layer by census subdivision name and returns the matching dissemination areas'''
MAREA = canada[canada.CCSNAME.isin(NAMES)]
return MAREA
# extract census blocks for all cities
CT = get_metrocensus(canada=Canada_census_2016,NAMES=['Toronto'])
CV = get_metrocensus(canada=Canada_census_2016,NAMES=['Vancouver'])
CM = get_metrocensus(canada=Canada_census_2016,NAMES=['Montréal'])
CO = get_metrocensus(canada=Canada_census_2016,NAMES=['Ottawa'])
CC = get_metrocensus(canada=Canada_census_2016,NAMES=['Calgary'])
CT.plot()
```
# K-Means Clustering
### Extracting Social Groups from Census Data
### 1: Scale Data and Get rid of Outliers
```
#prepare dataframe for ML algorithm
df = pd.DataFrame(CT.iloc[:,23:214])
df = df.replace([np.inf, -np.inf], 0)
#MVDA_Census2016_vars_PCA = MVDA_Census2016_vars_PCA.drop(['index'], axis = 1)
#split into X data and y blocks
X = df.iloc[:,1:] # all rows, all the features and no labels
y = df.iloc[:, 0] # all rows, label (census blocks) only
#import libraries, scale the data
from scipy.stats import mstats
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
from sklearn.decomposition import NMF
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import (KNeighborsClassifier,
NeighborhoodComponentsAnalysis)
from sklearn.preprocessing import StandardScaler, MinMaxScaler
#scaler = StandardScaler()
scaler = MinMaxScaler()
X_scaled = scaler.fit_transform(X)
#visualize distribution of variables/outliers
pd.DataFrame(X_scaled).plot.box(figsize=(26,8))
#remove outliers by winsorizing data
X_scaled_wd = mstats.winsorize(X_scaled, inplace = True, axis = 0, limits = [0.05, 0.05])
#visualize data
pd.DataFrame(X_scaled_wd).plot.box(figsize=(26,8))
```
### 2: Fit K-Means algorithm to data and find the optimal number of clusters
```
#import clustering algorithm data using k means
from sklearn.cluster import KMeans
from sklearn import cluster, mixture
from sklearn.metrics import silhouette_score
#to chose the right number of clusters we visualize the inertia of the clusters
kmeans_per_k = [KMeans(n_clusters=k, algorithm='auto', init='k-means++',
max_iter=300, n_init=30, n_jobs=None, precompute_distances='auto',
random_state=5, tol=0.0001).fit(X_scaled_wd) for k in range(1, 10)]
inertias = [model.inertia_ for model in kmeans_per_k]
plt.figure(figsize=(8, 3.5))
plt.plot(range(1, 10), inertias)
plt.xlabel("Number of Clusters", fontsize=14)
plt.ylabel("Inertia", fontsize=14)
plt.show()
#we visualize the silhouette scores
silhouette_scores = [silhouette_score(X_scaled_wd, model.labels_) for model in kmeans_per_k[1:]]
plt.figure(figsize=(8, 3))
plt.plot(range(2, 10), silhouette_scores)
plt.xlabel("K", fontsize=14)
plt.ylabel("Silhouette score", fontsize=14)
plt.show()
#the silhouette scores can be visualized for each label within each number of clusters
from sklearn.metrics import silhouette_samples
from matplotlib.ticker import FixedLocator, FixedFormatter
import matplotlib
plt.figure(figsize=(11, 9))
# we visualize 4 plots, one for the result of each of k-means with 3, 4, 5, and 6 clusters
for k in (3, 4, 5, 6):
plt.subplot(2, 3, k - 2)
y_pred = kmeans_per_k[k - 1].labels_
silhouette_coefficients = silhouette_samples(X_scaled_wd, y_pred)
padding = len(X_scaled_wd) // 30
pos = padding
ticks = []
for i in range(k):
coeffs = silhouette_coefficients[y_pred == i]
coeffs.sort()
color = matplotlib.cm.Spectral(i / k)
plt.fill_betweenx(np.arange(pos, pos + len(coeffs)), 0, coeffs,
facecolor=color, edgecolor=color, alpha=0.7)
ticks.append(pos + len(coeffs) // 2)
pos += len(coeffs) + padding
plt.gca().yaxis.set_major_locator(FixedLocator(ticks))
plt.gca().yaxis.set_major_formatter(FixedFormatter(range(k)))
if k in (3, 5):
plt.ylabel("Cluster")
if k in (5, 6):
plt.gca().set_xticks([-0.1, 0, 0.2, 0.4, 0.6, 0.8, 1])
plt.xlabel("Silhouette Coefficient")
else:
plt.tick_params(labelbottom=False)
plt.axvline(x=silhouette_scores[k - 2], color="red", linestyle="--")
plt.title("$k={}$".format(k), fontsize=16)
plt.show()
#THE CHOICE IS 4 CLUSTERS: the silhouette scores are highest, each label is about the average silhouette coefficient, and each cluster is of similar size
k4cls_rnd_10_inits_Kplus = KMeans(n_clusters=4, algorithm='auto', init='k-means++',
max_iter=300, n_init=30, n_jobs=None, precompute_distances='auto',
random_state=5, tol=0.0001)
k4cls_Kplus = k4cls_rnd_10_inits_Kplus.fit(X_scaled_wd)
#now lets merge back the labels to the initial dataframe
X_std1 = pd.DataFrame(X_scaled_wd)
X_std1.columns = df.iloc[:,1:191].columns
X_std1['GEO_NAME'] = y.values
X_std1['k4cls'] = k4cls_Kplus.labels_
X_std1.head()
#merge back to spatial layer and save data to shapefile
CT['DAUID'] = CT['DAUID'].astype('int64')
CT_4Kcls = pd.merge(CT, X_std1, left_on='DAUID', right_on='GEO_NAME')
CT_4Kcls.to_file('CensusData2016_MA/PCA/CT_DA_Census2016_4Kcls_5.shp')
```
### 3: Visualize the size of the clusters
```
#visualize the size of the clusters
ksizes = X_std1.groupby('k4cls').size()
ksizes.plot(kind = 'bar')
plt.title("Size of K-Means Clusters")
```
Intro To Python
=====
In this notebook, we will explore basic Python:
- data types, including dictionaries
- functions
- loops
Please note that we are using Python 3.
(__NOT__ Python 2! Python 2 has some different functions and syntax)
```
# Let's make sure we are using Python 3
import sys
print(sys.version[0])
```
# 1. Basic Data Types: Numbers, Booleans, and Strings
## 1.1 Numbers
```
a = 5
# Note: use the `type()` function to get the type of a variable
# Numbers can be integers ('int'), such as 3, 5 and 3049, or floats
# ('float'), such as 2.5, 3.1, and 2.34938493
print(type(a))
print(a)
example_list = [1, 2]
print(type(example_list))
```
### Mathematical Operators: +, -, *, /, **
Mathematical operators allow you to perform math operations on numbers in Python.
```
b = a + 1
print(b)
c = a - 1
print(c)
d = a * 2
print(d)
e = a / 2
print(e)
# Note: ** is the exponentiation operator
f = a ** 2
print(f)
```
### Shorthand mathematical operators
`a += 1` is shorthand for `a = a + 1`
```
a += 1
print(a)
a *= 2
print(a)
```
## 1.2 Booleans & Logic Operators
```
im_true = True
im_false = False
print(type(im_true))
```
### Equality operators
Equality operators (== and !=) allow you to compare the values of variables on the left and right hand side.
```
print(im_true == im_false) # Equality operator
print(im_true != im_false)
```
The `and` operator requires the values on both sides of the operator to be true.
```
print(im_true and im_false)
```
The `or` operator only requires ONE of the values on either side of the operator to be true.
```
print(im_true or im_false)
```
## 1.3 Strings
You can use single or double quotes for strings.
```
my_string = 'delta'
my_other_string = "analytics"
print(my_string, my_other_string)
```
### String methods
Concatenating strings:
```
another_string = 'hello, ' + my_string + " " + my_other_string
print(another_string)
```
Get the length of the string:
```
print(len(another_string))
```
# 2. Container Data Types
## 2.1 Lists
A Python `list` stores multiple elements, which can be different types
```
my_list = ['a', 'b', 'c', 3485]
print(my_list)
```
You can access an element in a list with the following syntax:
Note: the first element in a list has an index of zero.
```
print(my_list[2])
print(my_list[0])
```
Reassigning elements in a list:
```
my_list[0] = 'delta'
print(my_list)
```
Adding/removing elements from a list:
```
my_list.append('hello')
print(my_list)
my_list.pop()
print(my_list)
```
Accessing multiple elements in a list:
```
print(my_list[0:2]) # Access elements at index 0 and 1 (the end index is exclusive)
print(my_list[2:]) # Access elements from index 2 to the end
print(my_list[:2]) # Access elements from the beginning up to (but not including) index 2
```
## 2.2 Dictionaries
Dictionaries hold key/value pairs and are useful for storing information.
```
my_dict = { 'key_one': 'value_one', 'name': 'mike' }
```
Access a value from a dictionary by a key:
```
print(my_dict['key_one'])
print(my_dict['name'])
```
Looping over a dictionary:
```
for key in my_dict:
print("The key is " + key)
for key, value in my_dict.items():
print("The key is " + key + ", and the value is " + value)
```
## 2.3 Sets
Sets are similar to lists, but can only contain distinct values.
```
my_set = {1, 2, 3, 'hello'}
print(my_set)
```
When defining a set with the same value present multiple times, only one element will be added to the set. For example:
```
multiple = {1, 2, 2, 2, 2, 2, 3, 'hello'}
print(multiple) # This will return {1, 2, 3, 'hello'}
```
# 3. Functions
A function is a block of reusable code that performs a certain action. Once you've defined a function, you can use it anywhere in your code!
Defining a function:
```
def am_i_happy(happiness_level):
if happiness_level >= 10:
return "You're very happy."
elif happiness_level >= 5:
return "You're happy."
else:
return "You're not happy."
```
Calling a function:
```
print(am_i_happy(0))
print(am_i_happy(5))
```
# 4. Control Flow
## 4.1 If/Else If/Else
```
sleepy = True
hungry = False
if sleepy and hungry:
print("Eat a snack and take a nap.")
elif sleepy and not hungry:
print("Take a nap")
elif hungry and not sleepy:
print("Eat a snack")
else:
print("Go on with your day")
```
## 4.2 Loops
### 4.2.1 'while' loops
```
counter = 0
while (counter < 10):
print("You have counted to", counter)
counter = counter + 1 # Increment the counter
print("You're finished counting")
```
### 4.2.2 'for' loops
Loop over a list:
```
cool_animals = ['cat', 'dog', 'lion', 'bear']
for animal in cool_animals:
print(animal + "s are cool")
```
Loop over a dict:
```
animal_sounds = {
'dog': 'bark',
'cat': 'meow',
'pig': 'oink'
}
for animal, sound in animal_sounds.items():
print("The " + animal + " says " + sound + "!")
```
<a href="https://colab.research.google.com/gist/adaamko/0161526d638e1877f7b649b3fff8f3de/deep-learning-practical-lesson.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Natural Language Processing and Information Extraction
## Deep learning - practical session
__Nov 12, 2021__
__Ádám Kovács__
During this lecture we are going to use a classification dataset from a shared task: SemEval 2019 - Task 6.
The dataset is about Identifying and Categorizing Offensive Language in Social Media.
__Preparation:__
- You will need the Semeval dataset (we will have code to download it)
- You will need to install pytorch:
- pip install torch
You will also need to have pandas, torchtext, numpy and scikit-learn installed.
We are going to use an open source library for building optimized deep learning models that can be run on GPUs, the library is called [Pytorch](https://pytorch.org/docs/stable/index.html). It is one of the most widely used libraries for building neural networks/deep learning models.
In this lecture we are mostly using pure PyTorch models, but there are multiple libraries available to make it even easier to build neural networks. You are free to use them in your projects.
Just to name a few:
- TorchText: https://pytorch.org/text/stable/index.html
- AllenNLP: https://github.com/allenai/allennlp
__NOTE: It is advised to use Google Colab for this laboratory for free access to GPUs, and also for reproducibility.__
```
!pip install torch
# Import the needed libraries
import pandas as pd
import numpy as np
```
## Download the dataset and load it into a pandas DataFrame
```
import os
if not os.path.isdir("./data"):
os.mkdir("./data")
import urllib.request
urllib.request.urlretrieve(
    "https://raw.githubusercontent.com/ZeyadZanaty/offenseval/master/datasets/training-v1/offenseval-training-v1.tsv",
    "data/offenseval.tsv",
)
```
## Read in the dataset into a Pandas DataFrame
Use `pd.read_csv` with the correct parameters to read in the dataset. If done correctly, `DataFrame` should have 5 columns,
`id`, `tweet`, `subtask_a`, `subtask_b`, `subtask_c`.
```
import pandas as pd
import numpy as np
def read_dataset():
train_data = pd.read_csv("./data/offenseval.tsv", sep="\t")
return train_data
train_data_unprocessed = read_dataset()
train_data_unprocessed
```
## Convert `subtask_a` into a binary label
The task is to classify the given tweets into two categories: _offensive (OFF)_ and _not offensive (NOT)_. For machine learning algorithms you will need integer labels instead of strings. Add a new column to the dataframe called `label`, and transform the `subtask_a` column into a binary integer label.
```
def transform(train_data):
labels = {"NOT": 0, "OFF": 1}
train_data["label"] = [labels[item] for item in train_data.subtask_a]
train_data["tweet"] = train_data["tweet"].str.replace("@USER", "")
return train_data
train_data = transform(train_data_unprocessed)
```
## Train a simple neural network on this dataset
In this notebook we are going to build different neural architectures on the task:
- A simple one layered feed forward neural network (FNN) with one-hot encoded vectors
- Adding more layers to the FNN, making it a deep neural network
- Instead of using one-hot encoded vectors we are going to add embedding vectors to the architecture, that takes the sequential nature of natural texts into account
- Then we will train LSTM networks
- At last, we will also build a Transformer architecture, that currently achieves SOTA results on a lot of tasks
First we will build one-hot-encoded vectors for each sentence, and then use a simple feed forward neural network to predict the correct labels.
```
# First we need to import pytorch and set a fixed random seed number for reproducibility
import torch
SEED = 1234
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
```
### Split the dataset into a train and a validation dataset
Use the random seed for splitting. You should split the dataset into 70% training data and 30% validation data
```
from sklearn.model_selection import train_test_split as split
def split_data(train_data, random_seed):
    tr_data, val_data = split(train_data, test_size=0.3, random_state=random_seed)
return tr_data, val_data
tr_data, val_data = split_data(train_data, SEED)
```
### Use CountVectorizer to prepare the features for the sentences
_CountVectorizer_ is a great tool from _sklearn_ that helps us with basic preprocessing steps. It has lots of parameters to play with, you can check the documentation [here](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html). It will:
- Tokenize, lowercase the text
- Filter out stopwords
- Convert the text into one-hot encoded vectors
- Select the _n_-best features
We fit CountVectorizer using _3000_ features
We will also _lemmatize_ texts using the _nltk_ package and its lemmatizer. Check the [docs](https://www.nltk.org/_modules/nltk/stem/wordnet.html) for more.
```
from sklearn.feature_extraction.text import CountVectorizer
import nltk
nltk.download("punkt")
nltk.download("wordnet")
from nltk.stem import WordNetLemmatizer
from nltk import word_tokenize
class LemmaTokenizer(object):
def __init__(self):
self.wnl = WordNetLemmatizer()
def __call__(self, articles):
return [self.wnl.lemmatize(t) for t in word_tokenize(articles)]
def prepare_vectorizer(tr_data):
vectorizer = CountVectorizer(
max_features=3000, tokenizer=LemmaTokenizer(), stop_words="english"
)
word_to_ix = vectorizer.fit(tr_data.tweet)
return word_to_ix
word_to_ix = prepare_vectorizer(tr_data)
# The vocab size is the length of the vocabulary, or the length of the feature vectors
VOCAB_SIZE = len(word_to_ix.vocabulary_)
assert VOCAB_SIZE == 3000
```
CountVectorizer can directly transform any sentence into a one-hot encoded vector based on the corpus it was built upon.

```
word_to_ix.transform(["Hello my name is adam"]).toarray()
# Initialize the correct device
# It is important that every array should be on the same device or the training won't work
# A device could be either the cpu or the gpu if it is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```
### Prepare the DataLoader for batch processing
The __prepare_dataloader(..)__ function will take the training and the validation dataset and convert them to one-hot encoded vectors with the help of the initialized CountVectorizer.
We prepare two FloatTensors and LongTensors for the converted tweets and labels of the training and the validation data.
Then zip together the vectors with the labels as a list of tuples!
```
# Preparing the data loaders for the training and the validation sets
# PyTorch operates on it's own datatype which is very similar to numpy's arrays
# They are called Torch Tensors: https://pytorch.org/docs/stable/tensors.html
# They are optimized for training neural networks
def prepare_dataloader(tr_data, val_data, word_to_ix):
# First we transform the tweets into one-hot encoded vectors
# Then we create Torch Tensors from the list of the vectors
    # It is also important to send the Tensors to the correct device
# All of the tensors should be on the same device when training
tr_data_vecs = torch.FloatTensor(word_to_ix.transform(tr_data.tweet).toarray()).to(
device
)
tr_labels = torch.LongTensor(tr_data.label.tolist()).to(device)
val_data_vecs = torch.FloatTensor(
word_to_ix.transform(val_data.tweet).toarray()
).to(device)
val_labels = torch.LongTensor(val_data.label.tolist()).to(device)
tr_data_loader = [(sample, label) for sample, label in zip(tr_data_vecs, tr_labels)]
val_data_loader = [
(sample, label) for sample, label in zip(val_data_vecs, val_labels)
]
return tr_data_loader, val_data_loader
tr_data_loader, val_data_loader = prepare_dataloader(tr_data, val_data, word_to_ix)
```
- __We have the correct lists now, it is time to initialize the DataLoader objects!__
- __Create two DataLoader objects with the lists we have created__
- __Shuffle the training data but not the validation data!__
```
# We then define a BATCH_SIZE for our model
# Usually we don't feed the whole dataset into our model at once
# For this we have the BATCH_SIZE parameter
# Try to experiment with different sized batches and see if changing this will improve the performance or not!
BATCH_SIZE = 64
from torch.utils.data import DataLoader
# The DataLoader(https://pytorch.org/docs/stable/data.html) class helps us to prepare the training batches
# It has a lot of useful parameters, one of it is _shuffle_ which will randomize the training dataset in each epoch
# This can also improve the performance of our model
def create_dataloader_iterators(tr_data_loader, val_data_loader, BATCH_SIZE):
train_iterator = DataLoader(
tr_data_loader,
batch_size=BATCH_SIZE,
shuffle=True,
)
valid_iterator = DataLoader(
val_data_loader,
batch_size=BATCH_SIZE,
shuffle=False,
)
return train_iterator, valid_iterator
train_iterator, valid_iterator = create_dataloader_iterators(
tr_data_loader, val_data_loader, BATCH_SIZE
)
assert type(train_iterator) == torch.utils.data.dataloader.DataLoader
```
### Building the first PyTorch model
At first, the model will contain a single Linear layer that takes one-hot-encoded vectors and transforms them into the dimension of __NUM_LABELS__ (how many classes we are trying to predict). Then, run the output through a softmax activation to produce probabilities of the classes!
```
from torch import nn
import torch.nn.functional as F
class BoWClassifier(nn.Module): # inheriting from nn.Module!
def __init__(self, num_labels, vocab_size):
# calls the init function of nn.Module. Dont get confused by syntax,
# just always do it in an nn.Module
super(BoWClassifier, self).__init__()
# Define the parameters that you will need.
# Torch defines nn.Linear(), which provides the affine map.
# Note that we could add more Linear Layers here connected to each other
# Then we would also need to have a HIDDEN_SIZE hyperparameter as an input to our model
# Then, with activation functions between them (e.g. RELU) we could have a "Deep" model
# This is just an example for a shallow network
self.linear = nn.Linear(vocab_size, num_labels)
def forward(self, bow_vec, sequence_lens):
# Ignore sequence_lens for now!
# Pass the input through the linear layer,
# then pass that through log_softmax.
# Many non-linearities and other functions are in torch.nn.functional
# Softmax will provide a probability distribution among the classes
# We can then use this for our loss function
return F.log_softmax(self.linear(bow_vec), dim=1)
# The INPUT_DIM is the size of our input vectors
INPUT_DIM = VOCAB_SIZE
# We have only 2 classes
OUTPUT_DIM = 2
# Init the model
# At first it is untrained, the weights are assigned random
model = BoWClassifier(OUTPUT_DIM, INPUT_DIM)
# Set the optimizer and the loss function!
# https://pytorch.org/docs/stable/optim.html
import torch.optim as optim
# The optimizer will update the weights of our model based on the loss function
# This is essential for correct training
# The _lr_ parameter is the learning rate
optimizer = optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.NLLLoss()
# Copy the model and the loss function to the correct device
model = model.to(device)
criterion = criterion.to(device)
assert model.linear.out_features == 2
```
### Training and evaluating PyTorch models
- __calculate_performance__: This should calculate the batch-wise precision, recall, and fscore of your model!
- __train__ - Train your model on the training data! This function should set the model to training mode, then use the given iterator to iterate through the training samples and make predictions using the provided model. You should then propagate back the error with the loss function and the optimizer. Finally return the average epoch loss and performance!
- __evaluate__ - Evaluate your model on the validation dataset. This function is essentially the same as the training function, but you should set your model to eval mode and not propagate the errors back to your weights!
```
from sklearn.metrics import precision_recall_fscore_support
def calculate_performance(preds, y):
"""
Returns precision, recall, fscore per batch
"""
# Get the predicted label from the probabilities
rounded_preds = preds.argmax(1)
# Calculate the correct predictions batch-wise and calculate precision, recall, and fscore
# WARNING: Tensors here could be on the GPU, so make sure to copy everything to CPU
    # Note: sklearn expects the argument order (y_true, y_pred)
    precision, recall, fscore, support = precision_recall_fscore_support(
        y.cpu(), rounded_preds.cpu()
)
return precision[1], recall[1], fscore[1]
import torch.nn.functional as F
def train(model, iterator, optimizer, criterion):
# We will calculate loss and accuracy epoch-wise based on average batch accuracy
epoch_loss = 0
epoch_prec = 0
epoch_recall = 0
epoch_fscore = 0
# You always need to set your model to training mode
# If you don't set your model to training mode the error won't propagate back to the weights
model.train()
# We calculate the error on batches so the iterator will return matrices with shape [BATCH_SIZE, VOCAB_SIZE]
for batch in iterator:
text_vecs = batch[0]
labels = batch[1]
sen_lens = []
texts = []
# This is for later!
if len(batch) > 2:
sen_lens = batch[2]
texts = batch[3]
# We reset the gradients from the last step, so the loss will be calculated correctly (and not added together)
optimizer.zero_grad()
# This runs the forward function on your model (you don't need to call it directly)
predictions = model(text_vecs, sen_lens)
# Calculate the loss and the accuracy on the predictions (the predictions are log probabilities, remember!)
loss = criterion(predictions, labels)
prec, recall, fscore = calculate_performance(predictions, labels)
# Propagate the error back on the model (this means changing the initial weights in your model)
        # Calculate gradients on parameters that require grad
loss.backward()
# Update the parameters
optimizer.step()
# We add batch-wise loss to the epoch-wise loss
epoch_loss += loss.item()
# We also do the same with the scores
epoch_prec += prec.item()
epoch_recall += recall.item()
epoch_fscore += fscore.item()
return (
epoch_loss / len(iterator),
epoch_prec / len(iterator),
epoch_recall / len(iterator),
epoch_fscore / len(iterator),
)
# The evaluation is done on the validation dataset
def evaluate(model, iterator, criterion):
epoch_loss = 0
epoch_prec = 0
epoch_recall = 0
epoch_fscore = 0
# On the validation dataset we don't want training so we need to set the model on evaluation mode
model.eval()
# Also tell Pytorch to not propagate any error backwards in the model or calculate gradients
# This is needed when you only want to make predictions and use your model in inference mode!
with torch.no_grad():
# The remaining part is the same with the difference of not using the optimizer to backpropagation
for batch in iterator:
text_vecs = batch[0]
labels = batch[1]
sen_lens = []
texts = []
if len(batch) > 2:
sen_lens = batch[2]
texts = batch[3]
predictions = model(text_vecs, sen_lens)
loss = criterion(predictions, labels)
prec, recall, fscore = calculate_performance(predictions, labels)
epoch_loss += loss.item()
epoch_prec += prec.item()
epoch_recall += recall.item()
epoch_fscore += fscore.item()
# Return averaged loss on the whole epoch!
return (
epoch_loss / len(iterator),
epoch_prec / len(iterator),
epoch_recall / len(iterator),
epoch_fscore / len(iterator),
)
import time
# This is just for measuring training time!
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
```
### Training loop!
Below is the training loop of our model! Try to set an EPOCH number that will correctly train your model :) (so that it is neither underfitted nor overfitted)!
```
def training_loop(epoch_number=15):
# Set an EPOCH number!
N_EPOCHS = epoch_number
best_valid_loss = float("inf")
# We loop forward on the epoch number
for epoch in range(N_EPOCHS):
start_time = time.time()
# Train the model on the training set using the dataloader
train_loss, train_prec, train_rec, train_fscore = train(
model, train_iterator, optimizer, criterion
)
# And validate your model on the validation set
valid_loss, valid_prec, valid_rec, valid_fscore = evaluate(
model, valid_iterator, criterion
)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
# If we find a better model, we save the weights so later we may want to reload it
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), "tut1-model.pt")
print(f"Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s")
print(
f"\tTrain Loss: {train_loss:.3f} | Train Prec: {train_prec*100:.2f}% | Train Rec: {train_rec*100:.2f}% | Train Fscore: {train_fscore*100:.2f}%"
)
print(
f"\t Val. Loss: {valid_loss:.3f} | Val Prec: {valid_prec*100:.2f}% | Val Rec: {valid_rec*100:.2f}% | Val Fscore: {valid_fscore*100:.2f}%"
)
training_loop()
```
__NOTE: DON'T FORGET TO RERUN THE MODEL INITIALIZATION WHEN YOU ARE TRYING TO RUN THE MODEL MULTIPLE TIMES. IF YOU DON'T REINITIALIZE THE MODEL IT WILL CONTINUE THE TRAINING WHERE IT STOPPED LAST TIME AND WON'T RUN FROM SCRATCH!__
These lines:
```python
model = BoWClassifier(OUTPUT_DIM, INPUT_DIM)
optimizer = optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.NLLLoss()
model = model.to(device)
criterion = criterion.to(device)
```
This will reinitialize the model!
```
def reinitialize(model):
    optimizer = optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.NLLLoss()
    model = model.to(device)
    criterion = criterion.to(device)
    # Return the new objects, otherwise the reinitialization has no effect outside this function
    return model, optimizer, criterion
model, optimizer, criterion = reinitialize(BoWClassifier(OUTPUT_DIM, INPUT_DIM))
```
## Add more linear layers to the model and experiment with other hyper-parameters
### More layers
Currently our model has only a single linear layer. We now add more linear layers to the model.
We also introduce a HIDDEN_SIZE parameter, the size of the intermediate representation between the linear layers, and add a ReLU activation function between them.
See more:
- https://pytorch.org/docs/stable/generated/torch.nn.ReLU.html
- https://pytorch.org/tutorials/beginner/examples_nn/two_layer_net_nn.html
```
from torch import nn
class BoWDeepClassifier(nn.Module):
def __init__(self, num_labels, vocab_size, hidden_size):
super(BoWDeepClassifier, self).__init__()
# First linear layer
self.linear1 = nn.Linear(vocab_size, hidden_size)
# Non-linear activation function between them
self.relu = torch.nn.ReLU()
# Second layer
self.linear2 = nn.Linear(hidden_size, num_labels)
def forward(self, bow_vec, sequence_lens):
# Run the input vector through every layer
output = self.linear1(bow_vec)
output = self.relu(output)
output = self.linear2(output)
# Get the probabilities
return F.log_softmax(output, dim=1)
HIDDEN_SIZE = 200
learning_rate = 0.001
BATCH_SIZE = 64
N_EPOCHS = 15
model = BoWDeepClassifier(OUTPUT_DIM, INPUT_DIM, HIDDEN_SIZE)
optimizer = optim.Adam(model.parameters(), lr=learning_rate)
criterion = nn.NLLLoss()
model = model.to(device)
criterion = criterion.to(device)
training_loop()
```
## Implement automatic early-stopping in the training loop
Early stopping is a very easy method to avoid overfitting your model.
We could:
- Save the training and the validation loss of the last two epochs (once we are at least in the third epoch)
- If the loss kept decreasing on the training data but increased or stagnated on the validation data, stop the training automatically (a minimal sketch follows the code cell below)
```
# REINITIALIZE YOUR MODEL TO GET A CORRECT RUN!
```
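A minimal sketch of this rule, assuming the `train`/`evaluate` helpers defined above (the loss-history lists are illustrative additions, not existing code):
```
# Early-stopping sketch: stop once training keeps improving while validation does not
train_losses, valid_losses = [], []
for epoch in range(N_EPOCHS):
    train_loss, *_ = train(model, train_iterator, optimizer, criterion)
    valid_loss, *_ = evaluate(model, valid_iterator, criterion)
    train_losses.append(train_loss)
    valid_losses.append(valid_loss)
    if epoch >= 2:
        train_improving = train_losses[-1] < train_losses[-2] < train_losses[-3]
        valid_not_improving = valid_losses[-1] >= valid_losses[-2]
        if train_improving and valid_not_improving:
            print(f"Early stopping at epoch {epoch + 1}")
            break
```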
## Handling class imbalance
Our data is imbalanced: the first class has twice the population of the second class.
One way of handling imbalanced data is to weight the loss function so that it penalizes errors on the smaller class more heavily.
Look at the documentation of the loss function: https://pytorch.org/docs/stable/generated/torch.nn.NLLLoss.html
Set the weights based on the inverse population of the classes (the fewer samples a class has, the more its errors will be penalized), as sketched after the code cell below.
```
tr_data.groupby("label").size()
weights = torch.Tensor([1, 2])
criterion = nn.NLLLoss(weight=weights)
```
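Instead of hard-coding the weights, they can be derived from the class counts; a small sketch, assuming the `tr_data` DataFrame used above:
```
# Inverse-frequency class weights (illustrative sketch)
class_counts = tr_data.groupby("label").size()        # samples per class, indexed by label
weights = torch.Tensor((1.0 / class_counts).values)   # fewer samples -> larger weight
criterion = nn.NLLLoss(weight=weights.to(device))
```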
## Adding an Embedding Layer to the network
- We only used one-hot-encoded vectors as our features until now
- Now we will introduce an [embedding](https://pytorch.org/docs/stable/generated/torch.nn.Embedding.html) layer into our network.
- We will feed the words into our network one-by-one, and the layer will learn a dense vector representation for each word

_from pytorch.org_
```
# Get the analyzer to get the word-id mapping from CountVectorizer
an = word_to_ix.build_analyzer()
an("hello my name is adam")
max(word_to_ix.vocabulary_, key=word_to_ix.vocabulary_.get)
len(word_to_ix.vocabulary_)
def create_input(dataset, analyzer, vocabulary):
dataset_as_indices = []
# We go through each tweet in the dataset
# We need to add two additional symbols to the vocabulary
# We have 3000 features, ranged 0-2999
# We add 3000 as an id for the "unknown" words not among the features
# 3001 will be the symbol for padding, but about this later!
for tweet in dataset:
tokens = analyzer(tweet)
token_ids = []
for token in tokens:
# if the token is in the vocab, we add the id
if token in vocabulary:
token_ids.append(vocabulary[token])
# else we add the id of the unknown token
else:
token_ids.append(3000)
# if we removed every token during preprocessing (stopword removal, lemmatization), we add the unknown token to the list so it won't be empty
if not token_ids:
token_ids.append(3000)
dataset_as_indices.append(torch.LongTensor(token_ids).to(device))
return dataset_as_indices
# We add the length of the tweets so sentences with similar lengths will be next to each other
# This can be important because of padding
tr_data["length"] = tr_data.tweet.str.len()
val_data["length"] = val_data.tweet.str.len()
tr_data.tweet.str.len()
tr_data = tr_data.sort_values(by="length")
val_data = val_data.sort_values(by="length")
# We create the dataset as ids of tokens
dataset_as_ids = create_input(tr_data.tweet, an, word_to_ix.vocabulary_)
dataset_as_ids[0]
```
### Padding
- We didn't need to take care of input padding when using one-hot-encoded vectors
- Padding handles different sized inputs
- We can pad sequences from the left, or from the right

_image from https://towardsdatascience.com/nlp-preparing-text-for-deep-learning-model-using-tensorflow2-461428138657_
```
from torch.nn.utils.rnn import pad_sequence
# pad_sequence will take care of the padding
# we will need to provide a padding_value to it
padded = pad_sequence(dataset_as_ids, batch_first=True, padding_value=3001)
def prepare_dataloader_with_padding(tr_data, val_data, word_to_ix):
# First create the id representations of the input vectors
# Then pad the sequences so all of the input is the same size
# We padded texts for the whole dataset, this could have been done batch-wise also!
tr_data_vecs = pad_sequence(
create_input(tr_data.tweet, an, word_to_ix.vocabulary_),
batch_first=True,
padding_value=3001,
)
tr_labels = torch.LongTensor(tr_data.label.tolist()).to(device)
tr_lens = torch.LongTensor(
[len(i) for i in create_input(tr_data.tweet, an, word_to_ix.vocabulary_)]
)
# We also add the texts to the batches
    # This is for the Transformer models; you won't need this in the next experiments
tr_sents = tr_data.tweet.tolist()
val_data_vecs = pad_sequence(
create_input(val_data.tweet, an, word_to_ix.vocabulary_),
batch_first=True,
padding_value=3001,
)
val_labels = torch.LongTensor(val_data.label.tolist()).to(device)
val_lens = torch.LongTensor(
[len(i) for i in create_input(val_data.tweet, an, word_to_ix.vocabulary_)]
)
val_sents = val_data.tweet.tolist()
tr_data_loader = [
(sample, label, length, sent)
for sample, label, length, sent in zip(
tr_data_vecs, tr_labels, tr_lens, tr_sents
)
]
val_data_loader = [
(sample, label, length, sent)
for sample, label, length, sent in zip(
val_data_vecs, val_labels, val_lens, val_sents
)
]
return tr_data_loader, val_data_loader
tr_data_loader, val_data_loader = prepare_dataloader_with_padding(
tr_data, val_data, word_to_ix
)
def create_dataloader_iterators_with_padding(
tr_data_loader, val_data_loader, BATCH_SIZE
):
train_iterator = DataLoader(
tr_data_loader,
batch_size=BATCH_SIZE,
shuffle=True,
)
valid_iterator = DataLoader(
val_data_loader,
batch_size=BATCH_SIZE,
shuffle=False,
)
return train_iterator, valid_iterator
train_iterator, valid_iterator = create_dataloader_iterators_with_padding(
tr_data_loader, val_data_loader, BATCH_SIZE
)
next(iter(train_iterator))
```

_image from bentrevett_
```
from torch import nn
import numpy as np
class BoWClassifierWithEmbedding(nn.Module):
def __init__(self, num_labels, vocab_size, embedding_dim):
super(BoWClassifierWithEmbedding, self).__init__()
# We define the embedding layer here
# It will convert a list of ids: [1, 50, 64, 2006]
# Into a list of vectors, one for each word
# The embedding layer will learn the vectors from the contexts
self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=3001)
# We could also load precomputed embeddings, e.g. GloVe, in some cases we don't want to train the embedding layer
# In this case we enable the training
self.embedding.weight.requires_grad = True
self.linear = nn.Linear(embedding_dim, num_labels)
def forward(self, text, sequence_lens):
# First we create the embedded vectors
embedded = self.embedding(text)
# We need a pooling to convert a list of embedded words to a sentence vector
# We could have chosen different pooling, e.g. min, max, average..
# With LSTM we also do a pooling, just smarter
pooled = F.max_pool2d(embedded, (embedded.shape[1], 1)).squeeze(1)
return F.log_softmax(self.linear(pooled), dim=1)
```
Output of the LSTM layer:

_image from stackoverflow_
```
class LSTMClassifier(nn.Module):
def __init__(self, num_labels, vocab_size, embedding_dim, hidden_dim):
super(LSTMClassifier, self).__init__()
self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=3001)
self.embedding.weight.requires_grad = True
# Define the LSTM layer
# Documentation: https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html
self.lstm = nn.LSTM(
embedding_dim,
hidden_dim,
batch_first=True,
num_layers=1,
bidirectional=False,
)
self.linear = nn.Linear(hidden_dim, num_labels)
# Dropout to overcome overfitting
self.dropout = nn.Dropout(0.25)
def forward(self, text, sequence_lens):
embedded = self.embedding(text)
        # Pack the padded sequence so the LSTM skips the padding positions (no updates from padding)
packed = nn.utils.rnn.pack_padded_sequence(
embedded, sequence_lens, enforce_sorted=False, batch_first=True
)
packed_outputs, (h, c) = self.lstm(packed)
# extract LSTM outputs (not used here)
lstm_outputs, lens = nn.utils.rnn.pad_packed_sequence(
packed_outputs, batch_first=True
)
# We use the last hidden vector from LSTM
y = self.linear(h[-1])
log_probs = F.log_softmax(y, dim=1)
return log_probs
INPUT_DIM = VOCAB_SIZE + 2
OUTPUT_DIM = 2
EMBEDDING_DIM = 100
HIDDEN_DIM = 20
criterion = nn.NLLLoss()
# model = BoWClassifierWithEmbedding(OUTPUT_DIM, INPUT_DIM, EMBEDDING_DIM)
model = LSTMClassifier(OUTPUT_DIM, INPUT_DIM, EMBEDDING_DIM, HIDDEN_DIM)
model = model.to(device)
criterion = criterion.to(device)
optimizer = optim.Adam(model.parameters(), lr=1e-3)
training_loop(epoch_number=15)
```
## Transformers
To fully understand the Transformer architecture, look at this lecture held by Judit Acs (in the course Introduction to Python and Natural Language Technologies at BME):
- https://github.com/bmeaut/python_nlp_2021_spring/blob/main/lectures/09_Transformers_BERT.ipynb
Here I will only include and present the necessary details _from the lecture_ about transformers and BERT.
### Problems with recurrent neural networks:
Recall that we used recurrent neural cells, specifically LSTMs to encode a list of vectors into a sentence vector.
- Problem 1. No parallelism
- LSTMs are recurrent, they rely on their left and right history, so the symbols need to be processed in order -> no parallelism.
- Problem 2. Long-range dependencies
- Long-range dependencies are not infrequent in NLP.
- "The people/person who called and wanted to rent your house when you go away next year are/is from California" -- Miller & Chomsky 1963
- LSTMs have a problem capturing these because there are too many backpropagation steps between the symbols.
Introduced in [Attention Is All You Need](https://papers.nips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf) by Vaswani et al., 2017
Transformers solve Problem 1 by relying purely on attention instead of recurrence, so all positions can be processed in parallel.
Without recurrent connections the model has no built-in notion of order, which is why positional information has to be injected explicitly (positional encodings).
Recurrence is replaced by self-attention.
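As a rough illustration of the core operation, here is a minimal scaled dot-product self-attention sketch (not the full multi-head Transformer block):
```
import math
import torch
import torch.nn.functional as F
def self_attention(x):
    # x: (batch, seq_len, d_model); queries, keys and values are all x itself here
    d_model = x.size(-1)
    scores = torch.matmul(x, x.transpose(-2, -1)) / math.sqrt(d_model)
    weights = F.softmax(scores, dim=-1)      # how much each position attends to every other
    return torch.matmul(weights, x)          # weighted sum of the value vectors
```
Every position can attend to every other position in a single step, which removes both the sequential bottleneck and the long backpropagation paths of recurrent models.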
- Transformers are available in the __transformers__ Python package: https://github.com/huggingface/transformers.
- There are thousands of pretrained transformers models in different languages and with different architectures.
- With the huggingface package there is a unified interface to download and use all the models. Browse https://huggingface.co/models for more!
- There is also a great blog post to understand the architecture of transformers: https://jalammar.github.io/illustrated-transformer/
### BERT
[BERT](https://www.aclweb.org/anthology/N19-1423/): Pre-training of Deep Bidirectional Transformers for Language Understanding by Devlin et al. 2018, 17500 citations
[BERTology](https://huggingface.co/transformers/bertology.html) is the nickname for the growing amount of BERT-related research.
BERT trains a transformer model on two tasks:
- Masked language model:
- 15% of the tokens (wordpieces) are selected at the beginning.
- 80% of those are replaced with [MASK],
- 10% are replaced with a random token,
- 10% are kept intact.
- Next sentence prediction:
- Are sentences A and B consecutive sentences?
- Positive (consecutive) and negative (random) pairs are generated 50-50%.
- Binary classification task.
### Training, Finetuning BERT
- BERT models are (masked-)language models that were usually trained on large corpora.
- e.g. BERT base model was trained on BookCorpus, a dataset consisting of 11,038 unpublished books and English Wikipedia.
#### Finetuning
- Get a trained BERT model.
- Add a small classification layer on top (typically a 2-layer MLP).
- Train BERT along with the classification layer on an annotated dataset.
- Much smaller than the data BERT was trained on
- Another option: freeze BERT and train the classification layer only.
- Easier training regime.
- Smaller memory footprint.
- Worse performance.
<img src="https://production-media.paperswithcode.com/methods/new_BERT_Overall.jpg" alt="finetune" width="800px"/>
```
!pip install transformers
```
### WordPiece tokenizer
- BERT has its own tokenizer
- All inputs must be tokenized with BERT's own tokenizer
- You don't need to remove stopwords, lemmatize, or otherwise preprocess the input for BERT
- It is a middle ground between word and character tokenization.
- Static vocabulary:
- Special tokens: [CLS], [SEP], [MASK], [UNK]
- It tokenizes everything, falling back to characters and [UNK] if necessary
```
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
print(type(tokenizer))
print(len(tokenizer.get_vocab()))
tokenizer.tokenize("My shihtzu's name is Maszat.")
tokenizer("There are black cats and black dogs.", "Another sentence.")
```
### Train a BertForSequenceClassification model on the dataset
```
from transformers import BertForSequenceClassification
```
__BertForSequenceClassification__ is a helper class to train transformer-based BERT models. It puts a classification layer on top of a pretrained model.
Read more in the documentation: https://huggingface.co/transformers/model_doc/bert.html#bertforsequenceclassification
```
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
_ = model.to(device)
# We only want to finetune the classification layer on top of BERT
for p in model.base_model.parameters():
p.requires_grad = False
params = list(model.named_parameters())
print(f"The BERT model has {len(params)} different named parameters.")
print("==== Embedding Layer ====\n")
for p in params[0:5]:
print(f"{p[0]} {str(tuple(p[1].size()))}")
print("\n==== First Transformer ====\n")
for p in params[5:21]:
print(f"{p[0]} {str(tuple(p[1].size()))}")
print("\n==== Output Layer ====\n")
for p in params[-4:]:
print(f"{p[0]} {str(tuple(p[1].size()))}")
N_EPOCHS = 5
optimizer = optim.Adam(model.parameters())
tr_data_loader, val_data_loader = prepare_dataloader_with_padding(
tr_data, val_data, word_to_ix
)
train_iterator, valid_iterator = create_dataloader_iterators_with_padding(
tr_data_loader, val_data_loader, BATCH_SIZE
)
for epoch in range(N_EPOCHS):
start_time = time.time()
train_epoch_loss = 0
train_epoch_prec = 0
train_epoch_recall = 0
train_epoch_fscore = 0
model.train()
# We use our own iterator but now use the raw texts instead of the ID tokens
for train_batch in train_iterator:
labels = train_batch[1]
texts = train_batch[3]
optimizer.zero_grad()
# We use BERT's own tokenizer on raw texts
# Check the documentation: https://huggingface.co/transformers/main_classes/tokenizer.html
encoded = tokenizer(
texts,
truncation=True,
max_length=128,
padding=True,
return_tensors="pt",
)
# BERT converts texts into IDs of its own vocabulary
input_ids = encoded["input_ids"].to(device)
# Mask to avoid performing attention on padding token indices.
attention_mask = encoded["attention_mask"].to(device)
# Run the model
outputs = model(input_ids, attention_mask=attention_mask, labels=labels)
loss = outputs[0]
predictions = outputs[1]
prec, recall, fscore = calculate_performance(predictions, labels)
loss.backward()
optimizer.step()
train_epoch_loss += loss.item()
train_epoch_prec += prec.item()
train_epoch_recall += recall.item()
train_epoch_fscore += fscore.item()
train_loss = train_epoch_loss / len(train_iterator)
train_prec = train_epoch_prec / len(train_iterator)
train_rec = train_epoch_recall / len(train_iterator)
train_fscore = train_epoch_fscore / len(train_iterator)
# And validate your model on the validation set
valid_epoch_loss = 0
valid_epoch_prec = 0
valid_epoch_recall = 0
valid_epoch_fscore = 0
model.eval()
with torch.no_grad():
for valid_batch in valid_iterator:
labels = valid_batch[1]
texts = valid_batch[3]
encoded = tokenizer(
texts,
truncation=True,
max_length=128,
padding=True,
return_tensors="pt",
)
input_ids = encoded["input_ids"].to(device)
attention_mask = encoded["attention_mask"].to(device)
outputs = model(input_ids, attention_mask=attention_mask, labels=labels)
loss = outputs[0]
predictions = outputs[1]
prec, recall, fscore = calculate_performance(predictions, labels)
# We add batch-wise loss to the epoch-wise loss
valid_epoch_loss += loss.item()
valid_epoch_prec += prec.item()
valid_epoch_recall += recall.item()
valid_epoch_fscore += fscore.item()
valid_loss = valid_epoch_loss / len(valid_iterator)
valid_prec = valid_epoch_prec / len(valid_iterator)
valid_rec = valid_epoch_recall / len(valid_iterator)
valid_fscore = valid_epoch_fscore / len(valid_iterator)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
print(f"Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s")
print(
f"\tTrain Loss: {train_loss:.3f} | Train Prec: {train_prec*100:.2f}% | Train Rec: {train_rec*100:.2f}% | Train Fscore: {train_fscore*100:.2f}%"
)
print(
f"\t Val. Loss: {valid_loss:.3f} | Val Prec: {valid_prec*100:.2f}% | Val Rec: {valid_rec*100:.2f}% | Val Fscore: {valid_fscore*100:.2f}%"
)
```
# T81-558: Applications of Deep Neural Networks
**Module 11: Natural Language Processing and Speech Recognition**
* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# Module 11 Material
* Part 11.1: Getting Started with Spacy in Python [[Video]](https://www.youtube.com/watch?v=A5BtU9vXzu8&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_11_01_spacy.ipynb)
* Part 11.2: Word2Vec and Text Classification [[Video]](https://www.youtube.com/watch?v=nWxtRlpObIs&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_11_02_word2vec.ipynb)
* Part 11.3: What are Embedding Layers in Keras [[Video]](https://www.youtube.com/watch?v=OuNH5kT-aD0&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_11_03_embedding.ipynb)
* **Part 11.4: Natural Language Processing with Spacy and Keras** [[Video]](https://www.youtube.com/watch?v=BKgwjhao5DU&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_11_04_text_nlp.ipynb)
* Part 11.5: Learning English from Scratch with Keras and TensorFlow [[Video]](https://www.youtube.com/watch?v=Y1khuuSjZzc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN&index=58) [[Notebook]](t81_558_class_11_05_english_scratch.ipynb)
# Part 11.4: Natural Language Processing with Spacy and Keras
In this part we will see how to use Spacy and Keras together.
### Word-Level Text Generation
There are a number of different approaches to teaching a neural network to output free-form text. The most basic question is whether you wish the neural network to learn at the word or the character level. In many ways, learning at the character level is the more interesting of the two: the LSTM learns to construct its own words without ever being shown what a word is. Here we operate at the word level, and the automatic captioning implemented in the next module works at the word level as well.
We begin by importing the needed Python packages and defining the sequence length, named **maxlen**. Time-series neural networks always accept their input as a fixed-length array. If not all of the sequence is used, it is common to fill the extra elements with zeros. The text will be divided into sequences of this length, and the neural network will be trained to predict what comes after each sequence.
```
from tensorflow.keras.callbacks import LambdaCallback
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LSTM
from tensorflow.keras.optimizers import RMSprop
import numpy as np
import random
import sys
import io
import requests
import re
r = requests.get("https://data.heatonresearch.com/data/t81-558/text/treasure_island.txt")
raw_text = r.text.lower()
print(raw_text[0:1000])
import spacy
nlp = spacy.load("en_core_web_sm")
doc = nlp(raw_text)
vocab = set()
tokenized_text = []
for token in doc:
word = ''.join([i if ord(i) < 128 else ' ' for i in token.text])
word = word.strip()
if not token.is_digit \
and not token.like_url \
and not token.like_email:
vocab.add(word)
tokenized_text.append(word)
print(f"Vocab size: {len(vocab)}")
```
The above section might have given you this error:
```
OSError: [E050] Can't find model 'en_core_web_sm'. It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory.
```
Spacy itself can be installed with a simple pip install (it was included in the list of packages to install for this course), but you also need to download a language model for it; without one you will get the error above.
To install the English model, use the following command:
```
python -m spacy download en_core_web_sm
```
```
print(list(vocab)[:20])
word2idx = dict((n, v) for v, n in enumerate(vocab))
idx2word = dict((n, v) for n, v in enumerate(vocab))
tokenized_text = [word2idx[word] for word in tokenized_text]
tokenized_text
# cut the text in semi-redundant sequences of maxlen words
maxlen = 6
step = 3
sentences = []
next_words = []
for i in range(0, len(tokenized_text) - maxlen, step):
sentences.append(tokenized_text[i: i + maxlen])
next_words.append(tokenized_text[i + maxlen])
print('nb sequences:', len(sentences))
sentences[0:5]
import numpy as np
print('Vectorization...')
# Recent NumPy versions removed the np.bool alias, so we use the builtin bool dtype here
x = np.zeros((len(sentences), maxlen, len(vocab)), dtype=bool)
y = np.zeros((len(sentences), len(vocab)), dtype=bool)
for i, sentence in enumerate(sentences):
for t, word in enumerate(sentence):
x[i, t, word] = 1
y[i, next_words[i]] = 1
x.shape
y.shape
y[0:5]
# build the model: a single LSTM
print('Build model...')
model = Sequential()
model.add(LSTM(128, input_shape=(maxlen, len(vocab))))
model.add(Dense(len(vocab), activation='softmax'))
optimizer = RMSprop(lr=0.01)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
model.summary()
def sample(preds, temperature=1.0):
# helper function to sample an index from a probability array
preds = np.asarray(preds).astype('float64')
preds = np.log(preds) / temperature
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
probas = np.random.multinomial(1, preds, 1)
return np.argmax(probas)
def on_epoch_end(epoch, _):
# Function invoked at end of each epoch. Prints generated text.
print("****************************************************************************")
print('----- Generating text after Epoch: %d' % epoch)
start_index = random.randint(0, len(tokenized_text) - maxlen)
for temperature in [0.2, 0.5, 1.0, 1.2]:
print('----- temperature:', temperature)
#generated = ''
sentence = tokenized_text[start_index: start_index + maxlen]
#generated += sentence
o = ' '.join([idx2word[idx] for idx in sentence])
print(f'----- Generating with seed: "{o}"')
#sys.stdout.write(generated)
for i in range(100):
x_pred = np.zeros((1, maxlen, len(vocab)))
for t, word in enumerate(sentence):
x_pred[0, t, word] = 1.
preds = model.predict(x_pred, verbose=0)[0]
next_index = sample(preds, temperature)
next_word = idx2word[next_index]
#generated += next_char
sentence = sentence[1:]
sentence.append(next_index)
sys.stdout.write(next_word)
sys.stdout.write(' ')
sys.stdout.flush()
print()
print_callback = LambdaCallback(on_epoch_end=on_epoch_end)
model.fit(x, y,
batch_size=128,
epochs=60,
callbacks=[print_callback])
```
# Using nbconvert as a library
In this notebook, you will be introduced to the programmatic API of nbconvert and how it can be used in various contexts.
A great [blog post](http://jakevdp.github.io/blog/2013/04/15/code-golf-in-python-sudoku/) by [@jakevdp](https://github.com/jakevdp) will be used to demonstrate. This notebook will not focus on using the command line tool. The attentive reader will point out that no data is read from or written to disk during the conversion process. This is because nbconvert has been designed to work in memory so that it works well in a database or web-based environment too.
## Quick overview
Credit: Jonathan Frederic (@jdfreder on github)
The main principle of nbconvert is to instantiate an `Exporter` that controls the pipeline through which notebooks are converted.
First, download @jakevdp's notebook (we use `urlopen` from the Python standard library's `urllib`, so no extra packages are needed):
```
from urllib.request import urlopen
url = 'http://jakevdp.github.com/downloads/notebooks/XKCD_plots.ipynb'
response = urlopen(url).read().decode()
response[0:60] + ' ...'
```
The response is a JSON string which represents a Jupyter notebook.
Next, we will read the response using nbformat. Doing this will guarantee that the notebook structure is valid. Note that the in-memory format and on-disk format are slightly different. In particular, on disk, multiline strings might be split into a list of strings.
```
import nbformat
jake_notebook = nbformat.reads(response, as_version=4)
jake_notebook.cells[0]
```
The nbformat API returns a special type of dictionary. For this example, you don't need to worry about the details of the structure (if you are interested, please see the [nbformat documentation](https://nbformat.readthedocs.io/en/latest/)).
The nbconvert API exposes some basic exporters for common formats and defaults. You will start by using one of them. First, you will import one of these exporters (specifically, the HTML exporter), then instantiate it using most of the defaults, and then you will use it to process the notebook we downloaded earlier.
```
from traitlets.config import Config
# 1. Import the exporter
from nbconvert import HTMLExporter
# 2. Instantiate the exporter. We use the `basic` template for now; we'll get into more details
# later about how to customize the exporter further.
html_exporter = HTMLExporter()
html_exporter.template_file = 'basic'
# 3. Process the notebook we loaded earlier
(body, resources) = html_exporter.from_notebook_node(jake_notebook)
```
The exporter returns a tuple containing the source of the converted notebook, as well as a resources dict. In this case, the source is just raw HTML:
```
print(body[:400] + '...')
```
If you understand HTML, you'll notice that some common tags are omitted, like the `body` tag. Those tags are included in the default `HTMLExporter` template, which is what would have been used if we had not changed `template_file` to `'basic'`.
The resource dict contains (among many things) the extracted `.png`, `.jpg`, etc. from the notebook when applicable. The basic HTML exporter leaves the figures as embedded base64, but you can configure it to extract the figures. So for now, the resource dict should be mostly empty, except for a key containing CSS and a few others whose content will be obvious:
```
print("Resources:", resources.keys())
print("Metadata:", resources['metadata'].keys())
print("Inlining:", resources['inlining'].keys())
print("Extension:", resources['output_extension'])
```
`Exporter`s are stateless, so you won't be able to extract any useful information beyond their configuration. You can re-use an exporter instance to convert another notebook. In addition to the `from_notebook_node` used above, each exporter exposes `from_file` and `from_filename` methods.
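For example, a small sketch of converting straight from a file on disk (`notebook.ipynb` is a placeholder path, not a file used elsewhere in this document):
```
# from_filename reads the notebook from disk and returns the same (body, resources) tuple
(body, resources) = html_exporter.from_filename('notebook.ipynb')
```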
## Extracting Figures using the RST Exporter
When exporting, you may want to extract the base64 encoded figures as files. While the HTML exporter does not do this by default, the `RstExporter` does:
```
# Import the RST exporter
from nbconvert import RSTExporter
# Instantiate it
rst_exporter = RSTExporter()
# Convert the notebook to RST format
(body, resources) = rst_exporter.from_notebook_node(jake_notebook)
print(body[:970] + '...')
print('[.....]')
print(body[800:1200] + '...')
```
Notice that base64 images are not embedded, but instead there are filename-like strings, such as `output_3_0.png`. The strings actually are (configurable) keys that map to the binary data in the resources dict.
Note that if you write an RST plugin, you are responsible for writing all the files to disk (or uploading them, etc.) in the right location. Of course, the naming scheme is configurable.
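A minimal sketch of doing exactly that with the built-in `FilesWriter` (the `build_directory` and `notebook_name` values are arbitrary choices for illustration):
```
from nbconvert.writers import FilesWriter
# Writes the converted body plus every entry in resources['outputs'] to the build directory
writer = FilesWriter(build_directory='output')
writer.write(body, resources, notebook_name='notebook')
```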
As an exercise, this notebook will show you how to get one of those images. First, take a look at the `'outputs'` of the returned resources dictionary. This is a dictionary that contains a key for each extracted resource, with values corresponding to the actual base64 encoding:
```
sorted(resources['outputs'].keys())
```
In this case, there are 5 extracted binary figures, all `png`s. We can use the Image display object to actually display one of the images:
```
from IPython.display import Image
Image(data=resources['outputs']['output_3_0.png'], format='png')
```
Note that this image is being rendered without ever reading or writing to the disk.
## Extracting Figures using the HTML Exporter
As mentioned above, by default, the HTML exporter does not extract images -- it just leaves them as inline base64 encodings. However, this is not always what you might want. For example, here is a use case from @jakevdp:
> I write an [awesome blog](http://jakevdp.github.io/) using Jupyter notebooks converted to HTML, and I want the images to be cached. Having one html file with all of the images base64 encoded inside it is nice when sharing with a coworker, but for a website, not so much. I need an HTML exporter, and I want it to extract the figures!
### Some theory
Before we get into actually extracting the figures, it will be helpful to give a high-level overview of the process of converting a notebook to another format:
1. Retrieve the notebook and its accompanying resources (you are responsible for this).
2. Feed the notebook into the `Exporter`, which:
1. Sequentially feeds the notebook into an array of `Preprocessor`s. Preprocessors only act on the **structure** of the notebook, and have unrestricted access to it.
2. Feeds the notebook into the Jinja templating engine, which converts it to a particular format depending on which template is selected.
3. The exporter returns the converted notebook and other relevant resources as a tuple.
4. You write the data to the disk using the built-in `FilesWriter` (which writes the notebook and any extracted files to disk), or elsewhere using a custom `Writer`.
### Using different preprocessors
To extract the figures when using the HTML exporter, we will want to change which `Preprocessor`s we are using. There are several preprocessors that come with nbconvert, including one called the `ExtractOutputPreprocessor`.
The `ExtractOutputPreprocessor` is responsible for crawling the notebook, finding all of the figures, and putting them into the resources directory, as well as choosing the key (i.e. `filename_xx_y.extension`) that can replace the figure inside the template. To enable the `ExtractOutputPreprocessor`, we must add it to the exporter's list of preprocessors:
```
# create a configuration object that changes the preprocessors
from traitlets.config import Config
c = Config()
c.HTMLExporter.preprocessors = ['nbconvert.preprocessors.ExtractOutputPreprocessor']
# create the new exporter using the custom config
html_exporter_with_figs = HTMLExporter(config=c)
html_exporter_with_figs.preprocessors
```
We can compare the result of converting the notebook using the original HTML exporter and our new customized one:
```
(_, resources) = html_exporter.from_notebook_node(jake_notebook)
(_, resources_with_fig) = html_exporter_with_figs.from_notebook_node(jake_notebook)
print("resources without figures:")
print(sorted(resources.keys()))
print("\nresources with extracted figures (notice that there's one more field called 'outputs'):")
print(sorted(resources_with_fig.keys()))
print("\nthe actual figures are:")
print(sorted(resources_with_fig['outputs'].keys()))
```
## Custom Preprocessors
There are an endless number of transformations that you may want to apply to a notebook. In particularly complicated cases, you may want to actually create your own `Preprocessor`. Above, when we customized the list of preprocessors accepted by the `HTMLExporter`, we passed in a string -- this can be any valid module name. So, if you create your own preprocessor, you can include it in that same list and it will be used by the exporter.
To create your own preprocessor, you will need to subclass from `nbconvert.preprocessors.Preprocessor` and overwrite either the `preprocess` and/or `preprocess_cell` methods.
## Example
The following demonstration adds the ability to exclude a cell by index.
Note: injecting cells is similar, and won't be covered here. If you want to inject static content at the beginning/end of a notebook, use a custom template.
```
from traitlets import Integer
from nbconvert.preprocessors import Preprocessor
class PelicanSubCell(Preprocessor):
"""A Pelican specific preprocessor to remove some of the cells of a notebook"""
# I could also read the cells from nb.metadata.pelican if someone wrote a JS extension,
# but for now I'll stay with configurable value.
start = Integer(0, help="first cell of notebook to be converted")
end = Integer(-1, help="last cell of notebook to be converted")
    start.tag(config=True)
    end.tag(config=True)
def preprocess(self, nb, resources):
self.log.info("I'll keep only cells from %d to %d", self.start, self.end)
nb.cells = nb.cells[self.start:self.end]
return nb, resources
```
Here a Pelican exporter is created that takes `PelicanSubCell` preprocessors and a `config` object as parameters. This may seem redundant, but with the configuration system you can register an inactive preprocessor on all of the exporters and activate it from config files or the command line.
```
# Create a new config object that configures both the new preprocessor, as well as the exporter
c = Config()
c.PelicanSubCell.start = 4
c.PelicanSubCell.end = 6
c.RSTExporter.preprocessors = [PelicanSubCell]
# Create our new, customized exporter that uses our custom preprocessor
pelican = RSTExporter(config=c)
# Process the notebook
print(pelican.from_notebook_node(jake_notebook)[0])
```
## Programmatically creating templates
```
from jinja2 import DictLoader
dl = DictLoader({'full.tpl':
"""
{%- extends 'basic.tpl' -%}
{% block footer %}
FOOOOOOOOTEEEEER
{% endblock footer %}
"""})
exportHTML = HTMLExporter(extra_loaders=[dl])
(body, resources) = exportHTML.from_notebook_node(jake_notebook)
for l in body.split('\n')[-4:]:
print(l)
```
## Real World Uses
@jakevdp uses Pelican and Jupyter Notebook to blog. Pelican [will use](https://github.com/getpelican/pelican-plugins/pull/21) nbconvert programmatically to generate blog posts. Have a look at [Pythonic Perambulations](http://jakevdp.github.io/) for Jake's blog posts.
@damianavila wrote the Nikola plugin to [write blog posts as notebooks](http://damianavila.github.io/blog/posts/one-line-deployment-of-your-site-to-gh-pages.html) and is developing a js-extension to publish notebooks via one click from the web app.
<center>
<blockquote class="twitter-tweet"><p>As <a href="https://twitter.com/Mbussonn">@Mbussonn</a> requested... easieeeeer! Deploy your Nikola site with just a click in the IPython notebook! <a href="http://t.co/860sJunZvj">http://t.co/860sJunZvj</a> cc <a href="https://twitter.com/ralsina">@ralsina</a></p>— Damián Avila (@damian_avila) <a href="https://twitter.com/damian_avila/statuses/370306057828335616">August 21, 2013</a></blockquote>
</center>
# Training a ConvNet PyTorch
In this notebook, you'll learn how to use the powerful PyTorch framework to specify a conv net architecture and train it on the CIFAR-10 dataset.
```
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
from torch.utils.data import DataLoader
from torch.utils.data import sampler
import torchvision.datasets as dset
import torchvision.transforms as T
import numpy as np
import timeit
```
## What's this PyTorch business?
You've written a lot of code in this assignment to provide a whole host of neural network functionality. Dropout, Batch Norm, and 2D convolutions are some of the workhorses of deep learning in computer vision. You've also worked hard to make your code efficient and vectorized.
For the last part of this assignment, though, we're going to leave behind your beautiful codebase and instead migrate to one of two popular deep learning frameworks: in this instance, PyTorch (or TensorFlow, if you switch over to that notebook).
Why?
* Our code will now run on GPUs! Much faster training. When using a framework like PyTorch or TensorFlow you can harness the power of the GPU for your own custom neural network architectures without having to write CUDA code directly (which is beyond the scope of this class).
* We want you to be ready to use one of these frameworks for your project so you can experiment more efficiently than if you were writing every feature you want to use by hand.
* We want you to stand on the shoulders of giants! TensorFlow and PyTorch are both excellent frameworks that will make your lives a lot easier, and now that you understand their guts, you are free to use them :)
* We want you to be exposed to the sort of deep learning code you might run into in academia or industry.
## How will I learn PyTorch?
If you've used Torch before, but are new to PyTorch, this tutorial might be of use: http://pytorch.org/tutorials/beginner/former_torchies_tutorial.html
Otherwise, this notebook will walk you through much of what you need to do to train models in Torch. See the end of the notebook for some links to helpful tutorials if you want to learn more or need further clarification on topics that aren't fully explained here.
## Load Datasets
We load the CIFAR-10 dataset. This might take a couple minutes the first time you do it, but the files should stay cached after that.
```
class ChunkSampler(sampler.Sampler):
"""Samples elements sequentially from some offset.
Arguments:
num_samples: # of desired datapoints
start: offset where we should start selecting from
"""
def __init__(self, num_samples, start = 0):
self.num_samples = num_samples
self.start = start
def __iter__(self):
return iter(range(self.start, self.start + self.num_samples))
def __len__(self):
return self.num_samples
NUM_TRAIN = 49000
NUM_VAL = 1000
cifar10_train = dset.CIFAR10('./cs231n/datasets', train=True, download=True,
transform=T.ToTensor())
loader_train = DataLoader(cifar10_train, batch_size=64, sampler=ChunkSampler(NUM_TRAIN, 0))
cifar10_val = dset.CIFAR10('./cs231n/datasets', train=True, download=True,
transform=T.ToTensor())
loader_val = DataLoader(cifar10_val, batch_size=64, sampler=ChunkSampler(NUM_VAL, NUM_TRAIN))
cifar10_test = dset.CIFAR10('./cs231n/datasets', train=False, download=True,
transform=T.ToTensor())
loader_test = DataLoader(cifar10_test, batch_size=64)
```
For now, we're going to use a CPU-friendly datatype. Later, we'll switch to a datatype that will move all our computations to the GPU and measure the speedup.
```
dtype = torch.FloatTensor # the CPU datatype
# Constant to control how frequently we print train loss
print_every = 100
# This is a little utility that we'll use to reset the model
# if we want to re-initialize all our parameters
def reset(m):
if hasattr(m, 'reset_parameters'):
m.reset_parameters()
```
## Example Model
### Some assorted tidbits
Let's start by looking at a simple model. First, note that PyTorch operates on Tensors, which are n-dimensional arrays functionally analogous to numpy's ndarrays, with the additional feature that they can be used for computations on GPUs.
We'll provide you with a Flatten function, which we explain here. Remember that our image data (and more relevantly, our intermediate feature maps) are initially N x C x H x W, where:
* N is the number of datapoints
* C is the number of channels
* H is the height of the intermediate feature map in pixels
* W is the width of the intermediate feature map in pixels
This is the right way to represent the data when we are doing something like a 2D convolution, that needs spatial understanding of where the intermediate features are relative to each other. When we input data into fully connected affine layers, however, we want each datapoint to be represented by a single vector -- it's no longer useful to segregate the different channels, rows, and columns of the data. So, we use a "Flatten" operation to collapse the C x H x W values per representation into a single long vector. The Flatten function below first reads in the N, C, H, and W values from a given batch of data, and then returns a "view" of that data. "View" is analogous to numpy's "reshape" method: it reshapes x's dimensions to be N x ??, where ?? is allowed to be anything (in this case, it will be C x H x W, but we don't need to specify that explicitly).
```
class Flatten(nn.Module):
def forward(self, x):
N, C, H, W = x.size() # read in N, C, H, W
return x.view(N, -1) # "flatten" the C * H * W values into a single vector per image
```
### The example model itself
The first step to training your own model is defining its architecture.
Here's an example of a convolutional neural network defined in PyTorch -- try to understand what each line is doing, remembering that each layer is composed upon the previous layer. We haven't trained anything yet - that'll come next - for now, we want you to understand how everything gets set up. nn.Sequential is a container which applies each layer
one after the other.
In that example, you see 2D convolutional layers (Conv2d), ReLU activations, and fully-connected layers (Linear). You also see the Cross-Entropy loss function, and the Adam optimizer being used.
Make sure you understand why the parameters of the Linear layer are 5408 and 10.
```
# Here's where we define the architecture of the model...
simple_model = nn.Sequential(
nn.Conv2d(3, 32, kernel_size=7, stride=2),
nn.ReLU(inplace=True),
Flatten(), # see above for explanation
nn.Linear(5408, 10), # affine layer
)
# Set the type of all data in this model to be FloatTensor
simple_model.type(dtype)
loss_fn = nn.CrossEntropyLoss().type(dtype)
optimizer = optim.Adam(simple_model.parameters(), lr=1e-2) # lr sets the learning rate of the optimizer
```
PyTorch supports many other layer types, loss functions, and optimizers - you will experiment with these next. Here's the official API documentation for these (if any of the parameters used above were unclear, this resource will also be helpful). One note: what we call in the class "spatial batch norm" is called "BatchNorm2D" in PyTorch.
* Layers: http://pytorch.org/docs/nn.html
* Activations: http://pytorch.org/docs/nn.html#non-linear-activations
* Loss functions: http://pytorch.org/docs/nn.html#loss-functions
* Optimizers: http://pytorch.org/docs/optim.html#algorithms
## Training a specific model
In this section, we're going to specify a model for you to construct. The goal here isn't to get good performance (that'll be next), but instead to get comfortable with understanding the PyTorch documentation and configuring your own model.
Using the code provided above as guidance, and using the following PyTorch documentation, specify a model with the following architecture:
* 7x7 Convolutional Layer with 32 filters and stride of 1
* ReLU Activation Layer
* Spatial Batch Normalization Layer
* 2x2 Max Pooling layer with a stride of 2
* Affine layer with 1024 output units
* ReLU Activation Layer
* Affine layer from 1024 input units to 10 outputs
And finally, set up a **cross-entropy** loss function and the **RMSprop** learning rule.
```
fixed_model_base = nn.Sequential( # You fill this in!
nn.Conv2d(in_channels=3, out_channels=32, kernel_size=7, stride=1),
nn.ReLU(True),
nn.BatchNorm2d(num_features=32),
nn.MaxPool2d(kernel_size=(2,2), stride=2),
Flatten(),
nn.Linear(in_features=5408, out_features=1024),
nn.ReLU(True),
nn.Linear(in_features=1024, out_features=10)
)
fixed_model = fixed_model_base.type(dtype)
```
To make sure you're doing the right thing, use the following tool to check the dimensionality of your output (it should be 64 x 10, since our batches have size 64 and the output of the final affine layer should be 10, corresponding to our 10 classes):
```
## Now we're going to feed a random batch into the model you defined and make sure the output is the right size
x = torch.randn(64, 3, 32, 32).type(dtype)
x_var = Variable(x.type(dtype)) # Construct a PyTorch Variable out of your input data
ans = fixed_model(x_var) # Feed it through the model!
# Check to make sure what comes out of your model
# is the right dimensionality... this should be True
# if you've done everything correctly
np.array_equal(np.array(ans.size()), np.array([64, 10]))
```
### GPU!
Now, we're going to switch the dtype of the model and our data to the GPU-friendly tensors, and see what happens... everything is the same, except we are casting our model and input tensors as this new dtype instead of the old one.
If this returns false, or otherwise fails in a not-graceful way (i.e., with some error message), you may not have an NVIDIA GPU available on your machine. If you're running locally, we recommend you switch to Google Cloud and follow the instructions to set up a GPU there. If you're already on Google Cloud, something is wrong -- make sure you followed the instructions on how to request and use a GPU on your instance. If you did, post on Piazza or come to Office Hours so we can help you debug.
```
# Verify that CUDA is properly configured and you have a GPU available
torch.cuda.is_available()
import copy
gpu_dtype = torch.cuda.FloatTensor
fixed_model_gpu = copy.deepcopy(fixed_model_base).type(gpu_dtype)
x_gpu = torch.randn(64, 3, 32, 32).type(gpu_dtype)
x_var_gpu = Variable(x_gpu) # Construct a PyTorch Variable out of the GPU input data
ans = fixed_model_gpu(x_var_gpu) # Feed it through the model!
# Check to make sure what comes out of your model
# is the right dimensionality... this should be True
# if you've done everything correctly
np.array_equal(np.array(ans.size()), np.array([64, 10]))
```
Run the following cell to evaluate the performance of the forward pass running on the CPU:
```
%%timeit
ans = fixed_model(x_var)
```
... and now the GPU:
```
%%timeit
torch.cuda.synchronize() # Make sure there are no pending GPU computations
ans = fixed_model_gpu(x_var_gpu) # Feed it through the model!
torch.cuda.synchronize() # Make sure there are no pending GPU computations
```
You should observe that even a simple forward pass like this is significantly faster on the GPU. So for the rest of the assignment (and when you go train your models in assignment 3 and your project!), you should use the GPU datatype for your model and your tensors: as a reminder that is *torch.cuda.FloatTensor* (in our notebook here as *gpu_dtype*)
### Train the model.
Now that you've seen how to define a model and do a single forward pass of some data through it, let's walk through how you'd actually train one whole epoch over your training data (using the simple_model we provided above).
Make sure you understand how each PyTorch function used below corresponds to what you implemented in your custom neural network implementation.
Note that because we are not resetting the weights anywhere below, if you run the cell multiple times, you are effectively training multiple epochs (so your performance should improve).
First, set up an RMSprop optimizer (using a 1e-3 learning rate) and a cross-entropy loss function:
```
loss_fn = nn.CrossEntropyLoss().cuda()
optimizer = optim.RMSprop(fixed_model_gpu.parameters(), lr=1e-3)
# This sets the model in "training" mode. This is relevant for some layers that may have different behavior
# in training mode vs testing mode, such as Dropout and BatchNorm.
fixed_model_gpu.train()
# Load one batch at a time.
for t, (x, y) in enumerate(loader_train):
x_var = Variable(x.type(gpu_dtype))
y_var = Variable(y.type(gpu_dtype).long())
# This is the forward pass: predict the scores for each class, for each x in the batch.
scores = fixed_model_gpu(x_var)
# Use the correct y values and the predicted y values to compute the loss.
loss = loss_fn(scores, y_var)
if (t + 1) % print_every == 0:
print('t = %d, loss = %.4f' % (t + 1, loss.data[0]))
# Zero out all of the gradients for the variables which the optimizer will update.
optimizer.zero_grad()
# This is the backwards pass: compute the gradient of the loss with respect to each
# parameter of the model.
loss.backward()
# Actually update the parameters of the model using the gradients computed by the backwards pass.
optimizer.step()
```
Now you've seen how the training process works in PyTorch. To save you writing boilerplate code, we're providing the following helper functions to help you train for multiple epochs and check the accuracy of your model:
```
def train(model, loss_fn, optimizer, num_epochs = 1):
for epoch in range(num_epochs):
print('Starting epoch %d / %d' % (epoch + 1, num_epochs))
model.train()
for t, (x, y) in enumerate(loader_train):
x_var = Variable(x.type(gpu_dtype))
y_var = Variable(y.type(gpu_dtype).long())
scores = model(x_var)
loss = loss_fn(scores, y_var)
if (t + 1) % print_every == 0:
print('t = %d, loss = %.4f' % (t + 1, loss.data[0]))
optimizer.zero_grad()
loss.backward()
optimizer.step()
def check_accuracy(model, loader):
if loader.dataset.train:
print('Checking accuracy on validation set')
else:
print('Checking accuracy on test set')
num_correct = 0
num_samples = 0
model.eval() # Put the model in test mode (the opposite of model.train(), essentially)
for x, y in loader:
x_var = Variable(x.type(gpu_dtype), volatile=True)
scores = model(x_var)
_, preds = scores.data.cpu().max(1)
num_correct += (preds == y).sum()
num_samples += preds.size(0)
acc = float(num_correct) / num_samples
print('Got %d / %d correct (%.2f)' % (num_correct, num_samples, 100 * acc))
```
### Check the accuracy of the model.
Let's see the train and check_accuracy code in action -- feel free to use these methods when evaluating the models you develop below.
You should get a training loss of around 1.2-1.4, and a validation accuracy of around 50-60%. As mentioned above, if you re-run the cells, you'll be training more epochs, so your performance will improve past these numbers.
But don't worry about getting these numbers better -- this was just practice before you tackle designing your own model.
```
torch.cuda.random.manual_seed(12345)
fixed_model_gpu.apply(reset)
train(fixed_model_gpu, loss_fn, optimizer, num_epochs=1)
check_accuracy(fixed_model_gpu, loader_val)
```
### Don't forget the validation set!
And note that you can use the check_accuracy function to evaluate on either the test set or the validation set, by passing either **loader_test** or **loader_val** as the second argument to check_accuracy. You should not touch the test set until you have finished your architecture and hyperparameter tuning, and only run the test set once at the end to report a final value.
## Train a _great_ model on CIFAR-10!
Now it's your job to experiment with architectures, hyperparameters, loss functions, and optimizers to train a model that achieves **>=70%** accuracy on the CIFAR-10 **validation** set. You can use the check_accuracy and train functions from above.
### Things you should try:
- **Filter size**: Above we used 7x7; this makes pretty pictures but smaller filters may be more efficient
- **Number of filters**: Above we used 32 filters. Do more or fewer do better?
- **Pooling vs Strided Convolution**: Do you use max pooling or just stride convolutions?
- **Batch normalization**: Try adding spatial batch normalization after convolution layers and vanilla batch normalization after affine layers. Do your networks train faster?
- **Network architecture**: The network above has two layers of trainable parameters. Can you do better with a deep network? Good architectures to try include:
- [conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]
- [conv-relu-conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]
- [batchnorm-relu-conv]xN -> [affine]xM -> [softmax or SVM]
- **Global Average Pooling**: Instead of flattening and then having multiple affine layers, perform convolutions until your image gets small (7x7 or so) and then perform an average pooling operation to get to a 1x1 image (1, 1, Filter#), which is then reshaped into a (Filter#) vector. This is used in [Google's Inception Network](https://arxiv.org/abs/1512.00567) (see Table 1 for their architecture); a small sketch follows this list.
- **Regularization**: Add l2 weight regularization, or perhaps use Dropout.
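A minimal sketch of a global-average-pooling head (illustrative only; the 128 input channels are an arbitrary choice, not part of the assignment spec):
```
gap_head = nn.Sequential(
    nn.AdaptiveAvgPool2d(1),   # (N, 128, H, W) -> (N, 128, 1, 1)
    Flatten(),                 # (N, 128, 1, 1) -> (N, 128)
    nn.Linear(128, 10),        # class scores
)
```
You would place a head like this after your last convolutional block in place of the flatten-plus-affine stack.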
### Tips for training
For each network architecture that you try, you should tune the learning rate and regularization strength. When doing this there are a couple important things to keep in mind:
- If the parameters are working well, you should see improvement within a few hundred iterations
- Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all, as in the sketch after this list.
- Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.
- You should use the validation set for hyperparameter search, and save your test set for evaluating your architecture on the best parameters as selected by the validation set.
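A coarse first pass over learning rates might look like the sketch below (illustrative only, reusing `fixed_model_base`, `train`, and `check_accuracy` from above; the candidate rates are arbitrary starting points):
```
for lr in [1e-2, 1e-3, 1e-4]:
    candidate = copy.deepcopy(fixed_model_base).type(gpu_dtype)
    optimizer = optim.RMSprop(candidate.parameters(), lr=lr)
    print('--- learning rate %g ---' % lr)
    train(candidate, loss_fn, optimizer, num_epochs=1)  # short run, just to compare settings
    check_accuracy(candidate, loader_val)
```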
### Going above and beyond
If you are feeling adventurous there are many other features you can implement to try and improve your performance. You are **not required** to implement any of these; however they would be good things to try for extra credit.
- Alternative update steps: For the assignment we implemented SGD+momentum, RMSprop, and Adam; you could try alternatives like AdaGrad or AdaDelta.
- Alternative activation functions such as leaky ReLU, parametric ReLU, ELU, or MaxOut.
- Model ensembles
- Data augmentation
- New Architectures
- [ResNets](https://arxiv.org/abs/1512.03385) where the input from the previous layer is added to the output.
- [DenseNets](https://arxiv.org/abs/1608.06993) where inputs into previous layers are concatenated together.
- [This blog has an in-depth overview](https://chatbotslife.com/resnets-highwaynets-and-densenets-oh-my-9bb15918ee32)
If you do decide to implement something extra, clearly describe it in the "Extra Credit Description" cell below.
### What we expect
At the very least, you should be able to train a ConvNet that gets at least 70% accuracy on the validation set. This is just a lower bound - if you are careful it should be possible to get accuracies much higher than that! Extra credit points will be awarded for particularly high-scoring models or unique approaches.
You should use the space below to experiment and train your network.
Have fun and happy training!
```
# Train your model here, and make sure the output of this cell is the accuracy of your best model on the
# train, val, and test sets. Here's some code to get you started. The output of this cell should be the training
# and validation accuracy on your best model (measured by validation accuracy).
class model_cifar10(nn.Module):
def __init__(self):
super(model_cifar10, self).__init__()
self.conv1 = nn.Sequential(
nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, stride=1),
nn.ReLU(True),
nn.BatchNorm2d(num_features=64),
nn.MaxPool2d(kernel_size=(2,2), stride=2)
        ) # (N, 64, 15, 15): 3x3 conv without padding (32->30), then 2x2 max pool
self.conv2 = nn.Sequential(
nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, stride=1),
nn.ReLU(True),
nn.BatchNorm2d(num_features=128),
nn.MaxPool2d(kernel_size=(2,2), stride=2)
        ) # (N, 128, 6, 6): 3x3 conv without padding (15->13), then 2x2 max pool
self.fc1 = nn.Linear(in_features=6*6*128, out_features=1024)
self.dropout = nn.Dropout(p=0.5)
self.out = nn.Linear(in_features=1024, out_features=10)
def forward(self,x):
x = self.conv1(x)
x = self.conv2(x)
N, C, H, W = x.size()
x = x.view(N, -1)
x = self.fc1(x)
x = self.dropout(x)
x = self.out(x)
return x
model = model_cifar10().cuda()
loss_fn = nn.CrossEntropyLoss().cuda()
optimizer = optim.RMSprop(model.parameters(),lr=1e-3)
train(model, loss_fn, optimizer, num_epochs=10)
check_accuracy(model, loader_val)
```
### Describe what you did
In the cell below you should write an explanation of what you did, any additional features that you implemented, and any visualizations or graphs that you make in the process of training and evaluating your network.
Tell us here!
## Test set -- run this only once
Now that we've gotten a result we're happy with, we test our final model on the test set (which you should store in best_model). This would be the score we would achieve on a competition. Think about how this compares to your validation set accuracy.
```
best_model = model
check_accuracy(best_model, loader_test)
```
## Going further with PyTorch
The next assignment will make heavy use of PyTorch. You might also find it useful for your projects.
Here's a nice tutorial by Justin Johnson that shows off some of PyTorch's features, like dynamic graphs and custom NN modules: http://pytorch.org/tutorials/beginner/pytorch_with_examples.html
If you're interested in reinforcement learning for your final project, this is a good (more advanced) DQN tutorial in PyTorch: http://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html
```
# default_exp env_wrappers
#hide
from nbdev import *
```
# env_wrappers
> Here we provide a useful set of environment wrappers.
```
%nbdev_export
import gym
import numpy as np
import torch
from typing import Optional, Union
%nbdev_export
class ToTorchWrapper(gym.Wrapper):
"""
Environment wrapper for converting actions from torch.Tensors to np.array and converting observations from np.array to
torch.Tensors.
Args:
- env (gym.Env): Environment to wrap. Should be a subclass of gym.Env and follow the OpenAI Gym API.
"""
def __init__(self, env: gym.Env):
super().__init__(env)
self.env = env
def reset(self, *args, **kwargs):
"""
Reset the environment.
Returns:
- tensor_obs (torch.Tensor): output of reset as PyTorch Tensor.
"""
obs = self.env.reset(*args, **kwargs)
tensor_obs = torch.as_tensor(obs, dtype=torch.float32)
return tensor_obs
def step(self, action: torch.Tensor, *args, **kwargs):
"""
Execute environment step.
Converts from torch.Tensor action and returns observations as a torch.Tensor.
Returns:
- tensor_obs (torch.Tensor): Next observations as pytorch tensor.
- reward (float or int): The reward earned at the current timestep.
- done (bool): Whether the episode is in a terminal state.
- infos (dict): The info dict from the environment.
"""
action = self.action2np(action)
obs, reward, done, infos = self.env.step(action, *args, **kwargs)
tensor_obs = torch.as_tensor(obs, dtype=torch.float32)
return tensor_obs, reward, done, infos
def action2np(self, action: torch.Tensor):
"""
Convert torch.Tensor action to NumPy.
Args:
- action (torch.Tensor): The action to convert.
Returns:
- np_act (np.array or int): The action converted to numpy.
"""
        if isinstance(self.action_space, gym.spaces.Discrete):
            action_map = lambda action: int(action.squeeze().numpy())
        elif isinstance(self.action_space, gym.spaces.Box):
            action_map = lambda action: action.numpy()
        else:
            raise NotImplementedError("Only Discrete and Box action spaces are supported.")
        np_act = action_map(action)
return np_act
show_doc(ToTorchWrapper)
show_doc(ToTorchWrapper.reset)
show_doc(ToTorchWrapper.step)
show_doc(ToTorchWrapper.action2np)
```
Example usage of the `ToTorchWrapper` is demonstrated below.
```
env = gym.make("CartPole-v1")
env = ToTorchWrapper(env)
obs = env.reset()
print("initial obs:", obs)
action = env.action_space.sample()
# need to convert action to PyTorch Tensor because ToTorchWrapper expects actions as Tensors.
# normally you would not need to do this, your PyTorch NN actor will output a Tensor by default.
action = torch.as_tensor(action, dtype=torch.float32)
stepped = env.step(action)
print("stepped once:", stepped)
print("\nEntering interaction loop! \n")
# interaction loop
obs = env.reset()
ret = 0
for i in range(100):
action = torch.as_tensor(env.action_space.sample(), dtype=torch.float32)
state, reward, done, _ = env.step(action)
ret += reward
if done:
print(f"Random policy got {ret} reward!")
obs = env.reset()
ret = 0
if i < 99:
print("Starting new episode.")
if i == 99:
print(f"\nInteraction loop ended! Got reward {ret} before episode was cut off.")
break
#hide
env = gym.make("CartPole-v1")
env = ToTorchWrapper(env)
obs = env.reset()
assert type(obs) == torch.Tensor
action = torch.as_tensor(env.action_space.sample(), dtype=torch.float32)
step_out = env.step(action)
assert type(step_out[0]) == torch.Tensor
env = gym.make("LunarLanderContinuous-v2")
env = ToTorchWrapper(env)
obs = env.reset()
assert type(obs) == torch.Tensor
action = torch.as_tensor(env.action_space.sample(), dtype=torch.float32)
step_out = env.step(action)
assert type(step_out[0]) == torch.Tensor
%nbdev_export
class StateNormalizeWrapper(gym.Wrapper):
"""
Environment wrapper for normalizing states.
Args:
- env (gym.Env): Environment to wrap.
- beta (float): Beta parameter for running mean and variance calculation.
- eps (float): Parameter to avoid division by zero in case variance goes to zero.
"""
def __init__(self, env: gym.Env, beta: Optional[float] = 0.99, eps: Optional[float] = 1e-8):
super().__init__(env)
self.env = env
self.mean = np.zeros(self.observation_space.shape)
self.var = np.ones(self.observation_space.shape)
self.beta = beta
self.eps = eps
def normalize(self, state: np.array):
"""
Update running mean and variance parameters and normalize input state.
Args:
- state (np.array): State to normalize and to use to calculate update.
Returns:
- norm_state (np.array): Normalized state.
"""
self.mean = self.beta * self.mean + (1. - self.beta) * state
self.var = self.beta * self.var + (1. - self.beta) * np.square(state - self.mean)
norm_state = (state - self.mean) / (np.sqrt(self.var) + self.eps)
return norm_state
def reset(self, *args, **kwargs):
"""
Reset environment and return normalized state.
Returns:
- norm_state (np.array): Normalized state.
"""
state = self.env.reset()
norm_state = self.normalize(state)
return norm_state
def step(self, action: Union[np.array, int, float], *args, **kwargs):
"""
Step environment and normalize state.
Args:
- action (np.array or int or float): Action to use to step the environment.
Returns:
- norm_state (np.array): Normalized state.
- reward (int or float): Reward earned at step.
- done (bool): Whether the episode is over.
- infos (dict): Any infos from the environment.
"""
state, reward, done, infos = self.env.step(action, *args, **kwargs)
norm_state = self.normalize(state)
return norm_state, reward, done, infos
```
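The `normalize` method above keeps exponential moving averages of the state mean and variance; a tiny standalone sketch of the same update rule (using the default `beta=0.99` and `eps=1e-8`, on made-up scalar states):

```
import numpy as np

beta, eps = 0.99, 1e-8
mean, var = 0.0, 1.0                   # same initialization as the wrapper
for state in [1.0, 2.0, 3.0, 4.0]:     # stand-in scalar "states"
    mean = beta * mean + (1. - beta) * state
    var = beta * var + (1. - beta) * np.square(state - mean)
    print((state - mean) / (np.sqrt(var) + eps))
```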
**Note: Testing needed for StateNormalizeWrapper. At present, use `ToTorchWrapper` for guaranteed working.**
```
show_doc(StateNormalizeWrapper)
show_doc(StateNormalizeWrapper.reset)
show_doc(StateNormalizeWrapper.normalize)
show_doc(StateNormalizeWrapper.step)
```
Here is a demonstration of using the `StateNormalizeWrapper`.
```
env = gym.make("CartPole-v1")
env = StateNormalizeWrapper(env)
obs = env.reset()
print("initial obs:", obs)
# the StateNormalizeWrapper expects NumPy arrays, so there is no need to convert action to PyTorch Tensor.
action = env.action_space.sample()
stepped = env.step(action)
print("stepped once:", stepped)
print("\nEntering interaction loop! \n")
# interaction loop
obs = env.reset()
ret = 0
for i in range(100):
action = env.action_space.sample()
state, reward, done, _ = env.step(action)
ret += reward
if done:
print(f"Random policy got {ret} reward!")
obs = env.reset()
ret = 0
if i < 99:
print("Starting new episode.")
if i == 99:
print(f"\nInteraction loop ended! Got reward {ret} before episode was cut off.")
break
#hide
env = gym.make("CartPole-v1")
env = StateNormalizeWrapper(env)
assert env.reset() is not None
action = env.action_space.sample()
assert env.step(action) is not None
env = ToTorchWrapper(env)
assert env.reset() is not None
assert type(env.reset()) == torch.Tensor
action = env.action_space.sample()
t_action = torch.as_tensor(action, dtype=torch.float32)
assert env.step(t_action) is not None
assert type(env.step(t_action)[0]) == torch.Tensor
%nbdev_export
class RewardScalerWrapper(gym.Wrapper):
"""
A class for reward scaling over training.
    Calculates a running mean and standard deviation of the observed rewards and standardizes each reward with them.
    Computes: $(r_t - \mu) / (\sigma + \epsilon)$
"""
def __init__(self, env: gym.Env, beta: Optional[float] = 0.99, eps: Optional[float] = 1e-8):
super().__init__(env)
self.beta = beta
self.eps = eps
self.var = 1
self.mean = 0
def scale(self, reward: Union[int, float]):
"""
Update running mean and variance for rewards, scale reward using the variance.
Args:
- reward (int or float): reward to scale.
Returns:
- scaled_rew (float): reward scaled using variance.
"""
self.mean = self.beta * self.mean + (1. - self.beta) * reward
self.var = self.beta * self.var + (1. - self.beta) * np.square(reward - self.mean)
scaled_rew = (reward - self.mean) / (np.sqrt(self.var) + self.eps)
return scaled_rew
def step(self, action, *args, **kwargs):
"""
Step the environment and scale the reward.
Args:
- action (np.array or int or float): Action to use to step the environment.
Returns:
- state (np.array): Next state from environment.
- scaled_rew (float): reward scaled using the variance.
- done (bool): Indicates whether the episode is over.
- infos (dict): Any information from the environment.
"""
state, reward, done, infos = self.env.step(action, *args, **kwargs)
scaled_rew = self.scale(reward)
return state, scaled_rew, done, infos
#hide
env = gym.make("CartPole-v1")
env = RewardScalerWrapper(env)
assert env.reset() is not None
action = env.action_space.sample()
assert env.step(action) is not None
assert type(env.step(action)[0]) == np.ndarray
env = StateNormalizeWrapper(env)
assert env.reset() is not None
action = env.action_space.sample()
assert env.step(action) is not None
assert type(env.step(action)[0]) == np.ndarray
env = ToTorchWrapper(env)
assert env.reset() is not None
assert type(env.reset()) == torch.Tensor
action = torch.as_tensor(env.action_space.sample(), dtype=torch.float32)
assert env.step(action) is not None
assert type(env.step(action)[0]) == torch.Tensor
```
**Note: Testing needed for RewardScalerWrapper. At present, use `ToTorchWrapper` for guaranteed working.**
```
show_doc(RewardScalerWrapper)
show_doc(RewardScalerWrapper.scale)
show_doc(RewardScalerWrapper.step)
```
An example usage of the RewardScalerWrapper.
```
env = gym.make("CartPole-v1")
env = RewardScalerWrapper(env)
obs = env.reset()
print("initial obs:", obs)
action = env.action_space.sample()
stepped = env.step(action)
print("stepped once:", stepped)
print("\nEntering interaction loop! \n")
# interaction loop
obs = env.reset()
ret = 0
for i in range(100):
action = env.action_space.sample()
state, reward, done, _ = env.step(action)
ret += reward
if done:
print(f"Random policy got {ret} reward!")
obs = env.reset()
ret = 0
if i < 99:
print("Starting new episode.")
if i == 99:
print(f"\nInteraction loop ended! Got reward {ret} before episode was cut off.")
break
```
## Combining Wrappers
All of these wrappers can be composed together! Simply be sure to call the `ToTorchWrapper` last, because the others expect NumPy arrays as input, and the `ToTorchWrapper` converts outputs to PyTorch tensors. Below is an example.
```
env = gym.make("CartPole-v1")
env = StateNormalizeWrapper(env)
print(f"After wrapping with StateNormalizeWrapper, output is still a NumPy array: {env.reset()}")
env = RewardScalerWrapper(env)
print(f"After wrapping with RewardScalerWrapper, output is still a NumPy array: {env.reset()}")
env = ToTorchWrapper(env)
print(f"But after wrapping with ToTorchWrapper, output is now a PyTorch Tensor: {env.reset()}")
%nbdev_export
class BestPracticesWrapper(gym.Wrapper):
"""
This wrapper combines the wrappers which we think (from experience and from reading papers/blogs and watching lectures)
constitute best practices.
At the moment it combines the wrappers below in the order listed:
1. StateNormalizeWrapper
2. RewardScalerWrapper
3. ToTorchWrapper
Args:
- env (gym.Env): Environment to wrap.
"""
def __init__(self, env: gym.Env):
super().__init__(env)
env = StateNormalizeWrapper(env)
env = RewardScalerWrapper(env)
self.env = ToTorchWrapper(env)
def reset(self):
"""
Reset environment.
Returns:
- obs (torch.Tensor): Starting observations from the environment.
"""
obs = self.env.reset()
return obs
def step(self, action, *args, **kwargs):
"""
Step the environment forward using input action.
Args:
- action (torch.Tensor): Action to step the environment with.
Returns:
- obs (torch.Tensor): Next step observations.
- reward (int or float): Reward for the last timestep.
- done (bool): Whether the episode is over.
- infos (dict): Dictionary of any info from the environment.
"""
obs, reward, done, infos = self.env.step(action, *args, **kwargs)
return obs, reward, done, infos
#hide
env = gym.make("CartPole-v1")
env = BestPracticesWrapper(env)
assert env.reset() is not None
assert type(env.reset()) == torch.Tensor
action = torch.as_tensor(env.action_space.sample(), dtype=torch.float32)
stepped = env.step(action)
assert stepped is not None
assert type(stepped[0]) == torch.Tensor
```
**Note: Testing needed for BestPracticesWrapper. At present, use `ToTorchWrapper` for guaranteed working.**
```
show_doc(BestPracticesWrapper)
show_doc(BestPracticesWrapper.reset)
show_doc(BestPracticesWrapper.step)
```
Below is a usage example of the `BestPracticesWrapper`. It is used in the same way as the `ToTorchWrapper`.
```
env = gym.make("CartPole-v1")
env = BestPracticesWrapper(env)
obs = env.reset()
print("initial obs:", obs)
action = torch.as_tensor(env.action_space.sample(), dtype=torch.float32)
stepped = env.step(action)
print("stepped once:", stepped)
print("\nEntering interaction loop! \n")
# interaction loop
obs = env.reset()
ret = 0
for i in range(100):
action = torch.as_tensor(env.action_space.sample(), dtype=torch.float32)
state, reward, done, _ = env.step(action)
ret += reward
if done:
print(f"Random policy got {ret} reward!")
obs = env.reset()
ret = 0
if i < 99:
print("Starting new episode.")
if i == 99:
print(f"\nInteraction loop ended! Got reward {ret} before episode was cut off.")
break
#hide
notebook2script()
```
# Facies classification using machine learning techniques
The ideas of
<a href="https://home.deib.polimi.it/bestagini/">Paolo Bestagini's</a> "Try 2", <a href="https://github.com/ar4">Alan Richardson's</a> "Try 2",
<a href="https://github.com/dalide">Dalide's</a> "Try 6", augmented by Dimitrios Oikonomou and Eirik Larsen (ESA AS) by:
- adding the gradient of gradient of features as augmented features.
- with an ML estimator for PE using both training and blind well data.
- removing the NM_M from augmented features.
In the following, we provide a possible solution to the facies classification problem described at https://github.com/seg/2016-ml-contest.
The proposed algorithm is based on random forests, XGBoost, or gradient boosting combined in a one-vs-one multiclass strategy. In particular, we would like to study the effect of:
- Robust feature normalization (a brief illustration follows this list).
- Feature imputation for missing feature values.
- Well-based cross-validation routines.
- Feature augmentation strategies.
- Test multiple classifiers
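As a brief illustration of why robust normalization is used (made-up values, not the actual well logs; the pipeline below applies the same `RobustScaler` inside `train_and_test`):

```
import numpy as np
from sklearn.preprocessing import RobustScaler, StandardScaler

X_demo = np.array([[1.0], [2.0], [3.0], [4.0], [100.0]])  # one log with a single outlier

print(StandardScaler().fit_transform(X_demo).ravel())     # the outlier inflates the std and squashes the inliers
print(RobustScaler(quantile_range=(25.0, 75.0)).fit_transform(X_demo).ravel())  # median/IQR scaling keeps the inliers spread out
```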
# Script initialization
Let's import the used packages and define some parameters (e.g., colors, labels, etc.).
```
# Import
from __future__ import division
get_ipython().magic(u'matplotlib inline')
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rcParams['figure.figsize'] = (20.0, 10.0)
inline_rc = dict(mpl.rcParams)
from classification_utilities import make_facies_log_plot
import pandas as pd
import numpy as np
import seaborn as sns
from sklearn import preprocessing
from sklearn.model_selection import LeavePGroupsOut
from sklearn.metrics import f1_score
from sklearn.model_selection import GridSearchCV
from sklearn.multiclass import OneVsOneClassifier
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor, GradientBoostingClassifier
import xgboost as xgb
from xgboost.sklearn import XGBClassifier
from scipy.signal import medfilt
import sys, scipy, sklearn
print('Python: ' + sys.version.split('\n')[0])
print('Pandas: ' + pd.__version__)
print('Numpy: ' + np.__version__)
print('Scipy: ' + scipy.__version__)
print('Sklearn: ' + sklearn.__version__)
print('Xgboost: ' + xgb.__version__)
```
### Parameters
```
feature_names = ['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE', 'NM_M', 'RELPOS']
facies_names = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS', 'WS', 'D', 'PS', 'BS']
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00', '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
#Select classifier type
#clfType='GB' #Gradient Boosting Classifier
clfType='XBA' #XGB Classifier
# Define window length
N_neig=2
#Seed
seed = 24
np.random.seed(seed)
```
# Load data
Let's load the data
```
# Load data from file
data = pd.read_csv('../facies_vectors.csv')
# Load Test data from file
test_data = pd.read_csv('../validation_data_nofacies.csv')
test_data.insert(0,'Facies',np.ones(test_data.shape[0])*(-1))
#Create Dataset for PE prediction from both dasets
all_data=pd.concat([data,test_data])
```
#### Let's store features, labels and other data into numpy arrays.
```
# Store features and labels
X = data[feature_names].values # features
y = data['Facies'].values # labels
# Store well labels and depths
well = data['Well Name'].values
depth = data['Depth'].values
```
# Data inspection
Let us inspect the features we are working with. This step is useful to understand how to normalize them and how to devise a correct cross-validation strategy. Specifically, it is possible to observe that:
- Some features seem to be affected by a few outlier measurements.
- Only a few wells contain samples from all classes.
- PE measurements are available only for some wells.
```
# Define function for plotting feature statistics
def plot_feature_stats(X, y, feature_names, facies_colors, facies_names):
# Remove NaN
nan_idx = np.any(np.isnan(X), axis=1)
X = X[np.logical_not(nan_idx), :]
y = y[np.logical_not(nan_idx)]
# Merge features and labels into a single DataFrame
features = pd.DataFrame(X, columns=feature_names)
labels = pd.DataFrame(y, columns=['Facies'])
for f_idx, facies in enumerate(facies_names):
labels[labels[:] == f_idx] = facies
data = pd.concat((labels, features), axis=1)
# Plot features statistics
facies_color_map = {}
for ind, label in enumerate(facies_names):
facies_color_map[label] = facies_colors[ind]
sns.pairplot(data, hue='Facies', palette=facies_color_map, hue_order=list(reversed(facies_names)))
```
## Feature distribution
```
plot_feature_stats(X, y, feature_names, facies_colors, facies_names)
mpl.rcParams.update(inline_rc)
# Facies per well
for w_idx, w in enumerate(np.unique(well)):
ax = plt.subplot(3, 4, w_idx+1)
hist = np.histogram(y[well == w], bins=np.arange(len(facies_names)+1)+.5)
plt.bar(np.arange(len(hist[0])), hist[0], color=facies_colors, align='center')
ax.set_xticks(np.arange(len(hist[0])))
ax.set_xticklabels(facies_names)
ax.set_title(w)
# Features per well
for w_idx, w in enumerate(np.unique(well)):
ax = plt.subplot(3, 4, w_idx+1)
hist = np.logical_not(np.any(np.isnan(X[well == w, :]), axis=0))
plt.bar(np.arange(len(hist)), hist, color=facies_colors, align='center')
ax.set_xticks(np.arange(len(hist)))
ax.set_xticklabels(feature_names)
ax.set_yticks([0, 1])
ax.set_yticklabels(['miss', 'hit'])
ax.set_title(w)
```
## Feature imputation
Let us fill missing PE values. Currently no feature engineering is used, but this should be explored in the future.
```
reg = RandomForestRegressor(max_features='sqrt', n_estimators=50, random_state=seed)
DataImpAll = all_data[feature_names].copy()
DataImp = DataImpAll.dropna(axis = 0, inplace=False)
Ximp=DataImp.loc[:, DataImp.columns != 'PE']
Yimp=DataImp.loc[:, 'PE']
reg.fit(Ximp, Yimp)
X[np.array(data.PE.isnull()),feature_names.index('PE')] = reg.predict(data.loc[data.PE.isnull(),:][['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'NM_M', 'RELPOS']])
```
# Augment features
```
# ## Feature augmentation
# Our guess is that facies do not abruptly change from a given depth layer to the next one. Therefore, we consider features at neighboring layers to be somewhat correlated. To possibly exploit this fact, let us perform feature augmentation by:
# - Select features to augment.
# - Aggregating aug_features at neighboring depths.
# - Computing aug_features spatial gradient.
# - Computing aug_features spatial gradient of gradient.
# Feature windows concatenation function
def augment_features_window(X, N_neig, features=-1):
# Parameters
N_row = X.shape[0]
if features==-1:
N_feat = X.shape[1]
features=np.arange(0,X.shape[1])
else:
N_feat = len(features)
# Zero padding
X = np.vstack((np.zeros((N_neig, X.shape[1])), X, (np.zeros((N_neig, X.shape[1])))))
# Loop over windows
X_aug = np.zeros((N_row, N_feat*(2*N_neig)+X.shape[1]))
for r in np.arange(N_row)+N_neig:
this_row = []
for c in np.arange(-N_neig,N_neig+1):
if (c==0):
this_row = np.hstack((this_row, X[r+c,:]))
else:
this_row = np.hstack((this_row, X[r+c,features]))
X_aug[r-N_neig] = this_row
return X_aug
# Feature gradient computation function
def augment_features_gradient(X, depth, features=-1):
if features==-1:
features=np.arange(0,X.shape[1])
# Compute features gradient
d_diff = np.diff(depth).reshape((-1, 1))
d_diff[d_diff==0] = 0.001
X_diff = np.diff(X[:,features], axis=0)
X_grad = X_diff / d_diff
# Compensate for last missing value
X_grad = np.concatenate((X_grad, np.zeros((1, X_grad.shape[1]))))
return X_grad
# Feature augmentation function
def augment_features(X, well, depth, N_neig=1, features=-1):
if (features==-1):
N_Feat=X.shape[1]
else:
N_Feat=len(features)
# Augment features
X_aug = np.zeros((X.shape[0], X.shape[1] + N_Feat*(N_neig*2+2)))
for w in np.unique(well):
w_idx = np.where(well == w)[0]
X_aug_win = augment_features_window(X[w_idx, :], N_neig,features)
X_aug_grad = augment_features_gradient(X[w_idx, :], depth[w_idx],features)
X_aug_grad_grad = augment_features_gradient(X_aug_grad, depth[w_idx])
X_aug[w_idx, :] = np.concatenate((X_aug_win, X_aug_grad,X_aug_grad_grad), axis=1)
# Find padded rows
padded_rows = np.unique(np.where(X_aug[:, 0:7] == np.zeros((1, 7)))[0])
return X_aug, padded_rows
# Train and test a classifier
def train_and_test(X_tr, y_tr, X_v, well_v, clf):
# Feature normalization
scaler = preprocessing.RobustScaler(quantile_range=(25.0, 75.0)).fit(X_tr)
X_tr = scaler.transform(X_tr)
X_v = scaler.transform(X_v)
# Train classifier
clf.fit(X_tr, y_tr)
# Test classifier
y_v_hat = clf.predict(X_v)
# Clean isolated facies for each well
for w in np.unique(well_v):
y_v_hat[well_v==w] = medfilt(y_v_hat[well_v==w], kernel_size=3)
return y_v_hat
# Define which features to augment by introducing window and gradients.
augm_Features=['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE', 'RELPOS']
# Get the columns of features to be augmented
feature_indices=[feature_names.index(log) for log in augm_Features]
# Augment features
X_aug, padded_rows = augment_features(X, well, depth, N_neig=N_neig, features=feature_indices)
# Remove padded rows
data_no_pad = np.setdiff1d(np.arange(0,X_aug.shape[0]), padded_rows)
X=X[data_no_pad ,:]
depth=depth[data_no_pad]
X_aug=X_aug[data_no_pad ,:]
y=y[data_no_pad]
data=data.iloc[data_no_pad ,:]
well=well[data_no_pad]
```
## Generate training, validation and test data splits
The choice of training and validation data is paramount in order to avoid overfitting and find a solution that generalizes well on new data. For this reason, we generate a set of training-validation splits so that:
- Features from each well belong to either the training or the validation set.
- Training and validation sets contain at least one sample for each class.
# Initialize model selection methods
```
lpgo = LeavePGroupsOut(2)
# Generate splits
split_list = []
for train, val in lpgo.split(X, y, groups=data['Well Name']):
hist_tr = np.histogram(y[train], bins=np.arange(len(facies_names)+1)+.5)
hist_val = np.histogram(y[val], bins=np.arange(len(facies_names)+1)+.5)
if np.all(hist_tr[0] != 0) & np.all(hist_val[0] != 0):
split_list.append({'train':train, 'val':val})
# Print splits
for s, split in enumerate(split_list):
print('Split %d' % s)
print(' training: %s' % (data.iloc[split['train']]['Well Name'].unique()))
print(' validation: %s' % (data.iloc[split['val']]['Well Name'].unique()))
```
## Classification parameters optimization
Let us perform the following steps for each set of parameters:
- Select a data split.
- Normalize features using a robust scaler.
- Train the classifier on training data.
- Test the trained classifier on validation data.
- Repeat for all splits and average the F1 scores.
At the end of the loop, we select the classifier that maximizes the average F1 score on the validation set. Hopefully, this classifier should be able to generalize well on new data.
```
if clfType=='XB':
md_grid = [2,3]
# mcw_grid = [1]
gamma_grid = [0.2, 0.3, 0.4]
ss_grid = [0.7, 0.9, 0.5]
csb_grid = [0.6,0.8,0.9]
alpha_grid =[0.2, 0.4, 0.3]
lr_grid = [0.04, 0.06, 0.05]
ne_grid = [100,200,300]
param_grid = []
for N in md_grid:
# for M in mcw_grid:
for S in gamma_grid:
for L in ss_grid:
for K in csb_grid:
for P in alpha_grid:
for R in lr_grid:
for E in ne_grid:
param_grid.append({'maxdepth':N,
# 'minchildweight':M,
'gamma':S,
'subsample':L,
'colsamplebytree':K,
'alpha':P,
'learningrate':R,
'n_estimators':E})
if clfType=='XBA':
learning_rate_grid=[0.12] #[0.06, 0.10, 0.12]
max_depth_grid=[3] #[3, 5]
min_child_weight_grid=[6] #[6, 8, 10]
colsample_bytree_grid = [0.9] #[0.7, 0.9]
n_estimators_grid=[120] #[80, 120, 150] #[150]
param_grid = []
for max_depth in max_depth_grid:
for min_child_weight in min_child_weight_grid:
for colsample_bytree in colsample_bytree_grid:
for learning_rate in learning_rate_grid:
for n_estimators in n_estimators_grid:
param_grid.append({'maxdepth':max_depth,
'minchildweight':min_child_weight,
'colsamplebytree':colsample_bytree,
'learningrate':learning_rate,
'n_estimators':n_estimators})
if clfType=='RF':
N_grid = [50, 100, 150]
M_grid = [5, 10, 15]
S_grid = [10, 25, 50, 75]
L_grid = [2, 3, 4, 5, 10, 25]
param_grid = []
for N in N_grid:
for M in M_grid:
for S in S_grid:
for L in L_grid:
param_grid.append({'N':N, 'M':M, 'S':S, 'L':L})
if clfType=='GB':
N_grid = [80] #[80, 100, 120]
MD_grid = [5] #[3, 5]
M_grid = [10]
LR_grid = [0.12] #[0.1, 0.08, 0.12]
L_grid = [3] #[3, 5, 7]
S_grid = [25] #[20, 25, 30]
param_grid = []
for N in N_grid:
for M in MD_grid:
for M1 in M_grid:
for S in LR_grid:
for L in L_grid:
for S1 in S_grid:
param_grid.append({'N':N, 'MD':M, 'MF':M1,'LR':S,'L':L,'S1':S1})
def getClf(clfType, param):
if clfType=='RF':
clf = OneVsOneClassifier(RandomForestClassifier(n_estimators=param['N'], criterion='entropy',
max_features=param['M'], min_samples_split=param['S'], min_samples_leaf=param['L'],
class_weight='balanced', random_state=seed), n_jobs=-1)
if clfType=='XB':
clf = OneVsOneClassifier(XGBClassifier(
learning_rate = param['learningrate'],
n_estimators=param['n_estimators'],
max_depth=param['maxdepth'],
# min_child_weight=param['minchildweight'],
gamma = param['gamma'],
subsample=param['subsample'],
colsample_bytree=param['colsamplebytree'],
reg_alpha = param['alpha'],
nthread =4,
seed = seed,
) , n_jobs=4)
if clfType=='XBA':
clf = XGBClassifier(
learning_rate = param['learningrate'],
n_estimators=param['n_estimators'],
max_depth=param['maxdepth'],
min_child_weight=param['minchildweight'],
colsample_bytree=param['colsamplebytree'],
nthread =4,
seed = 17
)
if clfType=='GB':
clf=OneVsOneClassifier(GradientBoostingClassifier(
loss='exponential',
n_estimators=param['N'],
learning_rate=param['LR'],
max_depth=param['MD'],
max_features= param['MF'],
min_samples_leaf=param['L'],
min_samples_split=param['S1'],
random_state=seed,
max_leaf_nodes=None,)
, n_jobs=-1)
return clf
# For each set of parameters
score_param = []
print('features: %d' % X_aug.shape[1])
exportScores=[]
for param in param_grid:
print('features: %d' % X_aug.shape[1])
    # Use a single representative data split (index 5 of split_list)
score_split = []
split = split_list[5]
split_train_no_pad = split['train']
# Select training and validation data from current split
X_tr = X_aug[split_train_no_pad, :]
X_v = X_aug[split['val'], :]
y_tr = y[split_train_no_pad]
y_v = y[split['val']]
# Select well labels for validation data
well_v = well[split['val']]
# Train and test
y_v_hat = train_and_test(X_tr, y_tr, X_v, well_v, getClf(clfType,param))
# Score
score = f1_score(y_v, y_v_hat, average='micro')
score_split.append(score)
#print('Split: {0}, Score = {1:0.3f}'.format(split_list.index(split),score))
#print('Split: , Score = {0:0.3f}'.format(score))
# Average score for this param
score_param.append(np.mean(score_split))
print('Average F1 score = %.3f %s' % (score_param[-1], param))
exportScores.append('Average F1 score = %.3f %s' % (score_param[-1], param))
# Best set of parameters
best_idx = np.argmax(score_param)
param_best = param_grid[best_idx]
score_best = score_param[best_idx]
print('\nBest F1 score = %.3f %s' % (score_best, param_best))
# Store F1 scores for multiple param grids
if len(exportScores)>1:
exportScoresFile=open('results_{0}_{1}_sub01b.txt'.format(clfType,N_neig),'wb')
exportScoresFile.write('features: %d' % X_aug.shape[1])
for item in exportScores:
exportScoresFile.write("%s\n" % item)
exportScoresFile.write('\nBest F1 score = %.3f %s' % (score_best, param_best))
exportScoresFile.close()
# ## Predict labels on test data
# Let us now apply the selected classification technique to test data.
# Training data
X_tr = X_aug
y_tr = y
# Prepare test data
well_ts = test_data['Well Name'].values
depth_ts = test_data['Depth'].values
X_ts = test_data[feature_names].values
# Augment Test data features
X_ts, padded_rows = augment_features(X_ts, well_ts,depth_ts,N_neig=N_neig, features=feature_indices)
# Predict test labels
y_ts_hat = train_and_test(X_tr, y_tr, X_ts, well_ts, getClf(clfType,param_best))
# Save predicted labels
test_data['Facies'] = y_ts_hat
test_data.to_csv('esa_predicted_facies_{0}_{1}_sub01c.csv'.format(clfType,N_neig))
# Plot predicted labels
make_facies_log_plot(
test_data[test_data['Well Name'] == 'STUART'],
facies_colors=facies_colors)
make_facies_log_plot(
test_data[test_data['Well Name'] == 'CRAWFORD'],
facies_colors=facies_colors)
mpl.rcParams.update(inline_rc)
```
<a href="https://colab.research.google.com/github/ashishpatel26/100-Days-Of-ML-Code/blob/master/Tensorflow_Basic_Chapter_1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Basic Perceptron
```
import tensorflow as tf
print(tf.__version__)
W = tf.Variable(tf.ones(shape=(2,2)), name='w')
b = tf.Variable(tf.zeros(shape=(2)), name='b')
@tf.function
def model(x):
return W * x + b
out_a = model([1, 0])
print(out_a)
```
## Tensorflow 2.0 Code First Example
```
import tensorflow as tf
from tensorflow import keras
NB_CLASSES = 10
RESHAPED = 784
model = tf.keras.Sequential()
model.add(keras.layers.Dense(NB_CLASSES, input_shape = (RESHAPED,), kernel_initializer='zeros', name='Dense_layer', activation='softmax'))
model.summary()
```
## MNIST Example with tf2.2.0
```
import tensorflow as tf
from tensorflow import keras
import numpy as np
# Network and Training Parameter
EPOCH = 200
BATCH_SIZE = 128
VERBOSE = 1
NB_CLASSES = 10 # number of outputs = number of digits
N_HIDDEN = 128
VALIDATION_SPLIT = 0.2 # how much TRAIN is reserved for VALIDATION
### Load the MNIST dataset, which contains 60,000 training and 10,000 test examples.
mnist = keras.datasets.mnist
(X_train,Y_train), (X_test, Y_test) = mnist.load_data()
# X_train is 60000 rows of 28 x 28 values ==> reshape it into 60000 x 784
RESHAPED = 784
X_train = X_train.reshape(60000, RESHAPED)
X_test = X_test.reshape(10000, RESHAPED)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train = X_train/255
X_test = X_test/255
print(X_train.shape[0],'Train Samples')
print(X_test.shape[0],'Test Samples')
Y_train = tf.keras.utils.to_categorical(Y_train, NB_CLASSES)
Y_test = tf.keras.utils.to_categorical(Y_test, NB_CLASSES)
# Build a model
model = tf.keras.models.Sequential()
model.add(keras.layers.Dense(NB_CLASSES,
input_shape = (RESHAPED, ),
name = 'dense_layer',
activation = 'softmax'))
# compile the model and apply backpropagation
model.compile(optimizer='SGD',
loss='categorical_crossentropy',
metrics = ['accuracy'])
#model summary
model.summary()
model.fit(X_train,Y_train,
batch_size = BATCH_SIZE,
epochs = EPOCH,
verbose=VERBOSE, validation_split = VALIDATION_SPLIT)
# evaluate the model
test_loss, test_acc = model.evaluate(X_test, Y_test)
print('Test Accuracy:', test_acc)
```
<img src="../../../images/qiskit_header.png" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" align="middle">
## _*Relaxation and Decoherence*_
* **Last Updated:** Feb 25, 2019
* **Requires:** qiskit-terra 0.8, qiskit-ignis 0.1.1, qiskit-aer 0.2
This notebook gives examples for how to use the ``ignis.characterization.coherence`` module for measuring $T_1$ and $T_2$.
```
import numpy as np
import matplotlib.pyplot as plt
import qiskit
from qiskit.providers.aer.noise.errors.standard_errors import thermal_relaxation_error
from qiskit.providers.aer.noise import NoiseModel
from qiskit.ignis.characterization.coherence import T1Fitter, T2StarFitter, T2Fitter
from qiskit.ignis.characterization.coherence import t1_circuits, t2_circuits, t2star_circuits
```
# Generation of coherence circuits
This shows how to generate the circuits. The list of qubits specifies for which qubits to generate characterization circuits; these circuits will run in parallel. The discrete unit of time is the identity gate (``iden``) and so the user must specify the time of each identity gate if they would like the characterization parameters returned in units of time. This should be available from the backend.
```
num_of_gates = (np.linspace(10, 300, 50)).astype(int)
gate_time = 0.1
# Note that it is possible to measure several qubits in parallel
qubits = [0, 2]
t1_circs, t1_xdata = t1_circuits(num_of_gates, gate_time, qubits)
t2star_circs, t2star_xdata, osc_freq = t2star_circuits(num_of_gates, gate_time, qubits, nosc=5)
t2echo_circs, t2echo_xdata = t2_circuits(np.floor(num_of_gates/2).astype(int),
gate_time, qubits)
t2cpmg_circs, t2cpmg_xdata = t2_circuits(np.floor(num_of_gates/6).astype(int),
gate_time, qubits,
n_echos=5, phase_alt_echo=True)
```
# Backend execution
```
backend = qiskit.Aer.get_backend('qasm_simulator')
shots = 400
# Let the simulator simulate the following times for qubits 0 and 2:
t_q0 = 25.0
t_q2 = 15.0
# Define T1 and T2 noise:
t1_noise_model = NoiseModel()
t1_noise_model.add_quantum_error(
thermal_relaxation_error(t_q0, 2*t_q0, gate_time),
'id', [0])
t1_noise_model.add_quantum_error(
thermal_relaxation_error(t_q2, 2*t_q2, gate_time),
'id', [2])
t2_noise_model = NoiseModel()
t2_noise_model.add_quantum_error(
thermal_relaxation_error(np.inf, t_q0, gate_time, 0.5),
'id', [0])
t2_noise_model.add_quantum_error(
thermal_relaxation_error(np.inf, t_q2, gate_time, 0.5),
'id', [2])
# Run the simulator
t1_backend_result = qiskit.execute(t1_circs, backend, shots=shots,
noise_model=t1_noise_model, optimization_level=0).result()
t2star_backend_result = qiskit.execute(t2star_circs, backend, shots=shots,
noise_model=t2_noise_model, optimization_level=0).result()
t2echo_backend_result = qiskit.execute(t2echo_circs, backend, shots=shots,
noise_model=t2_noise_model, optimization_level=0).result()
# It is possible to split the circuits into multiple jobs and then give the results to the fitter as a list:
t2cpmg_backend_result1 = qiskit.execute(t2cpmg_circs[0:5], backend,
shots=shots, noise_model=t2_noise_model,
optimization_level=0).result()
t2cpmg_backend_result2 = qiskit.execute(t2cpmg_circs[5:], backend,
shots=shots, noise_model=t2_noise_model,
optimization_level=0).result()
```
# Analysis of results
```
# Fitting T1
%matplotlib inline
plt.figure(figsize=(15, 6))
t1_fit = T1Fitter(t1_backend_result, t1_xdata, qubits,
fit_p0=[1, t_q0, 0],
fit_bounds=([0, 0, -1], [2, 40, 1]))
print(t1_fit.time())
print(t1_fit.time_err())
print(t1_fit.params)
print(t1_fit.params_err)
for i in range(2):
ax = plt.subplot(1, 2, i+1)
t1_fit.plot(i, ax=ax)
plt.show()
```
Execute the backend again to get more statistics, and add the results to the previous ones:
```
t1_backend_result_new = qiskit.execute(t1_circs, backend,
shots=shots, noise_model=t1_noise_model,
optimization_level=0).result()
t1_fit.add_data(t1_backend_result_new)
plt.figure(figsize=(15, 6))
for i in range(2):
ax = plt.subplot(1, 2, i+1)
t1_fit.plot(i, ax=ax)
plt.show()
# Fitting T2*
%matplotlib inline
t2star_fit = T2StarFitter(t2star_backend_result, t2star_xdata, qubits,
fit_p0=[0.5, t_q0, osc_freq, 0, 0.5],
fit_bounds=([-0.5, 0, 0, -np.pi, -0.5],
[1.5, 40, 2*osc_freq, np.pi, 1.5]))
plt.figure(figsize=(15, 6))
for i in range(2):
ax = plt.subplot(1, 2, i+1)
t2star_fit.plot(i, ax=ax)
plt.show()
# Fitting T2 single echo
%matplotlib inline
t2echo_fit = T2Fitter(t2echo_backend_result, t2echo_xdata, qubits,
fit_p0=[0.5, t_q0, 0.5],
fit_bounds=([-0.5, 0, -0.5],
[1.5, 40, 1.5]))
print(t2echo_fit.params)
plt.figure(figsize=(15, 6))
for i in range(2):
ax = plt.subplot(1, 2, i+1)
t2echo_fit.plot(i, ax=ax)
plt.show()
# Fitting T2 CPMG
%matplotlib inline
t2cpmg_fit = T2Fitter([t2cpmg_backend_result1, t2cpmg_backend_result2],
t2cpmg_xdata, qubits,
fit_p0=[0.5, t_q0, 0.5],
fit_bounds=([-0.5, 0, -0.5],
[1.5, 40, 1.5]))
plt.figure(figsize=(15, 6))
for i in range(2):
ax = plt.subplot(1, 2, i+1)
t2cpmg_fit.plot(i, ax=ax)
plt.show()
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
```
## Universal Style Transfer
The models above are trained to work for a single style. With these methods, creating a style transfer model for a new style requires retraining it on a wide variety of content images.
Recent work by Yijun Li et al. shows that it is possible to create a model that generalizes to unseen style images, while maintaining the quality of output images.
Their method works by treating style transfer as an image reconstruction task. They use the output of a VGG19 ReLU layer to encode features of various content images and train a decoder to reconstruct these images. Then, with these two networks fixed, they feed the content and the style image into the encoder and use a whitening and coloring transform so that the covariance matrix of the content features matches the covariance matrix of the style features.
This process can then be expanded to the remaining ReLU layers of VGG19 to create a style transfer pipeline that can apply to all spatial scales.
Since only content images were used to train the encoder and decoder, additional training is not needed when generalizing this to new styles.
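For intuition, here is a hedged TensorFlow/Keras sketch of what training a decoder to invert fixed VGG19 features might look like. This is not the code used to train the decoders loaded later in this notebook: the layer choice (`block2_conv1`), the decoder shape, and the pixel-only reconstruction loss are simplifying assumptions (the paper also uses a feature-space loss), and the random tensors stand in for real content images.

```
import tensorflow as tf

# Frozen VGG19 encoder up to an early ReLU layer (downloads ImageNet weights on first use).
vgg = tf.keras.applications.VGG19(include_top=False, weights='imagenet')
encoder = tf.keras.Model(vgg.input, vgg.get_layer('block2_conv1').output)
encoder.trainable = False

# Small trainable decoder that upsamples the features back to an RGB image.
decoder = tf.keras.Sequential([
    tf.keras.layers.Conv2D(64, 3, padding='same', activation='relu'),
    tf.keras.layers.UpSampling2D(),
    tf.keras.layers.Conv2D(64, 3, padding='same', activation='relu'),
    tf.keras.layers.Conv2D(3, 3, padding='same'),
])

images = tf.random.uniform((2, 224, 224, 3))  # stand-in content images in [0, 1]
optimizer = tf.keras.optimizers.Adam(1e-4)
with tf.GradientTape() as tape:
    feats = encoder(tf.keras.applications.vgg19.preprocess_input(images * 255.0))
    recon = decoder(feats)
    loss = tf.reduce_mean(tf.square(recon - images))  # pixel reconstruction loss only
grads = tape.gradient(loss, decoder.trainable_variables)
optimizer.apply_gradients(zip(grads, decoder.trainable_variables))
print(float(loss))
```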
<img src="images/universal-style-transfer.png" style="width: 600px;"/>
(Yijun Li et al., Universal Style Transfer)
<img src="images/doge_the_scream.jpg" style="width: 300px;"/>
<img src="images/doge_mosaic.jpg" style="width: 300px;"/>
The results are pretty impressive, but there are some patches of blurriness, most likely as a result of the transforms.
### Whitening Transform
The whitening transform removes the style from the content image, keeping the global content structure.
The features of the content image, $f_c$, are transformed to obtain $\hat{f}_c$, such that the feature maps
are uncorrelated ($\hat{f}_c \hat{f}_c^T = I$),
$$
\hat{f}_c = E_c D_c^{- \frac{1}{2}} E_c^T f_c
$$
where $D_c$ is a diagonal matrix with the eigenvalues of the covariance matrix $f_c f_c^T \in R^{C \times C}$,
and $E_c$ is the corresponding orthogonal matrix of eigenvectors, satisfying $f_c f_c^T = E_c D_c E_c^T$.
<img src="images/whitening.png" style="width: 300px;"/>
(Yijun Li et al., Universal Style Transfer)
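As a sanity check on the algebra, here is a small standalone NumPy sketch of the whitening step on random stand-in features (not part of the notebook's pipeline): after the transform, the feature maps have an approximately identity Gram matrix.

```
import numpy as np

rng = np.random.default_rng(0)
C, N = 8, 1000                               # C channels, N spatial positions (hypothetical sizes)
f_c = rng.normal(size=(C, N))
f_c = f_c - f_c.mean(axis=1, keepdims=True)  # center each channel

gram = f_c @ f_c.T                           # f_c f_c^T
eigvals, E_c = np.linalg.eigh(gram)          # gram = E_c diag(eigvals) E_c^T
D_c_inv_sqrt = np.diag(1.0 / np.sqrt(eigvals + 1e-8))
f_c_hat = E_c @ D_c_inv_sqrt @ E_c.T @ f_c   # whitening transform

print(np.allclose(f_c_hat @ f_c_hat.T, np.eye(C), atol=1e-6))  # True: whitened features are uncorrelated
```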
### Coloring Transform
The coloring transform adds the style from the style image onto the content image.
The whitening transformed features of the content image, $\hat{f}_c$, are transformed to obtain $\hat{f}_{cs}$, such that the feature maps have that desired correlations ($\hat{f}_{cs} \hat{f}_{cs}^T = f_s f_s^T$),
$$
\hat{f}_{cs} = E_s D_s^{\frac{1}{2}} E_s^T \hat{f}_c
$$
where $D_s$ is a diagonal matrix with the eigenvalues of the covariance matrix $f_s f_s^T \in R^{C \times C}$,
and $E_s$ is the corresponding orthogonal matrix of eigenvectors, satisfying $f_s f_s^T = E_s D_s E_s^T$.
In practice, we also take a weighted sum of the colored and original activations such that:
$$ f_{blend} = \alpha\hat{f}_{cs} + (1-\alpha)\hat{f}_c $$
Before each transform step, the mean of the corresponding feature maps is subtracted; the mean of the style features is added back to the final transformed features.
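Putting the two transforms and the blend together, here is a compact NumPy sketch on random stand-in features (with an assumed blend weight, and means already removed for brevity); with `alpha=1.0` the Gram matrix of the result matches the style exactly:

```
import numpy as np

def whiten_color(f_c, f_s, alpha=0.6, eps=1e-8):
    # Whitening: remove the content correlations
    dc, Ec = np.linalg.eigh(f_c @ f_c.T)
    f_c_hat = Ec @ np.diag(1.0 / np.sqrt(dc + eps)) @ Ec.T @ f_c
    # Coloring: impose the style correlations
    ds, Es = np.linalg.eigh(f_s @ f_s.T)
    f_cs_hat = Es @ np.diag(np.sqrt(ds + eps)) @ Es.T @ f_c_hat
    # Blend with the original content features
    return alpha * f_cs_hat + (1.0 - alpha) * f_c

rng = np.random.default_rng(1)
C, N = 8, 1000
f_c = rng.normal(size=(C, N)); f_c -= f_c.mean(axis=1, keepdims=True)
f_s = 2.0 * rng.normal(size=(C, N)); f_s -= f_s.mean(axis=1, keepdims=True)

f_blend = whiten_color(f_c, f_s, alpha=1.0)           # alpha=1: pure whiten-color, no blending
print(np.allclose(f_blend @ f_blend.T, f_s @ f_s.T))  # True: Gram matrix now matches the style
```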
```
# workaround for multiple OpenMP on Mac
import os
os.environ['KMP_DUPLICATE_LIB_OK']='True'
import tensorflow as tf
from pathlib import PurePath
import IPython.display as display
from IPython.display import HTML
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (12,12)
mpl.rcParams['axes.grid'] = False
%matplotlib inline
import numpy as np
import PIL.Image
import time
import functools
print('here')
def tensor_to_image(tensor):
tensor = tensor*255
tensor = np.array(tensor, dtype=np.uint8)
if np.ndim(tensor)>3:
assert tensor.shape[0] == 1
tensor = tensor[0]
return PIL.Image.fromarray(tensor)
def load_img(path_to_img):
max_dim = 512
img = tf.io.read_file(path_to_img)
img = tf.image.decode_image(img, channels=3)
img = tf.image.convert_image_dtype(img, tf.float32)
shape = tf.cast(tf.shape(img)[:-1], tf.float32)
long_dim = max(shape)
scale = max_dim / long_dim
new_shape = tf.cast(shape * scale, tf.int32)
img = tf.image.resize(img, new_shape)
img = img[tf.newaxis, :]
return img
def imshow(image, title=None):
if len(image.shape) > 3:
image = tf.squeeze(image, axis=0)
plt.imshow(image)
if title==None:
title = str(image.shape)
else:
title += ' '+str(image.shape)
plt.title(title)
```
# Using a pre-trained AutoEncoder
For this assignment, I will be using an autoencoder created with Yihao Wang, a PhD student in the UbiComp lab here at SMU. The original code used to create this encoder is available for SMU students.
The model that was trained can be downloaded from:
https://www.dropbox.com/sh/2djb2c0ohxtvy2t/AAAxA2dnoFBcHGqfP0zLx-Oua?dl=0
```
ModelBlock2 = tf.keras.models.load_model('decoder_2.h5', compile = False)
ModelBlock2.summary()
class VGG19AutoEncoder(tf.keras.Model):
def __init__(self, files_path):
super(VGG19AutoEncoder, self).__init__()
#Load Full Model with every trained decoder
#Get Each SubModel
# Each model has an encoder, a decoder, and an extra output convolution
# that converts the upsampled activations into output images
# DO NOT load models four and five because they are not great auto encoders
# and therefore will cause weird artifacts when used for style transfer
ModelBlock3 = tf.keras.models.load_model(str(PurePath(files_path, 'Block3_Model')), compile = False)
self.E3 = ModelBlock3.layers[0] # VGG encoder
self.D3 = ModelBlock3.layers[1] # Trained decoder from VGG
self.O3 = ModelBlock3.layers[2] # Conv layer to get to three channels, RGB image
ModelBlock2 = tf.keras.models.load_model('decoder_2.h5', compile = False)
self.E2 = ModelBlock2.layers[0] # VGG encoder
self.D2 = ModelBlock2.layers[1] # Trained decoder from VGG
self.O2 = ModelBlock2.layers[2] # Conv layer to get to three channels, RGB image
        # no special decoder for this one because the VGG first layer has
# no downsampling. So the decoder is just a convolution
ModelBlock1 = tf.keras.models.load_model(str(PurePath(files_path, 'Block1_Model')), compile = False)
self.E1 = ModelBlock1.layers[0] # VGG encoder, one layer
self.O1 = ModelBlock1.layers[1] # Conv layer to get to three channels, RGB image
def call(self, image, alphas=None, training = False):
# Input should be dictionary with 'style' and 'content' keys
# {'style':style_image, 'content':content_image}
# value in each should be a 4D Tensor,: (batch, i,j, channel)
style_image = image['style']
content_image = image['content']
output_dict = dict()
# this will be the output, where each value is a styled
# version of the image at layer 1, 2, and 3. So each key in the
# dictionary corresponds to layer1, layer2, and layer3.
# we also give back the reconstructed image from the auto encoder
# so each value in the dict is a tuple (styled, reconstructed)
x = content_image
# choose covariance function
# covariance is more stable, but signal will work for very small images
wct = self.wct_from_cov
if alphas==None:
alphas = {'layer3':0.6,
'layer2':0.6,
'layer1':0.6}
# ------Layer 3----------
# apply whiten/color on layer 3 from the original image
# get activations
a_c = self.E3(tf.constant(x))
a_s = self.E3(tf.constant(style_image))
# swap grammian of activations, blended with original
x = wct(a_c.numpy(), a_s.numpy(), alpha=alphas['layer3'])
# decode the new style
x = self.O3(self.D3(x))
x = self.enhance_contrast(x)
# get reconstruction
reconst3 = self.O3(self.D3(self.E3(tf.constant(content_image))))
# save off the styled and reconstructed images for display
blended3 = tf.clip_by_value(tf.squeeze(x), 0, 1)
reconst3 = tf.clip_by_value(tf.squeeze(reconst3), 0, 1)
output_dict['layer3'] = (blended3, reconst3)
# ------Layer 2----------
# apply whiten/color on layer 2 from the already blended image
# get activations
a_c = self.E2(tf.constant(x))
a_s = self.E2(tf.constant(style_image))
# swap grammian of activations, blended with original
x = wct(a_c.numpy(),a_s.numpy(), alpha=alphas['layer2'])
# decode the new style
x = self.O2(self.D2(x))
x = self.enhance_contrast(x,1.3)
# get reconstruction
reconst2 = self.O2(self.D2(self.E2(tf.constant(content_image))))
# save off the styled and reconstructed images for display
blended2 = tf.clip_by_value(tf.squeeze(x), 0, 1)
reconst2 = tf.clip_by_value(tf.squeeze(reconst2), 0, 1)
output_dict['layer2'] = (blended2, reconst2)
# ------Layer 1----------
# apply whiten/color on layer 1 from the already blended image
# get activations
a_c = self.E1(tf.constant(x))
a_s = self.E1(tf.constant(style_image))
# swap grammian of activations, blended with original
x = wct(a_c.numpy(),a_s.numpy(), alpha=alphas['layer1'])
# decode the new style
x = self.O1(x)
x = self.enhance_contrast(x,1.2)
# get reconstruction
reconst1 = self.O1(self.E1(tf.constant(content_image)))
# save off the styled and reconstructed images for display
blended1 = tf.clip_by_value(tf.squeeze(x), 0, 1)
reconst1 = tf.clip_by_value(tf.squeeze(reconst1), 0, 1)
output_dict['layer1'] = (blended1, reconst1)
return output_dict
@staticmethod
def enhance_contrast(image, factor=1.25):
return tf.image.adjust_contrast(image,factor)
@staticmethod
def wct_from_cov(content, style, alpha=0.6, eps=1e-5):
'''
https://github.com/eridgd/WCT-TF/blob/master/ops.py
Perform Whiten-Color Transform on feature maps using numpy
See p.4 of the Universal Style Transfer paper for equations:
https://arxiv.org/pdf/1705.08086.pdf
'''
# 1xHxWxC -> CxHxW
content_t = np.transpose(np.squeeze(content), (2, 0, 1))
style_t = np.transpose(np.squeeze(style), (2, 0, 1))
# CxHxW -> CxH*W
content_flat = content_t.reshape(-1, content_t.shape[1]*content_t.shape[2])
style_flat = style_t.reshape(-1, style_t.shape[1]*style_t.shape[2])
        # apply a threshold to keep only the largest eigenvalues
eigen_val_thresh = 1e-5
# ===Whitening transform===
# 1. take mean of each channel
mc = content_flat.mean(axis=1, keepdims=True)
fc = content_flat - mc
# 2. get covariance of content, take SVD
cov_c = np.dot(fc, fc.T) / (content_t.shape[1]*content_t.shape[2] - 1)
Uc, Sc, _ = np.linalg.svd(cov_c)
# 3. truncate the SVD to only the largest eigen values
k_c = (Sc > eigen_val_thresh).sum()
Dc = np.diag((Sc[:k_c]+eps)**-0.5)
Uc = Uc[:,:k_c]
# 4. Now make a whitened content image
fc_white = (Uc @ Dc @ Uc.T) @ fc
# ===Coloring transform===
# 1. take mean of each channel
ms = style_flat.mean(axis=1, keepdims=True)
fs = style_flat - ms
# 2. get covariance of style, take SVD
cov_s = np.dot(fs, fs.T) / (style_t.shape[1]*style_t.shape[2] - 1)
Us, Ss, _ = np.linalg.svd(cov_s)
# 3. truncate the SVD to only the largest eigen values
k_s = (Ss > eigen_val_thresh).sum()
Ds = np.sqrt(np.diag(Ss[:k_s]+eps))
Us = Us[:,:k_s]
# 4. Now make a colored image that mixes the Grammian of the style
# with the whitened content image
fcs_hat = (Us @ Ds @ Us.T) @ fc_white
fcs_hat = fcs_hat + ms # add style mean back to each channel
# Blend transform features with original features
blended = alpha*fcs_hat + (1 - alpha)*(content_flat)
# CxH*W -> CxHxW
blended = blended.reshape(content_t.shape)
# CxHxW -> 1xHxWxC
blended = np.expand_dims(np.transpose(blended, (1,2,0)), 0)
return np.float32(blended)
@staticmethod
def wct_from_signal(content, style, alpha=0.6 ):
# This uses a more computational SVD decomposition to get the Grammian
# to match. However, the numerical precision makes this totally fail
# if the activations are too large.
# This code is only for reference based on our discussion of WCT
# 1xHxWxC -> CxHxW
content_t = np.transpose(np.squeeze(content), (2, 0, 1))
style_t = np.transpose(np.squeeze(style), (2, 0, 1))
# CxHxW -> Cx(H*W)
content_flat = content_t.reshape(-1, content_t.shape[1]*content_t.shape[2])
style_flat = style_t.reshape(-1, style_t.shape[1]*style_t.shape[2])
singular_val_thresh = 1e-3
#-------------------------------------------
# Whitening transform and Coloring transform
# 1. SVD of content signals
mc = content_flat.mean()
fc = content_flat - mc
Uc, Sc, Vc = np.linalg.svd(fc, full_matrices=False)
k_c = (Sc > singular_val_thresh).sum()
# 2. SVD of style signals
ms = style_flat.mean()
fs = style_flat - ms
Us, Ss, Vs = np.linalg.svd(fs, full_matrices=False)
k_s = (Ss > singular_val_thresh).sum()
k = min(k_s,k_c)
# Blend transform features with original features
fcs = (Us[:,:k] @ np.diag(Ss[:k]) @ Vc[:k,:]) + mc
blended = alpha*fcs + (1 - alpha)*(content_flat)
# CxH*W -> CxHxW
blended = blended.reshape(content_t.shape)
# CxHxW -> 1xHxWxC
blended = np.expand_dims(np.transpose(blended, (1,2,0)), 0)
return np.float32(blended)
%%time
AE = VGG19AutoEncoder('../VGGDecoderWeights/')
%%time
from skimage.transform import resize
content_path = 'images/dallas_hall.jpg'
style_path = 'images/mosaic_style.png'
content_image = load_img(content_path)
style_image = load_img(style_path)
plt.subplot(1, 2, 1)
imshow(content_image,'Content')
plt.subplot(1, 2, 2)
imshow(style_image,'Style')
tmp = {'style':style_image,
'content':content_image}
alphas = {'layer3':0.8, 'layer2':0.6, 'layer1':0.6}
decoded_images = AE(tmp, alphas=alphas)
imshow(style_image,'Style')
for layer in decoded_images.keys():
plt.figure(figsize=(10,10))
plt.subplot(1,2,1)
imshow(decoded_images[layer][0],'Styled')
plt.subplot(1,2,2)
imshow(decoded_images[layer][1],'Reconstructed')
```
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges).
# Solution Notebook
## Problem: Given two strings, find the longest common substring.
* [Constraints](#Constraints)
* [Test Cases](#Test-Cases)
* [Algorithm](#Algorithm)
* [Code](#Code)
* [Unit Test](#Unit-Test)
## Constraints
* Can we assume the inputs are valid?
* No
* Can we assume the strings are ASCII?
* Yes
* Is this case sensitive?
* Yes
* Is a substring a contiguous block of chars?
* Yes
* Do we expect a string as a result?
* Yes
* Can we assume this fits memory?
* Yes
## Test Cases
* str0 or str1 is None -> Exception
* str0 or str1 equals 0 -> ''
* General case
str0 = 'ABCDEFGHIJ'
str1 = 'FOOBCDBCDE'
result: 'BCDE'
## Algorithm
We'll use bottom up dynamic programming to build a table.
<pre>
The rows (i) represent str1.
The columns (j) represent str0.

                          str0
    -------------------------------------------------
    |   |   | A | B | C | D | E | F | G | H | I | J |
    -------------------------------------------------
    |   | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
    | F | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 |
    | O | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 |
  s | O | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 |
  t | B | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
  r | C | 0 | 0 | 1 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
  1 | D | 0 | 0 | 1 | 2 | 3 | 3 | 3 | 3 | 3 | 3 | 3 |
    | B | 0 | 0 | 1 | 2 | 3 | 3 | 3 | 3 | 3 | 3 | 3 |
    | C | 0 | 0 | 1 | 2 | 3 | 3 | 3 | 3 | 3 | 3 | 3 |
    | D | 0 | 0 | 1 | 2 | 3 | 3 | 3 | 3 | 3 | 3 | 3 |
    | E | 0 | 0 | 1 | 2 | 3 | 4 | 4 | 4 | 4 | 4 | 4 |
    -------------------------------------------------

if str0[j] != str1[i]:
    T[i][j] = max(
        T[i][j-1],
        T[i-1][j])
else:
    T[i][j] = T[i-1][j-1] + 1
</pre>
Complexity:
* Time: O(m * n), where m is the length of str0 and n is the length of str1
* Space: O(m * n), where m is the length of str0 and n is the length of str1
## Code
```
class StringCompare(object):
def longest_common_substr(self, str0, str1):
if str0 is None or str1 is None:
raise TypeError('str input cannot be None')
# Add one to number of rows and cols for the dp table's
# first row of 0's and first col of 0's
        num_rows = len(str1) + 1
        num_cols = len(str0) + 1
T = [[None] * num_cols for _ in range(num_rows)]
for i in range(num_rows):
for j in range(num_cols):
if i == 0 or j == 0:
T[i][j] = 0
elif str0[j-1] != str1[i-1]:
T[i][j] = max(T[i][j-1],
T[i-1][j])
else:
T[i][j] = T[i-1][j-1] + 1
results = ''
i = num_rows - 1
j = num_cols - 1
# Walk backwards to determine the substring
while T[i][j]:
if T[i][j] == T[i][j-1]:
j -= 1
elif T[i][j] == T[i-1][j]:
i -= 1
elif T[i][j] == T[i-1][j-1] + 1:
results += str1[i-1]
i -= 1
j -= 1
else:
raise Exception('Error constructing table')
# Walking backwards results in a string in reverse order
return results[::-1]
```
## Unit Test
```
%%writefile test_longest_common_substr.py
import unittest
class TestLongestCommonSubstr(unittest.TestCase):
def test_longest_common_substr(self):
str_comp = StringCompare()
self.assertRaises(TypeError, str_comp.longest_common_substr, None, None)
self.assertEqual(str_comp.longest_common_substr('', ''), '')
str0 = 'ABCDEFGHIJ'
str1 = 'FOOBCDBCDE'
expected = 'BCDE'
self.assertEqual(str_comp.longest_common_substr(str0, str1), expected)
print('Success: test_longest_common_substr')
def main():
test = TestLongestCommonSubstr()
test.test_longest_common_substr()
if __name__ == '__main__':
main()
%run -i test_longest_common_substr.py
```
# Installing Cantera
For this notebook you will need [Cantera](http://www.cantera.org/), an open source suite of object-oriented software tools for problems involving chemical kinetics, thermodynamics, and/or transport processes.
Fortunately a helpful chap named Bryan Weber has made Anaconda packages, so to install you can simply type
```
conda install -c bryanwweber cantera
```
at your terminal (if you can remember back to when you installed Anaconda!).
If you are on Windows you will probably also need to install the Visual C++ Redistributable which you can get [from Microsoft here](https://www.microsoft.com/en-us/download/details.aspx?id=48145).
There are other, more difficult, ways to install it in [the instructions](http://www.cantera.org/docs/sphinx/html/install.html) if you can't get the Anaconda packages to work. It is also already on the COE computer lab 274 Snell (though there you will have to `pip install jupyter` to get this notebook working).
```
# First, import cantera, with the nickname `ct` to save us some typing later.
import cantera as ct
# Then the usual suspects:
import numpy as np
%matplotlib inline
from matplotlib import pyplot as plt
```
# Heptane combustion
Download the reduced n-heptane model from LLNL https://combustion.llnl.gov/archived-mechanisms/alkanes/heptane-reduced-mechanism. Save the files alongside this python notebook. These files are in "CHEMKIN" format. First, we have to convert them into a format that is usable by Cantera.
This may take a while and issue some warnings, but then end by saying `Validating mechanism...PASSED`:
```
from cantera import ck2cti
ck2cti.main(['--input=heptanesymp159_mec.txt',
'--thermo=heptanesymp_therm.txt',
'--permissive',
'--output=heptanesymp159.cti'])
```
Clearly, with 160 species and 1540 reactions, this mechanism is more detailed than any we have considered before!
Now, let's create a 'Solution' phase in Cantera called `gas` from the Cantera mechanism file we just created.
```
gas = ct.Solution('heptanesymp159.cti')
```
Let's examine some of the reactions and species in the mechanism.
This will return the first 10 reactions:
```
gas.reaction_equations(np.arange(10))
```
And this will return a list of the chemical species names, joined by spaces:
```
print(" ".join(gas.species_names))
```
Knowing what all those species names mean is a [formidable challenge](http://www.northeastern.edu/comocheng/2014/04/nsf-grant-to-identify-discrepancies/) but we are [making headway](http://www.northeastern.edu/comocheng/2015/05/uscombustionmeeting/) (and more help is welcome).
For now, lets loop through all the species looking for ones with 7 carbons and 16 hydrogen atoms, which should be all the isomers of heptane.
```
for species in gas.species():
if species.composition == {'C':7, 'H':16}:
print(species.name)
```
There is only one!
Based on the name beginning with 'n' let's assume it represents normal-heptane (all 7 carbons in a single chain with no branching), which is the fuel that we want to simulate. Now we need to find the index number for this species.
```
i_heptane = gas.species_names.index('nc7h16')
# alternatively, this shortcut:
i_heptane = gas.species_index('nc7h16')
print("heptane is species index {0}".format(i_heptane))
```
To specify the state of a system we must supply two intensive variables (temperature, pressure, density, specific entropy, specific enthalpy, specific volume) and the composition (mass or mole fractions). We will set the temperature, pressure, and mole fractions. In cantera, mole fractions are `X` and mass fractions are `Y`. We can then print some properties of our gas system by typing `gas()`.
```
gas.TPX = 1000, 10e5, 'nc7h16:1.0'
gas()
```
To find equilibrium you must specify which two intensive variables to hold constant. We'll find the equilibrium at constant temperature and pressure, then print the properties again.
```
gas.equilibrate('TP')
gas()
```
You will recall from Thermodynamics II that a system going to equilibrium at constant T and P should minimize the specific Gibbs free energy of the system. Sure enough, it has gone down (compare the "Gibbs function" in the "1 kg" columns above). To check that number represents what we expect (this will be returned in Cantera's default SI units, a combination of K, m<sup>3</sup>, Pa, J, kg, kmol; in this case J/kg):
```
print(gas.h - gas.T * gas.s)
print(gas.g)
```
Now let's find the equilibrium composition at 1 bar pressure and a range of temperatures between 100 and 2000 K.
```
temperatures = np.arange(100,2000,20)
# make a big array to store the results in
equilibrium_mass_fractions = np.zeros((len(temperatures), gas.n_species))
for i, T in enumerate(temperatures):
gas.TP = T, 1e5
gas.equilibrate('TP')
print(T,end=" ")
equilibrium_mass_fractions[i,:] = gas.Y
```
Now plot the equilibrium mass fractions as a function of temperature. With 160 lines, let's forgo the legend and instead label the biggest peaks directly.
```
plt.plot(temperatures,equilibrium_mass_fractions)
plt.xlabel("Temperature (K)")
plt.ylabel("Equilibrium mole fraction")
for i, name in enumerate(gas.species_names):
Y = equilibrium_mass_fractions[:,i]
if max(Y)> 0.08:
peakT = temperatures[Y.argmax()]
peakY = max(Y)
plt.text(peakT,peakY, name)
plt.show()
```
## Question (a)
What do you notice about the species that peaks at 100K, and the ones that peak at 2000K? Can you explain or justify this?
To see some of the complexity hidden at low concentrations, let's plot the y axis on a logarithmic scale:
```
plt.semilogy(temperatures,equilibrium_mass_fractions)
plt.ylim(1e-30,1)
plt.xlabel("Temperature (K)")
plt.ylabel("Equilibrium mole fraction")
plt.show()
```
If you think about how many reactions are equilibrated, it was remarkably quick!
Now we'll add some air, which is mostly nitrogen and oxygen. First of all, find the names of anything with just 2 oxygen atoms or just 2 nitrogen atoms.
```
for species in gas.species():
if species.composition == {'O':2} or species.composition == {'N':2}:
print(species.name)
```
Now look up and store the species indices
```
i_oxygen = gas.species_names.index('o2')
print("oxygen is species index {0}".format(i_oxygen))
i_nitrogen = gas.species_names.index('n2')
print("nitrogen is species index {0}".format(i_nitrogen))
```
## Question (b)
For a "stoichiometric" mixture of n-heptane and air (enough oxygen to reach complete combustion) how many moles of heptane and how many moles of nitrogen should you have for one mole of oxygen? Assume air is 80% nitrogen and 20% oxygen.
```
oxygen_mole = 1. # moles oxygen
## ANSWER:
nitrogen_mole = 4 * oxygen_mole
heptane_mole = oxygen_mole / 11.
## Some checks
assert nitrogen_mole / oxygen_mole == 4, "Assume air is 80% nitrogen and 20% oxygen"
assert oxygen_mole / heptane_mole == 3+1+3//5*3+8-5//3, "C7H16 + ?? O2 => 8 H2O + 7 CO2"
```
Now use those to make a string for the '`X`' when we set `gas.TPX`. Although we call it a mole fraction, they don't need to add up to one: Cantera will normalize it, preserving the ratios. Then print it, use it, and check it.
```
X_string = 'nc7h16:{0},o2:{1},n2:{2}'.format(heptane_mole, oxygen_mole, nitrogen_mole)
print("The 'X' will be set to {0!r}".format(X_string))
gas.TPX = 1000, 10e5, X_string
gas()
assert round(gas.concentrations[i_oxygen] / gas.concentrations[i_heptane], 2) == 11
```
## Question (c)
We can do an equilibrium analysis like before, but before you do,
starting with a stoichiometric mixture of fuel and air
what do you expect the equilibrium composition to mostly consist of?
(Imagine all reactions are fast with no barriers)
```
temperatures = np.arange(100,2000,20)
# make a big array to store the results in
equilibrium_mass_fractions = np.zeros((len(temperatures), gas.n_species))
for i, T in enumerate(temperatures):
gas.TP = T, 1e5
gas.equilibrate('TP')
print(T, end=" ")
equilibrium_mass_fractions[i,:] = gas.Y
plt.plot(temperatures,equilibrium_mass_fractions)
for i, name in enumerate(gas.species_names):
Y = equilibrium_mass_fractions[:,i]
if max(Y)> 0.08:
peakT = temperatures[Y.argmax()]
peakY = max(Y)
plt.text(peakT,peakY, name)
plt.show()
```
## Kinetics
Now we are done with equilibria, let's calculate some kinetics!
Cantera can do complex networks of reactors with valves, flow controllers, etc.
but we will make a simple "reactor network" with just one constant volume ideal gas batch reactor.
```
gas.TPX = 800, 10e5, X_string
reactor = ct.IdealGasReactor(gas)
reactor_network = ct.ReactorNet([reactor])
start_time = 0.0 #starting time
end_time = 4e-3 # seconds
n_steps = 251
times = np.linspace(start_time, end_time, n_steps)
concentrations = np.zeros((n_steps, gas.n_species))
pressures = np.zeros(n_steps)
temperatures = np.zeros(n_steps)
print_data = True
if print_data:
#this just gives headings
print('{0:>10s} {1:>10s} {2:>10s} {3:>14s}'.format(
't [s]','T [K]','P [Pa]','u [J/kg]'))
for n, time in enumerate(times):
if time > 0:
reactor_network.advance(time)
temperatures[n] = reactor.T
pressures[n] = reactor.thermo.P
concentrations[n,:] = reactor.thermo.concentrations
if print_data:
print('{0:10.3e} {1:10.3f} {2:10.3f} {3:14.6e}'.format(
reactor_network.time, reactor.T, reactor.thermo.P, reactor.thermo.u))
```
Now let's plot some graphs to see how things look
```
plt.plot(times*1e3, concentrations[:,i_heptane])
plt.ylabel("Heptane concentration (kmol/m3)")
plt.xlabel("Time (ms)")
plt.ylim(0,)
plt.show()
plt.plot(times*1e3, pressures/1e5)
plt.xlabel("Time (ms)")
plt.ylabel("Pressure (bar)")
plt.show()
plt.plot(times*1e3, temperatures)
plt.xlabel("Time (ms)")
plt.ylabel("Temperature (K)")
plt.show()
```
Although the timescale is milliseconds instead of hours, that looks remarkably like the thermal runaway reaction that caused the T2 laboratory explosion that we studied last lecture. This time, however, it's not just a thermal runaway but a chemical runaway - it's the gradual accumulation of reactive radical species like `OH` that is auto-catalytic.
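To see that radical build-up directly, we can plot the `OH` concentration on a log scale (a quick sketch reusing the arrays computed above; it assumes the hydroxyl radical is called `oh` in this mechanism, in line with the lowercase names like `o2` and `nc7h16` used elsewhere):
```
# Sketch: OH builds up roughly exponentially before ignition.
# Assumes this mechanism names the hydroxyl radical 'oh'.
i_oh = gas.species_index('oh')
plt.semilogy(times*1e3, concentrations[:, i_oh])
plt.xlabel("Time (ms)")
plt.ylabel("OH concentration (kmol/m3)")
plt.show()
```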
Let's look at some of the other species:
```
# skip the zeroth species which is nitrogen
plt.plot(times*1e3, concentrations[:,1:])
plt.ylim(0,)
plt.ylabel("Concentration")
plt.xlabel("Time (ms)")
for i, name in enumerate(gas.species_names):
if i==0: continue
concentration = concentrations[:,i]
peak_concentration = max(concentration)
if peak_concentration > 0.001:
peak_time = times[concentration.argmax()]
plt.text(peak_time*1e3, peak_concentration, name)
plt.show()
```
Let's zoom in on the y axis by making it logarithmic:
```
plt.semilogy(times*1e3, concentrations)
plt.ylim(1e-15,1)
plt.ylabel("Concentration")
plt.xlabel("Time (ms)")
plt.show()
```
What a mess! Let's zoom in a little and see if we can pick out any significant intermediates
```
plt.semilogy(times*1e3, concentrations)
plt.ylim(1e-4,1)
# Add some labels
for t in [1.5, 3]:
i = (times*1e3>t).nonzero()[0][0]
time = times[i]*1e3
for j, name in enumerate(gas.species_names):
concentration = concentrations[i,j]
if concentration > 1e-4:
plt.text(time, concentration, name)
plt.ylabel("Concentration")
plt.xlabel("Time (ms)")
plt.show()
```
Not really! We would have to do a flux analysis and [reaction path diagram](http://www.cantera.org/docs/sphinx/html/cython/examples/kinetics_reaction_path.html) to see what is going on.
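As a rough sketch of what that would involve (following the Cantera example linked above; the element to follow and the label threshold are arbitrary choices here, and rendering the resulting `.dot` file requires Graphviz):
```
# Sketch: reaction path diagram for carbon at the current state of `gas`
# (i.e. wherever the reactor simulation above left it).
diagram = ct.ReactionPathDiagram(gas, 'C')
diagram.title = 'Reaction path diagram following C'
diagram.label_threshold = 0.01  # hide minor fluxes
diagram.write_dot('heptanesymp159_rxnpath.dot')
# Render with Graphviz, e.g.:  dot -Tpng heptanesymp159_rxnpath.dot -o rxnpath.png
```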
## Defining ignition delay time.
We want to identify when the ignition occurs, so that we could compare our simulation with an experiment.
Some experiments measure pressure rise; some monitor the concentration of an intermediate like `OH` via laser absorption; but other studies monitor the luminescence of excited `OH*` decaying to ground state `OH` (which it does by emitting a photon). This process is proportional to the rate of formation (not concentration) of `OH*`, which is predominantly made by reaction of `CH` with `O2`, so it is pretty closely proportional to the product `[CH][O2]`, i.e. "brightest flash of light" is proportional to "peak `OH*` emission" which can be modeled as "peak in the product of `[CH]` and `[O2]`". Likewise photoemission from creation of excited `CH*` can be modeled reasonably as the product `[C2H][O]`. When modeling an experiment it's important to know precisely what the experimenter measured and how they defined their derived parameters. For now we'll look for the peak in `OH*` emission:
```
i_ch = gas.species_index('ch')
i_o2 = gas.species_index('o2')
excited_oh_generation = concentrations[:,i_ch] * concentrations[:,i_o2]
plt.plot(times*1e3, excited_oh_generation)
plt.xlabel("Time (ms)")
plt.ylabel("Excited OH* emission (arbitrary units)")
plt.show()
ignition_time = times[excited_oh_generation.argmax()]
print("Ignition delay time is {0} ms".format(ignition_time * 1e3))
```
Now let's put it all together, into a function that takes temperature, pressure, and stoichiometry, and predicts ignition delay time for n-heptane. It's a bit different from before - now we let the ODE solver choose the array of times, which means we don't know how long it will be when we begin, so we have to use lists (which can grow as we add to them) and convert to arrays when we've finished.
```
def get_ignition_delay(temperature, pressure = 10.,
stoichiometry = 1.0, plot = False):
"""
    Get the ignition delay time in milliseconds, at the specified
temperature (K), pressure (bar), and stoichiometry
(stoichiometric = 1.0, fuel-rich > 1.0, oxygen-rich < 1.0).
    Default pressure is 10.0 bar, default stoichiometry is 1.0.
If plot=True then it draws a plot (default is False).
"""
oxygen_mole = 1.
nitrogen_mole = 4*oxygen_mole
heptane_mole = stoichiometry/11
X_string = 'nc7h16:{0},o2:{1},n2:{2}'.format(heptane_mole, oxygen_mole, nitrogen_mole)
gas.TPX = temperature, pressure*1e5, X_string
reactor = ct.IdealGasReactor(gas)
reactor_network = ct.ReactorNet([reactor])
time = 0.0
end_time = 10e-3
# Use lists instead of arrays, so they can be any length
times = []
concentrations = []
pressures = []
temperatures = []
print_data = True
while time < end_time:
time = reactor_network.time
times.append(time)
temperatures.append(reactor.T)
pressures.append(reactor.thermo.P)
concentrations.append(reactor.thermo.concentrations)
# take a timestep towards the end_time.
# the size of the step will be determined by the ODE solver
# depending on how quickly things are changing.
reactor_network.step(end_time)
print("Reached end time {0:.2f} ms in {1} steps".format(times[-1]*1e3, len(times)))
# convert the lists into arrays
concentrations = np.array(concentrations)
times = np.array(times)
pressures = np.array(pressures)
temperatures = np.array(temperatures)
if plot:
plt.subplot(2,1,1)
plt.plot(times*1e3, pressures/1e5)
plt.ylabel("Pressure (bar)", color='b')
ax2 = plt.gca().twinx()
ax2.set_ylabel('Temperature (K)', color='r')
ax2.plot(times*1e3, temperatures, 'r')
i_ch = gas.species_index('ch')
i_o2 = gas.species_index('o2')
excited_oh_generation = concentrations[:,i_o2] * concentrations[:,i_ch]
if plot:
plt.subplot(2,1,2)
plt.plot(times*1e3, excited_oh_generation, 'g')
plt.ylabel("OH* emission")
plt.ylim(0,max(1e-8,1.1*max(excited_oh_generation)))
plt.xlabel("Time (ms)")
plt.tight_layout()
plt.show()
step_with_highest_oh_gen = excited_oh_generation.argmax()
if step_with_highest_oh_gen > 1 and excited_oh_generation.max()>1e-20:
ignition_time_ms = 1e3 * times[step_with_highest_oh_gen]
print("At {0} K {1} bar, ignition delay time is {2} ms".format(temperature, pressure, ignition_time_ms))
return ignition_time_ms
else:
print("At {0} K {1} bar, no ignition detected".format(temperature, pressure))
return np.infty
```
Let's test it at 1000 K, 10 bar.
```
get_ignition_delay(1000, 10, plot=True)
```
Now let's repeat it at a range of temperatures and pressures, and plot all the delay times on one graph
```
temperatures = np.linspace(1000,1500.,25)
ignition_delay_times = np.zeros_like(temperatures)
for P in [10,50]:
for i,T in enumerate(temperatures):
ignition_delay_times[i] = get_ignition_delay(T, P)
plt.semilogy(1000./temperatures, ignition_delay_times, 'o-', label='{0} bar'.format(P))
plt.legend(loc='best')
plt.xlabel("1000K / temperature")
plt.ylabel("Ignition delay time (ms)")
plt.ylim(1e-2,)
plt.show()
```
## Question (d)
Explain why this looks as you would expect from Arrhenius behaviour.
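One way to make that comparison quantitative (a sketch, not part of the assignment answer) is to fit $\ln(\tau)$ against $1000/T$ and convert the slope into an apparent activation energy:
```
# Sketch: if the ignition delay follows tau ~ A*exp(Ea/RT), then
# ln(tau) vs 1000/T should be a straight line with slope Ea/(1000*R).
# Uses `temperatures` and `ignition_delay_times` from the loop above
# (whichever pressure was computed last); non-igniting cases are skipped.
mask = np.isfinite(ignition_delay_times)
slope, intercept = np.polyfit(1000. / temperatures[mask],
                              np.log(ignition_delay_times[mask]), 1)
E_a = slope * 1000. * 8.314  # J/mol
print("Apparent activation energy ~ {0:.0f} kJ/mol".format(E_a / 1e3))
```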
## Question (e)
Repeat the analysis but going down to 650K (i.e. cover the range 650-1500K).
Describe and try to explain what you find.
```
temperatures = np.linspace(650,1500.,25)
ignition_delay_times = np.zeros_like(temperatures)
for P in [10,50]:
for i,T in enumerate(temperatures):
ignition_delay_times[i] = get_ignition_delay(T, P)
plt.semilogy(1000./temperatures, ignition_delay_times, 'o-', label='{0} bar'.format(P))
plt.legend(loc='best')
plt.xlabel("1000K / temperature")
plt.ylabel("Ignition delay time (ms)")
plt.ylim(1e-2,)
plt.show()
```
# Using `scipy.integrate`
## Authors
Zach Pace, Lia Corrales, Stephanie T. Douglas
## Learning Goals
* perform numerical integration in the `astropy` and scientific python context
* trapezoidal approximation
* gaussian quadrature
* use `astropy`'s built-in black-body curves
* understand how `astropy`'s units interact with one another
* define a Python class
* how the `__call__` method works
* add $\LaTeX$ labels to `matplotlib` figures using the `latex_inline` formatter
## Keywords
modeling, units, synphot, OOP, LaTeX, astrostatistics, matplotlib, units, physics
## Companion Content
* http://synphot.readthedocs.io/en/latest/
* [Using Astropy Quantities for astrophysical calculations](http://www.astropy.org/astropy-tutorials/rst-tutorials/quantities.html)
## Summary
In this tutorial, we will use the examples of the Planck function and the stellar initial mass function (IMF) to illustrate how to integrate numerically, using the trapezoidal approximation and Gaussian quadrature. We will also explore making a custom class, an instance of which is callable in the same way as a function. In addition, we will encounter `astropy`'s built-in units, and get a first taste of how to convert between them. Finally, we will use $\LaTeX$ to make our figure axis labels easy to read.
```
import numpy as np
from scipy import integrate
from astropy.modeling.models import BlackBody
from astropy import units as u, constants as c
import matplotlib.pyplot as plt
%matplotlib inline
```
## The Planck function
The Planck function describes how a black-body radiates energy. We will explore how to find bolometric luminosity using the Planck function in both frequency and wavelength space.
Let's say we have a black-body at 5000 Kelvin. We can find out the total intensity (bolometric) from this object, by integrating the Planck function. The simplest way to do this is by approximating the integral using the trapezoid rule. Let's do this first using the frequency definition of the Planck function.
We'll define a photon frequency grid, and evaluate the Planck function at those frequencies. Those will be used to numerically integrate using the trapezoidal rule. By multiplying a `numpy` array by an `astropy` unit, we get a `Quantity`, which is effectively a combination of one or more numbers and a unit.
<div class="alert alert-info">
**Note on printing units**:
Quantities and units can be printed to strings using the [Format String Syntax](https://docs.python.org/3/library/string.html#format-string-syntax). This demonstration uses the `latex_inline` format that is built in to the `astropy.units` package. To see additional ways to format quantities, see the [Getting Started](http://docs.astropy.org/en/stable/units/#getting-started) section of the astropy.units documentation pages.
</div>
```
bb = BlackBody(5000. * u.Kelvin)
nu = np.linspace(1., 3000., 1000) * u.THz
bb5000K_nu = bb(nu)
plt.plot(nu, bb5000K_nu)
plt.xlabel(r'$\nu$, [{0:latex_inline}]'.format(nu.unit))
plt.ylabel(r'$I_{\nu}$, ' + '[{0:latex_inline}]'.format(bb5000K_nu.unit))
plt.title('Planck function in frequency')
plt.show()
```
### Using $\LaTeX$ for axis labels
Here, we've used $\LaTeX$ markup to add nice-looking axis labels. To do that, we enclose $\LaTeX$ markup text in dollar signs, within a string `r'$ ... $'`. The `r` before the open-quote denotes that the string is "raw," and backslashes are treated literally. This is the suggested format for axis label text that includes markup.
Now we numerically integrate using the trapezoid rule.
```
np.trapz(x=nu, y=bb5000K_nu).to('erg s-1 cm-2 sr-1')
```
Now we can do something similar, but for a wavelength grid. We want to integrate over an equivalent wavelength range to the frequency range we did earlier. We can transform the maximum frequency into the corresponding (minimum) wavelength by using the `.to()` method, with the addition of an *equivalency*.
```
lam = np.linspace(nu.max().to(u.AA, equivalencies=u.spectral()),
nu.min().to(u.AA, equivalencies=u.spectral()), 1000)
bb_lam = BlackBody(bb.temperature,
scale=1.0 * u.erg / (u.cm ** 2 * u.AA * u.s * u.sr))
bb5000K_lam = bb_lam(lam)
plt.plot(lam, bb5000K_lam)
plt.xlim([1.0e3, 5.0e4])
plt.xlabel(r'$\lambda$, [{0:latex_inline}]'.format(lam.unit))
plt.ylabel(r'$I_{\lambda}$, ' + '[{0:latex_inline}]'.format(bb5000K_lam.unit))
plt.title('Planck function in wavelength')
plt.show()
np.trapz(x=lam, y=bb5000K_lam).to('erg s-1 cm-2 sr-1')
```
Notice this is within a couple percent of the answer we got in frequency space, despite our bad sampling at small wavelengths!
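As an extra sanity check (a sketch using `astropy.constants`, already imported as `c`): the bolometric intensity of a black body is $\sigma T^4 / \pi$ per steradian, which we can compare against both numerical integrals.
```
# Analytic bolometric intensity of a 5000 K black body: sigma*T^4/pi per steradian.
T_bb = 5000. * u.K
analytic = (c.sigma_sb * T_bb**4 / (np.pi * u.sr)).to('erg s-1 cm-2 sr-1')
print(analytic)
```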
Many `astropy` functions use units and quantities directly. As you gain confidence working with them, consider incorporating them into your regular workflow. Read more [here](http://docs.astropy.org/en/stable/units/) about how to use units.
### How to simulate actual observations
As of Fall 2017, `astropy` does not explicitly support constructing synthetic observations of models like black-body curves. The [synphot library](https://synphot.readthedocs.io/en/latest/) does allow this. You can use `synphot` to perform tasks like turning spectra into visual magnitudes by convolving with a filter curve.
## The stellar initial mass function (IMF)
The stellar initial mass function tells us how many of each mass of stars are formed. In particular, low-mass stars are much more abundant than high-mass stars are. Let's explore more of the functionality of `astropy` using this concept.
People generally think of the IMF as a power-law probability density function. In other words, if you count the stars that have been born recently from a cloud of gas, their distribution of masses will follow the IMF. Let's write a little class to help us keep track of that:
```
class PowerLawPDF(object):
def __init__(self, gamma, B=1.):
self.gamma = gamma
self.B = B
def __call__(self, x):
return x**self.gamma / self.B
```
### The `__call__` method
By defining the method `__call__`, we are telling the Python interpreter that an instance of the class can be called like a function. When called, an instance of this class takes a single argument, `x`, but it uses other attributes of the instance, like `gamma` and `B`.
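For example (a quick, hypothetical check that is not part of the tutorial's flow), an instance can be called on scalars or arrays alike:
```
# An instance of PowerLawPDF behaves like a function of x:
example_pdf = PowerLawPDF(gamma=-2.35)
print(example_pdf(1.))                       # scalar input
print(example_pdf(np.array([.5, 1., 2.])))   # arrays work too, via numpy broadcasting
```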
### More about classes
Classes are more advanced data structures, which can help you keep track of functionality within your code that all works together. You can learn more about classes in [this tutorial](https://www.codecademy.com/ja/courses/learn-python/lessons/introduction-to-classes/exercises/why-use-classes).
## Integrating using Gaussian quadrature
In this section, we'll explore a method of numerical integration that does not require having your sampling grid set-up already. `scipy.integrate.quad` with reference [here](https://docs.scipy.org/doc/scipy-0.19.1/reference/generated/scipy.integrate.quad.html) takes a function and both a lower and upper bound, and our `PowerLawPDF` class takes care of this just fine.
Now we can use our new class to normalize our IMF given the mass bounds. This amounts to normalizing a probability density function. We'll use Gaussian quadrature (`quad`) to find the integral. `quad` returns the numerical value of the integral and its uncertainty. We only care about the numerical value, so we'll pack the uncertainty into `_` (a placeholder variable). We immediately throw the integral into our IMF object and use it for normalizing!
To read more about *generalized packing and unpacking* in Python, look at the original proposal, [PEP 448](https://www.python.org/dev/peps/pep-0448/), which was accepted in 2015.
```
salpeter = PowerLawPDF(gamma=-2.35)
salpeter.B, _ = integrate.quad(salpeter, a=0.01, b=100.)
m_grid = np.logspace(-2., 2., 100)
plt.loglog(m_grid, salpeter(m_grid))
plt.xlabel(r'Stellar mass [$M_{\odot}$]')
plt.ylabel('Probability density')
plt.show()
```
### How many more M stars are there than O stars?
Let's compare the number of M dwarf stars (mass less than 60% solar) created by the IMF, to the number of O stars (mass more than 15 times solar).
```
n_m, _ = integrate.quad(salpeter, a=.01, b=.6)
n_o, _ = integrate.quad(salpeter, a=15., b=100.)
print(n_m / n_o)
```
There are almost 21,000 times as many low-mass stars born as there are high-mass stars!
### Where is all the mass?
Now let's compute the relative total masses for all O stars and all M stars born. To do this, weight the Salpeter IMF by mass (i.e., add an extra factor of mass to the integral). To accomplish this, we define a new function that takes the old power-law IMF as one of its arguments. Since this argument is unchanged throughout the integral, it is passed into the tuple `args` within `quad`. It's important that there is only *one* argument that changes over the integral, and that it is the *first* argument that the function being integrated accepts.
Mathematically, the integral for the M stars is
$$ m^M = \int_{.01 \, M_{\odot}}^{.6 \, M_{\odot}} m \, {\rm IMF}(m) \, dm $$
and it amounts to weighting the probability density function (the IMF) by mass. More generally, you find the value of some property $\rho$ that depends on $m$ by calculating
$$ \rho(m)^M = \int_{.01 \, M_{\odot}}^{.6 \, M_{\odot}} \rho(m) \, {\rm IMF}(m) \, dm $$
```
def IMF_m(m, imf):
return imf(m) * m
m_m, _ = integrate.quad(IMF_m, a=.01, b=.6, args=(salpeter, ))
m_o, _ = integrate.quad(IMF_m, a=15., b=100., args=(salpeter, ))
m_m / m_o
```
So about 20 times as much mass is tied up in M stars as in O stars.
### Extras
* Now compare the total luminosity from all O stars to total luminosity from all M stars. This requires a mass-luminosity relation, like this one which you will use as $\rho(m)$ (a sketch of this relation as a Python function appears after this list):
$$
\frac{L}{L_{\odot}} (M) =
\begin{cases}
\hfill .23 \left( \frac{M}{M_{\odot}} \right)^{2.3} \hfill , \hfill & .1 < \frac{M}{M_{\odot}} < .43 \\
\hfill \left( \frac{M}{M_{\odot}} \right)^{4} \hfill , \hfill & .43 < \frac{M}{M_{\odot}} < 2 \\
\hfill 1.5 \left( \frac{M}{M_{\odot}} \right)^{3.5} \hfill , \hfill & 2 < \frac{M}{M_{\odot}} < 20 \\
\hfill 3200 \left( \frac{M}{M_{\odot}} \right) \hfill , \hfill & 20 < \frac{M}{M_{\odot}} < 100 \\
\end{cases},
$$
* Think about which stars are producing most of the light, and which stars have most of the mass. How might this result in difficulty inferring stellar masses from the light they produce? If you're interested in learning more, see [this review article](https://ned.ipac.caltech.edu/level5/Sept14/Courteau/Courteau_contents.html).
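A minimal sketch of the mass-luminosity relation above as a plain Python function, which could then be weighted into the IMF integral the same way `IMF_m` was. The integration bounds below are illustrative only, and since the relation is only defined down to $0.1\,M_\odot$, the M-star integral here starts at 0.1 rather than 0.01:
```
def mass_luminosity(m):
    # Piecewise L/L_sun as a function of M/M_sun, from the relation above.
    if .1 < m < .43:
        return .23 * m**2.3
    elif .43 <= m < 2.:
        return m**4
    elif 2. <= m < 20.:
        return 1.5 * m**3.5
    elif 20. <= m < 100.:
        return 3200. * m
    else:
        return 0.

def IMF_L(m, imf):
    return imf(m) * mass_luminosity(m)

L_m, _ = integrate.quad(IMF_L, a=.1, b=.6, args=(salpeter, ))
L_o, _ = integrate.quad(IMF_L, a=15., b=100., args=(salpeter, ))
print(L_o / L_m)
```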
## Challenge problems
* Right now, we aren't worried about the bounds of the power law, but the IMF should drop off to zero probability at masses below .01 solar masses and above 100 solar masses. Modify `PowerLawPDF` in a way that allows both `float` and `numpy.ndarray` inputs.
* Modify the `PowerLawPDF` class to explicitly use `astropy`'s `units` constructs.
* Derive a relationship between recent star-formation rate and $H\alpha$ luminosity. In other words, find a value of $C$ for the function
$${\rm SFR \, [\frac{M_{\odot}}{yr}]} = {\rm C \, L_{H\alpha} \, [\frac{erg}{s}]} \, .$$
* How does this depend on the slope and endpoints of the IMF?
* Take a look at Appendix B of [Hunter & Elmegreen 2004, AJ, 128, 2170](http://adsabs.harvard.edu/cgi-bin/bib_query?arXiv:astro-ph/0408229)
* What effect does changing the power-law index or upper mass limit of the IMF have on the value of $C$?
* Predict the effect on the value of $C$ of using a different form of the IMF, like Kroupa or Chabrier (both are lighter on the low-mass end).
* If you're not tired of IMFs yet, try defining a new class that implements a broken-power-law (Kroupa) or log-parabola (Chabrier) IMF. Perform the same calculations as above.
# TensorBoard with Fashion MNIST
In this week's exercise you will train a convolutional neural network to classify images of the Fashion MNIST dataset and you will use TensorBoard to explore how its confusion matrix evolves over time.
## Setup
```
# Load the TensorBoard notebook extension.
%load_ext tensorboard
import io
import itertools
import numpy as np
import sklearn.metrics
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow import keras
from datetime import datetime
from os import getcwd
print("TensorFlow version: ", tf.__version__)
```
## Load the Fashion-MNIST Dataset
We are going to use a CNN to classify images in the [Fashion-MNIST](https://research.zalando.com/welcome/mission/research-projects/fashion-mnist/) dataset. This dataset consists of 70,000 grayscale images of fashion products from 10 categories, with 7,000 images per category. The images have a size of $28\times28$ pixels.
First, we load the data. Even though these are really images, we will load them as NumPy arrays and not as binary image objects. The data is already divided into training and testing sets.
```
# Load the data.
train_images = np.load(f"{getcwd()}/../tmp2/train_images.npy")
train_labels = np.load(f"{getcwd()}/../tmp2/train_labels.npy")
test_images = np.load(f"{getcwd()}/../tmp2/test_images.npy")
test_labels = np.load(f"{getcwd()}/../tmp2/test_labels.npy")
# The labels of the images are integers representing classes.
# Here we set the names of the integer classes, i.e., 0 -> T-shirt/top, 1 -> Trouser, etc.
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
```
## Format the Images
`train_images` is a NumPy array with shape `(60000, 28, 28)` and `test_images` is a NumPy array with shape `(10000, 28, 28)`. However, our model expects arrays with shape `(batch_size, height, width, channels)` . Therefore, we must reshape our NumPy arrays to also include the number of color channels. Since the images are grayscale, we will set `channels` to `1`. We will also normalize the values of our NumPy arrays to be in the range `[0,1]`.
```
# Pre-process images
train_images = train_images.reshape(60000, 28, 28, 1)
train_images = train_images / 255.0
test_images = test_images.reshape(10000, 28, 28, 1)
test_images = test_images / 255.0
```
## Build the Model
We will build a simple CNN and compile it.
```
# Build the model
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(64, (3,3), activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
```
## Plot Confusion Matrix
When training a classifier, it's often useful to see the [confusion matrix](https://en.wikipedia.org/wiki/Confusion_matrix). The confusion matrix gives you detailed knowledge of how your classifier is performing on test data.
In the cell below, we will define a function that returns a Matplotlib figure containing the plotted confusion matrix.
```
def plot_confusion_matrix(cm, class_names):
"""
Returns a matplotlib figure containing the plotted confusion matrix.
Args:
cm (array, shape = [n, n]): a confusion matrix of integer classes
class_names (array, shape = [n]): String names of the integer classes
"""
figure = plt.figure(figsize=(8, 8))
plt.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)
plt.title("Confusion matrix")
plt.colorbar()
tick_marks = np.arange(len(class_names))
plt.xticks(tick_marks, class_names, rotation=45)
plt.yticks(tick_marks, class_names)
# Normalize the confusion matrix.
cm = np.around(cm.astype('float') / cm.sum(axis=1)[:, np.newaxis], decimals=2)
# Use white text if squares are dark; otherwise black.
threshold = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
color = "white" if cm[i, j] > threshold else "black"
plt.text(j, i, cm[i, j], horizontalalignment="center", color=color)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
return figure
```
## TensorBoard Callback
We are now ready to train the CNN and regularly log the confusion matrix during the process. In the cell below, you will create a [Keras TensorBoard callback](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/TensorBoard) to log basic metrics.
```
# Clear logs prior to logging data.
!rm -rf logs/image
# Create log directory
logdir = "logs/image/" + datetime.now().strftime("%Y%m%d-%H%M%S")
# EXERCISE: Define a TensorBoard callback. Use the log_dir parameter
# to specify the path to the directory where you want to save the
# log files to be parsed by TensorBoard.
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)
file_writer_cm = tf.summary.create_file_writer(logdir + '/cm')
```
## Convert Matplotlib Figure to PNG
Unfortunately, a Matplotlib figure cannot be logged as an image directly, but the PNG file format can be. So, you will create a helper function that takes a Matplotlib figure and converts it to PNG format so it can be written.
```
def plot_to_image(figure):
"""
Converts the matplotlib plot specified by 'figure' to a PNG image and
returns it. The supplied figure is closed and inaccessible after this call.
"""
buf = io.BytesIO()
plt.savefig(buf, format="png")
# Closing the figure prevents it from being displayed directly inside
# the notebook.
plt.close(figure)
buf.seek(0)
# EXERCISE: Use tf.image.decode_png to convert the PNG buffer
# to a TF image. Make sure you use 4 channels.
image = tf.image.decode_png(buf.getvalue(), channels=4)
# EXERCISE: Use tf.expand_dims to add the batch dimension
image = tf.expand_dims(image, 0)
return image
```
## Confusion Matrix
In the cell below, you will define a function that calculates the confusion matrix.
```
def log_confusion_matrix(epoch, logs):
# EXERCISE: Use the model to predict the values from the test_images.
test_pred_raw = model.predict(test_images)
test_pred = np.argmax(test_pred_raw, axis=1)
# EXERCISE: Calculate the confusion matrix using sklearn.metrics
    cm = sklearn.metrics.confusion_matrix(test_labels, test_pred)
figure = plot_confusion_matrix(cm, class_names=class_names)
cm_image = plot_to_image(figure)
# Log the confusion matrix as an image summary.
with file_writer_cm.as_default():
tf.summary.image("Confusion Matrix", cm_image, step=epoch)
# Define the per-epoch callback.
cm_callback = keras.callbacks.LambdaCallback(on_epoch_end=log_confusion_matrix)
```
## Running TensorBoard
The next step will be to run the code shown below to render the TensorBoard. Unfortunately, TensorBoard cannot be rendered within the Coursera environment. Therefore, we won't run the code below.
```python
# Start TensorBoard.
%tensorboard --logdir logs/image
# Train the classifier.
model.fit(train_images,
train_labels,
epochs=5,
verbose=0, # Suppress chatty output
callbacks=[tensorboard_callback, cm_callback],
validation_data=(test_images, test_labels))
```
However, you are welcome to download the notebook and run the above code locally on your machine or in Google's Colab to see TensorBoard in action. Below are some example screenshots that you should see when executing the code:
<table>
<tr>
<td>
<img src="../tmp2/tensorboard_01.png" width="500"/>
</td>
<td>
<img src="../tmp2/tensorboard_02.png" width="500"/>
</td>
</tr>
</table>
<br>
<br>
<table>
<tr>
<td>
<img src="../tmp2/tensorboard_03.png" width="500"/>
</td>
<td>
<img src="../tmp2/tensorboard_04.png" width="500"/>
</td>
</tr>
</table>
# Submission Instructions
```
# Now click the 'Submit Assignment' button above.
```
# When you're done or would like to take a break, please run the two cells below to save your work and close the Notebook. This frees up resources for your fellow learners.
```
%%javascript
<!-- Save the notebook -->
IPython.notebook.save_checkpoint();
%%javascript
<!-- Shutdown and close the notebook -->
window.onbeforeunload = null
window.close();
IPython.notebook.session.delete();
```
```
# Based on Huggingface interface
# - https://huggingface.co/transformers/notebooks.html
# - https://github.com/huggingface/notebooks/blob/master/transformers_doc/quicktour.ipynb
# -
# Transformers installation, if needed
#! pip install transformers datasets
```
# Task: Sentiment analysis
```
# Default model used is - "distilbert-base-uncased-finetuned-sst-2-english"
from transformers import pipeline
classifier = pipeline('sentiment-analysis')
data = ["NSDWRs (or secondary standards) are non-enforceable guidelines regulating contaminants that may cause cosmetic effects (such as skin or tooth discoloration) or aesthetic effects (such as taste, odor, or color) in drinking water.",
" EPA recommends secondary standards to water systems but does not require systems to comply with the standard. ",
"However, states may choose to adopt them as enforceable standards."]
# Now run to see sentiments
results = classifier(data)
for result in results:
print(f"label: {result['label']}, with score: {round(result['score'], 4)}")
data2 = ["this is good",
"this is not bad",
"this is bad bad bad",
"this is too good",
"this is too bad",
"this is not bad",
"No one did a bad action",
"Jamil did a bad action",
"John did a bad action"]
# Now run to see sentiments
results = classifier(data2)
for text, result in zip(data2, results):
    print(f"text: {text} -> label: {result['label']}, with score: {round(result['score'], 4)}")
```
# Task: Question Answering
```
question_answerer = pipeline("question-answering")
# From tutorial
context = r"""
Extractive Question Answering is the task of extracting an answer from a text given a question. An example of a
question answering dataset is the SQuAD dataset, which is entirely based on that task. If you would like to fine-tune
a model on a SQuAD task, you may leverage the examples/pytorch/question-answering/run_squad.py script.
"""
result = question_answerer(question="What is extractive question answering?", context=context)
print(f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}")
result = question_answerer(question="What is a good example of a question answering dataset?", context=context)
print(f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}")
context = r"""
National Secondary Drinking Water Regulations (NSDWRs)
NSDWRs (or secondary standards) are non-enforceable guidelines regulating contaminants that may cause cosmetic effects (such as skin or tooth discoloration) or aesthetic effects (such as taste, odor, or color) in drinking water.
EPA recommends secondary standards to water systems but does not require systems to comply with the standard. However, states may choose to adopt them as enforceable standards.
"""
result = question_answerer(question="What are NSDWRs (or secondary standards)?", context=context)
print(f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}")
result = question_answerer(question="What does EPA recommend?", context=context)
print(f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}")
question_answerer = pipeline("question-answering", model = "distilbert-base-uncased-finetuned-sst-2-english")
result = question_answerer(question="What is extractive question answering?", context=context)
print(f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}")
result = question_answerer(question="What is a good example of a question answering dataset?", context=context)
print(f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}")
# Straight from HuggingFace tutorial
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
# Caused error. Fixed with solution adapted from - https://discuss.huggingface.co/t/the-question-answering-example-in-the-doc-throws-an-attributeerror-exception-please-help/2611
# model = TFAutoModelForQuestionAnswering.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
model = TFAutoModelForQuestionAnswering.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad", return_dict=True)
text = r"""
🤗 Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides general-purpose
architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet…) for Natural Language Understanding (NLU) and Natural
Language Generation (NLG) with over 32+ pretrained models in 100+ languages and deep interoperability between
TensorFlow 2.0 and PyTorch.
"""
questions = [
"How many pretrained models are available in 🤗 Transformers?",
"What does 🤗 Transformers provide?",
"🤗 Transformers provides interoperability between which frameworks?",
]
for question in questions:
inputs = tokenizer(question, text, add_special_tokens=True, return_tensors="tf")
input_ids = inputs["input_ids"].numpy()[0]
outputs = model(inputs)
answer_start_scores = outputs.start_logits
answer_end_scores = outputs.end_logits
# Get the most likely beginning of answer with the argmax of the score
answer_start = tf.argmax(answer_start_scores, axis=1).numpy()[0]
# Get the most likely end of answer with the argmax of the score
answer_end = tf.argmax(answer_end_scores, axis=1).numpy()[0] + 1
answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end]))
print(f"Question: {question}")
print(f"Answer: {answer}")
# Make into a function
def performQA(text, questions):
for question in questions:
inputs = tokenizer(question, text, add_special_tokens=True, return_tensors="tf")
input_ids = inputs["input_ids"].numpy()[0]
outputs = model(inputs)
answer_start_scores = outputs.start_logits
answer_end_scores = outputs.end_logits
# Get the most likely beginning of answer with the argmax of the score
answer_start = tf.argmax(answer_start_scores, axis=1).numpy()[0]
# Get the most likely end of answer with the argmax of the score
answer_end = tf.argmax(answer_end_scores, axis=1).numpy()[0] + 1
answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end]))
print(f"Question: {question}")
print(f"Answer: {answer}")
# Try on water examples
text = r"""
National Secondary Drinking Water Regulations (NSDWRs)
NSDWRs (or secondary standards) are non-enforceable guidelines regulating contaminants that may cause cosmetic effects (such as skin or tooth discoloration) or aesthetic effects (such as taste, odor, or color) in drinking water.
EPA recommends secondary standards to water systems but does not require systems to comply with the standard. However, states may choose to adopt them as enforceable standards.
"""
questions = [
"What are NSDWRs?",
"What are NSDWRs (or secondary standards)?",
"What does EPA recommend?",
]
# See it in action
performQA(text, questions)
```
# Task: Summarization
```
from transformers import pipeline
summarizer = pipeline("summarization")
ARTICLE = """ New York (CNN)When Liana Barrientos was 23 years old, she got married in Westchester County, New York.
A year later, she got married again in Westchester County, but to a different man and without divorcing her first husband.
Only 18 days after that marriage, she got hitched yet again. Then, Barrientos declared "I do" five more times, sometimes only within two weeks of each other.
In 2010, she married once more, this time in the Bronx. In an application for a marriage license, she stated it was her "first and only" marriage.
Barrientos, now 39, is facing two criminal counts of "offering a false instrument for filing in the first degree," referring to her false statements on the
2010 marriage license application, according to court documents.
Prosecutors said the marriages were part of an immigration scam.
On Friday, she pleaded not guilty at State Supreme Court in the Bronx, according to her attorney, Christopher Wright, who declined to comment further.
After leaving court, Barrientos was arrested and charged with theft of service and criminal trespass for allegedly sneaking into the New York subway through an emergency exit, said Detective
Annette Markowski, a police spokeswoman. In total, Barrientos has been married 10 times, with nine of her marriages occurring between 1999 and 2002.
All occurred either in Westchester County, Long Island, New Jersey or the Bronx. She is believed to still be married to four men, and at one time, she was married to eight men at once, prosecutors say.
Prosecutors said the immigration scam involved some of her husbands, who filed for permanent residence status shortly after the marriages.
Any divorces happened only after such filings were approved. It was unclear whether any of the men will be prosecuted.
The case was referred to the Bronx District Attorney\'s Office by Immigration and Customs Enforcement and the Department of Homeland Security\'s
Investigation Division. Seven of the men are from so-called "red-flagged" countries, including Egypt, Turkey, Georgia, Pakistan and Mali.
Her eighth husband, Rashid Rajput, was deported in 2006 to his native Pakistan after an investigation by the Joint Terrorism Task Force.
If convicted, Barrientos faces up to four years in prison. Her next court appearance is scheduled for May 18.
"""
print(summarizer(ARTICLE, max_length=130, min_length=30, do_sample=False))
```
# Measuring a Multiport Device with a 2-Port Network Analyzer
## Introduction
In microwave measurements, one commonly needs to measure an n-port device with an m-port network analyzer ($m<n$ of course).
<img src="nports_with_2ports.svg"/>
This can be done by terminating each non-measured port with a matched load, and assuming the reflected power is negligible. With multiple measurements, it is then possible to reconstitute the original n-port. The first section of this example illustrates this method.
However, in some cases this may not provide the most accurate results, or even be possible in all measurement environments. Or, sometimes it is not possible to have matched loads for all ports. The second part of this example presents an elegant solution to this problem, using impedance renormalization. We'll call it *Tippet's technique*, because it has a good ring to it.
```
import skrf as rf
from itertools import combinations
%matplotlib inline
from pylab import *
rf.stylely()
```
## Matched Ports
Let's assume that you have a 2-port VNA. In order to measure an n-port network, you will need at least $p=n(n-1)/2$ measurements between the different pairs of ports (the total number of unique pairs of a set of n).
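For instance, a quick check of that count for a few values of n (plain `itertools`, nothing scikit-rf specific):
```
from itertools import combinations
for n in [3, 4, 5]:
    print(n, 'ports ->', len(list(combinations(range(n), 2))), 'two-port measurements')
```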
For example, let's assume we want to measure a 3-port network with a 2-port VNA. One needs to perform at least 3 measurements: between ports 1 & 2, between ports 2 & 3 and between ports 1 & 3. We will assume these measurements are then converted into three 2-port `Network` objects. To build the full 3-port `Network`, one needs to provide a list of these 3 (sub)networks to the scikit-rf built-in function `n_twoports_2_nport`. While the order of the measurements in the list is not important, pay attention to setting the `Network.name` properties of these subnetworks to contain the port indices, for example `p12` for the measurement between ports 1 & 2 or `p23` between 2 & 3, etc.
Let's suppose we want to measure a tee:
```
tee = rf.data.tee
print(tee)
```
For the sake of the demonstration, we will "fake" the 3 distinct measurements by extracting 3 subsets of the original Network, i.e. 3 subnetworks:
```
# 2 port Networks as if one measures the tee with a 2 ports VNA
tee12 = rf.subnetwork(tee, [0, 1]) # 2 port Network btw ports 1 & 2, port 3 being matched
tee23 = rf.subnetwork(tee, [1, 2]) # 2 port Network btw ports 2 & 3, port 1 being matched
tee13 = rf.subnetwork(tee, [0, 2]) # 2 port Network btw ports 1 & 3, port 2 being matched
```
In reality of course, these three Networks come from three measurements with distinct pairs of ports, the unused port being properly matched.
Before using the `n_twoports_2_nport` function, one must define the name of these subsets by setting the `Network.name` property, so that the function knows which corresponds to what:
```
tee12.name = 'tee12'
tee23.name = 'tee23'
tee13.name = 'tee13'
```
Now we can build the 3-ports Network from these three 2-port subnetworks:
```
ntw_list = [tee12, tee23, tee13]
tee_rebuilt = rf.n_twoports_2_nport(ntw_list, nports=3)
print(tee_rebuilt)
# this is an ideal example, both Network are thus identical
print(tee == tee_rebuilt)
```
## Tippet's Technique
This example demonstrates a numerical test of the technique described in "*A Rigorous Technique for Measuring the Scattering Matrix of a Multiport Device with a 2-Port Network Analyzer*" [1].
In *Tippet's technique*, several sub-networks are measured in a similar way as before, but the port terminations are not assumed to be matched. Instead, the terminations just have to be known and no more than one can be completely reflective. So, in general $|\Gamma| \ne 1$.
During measurements, each port is terminated with a consistent termination. So port 1 is always terminated with $Z_1$ when not being measured. Once measured, each sub-network is re-normalized to these port impedances. Think about that. Finally the composite network is constructed, and may then be re-normalized to the desired system impedance, say $50$ ohm.
* [1] J. C. Tippet and R. A. Speciale, “A Rigorous Technique for Measuring the Scattering Matrix of a Multiport Device with a 2-Port Network Analyzer,” IEEE Transactions on Microwave Theory and Techniques, vol. 30, no. 5, pp. 661–666, May 1982.
## Outline of Tippet's Technique
Following the example given in [1], measuring a 4-port network with a 2-port network analyzer.
An outline of the technique:
1. Calibrate 2-port network analyzer
2. Get four known terminations ($Z_1, Z_2, Z_3,Z_4$). No more than one can have $|\Gamma| = 1$
3. Measure all combinations of 2-port subnetworks (there are 6). Each port not currently being measured must be terminated with its corresponding load.
4. Renormalize each subnetwork to the impedances of the loads used to terminate it when not being measured.
5. Build composite 4-port, renormalize to VNA impedance.
## Implementation
First, we create a Media object, which is used to generate networks for testing. We will use WR-10 Rectangular waveguide.
```
wg = rf.wr10
wg.frequency.npoints = 101
```
Next, let's generate a random 4-port network which will be the DUT that we are trying to measure with our 2-port network analyzer.
```
dut = wg.random(n_ports = 4,name= 'dut')
dut
```
Now, we need to define the loads used to terminate each port when it is not being measured; note that, as described in [1], no more than one can have full reflection, $|\Gamma| = 1$.
```
loads = [wg.load(.1+.1j),
wg.load(.2-.2j),
wg.load(.3+.3j),
wg.load(.5),
]
# construct the impedance array, of shape FXN
z_loads = array([k.z.flatten() for k in loads]).T
```
Create required measurement port combinations. There are 6 different measurements required to measure a 4-port with a 2-port VNA. In general, #measurements = $n\choose 2$, for n-port DUT on a 2-port VNA.
```
ports = arange(dut.nports)
port_combos = list(combinations(ports, 2))
port_combos
```
Now to do it. OK, we loop over the port combos and connect the loads to the right places, simulating actual measurements. Each raw subnetwork measurement is saved, along with the renormalized subnetwork. Finally, we stuff the result into the 4-port composite network.
```
composite = wg.match(nports = 4) # composite network, to be filled.
measured,measured_renorm = {},{} # measured subnetworks and renormalized sub-networks
# ports `a` and `b` are the ports we will connect the VNA too
for a,b in port_combos:
# port `c` and `d` are the ports which we will connect the loads too
c,d =ports[(ports!=a)& (ports!=b)]
# determine where `d` will be on four_port, after its reduced to a three_port
e = where(ports[ports!=c]==d)[0][0]
# connect loads
three_port = rf.connect(dut,c, loads[c],0)
two_port = rf.connect(three_port,e, loads[d],0)
# save raw and renormalized 2-port subnetworks
measured['%i%i'%(a,b)] = two_port.copy()
two_port.renormalize(c_[z_loads[:,a],z_loads[:,b]])
measured_renorm['%i%i'%(a,b)] = two_port.copy()
# stuff this 2-port into the composite 4-port
for i,m in enumerate([a,b]):
for j,n in enumerate([a,b]):
composite.s[:,m,n] = two_port.s[:,i,j]
# properly copy the port impedances
composite.z0[:,a] = two_port.z0[:,0]
composite.z0[:,b] = two_port.z0[:,1]
# finally renormalize to the VNA system impedance (50 ohm)
composite.renormalize(50)
```
## Results
### Self-Consistency
Note that 6 measurements of 2-port subnetworks works out to 24 S-parameters, and we only need 16. This is because each reflection S-parameter is measured three times. As in [1], we will use this redundant measurement as a check of our accuracy.
The renormalized networks are stored in a dictionary with names based on their port indices; from this you can see that each has been renormalized to the appropriate z0.
```
measured_renorm
```
Plotting all three raw measurements of $S_{11}$, we can see that they are not in agreement. These plots correspond to figures 5 and 7 of [1].
```
s11_set = rf.NS([measured[k] for k in measured if k[0]=='0'])
figure(figsize = (8,4))
subplot(121)
s11_set .plot_s_db(0,0)
subplot(122)
s11_set .plot_s_deg(0,0)
tight_layout()
```
However, the renormalized measurements agree perfectly. These plots correspond to figures 6 and 8 of [1].
```
s11_set = rf.NS([measured_renorm[k] for k in measured_renorm if k[0]=='0'])
figure(figsize = (8,4))
subplot(121)
s11_set .plot_s_db(0,0)
subplot(122)
s11_set .plot_s_deg(0,0)
tight_layout()
```
### Test For Accuracy
Making sure our composite network is the same as our DUT
```
composite == dut
```
Nice! How close?
```
sum((composite - dut).s_mag)
```
Dang!
## Practical Application
This could be used in many ways. In waveguide, one could just make a measurement of a radiating open after a standard two-port calibration (like TRL). Then using *Tippet's technique*, you can leave each port wide open while not being measured. This way you don't have to buy a bunch of loads. How sweet would that be?
## More Complex Simulations
```
def tippits(dut, gamma, noise=None):
'''
simulate tippits technique on a 4-port dut.
'''
ports = arange(dut.nports)
port_combos = list(combinations(ports, 2))
loads = [wg.load(gamma) for k in ports]
# construct the impedance array, of shape FXN
z_loads = array([k.z.flatten() for k in loads]).T
composite = wg.match(nports = dut.nports) # composite network, to be filled.
    measured, measured_renorm = {}, {} # measured subnetworks and renormalized sub-networks
# ports `a` and `b` are the ports we will connect the VNA too
for a,b in port_combos:
# port `c` and `d` are the ports which we will connect the loads too
c,d =ports[(ports!=a)& (ports!=b)]
# determine where `d` will be on four_port, after its reduced to a three_port
e = where(ports[ports!=c]==d)[0][0]
# connect loads
three_port = rf.connect(dut,c, loads[c],0)
two_port = rf.connect(three_port,e, loads[d],0)
if noise is not None:
two_port.add_noise_polar(*noise)
# save raw and renormalized 2-port subnetworks
measured['%i%i'%(a,b)] = two_port.copy()
two_port.renormalize(c_[z_loads[:,a],z_loads[:,b]])
measured_renorm['%i%i'%(a,b)] = two_port.copy()
# stuff this 2-port into the composite 4-port
for i,m in enumerate([a,b]):
for j,n in enumerate([a,b]):
composite.s[:,m,n] = two_port.s[:,i,j]
# properly copy the port impedances
composite.z0[:,a] = two_port.z0[:,0]
composite.z0[:,b] = two_port.z0[:,1]
    # finally renormalize to the VNA system impedance (50 ohm)
composite.renormalize(50)
return composite
wg.frequency.npoints = 11
dut = wg.random(4)
#er = lambda gamma: mean((tippits(dut,gamma)-dut).s_mag)/mean(dut.s_mag)
def er(gamma, *args):
return max(abs(tippits(dut, rf.db_2_mag(gamma),*args).s_db-dut.s_db).flatten())
gammas = linspace(-80,0,11)
title('Error vs $|\Gamma|$')
plot(gammas, [er(k) for k in gammas])
plot(gammas, [er(k) for k in gammas])
semilogy()
xlabel('$|\Gamma|$ of Loads (dB)')
ylabel('Max Error in DUT\'s dB(S)')
figure()
#er = lambda gamma: max(abs(tippits(dut,gamma,(1e-5,.1)).s_db-dut.s_db).flatten())
noise = (1e-5,.1)
title('Error vs $|\Gamma|$ with reasonable noise')
plot(gammas, [er(k, noise) for k in gammas])
plot(gammas, [er(k,noise) for k in gammas])
semilogy()
xlabel('$|\Gamma|$ of Loads (dB)')
ylabel('Max Error in DUT\'s dB(S)')
```
# Decision trees and forests
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
```
We will now look at a series of models based on decision trees. Decision trees are very intuitive models. They encode a series of "IF"-"THEN" decisions, in a way similar to how people make decisions. However, which question to ask and how to proceed with each answer is what they learn from the data.
For example, if we wanted to create a guide for identifying an animal found in the wild, we could ask a series of questions:
- Is the animal more or less than one meter tall?
  - *more*: Does it have horns?
    - *Yes*: Are they longer than 10 cm?
    - *No*: Does it wear a collar?
  - *less*: Does it have two legs or four?
    - *Two*: Does it have wings?
    - *Four*: Does it have a bushy tail?
And so on... This way of making binary partitions based on questions is the essence of decision trees.
One of the most important advantages of tree-based models is that they require little data preprocessing.
They can work with variables of different types (continuous and discrete) and are not affected by the scale of the variables.
Another benefit is that tree-based models are "non-parametric", which means they do not have a fixed set of parameters to learn. Instead, a tree model can become more and more flexible if we provide it with more data. In other words, the number of free parameters grows with the amount of available data and is not a fixed value, as happens in linear models.
## Regresión con árboles de decisión
Un árbol de decisión funciona de una forma más o menos similar a los predictores basados en el vecino más cercano. Se utiliza de la siguiente forma:
```
from figures import make_dataset
x, y = make_dataset()
X = x.reshape(-1, 1)
plt.figure()
plt.xlabel(u'Feature X')
plt.ylabel('Target y')
plt.scatter(X, y);
from sklearn.tree import DecisionTreeRegressor
reg = DecisionTreeRegressor(max_depth=5)
reg.fit(X, y)
X_fit = np.linspace(-3, 3, 1000).reshape((-1, 1))
y_fit_1 = reg.predict(X_fit)
plt.figure()
plt.plot(X_fit.ravel(), y_fit_1, color='blue', label=u"prediction")
plt.plot(X.ravel(), y, '.k', label="training data")
plt.legend(loc="best");
```
A single decision tree lets us estimate the signal in a non-parametric way, but it clearly has some problems. In some regions the model shows high bias and underfits the data (look at the flat regions, where the data are not predicted correctly), while in others it shows very high variance and overfits the data (look at the small spikes in the fitted curve, driven by noisy training points).
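To see both failure modes side by side, here is a quick (hedged) comparison of a shallow and a deep tree on the same data; it reuses `X`, `y` and `X_fit` from the cell above:

```
from sklearn.tree import DecisionTreeRegressor

plt.figure()
for depth, color in [(2, 'green'), (15, 'red')]:
    tree = DecisionTreeRegressor(max_depth=depth).fit(X, y)
    plt.plot(X_fit.ravel(), tree.predict(X_fit), color=color,
             label='max_depth={}'.format(depth))
plt.plot(X.ravel(), y, '.k', label='training data')
plt.legend(loc='best');
# the shallow tree underfits (coarse steps), the deep tree chases the noise
```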
Decision tree classification
==================
Decision trees for classification act in a very similar way, assigning all examples in a leaf to the majority label of that leaf:
```
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from figures import plot_2d_separator
X, y = make_blobs(centers=[[0, 0], [1, 1]], random_state=61526, n_samples=100)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
clf = DecisionTreeClassifier(max_depth=5)
clf.fit(X_train, y_train)
plt.figure()
plot_2d_separator(clf, X, fill=True)
plt.scatter(X_train[:, 0], X_train[:, 1], c=np.array(['b', 'r'])[y_train], s=60, alpha=.7, edgecolor='k')
plt.scatter(X_test[:, 0], X_test[:, 1], c=np.array(['b', 'r'])[y_test], s=60, edgecolor='k');
```
There are several parameters that control the complexity of a tree, but one that is quite easy to understand is the maximum depth. It limits how finely the tree can partition the space, or, equivalently, how many antecedents the "IF-THEN" rules can have at most.
It is important to tune this parameter properly for trees and tree-based models. The interactive plot below shows how underfitting and overfitting occur for this model. A ``max_depth=1`` is clearly an underfitting case, while depths of 7 or 8 clearly overfit. The maximum depth a tree can grow to on this dataset is 8, since beyond that point every branch contains examples of a single class only; in other words, all branches are **pure**.
In the interactive plot, the regions colored blue or red indicate the class predicted for that region. The shade of the color indicates the probability of that class (darker means higher probability), while yellow regions have the same probability for both classes. The probabilities are based on the number of examples of each class in the region being evaluated.
```
%matplotlib notebook
from figures import plot_tree_interactive
plot_tree_interactive()
```
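The per-region class probabilities mentioned above can also be read off directly with `predict_proba`; a small check on the classifier trained earlier (reusing `clf` and `X_test` from the classification cell):

```
# class proportions in the leaf each point falls into, and the predicted labels
print(clf.predict_proba(X_test[:5]))
print(clf.predict(X_test[:5]))
```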
Decision trees are fast to train, easy to understand, and usually lead to interpretable models. However, a single decision tree often tends to overfit. Playing with the plot above, you can see how the model starts to overfit even before it achieves a good separation of the data.
Therefore, in practice it is more common to combine several trees to produce models that generalize better. The most common methods are random forests and *gradient boosted trees*.
## Random forests
Random forests are simply ensembles of several trees, built from different random subsets of the data (sampled with replacement) and different random subsets of the features (sampled without replacement). This makes the trees different from each other, so each one learns different aspects of the data. In the end, the predictions are averaged, giving a smoother prediction that tends to overfit less.
```
from figures import plot_forest_interactive
plot_forest_interactive()
```
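As a minimal sketch (not part of the original notebook), a random forest can be fitted on the same blobs data used above and compared with the single tree; `X_train`, `X_test`, `y_train`, `y_test` and `clf` come from the earlier classification cell:

```
from sklearn.ensemble import RandomForestClassifier

rf_demo = RandomForestClassifier(n_estimators=100, random_state=0)
rf_demo.fit(X_train, y_train)
print('single tree   test accuracy: {:.3f}'.format(clf.score(X_test, y_test)))
print('random forest test accuracy: {:.3f}'.format(rf_demo.score(X_test, y_test)))
```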
## Choosing the optimal estimator using cross-validation
```
# This code can take quite a while to run
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
digits = load_digits()
X, y = digits.data, digits.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
rf = RandomForestClassifier(n_estimators=200)
parameters = {'max_features':['sqrt', 'log2', 10],
'max_depth':[5, 7, 9]}
clf_grid = GridSearchCV(rf, parameters, n_jobs=-1)
clf_grid.fit(X_train, y_train)
clf_grid.score(X_train, y_train)
clf_grid.score(X_test, y_test)
clf_grid.best_params_
```
## Gradient Boosting
Another useful *ensemble* method is *boosting*. Instead of using, say, 200 estimators in parallel, we build the 200 estimators one by one, so that each one refines the results of the previous one. The idea is that by combining a set of very simple models, the final model ends up better than the individual ones.
```
from sklearn.ensemble import GradientBoostingRegressor
clf = GradientBoostingRegressor(n_estimators=100, max_depth=5, learning_rate=.2)
clf.fit(X_train, y_train)
print(clf.score(X_train, y_train))
print(clf.score(X_test, y_test))
```
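The "one estimator refines the previous" idea can be observed with `staged_predict`, which returns the prediction of the partial ensemble after each boosting stage; a hedged sketch reusing `clf`, `X_test` and `y_test` from the cell above:

```
from sklearn.metrics import mean_squared_error

# error of the partial ensemble after each boosting stage
stage_errors = [mean_squared_error(y_test, y_stage)
                for y_stage in clf.staged_predict(X_test)]
for n in (1, 25, 50, 75, 100):
    print('stages: {:3d}   test MSE: {:.3f}'.format(n, stage_errors[n - 1]))
```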
<div class="alert alert-success">
    <b>Exercise: Cross-validation for Gradient Boosting</b>:
     <ul>
      <li>
      Use a grid search to optimize the `learning_rate` and `max_depth` parameters of a *Gradient Boosted
      Decision Tree* on the handwritten digits dataset.
      </li>
    </ul>
</div>
```
from sklearn.datasets import load_digits
from sklearn.ensemble import GradientBoostingClassifier
digits = load_digits()
X_digits, y_digits = digits.data, digits.target
# split the dataset and apply a grid search
```
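One possible solution sketch for the exercise (the parameter ranges are arbitrary choices, and the search can take a while to run):

```
from sklearn.model_selection import GridSearchCV, train_test_split

X_train, X_test, y_train, y_test = train_test_split(X_digits, y_digits, random_state=42)

param_grid = {'learning_rate': [0.01, 0.1, 0.2, 1.0],
              'max_depth': [1, 3, 5]}
gb_grid = GridSearchCV(GradientBoostingClassifier(), param_grid, n_jobs=-1)
gb_grid.fit(X_train, y_train)
print(gb_grid.best_params_)
print(gb_grid.score(X_test, y_test))
```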
## Feature importance
The ``RandomForest`` and ``GradientBoosting`` classes expose a `feature_importances_` attribute once they have been fitted. This attribute is very important and interesting: it quantifies how much each feature contributes to the performance of the tree.
```
X, y = X_digits[y_digits < 2], y_digits[y_digits < 2]
rf = RandomForestClassifier(n_estimators=300, n_jobs=1)
rf.fit(X, y)
print(rf.feature_importances_)  # one value per feature
plt.figure()
plt.imshow(rf.feature_importances_.reshape(8, 8), cmap=plt.cm.viridis, interpolation='nearest')
```
# Quantization of Signals
*This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).*
## Requantization of a Speech Signal
The following example illustrates the requantization of a speech signal. The signal was originally recorded with a wordlength of $w=16$ bits. It is requantized by a [uniform mid-tread quantizer](linear_uniform_characteristic.ipynb#Mid-Tread-Chacteristic-Curve) to various wordlengths. The signal-to-noise ratio (SNR) after quantization is computed and a portion of the (quantized) signal is plotted. It is also possible to listen to the requantized signal and the quantization error. Note that the level of the quantization error has been normalized for better audibility of the effects.
```
import numpy as np
import matplotlib.pyplot as plt
import soundfile as sf
%matplotlib inline
idx = 130000 # index to start plotting
def uniform_midtread_quantizer(x, w):
# quantization step
Q = 1/(2**(w-1))
# limiter
x = np.copy(x)
idx = np.where(x <= -1)
x[idx] = -1
idx = np.where(x > 1 - Q)
x[idx] = 1 - Q
# linear uniform quantization
xQ = Q * np.floor(x/Q + 1/2)
return xQ
def evaluate_requantization(x, xQ):
e = xQ - x
# SNR
SNR = 10*np.log10(np.var(x)/np.var(e))
print('SNR: {:2.1f} dB'.format(SNR))
# plot signals
plt.figure(figsize=(10, 4))
plt.plot(x[idx:idx+100], label=r'signal $x[k]$')
plt.plot(xQ[idx:idx+100], label=r'requantized signal $x_Q[k]$')
plt.plot(e[idx:idx+100], label=r'quantization error $e[k]$')
plt.xlabel(r'sample index $k$')
plt.grid()
plt.legend()
# normalize error
e = .2 * e / np.max(np.abs(e))
return e
# load speech sample
x, fs = sf.read('../data/speech.wav')
# normalize sample
x = x/np.max(np.abs(x))
```
**Original Signal**
<audio src="../data/speech.wav" controls>Your browser does not support the audio element.</audio>
[../data/speech.wav](../data/speech.wav)
### Requantization to 8 bit
```
xQ = uniform_midtread_quantizer(x, 8)
e = evaluate_requantization(x, xQ)
sf.write('speech_8bit.wav', xQ, fs)
sf.write('speech_8bit_error.wav', e, fs)
```
**Requantized Signal**
<audio src="speech_8bit.wav" controls>Your browser does not support the audio element.</audio>
[speech_8bit.wav](speech_8bit.wav)
**Quantization Error**
<audio src="speech_8bit_error.wav" controls>Your browser does not support the audio element.</audio>
[speech_8bit_error.wav](speech_8bit_error.wav)
### Requantization to 6 bit
```
xQ = uniform_midtread_quantizer(x, 6)
e = evaluate_requantization(x, xQ)
sf.write('speech_6bit.wav', xQ, fs)
sf.write('speech_6bit_error.wav', e, fs)
```
**Requantized Signal**
<audio src="speech_6bit.wav" controls>Your browser does not support the audio element.</audio>
[speech_6bit.wav](speech_6bit.wav)
**Quantization Error**
<audio src="speech_6bit_error.wav" controls>Your browser does not support the audio element.</audio>
[speech_6bit_error.wav](speech_6bit_error.wav)
### Requantization to 4 bit
```
xQ = uniform_midtread_quantizer(x, 4)
e = evaluate_requantization(x, xQ)
sf.write('speech_4bit.wav', xQ, fs)
sf.write('speech_4bit_error.wav', e, fs)
```
**Requantized Signal**
<audio src="speech_4bit.wav" controls>Your browser does not support the audio element.</audio>
[speech_4bit.wav](speech_4bit.wav)
**Quantization Error**
<audio src="speech_4bit_error.wav" controls>Your browser does not support the audio element.</audio>
[speech_4bit_error.wav](speech_4bit_error.wav)
### Requantization to 2 bit
```
xQ = uniform_midtread_quantizer(x, 2)
e = evaluate_requantization(x, xQ)
sf.write('speech_2bit.wav', xQ, fs)
sf.write('speech_2bit_error.wav', e, fs)
```
**Requantized Signal**
<audio src="speech_2bit.wav" controls>Your browser does not support the audio element.</audio>
[speech_2bit.wav](speech_2bit.wav)
**Quantization Error**
<audio src="speech_2bit_error.wav" controls>Your browser does not support the audio element.</audio>
[speech_2bit_error.wav](speech_2bit_error.wav)
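As a rough cross-check (not part of the original notebook), the SNR of a uniform quantizer is expected to improve by roughly 6 dB per additional bit, i.e. by about 12 dB per 2-bit step as used above. The absolute SNR values depend on the level statistics of the speech signal, so only the differences between successive wordlengths should follow this rule of thumb. The small loop below recomputes the SNR with the functions and signal defined earlier:

```
def snr_db(x, xQ):
    '''compute the SNR of the requantized signal in dB'''
    e = xQ - x
    return 10 * np.log10(np.var(x) / np.var(e))

prev = None
for w in (8, 6, 4, 2):
    snr = snr_db(x, uniform_midtread_quantizer(x, w))
    step = '' if prev is None else '   ({:+.1f} dB vs. previous)'.format(snr - prev)
    print('w = {} bit: SNR = {:5.1f} dB{}'.format(w, snr, step))
    prev = snr
```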
**Copyright**
This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Digital Signal Processing - Lecture notes featuring computational examples*.
# **Boston BLUE bikes Analysis**
Team Member: Zhangcheng Guo, Chang-Han Chen, Ziqi Shan, Tsung Yen Wu, Jiahui Xu
### Topic Background and Motivation
>A rapidly growing industry, bike-sharing is replacing traditional bike rentals. BLUE bikes' rental procedures are fully automated, from picking up and returning bikes to making payments. With bike-sharing businesses like BLUE bikes, users can easily rent a bike at one location and return it at another without human intervention. Currently there are over 500 bike-sharing programs around the world, comprising over 2.3 million bicycles. In Boston, BLUE bikes has over 300 bike stations and 5,000 bikes in service. With growing station coverage in Boston, BLUE bikes can bring more convenience and, therefore, promote more usage.
>Moreover, BLUE bikes promotes "going green", which has become a popular way of commuting in response to climate change. BLUE bikes' business model provides an option to go green and encourages more physical activity. It also removes the worry of stolen bikes for users, a common concern in Boston.
### Project Objective
>With such good causes behind BLUE bikes, we are further incentivized to learn more about the business and to align our objective with BLUE bikes' cause. We aim to help maximize bike trips for BLUE bikes, providing a healthier and more eco-friendly way of commuting, by looking in depth at the potential variables that affect trip volume.
### Dataset Information and Processing
#### Data Profile
>'Bluebikes trip data' contains monthly bike trip data and includes:
>
>- Trip Duration (seconds)
- Start Time and Date
- Stop Time and Date
- Start Station Name & ID
- End Station Name & ID
- Bike ID
- User Type (Casual = Single Trip or Day Pass user; Member = Annual or Monthly Member)
- Birth Year
- Gender, self-reported by the user (0=unknown; 1=male; 2=female)
>
>
>
>In addition to Bluebikes's data, weather information from NOAA is merged into original dataset considering impact of weather on bike rentals.
- TAVG - average temperature for the day (in whole degrees Fahrenheit). This is based on the arithmetic average of the maximum and minimum temperature for the day.
- SNOW - daily amount of snowfall (in inches to the tenths)
- AWND - average wind speed (in miles per hour, to tenths)
- PRCP - precipitation (in inches to the tenths)
>
>
>Two new columns are added to gain further information on each station:
- District
- Total Docks (of each station)
#### Dataset Source
>Bluebikes Trip Data, current_bluebikes_stations: https://s3.amazonaws.com/hubway-data/index.html
>
>NOAA Boston 2019 Weather Dataset: <https://www.ncdc.noaa.gov/cdo-web/datasets/GHCND/stations/GHCND:USW00014739/detail>
#### Raw Datasets
Here are sneak peeks of the datasets. Please note that our datasets are stored in a private Google Drive. To request access, please email kguo@bu.edu. No commercial use allowed.
```
from google.colab import drive
drive.mount('/content/drive')
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
## Sneak peek of the BLUEbikes dataset
blueraw = pd.read_csv('/content/drive/Shareddrives/MSBA BA 775 Team 1/Bluebikes/bb2019.csv',index_col=0);
blueraw.head(3)
## Sneak peek of the NOAA Boston weather dataset
bosweather = pd.read_csv('/content/drive/Shareddrives/MSBA BA 775 Team 1/Bluebikes/BosWeather19.csv',index_col=0);
bosweather.head(3)
## Sneak peek of the district & docks dataset
bikestations=pd.read_csv('/content/drive/Shareddrives/MSBA BA 775 Team 1/Bluebikes/current_bluebikes_stations.csv',index_col=0,header = 1)
bikestations.head(3)
```
#### Data Preprocessing
The 'blueraw' dataset contains some null cells and is processed and modified as follows (a hedged code sketch of these steps appears after this list):
- 'age':
An 'age' column is created as [current year - birth year] for better visualization. Some cells record a user age above 80 years, which we suspect to be manual input errors. Ages over 80 are therefore replaced by the age median of 31, removing outliers without affecting the age distribution.
- 'bikeid':
The bikeid column is converted from int64 to str so it can be treated as a categorical variable.
- 'gender':
The gender column, originally recorded as 0, 1, 2, is mapped to 'U' (unknown), 'M' (male) and 'F' (female) so it is easier to treat as a categorical variable.
- 'Holiday':
'Holiday' is created from the start-time information and records whether the day is a weekend or federal holiday (1) or a workday (0).
- 'District' and 'End District':
These two columns contain some missing district cells after merging with 'bike stations'. This is because the 'bike stations' dataset records the latest stations, while the 'blueraw' dataset records trips that occurred in 2019; with the rapid growth of the BLUE bikes business, some stations were removed or added between 2019 and now.
For stations that can still be found on Bluebikes, district names are added manually.
Rows whose start or end station is 'Mobile Temporary Station', 'Warehouse Bonfire Station in a Box', 'MTL-ECO4-01' or 'Warehouse Lab PBSC' are removed (83 rows).
'blueraw' is then merged with 'bosweather' and 'bikestations' and saved as 'Blue19', the main dataset, shown below.
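A minimal sketch of the preprocessing described above (not the exact code used to build 'Blue19'; column names such as 'birth year' and 'starttime' follow the public Bluebikes schema and are assumptions):

```
from pandas.tseries.holiday import USFederalHolidayCalendar

# Hedged sketch of the preprocessing steps above; column names are assumed.
prep = blueraw.copy()

# 'age' from birth year (2019 is the data year), replacing suspicious values (> 80) with the median (31)
prep['age'] = 2019 - prep['birth year']
prep.loc[prep['age'] > 80, 'age'] = 31

# treat bike id as a categorical (string) variable
prep['bikeid'] = prep['bikeid'].astype(str)

# map numeric gender codes to labels
prep['gender'] = prep['gender'].map({0: 'U', 1: 'M', 2: 'F'})

# 'Holiday': 1 for weekends or federal holidays, 0 for workdays
start = pd.to_datetime(prep['starttime'])
fed_holidays = USFederalHolidayCalendar().holidays(start='2019-01-01', end='2019-12-31')
prep['Holiday'] = ((start.dt.dayofweek >= 5) | start.dt.normalize().isin(fed_holidays)).astype(int)
```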
```
Blue19=pd.read_csv('/content/drive/Shareddrives/MSBA BA 775 Team 1/Bluebikes/Blue19.csv',index_col=0);
Blue19['starttime'] = pd.to_datetime(Blue19['starttime']) ##Convert time variable to datetime
Blue19['stoptime'] = pd.to_datetime(Blue19['stoptime'])
Blue19['Date'] = pd.to_datetime(Blue19['Date'])
Blue19.head(3)
Blue19.info()
# check missing data
Blue19.dropna(how= 'any', inplace =True) ## delete cells that are na, 83 rows of empty district names
pd.isnull(Blue19).sum()
```
### Data Behavior
##### **User Demographics**
```
genderdis = Blue19.groupby('gender').count()['starttime'] ##age distribution
userdis = Blue19.groupby('usertype').count()['starttime'] ##User Type Distribution
fig, axis = plt.subplots(1, 2, figsize=(15,7))
plt.ylabel('')
axis[0].set_title("Gender Distribution",fontsize = 20)
axis[1].set_title("User Type Distribution",fontsize = 20)
colors=['lightblue','steelblue','darkseagreen']
ax = genderdis.plot.pie(autopct="%.1f%%", ax = axis[0],colors=colors,fontsize = 18);
ax = userdis.plot.pie(autopct="%.1f%%", ax = axis[1],colors=colors, fontsize = 18);
plt.xlabel(' ');
plt.ylabel(' ');
mkt= Blue19[['bikeid','usertype', 'age', 'gender']]
pd.DataFrame(mkt['age'].agg({'mean','median','std'}))
g=sns.catplot(data=mkt,x='gender',y='age',kind='box',aspect=3,palette='Blues_r');
plt.title('Age Distribution by Gender', fontsize= 20);
g.fig.set_figwidth(12)
g.fig.set_figheight(10)
```
* As shown above, 65.15% of users are male, and subscribers make up 78.83% of the user base.
The average age is 35 and the median is 31, with a standard deviation of 11.55. Splitting users into subscribers and non-subscribers, subscribers are around 34 years old on average, whereas non-subscribers are around 41. The difference in age between men and women is not significant: men are around 34 years old on average and women around 33.
#### **Time Variables**
> * The goal is to know whether there are more trips on weekdays or on weekends. To do this, we counted the average trip counts for each day of the week. The result shows that, on average, there are between 7,000 and 7,500 trips from Monday to Friday (weekdays), while there are roughly 6,000 on Saturday and Sunday (weekends). It seems that BLUEbikes are more popular on weekdays than on weekends.
* In addition, the average usage times are longer on weekends than on weekdays. On weekdays a user spends around 15 to 17 minutes on a BLUEbike on average, while on weekends a user spends roughly 22 to 23 minutes. This may be explained by there being more commuters on weekdays who choose BLUEbikes for short, flexible trips: for example, commuters who take the subway to work or school may use BLUEbikes to connect between the subway station and their workplace or school, so the distances and durations travelled on weekdays tend to be short. On weekends, however, there is probably more recreational use of BLUEbikes, such as riding along the coast, which encourages users to ride regardless of how much time is spent.
###### Data Processing
```
blue19_dur = Blue19[['tripduration', 'starttime', 'stoptime', 'Date']]
blue19_dur['starttime'] = pd.to_datetime(blue19_dur['starttime']);
blue19_dur['stoptime'] = pd.to_datetime(blue19_dur['stoptime']);
blue19_dur['Date'] = pd.to_datetime(blue19_dur['Date']);
blue19_dur['duration_manual'] = (blue19_dur['stoptime'] - blue19_dur['starttime']).astype('timedelta64[s]')
blue19_dur[blue19_dur['tripduration'] != blue19_dur['duration_manual']].count()
blue19_dur_diff = blue19_dur[blue19_dur['tripduration'] != blue19_dur['duration_manual']]
(blue19_dur_diff['duration_manual'] - blue19_dur_diff['tripduration']).value_counts()
blue19_dur.loc[Blue19['tripduration'] != blue19_dur['duration_manual'], 'Date'].unique().astype('datetime64[D]');
blue19_dur_clean = blue19_dur[(blue19_dur['tripduration'] == blue19_dur['duration_manual']) & (blue19_dur['tripduration'] <= 86400)]
blue19_dur_clean['start_dayofweek'] = blue19_dur_clean['starttime'].dt.dayofweek
blue19_dur_clean['start_hour'] = blue19_dur_clean['starttime'].dt.hour
blue19_dur_clean.groupby('start_dayofweek')['tripduration'].count()/blue19_dur_clean.groupby('start_dayofweek')['Date'].nunique();
blue19_dur_clean.groupby('start_dayofweek')['tripduration'].mean();
dw_hr_count = (blue19_dur_clean.groupby(['start_dayofweek', 'start_hour'])['tripduration'].count()/blue19_dur_clean.groupby(['start_dayofweek', 'start_hour'])['Date'].nunique()).reset_index(name = 'trip_count').sort_values('trip_count', ascending = False)
print(dw_hr_count.head(10))
```
###### **Hourly behavior**
* Longer trips at midnight
* Users spend less time in the early morning on weekdays
* The average trip duration is about 26 minutes = 1560 seconds
```
blue19_dur_clean.groupby(['start_dayofweek', 'start_hour'])['tripduration'].mean().reset_index().sort_values('tripduration', ascending = False).head(10)
dw_hr_duration = blue19_dur_clean.groupby(['start_dayofweek', 'start_hour'])['tripduration'].mean().reset_index().sort_values('tripduration', ascending = True)
print(dw_hr_duration.head(10))
dw_hr_duration['start_dayofweek'] = dw_hr_duration['start_dayofweek'].astype('category')
sns.relplot(x = 'start_hour', y = 'tripduration', data = dw_hr_duration, hue = 'start_dayofweek', kind = 'line',linewidth = 2, palette=['lightblue','blue','dodgerblue','darkblue','teal','darkred','red']).fig.suptitle('Hourly Trip Durations by Day of Week',fontsize = 15);
```
###### **Weekly behavior**
* More popular on weekdays than on weekends
* Average usage times are longer on weekends
* 8 am and 5 pm are the most popular hours
Note that `dt.dayofweek` encodes the days as follows: <br><br/>
0: Monday; 1: Tuesday; 2: Wednesday; 3: Thursday; 4: Friday; 5: Saturday; 6: Sunday
```
dw_hr_count['start_dayofweek'] = dw_hr_count['start_dayofweek'].astype('category')
sns.relplot(x = 'start_hour', y = 'trip_count', data = dw_hr_count, hue = 'start_dayofweek', kind = 'line', palette=['lightblue','blue','dodgerblue','darkblue','teal','darkred','red'], linewidth = 2).fig.suptitle('Hourly Trip Counts by Day of Week',fontsize = 15);
```
###### **Holiday**
* Fewer rides on holidays
* People prefer riding on working days
```
# with_cnt: table with a new column called 'count', which represent the daily total count
# new: table that only contain 'date', 'count', 'season', 'Holiday'
with_cnt=pd.read_csv('/content/drive/Shareddrives/MSBA BA 775 Team 1/Bluebikes/with_cnt.csv',index_col=0)
new=pd.read_csv('/content/drive/Shareddrives/MSBA BA 775 Team 1/Bluebikes/new.csv',index_col=0)
holiday=with_cnt.groupby('Holiday')[['count']].mean()
holiday.rename(columns={'count':'count_mean'},inplace=True)
holiday=holiday.reset_index()
holiday
sns.catplot(x='Holiday',y='count', data=with_cnt, kind='box',palette='Blues_r' )
plt.title('Holiday Trip Counts', fontsize=15)
plt.xlabel("Holiday",fontsize=12);
plt.ylabel("Trip Counts",fontsize=12);
```
###### **Monthly**
* More bike use in September, August and July
* Seasonal pattern
```
month=new.groupby('month')[['count']].sum()
month=month.reset_index().sort_values(by='count',ascending=False)
month['percentage']=round(month['count']/with_cnt.shape[0]*100,2)
month['mean']=round(month['count']/30,2)
month
sns.catplot(x='month',y='mean', data=month, kind='bar',ci=None,palette = 'Blues_r')
plt.title('Average Bike Use On Month', fontsize=15)
plt.ylabel('mean count')
```
###### **Season**
* More bike use in summer and autumn
* Good weather and more opportunities to ride
```
season=new.groupby('season').agg({'count':['sum','mean']})
season=season.reset_index()
season
sns.catplot(x='season',y='count', data=with_cnt,kind='box', palette='Blues_r');
plt.title('Seasonal Trip Counts', fontsize= 15);
plt.xlabel("Season",fontsize=12);
plt.ylabel("Trip Counts",fontsize=12);
```
##### **Weather**
To observe the relationship between trip counts and weather factors such as wind, rain, snow and temperature, a dual-axis plot is created for each category so the relationships can be seen visually.
```
aggweather = {'tripduration':'count','AWND':'mean','PRCP':'mean', 'SNOW':'mean', 'TAVG':'mean'}
weathercount =Blue19.resample('M', on = 'starttime').agg(aggweather).reset_index()
weathercount = weathercount.rename(columns = {'starttime':'month','tripduration':'count'})
fig, axis = plt.subplots(2, 2, figsize=(15,10))
axis[0][0].set_title("TAVG",fontsize = 20)
axis[0][1].set_title("Wind",fontsize = 20)
axis[1][0].set_title("Snow",fontsize = 20)
axis[1][1].set_title("Precipitation",fontsize = 20)
fig.suptitle('Trip Counts and Weather Conditions 2019', fontsize = 25)
evenly_spaced_interval = np.linspace(0, 1, 8)
colors = [plt.cm.Blues_r(x) for x in evenly_spaced_interval]
ax = weathercount.plot(x = 'month' , y = 'TAVG',legend = False, ax =axis [0][0],linewidth = 5,fontsize = 12,color = colors[1])
ax2 = ax.twinx()
weathercount.plot(x="month", y="count", ax=ax2,legend = False, color="r",linewidth = 2,fontsize = 12)
axa = weathercount.plot(x = 'month' , y = 'AWND',legend = False, ax =axis [0][1],linewidth = 5,fontsize = 12,color = colors[2])
ax2 = axa.twinx()
weathercount.plot(x="month", y="count", ax=ax2,legend = False, color="r",linewidth = 2,fontsize = 12)
axb = weathercount.plot(x = 'month' , y = 'SNOW',legend = False, ax =axis [1][0],linewidth = 5,fontsize = 12,color = colors[3])
ax2 = axb.twinx()
weathercount.plot(x="month", y="count", ax=ax2,legend = False, color="r",linewidth = 2,fontsize = 12)
axc = weathercount.plot(x = 'month' , y = 'PRCP',legend = False, ax =axis [1][1],linewidth = 5,fontsize = 12,color = colors[4])
ax2 = axc.twinx()
weathercount.plot(x="month", y="count", ax=ax2,legend = False, color="r",linewidth = 2,fontsize = 12)
ax.figure.legend(fontsize = 12);
```
From the graphs presented above, it is clear that higher average temperatures coincide with more bike rentals. The opposite holds for snowfall: when there is more snow, there are fewer bike trips. The same applies to average wind speed, which in Boston is seasonal, higher in the winter months and lower in the summer months.
It is reasonable to state that temperature, snowfall, and wind speed are seasonal, which is used later in the analysis: the seasonal factor can be considered when predicting the number of bike rentals.
As for precipitation, the relationship is not very clear from the whole-year view; monthly line plots are therefore drawn to look more closely at the relationship between precipitation and trip counts.
```
import datetime
import calendar
aggprcp = {'tripduration':'count','PRCP':'mean'}
prcpcount = Blue19.resample('D', on = 'starttime').agg(aggprcp).reset_index()
prcpcount['monthcat'] = pd.DatetimeIndex(prcpcount['starttime']).month
prcpcount = prcpcount.rename(columns = {'tripduration':'count'})
evenly_spaced_interval = np.linspace(0, 1, 12)
colors = [plt.cm.Blues_r(x) for x in evenly_spaced_interval]
fig, axis = plt.subplots(4, 3, figsize=(20,12))
axis = axis.ravel()
fig.suptitle('Monthly Trip Counts and Precipitation', fontsize = 25)
for i in prcpcount['monthcat'].unique():
prcp = prcpcount[prcpcount['monthcat'] == i]
ax = prcp.plot(x = 'starttime' , y = 'PRCP',legend =False, ax =axis [i-1],color = colors[4], linewidth = 4);
ax.set(xlabel = 'Time (day in month)')
ax2 = ax.twinx()
prcp.plot(x="starttime", y="count", ax=ax2, color="r",legend =False, linewidth = 2);
axis[i-1].set_title(calendar.month_name[i],fontsize = 18)
handles, labels = ax.get_legend_handles_labels()
lgd = dict(zip(labels, handles))
handles2, labels2 = ax2.get_legend_handles_labels()
lgd[labels2[0]] = handles2[0]
ax.figure.legend(lgd.values(), lgd.keys(), fontsize =20);
```
From the plots above, it is apparent that when the blue line peaks, the red line drops. In other words, when higher rainfall is observed, trip counts decrease accordingly. Although a direct causal relationship cannot be assumed, it is reasonable to state that whether or not it rains is a factor affecting Bluebike rentals.
#### **Location**
To observe the relationship between trip counts and location, start station, end station, and bike dock data were extracted for analysis.
###### Data Processing
```
stations = pd.read_csv('/content/drive/Shareddrives/MSBA BA 775 Team 1/Bluebikes/current_bluebikes_stations.csv',index_col=0,skiprows=1);
popular_start=Blue19.loc[:,['start station name']].value_counts(ascending =False).to_frame().reset_index()
popular_start.columns = [ 'start station name', 'trip counts']
popular_end=Blue19.loc[:,['end station name']].value_counts(ascending = False).to_frame().reset_index()
popular_end.columns = [ 'end station name', 'trip counts']
start_docks=popular_start.merge(stations, left_on = 'start station name', right_on = 'Name', how = 'left')
start_docks =start_docks[['start station name', 'trip counts','Total docks']]
end_docks=popular_end.merge(stations, left_on = 'end station name', right_on = 'Name', how = 'left')
end_docks =end_docks[['end station name', 'trip counts','Total docks']]
#### District Dock Count
districtdock = bikestations.groupby('District').sum().reset_index().sort_values(by = 'Total docks',ascending =False)
districtdock = districtdock[['District','Total docks']]
#### District Trip Count
districtct = Blue19.groupby('District').count().reset_index().sort_values(by = 'starttime',ascending =False)
districtct.rename({'age':'Trip Count'}, axis = 1, inplace = True)
districtct = districtct[['District', 'Trip Count']]
#### District Station Count
station_count=Blue19.pivot_table(values='start station name', index=['District'], aggfunc=pd.Series.nunique)
#### Merged Table
disbar = districtct.merge(station_count, how = 'left', on ='District')
disbar.rename({'start station name': 'Station Count'}, axis = 1, inplace = True)
disbar =disbar.merge(districtdock, how = 'left', on = 'District')
disbar
```
###### District Information
```
disbar
from matplotlib.lines import Line2D
fig, axis = plt.subplots(1, 2, figsize=(17,6))
axis[0].set_title("District Trip Count vs. Dock Counts",fontsize = 20)
axis[1].set_title("District Trip Count vs. Station Counts",fontsize = 20)
ax = sns.pointplot(x ='District', y='Trip Count' ,data=disbar, ax=axis[0])
ax2 = ax.twinx()
ax =sns.pointplot( x ='District', y='Total docks', data=disbar, ax = ax2, color = 'lightcoral')
custom_lines = [Line2D([0], [0], lw=2),
Line2D([0], [0], color='lightcoral', lw=2)]
ax.legend(custom_lines, ['Trip Count', 'Total Docks'],fontsize = 15);
axa = sns.pointplot(x ='District', y='Trip Count' ,data=disbar,ax=axis[1])
ax2 = axa.twinx()
axb =sns.pointplot( x ='District', y='Station Count', data=disbar, ax = ax2, color = 'darkseagreen')
custom_lines = [Line2D([0], [0], lw=2),
Line2D([0], [0], color='darkseagreen', lw=2)]
axb.legend(custom_lines, ['Trip Count', 'Station Count'],fontsize = 15);
```
> Looking at the two point plots of dock counts and station counts against trip counts, both trends roughly align, except that Cambridge shows a dip relative to demand, which could lead to a supply shortage of bikes there.
###### Station Information
```
start_docks=popular_start.merge(stations, left_on = 'start station name', right_on = 'Name', how = 'left')
start_docks =start_docks[['start station name', 'trip counts','Total docks']]
start_docks.head()
end_docks=popular_end.merge(stations, left_on = 'end station name', right_on = 'Name', how = 'left')
end_docks =end_docks[['end station name', 'trip counts','Total docks']]
end_docks.head()
```
> Boston and Cambridge have the highest trip counts when grouped by district. From the charts, the dock counts of the most frequently used start stations and end stations do not fully align with their trip counts. Still, higher trip counts generally come with more bike docks, so matching dock supply to trip demand, together with geographic location, would help BLUEbikes optimize its operations.
#### **Correlation Matrix**
* Season and average temperature are highly correlated with daily count
* Choose features with absolute scores > 0.15 to put in our model
```
cor_table= pd.read_csv('/content/drive/Shareddrives/MSBA BA 775 Team 1/Bluebikes/cor_table.csv', index_col=0)
a=cor_table.copy()
dic={'winter':0, 'spring':1,'summer':2,'autumn':3, 'Subscriber':1, 'Customer':0,'Cambridge':0, 'Boston':1, 'Somerville':2, 'Brookline':3, 'Everett':4, 'nan':5, 'U':0,'M':1, 'F':2}
c=a.replace(dic)
c.head(3)
corrMatrix =c.corr() ##correlation calculation
cmap=sns.diverging_palette(245, 1, as_cmap=True, n = 6,)
def magnify():
return [dict(selector="th",
props=[("font-size", "7pt")]),
dict(selector="td",
props=[('padding', "0em 0em")]),
dict(selector="th:hover",
props=[("font-size", "12pt")]),
dict(selector="tr:hover td:hover",
props=[('max-width', '200px'),
('font-size', '12pt')])
]
corrMatrix.style.background_gradient(cmap, axis=1,vmin=-0.6)\
.set_properties(**{'max-width': '80px', 'font-size': '10pt'})\
.set_caption("Correlation")\
.set_precision(2)\
.set_table_styles(magnify())\
mask = np.zeros_like(corrMatrix)
mask[np.triu_indices_from(mask)] = True
with sns.axes_style("white"):
f, ax = plt.subplots(figsize=(7, 5))
ax = sns.heatmap(corrMatrix, mask=mask, vmax=.3, square=True, cmap='Blues')
```
> From the correlation heat map, month, season and average temperature show the strongest correlation with trip counts, followed by snow, precipitation and wind speed. These factors are used in the regression models described in a later section. The month variable is not included since it encodes the time sequence.
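A hedged sketch of the feature screening mentioned in the bullet above (it assumes the daily trip count column in `cor_table` / `corrMatrix` is named 'count'; adjust the name if it differs):

```
# keep features whose absolute correlation with the daily count exceeds 0.15
count_corr = corrMatrix['count'].drop('count')
selected_features = count_corr[count_corr.abs() > 0.15].index.tolist()
print(selected_features)
```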
### Predicting with Models
#### Machine Learning - Trip Counts Prediction
First, we need to sort out useful features and the label for machine learning. However, due to our limited knowledge of time-series analysis with the scikit-learn package, the time-related features are temporarily discarded so that simple linear regression models can be applied to this dataset.
#### Machine Learning Dataset Processing
```
# Copy from original dataset
BlueML = Blue19.copy()
# Select feasible predictors and create dummy variables for categorical variables
BlueML_pre = pd.get_dummies(BlueML[['tripduration', 'starttime', 'month', 'season', 'Holiday', 'gender', 'age', 'AWND', 'PRCP',
'SNOW', 'TAVG', 'District']], drop_first = True)
# Resample data on a daily basis
BlueML_1 = BlueML_pre.resample('D', on = 'starttime').agg({'tripduration':len, 'Holiday':np.mean,
'AWND':np.mean, 'PRCP':np.mean, 'SNOW':np.mean, 'TAVG':np.mean,
'season_spring':np.mean, 'season_summer':np.mean, 'season_winter':np.mean,
'District_Brookline':np.mean, 'District_Cambridge':np.mean,
'District_Everett':np.mean, 'District_Somerville':np.mean})
BlueML_1.columns = ['trip_count', 'Holiday', 'AWND', 'PRCP', 'SNOW', 'TAVG',
'season_spring', 'season_summer', 'season_winter', 'District_Brookline', 'District_Cambridge',
'District_Everett', 'District_Somerville']
# Remove NAs after resampling
BlueML_1 = BlueML_1[BlueML_1['AWND'].notnull()]
# Import scikit-learn packages
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score
```
##### First Model
According to the correlation analysis, both season and temperature are features having the highest correlation with daily trip counts. So, in the first model, the two features are included to predict daily trip counts.
The root mean squared error of the first model is 2,088.75 and the R-squared is 0.66. It turns out that season and temperature have some predictive power but are still not strong enough.
```
# First try: Use season and temperature to predict trip counts
X1_df = BlueML_1[['season_spring', 'season_summer', 'season_winter', 'TAVG']]
y1_df = BlueML_1['trip_count']
# Choose 30% of data as testing data
X1_train, X1_test, y1_train, y1_test = train_test_split(X1_df, y1_df, test_size = .3, random_state = 833)
# Fit the linear regression model and predict y_test
model_1 = LinearRegression()
model_1.fit(X1_train, y1_train)
y1_model = model_1.predict(X1_test)
# Calculate mean_squared_error and r^2 score
rmse1 = np.sqrt(mean_squared_error(y1_test, y1_model))
r2_1 = r2_score(y1_test, y1_model)
print('The RMSE is {}.'.format(round(rmse1, 2)))
print('The R2_score is {}.'.format(round(r2_1, 2)))
# Plot actual y vs. predicted y
sns.relplot(x = y1_test, y = y1_model, kind = 'scatter')
plt.xlabel('Actual Trip Count')
plt.ylabel('Predicted Trip Count')
plt.title('Linear Regression (Actual vs. Predicted)')
plt.plot([0, 14000], [0, 14000], linewidth = 1, c = 'red', linestyle = '--')
```
##### Second Model
Next, other variables also have some correlation with daily trip counts, and it is worth checking whether the model predicts the label better when they are added. So 'AWND' (average wind speed), 'PRCP' (precipitation), 'SNOW' (snowfall), and 'Holiday' (whether the day is a holiday) are included when training the model, to see if daily trip counts can be predicted better.
This time the RMSE decreases to 1,563.58 and the R-squared rises to 0.81. With the additional weather features and the 'Holiday' variable, the RMSE and R-squared are greatly improved. Also, in the scatter plot of actual versus predicted test data, the dots are more concentrated around the 45-degree line, meaning the predicted daily trip counts are closer to the actual ones.
```
# Next model: Use season, weather, and holiday features to predict trip counts
X2_df = BlueML_1[['season_spring', 'season_summer', 'season_winter', 'Holiday', 'AWND', 'PRCP', 'SNOW', 'TAVG']]
y2_df = BlueML_1['trip_count']
# Choose 20% of data as testing data
X2_train, X2_test, y2_train, y2_test = train_test_split(X2_df, y2_df, test_size = .2, random_state = 833)
# Fit the linear regression model and predict y_test
model_2 = LinearRegression()
model_2.fit(X2_train, y2_train)
y2_model = model_2.predict(X2_test)
# Calculate mean_squared_error and r^2 score
rmse2 = np.sqrt(mean_squared_error(y2_test, y2_model))
r2_2 = r2_score(y2_test, y2_model)
print('The RMSE is {}.'.format(round(rmse2, 2)))
print('The R2_score is {}.'.format(round(r2_2, 2)))
# Plot actual y vs. predicted y
sns.relplot(x = y2_test, y = y2_model, kind = 'scatter')
plt.xlabel('Actual Trip Count')
plt.ylabel('Predicted Trip Count')
plt.title('Linear Regression (Actual vs. Predicted)')
plt.plot([0, 14000], [0, 14000], linewidth=1, c='red', linestyle='--')
```
#### Machine Learning - Daily Bike Inflow or Outflow of District
Also, we are interested in whether the features in the dataset can be used to predict the daily inflow and outflow of bikes in each district. For example, if many bikes flow out of the Boston district into the Somerville district, then the BLUEbikes company should dispatch bikes from other districts back to Boston to keep the supply sufficient.
##### Load Data
```
# Load data: Difference Count of bike IDs between those start at District X and those end at District X
BlueReg = pd.read_csv('/content/drive/Shareddrives/MSBA BA 775 Team 1/Bluebikes/Bluereg.csv', index_col = 0);
BlueReg.head()
# Discard one dummy variable of the same categorical varible to avoid multicollinearity
BlueReg_pre = BlueReg.drop(['District_Somerville', 'season_winter'], axis = 1)
```
##### Fit the Model
Now we use the weather, season, district, and holiday features to predict the daily bike inflow or outflow of each district. It turns out that the target variable is not well explained by these predictors: the RMSE is 18.59 while the R-squared is 0.18. The RMSE seems low, but the standard deviation of this target variable is only 21.18.
Despite the limits of our knowledge of more advanced scikit-learn topics, we suggest that making use of time-series analysis would probably produce a better result.
```
# Select feasible features for LinearRegression
X3_df = BlueReg_pre[['PRCP', 'SNOW', 'TAVG', 'AWND', 'season_spring', 'season_summer', 'season_autumn', 'District_Boston',
'District_Brookline', 'District_Cambridge', 'District_Everett', 'Holiday']]
y3_df = BlueReg_pre['Bike Count Difference']
# Choose 30% of data as testing data
X3_train, X3_test, y3_train, y3_test = train_test_split(X3_df, y3_df, test_size = .3, random_state = 833)
# Fit the linear regression model and predict y_test
model_3 = LinearRegression()
model_3.fit(X3_train, y3_train)
y3_model = model_3.predict(X3_test)
# Import mean_squared_error and r^2 score from scikit-learn and calculate
rmse3 = np.sqrt(mean_squared_error(y3_test, y3_model))
r2_3 = r2_score(y3_test, y3_model)
print('The RMSE is {}.'.format(round(rmse3, 2)))
print('The R2_score is {}.'.format(round(r2_3, 2)))
# Plot actual y vs. predicted y
sns.relplot(x = y3_test, y = y3_model, kind = 'scatter')
plt.xlabel('Actual Trip Count')
plt.ylabel('Predicted Trip Count')
plt.title('Linear Regression (Actual vs. Predicted)')
plt.plot([-100, 100], [-100, 100], linewidth=1, c='red', linestyle='--')
plt.xlim((-120,80))
# Zoom in to see if there is any pattern
# Plot actual y vs. predicted y
sns.relplot(x = y3_test, y = y3_model, kind = 'scatter')
plt.xlabel('Actual Trip Count')
plt.ylabel('Predicted Trip Count')
plt.title('Linear Regression (Actual vs. Predicted)')
plt.xlim((-30,30))
plt.ylim((-30,30))
plt.plot([-100, 100], [-100, 100], linewidth=1, c='red', linestyle='--')
# The standard deviation of the target variable
print(round(BlueReg_pre['Bike Count Difference'].std(), 2))
```
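As noted above, time-series techniques were out of scope for this project. Purely as a hedged sketch (not part of the original analysis), one simple step in that direction for the daily trip-count model is to add lagged counts as extra features and to use a chronological split instead of a random one:

```
# add the previous day's and previous week's counts as features (sketch only)
ts = BlueML_1.copy()
ts['count_lag1'] = ts['trip_count'].shift(1)
ts['count_lag7'] = ts['trip_count'].shift(7)
ts = ts.dropna()

feat_cols = ['season_spring', 'season_summer', 'season_winter', 'Holiday',
             'AWND', 'PRCP', 'SNOW', 'TAVG', 'count_lag1', 'count_lag7']
split = int(len(ts) * 0.7)  # chronological 70/30 split
X_tr, X_te = ts[feat_cols].iloc[:split], ts[feat_cols].iloc[split:]
y_tr, y_te = ts['trip_count'].iloc[:split], ts['trip_count'].iloc[split:]

lag_model = LinearRegression().fit(X_tr, y_tr)
print('RMSE with lag features:',
      round(np.sqrt(mean_squared_error(y_te, lag_model.predict(X_te))), 2))
```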
### Conclusion
> From the analysis presented above, it can be concluded that trip counts are affected by seasonal and weather factors such as precipitation, wind speed, snow, and temperature. Whether or not a day is a holiday also affects trip counts. Our scikit-learn LinearRegression model for daily trip counts is fairly accurate given this information.
>However, there are limitations when attempting a more practical prediction. We intended to predict the hourly difference between bikes picked up and bikes returned at each station, which would let us suggest timely reallocation of bikes to stations in need. First, the variables in the dataset are not sufficient to explain why users pick up and return bikes at a particular station. There are also equipment limitations: RAM runs out on Google Colab when processing too much data (we chose Google Colab because the datasets are too large to transfer without loss, except by importing and saving them to Google Drive, which Colab can access directly). Most importantly, insufficient knowledge of time-series prediction kept us from building a more accurate model for the BLUE bikes dataset; we therefore settled on linear regression.
> Further learning and practice of our technical skills is needed to deliver ideal results. The limitations of this project will be our motivation going forward.