# Activity: SPAM Classification
Can we classify an email as spam with trees and/or ensembles?
We will use the [UCI Spam database](https://archive.ics.uci.edu/ml/datasets/Spambase)
Answer the questions and complete the activities in each of the blocks
Submit by email to phuijse@inf.uach.cl by Friday the 13th, 11:20 AM
Work in groups of two: each group submits one completed notebook
```
# Download the database with wget; if you use Windows, use the link above
!wget -c https://archive.ics.uci.edu/ml/machine-learning-databases/spambase/spambase.data
!head -n 5 spambase.data
```
Answer
- How many attributes does the database have? Describe them very briefly
- Show a histogram of the labels. How many examples are there of each class? Is the database balanced?
- Are there missing or invalid values?
```
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
data = np.genfromtxt('spambase.data', delimiter=',')
X, Y = data[:, :-1], data[:, -1]
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, train_size=0.75, stratify=Y)
```
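A minimal sketch for the questions above (the attribute semantics are documented on the UCI page; counting the labels is one way to judge balance):
```
print("Shape:", data.shape)  # 57 attributes plus 1 label column
print("Any NaN values?", np.isnan(data).any())
labels, counts = np.unique(Y, return_counts=True)
print("Class counts:", dict(zip(labels, counts)))
fig, ax = plt.subplots(figsize=(4, 3), tight_layout=True)
ax.bar(labels, counts, width=0.5, tick_label=['not spam (0)', 'spam (1)'])
ax.set_ylabel('Examples');
```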
Use the training set to train and tune the parameters of a
1. decision tree
1. random forest ensemble
1. *gradient boosting* ensemble
You can use `GridSearchCV` to find the best estimators
For this particular case and for each estimator, answer
- Which function/criterion works best? `criterion`
- Which tree depth works best? `max_depth`
- Is it worth weighting the classes? `class_weight`
- In the case of the ensembles
- Is it advisable to use a random subset of features? `max_features`
- What is the best number of weak classifiers? `n_estimators`
Compare the best models of each type on the test set using appropriate classification metrics
Analyze and discuss your results
```
from sklearn import tree
from sklearn.model_selection import GridSearchCV
tree.DecisionTreeClassifier?  # inspect the estimator's documentation
params = {'criterion': ('entropy', 'gini'),
          'max_depth': [2, 5, 10, 20, 35, 50],
          'class_weight': (None, 'balanced', {0: 0.3, 1: 0.7})}
np.random.seed(0)  # reproducibility
model = tree.DecisionTreeClassifier()
clf_dt = GridSearchCV(model, params, cv=5)
clf_dt.fit(X_train, Y_train)
display(clf_dt.best_estimator_)
from sklearn.metrics import precision_recall_curve
fig, ax = plt.subplots(1, figsize=(5, 4), tight_layout=True)
ax.set_xlabel('Recall/TPR')
ax.set_ylabel('Precision')
Y_pred = clf_dt.best_estimator_.predict_proba(X_test)[:, 1]
precision, recall, th = precision_recall_curve(Y_test, Y_pred)
ax.plot(recall, precision, label="Decision Tree", linewidth=1)
plt.legend(loc=3);
!rm spambase.data
```
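For the ensembles, a sketch along the same lines (the parameter grids below are illustrative starting points, not tuned values; note that `GradientBoostingClassifier` has no `class_weight` argument):
```
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
ens_params = {'n_estimators': [10, 50, 100, 200],
              'max_depth': [2, 5, 10, 20],
              'max_features': ('sqrt', None)}
clf_rf = GridSearchCV(RandomForestClassifier(), ens_params, cv=5)
clf_rf.fit(X_train, Y_train)
display(clf_rf.best_estimator_)
clf_gb = GridSearchCV(GradientBoostingClassifier(),
                      {'n_estimators': [10, 50, 100, 200], 'max_depth': [2, 3, 5]},
                      cv=5)
clf_gb.fit(X_train, Y_train)
display(clf_gb.best_estimator_)
```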
# Caffe2 Basic Concepts - Operators & Nets
In this tutorial we will go through a set of Caffe2 basics: the basic concepts including how operators and nets are being written.
First, let's import caffe2. `core` and `workspace` are usually the two that you need most. If you want to manipulate protocol buffers generated by caffe2, you probably also want to import `caffe2_pb2` from `caffe2.proto`.
```
# We'll also import a few standard python libraries
from matplotlib import pyplot
import numpy as np
import time
# These are the droids you are looking for.
from caffe2.python import core, workspace
from caffe2.proto import caffe2_pb2
# Let's show all plots inline.
%matplotlib inline
```
You might see a warning saying that caffe2 does not have GPU support. That means you are running a CPU-only build. Don't be alarmed - anything CPU-based still runs without problems.
## Workspaces
Let's cover workspaces first, where all the data reside.
If you are familiar with Matlab, a workspace consists of blobs you create and store in memory. For now, consider a blob to be an N-dimensional Tensor similar to numpy's ndarray, but contiguous. Down the road, we will show you that a blob is actually a typed pointer that can store any type of C++ object, but a Tensor is the most common type stored in a blob. Let's show what the interface looks like.
`Blobs()` prints out all existing blobs in the workspace.
`HasBlob()` queries if a blob exists in the workspace. For now, we don't have anything yet.
```
print("Current blobs in the workspace: {}".format(workspace.Blobs()))
print("Workspace has blob 'X'? {}".format(workspace.HasBlob("X")))
```
We can feed blobs into the workspace using `FeedBlob()`.
```
X = np.random.randn(2, 3).astype(np.float32)
print("Generated X from numpy:\n{}".format(X))
workspace.FeedBlob("X", X)
```
Now, let's take a look what blobs there are in the workspace.
```
print("Current blobs in the workspace: {}".format(workspace.Blobs()))
print("Workspace has blob 'X'? {}".format(workspace.HasBlob("X")))
print("Fetched X:\n{}".format(workspace.FetchBlob("X")))
```
Let's verify that the arrays are equal.
```
np.testing.assert_array_equal(X, workspace.FetchBlob("X"))
```
Also, if you are trying to access a blob that does not exist, an error will be thrown:
```
try:
workspace.FetchBlob("invincible_pink_unicorn")
except RuntimeError as err:
print(err)
```
One thing that you might not use immediately: you can have multiple workspaces in Python using different names, and switch between them. Blobs in different workspaces are separate from each other. You can query the current workspace using `CurrentWorkspace`. Let's try switching the workspace by name (gutentag) and creating a new one if it doesn't exist.
```
print("Current workspace: {}".format(workspace.CurrentWorkspace()))
print("Current blobs in the workspace: {}".format(workspace.Blobs()))
# Switch the workspace. The second argument "True" means creating
# the workspace if it is missing.
workspace.SwitchWorkspace("gutentag", True)
# Let's print the current workspace. Note that there is nothing in the
# workspace yet.
print("Current workspace: {}".format(workspace.CurrentWorkspace()))
print("Current blobs in the workspace: {}".format(workspace.Blobs()))
```
Let's switch back to the default workspace.
```
workspace.SwitchWorkspace("default")
print("Current workspace: {}".format(workspace.CurrentWorkspace()))
print("Current blobs in the workspace: {}".format(workspace.Blobs()))
```
Finally, `ResetWorkspace()` clears anything that is in the current workspace.
```
workspace.ResetWorkspace()
```
## Operators
Operators in Caffe2 are kind of like functions. From the C++ side, they all derive from a common interface, and are registered by type, so that we can call different operators during runtime. The interface of operators is defined in `caffe2/proto/caffe2.proto`. Basically, it takes in a bunch of inputs, and produces a bunch of outputs.
Remember, when we say "create an operator" in Caffe2 Python, nothing gets run yet. All it does is create the protocol buffer that specifies what the operator should be. At a later time it will be sent to the C++ backend for execution. If you are not familiar with protobuf, it is a JSON-like serialization tool for structured data. Find more about protocol buffers [here](https://developers.google.com/protocol-buffers/).
Let's see an actual example.
```
# Create an operator.
op = core.CreateOperator(
"Relu", # The type of operator that we want to run
["X"], # A list of input blobs by their names
["Y"], # A list of output blobs by their names
)
# and we are done!
```
As we mentioned, the created op is actually a protobuf object. Let's show the content.
```
print("Type of the created op is: {}".format(type(op)))
print("Content:\n")
print(str(op))
```
OK, let's run the operator. We first feed in the input X to the workspace.
Then the simplest way to run an operator is to do `workspace.RunOperatorOnce(operator)`
```
workspace.FeedBlob("X", np.random.randn(2, 3).astype(np.float32))
workspace.RunOperatorOnce(op)
```
After execution, let's see if the operator is doing the right thing, which is our neural network's activation function ([Relu](https://en.wikipedia.org/wiki/Rectifier_(neural_networks))) in this case.
```
print("Current blobs in the workspace: {}\n".format(workspace.Blobs()))
print("X:\n{}\n".format(workspace.FetchBlob("X")))
print("Y:\n{}\n".format(workspace.FetchBlob("Y")))
print("Expected:\n{}\n".format(np.maximum(workspace.FetchBlob("X"), 0)))
```
It is working if the Expected output matches your Y output in this example.
Operators also take optional arguments if needed. They are specified as key-value pairs. Let's take a look at one simple example, which takes a tensor and fills it with Gaussian random variables.
```
op = core.CreateOperator(
"GaussianFill",
[], # GaussianFill does not need any input blobs.
["Z"],
shape=[100, 100], # shape argument as a list of ints.
mean=1.0, # mean as a single float
std=1.0, # std as a single float
)
print("Content of op:\n")
print(str(op))
```
Let's run it and see if things are as intended.
```
workspace.RunOperatorOnce(op)
temp = workspace.FetchBlob("Z")
pyplot.hist(temp.flatten(), bins=50)
pyplot.title("Distribution of Z")
```
If you see a bell shaped curve then it worked!
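As an additional numeric sanity check (a sketch: the sample statistics should land close to the requested `mean=1.0` and `std=1.0`):
```
print("Sample mean: {:.3f}".format(temp.mean()))
print("Sample std:  {:.3f}".format(temp.std()))
```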
## Nets
Nets are essentially computation graphs. We keep the name `Net` for backward compatibility (and also to pay tribute to neural nets). A Net is composed of multiple operators, just like a program written as a sequence of commands. Let's take a look.
When we talk about nets, we will also talk about BlobReference, which is an object that wraps around a string so we can do easy chaining of operators.
Let's create a network that is essentially the equivalent of the following python math:
```
X = np.random.randn(2, 3)
W = np.random.randn(5, 3)
b = np.ones(5)
Y = X @ W.T + b
```
We'll show the progress step by step. Caffe2's `core.Net` is a wrapper class around a NetDef protocol buffer.
When creating a network, its underlying protocol buffer is essentially empty other than the network name. Let's create the net and then show the proto content.
```
net = core.Net("my_first_net")
print("Current network proto:\n\n{}".format(net.Proto()))
```
Let's create a blob called X, and use GaussianFill to fill it with some random data.
```
X = net.GaussianFill([], ["X"], mean=0.0, std=1.0, shape=[2, 3], run_once=0)
print("New network proto:\n\n{}".format(net.Proto()))
```
You might have observed a few differences from the earlier `core.CreateOperator` call. Basically, when we have a net, you can directly create an operator *and* add it to the net at the same time using Python tricks: if you call `net.SomeOp`, where SomeOp is a registered type string of an operator, the call essentially gets translated to
```
op = core.CreateOperator("SomeOp", ...)
net.Proto().op.append(op)
```
Also, you might be wondering what X is. X is a `BlobReference` which basically records two things:
- what its name is. You can access the name by str(X)
- which net it gets created from. It is recorded by an internal variable `_from_net`, but most likely
you won't need that.
Let's verify it. Also, remember, we are not actually running anything yet, so X contains nothing but a symbol. Don't expect to get any numerical values out of it right now :)
```
print("Type of X is: {}".format(type(X)))
print("The blob name is: {}".format(str(X)))
```
Let's continue to create W and b.
```
W = net.GaussianFill([], ["W"], mean=0.0, std=1.0, shape=[5, 3], run_once=0)
b = net.ConstantFill([], ["b"], shape=[5,], value=1.0, run_once=0)
```
Now, one simple piece of syntactic sugar: since BlobReference objects know which net they were generated from, in addition to creating operators from the net, you can also create operators from BlobReferences. Let's create the FC operator this way.
```
Y = X.FC([W, b], ["Y"])
```
Under the hood, `X.FC(...)` simply delegates to `net.FC` by inserting `X` as the first input of the corresponding operator, so what we did above is equivalent to
```
Y = net.FC([X, W, b], ["Y"])
```
Let's take a look at the current network.
```
print("Current network proto:\n\n{}".format(net.Proto()))
```
Too verbose huh? Let's try to visualize it as a graph. Caffe2 ships with a very minimal graph visualization tool for this purpose. Let's show that in ipython.
```
from caffe2.python import net_drawer
from IPython import display
graph = net_drawer.GetPydotGraph(net, rankdir="LR")
display.Image(graph.create_png(), width=800)
```
So we have defined a `Net`, but nothing gets executed yet. Remember that the net above is essentially a protobuf that holds the definition of the network. When we actually want to run the network, what happens under the hood is:
- Instantiate a C++ net object from the protobuf;
- Call the instantiated net's Run() function.
Before we do anything, we should clear any earlier workspace variables with `ResetWorkspace()`.
Then there are two ways to run a net from Python. We will do the first option in the example below.
1. Using `workspace.RunNetOnce()`, which instantiates, runs and immediately destructs the network.
2. A little bit more complex and involves two steps:
(a) call `workspace.CreateNet()` to create the C++ net object owned by the workspace, and
(b) use `workspace.RunNet()` by passing the name of the network to it.
```
workspace.ResetWorkspace()
print("Current blobs in the workspace: {}".format(workspace.Blobs()))
workspace.RunNetOnce(net)
print("Blobs in the workspace after execution: {}".format(workspace.Blobs()))
# Let's dump the contents of the blobs
for name in workspace.Blobs():
print("{}:\n{}".format(name, workspace.FetchBlob(name)))
```
Now let's try the second way to create and run the net. First clear the variables with `ResetWorkspace()`, create the C++ net object from the `net` you defined earlier with `CreateNet(net)`, and then run the net by name with `RunNet(net_name)`.
```
workspace.ResetWorkspace()
print("Current blobs in the workspace: {}".format(workspace.Blobs()))
workspace.CreateNet(net)
workspace.RunNet(net.Proto().name)
print("Blobs in the workspace after execution: {}".format(workspace.Blobs()))
for name in workspace.Blobs():
print("{}:\n{}".format(name, workspace.FetchBlob(name)))
```
There are a few differences between `RunNetOnce` and `RunNet`, but probably the main difference is the computation time overhead. Since `RunNetOnce` involves serializing the protobuf to pass between Python and C++ and instantiating the network on every call, it can take longer to run. Let's see what the overhead is in this case.
```
# It seems that %timeit magic does not work well with
# C++ extensions so we'll basically do for loops
start = time.time()
for i in range(1000):
workspace.RunNetOnce(net)
end = time.time()
print('Run time per RunNetOnce: {}'.format((end - start) / 1000))
start = time.time()
for i in range(1000):
workspace.RunNet(net.Proto().name)
end = time.time()
print('Run time per RunNet: {}'.format((end - start) / 1000))
```
OK, so the above are a few key components if you would like to use Caffe2 from the Python side. We are going to add more to the tutorial as we find more needs. For now, kindly check out the rest of the tutorials!
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

# Azure Machine Learning Pipeline with HyperDriveStep
This notebook is used to demonstrate the use of HyperDriveStep in AML Pipeline.
## Prerequisites and Azure Machine Learning Basics
If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the [configuration Notebook](https://aka.ms/pl-config) first if you haven't. This sets you up with a working config file that has information on your workspace, subscription id, etc.
## Azure Machine Learning and Pipeline SDK-specific imports
```
import azureml.core
from azureml.core import Workspace, Experiment
from azureml.core.datastore import Datastore
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.exceptions import ComputeTargetException
from azureml.data.data_reference import DataReference
from azureml.pipeline.steps import HyperDriveStep, HyperDriveStepRun
from azureml.pipeline.core import Pipeline, PipelineData
from azureml.train.dnn import TensorFlow
# from azureml.train.hyperdrive import *
from azureml.train.hyperdrive import RandomParameterSampling, BanditPolicy, HyperDriveConfig, PrimaryMetricGoal
from azureml.train.hyperdrive import choice, loguniform
import os
import shutil
import urllib
import numpy as np
import matplotlib.pyplot as plt
# Check core SDK version number
print("SDK version:", azureml.core.VERSION)
```
## Initialize workspace
Initialize a workspace object from persisted configuration. If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure the config file is present at .\config.json
```
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\n')
```
## Create an Azure ML experiment
Let's create an experiment named "tf-mnist" and a folder to hold the training scripts.
> The best practice is to use separate folders for scripts and its dependent files for each step. This helps reduce the size of the snapshot created for the step (only the specific folder is snapshotted). Since changes in any files in the `source_directory` would trigger a re-upload of the snapshot, this helps keep the reuse of the step when there are no changes in the `source_directory` of the step.
> The script runs will be recorded under the experiment in Azure.
```
script_folder = './tf-mnist'
os.makedirs(script_folder, exist_ok=True)
exp = Experiment(workspace=ws, name='Hyperdrive_sample')
```
## Download MNIST dataset
In order to train on the MNIST dataset, we will first download it directly from Yann LeCun's web site and save the files in a `data` folder locally.
```
os.makedirs('./data/mnist', exist_ok=True)
urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz', filename = './data/mnist/train-images.gz')
urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz', filename = './data/mnist/train-labels.gz')
urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz', filename = './data/mnist/test-images.gz')
urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz', filename = './data/mnist/test-labels.gz')
```
## Show some sample images
Let's load the downloaded compressed file into numpy arrays using some utility functions included in the `utils.py` library file from the current folder. Then we use `matplotlib` to plot 30 random images from the dataset along with their labels.
```
from utils import load_data
# note we also shrink the intensity values (X) from 0-255 to 0-1. This helps the neural network converge faster.
X_train = load_data('./data/mnist/train-images.gz', False) / 255.0
y_train = load_data('./data/mnist/train-labels.gz', True).reshape(-1)
X_test = load_data('./data/mnist/test-images.gz', False) / 255.0
y_test = load_data('./data/mnist/test-labels.gz', True).reshape(-1)
count = 0
sample_size = 30
plt.figure(figsize = (16, 6))
for i in np.random.permutation(X_train.shape[0])[:sample_size]:
count = count + 1
plt.subplot(1, sample_size, count)
plt.axhline('')
plt.axvline('')
plt.text(x = 10, y = -10, s = y_train[i], fontsize = 18)
plt.imshow(X_train[i].reshape(28, 28), cmap = plt.cm.Greys)
plt.show()
```
## Upload MNIST dataset to blob datastore
A [datastore](https://docs.microsoft.com/azure/machine-learning/service/how-to-access-data) is a place where data can be stored and then made accessible to a Run, either by mounting or by copying the data to the compute target. In the next step, we will use Azure Blob Storage and upload the training and test sets into the Azure Blob datastore, which we will later mount on a Batch AI cluster for training.
```
ds = ws.get_default_datastore()
ds.upload(src_dir='./data/mnist', target_path='mnist', overwrite=True, show_progress=True)
```
## Retrieve or create an Azure Machine Learning compute
Azure Machine Learning Compute is a service for provisioning and managing clusters of Azure virtual machines for running machine learning workloads. Let's create a new Azure Machine Learning Compute in the current workspace, if it doesn't already exist. We will then run the training script on this compute target.
If a compute target with the given name cannot be found, we create a new one here. This process is broken down into the following steps:
1. Create the configuration
2. Create the Azure Machine Learning compute
**This process takes a few minutes and provides only sparse output along the way. Please wait until the call returns before moving to the next cell.**
```
cluster_name = "gpu-cluster"
try:
compute_target = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing compute target {}.'.format(cluster_name))
except ComputeTargetException:
print('Creating a new compute target...')
compute_config = AmlCompute.provisioning_configuration(vm_size="STANDARD_NC6",
max_nodes=4)
compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True, timeout_in_minutes=20)
print("Azure Machine Learning Compute attached")
```
## Copy the training files into the script folder
The TensorFlow training script is already created for you. You can simply copy it into the script folder, together with the utility library used to load the compressed data files into numpy arrays.
```
# the training logic is in the tf_mnist.py file.
shutil.copy('./tf_mnist.py', script_folder)
# the utils.py just helps loading data from the downloaded MNIST dataset into numpy arrays.
shutil.copy('./utils.py', script_folder)
```
## Create TensorFlow estimator
Next, we construct a [TensorFlow](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.dnn.tensorflow?view=azure-ml-py) estimator object.
The TensorFlow estimator provides a simple way of launching a TensorFlow training job on a compute target. It will automatically provide a docker image that has TensorFlow installed -- if additional pip or conda packages are required, their names can be passed in via the `pip_packages` and `conda_packages` arguments and they will be included in the resulting docker image.
The TensorFlow estimator also takes a `framework_version` parameter -- if no version is provided, the estimator will default to the latest version supported by AzureML. Use `TensorFlow.get_supported_versions()` to get a list of all versions supported by your current SDK version or see the [SDK documentation](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.dnn?view=azure-ml-py) for the versions supported in the most current release.
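For example, one quick way to check (a sketch; the list depends on your installed SDK version):
```
print(TensorFlow.get_supported_versions())
```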
```
est = TensorFlow(source_directory=script_folder,
compute_target=compute_target,
entry_script='tf_mnist.py',
use_gpu=True,
framework_version='1.13')
```
## Intelligent hyperparameter tuning
Now let's try hyperparameter tuning by launching multiple runs on the cluster. First let's define the parameter space using random sampling.
In this example we will use random sampling to try different configuration sets of hyperparameters to maximize our primary metric, the best validation accuracy (`validation_acc`).
```
ps = RandomParameterSampling(
{
'--batch-size': choice(25, 50, 100),
'--first-layer-neurons': choice(10, 50, 200, 300, 500),
'--second-layer-neurons': choice(10, 50, 200, 500),
'--learning-rate': loguniform(-6, -1)
}
)
```
Now we will define an early termination policy. The `BanditPolicy` basically states to check the job every 2 iterations. If the primary metric (defined later) falls outside the top 10% range, Azure ML terminates the job. This saves us from continuing to explore hyperparameters that don't show promise of helping us reach our target metric.
Refer [here](https://docs.microsoft.com/azure/machine-learning/service/how-to-tune-hyperparameters#specify-an-early-termination-policy) for more information on the BanditPolicy and other policies available.
```
early_termination_policy = BanditPolicy(evaluation_interval=2, slack_factor=0.1)
```
Now we are ready to configure a run configuration object and specify the primary metric `validation_acc` that's recorded in your training runs. If you go back to the training script, you will notice that this value is logged after every epoch (a full batch set). We also want to tell the service that we are looking to maximize this value. We set the total number of runs to 4 and the maximum number of concurrent runs to 4, which is the same as the number of nodes in our compute cluster.
```
hd_config = HyperDriveConfig(estimator=est,
hyperparameter_sampling=ps,
policy=early_termination_policy,
primary_metric_name='validation_acc',
primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
max_total_runs=4,
max_concurrent_runs=4)
```
## Add HyperDrive as a step of pipeline
### Set up an input for the hyperdrive step
Let's set up a data reference for the inputs of the hyperdrive step.
```
data_folder = DataReference(
datastore=ds,
data_reference_name="mnist_data")
```
### HyperDriveStep
HyperDriveStep can be used to run a HyperDrive job as a step in a pipeline.
- **name:** Name of the step
- **hyperdrive_config:** A HyperDriveConfig that defines the configuration for this HyperDrive run
- **estimator_entry_script_arguments:** List of command-line arguments for estimator entry script
- **inputs:** List of input port bindings
- **outputs:** List of output port bindings
- **metrics_output:** Optional value specifying the location to store HyperDrive run metrics as a JSON file
- **allow_reuse:** whether to allow reuse
- **version:** version
```
metrics_output_name = 'metrics_output'
metrics_data = PipelineData(name='metrics_data',
                            datastore=ds,
                            pipeline_output_name=metrics_output_name)
hd_step_name='hd_step01'
hd_step = HyperDriveStep(
    name=hd_step_name,
    hyperdrive_config=hd_config,
    estimator_entry_script_arguments=['--data-folder', data_folder],
    inputs=[data_folder],
    metrics_output=metrics_data)
```
### Run the pipeline
```
pipeline = Pipeline(workspace=ws, steps=[hd_step])
pipeline_run = exp.submit(pipeline)
```
### Monitor using widget
```
from azureml.widgets import RunDetails
RunDetails(pipeline_run).show()
```
### Wait for the completion of this Pipeline run
```
pipeline_run.wait_for_completion()
```
### Retrieve the metrics
Outputs of the above run can be used as inputs to other steps in the pipeline. In this tutorial, we will show the resulting metrics.
```
metrics_output = pipeline_run.get_pipeline_output(metrics_output_name)
num_file_downloaded = metrics_output.download('.', show_progress=True)
import pandas as pd
import json
with open(metrics_output._path_on_datastore) as f:
metrics_output_result = f.read()
deserialized_metrics_output = json.loads(metrics_output_result)
df = pd.DataFrame(deserialized_metrics_output)
df
```
## Find and register best model
When all the jobs finish, we can find the one with the highest accuracy.
```
hd_step_run = HyperDriveStepRun(step_run=pipeline_run.find_step_run(hd_step_name)[0])
best_run = hd_step_run.get_best_run_by_primary_metric()
best_run
```
Now let's list the model files uploaded during the run.
```
print(best_run.get_file_names())
```
We can then register the folder (and all files in it) as a model named `tf-dnn-mnist` under the workspace for deployment.
```
model = best_run.register_model(model_name='tf-dnn-mnist', model_path='outputs/model')
```
## Deploy the model in ACI
Now we are ready to deploy the model as a web service running in Azure Container Instance [ACI](https://azure.microsoft.com/en-us/services/container-instances/).
### Create score.py
First, we will create a scoring script that will be invoked by the web service call.
* Note that the scoring script must have two required functions, `init()` and `run(input_data)`.
* In `init()` function, you typically load the model into a global object. This function is executed only once when the Docker container is started.
* In `run(input_data)` function, the model is used to predict a value based on the input data. The input and output to `run` typically use JSON as serialization and de-serialization format but you are not limited to that.
```
%%writefile score.py
import json
import numpy as np
import os
import tensorflow as tf
from azureml.core.model import Model
def init():
global X, output, sess
tf.reset_default_graph()
model_root = Model.get_model_path('tf-dnn-mnist')
saver = tf.train.import_meta_graph(os.path.join(model_root, 'mnist-tf.model.meta'))
X = tf.get_default_graph().get_tensor_by_name("network/X:0")
output = tf.get_default_graph().get_tensor_by_name("network/output/MatMul:0")
sess = tf.Session()
saver.restore(sess, os.path.join(model_root, 'mnist-tf.model'))
def run(raw_data):
data = np.array(json.loads(raw_data)['data'])
# make prediction
out = output.eval(session=sess, feed_dict={X: data})
y_hat = np.argmax(out, axis=1)
return y_hat.tolist()
```
### Create myenv.yml
We also need to create an environment file so that Azure Machine Learning can install the packages required by your scoring script in the Docker image. In this case, we need to specify the `numpy` and `tensorflow` packages.
```
from azureml.core.runconfig import CondaDependencies
cd = CondaDependencies.create()
cd.add_conda_package('numpy')
cd.add_tensorflow_conda_package()
cd.save_to_file(base_directory='./', conda_file_path='myenv.yml')
print(cd.serialize_to_string())
```
### Deploy to ACI
Now we can deploy. **This cell will run for about 7-8 minutes**. Behind the scenes, AzureML will build a Docker container image with the given configuration, if one is not already available. This image will be deployed to the ACI infrastructure, and the scoring script and model will be mounted on the container. The model will then be available as a web service with an HTTP endpoint that accepts REST client calls.
```
%%time
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(runtime = "python",
entry_script = "score.py",
conda_file = "myenv.yml")
aciconfig = AciWebservice.deploy_configuration(cpu_cores=1,
memory_gb=1,
tags={'name':'mnist', 'framework': 'TensorFlow DNN'},
description='Tensorflow DNN on MNIST')
service = Model.deploy(ws, 'tf-mnist-svc', [model], inference_config, aciconfig)
service.wait_for_deployment(show_output=True)
```
**Tip: If something goes wrong with the deployment, the first thing to look at is the logs from the service by running the following command:**
```
print(service.get_logs())
```
This is the scoring web service endpoint:
```
print(service.scoring_uri)
```
### Test the deployed model
Let's test the deployed model. Pick 30 random samples from the test set and send them to the web service hosted in ACI. Note that here we are using the `run` API in the SDK to invoke the service. You can also make raw HTTP calls using any HTTP tool such as curl.
After the invocation, we print the returned predictions and plot them along with the input images. We use a red font color and an inverted image (white on black) to highlight the misclassified samples. Since the model accuracy is pretty high, you might have to run the cell below a few times before you see a misclassified sample.
```
import json
# find 30 random samples from test set
n = 30
sample_indices = np.random.permutation(X_test.shape[0])[0:n]
test_samples = json.dumps({"data": X_test[sample_indices].tolist()})
test_samples = bytes(test_samples, encoding='utf8')
# predict using the deployed model
result = service.run(input_data=test_samples)
# compare actual value vs. the predicted values:
i = 0
plt.figure(figsize = (20, 1))
for s in sample_indices:
plt.subplot(1, n, i + 1)
plt.axhline('')
plt.axvline('')
# use different color for misclassified sample
font_color = 'red' if y_test[s] != result[i] else 'black'
clr_map = plt.cm.gray if y_test[s] != result[i] else plt.cm.Greys
plt.text(x=10, y=-10, s=result[i], fontsize=18, color=font_color)
plt.imshow(X_test[s].reshape(28, 28), cmap=clr_map)
i = i + 1
plt.show()
```
We can also send a raw HTTP request to the service.
```
import requests
# send a random row from the test set to score
random_index = np.random.randint(0, len(X_test)-1)
input_data = "{\"data\": [" + str(list(X_test[random_index])) + "]}"
headers = {'Content-Type':'application/json'}
resp = requests.post(service.scoring_uri, input_data, headers=headers)
print("POST to url", service.scoring_uri)
print("input data:", input_data)
print("label:", y_test[random_index])
print("prediction:", resp.text)
```
Let's look at the workspace after the web service was deployed. You should see
* a registered model named 'tf-dnn-mnist' with an id like 'tf-dnn-mnist:1'
* an image with a docker image location pointing to your workspace's Azure Container Registry (ACR)
* a webservice called 'tf-mnist-svc' with a scoring URL
```
models = ws.models
for name, model in models.items():
print("Model: {}, ID: {}".format(name, model.id))
images = ws.images
for name, image in images.items():
print("Image: {}, location: {}".format(name, image.image_location))
webservices = ws.webservices
for name, webservice in webservices.items():
print("Webservice: {}, scoring URI: {}".format(name, webservice.scoring_uri))
```
## Clean up
You can delete the ACI deployment with a simple delete API call.
```
service.delete()
```
# Hide your messy video background using neural nets, Part 2
> "Using our trained model to blur the background of video frames with OpenCV."
- toc: true
- branch: master
- badges: true
- comments: false
- categories: [fastai, privacy, opencv]
- image: images/articles/2021-backgroundblur-2/thumbnail.jpg
- hide: false
```
#hide
!pip install fastai==2.2.5 opencv-python==4.5.1.48 -q
#hide
from fastai.vision.all import *
import cv2
```
In [Part 1](https://deeplearning.berlin/fastai/privacy/getting%20started/2021/02/09/Background-Blur-Part-1.html) we created our own dataset of webcam pictures and trained a model that separates the person from the background. Now, we're going to use this model to blur the background of a webcam video.
<video width="640" height="360" controls autoplay loop muted playsinline>
<source src="/images/articles/2021-backgroundblur-2/smooth.mp4" type="video/mp4">
<source src="/images/articles/2021-backgroundblur-2/smooth.webm" type="video/webm">
Your browser does not support the video tag.
</video>
## Load Learner
The `Learner` expects to find all functions that were defined when creating it; in our case that is `create_mask`. However, we don't need any custom functionality here, so we define an empty `create_mask` function.
```
def create_mask(): pass
```
Load the `Learner` we exported in Part 1. If you have not trained a model in part 1, you can download [my model](https://www.dropbox.com/s/nl8u2veoa1bywwl/unet-resnet18-person-background.pkl?dl=0) and play around. I can't guarantee that it works under any conditions other than my living room though 😀
```
learn = load_learner('unet-resnet18-person-background.pkl')
```
## Practicing Predictions
> Note: You can skip this part and jump to the [OpenCV part](#Constructing-the-Image-With-Blurred-Background). I included this section because I wanted to see and show the different outputs of the `predict` function.
Let's pick a random file from our training images to practice getting the model predictions:
```
fnames = get_image_files('training')
image = fnames[0]
PILImage.create(image).show();
```
Get predictions of one training image:
```
preds = learn.predict(image)
#collapse_output
preds
```
There are different tensors in the predictions. `preds[0]` contains the output after `argmax`, so it picks the class with the higher probability. Every pixel is either a `0` or a `1` in line with our two classes.
```
preds[0].show(cmap='Blues', vmin=0, vmax=1);
#collapse
print(f'''unique values: {np.unique(preds[0])}
type: {type(preds[0])}
data type: {preds[0].dtype}''')
```
`preds[1]` contains the same values, just in a different type (`TensorImage` instead of `TensorMask`)
```
preds[1].show(cmap='Blues', vmin=0, vmax=1);
#collapse
print(f'''unique values: {np.unique(preds[1])}
type: {type(preds[1])}
data type: {preds[1].dtype}''')
```
`preds[2]` is a tensor with three dimensions. It contains the probabilities of the two classes as float values.
```
preds[2].shape
#collapse
print(f'''unique values: {np.unique(preds[2])}
type: {type(preds[2])}
data type: {preds[2].dtype}''')
```
Probabilities for the `background` class:
```
preds[2][0].show(cmap='Blues');
```
Probabilities for the `person` class:
```
preds[2][1].show(cmap='Blues');
```
## Constructing the Image With Blurred Background
We could use the clean predictions in `preds[1]`, with just `0`s and `1`s, as a simple mask. I tried that initially and it worked, but it resulted in some rough edges.
Instead, we will use the raw probabilities from `preds[2][1]`, since they result in a smoother image. You can try for yourself which one you like better.
Let's define a simple blur function.
```
def blur(img: np.ndarray, kernel_size=5, sigma_x=0) -> np.ndarray:
# Make sure that kernel size is an odd number
if kernel_size % 2 == 0:
kernel_size += 1
return cv2.GaussianBlur(img, (kernel_size, kernel_size), sigma_x)
```
We now define a function that blurs the background and blends in the original frame with an alpha mask. Thank you to [learnopencv.com](https://learnopencv.com/alpha-blending-using-opencv-cpp-python/) for their useful code!
```
def masked_blur(image: np.ndarray, mask: TensorImage) -> np.ndarray:
"mask must have dimensions (360,640)"
foreground = cv2.resize(image, (640,360), interpolation=cv2.INTER_AREA)
background = blur(foreground, kernel_size=61)
# Convert uint8 to float
foreground = foreground.astype(np.float32)
background = background.astype(np.float32)
# Some transforms to match the dimensions and type of the cv2 image
alpha = to_np(mask.unsqueeze(2).repeat(1,1,3)).astype(np.float32)
# Multiply the foreground with the alpha matte
foreground = cv2.multiply(alpha, foreground)
# Multiply the background with ( 1 - alpha )
background = cv2.multiply(1.0 - alpha, background)
# Add the masked foreground and background.
result = cv2.add(foreground, background)
# Convert to integer
result = result.astype(np.uint8)
return result
```
Read an image and create predictions:
```
frame = cv2.imread(str(image))
preds = learn.predict(image)
alpha = preds[2][1]
```
Create the resulting image and have a look:
```
output = masked_blur(frame, alpha)
output_rgb = cv2.cvtColor(output, cv2.COLOR_BGR2RGB)
PILImage.create(output_rgb)
```
Apart from my grumpy look, I think this is a quite nice result!
## Processing a Video Clip
For now, we only work with a saved video file. To work with live webcam video, we would have to speed up the inference process by a lot. On my current Paperspace Gradient machine (P4000) it runs at about 0.5 FPS.
Setting up video files. `testclip.mp4` is a video I shot with my webcam. The arguments for the `VideoWriter` are framerate and dimensions. I chose 25 because I think this is the framerate of my webcam, and 640x360 are the dimensions we used to train the neural net.
```
cap = cv2.VideoCapture('testclip.mp4')
fourcc = cv2.VideoWriter_fourcc(*'mp4v')
out = cv2.VideoWriter('output/testclip-output.mp4', fourcc, 25, (640, 360))
```
### Main Loop
We use this while loop to capture every frame of the video. For every frame we
1. Resize it to 640x360
2. Convert it from cv2's BGR to RGB
3. Use the model to predict the mask
4. Create the image with blurred background
5. Write this image to the output video
Additionally, we save some frames as `jpg` files to inspect them.
```
i = 0
while cap.isOpened():
# Capture frame
ret, frame = cap.read()
# Break loop at end of video
if ret == False:
break
# Resize frame and convert to RGB
frame = cv2.resize(frame, (640,360), interpolation=cv2.INTER_AREA)
frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
# Run inference and create alpha mask from result
preds = learn.predict(frame_rgb)
mask = preds[2][1]
# Blur background and convert it to integer type
output = masked_blur(frame, mask)
# Write frame to video
out.write(output)
# Save every 25th output as jpg, just to find a good thumbnail :)
if i == 0 or i%25 == 0:
cv2.imwrite('output/output_'+str(i)+'.jpg', output)
# Increase counter
i += 1
# Release opened files
cap.release()
out.release()
```
### Results
Let's look at a single frame:
```
PILImage.create('output/output_0.jpg')
```
And the resulting video:
<video width="640" height="360" controls autoplay loop muted playsinline>
<source src="/images/articles/2021-backgroundblur-2/smooth.mp4" type="video/mp4">
<source src="/images/articles/2021-backgroundblur-2/smooth.webm" type="video/webm">
Your browser does not support the video tag.
</video>
I think that looks quite good. There are some rough edges and my arms are not recognized well, but overall I'm happy with the result for this little project.
## To Do
There are many aspects which we could improve:
- The biggest thing to improve now is inference speed. As I mentioned, the current implementation works only with video files, not live video, and it runs at about 0.5 frames per second 🥴
- The U-Net is a pretty heavy model, even with the relatively small Resnet18 backbone. The saved weights are 167MB. This alone is reason enough for the model to run slowly. Since we run the model frame by frame, the GPU is not helping much because there is no parallelization; see the batching sketch after this list.
- The next step would be better generalization. I suspect that this model is currently very much optimized for myself. If we wanted to roll this out as a feature for many people, we would have to include many people in our training dataset, as well as different backgrounds, cameras, and lighting situations.
- Aesthetics could be improved. There is a "shadow" around the person in the foreground, an artifact of blurring the whole picture including the person.
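A minimal sketch of the batching idea, assuming the `learn` object from Part 1 and a list of decoded RGB frames (the batch size of 8 is an arbitrary starting point):
```
# Collect decoded RGB frames first, then run inference in batches
# instead of calling learn.predict once per frame.
frames = []  # e.g. filled with frame_rgb inside the capture loop
dl = learn.dls.test_dl([PILImage.create(f) for f in frames], bs=8)
preds, _ = learn.get_preds(dl=dl)  # shape: (n_frames, 2, 360, 640)
masks = preds[:, 1]                # per-frame 'person' probabilities
```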
Let me know if you found this helpful or implemented something similar yourself, or if you're stuck. I'd be happy to hear from you on [Twitter](https://twitter.com/daflowjoe)!

[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/healthcare/DEID_EHR_DATA.ipynb)
# **De-identify Structured Data**
To run this yourself, you will need to upload your license keys to the notebook. Otherwise, you can look at the example outputs at the bottom of the notebook. To upload license keys, open the file explorer on the left side of the screen and upload `workshop_license_keys.json` to the folder that opens.
## 1. Colab Setup
Import license keys
```
import os
import json
with open('/content/spark_nlp_for_healthcare.json', 'r') as f:
license_keys = json.load(f)
license_keys.keys()
secret = license_keys['SECRET']
os.environ['SPARK_NLP_LICENSE'] = license_keys['SPARK_NLP_LICENSE']
os.environ['AWS_ACCESS_KEY_ID'] = license_keys['AWS_ACCESS_KEY_ID']
os.environ['AWS_SECRET_ACCESS_KEY'] = license_keys['AWS_SECRET_ACCESS_KEY']
sparknlp_version = license_keys["PUBLIC_VERSION"]
jsl_version = license_keys["JSL_VERSION"]
print ('SparkNLP Version:', sparknlp_version)
print ('SparkNLP-JSL Version:', jsl_version)
```
Install dependencies
```
# Install Java
! apt-get update -qq
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
! java -version
# Install pyspark
! pip install --ignore-installed -q pyspark==2.4.4
# Install Spark NLP
! pip install --ignore-installed spark-nlp==$sparknlp_version
! python -m pip install --upgrade spark-nlp-jsl==$jsl_version --extra-index-url https://pypi.johnsnowlabs.com/$secret
```
Import dependencies into Python
```
os.environ['JAVA_HOME'] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ['PATH'] = os.environ['JAVA_HOME'] + "/bin:" + os.environ['PATH']
import pandas as pd
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
import sparknlp
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
```
Start the Spark session
```
spark = sparknlp_jsl.start(secret)
```
## 2. Select the NER model and construct the pipeline
Select the models:
* NER Deidentification models: **ner_deid_enriched, ner_deid_large**
* Deidentification models: **deidentify_large, deidentify_rb, deidentify_rb_no_regex**
For more details: https://github.com/JohnSnowLabs/spark-nlp-models#pretrained-models---spark-nlp-for-healthcare
```
# Change this to the model you want to use and re-run the cells below.
MODEL_NAME = "ner_deid_large"
DEID_MODEL_NAME = "deidentify_large"
```
Create the pipeline
```
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
sentenceDetector = SentenceDetector()\
.setInputCols(["document"])\
.setOutputCol("sentence")
tokenizer = Tokenizer()\
.setInputCols(["sentence"])\
.setOutputCol("token")
# Clinical word embeddings trained on the PubMed dataset
word_embeddings = WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models")\
.setInputCols(["sentence", "token"])\
.setOutputCol("embeddings")
# NER model trained on n2c2 datasets
clinical_ner = NerDLModel.pretrained(MODEL_NAME, "en", "clinical/models") \
.setInputCols(["sentence", "token", "embeddings"]) \
.setOutputCol("ner")
# NER Overwriter to ensure all the entities are deidentified.
# Use this if the NER does not recognize entities.
neroverwriter = NerOverwriter() \
.setInputCols(["ner"]) \
.setOutputCol("ner_overwrited") \
.setStopWords(['AIQING', 'YBARRA']) \
.setNewResult("B-NAME")
ner_converter = NerConverterInternal()\
.setInputCols(["sentence", "token", "ner_overwrited"])\
.setOutputCol("ner_chunk")
nlp_pipeline = Pipeline(stages=[
documentAssembler,
sentenceDetector,
tokenizer,
word_embeddings,
clinical_ner,
neroverwriter,
ner_converter])
empty_df = spark.createDataFrame([['']]).toDF('text')
pipeline_model = nlp_pipeline.fit(empty_df)
light_pipeline = LightPipeline(pipeline_model)
```
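As a quick sanity check (a sketch; `annotate` returns a dictionary keyed by the pipeline's output column names), you can run the `LightPipeline` on a sample string:
```
sample = light_pipeline.annotate("Mr. Dave was seen by M.D William Boss.")
print(sample['ner_chunk'])
```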
## 3. Create example inputs
```
# Enter examples as strings in this array
df = pd.DataFrame({'Name': ['Dave'], 'DOB':['1970-01-01'], 'Address': ['Kensington Street'],
'Summary':['Mr. Dave said he has cut his alcohol back to 6 pack once a week. He has cut back his cigarettes to one time per week. His PCP was M.D William Boss who had suggested some tests.']
})
```
## 4. De-identify using Obfuscation Method
Define De-identification Model
```
deidentification = DeIdentificationModel.pretrained(DEID_MODEL_NAME, "en", "clinical/models") \
.setInputCols(["sentence", "token", "ner_chunk"]) \
.setOutputCol("deidentified") \
.setObfuscateDate(True)\
.setMode('obfuscate')
# Helper function: de-identify every column of a one-row dataframe
def deid_row(df):
    res_m = {}
    for col in df.columns:
        # Run the NER pipeline on the column's text
        result = pipeline_model.transform(spark.createDataFrame(pd.DataFrame({'text': [df[col].values[0]]})))
        # Apply the de-identification model
        deid_text = deidentification.transform(result)
        res1 = deid_text.toPandas()
        # Concatenate the de-identified tokens back into a string
        sent = ''
        for r in res1['deidentified'].iloc[0]:
            sent = sent + ' ' + r[3]
        res_m[col] = sent
    return pd.DataFrame([res_m])
result_obfuscated = deid_row(df)
```
Visualize
```
result_obfuscated
```
## 5. De-identify using Masking Method
Define De-identification Model
```
deidentification = DeIdentificationModel.pretrained(DEID_MODEL_NAME, "en", "clinical/models") \
.setInputCols(["sentence", "token", "ner_chunk"]) \
.setOutputCol("deidentified") \
.setObfuscateDate(True)\
.setMode('mask')
result_masked = deid_row(df)
```
Visualize
```
result_masked
```
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/JavaScripts/CloudMasking/Landsat8SurfaceReflectance.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/JavaScripts/CloudMasking/Landsat8SurfaceReflectance.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/JavaScripts/CloudMasking/Landsat8SurfaceReflectance.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
```
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
# This example demonstrates the use of the pixel QA band to mask
# clouds in surface reflectance (SR) data. It is suitable
# for use with any of the Landsat SR datasets.
# Function to cloud mask from the pixel_qa band of Landsat 8 SR data.
def maskL8sr(image):
# Bits 3 and 5 are cloud shadow and cloud, respectively.
cloudShadowBitMask = 1 << 3
cloudsBitMask = 1 << 5
# Get the pixel QA band.
qa = image.select('pixel_qa')
# Both flags should be set to zero, indicating clear conditions.
mask = qa.bitwiseAnd(cloudShadowBitMask).eq(0) \
.And(qa.bitwiseAnd(cloudsBitMask).eq(0))
# Return the masked image, scaled to reflectance, without the QA bands.
return image.updateMask(mask).divide(10000) \
.select("B[0-9]*") \
.copyProperties(image, ["system:time_start"])
# Map the function over one year of data.
collection = ee.ImageCollection('LANDSAT/LC08/C01/T1_SR') \
.filterDate('2016-01-01', '2016-12-31') \
.map(maskL8sr)
composite = collection.median()
# Display the results.
Map.addLayer(composite, {'bands': ['B4', 'B3', 'B2'], 'min': 0, 'max': 0.3})
```
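As a quick check (a sketch; `getInfo()` triggers a synchronous request to the Earth Engine servers):
```
# Report how many scenes went into the composite and re-center the map.
print('Images in collection:', collection.size().getInfo())
Map.setCenter(-100, 40, 4)
```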
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
## 1. United Nations life expectancy data
<p>Life expectancy at birth is a measure of the average a living being is expected to live. It takes into account several demographic factors like gender, country, or year of birth.</p>
<p>Life expectancy at birth can vary over time and between countries for many reasons: the evolution of medicine, the degree of development of countries, or the effect of armed conflicts. Life expectancy varies between genders as well. The data shows that women live longer than men. Why? There are several potential factors, including biological reasons and the theory that women tend to be more health conscious.</p>
<p>Let's create some plots to explore the inequalities about life expectancy at birth around the world. We will use a dataset from the United Nations Statistics Division, which is available <a href="http://data.un.org/Data.aspx?d=GenderStat&f=inID:37&c=1,2,3,4,5,6&s=crEngName:asc,sgvEngName:asc,timeEngName:desc&v=1">here</a>.</p>
```
# This sets plot images to a nice size
options(repr.plot.width = 6, repr.plot.height = 6)
# Loading packages
library("dplyr")
library("tidyr")
library("ggplot2")
# Loading data
life_expectancy <- read.csv("datasets/UNdata.csv")
# Taking a look at the first few rows
head(life_expectancy)
```
## 2. Life expectancy of men vs. women by country
<p>Let's manipulate the data to make our exploration easier. We will build the dataset for our first plot in which we will represent the average life expectancy of men and women across countries for the last period recorded in our data (2000-2005).</p>
```
# Subsetting and reshaping the life expectancy data
subdata <- life_expectancy %>%
filter(Year == "2000-2005") %>%
select(Country.or.Area, Subgroup, Value) %>%
spread(Subgroup, Value)
# Taking a look at the first few rows
head(subdata)
nrow(subdata)
```
## 3. Visualize I
<p>A scatter plot is a useful way to visualize the relationship between two variables. It is a simple plot in which points are arranged on two axes, each of which represents one of those variables. </p>
<p>Let's create a scatter plot using <code>ggplot2</code> to represent life expectancy of males (on the x-axis) against females (on the y-axis). We will create a straightforward plot in this task, without many details. We will take care of these kinds of things shortly.</p>
```
# Plotting male and female life expectancy
ggplot(data=subdata,aes(x=Male,y=Female)) +
geom_point()
```
## 4. Reference lines I
<p>A good plot must be easy to understand. There are many tools in <code>ggplot2</code> to achieve this goal and we will explore some of them now. Starting from the previous plot, let's set the same limits for both axes as well as place a diagonal line for reference. After doing this, the difference between men and women across countries will be easier to interpret.</p>
<p>After completing this task, we will see how most of the points are arranged above the diagonal and how there is a significant dispersion among them. What does this all mean?</p>
```
# Adding an abline and changing the scale of axes of the previous plots
ggplot(data=subdata,aes(x=Male,y=Female)) +
geom_point()+geom_abline(intercept = 0, slope = 1,linetype = "dashed")+
xlim(35,85)+
ylim(35,85)
```
## 5. Plot titles and axis labels
<p>A key point to make a plot understandable is placing clear labels on it. Let's add titles, axis labels, and a caption to refer to the source of data. Let's also change the appearance to make it clearer.</p>
```
# Adding labels to previous plot
ggplot(subdata, aes(x=Male, y=Female))+
geom_point(colour="white", fill="chartreuse3", shape=21, alpha=.55, size=5)+
geom_abline(intercept = 0, slope = 1, linetype=2)+
scale_x_continuous(limits=c(35,85))+
scale_y_continuous(limits=c(35,85))+
labs(title="Life Expectancy at Birth by Country",
subtitle="Years. Period: 2000-2005. Average.",
caption="Source: United Nations Statistics Division",
x="Males",
y="Females")
```
## 6. Highlighting remarkable countries I
<p>Now, we will label some points of our plot with the names of their corresponding countries. We want to draw attention to some special countries where the gap in life expectancy between men and women is especially high. These will be the final touches on this first plot.</p>
```
# Subsetting data to obtain countries of interest
top_male <- subdata %>% arrange(Male-Female) %>% head(3)     # countries where women outlive men by the most
top_female <- subdata %>% arrange(Female-Male) %>% head(3)   # countries where men outlive women by the most
# Adding text to the previous plot to label countries of interest
ggplot(subdata, aes(x=Male, y=Female, label=Country.or.Area))+
geom_point(colour="white", fill="chartreuse3", shape=21, alpha=.55, size=5)+
geom_abline(intercept = 0, slope = 1, linetype=2)+
scale_x_continuous(limits=c(35,85))+
scale_y_continuous(limits=c(35,85))+
labs(title="Life Expectancy at Birth by Country",
subtitle="Years. Period: 2000-2005. Average.",
caption="Source: United Nations Statistics Division",
x="Males",
y="Females")+
geom_text(data=top_male, size=3)+
geom_text(data=top_female, size=3)+
theme_bw()
```
## 7. How has life expectancy by gender evolved?
<p>Since our data contains historical information, let's see now how life expectancy has evolved in recent years. Our second plot will represent the difference between men and women across countries between two periods: 2000-2005 and 1985-1990.</p>
<p>Let's start building a dataset called <code>subdata2</code> for our second plot. </p>
```
# Subsetting, mutating and reshaping the life expectancy data
subdata2 <- life_expectancy %>%
filter(Year %in% c("1985-1990", "2000-2005")) %>%
mutate(Sub_Year=paste(Subgroup, Year, sep="_")) %>%
mutate(Sub_Year=gsub("-", "_", Sub_Year)) %>%
select(-Subgroup, -Year) %>%
spread(Sub_Year,Value)%>%
mutate(diff_Female=Female_2000_2005 - Female_1985_1990,diff_Male=Male_2000_2005 - Male_1985_1990)
# Taking a look at the first few rows
head(subdata2)
```
## 8. Visualize II
<p>Now let's create our second plot in which we will represent average life expectancy differences between "1985-1990" and "2000-2005" for men and women.</p>
```
# Doing a nice first version of the plot with abline, scaling axis and adding labels
ggplot(subdata2, aes(x=diff_Male, y=diff_Female, label=Country.or.Area))+
geom_point(colour="white", fill="chartreuse3", shape=21, alpha=.55, size=5)+
geom_abline(intercept = 0, slope = 1, linetype=2)+
scale_x_continuous(limits=c(-25,25))+
scale_y_continuous(limits=c(-25,25))+
labs(title="Life Expectancy at Birth by Country in Years",
subtitle="Difference between 1985-1990 and 2000-2005. Average.",
caption="Source: United Nations Statistics Division",
x="Males",
y="Females")+
theme_bw()
```
## 9. Reference lines II
<p>Adding reference lines can make plots easier to understand. We already added a diagonal line to visualize differences between men and women more clearly. Now we will add two more lines to help to identify in which countries people increased or decreased their life expectancy in the period analyzed.</p>
```
# Adding an hline and vline to previous plots
ggplot(subdata2, aes(x=diff_Male, y=diff_Female, label=Country.or.Area))+
geom_point(colour="white", fill="chartreuse3", shape=21, alpha=.55, size=5)+
geom_abline(intercept = 0, slope = 1, linetype=2)+
scale_x_continuous(limits=c(-25,25))+
scale_y_continuous(limits=c(-25,25))+
geom_hline(yintercept = 0,linetype=2)+
geom_vline(xintercept = 0,linetype=2)+
labs(title="Life Expectancy at Birth by Country",
subtitle="Years. Difference between 1985-1990 and 2000-2005. Average.",
caption="Source: United Nations Statistics Division",
x="Males",
y="Females")+
theme_bw()
```
## 10. Highlighting remarkable countries II
<p>As we did in the first plot, let's label some points. Concretely, we will label the three countries where the aggregated average life expectancy for men and women increased the most and the three where it decreased the most over the period.</p>
```
# Subsetting data to obtain countries of interest
top <- subdata2 %>% arrange(diff_Male+diff_Female) %>% head(3)
bottom <- subdata2 %>% arrange(-(diff_Male+diff_Female)) %>% head(3)
# Adding text to the previous plot to label countries of interest
ggplot(subdata2, aes(x=diff_Male, y=diff_Female, label=Country.or.Area), guide=FALSE)+
geom_point(colour="white", fill="chartreuse3", shape=21, alpha=.55, size=5)+
geom_abline(intercept = 0, slope = 1, linetype=2)+
scale_x_continuous(limits=c(-25,25))+
scale_y_continuous(limits=c(-25,25))+
geom_hline(yintercept=0, linetype=2)+
geom_vline(xintercept=0, linetype=2)+
labs(title="Life Expectancy at Birth by Country",
subtitle="Years. Difference between 1985-1990 and 2000-2005. Average.",
caption="Source: United Nations Statistics Division",
x="Males",
y="Females")+
geom_text(data=top,size=3)+
geom_text(data=bottom,size=3)+
theme_bw()
```
# Automatic Data Augmentation
## Overview
Besides letting users define their own data augmentation, MindSpore also provides an automatic data augmentation mode that applies augmentation to images automatically according to a specific policy.
Automatic data augmentation falls into two main categories: probability-based and callback-parameter-based.
## Probability-Based Automatic Data Augmentation
MindSpore provides a series of probability-based automatic augmentation APIs, with which users can randomly select and combine augmentation operations, making augmentation more flexible.
For detailed API descriptions, see the [API documentation](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.dataset.transforms.html).
### RandomApply
This API receives a list of augmentation operations, `transforms`, and with a given probability (0.5 by default) executes them in order; otherwise it executes none of them.
In the code example below, the `RandomCrop` and `RandomColorAdjust` operations are executed in order with probability 0.5; otherwise neither is executed.
```
import mindspore.dataset.vision.c_transforms as c_vision
from mindspore.dataset.transforms.c_transforms import RandomApply
rand_apply_list = RandomApply([c_vision.RandomCrop(512), c_vision.RandomColorAdjust()])
```
### RandomChoice
This API receives a list of augmentation operations, `transforms`, and randomly selects one of them to execute.
In the code example below, one of the `CenterCrop` and `RandomCrop` operations is selected with equal probability and executed.
```
import mindspore.dataset.vision.c_transforms as c_vision
from mindspore.dataset.transforms.c_transforms import RandomChoice
rand_choice = RandomChoice([c_vision.CenterCrop(512), c_vision.RandomCrop(512)])
```
### RandomSelectSubpolicy
This API receives a preset policy list containing a set of sub-policies, where each sub-policy consists of several augmentation operations to be executed in order, each with its own execution probability.
For each image, one sub-policy is first selected at random with equal probability, and then each operation in that sub-policy is applied in order according to its probability.
In the code example below, two sub-policies are preset. Sub-policy 1 contains the `RandomRotation`, `RandomVerticalFlip`, and `RandomColorAdjust` operations with probabilities 0.5, 1.0, and 0.8 respectively; sub-policy 2 contains the `RandomRotation` and `RandomColorAdjust` operations with probabilities 1.0 and 0.2.
```
import mindspore.dataset.vision.c_transforms as c_vision
from mindspore.dataset.vision.c_transforms import RandomSelectSubpolicy
policy_list = [
[(c_vision.RandomRotation((45, 45)), 0.5), (c_vision.RandomVerticalFlip(), 1.0), (c_vision.RandomColorAdjust(), 0.8)],
[(c_vision.RandomRotation((90, 90)), 1.0), (c_vision.RandomColorAdjust(), 0.2)]
]
policy = RandomSelectSubpolicy(policy_list)
```
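As a minimal usage sketch (not from the original tutorial; `image_folder_dataset_dir` is a hypothetical path), the `policy` object from the previous cell can be applied to the `image` column of a dataset pipeline via `map`:
```
import mindspore.dataset as ds

# hypothetical dataset directory; replace with a real image folder path
dataset = ds.ImageFolderDataset("image_folder_dataset_dir", decode=True)
# apply a randomly selected sub-policy to each decoded image
dataset = dataset.map(operations=policy, input_columns=["image"])
```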
## Callback-Parameter-Based Automatic Data Augmentation
MindSpore's `sync_wait` interface supports dynamically adjusting the augmentation policy during training at batch or epoch granularity; users can set a blocking condition to trigger a specific augmentation operation.
`sync_wait` blocks the entire data pipeline until `sync_update` triggers the user-defined `callback` function. The two must be used together, and they work as follows:
- sync_wait(condition_name, num_batch=1, callback=None)
This API adds a blocking condition `condition_name` to the dataset and executes the specified `callback` function when `sync_update` is called.
- sync_update(condition_name, num_batch=None, data=None)
This API releases the block associated with `condition_name` and triggers the specified `callback` function with `data`.
The following demonstrates callback-parameter-based automatic augmentation.
1. Define an `Augment` class in advance, where `preprocess` is the custom augmentation function and `update` is the callback that updates the augmentation policy.
```
import mindspore.dataset.vision.py_transforms as transforms
import mindspore.dataset as ds
import numpy as np
class Augment:
def __init__(self):
self.ep_num = 0
self.step_num = 0
def preprocess(self, input_):
return (np.array((input_ + self.step_num ** self.ep_num - 1), ))
def update(self, data):
self.ep_num = data['ep_num']
self.step_num = data['step_num']
```
2. The data pipeline first calls back the custom policy-update function `update`, and then the `map` operation applies the augmentation defined in `preprocess` according to the updated policy.
```
arr = list(range(1, 4))
dataset = ds.NumpySlicesDataset(arr, shuffle=False)
aug = Augment()
dataset = dataset.sync_wait(condition_name="policy", callback=aug.update)
dataset = dataset.map(operations=[aug.preprocess])
```
3. Call `sync_update` in each step to update the augmentation policy.
```
epochs = 5
itr = dataset.create_tuple_iterator(num_epochs=epochs)
step_num = 0
for ep_num in range(epochs):
    for data in itr:
        print("epoch: {}, step: {}, data: {}".format(ep_num, step_num, data))
        step_num += 1
        dataset.sync_update(condition_name="policy", data={'ep_num': ep_num, 'step_num': step_num})
```
# Trigonometry
```
import numpy as np
import matplotlib.pyplot as plt
```
## Contents
- [Sine, cosine and tangent](#Sine_cosine_and_tangent)
- [Measurements](#Measurements)
- [Small angle approximation](#Small_angle_approximation)
- [Trigonometric functions](#Trigonometric_functions)
- [More trigonometric functions](#More_trigonometric_functions)
- [Identities](#Identities)
- [Compound angles](#Compound_angles)
<a id='Sine_cosine_and_tangent'></a>
### Sine, cosine and tangent
Sine:
- $\sin\theta = \frac{opp}{hyp}$
- for a triangle with angles A, B and C, and sides a, b and c opposite their respective angles
- $\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C}$
Cosine:
- $\cos\theta = \frac{adj}{hyp}$
- for a triangle with angles A, B and C, and sides a, b and c opposite their respective angles
- $a^2 = b^2 + c^2 - 2bc \cos A$
- $b^2 = a^2 + c^2 - 2ac \cos B$
- $c^2 = a^2 + b^2 - 2ab \cos C$
- $\cos A = \frac{b^2 + c^2 - a^2}{2bc}$
Tangent:
- $\tan\theta = \frac{opp}{adj}$
Area of triangle:
- $\frac{1}{2}ab\sin C$
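A quick numerical check of the cosine rule and the area formula on an assumed 3-4-5 right triangle:
```
import numpy as np

# cosine rule: cos C = (a^2 + b^2 - c^2) / (2ab), with C opposite side c
a, b, c = 3.0, 4.0, 5.0
C = np.arccos((a**2 + b**2 - c**2) / (2*a*b))
print(np.degrees(C))             # 90.0, as expected for a 3-4-5 triangle
print(0.5 * a * b * np.sin(C))   # area = (1/2) a b sin(C) = 6.0
```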
<a id='Measurements'></a>
### Measurements
Radians:
- 1 radian is the angle subtended at the centre when the arc between the two points on the circumference has length r
- since $c = 2\pi r$, one full circumference corresponds to $2\pi$ radians
Arc Length:
- length, s, of arc on circumference with angle $\theta$ in radians
- $s = r\theta$
Area of Sector:
- area, a, of sector with angle $\theta$ in radians
- $\frac{1}{2} r^2\theta$
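A quick check with assumed values $r = 2$ and $\theta = \pi/3$:
```
import numpy as np

r, theta = 2.0, np.pi / 3
print(r * theta)            # arc length s = r*theta ~ 2.094
print(0.5 * r**2 * theta)   # sector area = (1/2) r^2 theta ~ 2.094
```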
<a id='Small_angle_approximation'></a>
### Small angle approximation
when $\theta \approx 0$ (in radians), i.e. in the limit $\theta \to 0$:
$\sin \theta \approx \theta$
$\cos \theta \approx 1 - \frac{\theta^2}{2} \approx 1$
$\tan \theta \approx \theta$
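Checking the approximations numerically for an assumed small angle $\theta = 0.05$:
```
import numpy as np

theta = 0.05  # radians
print(np.sin(theta), theta)             # 0.04997... vs 0.05
print(np.cos(theta), 1 - theta**2 / 2)  # 0.99875... vs 0.99875
print(np.tan(theta), theta)             # 0.05004... vs 0.05
```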
<a id='Trigonometric_functions'></a>
### Trigonometric functions
arcsin, arccos and arctan are the inverse functions (mapping a ratio back to an angle in the circle)
Domains and ranges:
sin:
- $\theta = \mathbb{R}$
- $-1 \le \sin\theta \le 1$
- $-\frac{\pi}{2} \le \arcsin x \le \frac{\pi}{2}$
cos:
- $\theta = \mathbb{R}$
- $-1 \le \cos\theta \le 1$
- $0 \le \arccos x \le \pi$
tan:
- $\theta \not= \frac{\pi}{2}, \frac{3\pi}{2} \dots$
- $\tan$ range is undefined
- $-\pi \le \arctan x \le \pi$
#### Graphing:
```
fig, ax = plt.subplots(1, 3, figsize=(13,4))
x = np.linspace(0, 2*np.pi, int(30*np.pi)).astype(np.float32)  # the sample count must be an integer
ax[0].plot(x, np.sin(x), label='sin')
ax[1].plot(x, np.cos(x), label='cos')
ax[2].plot(x, np.tan(x), label='tan')
ax[0].plot(x, np.arcsin(np.sin(x)), label='arcsin')
ax[1].plot(x, np.arccos(np.cos(x)), label='arccos')
ax[2].plot(x, np.arctan(np.tan(x)), label='arctan')
for axes in ax:
axes.grid(True)
axes.legend()
plt.show()
```
<a id='More_trigonometric_functions'></a>
### More trigonometric functions
Secant:
- $\sec \theta = \frac{1}{\cos \theta}$
Cosecant:
- $\mathrm{cosec} \theta = \frac{1}{\sin \theta}$
Cotangent:
- $\cot \theta = \frac{1}{\tan\theta} = \frac{\cos\theta}{\sin\theta}$
#### Graphing:
```
fig, ax = plt.subplots(1, 3, figsize=(13,4))
x = np.linspace(0, 2*np.pi, int(20*np.pi))  # the sample count must be an integer
ax[0].plot(x, 1/np.cos(x), label='$sec$')
ax[1].plot(x, 1/np.sin(x), label='cosec')
ax[2].plot(x, np.cos(x)/np.sin(x), label='cot')
for axes in ax:
axes.grid(True)
axes.set_ylim([-20,20])
axes.legend()
plt.show()
```
<a id='Identities'></a>
### Identities
$\tan\theta = \frac{\sin\theta}{\cos\theta}$
$\sin^2\theta + \cos^2\theta = 1$
$\sec^2\theta = 1 + \tan^2\theta$
$\mathrm{cosec}^2\theta = 1 + \cot^2\theta$
<a id='Compound_angles'></a>
### Compound angles
Sin:
- $\sin(A+B) = \sin A\cos B + \cos A\sin B$
- $\sin(2A) = 2\sin A\cos A$
Cos:
- $\cos(A+B) = \cos A\cos B - \sin A\sin B$
- $\cos(2A) = \cos^2A - \sin^2A$
$= 2\cos^2A - 1$
$= 1 - 2\sin^2A$
Tan:
- $\tan(A+B) = \frac{\tan A + \tan B}{1 - \tan A\tan B}$
- $\tan(2A) = \frac{2\tan A}{1 - \tan^2A}$
### $r\cos(\theta+\alpha)$
Useful for rewriting a sum of sinusoids as a single one:
$a\cos \theta + b\sin \theta = r\cos(\theta+\alpha)$
Expanding the right-hand side:
$r\cos(\theta + \alpha) = r \cos \alpha \cos \theta - r \sin \alpha \sin \theta$
Matching the coefficients of $\cos\theta$ and $\sin\theta$:
$r \cos \alpha \cos \theta = a\cos \theta$
$\therefore$ $r \cos \alpha = a$
$- r \sin \alpha \sin \theta = b\sin \theta$
$\therefore$ $r \sin \alpha = -b$
Then solve as simultaneous equations:
solving for $\alpha$ (divide the two equations):
$\frac{\sin \alpha}{\cos \alpha} = \frac{-b}{a}$
$\tan \alpha = -\frac{b}{a}$
solving for $r$ (square and add, using $\cos^2\alpha + \sin^2\alpha = 1$):
$r^2\cos^2\alpha + r^2\sin^2\alpha = a^2+b^2$
$r^2 = a^2+b^2$
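A numerical verification of the identity for assumed coefficients $a = 3$, $b = 4$:
```
import numpy as np

a, b = 3.0, 4.0
r = np.sqrt(a**2 + b**2)       # r^2 = a^2 + b^2
alpha = np.arctan2(-b, a)      # chosen so r*cos(alpha) = a and r*sin(alpha) = -b
theta = np.linspace(0, 2*np.pi, 100)
lhs = a*np.cos(theta) + b*np.sin(theta)
rhs = r*np.cos(theta + alpha)
print(np.allclose(lhs, rhs))   # True
```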
```
import random
```
The first parameter, learn_speed, is used to control how fast our perceptron will learn. The lower the value, the longer it will take to learn, but the less one value will change each overall weight. If this parameter is too high, our program will change its weights so quickly that they are inaccurate. On the other hand, if learn_speed is too low, it will take forever to train the perceptron accurately. A good value for this parameter is about 0.01-0.05.
The second parameter, num_weights, controls how many weights the perceptron will have. Our perceptron will also have the same number of inputs as it does weights, because each input has its own weight.
Next, we need to create a function in our class to take in inputs, and turn them into an output. We do this by multiplying each input by its corresponding weight, summing all those together, and then checking if the sum is greater than 0.
The first function, feed_forward, is used to turn inputs into outputs. The term feed forward is commonly used in neural networks to describe this process of turning inputs into outputs. This method weights each input based on each corresponding weights. It sums them up, and then uses the activate function to return either 1 or -1.
The activate function is used to turn a number into 1 or -1. This is implemented because when we use a perceptron, we want to classify data. We classify it into two groups, one of which is represented by 1, and the other is represented by -1.
You might be wondering, "What's the use of this if the weights are random?" That's why we have to train the perceptron before we use it. In our train function, we want to make a guess based on the inputs provided, and then see how our guess compared to the output we wanted.
```
class Perceptron:
def __init__(self, learn_speed, num_weights):
self.speed = learn_speed
self.weights = []
for x in range(0, num_weights):
self.weights.append(random.random()*2-1)
def feed_forward(self, inputs):
sum = 0
# multiply inputs by weights and sum them
for x in range(0, len(self.weights)):
sum += self.weights[x] * inputs[x]
# return the 'activated' sum
return self.activate(sum)
def activate(self, num):
# turn a sum over 0 into 1, and below 0 into -1
if num > 0:
return 1
return -1
def train(self, inputs, desired_output):
guess = self.feed_forward(inputs)
error = desired_output - guess
# loop through each weight and adjust it by how much error we had.
for x in range(0, len(self.weights)):
self.weights[x] += error*inputs[x]*self.speed
```
### Training the Perceptron
Our perceptron has no use if we don't actually train it. We will do this by coding a quick Trainer class. In this example, we will train our perceptron to tell us whether a point is above a line or below a line. Our line, in this case, is represented by the equation y = 0.5x + 10. Once you know how to train a perceptron to recognize a line, you can represent x and y as different attributes, and above or below the line as results of those attributes.
For example, if you had a dataset on the GPAs and ACT scores of Harvard applicants, and whether they got accepted or not, you could train a perceptron to find a line on a graph where x=GPA score and y=ACT score. Above the line would be students that got accepted, and below the line would be students that got rejected. You could then use this perceptron to predict whether or not a student will get accepted into Harvard based on their GPA and ACT scores.
In this example, we'll stick with recognizing a line. To do this, we will create a Trainer class that trains a perceptron with points, and whether or not they are above the line. Below is the code for our Trainer class:
```
class Trainer:
def __init__(self):
self.perceptron = Perceptron(0.01, 3)
def f(self, x):
return 0.5*x + 10 # line: f(x) = 0.5x + 10
def train(self):
for x in range(0, 1000000):
x_coord = random.random()*500-250
y_coord = random.random()*500-250
line_y = self.f(x_coord)
if y_coord > line_y: # above the line
answer = 1
self.perceptron.train([x_coord, y_coord,1], answer)
else: # below the line
answer = -1
self.perceptron.train([x_coord, y_coord,1], answer)
return self.perceptron # return our trained perceptron
```
As you can see, the initializer for the Trainer class creates a perceptron with three inputs and a learning speed of 0.01. The first two inputs are x and y, but what is the last input? This is another core concept of neural networks and machine learning. That last input will always be set to 1. The weight that corresponds to it determines how it affects our line. For example, if you look back at our equation y = 0.5x + 10, we need some way of representing the y-intercept, 10. We do this by creating a third input whose contribution grows or shrinks with the weight the perceptron learns for it. Think of it as a bias (threshold) term that helps the perceptron understand that the line is shifted 10 units upward.
In our f function, we take in an x coordinate and return a y coordinate. This is used to find points on the line based on their x coordinate, which will come in handy in the next function.
This train function for the Trainer class is where all the magic happens, and we actually get to train our perceptron. We start off by looping 1 million times. Remember how we had a learning speed for our perceptron? The more times that we train our perceptron (in this case, 1 million times), the more accurate it will become, even with a low learning speed.
In each iteration of the loop, we create a point, determine if it is above or below the line, and then feed those inputs into the perceptron's train method. First, x and y coordinates are randomly generated between -250 and 250. Next, we find where the y coordinate would be on the line for that x value, to see if our point is above the line. For example, if we picked a point at (1, 3), then we should get the y coordinate of the line for the x value of 1. We do this with our f function. If our random y coordinate is higher than the corresponding y coordinate on the line, we know that our random coordinate is above the line.
That's what we do in the if...else statement. If our point is above the line, we set the expected output, stored in answer to be 1. If our point is below the line, our expected output is -1. We then train our perceptron based on the x coordinate, the y coordinate, and our expected output. After the whole loop is done, we return our newly trained perceptron object.
```
trainer = Trainer()
p = trainer.train()
```
Let's pick two points, (-7, 9) and (3, 1). The first point is above the line, so it should return 1, and the second is below the line, so it should return -1. Let's see how we would run our perceptron:
```
print("(-7, 9): " + str(p.feed_forward([-7,9,1])))
print("(3, 1): " + str(p.feed_forward([3,1,1])))
```
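As an optional sanity check (not part of the original tutorial), we can estimate the trained perceptron's accuracy on a batch of freshly generated random points:
```
# estimate classification accuracy against the known line y = 0.5x + 10
correct = 0
trials = 10000
for _ in range(trials):
    x = random.random()*500 - 250
    y = random.random()*500 - 250
    expected = 1 if y > 0.5*x + 10 else -1
    if p.feed_forward([x, y, 1]) == expected:
        correct += 1
print("Accuracy: {:.2%}".format(correct/trials))
```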
# Human Keypoint Annotation with Mask R-CNN
In the earlier [Mask R-CNN](#) case study, we briefly introduced the overall architecture of the Mask R-CNN model. Mask R-CNN is a flexible, open framework that can be extended on top of this base to accomplish additional AI tasks. In this case study, we show how to extend the basic Mask R-CNN to perform human keypoint annotation.
## Basic Structure of the Mask R-CNN Model
You may recall the overall Mask R-CNN architecture introduced earlier, with its three main networks:
- the backbone network, which generates feature maps
- the RPN network, which generates instance location, classification, and segmentation (mask) information
- the head network, which trains on the location, classification, and segmentation (mask) information
The head network contains three branches for classification, bounding boxes, and segmentation (mask) information. We can extend the head network with a human keypoint branch and train it, giving our model the ability to analyze keypoints. The resulting model structure is shown below:

> In the head network, the red <span style="color:red">keypoints</span> branch is the newly added **human keypoint branch**
For an analysis of the Mask R-CNN model, see [this article](https://github.com/huaweicloud/ModelArts-Lab/wiki/Mask-R-CNN%E6%A8%A1%E5%9E%8B%E8%A7%A3%E6%9E%90).
This case study runs on TensorFlow 1.8.0.
## The Keypoints Branch
After the RPN generates proposals, whenever a proposal is classified as "Person" we generate a one-hot mask for each body keypoint. The training target is a 56*56 binary mask in which a single pixel is labeled as the keypoint and all remaining pixels are background. For each keypoint location, detection minimizes the average cross-entropy loss, and the K keypoints are handled independently.
In human pose detection, the person itself can be detected and classified as an object instance. With one-hot encoding, this extends to the 17 human keypoints annotated in the COCO dataset (e.g., left eye, right ear), and it can also handle non-continuous numeric features.
The COCO dataset annotates 17 human keypoints: nose, left eye, right eye, left ear, right ear, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, left hip, right hip, left knee, right knee, left ankle, and right ankle, as shown below:

## Training the Mask R-CNN Keypoint Model in ModelArts
### Preparing the Data and Source Code
Step 1: Prepare the dataset and the pretrained model
```
from modelarts.session import Session
sess = Session()
sess.download_data(bucket_path='modelarts-labs-bj4/end2end/mask_rcnn_keypoints/mask_rcnn_keypoints.data.tgz',
path='./mask_rcnn_keypoints.data.tgz')
!tar zxf ./mask_rcnn_keypoints.data.tgz
!rm ./mask_rcnn_keypoints.data.tgz
```
After decompression you get a `data` directory with the following structure:
```bash
data/
├── mask_rcnn_coco.h5
├── annotations
│ ├── person_keypoints_train2014.json
│ ├── ***.json
├── train2014
│ ├── COCO_train2014_***.jpg
└── val2014
├── COCO_val2014_***.jpg
```
Here `data/mask_rcnn_coco_humanpose.h5` is the pretrained model, while `annotations`, `train2014`, and `val2014` form the minimal dataset we prepared in advance, containing annotation information for 500 images.
Step 2: Prepare the source code
```
sess.download_data(bucket_path='modelarts-labs-bj4/end2end/mask_rcnn_keypoints/mask_rcnn_keypoints.src.tgz',
path='./mask_rcnn_keypoints.src.tgz')
!tar zxf ./mask_rcnn_keypoints.src.tgz
!rm ./mask_rcnn_keypoints.src.tgz
```
Step 3: Install the pycocotools dependency
This example uses the COCO dataset, which requires the pycocotools library
```
!pip install pycocotools
```
### Program Initialization
Step 1: Import the relevant libraries and define global variables
```
import sys
import random
import math
import re
import time
import numpy as np
import cv2
import matplotlib
import matplotlib.pyplot as plt
# from src.mrcnn.config import Config
from src.mrcnn import coco
from src.mrcnn import utils
import src.mrcnn.model as modellib
from src.mrcnn import visualize
from src.mrcnn.model import log
%matplotlib inline
# Directory to save logs and trained model
MODEL_DIR = "logs"
# Local path to trained weights file
COCO_HUMANPOSE_MODEL_PATH = "data/mask_rcnn_coco_humanpose.h5"
```
Step 2: Generate the configuration
We define `DemoTrainConfig`, a subclass of `coco.CocoConfig`, and specify the relevant parameters. The most important ones are (values below match the code that follows):
- __NAME__: a unique name for the config
- __NUM_CLASSES__: the number of classes; for keypoint training we only need the background and person classes, so 2 in total
- __IMAGE_MIN_DIM and IMAGE_MAX_DIM__: the minimum and maximum image sizes; both are set to 1024 here
- __TRAIN_ROIS_PER_IMAGE__: the number of RoIs trained per image
- __STEPS_PER_EPOCH and VALIDATION_STEPS__: the number of steps per epoch during training and validation; fewer steps speed up training but reduce detection accuracy
```
class DemoTrainConfig(coco.CocoConfig):
    # a recognizable name
    NAME = "demo_train"
    # number of GPUs and images per GPU; adjust to your hardware (reference: Nvidia Tesla P100)
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1
    # number of object classes; keypoint training only needs the BG and Person classes
    NUM_CLASSES = 1 + 1  # background + person
    # images are uniformly resized to 1024; this can be reduced further if needed
    IMAGE_MIN_DIM = 1024
    IMAGE_MAX_DIM = 1024
    # smaller anchors can be used for RoI detection on small objects
    # RPN_ANCHOR_SCALES = (8, 16, 32, 64, 128)  # anchor side in pixels
    # number of RoIs trained per image; can be reduced if needed
    TRAIN_ROIS_PER_IMAGE = 100
    # number of steps per training epoch
    STEPS_PER_EPOCH = 100
    # number of steps per validation round
    VALIDATION_STEPS = 20

config = DemoTrainConfig()
config.display()
```
Step 3: Create the dataset objects
We use the prebuilt CocoDataset class to generate the training and validation sets.
```
from src.mrcnn.coco import CocoDataset
COCO_DIR = 'data'
# build the training set
dataset_train = CocoDataset(task_type="person_keypoints")
dataset_train.load_coco(COCO_DIR, "train", "2014")  # load the training data
dataset_train.prepare()
# build the validation set
dataset_val = CocoDataset(task_type="person_keypoints")
dataset_val.load_coco(COCO_DIR, "val", "2014")  # load the validation data
dataset_val.prepare()
# print keypoint-related information about the datasets
print("Train Keypoints Image Count: {}".format(len(dataset_train.image_ids)))
print("Train Keypoints Class Count: {}".format(dataset_train.num_classes))
for i, info in enumerate(dataset_train.class_info):
print("{:3}. {:50}".format(i, info['name']))
print("Val Keypoints Image Count: {}".format(len(dataset_val.image_ids)))
print("Val Keypoints Class Count: {}".format(dataset_val.num_classes))
for i, info in enumerate(dataset_val.class_info):
print("{:3}. {:50}".format(i, info['name']))
```
## Creating the Model
Create the model object in "training" mode and load the pretrained weights
```
# Create model object in inference mode.
model = modellib.MaskRCNN(mode="training", model_dir=MODEL_DIR, config=config)
# Load weights trained on MS-COCO
# model.load_weights(COCO_MODEL_PATH, by_name=True,exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
# "mrcnn_bbox", "mrcnn_mask"])
COCO_HUMANPOSE_MODEL_PATH = './data/mask_rcnn_coco_humanpose.h5'
# Load weights trained on MS-COCO
print("Loading weights from ", COCO_HUMANPOSE_MODEL_PATH)
model.load_weights(COCO_HUMANPOSE_MODEL_PATH, by_name=True)
# model.keras_model.summary()
```
## Training the Model
A Keras model can be trained layer by layer: the `layers` parameter of the model's train method selects which layers to train. It accepts the following preset values:
- heads: train only the classification, mask, and bbox regression branches of the head network
- all: all layers
- 3+: train ResNet stage 3 and later stages
- 4+: train ResNet stage 4 and later stages
- 5+: train ResNet stage 5 and later stages
The `layers` parameter also accepts a regular expression to select layers by matching their names; call model.keras_model.summary() to inspect the layer names and then specify the layers you need.
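For example, a hedged sketch (the regular expression below is an illustrative assumption, not a setting prescribed by this case study):
```
# select all RPN, FPN, and Mask R-CNN head layers by name pattern
model.train(dataset_train, dataset_val,
            learning_rate=config.LEARNING_RATE,
            epochs=1,
            layers=r"(rpn\_.*)|(fpn\_.*)|(mrcnn\_.*)")
```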
We train different sets of layers in turn. First, train the four branches of the head network (including the new keypoint branch):
```
# Training - Stage 1
print("Train heads")
model.train(dataset_train, dataset_val,
learning_rate=config.LEARNING_RATE,
epochs=1,
layers='heads')
```
Then train ResNet stage 4 and the later stages
```
# Training - Stage 2
# Finetune layers from ResNet stage 4 and up
# print("Training Resnet layer 4+")
# model.train(dataset_train, dataset_val,
# learning_rate=config.LEARNING_RATE / 10,
# epochs=1,
# layers='4+')
```
Finally, fine-tune all layers and save the trained model locally
```
# Training - Stage 3
# Finetune layers from ResNet stage 3 and up
print("Training Resnet layer 3+")
model.train(dataset_train, dataset_val,
learning_rate=config.LEARNING_RATE / 100,
epochs=2,
layers='all')
model_savepath = 'demo_mrcnn_humanpose_model.h5'
model.keras_model.save_weights(model_savepath)
```
## Using the Model to Detect Objects in Images
Step 1: Create a model object in "Inference" mode and load the model file we trained
```
# Recreate the model in inference mode
inference_model = modellib.MaskRCNN(mode="inference",
config=config,
model_dir=MODEL_DIR)
# load the weights of the model we trained ourselves
print("Loading weights from ", model_savepath)
inference_model.load_weights(model_savepath, by_name=True)
```
Step 2: Randomly pick an image from the validation set and display its Ground Truth information
```
# randomly pick an image for testing
image_id = random.choice(dataset_val.image_ids)
original_image, image_meta, gt_class_id, gt_bbox, gt_mask =\
modellib.load_image_gt(dataset_val, config,
image_id, use_mini_mask=False)
log("original_image", original_image)
log("image_meta", image_meta)
log("gt_class_id", gt_class_id)
log("gt_bbox", gt_bbox)
log("gt_mask", gt_mask)
visualize.display_instances(original_image, gt_bbox, gt_mask, gt_class_id,
dataset_train.class_names, figsize=(8, 8))
```
Step 3: Run the model on the image and display the results
```
results = inference_model.detect_keypoint([original_image], verbose=1)
r = results[0] # for one image
log("rois",r['rois'])
log("keypoints",r['keypoints'])
log("class_ids",r['class_ids'])
log("keypoints",r['keypoints'])
log("masks",r['masks'])
log("scores",r['scores'])
# helper function to set the rows and columns of the subplot areas in matplotlib
def get_ax(rows=1, cols=1, size=8):
_, ax = plt.subplots(rows, cols, figsize=(size*cols, size*rows))
return ax
visualize.display_keypoints(original_image, r['rois'], r['keypoints'], r['class_ids'],
dataset_train.class_names,skeleton=config.LIMBS, ax=get_ax())
```
<a href="https://colab.research.google.com/github/abegpatel/movie-recomendation-system-using-auto-encoder/blob/master/autoencoder.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
**AUTOENCODERS:**
- autoencoders
- training an autoencoder
- overcomplete hidden layers
- sparse autoencoders
- denoising autoencoders
- contractive autoencoders
- stacked autoencoders
- deep autoencoders
**Autoencoders**
Used for recommendation systems.
Architecture: visible input nodes -> encoding -> hidden layer -> decoding -> visible output layer.
- The network encodes and reconstructs its own input, so the target equals the input.
- It is a self-supervised model.
- Used for feature detection.
- Used for powerful recommendation systems.
- Used for encoding (compressing) data.
Example: 4 movie ratings as inputs -> hidden layer -> 4 visible output nodes.
A one-hot post-processing step (sometimes loosely called a softmax step) can be applied to the outputs: the highest value is converted to 1 and all the others to 0.
**Training an autoencoder**
1. We start with an array where rows correspond to users and columns to movies.
2. The first user goes into the network; the input vector contains that user's ratings for all movies.
3. The input vector x is encoded into a lower-dimensional vector z by the mapping function z = f(Wx + b), where W are the weights and b is the bias.
4. z is decoded into the output vector y, which has the same dimension as x.
5. The reconstruction error d(x, y) = ||x - y|| is computed; the goal is to minimize it.
6. The error is back-propagated from right to left and the weights are updated (gradient descent).
7. Repeat steps 1-6 for every user.
8. Repeat for more epochs. (A minimal numpy sketch of steps 3-5 follows this list.)
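The sketch below illustrates steps 3-5 with made-up shapes and values; it is not the course's PyTorch model:
```
import numpy as np

x = np.array([5.0, 3.0, 0.0, 1.0])    # ratings for 4 movies
W = np.random.randn(2, 4) * 0.1       # encoder weights (2 hidden nodes)
b = np.zeros(2)
W_out = np.random.randn(4, 2) * 0.1   # decoder weights
b_out = np.zeros(4)

z = 1 / (1 + np.exp(-(W @ x + b)))    # z = f(Wx + b), sigmoid encoding
y = W_out @ z + b_out                 # decoded output vector
error = np.linalg.norm(x - y)         # d(x, y) = ||x - y||
print(error)
```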
**Overcomplete hidden layers**
If the hidden layer has more nodes than the input layer, the network can "cheat":
it simply copies the inputs straight through to the outputs,
leaving the extra hidden nodes unused and learning no useful features.
**Sparse autoencoders**
Here the hidden layer is larger than the input layer (so the network could cheat).
A regularization technique is applied (preventing overfitting and stabilizing the algorithm)
that only allows a certain number of hidden nodes to be active at a time.
**Denoising autoencoders**
Another regularization technique for when we have a large hidden layer.
A modified version of the input is used: some input values are randomly set to 0.
The output is then compared to the original (uncorrupted) values.
This makes it a stochastic autoencoder.
**Contractive autoencoders**
A regularization technique that adds a penalty term to the loss function
(penalizing how sensitive the hidden representation is to the input).
**Stacked autoencoders**
Add a second hidden layer to the autoencoder (input -> encoding -> hidden layer -> hidden layer -> decoding).
It is a directed neural network.
**Deep autoencoders**
input layer -> hidden layer 1 -> 2 -> 3 -> ... -> output layer (a stack of RBMs).
```
!unzip -uq "/content/drive/My Drive/P16-AutoEncoders.zip" -d "/content/drive/My Drive/"
!unzip -uq "/content/drive/My Drive/AutoEncoders/ml-100k.zip" -d "/content/drive/My Drive/AutoEncoders/"
!unzip -uq "/content/drive/My Drive/AutoEncoders/ml-1m.zip" -d "/content/drive/My Drive/AutoEncoders/"
# AutoEncoders
# Importing the libraries
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.optim as optim
import torch.utils.data
from torch.autograd import Variable
# Importing the dataset
movies = pd.read_csv('/content/drive/My Drive/AutoEncoders/ml-1m/movies.dat', sep = '::', header = None, engine = 'python', encoding = 'latin-1')
users = pd.read_csv('/content/drive/My Drive/AutoEncoders/ml-1m/users.dat', sep = '::', header = None, engine = 'python', encoding = 'latin-1')
ratings = pd.read_csv('/content/drive/My Drive/AutoEncoders/ml-1m/ratings.dat', sep = '::', header = None, engine = 'python', encoding = 'latin-1')
movies
# Preparing the training set and the test set
training_set = pd.read_csv('/content/drive/My Drive/AutoEncoders/ml-100k/u1.base', delimiter = '\t')
training_set = np.array(training_set, dtype = 'int')
test_set = pd.read_csv('/content/drive/My Drive/AutoEncoders/ml-100k/u1.test', delimiter = '\t')
test_set = np.array(test_set, dtype = 'int')
# Getting the number of users and movies
nb_users = int(max(max(training_set[:,0]), max(test_set[:,0])))
nb_movies = int(max(max(training_set[:,1]), max(test_set[:,1])))
# Converting the data into an array with users in lines and movies in columns
def convert(data):
new_data = []
for id_users in range(1, nb_users + 1):
id_movies = data[:,1][data[:,0] == id_users]
id_ratings = data[:,2][data[:,0] == id_users]
ratings = np.zeros(nb_movies)
ratings[id_movies - 1] = id_ratings
new_data.append(list(ratings))
return new_data
training_set = convert(training_set)
test_set = convert(test_set)
# Converting the data into Torch tensors
training_set = torch.FloatTensor(training_set)
test_set = torch.FloatTensor(test_set)
# Creating the architecture of the Neural Network
class SAE(nn.Module):
def __init__(self, ):
super(SAE, self).__init__()
self.fc1 = nn.Linear(nb_movies, 20)
self.fc2 = nn.Linear(20, 10)
self.fc3 = nn.Linear(10, 20)
self.fc4 = nn.Linear(20, nb_movies)
self.activation = nn.Sigmoid()
def forward(self, x):
x = self.activation(self.fc1(x))
x = self.activation(self.fc2(x))
x = self.activation(self.fc3(x))
x = self.fc4(x)
return x
sae = SAE()
criterion = nn.MSELoss()
optimizer = optim.RMSprop(sae.parameters(), lr = 0.01, weight_decay = 0.5)
# Training the SAE
nb_epoch = 200
for epoch in range(1, nb_epoch + 1):
    train_loss = 0
    s = 0.
    for id_user in range(nb_users):
        input = Variable(training_set[id_user]).unsqueeze(0)
        target = input.clone()
        if torch.sum(target.data > 0) > 0:
            output = sae(input)
            target.requires_grad = False
            output[target == 0] = 0
            loss = criterion(output, target)
            mean_corrector = nb_movies/float(torch.sum(target.data > 0) + 1e-10)
            optimizer.zero_grad()  # clear gradients accumulated from the previous user
            loss.backward()
            train_loss += np.sqrt(loss.data*mean_corrector)
            s += 1.
            optimizer.step()
    print('epoch: '+str(epoch)+' loss: '+str(train_loss/s))
# Testing the SAE
test_loss = 0
s = 0.
for id_user in range(nb_users):
    input = Variable(training_set[id_user]).unsqueeze(0)
    target = Variable(test_set[id_user])
    if torch.sum(target.data > 0) > 0:
        output = sae(input)
        target.requires_grad = False
        output[(target == 0).unsqueeze(0)] = 0
        loss = criterion(output, target)
        mean_corrector = nb_movies/float(torch.sum(target.data > 0) + 1e-10)
        test_loss += np.sqrt(loss.data*mean_corrector)
        s += 1.
print('test loss: '+str(test_loss/s))
```
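As a hedged follow-up (not part of the original course code; the user index is an arbitrary example), the trained SAE can be used to score a single user's unseen movies:
```
user_id = 0  # illustrative user index
input = Variable(training_set[user_id]).unsqueeze(0)
predicted = sae(input).data.numpy().flatten()
# movies the user has not rated in the training set
unseen = np.where(training_set[user_id].numpy() == 0)[0]
top = unseen[np.argsort(predicted[unseen])[::-1][:5]]
print("Top predicted movie ids:", top + 1)
```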
# Applying Automatic Data Augmentation
[](https://gitee.com/mindspore/docs/blob/master/docs/notebook/mindspore_enable_auto_augmentation.ipynb)
## Overview
Automatic data augmentation (AutoAugment) searches a space of image-augmentation sub-policies for an augmentation scheme suited to a particular dataset. MindSpore's `c_transforms` module provides a rich set of C++ operators to implement AutoAugment, and users can also implement it with custom functions or operators. For detailed descriptions of MindSpore operators, see the [API documentation](https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.dataset.vision.html).
The mapping between AutoAugment operators and MindSpore operators is as follows:
| AutoAugment operator | MindSpore operator | Description |
| :------: | :------ | ------ |
| shearX | RandomAffine | horizontal shear |
| shearY | RandomAffine | vertical shear |
| translateX | RandomAffine | horizontal translation |
| translateY | RandomAffine | vertical translation |
| rotate | RandomRotation | rotation |
| color | RandomColor | color adjustment |
| posterize | RandomPosterize | reduce the number of color channel bits |
| solarize | RandomSolarize | invert all pixels within a given threshold range |
| contrast | RandomColorAdjust | adjust contrast |
| sharpness | RandomSharpness | adjust sharpness |
| brightness | RandomColorAdjust | adjust brightness |
| autocontrast | AutoContrast | maximize image contrast |
| equalize | Equalize | equalize the image histogram |
| invert | Invert | invert the image |
> This document applies to CPU, GPU, and Ascend environments.
## Overall Workflow
- Preparation.
- Automatic augmentation on CIFAR-10.
## Preparation
### Downloading the Dataset
The following sample code downloads the dataset and extracts it to the specified location.
```
import os
import requests
import tarfile
import zipfile
import shutil
requests.packages.urllib3.disable_warnings()
def download_dataset(url, target_path):
"""download and decompress dataset"""
if not os.path.exists(target_path):
os.makedirs(target_path)
download_file = url.split("/")[-1]
if not os.path.exists(download_file):
res = requests.get(url, stream=True, verify=False)
if download_file.split(".")[-1] not in ["tgz", "zip", "tar", "gz"]:
download_file = os.path.join(target_path, download_file)
with open(download_file, "wb") as f:
for chunk in res.iter_content(chunk_size=512):
if chunk:
f.write(chunk)
if download_file.endswith("zip"):
z = zipfile.ZipFile(download_file, "r")
z.extractall(path=target_path)
z.close()
if download_file.endswith(".tar.gz") or download_file.endswith(".tar") or download_file.endswith(".tgz"):
t = tarfile.open(download_file)
names = t.getnames()
for name in names:
t.extract(name, target_path)
t.close()
print("The {} file is downloaded and saved in the path {} after processing".format(os.path.basename(url), target_path))
download_dataset("https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/datasets/cifar-10-binary.tar.gz", "./datasets")
test_path = "./datasets/cifar-10-batches-bin/test"
train_path = "./datasets/cifar-10-batches-bin/train"
os.makedirs(test_path, exist_ok=True)
os.makedirs(train_path, exist_ok=True)
if not os.path.exists(os.path.join(test_path, "test_batch.bin")):
shutil.move("./datasets/cifar-10-batches-bin/test_batch.bin", test_path)
[shutil.move("./datasets/cifar-10-batches-bin/"+i, train_path) for i in os.listdir("./datasets/cifar-10-batches-bin/") if os.path.isfile("./datasets/cifar-10-batches-bin/"+i) and not i.endswith(".html") and not os.path.exists(os.path.join(train_path, i))]
```
The directory structure of the downloaded and extracted dataset is as follows:
```text
./datasets/cifar-10-batches-bin
├── readme.html
├── test
│ └── test_batch.bin
└── train
├── batches.meta.txt
├── data_batch_1.bin
├── data_batch_2.bin
├── data_batch_3.bin
├── data_batch_4.bin
└── data_batch_5.bin
```
## Automatic Augmentation on CIFAR-10
This tutorial implements AutoAugment on the CIFAR-10 dataset as an example.
The augmentation policy for CIFAR-10 contains 25 sub-policies, each consisting of two transforms. For each image in a batch, one sub-policy combination is selected at random, and each transform in the chosen sub-policy is applied with its predefined probability.
Users can implement AutoAugment with the `RandomSelectSubpolicy` interface of MindSpore's `c_transforms` module. The standard augmentation pipeline for CIFAR-10 classification training consists of the following steps:
- `RandomCrop`: random cropping.
- `RandomHorizontalFlip`: random horizontal flipping.
- `Normalize`: normalization.
- `HWC2CHW`: change of image channel layout.
The AutoAugment transform is inserted after `RandomCrop`, as shown below:
1. Import the MindSpore data augmentation modules.
```
from mindspore import dtype as mstype
import mindspore.dataset as ds
import mindspore.dataset.vision.c_transforms as c_vision
import mindspore.dataset.transforms.c_transforms as c_transforms
import matplotlib.pyplot as plt
```
2. Define the mapping from AutoAugment operators to MindSpore operators:
```
# define Auto Augmentation operators
PARAMETER_MAX = 10
def float_parameter(level, maxval):
return float(level) * maxval / PARAMETER_MAX
def int_parameter(level, maxval):
return int(level * maxval / PARAMETER_MAX)
def shear_x(level):
v = float_parameter(level, 0.3)
return c_transforms.RandomChoice([c_vision.RandomAffine(degrees=0, shear=(-v, -v)), c_vision.RandomAffine(degrees=0, shear=(v, v))])
def shear_y(level):
v = float_parameter(level, 0.3)
return c_transforms.RandomChoice([c_vision.RandomAffine(degrees=0, shear=(0, 0, -v, -v)), c_vision.RandomAffine(degrees=0, shear=(0, 0, v, v))])
def translate_x(level):
v = float_parameter(level, 150 / 331)
return c_transforms.RandomChoice([c_vision.RandomAffine(degrees=0, translate=(-v, -v)), c_vision.RandomAffine(degrees=0, translate=(v, v))])
def translate_y(level):
v = float_parameter(level, 150 / 331)
return c_transforms.RandomChoice([c_vision.RandomAffine(degrees=0, translate=(0, 0, -v, -v)), c_vision.RandomAffine(degrees=0, translate=(0, 0, v, v))])
def color_impl(level):
v = float_parameter(level, 1.8) + 0.1
return c_vision.RandomColor(degrees=(v, v))
def rotate_impl(level):
v = int_parameter(level, 30)
return c_transforms.RandomChoice([c_vision.RandomRotation(degrees=(-v, -v)), c_vision.RandomRotation(degrees=(v, v))])
def solarize_impl(level):
level = int_parameter(level, 256)
v = 256 - level
return c_vision.RandomSolarize(threshold=(0, v))
def posterize_impl(level):
level = int_parameter(level, 4)
v = 4 - level
return c_vision.RandomPosterize(bits=(v, v))
def contrast_impl(level):
v = float_parameter(level, 1.8) + 0.1
return c_vision.RandomColorAdjust(contrast=(v, v))
def autocontrast_impl(level):
return c_vision.AutoContrast()
def sharpness_impl(level):
v = float_parameter(level, 1.8) + 0.1
return c_vision.RandomSharpness(degrees=(v, v))
def brightness_impl(level):
v = float_parameter(level, 1.8) + 0.1
return c_vision.RandomColorAdjust(brightness=(v, v))
```
3. Define the AutoAugment policy for the CIFAR-10 dataset:
- First, preset a single simple sub-policy containing only the `RandomRotation` and `RandomColorAdjust` operations, with probabilities 1.0 and 0.0 respectively.
```
policy_list = [
[(c_vision.RandomRotation((90, 90)), 1.0), (c_vision.RandomColorAdjust(), 0.0)]
]
```
- Preset multiple sub-policies.
```
# define the Auto Augmentation policy
cifar10_policy = [
[(posterize_impl(8), 0.4), (rotate_impl(9), 0.6)],
[(solarize_impl(5), 0.6), (autocontrast_impl(5), 0.6)],
[(c_vision.Equalize(), 0.8), (c_vision.Equalize(), 0.6)],
[(posterize_impl(7), 0.6), (posterize_impl(6), 0.6)],
[(c_vision.Equalize(), 0.4), (solarize_impl(4), 0.2)],
[(c_vision.Equalize(), 0.4), (rotate_impl(8), 0.8)],
[(solarize_impl(3), 0.6), (c_vision.Equalize(), 0.6)],
[(posterize_impl(5), 0.8), (c_vision.Equalize(), 1.0)],
[(rotate_impl(3), 0.2), (solarize_impl(8), 0.6)],
[(c_vision.Equalize(), 0.6), (posterize_impl(6), 0.4)],
[(rotate_impl(8), 0.8), (color_impl(0), 0.4)],
[(rotate_impl(9), 0.4), (c_vision.Equalize(), 0.6)],
[(c_vision.Equalize(), 0.0), (c_vision.Equalize(), 0.8)],
[(c_vision.Invert(), 0.6), (c_vision.Equalize(), 1.0)],
[(color_impl(4), 0.6), (contrast_impl(8), 1.0)],
[(rotate_impl(8), 0.8), (color_impl(2), 1.0)],
[(color_impl(8), 0.8), (solarize_impl(7), 0.8)],
[(sharpness_impl(7), 0.4), (c_vision.Invert(), 0.6)],
[(shear_x(5), 0.6), (c_vision.Equalize(), 1.0)],
[(color_impl(0), 0.4), (c_vision.Equalize(), 0.6)],
[(c_vision.Equalize(), 0.4), (solarize_impl(4), 0.2)],
[(solarize_impl(5), 0.6), (autocontrast_impl(5), 0.6)],
[(c_vision.Invert(), 0.6), (c_vision.Equalize(), 1.0)],
[(color_impl(4), 0.6), (contrast_impl(8), 1.0)],
[(c_vision.Equalize(), 0.8), (c_vision.Equalize(), 0.6)],
]
```
4. Insert the AutoAugment transform after the `RandomCrop` operation.
```
def create_dataset(dataset_path, do_train, policy, repeat_num=1, batch_size=32, shuffle=True, num_samples=5):
# create a train dataset for ResNet-50
data = ds.Cifar10Dataset(dataset_path, num_parallel_workers=8,
shuffle=shuffle, num_samples=num_samples)
image_size = 224
mean = [0.485 * 255, 0.456 * 255, 0.406 * 255]
std = [0.229 * 255, 0.224 * 255, 0.225 * 255]
# define map operations
if do_train:
trans = [
c_vision.RandomCrop((32, 32), (4, 4, 4, 4)),
]
post_trans = [
c_vision.RandomHorizontalFlip(prob=0.5),
]
else:
trans = [
c_vision.Decode(),
c_vision.Resize(256),
c_vision.CenterCrop(image_size),
c_vision.Normalize(mean=mean, std=std),
c_vision.HWC2CHW()
]
data = data.map(operations=trans, input_columns="image")
if do_train:
data = data.map(operations=c_vision.RandomSelectSubpolicy(policy), input_columns=["image"])
data = data.map(operations=post_trans, input_columns="image")
type_cast_op = c_transforms.TypeCast(mstype.int32)
data = data.map(operations=type_cast_op, input_columns="label")
# apply the batch operation
data = data.batch(batch_size, drop_remainder=True)
# apply the repeat operation
data = data.repeat(repeat_num)
return data
```
5. Verify the automatic augmentation effect.
- With the single sub-policy, the probability of the `RandomRotation` operation is set to 1, so it is always applied, while the probability of the `RandomColorAdjust` operation is set to 0, so it is never applied.
```
DATA_DIR = "./datasets/cifar-10-batches-bin/train"
data = create_dataset(dataset_path=DATA_DIR, do_train=True, batch_size=5, shuffle=False, num_samples=5, policy=policy_list)
epochs = 5
itr = data.create_dict_iterator()
fig = plt.figure(figsize=(8, 8))
columns = 5
rows = 5
step_num = 0
for ep_num in range(epochs):
for data in itr:
step_num += 1
for index in range(rows):
fig.add_subplot(rows, columns, ep_num * rows + index + 1)
plt.imshow(data['image'].asnumpy()[index])
plt.show()
```
- With multiple sub-policies, each image first selects one sub-policy at random with equal probability, and then the two operations inside it are applied at random according to their probabilities, improving the generalization of the augmented data.
```
DATA_DIR = "./datasets/cifar-10-batches-bin/train"
data = create_dataset(dataset_path=DATA_DIR, do_train=True, batch_size=5, shuffle=False, num_samples=5, policy=cifar10_policy)
epochs = 5
itr = data.create_dict_iterator()
fig = plt.figure(figsize=(8, 8))
columns = 5
rows = 5
step_num = 0
for ep_num in range(epochs):
for data in itr:
step_num += 1
for index in range(rows):
fig.add_subplot(rows, columns, ep_num * rows + index + 1)
plt.imshow(data['image'].asnumpy()[index])
plt.show()
```
> To better demonstrate the effect, only 5 images are loaded here, reads are done without `shuffle`, and the `Normalize` and `HWC2CHW` operations are skipped during automatic augmentation.
>
> In the results you can see the augmentation effect for each image in the batch: the horizontal direction shows the 5 images of one batch, and the vertical direction shows 5 batches.
<img src="NotebookAddons/blackboard-banner.png" width="100%" />
<font face="Calibri">
<br>
<font size="5"> <b>Change Detection in <font color='rgba(200,0,0,0.2)'>Your Own</font> SAR Amplitude Time Series Stack </b> </font>
<br>
<font size="4"> <b> Franz J Meyer; University of Alaska Fairbanks & Josef Kellndorfer, <a href="http://earthbigdata.com/" target="_blank">Earth Big Data, LLC</a> </b> <br>
<img style="padding: 7px" src="NotebookAddons/UAFLogo_A_647.png" width="170" align="right"/>
</font>
<font size="3"> This notebook introduces you to the methods of change detection in deep multi-temporal SAR image data stacks.
<br><br>
<b>In this chapter we introduce the following data analysis concepts:</b>
- How to use your own HyP3-generated data stack in a change detection effort
- The concepts of time series slicing by month, year, and date.
- The concepts and workflow of Cumulative Sum-based change point detection.
- The identification of change dates for each identified change point.
</font>
</font>
<hr>
<font face="Calibri" size="5" color="darkred"> <b>Important Note about JupyterHub</b> </font>
<br><br>
<font face="Calibri" size="3"> <b>Your JupyterHub server will automatically shutdown when left idle for more than 1 hour. Your notebooks will not be lost but you will have to restart their kernels and re-run them from the beginning. You will not be able to seamlessly continue running a partially run notebook.</b> </font>
```
%%javascript
var kernel = Jupyter.notebook.kernel;
var command = ["notebookUrl = ",
"'", window.location, "'" ].join('')
kernel.execute(command)
from IPython.display import Markdown
from IPython.display import display
user = !echo $JUPYTERHUB_USER
env = !echo $CONDA_PREFIX
if env[0] == '':
env[0] = 'Python 3 (base)'
if env[0] != '/home/jovyan/.local/envs/rtc_analysis':
display(Markdown(f'<text style=color:red><strong>WARNING:</strong></text>'))
display(Markdown(f'<text style=color:red>This notebook should be run using the "rtc_analysis" conda environment.</text>'))
display(Markdown(f'<text style=color:red>It is currently using the "{env[0].split("/")[-1]}" environment.</text>'))
display(Markdown(f'<text style=color:red>Select the "rtc_analysis" from the "Change Kernel" submenu of the "Kernel" menu.</text>'))
display(Markdown(f'<text style=color:red>If the "rtc_analysis" environment is not present, use <a href="{notebookUrl.split("/user")[0]}/user/{user[0]}/notebooks/conda_environments/Create_OSL_Conda_Environments.ipynb"> Create_OSL_Conda_Environments.ipynb </a> to create it.</text>'))
display(Markdown(f'<text style=color:red>Note that you must restart your server after creating a new environment before it is usable by notebooks.</text>'))
```
<hr>
<font face="Calibri">
<font size="5"> <b> 0. Importing Relevant Python Packages </b> </font>
<font size="3">In this notebook we will use the following scientific libraries:
<ol type="1">
<li> <b><a href="https://pandas.pydata.org/" target="_blank">Pandas</a></b> is a Python library that provides high-level data structures and a vast variety of tools for analysis. The great feature of this package is the ability to translate rather complex operations with data into one or two commands. Pandas contains many built-in methods for filtering and combining data, as well as the time-series functionality. </li>
<li> <b><a href="https://www.gdal.org/" target="_blank">GDAL</a></b> is a software library for reading and writing raster and vector geospatial data formats. It includes a collection of programs tailored for geospatial data processing. Most modern GIS systems (such as ArcGIS or QGIS) use GDAL in the background.</li>
<li> <b><a href="http://www.numpy.org/" target="_blank">NumPy</a></b> is one of the principal packages for scientific applications of Python. It is intended for processing large multidimensional arrays and matrices, and an extensive collection of high-level mathematical functions and implemented methods makes it possible to perform various operations with these objects. </li>
<li> <b><a href="https://matplotlib.org/index.html" target="_blank">Matplotlib</a></b> is a low-level library for creating two-dimensional diagrams and graphs. With its help, you can build diverse charts, from histograms and scatterplots to non-Cartesian coordinates graphs. Moreover, many popular plotting libraries are designed to work in conjunction with matplotlib. </li>
</font>
<br>
<font face="Calibri" size="3"><b>Our first step is to import them:</b> </font>
```
%%capture
import os
import glob
import json # for loads
import pandas as pd
from osgeo import gdal
import numpy as np
%matplotlib inline
import matplotlib.pylab as plt
import asf_notebook as asfn
asfn.jupytertheme_matplotlib_format()
```
<hr>
<font face="Calibri">
<font size="5"> <b> 1. Load Your Prepared Data Stack Into the Notebook </b> </font>
<font size="3"> This notebook assumes that you've prepared your own data stack of <b>RTC image products</b> over your personal area of interest. This can be done using the <b>Prepare_Data_Stack_Hyp3</b> and <b>Subset_Data_Stack notebooks</b>.
This notebook expects <a href="https://media.asf.alaska.edu/uploads/RTC/rtc_atbd_v1.2_final.pdf" target="_blank">Radiometric Terrain Corrected</a> (RTC) image products as input, so be sure to select an RTC process when creating the subscription for your input data within HyP3. Prefer a **unique orbit geometry** (ascending or descending) to keep geometric differences between images low.
<b>Begin by writing a function to retrieve and the absolute paths to each of our tiffs:</b>
</font>
</font>
```
def get_tiff_paths(paths):
tiff_paths = !ls $paths | sort -t_ -k5,5
return tiff_paths
```
<font face="Calibri" size="3"><b>Enter the path to the directory holding your tiffs:</b> </font>
```
while True:
print("Enter the absolute path to the directory holding your tiffs.")
tiff_dir = input()
wildcard_path = f"{tiff_dir}/*.tif*"
if os.path.exists(tiff_dir):
tiff_paths = get_tiff_paths(wildcard_path)
if len(tiff_paths) < 1:
print(f"{tiff_dir} exists but contains no tifs.")
print("You will not be able to proceed until tifs are prepared.")
break
else:
print(f"\n{tiff_dir} does not exist.")
continue
```
<font face="Calibri" size="3"><b>Determine the path to the analysis directory containing the tiff directory:</b> </font>
```
analysis_dir = os.path.dirname(tiff_dir)
print(analysis_dir)
```
<font face="Calibri" size="3"><b>Create a wildcard path to the tiffs:</b> </font>
```
wildcard_path = f"{tiff_dir}/*.tif*"
print(wildcard_path)
```
<font face="Calibri" size="3"><b>Write a function to extract the tiff dates from a wildcard path:</b> </font>
```
def get_dates(paths):
dates = []
pths = glob.glob(paths)
for p in pths:
filename = os.path.basename(p).split('_')
for chunk in filename:
if len(chunk) == 15 and 'T' in chunk:
date = chunk.split('T')[0]
dates.append(date)
break
elif len(chunk) == 8:
try:
int(chunk)
dates.append(chunk)
break
except ValueError:
continue
dates.sort()
return dates
```
<font face="Calibri" size="3"><b>Call get_dates() to collect the product acquisition dates:</b></font>
```
dates = get_dates(wildcard_path)
print(dates)
```
<font face="Calibri" size="3"><b>Gather the upper-left and lower-right corner coordinates of the data stack:</b></font>
```
coords = [[], []]
info = (gdal.Info(tiff_paths[0], options = ['-json']))
info = json.dumps(info)
coords[0] = (json.loads(info))['cornerCoordinates']['upperLeft']
coords[1] = (json.loads(info))['cornerCoordinates']['lowerRight']
print(coords)
```
<font face="Calibri" size="3"><b>Grab the stack's UTM zone.</b> Note that any UTM zone conflicts should already have been handled in the Prepare_Data_Stack_Hyp3 notebook.</font>
```
utm = json.loads(info)['coordinateSystem']['wkt'].split('ID')[-1].split(',')[1][0:-2]
print(f"UTM Zone: {utm}")
```
<hr>
<font face="Calibri" size="3"> Now we stack up the data by creating a virtual raster table with links to all subset data files: </font>
<br><br>
<font size="3"><b>Create the virtual raster table for the subset GeoTiffs:</b></font>
```
!gdalbuildvrt -separate raster_stack.vrt $wildcard_path
```
<hr>
<font face="Calibri">
<font size="5"> <b> 3. Now You Can Work With Your Data </b> </font>
<font size="3"> Now you are ready to perform time series change detection on your data stack.
</font>
</font>
<br>
<font face="Calibri" size="4"> <b> 3.1 Define Data Directory and Path to VRT </b> </font>
<br><br>
<font face="Calibri" size="3"><b>Create a variable containing the VRT filename:</b></font>
```
image_file = "raster_stack.vrt"
```
<font face="Calibri" size="3"><b>Create an index of timedelta64 data with Pandas:</b></font>
```
# Get some indices for plotting
time_index = pd.DatetimeIndex(dates)
```
<font face="Calibri" size="3"><b>Print the bands and dates for all images in the virtual raster table (VRT):</b></font>
```
j = 1
print(f"Bands and dates for {image_file}")
for i in time_index:
print("{:4d} {}".format(j, i.date()), end=' ')
j += 1
if j%5 == 1: print()
```
<hr>
<br>
<font face="Calibri" size="4"> <b> 3.2 Open Your Data Stack with gdal </b> </font>
```
img = gdal.Open(image_file)
```
<font face="Calibri" size="3"><b>Print the bands, pixels, and lines:</b></font>
```
print(f"Number of bands: {img.RasterCount}")
print(f"Number of pixels: {img.RasterXSize}")
print(f"Number of lines: {img.RasterYSize}")
```
<hr>
<font face="Calibri" size="4"> <b> 3.3 Create a masked raster stack:</b></font>
```
raster_stack = img.ReadAsArray()
raster_stack_masked = np.ma.masked_where(raster_stack==0, raster_stack)
del raster_stack
```
<br>
<hr>
<font face="Calibri" size="5"> <b> 4. Cumulative Sum-based Change Detection Across an Entire Image</b> </font>
<font face="Calibri" size="3"> Using numpy arrays we can apply the concept of **cumulative sum change detection** analysis effectively on the entire image stack. We take advantage of array slicing and axis-based computing in numpy. **Axis 0 is the time domain** in our raster stacks.
<hr>
<font size="4"><b>4.1 Create our time series stack</b></font>
<br><br>
<font size="3"><b>Calculate the dB scale:</b></font>
```
db = 10.*np.ma.log10(raster_stack_masked)
```
<font face="Calibri" size="3">Sometimes it makes sense to <b>extract a reduced time span</b> from the full time series to reduce the number of different change objects in a scene. In the following, we extract a shorter time span:
</font>
```
date_picker = asfn.gui_date_picker(dates)
date_picker
subset_dates = date_picker.value
subset_dates = pd.DatetimeIndex(subset_dates)
date_index_subset = np.where((time_index>=subset_dates[0]) & (time_index<=subset_dates[1]))
db_subset = np.squeeze(db[date_index_subset, :, :])
time_index_subset = time_index[date_index_subset]
plt.figure(figsize=(12, 8))
band_number = 0
vmin = np.percentile(db_subset[band_number], 5)
vmax = np.percentile(db_subset[band_number], 95)
plt.title('Band {} {}'.format(band_number+1, time_index_subset[band_number].date()))
plt.imshow(db_subset[0], cmap='gray', vmin=vmin, vmax=vmax)
cbar = plt.colorbar()
_ = cbar.ax.set_xlabel('dB', fontsize='12')
```
<br>
<hr>
<font face="Calibri" size="4"> <b> 4.2 Calculate Mean Across Time Series to Prepare for Calculation of Cummulative Sum $S$:</b> </font>
<br><br>
<font face="Calibri" size="3"><b>Write a function to convert our plots into GeoTiffs:</b></font>
```
def geotiff_from_plot(source_image, out_filename, extent, utm, cmap=None, vmin=None, vmax=None, interpolation=None, dpi=300):
assert "." not in out_filename, 'Error: Do not include the file extension in out_filename'
assert type(extent) == list and len(extent) == 2 and len(extent[0]) == 2 and len(
extent[1]) == 2, 'Error: extent must be a list in the form [[upper_left_x, upper_left_y], [lower_right_x, lower_right_y]]'
plt.figure()
plt.axis('off')
plt.imshow(source_image, cmap=cmap, vmin=vmin, vmax=vmax, interpolation=interpolation)
temp = f"{out_filename}_temp.png"
plt.savefig(temp, dpi=dpi, transparent='true', bbox_inches='tight', pad_inches=0)
cmd = f"gdal_translate -of Gtiff -a_ullr {extent[0][0]} {extent[0][1]} {extent[1][0]} {extent[1][1]} -a_srs EPSG:{utm} {temp} {out_filename}.tiff"
!{cmd}
try:
os.remove(temp)
except FileNotFoundError:
pass
```
<font face="Calibri" size="3"><b>Create a directory in which to store our plots and animations:</b></font>
```
output_path = f"{tiff_dir}/plots_and_animations"
asfn.new_directory(output_path)
```
<font face="Calibri" size="3"><b>Plot the time-series mean and save as a png (time_series_mean.png):</b></font>
```
db_mean = np.mean(db_subset, axis=0)
plt.figure(figsize=(12, 8))
plt.imshow(db_mean, cmap='gray')
cbar = plt.colorbar()
cbar.ax.set_xlabel('dB', fontsize='12')
plt.savefig(f"{output_path}/time_series_mean.png", dpi=300, transparent='true')
```
<font face="Calibri" size="3"><b>Save the time-series mean as a GeoTiff (time_series_mean.tiff):</b></font>
```
%%capture
geotiff_from_plot(db_mean, f"{output_path}/time_series_mean", coords, utm, cmap='gray')
```
<font face="Calibri" size="3"><b>Calculate the residuals and plot residuals[0]. Save it as a png (residuals.png):</b></font>
```
residuals = db_subset - db_mean
plt.figure(figsize=(12, 8))
plt.imshow(residuals[0])
plt.title('Residuals for Band {} {}'.format(band_number+1, time_index_subset[band_number].date()))
cbar = plt.colorbar()
_ = cbar.ax.set_xlabel('dB', fontsize='12')
plt.savefig(f"{output_path}/residuals.png", dpi=300, transparent='true')
```
<font face="Calibri" size="3"><b>Save the residuals[0] as a GeoTiff (residuals.tiff):</b></font>
```
%%capture
geotiff_from_plot(residuals[0], f"{output_path}/residuals", coords, utm)
```
<br>
<hr>
<font face="Calibri" size="4"><b> 4.3 Calculate Cummulative Sum $S$ as well as Change Magnitude $S_{diff}$:</b></font>
<br><br>
<font face="Calibri" size="3"><b>Plot Smin, Smax, and the change magnitude and save a png of the plots (Smin_Smax_Sdiff.png):</b></font>
```
summation = np.cumsum(residuals, axis=0)
summation_max = np.max(summation, axis=0)
summation_min = np.min(summation, axis=0)
change_mag = summation_max - summation_min
fig, ax = plt.subplots(1, 3, figsize=(16, 4))
vmin = np.percentile(summation_min.flatten(), 3)
vmax = np.percentile(summation_max.flatten(), 97)
max_plot = ax[0].imshow(summation_max, vmin=vmin, vmax=vmax)
ax[0].set_title('$S_{max}$')
ax[1].imshow(summation_min, vmin=vmin, vmax=vmax)
ax[1].set_title('$S_{min}$')
ax[2].imshow(change_mag, vmin=vmin, vmax=vmax)
ax[2].set_title('Change Magnitude')
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.15, 0.02, 0.7])
cbar = fig.colorbar(max_plot, cax=cbar_ax)
_ = cbar.ax.set_xlabel('dB', fontsize='12')
plt.savefig(f"{output_path}/Smin_Smax_Sdiff.png", dpi=300, transparent='true')
```
<font face="Calibri" size="3"><b>Save Smax as a GeoTiff (Smax.tiff):</b></font>
```
%%capture
geotiff_from_plot(summation_max, f"{output_path}/Smax", coords, utm, vmin=vmin, vmax=vmax)
```
<font face="Calibri" size="3"><b>Save Smin as a GeoTiff (Smin.tiff):</b></font>
```
%%capture
geotiff_from_plot(summation_min, f"{output_path}/Smin", coords, utm, vmin=vmin, vmax=vmax)
```
<font face="Calibri" size="3"><b>Save the change magnitude as a GeoTiff (Sdiff.tiff):</b></font>
```
%%capture
geotiff_from_plot(change_mag, f"{output_path}/Sdiff", coords, utm, vmin=vmin, vmax=vmax)
```
<br>
<hr>
<font face="Calibri" size="4"> <b> 4.4 Mask $S_{diff}$ With a-priori Threshold To Idenfity Change Candidates:</b> </font>
<font face="Calibri" size="3">To identified change candidate pixels, we can threshold $S_{diff}$ to reduce computation of the bootstrapping. For land cover change, we would not expect more than 5-10% change pixels in a landscape. So, if the test region is reasonably large, setting a threshold for expected change to 10% is appropriate. In our example, we'll start out with a very conservative threshold of 50%.
<br><br>
<b>Plot and save the histogram and CDF for the change magnitude (change_mag_histogram_CDF.png):</b></font>
```
plt.rcParams.update({'font.size': 14})
fig = plt.figure(figsize=(14, 6)) # Initialize figure with a size
ax1 = fig.add_subplot(121)  # 121 determines: 1 row, 2 columns, first plot
ax2 = fig.add_subplot(122)
# First plot: Histogram
# IMPORTANT: To get a histogram, we first need to *flatten*
# the two-dimensional image into a one-dimensional vector.
histogram = ax1.hist(change_mag.flatten(), bins=200, range=(0, np.max(change_mag)))
ax1.xaxis.set_label_text('Change Magnitude')
ax1.set_title('Change Magnitude Histogram')
plt.grid()
n, bins, patches = ax2.hist(change_mag.flatten(), bins=200, range=(0, np.max(change_mag)), cumulative=True, density=True, histtype='step', label='Empirical')
ax2.xaxis.set_label_text('Change Magnitude')
ax2.set_title('Change Magnitude CDF')
plt.grid()
plt.savefig(f"{output_path}/change_mag_histogram_CDF", dpi=72)
percentile = 0.5
out_indices = np.where(n > percentile)
threshold_index = np.min(out_indices)
threshold = bins[threshold_index]
print('At the {}% percentile, the threshold value is {:2.2f}'.format(percentile*100, threshold))
```
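<font face="Calibri" size="3">Up to the resolution of the 200 histogram bins, the same a-priori threshold can be read directly from NumPy; a minimal equivalent for the 50% level used above (we continue with the <b>threshold</b> value derived from the CDF):</font>
```
# Equivalent a-priori threshold straight from NumPy (50th percentile of the change magnitude)
threshold_alt = np.percentile(change_mag.flatten(), 50)
print('NumPy 50th percentile threshold: {:2.2f}'.format(threshold_alt))
```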
<font face="Calibri" size="3">Using this threshold, we can <b>visualize our change candidate areas and save them as a png (change_candidate.png):</b></font>
```
change_mag_mask = change_mag < threshold
plt.figure(figsize=(12, 8))
plt.title('Change Candidate Areas (black)')
_ = plt.imshow(change_mag_mask, cmap='gray')
plt.savefig(f"{output_path}/change_candidate.png", dpi=300, transparent='true')
```
<font face="Calibri" size="3"><b>Save the change candidate areas as a GeoTiff (change_canididate.tiff):</b>
</font>
```
%%capture
geotiff_from_plot(change_mag_mask, f"{output_path}/change_candidate", coords, utm, cmap='gray')
```
<br>
<hr>
<font face="Calibri" size="4"> <b> 4.5 Bootstrapping to Prepare for Change Point Selection:</b> </font>
<font face="Calibri" size="3">We can now perform bootstrapping over the candidate pixels. The workflow is as follows:
<ul>
<li>Filter our residuals to the change candidate pixels</li>
<li>Perform bootstrapping over candidate pixels</li>
</ul>
For efficient computing we permute the index of the time axis.
</font>
```
residuals_mask = np.broadcast_to(change_mag_mask, residuals.shape)
residuals_masked = np.ma.array(residuals, mask=residuals_mask)
```
<font face="Calibri" size="3">On the masked time series stack of residuals, we can re-compute the cumulative sums:
</font>
```
summation_masked = np.ma.cumsum(residuals_masked, axis=0)
```
<font face="Calibri" size="3"><b>Plot the masked Smax, Smin, and change magnitude. Save them as a png (masked_Smax_Smin_Sdiff.png):</b>
</font>
```
summation_masked_max = np.ma.max(summation_masked, axis=0)
summation_masked_min = np.ma.min(summation_masked, axis=0)
change_mag_masked = summation_masked_max - summation_masked_min
fig, ax = plt.subplots(1, 3, figsize=(16, 4))
vmin = summation_masked_min.min()
vmax = summation_masked_max.max()
masked_sum_max_plot = ax[0].imshow(summation_masked_max, vmin=vmin, vmax=vmax)
ax[0].set_title('Masked $S_{max}$')
ax[1].imshow(summation_masked_min, vmin=vmin, vmax=vmax)
ax[1].set_title('Masked $S_{min}$')
ax[2].imshow(change_mag_masked, vmin=vmin, vmax=vmax)
ax[2].set_title('Masked Change Magnitude')
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.15, 0.02, 0.7])
cbar = fig.colorbar(masked_sum_max_plot, cax=cbar_ax)
_ = cbar.ax.set_xlabel('dB', fontsize='12')
plt.savefig(f"{output_path}/masked_Smax_Smin_Sdiff.png", dpi=300, transparent='true')
```
<font face="Calibri" size="3"><b>Save the masked Smax as a GeoTiff (masked_Smax.tiff):</b>
</font>
```
%%capture
geotiff_from_plot(summation_masked_max, f"{output_path}/masked_Smax", coords, utm, vmin=vmin, vmax=vmax)
```
<font face="Calibri" size="3"><b>Save the masked Smin as a GeoTiff (masked_Smin.tiff):</b>
</font>
```
%%capture
geotiff_from_plot(summation_masked_min, f"{output_path}/masked_Smin", coords, utm, vmin=vmin, vmax=vmax)
```
<font face="Calibri" size="3"><b>Save the masked change magnitude as a GeoTiff (masked_Sdiff.tiff):</b>
</font>
```
%%capture
geotiff_from_plot(change_mag_masked, f"{output_path}/masked_Sdiff", coords, utm, vmin=vmin, vmax=vmax)
```
<font face="Calibri" size="3">Now let's perform <b>bootstrapping</b>:
</font>
```
n_bootstraps = 100  # bootstrap sample size
# to keep track of the maximum Sdiff of the bootstrapped samples:
change_mag_random_max = np.ma.copy(change_mag_masked)
change_mag_random_max[~change_mag_random_max.mask] = 0
# to compute the Sdiff sums of the bootstrapped samples:
change_mag_random_sum = np.ma.copy(change_mag_masked)
change_mag_random_sum[~change_mag_random_sum.mask] = 0
# to keep track of the count of bootstrapped samples with Sdiff below the observed one:
n_change_mag_gt_change_mag_random = np.ma.copy(change_mag_masked)
n_change_mag_gt_change_mag_random[~n_change_mag_gt_change_mag_random.mask] = 0
print("Running Bootstrapping for %d iterations ..." % (n_bootstraps))
for i in range(n_bootstraps):
# For efficiency, we shuffle the time axis index and use that
#to randomize the masked array
random_index = np.random.permutation(residuals_masked.shape[0])
# Randomize the time step of the residuals
residuals_random = residuals_masked[random_index,:,:]
summation_random = np.ma.cumsum(residuals_random, axis=0)
summation_random_max = np.ma.max(summation_random, axis=0)
summation_random_min = np.ma.min(summation_random, axis=0)
change_mag_random = summation_random_max - summation_random_min
change_mag_random_sum += change_mag_random
change_mag_random_max[np.ma.greater(change_mag_random, change_mag_random_max)] = \
change_mag_random[np.ma.greater(change_mag_random, change_mag_random_max)]
n_change_mag_gt_change_mag_random[np.ma.greater(change_mag_masked, change_mag_random)] += 1
if ((i+1)/n_bootstraps*100)%10 == 0:
print("\r%4.1f%% completed" % ((i+1)/n_bootstraps*100), end='\r', flush=True)
print(f"Bootstrapping Complete")
```
<br>
<hr>
<font face="Calibri" size="4"> <b> 4.6 Extract Confidence Metrics and Select Final Change Points:</b> </font>
<font face="Calibri" size="3">We first <b>compute for all pixels the confidence level $CL$, the change point significance metric $CP_{significance}$ and the product of the two as our confidence metric for identified change points. Plot the results and save them as a png (confidenceLevel_CPSignificance.png):</b></font>
```
confidence_level = n_change_mag_gt_change_mag_random / n_bootstraps
change_point_significance = 1.- (change_mag_random_sum / n_bootstraps)/change_mag
#Plot
fig, ax = plt.subplots(1, 3, figsize=(16, 4))
a = ax[0].imshow(confidence_level*100)
cbar0 = fig.colorbar(a, ax=ax[0])
_ = cbar0.ax.set_xlabel('%', fontsize='12')
ax[0].set_title('Confidence Level %')
a = ax[1].imshow(change_point_significance)
_ = fig.colorbar(a, ax=ax[1])
ax[1].set_title('Significance')
a = ax[2].imshow(confidence_level*change_point_significance)
_ = fig.colorbar(a, ax=ax[2])
_ = ax[2].set_title('CL x S')
plt.savefig(f"{output_path}/confidenceLevel_CPSignificance.png", dpi=300, transparent='true')
```
<font face="Calibri" size="3"><b>Save the confidence level as a GeoTiff (confidence_level.tiff):</b>
</font>
```
%%capture
geotiff_from_plot(confidence_level*100, f"{output_path}/confidence_level", coords, utm)
```
<font face="Calibri" size="3"><b>Save the change point significance as a GeoTiff (cp_significance.tiff):</b>
</font>
```
%%capture
geotiff_from_plot(change_point_significance, f"{output_path}/cp_significance", coords, utm)
```
<font face="Calibri" size="3"><b>Save the change point significance as a GeoTiff (cp_significance.tiff):</b>
</font>
```
%%capture
geotiff_from_plot(confidence_level*change_point_significance, f"{output_path}/confidenceLevel_x_CPSignificance", coords, utm)
```
<font face="Calibri" size="3">Now we can <b>set a change point threshold</b> to identify most likely change pixels in our map of change candidates:
</font>
```
change_point_threshold = 0.01
```
<font face="Calibri" size="3"><b>Plot the detected change pixels based on the change_point_threshold and save it as a png (detected_change_pixels.png):</b></font>
```
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(1, 1, 1)
plt.title('Detected Change Pixels based on Threshold %2.2f' % (change_point_threshold))
a = ax.imshow(confidence_level*change_point_significance < change_point_threshold, cmap='cool')
plt.savefig(f"{output_path}/detected_change_pixels.png", dpi=300, transparent='true')
```
<font face="Calibri" size="3"><b>Save the detected_change_pixels as a GeoTiff (detected_change_pixels.tiff):</b>
</font>
```
%%capture
geotiff_from_plot(confidence_level*change_point_significance < change_point_threshold, f"{output_path}/detected_change_pixels", coords, utm, cmap='cool')
```
<br>
<hr>
<font face="Calibri" size="4"> <b> 4.7 Derive Timing of Change for Each Change Pixel:</b> </font>
<font face="Calibri" size="3">Our last step in the identification of the change points is to extract the timing of the change. We will produce a raster layer that shows the band number of this first date after a change was detected. We will make use of the numpy indexing scheme. First, we create a combined mask of the first threshold and the identified change points after the bootstrapping. For this we use the numpy "mask_or" operation.
</font>
```
# make a mask of our change points from the new threshold and the previous mask
change_point_mask = np.ma.mask_or(confidence_level*change_point_significance < change_point_threshold, confidence_level.mask)
# Broadcast the mask to the shape of the masked S curves
change_point_mask2 = np.broadcast_to(change_point_mask, summation_masked.shape)
# Make a numpy masked array with this mask
change_point_raster = np.ma.array(summation_masked.data, mask=change_point_mask2)
```
<font face="Calibri" size="3">To retrieve the dates of the change points we find the band indices in the time series along the time axis where the maximum of the cumulative sums was located. Numpy offers the "argmax" function for this purpose.
</font>
```
change_point_index = np.ma.argmax(change_point_raster, axis=0)
change_indices = list(np.unique(change_point_index))
print(change_indices)
change_indices.remove(0)
print(change_indices)
# Look up the dates from the indices to get the change dates
all_dates = time_index_subset
change_dates = [str(all_dates[x].date()) for x in change_indices]
```
<font face="Calibri" size="3">Lastly, we <b>plot the change dates by showing the $CP_{index}$ raster and label the change dates. Save the plot as a png (change_dates.png):</b></font>
```
ticks = change_indices
ticklabels = change_dates
cmap = plt.cm.get_cmap('tab20', ticks[-1])
fig, ax = plt.subplots(figsize=(12, 12))
cax = ax.imshow(change_point_index, interpolation='nearest', cmap=cmap)
# fig.subplots_adjust(right=0.8)
# cbar_ax = fig.add_axes([0.85, 0.15, 0.05, 0.7])
# fig.colorbar(p,cax=cbar_ax)
ax.set_title('Dates of Change')
# cbar = fig.colorbar(cax,ticks=ticks)
cbar = fig.colorbar(cax, ticks=ticks, orientation='horizontal')
_ = cbar.ax.set_xticklabels(ticklabels, size=10, rotation=45, ha='right')
plt.savefig(f"{output_path}/change_dates.png", dpi=300, transparent='true')
```
<font face="Calibri" size="3"><b>Save the change dates as a GeoTiff (change_dates.tiff):</b>
</font>
```
%%capture
geotiff_from_plot(change_point_index, f"{output_path}/change_dates", coords, utm, cmap=cmap, interpolation='nearest', dpi=600)
```
<font face="Calibri" size="2"> <i>GEOS 657 Microwave Remote Sensing - Version 1.3.0 - April 2021 </i>
<br>
<b>Version Changes</b>
<ul>
<li>namespace asf_notebook</li>
</ul>
</font>
# Comparing machine learning models in scikit-learn
*From the video series: [Introduction to machine learning with scikit-learn](https://github.com/justmarkham/scikit-learn-videos)*
```
#environment setup with watermark
%load_ext watermark
%watermark -a 'Gopala KR' -u -d -v -p watermark,numpy,pandas,matplotlib,nltk,sklearn,tensorflow,theano,mxnet,chainer
```
## Agenda
- How do I choose **which model to use** for my supervised learning task?
- How do I choose the **best tuning parameters** for that model?
- How do I estimate the **likely performance of my model** on out-of-sample data?
## Review
- Classification task: Predicting the species of an unknown iris
- Used three classification models: KNN (K=1), KNN (K=5), logistic regression
- Need a way to choose between the models
**Solution:** Model evaluation procedures
## Evaluation procedure #1: Train and test on the entire dataset
1. Train the model on the **entire dataset**.
2. Test the model on the **same dataset**, and evaluate how well we did by comparing the **predicted** response values with the **true** response values.
```
# read in the iris data
from sklearn.datasets import load_iris
iris = load_iris()
# create X (features) and y (response)
X = iris.data
y = iris.target
```
### Logistic regression
```
# import the class
from sklearn.linear_model import LogisticRegression
# instantiate the model (using the default parameters)
logreg = LogisticRegression()
# fit the model with data
logreg.fit(X, y)
# predict the response values for the observations in X
logreg.predict(X)
# store the predicted response values
y_pred = logreg.predict(X)
# check how many predictions were generated
len(y_pred)
```
Classification accuracy:
- **Proportion** of correct predictions
- Common **evaluation metric** for classification problems
```
# compute classification accuracy for the logistic regression model
from sklearn import metrics
print(metrics.accuracy_score(y, y_pred))
```
- Known as **training accuracy** when you train and test the model on the same data
### KNN (K=5)
```
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X, y)
y_pred = knn.predict(X)
print(metrics.accuracy_score(y, y_pred))
```
### KNN (K=1)
```
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X, y)
y_pred = knn.predict(X)
print(metrics.accuracy_score(y, y_pred))
```
### Problems with training and testing on the same data
- Goal is to estimate likely performance of a model on **out-of-sample data**
- But, maximizing training accuracy rewards **overly complex models** that won't necessarily generalize
- Unnecessarily complex models **overfit** the training data

*Image Credit: [Overfitting](http://commons.wikimedia.org/wiki/File:Overfitting.svg#/media/File:Overfitting.svg) by Chabacano. Licensed under GFDL via Wikimedia Commons.*
## Evaluation procedure #2: Train/test split
1. Split the dataset into two pieces: a **training set** and a **testing set**.
2. Train the model on the **training set**.
3. Test the model on the **testing set**, and evaluate how well we did.
```
# print the shapes of X and y
print(X.shape)
print(y.shape)
# STEP 1: split X and y into training and testing sets
from sklearn.model_selection import train_test_split  # sklearn.cross_validation in scikit-learn versions before 0.18
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=4)
```

What did this accomplish?
- Model can be trained and tested on **different data**
- Response values are known for the testing set, and thus **predictions can be evaluated**
- **Testing accuracy** is a better estimate than training accuracy of out-of-sample performance
```
# print the shapes of the new X objects
print(X_train.shape)
print(X_test.shape)
# print the shapes of the new y objects
print(y_train.shape)
print(y_test.shape)
# STEP 2: train the model on the training set
logreg = LogisticRegression()
logreg.fit(X_train, y_train)
# STEP 3: make predictions on the testing set
y_pred = logreg.predict(X_test)
# compare actual response values (y_test) with predicted response values (y_pred)
print(metrics.accuracy_score(y_test, y_pred))
```
Repeat for KNN with K=5:
```
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
print(metrics.accuracy_score(y_test, y_pred))
```
Repeat for KNN with K=1:
```
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
print(metrics.accuracy_score(y_test, y_pred))
```
Can we locate an even better value for K?
```
# try K=1 through K=25 and record testing accuracy
k_range = list(range(1, 26))
scores = []
for k in k_range:
knn = KNeighborsClassifier(n_neighbors=k)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
scores.append(metrics.accuracy_score(y_test, y_pred))
# import Matplotlib (scientific plotting library)
import matplotlib.pyplot as plt
# allow plots to appear within the notebook
%matplotlib inline
# plot the relationship between K and testing accuracy
plt.plot(k_range, scores)
plt.xlabel('Value of K for KNN')
plt.ylabel('Testing Accuracy')
```
- **Training accuracy** rises as model complexity increases
- **Testing accuracy** penalizes models that are too complex or not complex enough
- For KNN models, complexity is determined by the **value of K** (lower value = more complex)
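To visualize the training-accuracy side of this trade-off, we can record both scores for each value of K. This is a quick sketch reusing the objects defined above:
```
# try K=1 through K=25 and record both training and testing accuracy
train_scores = []
test_scores = []
for k in k_range:
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X_train, y_train)
    train_scores.append(metrics.accuracy_score(y_train, knn.predict(X_train)))
    test_scores.append(metrics.accuracy_score(y_test, knn.predict(X_test)))

# plot both curves: training accuracy peaks for the most complex model (K=1)
plt.plot(k_range, train_scores, label='Training accuracy')
plt.plot(k_range, test_scores, label='Testing accuracy')
plt.xlabel('Value of K for KNN')
plt.ylabel('Accuracy')
plt.legend()
```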
## Making predictions on out-of-sample data
```
# instantiate the model with the best known parameters
knn = KNeighborsClassifier(n_neighbors=11)
# train the model with X and y (not X_train and y_train)
knn.fit(X, y)
# make a prediction for an out-of-sample observation
knn.predict([[3, 5, 4, 2]])
```
## Downsides of train/test split?
- Provides a **high-variance estimate** of out-of-sample accuracy
- **K-fold cross-validation** overcomes this limitation
- But, train/test split is still useful because of its **flexibility and speed**
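As a quick preview of K-fold cross-validation, we can average the testing accuracy over 10 different train/test splits. A minimal sketch (on scikit-learn versions before 0.18, import from `sklearn.cross_validation` instead):
```
# 10-fold cross-validation with KNN (K=11)
from sklearn.model_selection import cross_val_score
knn = KNeighborsClassifier(n_neighbors=11)
scores = cross_val_score(knn, X, y, cv=10, scoring='accuracy')
print(scores.mean())
```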
## Resources
- Quora: [What is an intuitive explanation of overfitting?](http://www.quora.com/What-is-an-intuitive-explanation-of-overfitting/answer/Jessica-Su)
- Video: [Estimating prediction error](https://www.youtube.com/watch?v=_2ij6eaaSl0&t=2m34s) (12 minutes, starting at 2:34) by Hastie and Tibshirani
- [Understanding the Bias-Variance Tradeoff](http://scott.fortmann-roe.com/docs/BiasVariance.html)
- [Guiding questions](https://github.com/justmarkham/DAT8/blob/master/homework/09_bias_variance.md) when reading this article
- Video: [Visualizing bias and variance](http://work.caltech.edu/library/081.html) (15 minutes) by Abu-Mostafa
## Comments or Questions?
- Email: <kevin@dataschool.io>
- Website: http://dataschool.io
- Twitter: [@justmarkham](https://twitter.com/justmarkham)
```
from IPython.core.display import HTML
def css_styling():
styles = open("styles/custom.css", "r").read()
return HTML(styles)
css_styling()
```
# Kats 204 Forecasting with Meta-Learning
This tutorial will introduce the meta-learning framework for forecasting in Kats. The table of contents for Kats 204 is as follows:
1. Overview of Meta-Learning Framework For Forecasting
2. Introduction to `GetMetaData`
3. Determining Predictability with `MetaLearnPredictability`
4. Model Selection with `MetaLearnModelSelect`
5. Hyperparameter Tuning with `MetaLearnHPT`
5.1. Initializing `MetaLearnHPT`
5.2. `MetaLearnHPT` with Default Neural Network Model Structure
5.3. `MetaLearnHPT` with Custom Neural Network Model Structure
**Note:** We provide two types of tutorial notebooks
- **Kats 101**, basic data structure and functionalities in Kats
- **Kats 20x**, advanced topics, including advanced forecasting techniques, advanced detection algorithms, `TsFeatures`, meta-learning, etc. (this tutorial)
## 1. Overview of Meta-Learning Framework For Forecasting
Suppose we have a time series and we are looking to build the best possible forecast (with respect to a predefined error metric such as mean absolute error) from the following list of candidate models (and possibly other forecasting models in Kats too):
* ARIMA
* SARIMA
* Holt-Winters
* Prophet
* Theta
* STLF
For a single time series, it is straightforward to do hyperparameter tuning for each of the candidate models with this time series, calculate the error metric, and choose the model that minimizes the error metric. We have discussed this methodology in detail in Kats 201. Our basic metadata object, `GetMetaData`, which we will introduce below, also does this calculation to find the best forecasting model for a single time series.
However, when we are working with a large number of time series, repeating this process quickly becomes intractable, and for that, we include a meta-learning framework for forecasting. There are two key model classes, plus one optional one, in our meta-learning framework:
1. `MetaLearnModelSelect`: Given the metadata for a time series, predict the best model family (from the candidate models of interest) to forecast the series. This model is a random forest by default.
2. `MetaLearnHPT`: Given a time series and a model type, predict the best parameters for this model. This model is a neural network.
3. `MetaLearnPredictability` (optional): Given the metadata for a time series, predict if it is "predictable", i.e. if it is possible to forecast with a threshold error. This model is a random forest by default.
For each of these models, you can use labeled training data to build a model or you load a pre-trained model from a file path.
We use the `GetMetaData` object to represent the metadata for a time series in `MetaLearnModelSelect` and `MetaLearnPredictability`. This tutorial begins with an introduction to the `GetMetaData` object. Since this object is heavily dependent on `TsFeatures`, if you are not familiar with `TsFeatures`, you should check out Kats 203 prior to continuing with this tutorial.
Next we will use labeled time series data from the `m3_meta_data.csv` file to show how to use `MetaLearnPredictability`, `MetaLearnModelSelect` and `MetaLearnHPT`.
The sample data in `m3_meta_data.csv` is very small, with 78 labeled examples, so the examples we provide here will not be highly accurate, but they will show you the proper workflow for using the meta-learning framework for forecasting in Kats.
## 2. Introduction to `GetMetaData`
The `GetMetaData` class generates the metadata for any time series. There are three key components to the the metadata for a time series:
1. `features`: the `TsFeatures` dictionary for the time series
2. `hpt_res`: a dictionary giving the best hyperparameters for each candidate model and the corresponding error metric for the time series
3. `best_model`: the name of the model with the smallest error metric
The default error metric is mean absolute error (mae) but this can be controlled with the `error_method` argument in `GetMetaData`.
The list of candidate models that we consider is controlled by the `all_models` argument in `GetMetaData`, which is a dictionary with the string names of the candidate models as keys and the corresponding model classes as values. The keys in `hpt_res` and the value of `best_model` come from the keys of the `all_models` dictionary. The default value of `all_models` will include the following six models.
1. ARIMA
2. SARIMA
3. Holt-Winters
4. Prophet
5. Theta
6. STLF
Our first example uses the `air_passengers` data set. We show how to get the metadata for this time series. We start by loading the time series into a `TimeSeriesData` object.
```
import pandas as pd
import numpy as np
import sys
import matplotlib.pyplot as plt
import warnings
warnings.simplefilter(action='ignore')
sys.path.append("../")
from kats.consts import TimeSeriesData
air_passengers_df = pd.read_csv("../kats/data/air_passengers.csv")
air_passengers_df.columns = ["time", "value"]
air_passengers_ts = TimeSeriesData(air_passengers_df)
```
Now we can construct the `GetMetaData` object for the `air_passengers` data set time series as follows. We use all of the default settings except that we use mean absolute percentage error (mape) as our error metric rather than the default of mean absolute error (mae)
```
from kats.models.metalearner.get_metadata import GetMetaData
# create an object MD of class GetMetaData with error method mean absolute percentage error (mape)
MD = GetMetaData(data=air_passengers_ts, error_method='mape')
```
Let's take a look at the `all_models` dictionary that is used by default here. You are allowed to specify your own `all_models` dictionary as long as all the values are classes that extend the abstract class `kats.models.Model`.
```
MD.all_models
```
The `all_params` dictionary will have the same keys as the `all_models` dictionary, and the values are the corresponding parameter class (i.e. a class that extends the class `kats.const.Params`)
```
MD.all_params
```
Now we can use the `get_meta_data` function to calculate all the metadata and output the result as a dictionary.
```
# get meta data as a dictionary
air_passengers_metadata = MD.get_meta_data()
```
Let's take a look at the keys of the metadata dictionary.
```
air_passengers_metadata.keys()
```
We explained what `features`, `hpt_res` and `best_model` are above. This dictionary also records the `search_method` and `error_method` used (here the default search method and the `mape` error metric we specified). We can see these as follows.
```
print(f"search_method: {air_passengers_metadata['search_method']}")
print(f"error_method: {air_passengers_metadata['error_method']}")
```
The keys of the `hpt_res` dictionary are name of the candidate model families; they should be the same as the keys for the `all_models` and `all_parameters` dictionaries.
```
air_passengers_metadata['hpt_res'].keys()
```
The values of the `hpt_res` dictionary are two-element tuples. The first element gives the hyperparameters that minimize the error metric. The second element gives the corresponding minimum error metric. Let's take a look at these values for ARIMA:
```
air_passengers_metadata['hpt_res']['arima']
```
We can sort the different methods by their error metric as follows:
```
methods = list(air_passengers_metadata['hpt_res'].keys())
sorted(methods, key = lambda m: air_passengers_metadata['hpt_res'][m][1])
```
This suggests that Prophet has the lowest error metric. Let's confirm that this is what `best_model` indicates:
```
air_passengers_metadata['best_model']
```
We constructed the `GetMetaData` object for the `air_passengers` data set with all of the default settings. Let's take a look at the full set of attributes that can be used to initialize `GetMetadata`.
This is the only required attribute:
* **data**: TimeSeriesData, the time series for which we calculate the metadata
The following attributes are all optional:
* **all_models**: `Dict[str, m.Model]`, a dictionary for the candidate model classes. The key is a string naming the model and each value is a corresponding model class (i.e. a class that extends the abstract class `kats.models.Model`).
* **all_params**: `Dict[str, Params]`, a dictionary for the candidate model parameter classes. The keys are the same as the keys for `all_models` and each value is a corresponding parameter class (i.e. a class that extends the class `kats.const.Params`).
* **min_length**: int, the minimal length of time series. We raise a value error if the length of `data` is smaller than `min_length`. The default value of `min_length` is 30.
* **scale**: bool, Whether to rescale the time series by its maximum value; default is true.
* **method**: SearchMethodEnum, Search method for hyper-parameters tuning; default is random search in the default parameter space
* **executor**: Callable, A parallel executor for parallel processing. By default, we use Python's native multiprocessing implementation.
* **error_method**: str, Type of error metric. Options are `'mape'`, `'smape'`, `'mae'`, `'mase'`, `'mse'`, `'rmse'`; default is `'mae'`.
* **num_trials**: int, Number of trials for hyperparameter search; default is 5
* **num_arm**: optional Number of arms in hyperparameter search; default is 4.
For the remaining examples, we use the sample data in `m3_meta_data.csv` to show how to build meta-learning models. This sample data set contains the metadata for 78 time series, meaning that it holds 78 metadata dictionaries like the one we constructed for the `air_passengers` data set. While 78 metadata objects is certainly too few to develop an accurate meta-learning model (you should use more examples for your own meta-learning models to get high accuracy), these examples will help familiarize you with our meta-learning framework.
Loading this data is straightforward. After loading it into a `DataFrame`, we have to do some pre-processing with the `eval` function to ensure that the dictionaries are represented as dictionaries and not as strings. We demonstrate this as follows:
```
# load the metadata into a DataFrame
metadata_df = pd.read_csv("../kats/data/m3_meta_data.csv")
# We need to do a little pre-processing to make sure the dictionaries are represented as dictionaries
# rather than as strings. This function will do that pre-processing.
def change_format(tmp):
tmp['hpt_res']=eval(tmp['hpt_res'])
tmp['hpt_res']['sarima'][0]['seasonal_order'] = eval(tmp['hpt_res']['sarima'][0]['seasonal_order'])
tmp['features']=eval(tmp['features'])
return tmp
metadata_df = metadata_df.apply(change_format, axis=1)
```
Let's preview the metadata `DataFrame` we just loaded.
```
metadata_df.head()
```
Let's convert this metadata `DataFrame` into a list of metadata dictionaries.
```
metadata_list = metadata_df.to_dict(orient='records')
```
## 3. Determining Predictability with `MetaLearnPredictability`
Before using meta-learning models for model selection and hyper-parameter forecasting, we would like to know if our target time series is predictable. The `MetaLearnPredictability` module allows us to treat this like a binary classification problem and build a model for it. We train this model using a list of metadata and a threshold for the error metric. We use the threshold to label each metadata dictionary as predictable if and only if the error of its `best_model` is smaller than the input threshold. The arguments for `MetaLearnPredictability` are as follows:
* **metadata**: A list of dictionaries representing the meta-data of time series (e.g., the meta-data generated by GetMetaData object). Required unless `load_model=True`.
* **threshold**: Float; the threshold for the forecasting error. A time series whose forecasting error of the best forecasting model is higher than the threshold is considered as unpredictable. Default is 0.2.
* **load_model**: Boolean; whether or not to load a trained model. Default is False.
If we want to train a new predictability model from a list of metadata dictionaries, we should include that list in the `metadata` argument. If we want to load a trained model, we set `load_model=True` and ignore the `metadata` argument. We will provide examples of both below.
For our example, we are going to use the sample metadata from the `m3_meta_data.csv` file to train a predictability model with `MetaLearnPredictability`. Then we will use this to predict whether or not the `air_passengers` time series can be forecast (with MAPE at most 0.2).
We initialize model using the `metadata_list` we previously generated from `m3_meta_data.csv` as follows:
```
from kats.models.metalearner.metalearner_predictability import MetaLearnPredictability
# take the time series with MAPE>=0.2 as unpredictable time series and initialize the object
mlp=MetaLearnPredictability(metadata_list, threshold=0.2)
```
When we train the model, we see a dictionary with performance metrics calculated on the test data set.
```
mlp.train()
```
Now we can use this model to predict if the `air_passengers` time series is predictable.
```
mlp.pred(air_passengers_ts)
```
This suggests that this time series can be forecast with MAPE at most 0.2.
Let's save the model we trained to a file.
```
mlp.save_model("mlp.pkl")
```
Now let's re-load our saved model into a new `MetaLearnPredictability` object.
```
#initiate a new object and load the trained model
mlp2 = MetaLearnPredictability(load_model=True)
mlp2.load_model("mlp.pkl")
```
Finally, let's use our newly loaded model to repeat the prediction we did on the `air_passengers` data set.
```
mlp2.pred(air_passengers_ts)
```
## 4. **Model Selection with `MetaLearnModelSelect`**
The `MetaLearnModelSelect` object allows you to build a predictive model to determine the best forecasting model for a time series. It is trained using a list of metadata dictionaries. The arguments for `MetaLearnModelSelect` are as follows:
* **metadata**: A list of dictionaries representing the meta-data of time series (e.g., the meta-data generated by GetMetaData object). Required unless `load_model=True`.
* **load_model**: Boolean; whether or not to load a trained model. Default is False.
If we want to train a new model-selection model from a list of metadata dictionaries, we should include that list in the `metadata` argument. If we want to load a trained model, we set `load_model=True` and ignore the `metadata` argument. We will provide examples of both below.
For our example, we are going to use the sample metadata from the `m3_meta_data.csv` file to train a selection model with `MetaLearnModelSelect`. Then we will use this to predict the best forecasting model for the `air_passengers` time series.
We initialize model using the `metadata_list` we previously generated from `m3_meta_data.csv` as follows:
```
from kats.models.metalearner.metalearner_modelselect import MetaLearnModelSelect
#Initialize the MetaLearnModelSelect object
mlms = MetaLearnModelSelect(metadata_list)
```
Each metadata dictionary includes a `best_model`, and we can take a look at the frequencies of these models using the `count_category` function.
```
mlms.count_category()
```
Before we visualize the data and train the model, it is helpful to do some preprocessing. We can do this with the `preprocess` function.
```
# pre-process the metadata
# don't down-sample it to balance the classes
# standardize the TsFeatures to have zero mean and unit variance
mlms.preprocess(downsample=False, scale=True)
```
We can see how the different `TsFeatures` in our metadata objects are correlated with each other by plotting a heatmap, which can be generated using the `plot_corr_heatmap` function.
```
mlms.plot_corr_heatmap()
```
Now, it is time to train our model. By default, we will be fitting a random forest model, but other model types (including GBDT, SVM, KNN, Naive Bayes) can be supported using the `method` parameter in the `train` function. When we run the `train` function, it outputs a dictionary with the training error and test error for each of the candidate models. All of these error metrics are MAPE because that is the error metric our metadata is using for this example.
```
# train a modelselect model using random forest algorithm
results=mlms.train()
# preview the dictionary
results
```
Let's view this dictionary as a `DataFrame`.
```
results_df=pd.DataFrame([results['fit_error'], results['pred_error']])
results_df['error_type']=['fit_error', 'pred_error']
results_df['error_metric']='MAPE'
results_df
```
Now, let's use our trained model to predict the best model for the `air_passengers` time series.
```
mlms.pred(air_passengers_ts)
```
Let's save the model we trained to a file.
```
mlms.save_model("mlms.pkl")
```
Now let's re-load our saved model into a new `MetaLearnModelSelect` object.
```
mlms2 = MetaLearnModelSelect(load_model=True)
mlms2.load_model("mlms.pkl")
```
Finally, let's use our newly loaded model to repeat the prediction we did on the `air_passengers` data set.
```
mlms2.pred(air_passengers_ts)
```
## 5. **Hyperparameter Tuning with `MetaLearnHPT`**
The `MetaLearnHPT` object allows you to build a model to predict the best hyperparameters for a time series given a designated forecasting model. Specifically, `MetaLearnHPT` builds a neural network model that takes the `TsFeatures` for a time series as inputs and predicts the best hyperparameters for the forecasting model.
Since a metadata dictionary contains both the `TsFeatures` and the best parameters (with keys `features` and `hpt_res`, respectively), we can use a list of metadata dictionaries to build this predictive model.
For our example, we use `metadata_list`, which contains the metadata from the `m3_meta_data.csv` file, to build a model for the Holt-Winters parameters for a time series. We then use this model to predict the best Holt-Winters parameters for the `air_passengers` time series. While this example is using the Holt-Winters model as the designated model, the same process can be used for any forecasting model supported by Kats as long as it is included in our metadata objects.
### 5.1 Initializing `MetaLearnHPT`
To initialize the `MetaLearnHPT` model, we need to input the `TsFeatures` and hyperparameters for the Holt-Winters model as `DataFrame` objects. To extract these from the metadata in `m3_meta_data.csv`, it is easiest to use the `DataFrame` we loaded with this data, `metadata_df`.
First, let's load the `TsFeatures` from `metadata_df` to a new `DataFrame` and preview it.
```
metadata_features_df = pd.DataFrame(metadata_df['features'].tolist())
metadata_features_df.head()
```
Now, let's do the same for the Holt-Winters hyperparameters.
```
metadata_hpt_df = pd.DataFrame(metadata_df['hpt_res'].map(lambda x: x['holtwinters'][0]).tolist())
metadata_hpt_df.head()
```
The arguments for `MetaLearnHPT` are:
* **data_x**: pd.DataFrame; A DataFrame with the TsFeatures. Required unless `load_model=True`.
* **data_y**: pd.DataFrame; A DataFrame with the best hyperparameters. Required unless `load_model=True`.
* **default_model**: string; The name of the forecast model whose default settings will be used. Supported options are 'arima', 'sarima', 'theta', 'prophet', 'holtwinters', 'stlf' and None. Default is None, in which case we instantiate a custom model and use `categorical_idx` and `numerical_idx` to get the names of the hyperparameters.
* **categorical_idx**: A list of strings of the names of the categorical hyper-parameters. Required only when `default_model` is `None` and there are categorical hyper-parameters.
* **numerical_idx**: Optional; A list of strings of the names of the numerical hyper-parameters. Required only when `default_model` is `None` and there are numerical hyper-parameters.
* **load_model**: Boolean; whether or not to load a trained model. Default is False.
We can initialize the `MetaLearnHPT` model using a `default_model` as follows.
```
from kats.models.metalearner.metalearner_hpt import MetaLearnHPT
mlhpt_holtwinters = MetaLearnHPT(
data_x=metadata_features_df,
data_y=metadata_hpt_df,
default_model='holtwinters'
)
```
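Alternatively, we can initialize the same model without a `default_model` by listing the hyperparameter names explicitly. Here `trend`, `damped` and `seasonal` are the categorical hyperparameters and `seasonal_periods` is the numerical one: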
```
mlhpt_holtwinters2=MetaLearnHPT(
data_x=metadata_features_df,
data_y=metadata_hpt_df,
categorical_idx = ["trend","damped","seasonal"],
numerical_idx = ["seasonal_periods"]
)
```
### 5.2 `MetaLearnHPT` with Default Neural Network Model Structure
When using a default model like we did when initializing `mlhpt_holtwinters`, `MetaLearnHPT` builds a neural network with the default neural network model structure. This means we call the `build_network` function with no parameters.
```
mlhpt_holtwinters.build_network()
```
We use the `train` function to train the neural network.
```
mlhpt_holtwinters.train(lr=0.001, batch_size=20)
```
Let's look at the training curves for this model.
```
mlhpt_holtwinters.plot()
```
Now let's use our trained model to predict the best Holt-Winters parameters for the `air_passengers` time series. The `pred` function returns a `DataFrame` and the predicted parameters are in the `parameters` column.
```
pred=mlhpt_holtwinters.pred(air_passengers_ts)
pred['parameters'].iloc[0]
```
Let's save the model we trained to a file.
```
mlhpt_holtwinters.save_model("mlhpt_hw.pkl")
```
Now let's re-load our saved model into a new `MetaLearnHPT` object.
```
mlhpt_holtwinters3=MetaLearnHPT(load_model=True)
mlhpt_holtwinters3.load_model("mlhpt_hw.pkl")
```
Let's use our newly loaded model to repeat the prediction we did on the `air_passengers` data set.
```
pred=mlhpt_holtwinters3.pred(air_passengers_ts)
pred['parameters'].iloc[0]
```
### 5.3 `MetaLearnHPT` with Custom Neural Network Model Structure
When using a custom model like we did when initializing `mlhpt_holtwinters2`, you need to specify the model structure by providing the parameters for the neural network to the `build_network` function.
Here's how we can do that.
```
mlhpt_holtwinters2.build_network(
#One shared one-layer NN with 50 neurons.
n_hidden_shared=[50],
#Each classification task has its own task-specific NN. In this example, "trend" and "damped" each have a two-layer NN,
#and "seasonal" has a one-layer NN.
n_hidden_cat_combo=[[20, 10], [20, 10], [20]],
#One task-specific one-layer NN with 30 neurons for regression task.
n_hidden_num=[30]
)
```
Now let's use the `train` function to train the model. We include some of the extra parameters here to specify how to train the neural network model.
```
#train the customized NN
mlhpt_holtwinters2.train(
#loss_scale is used to balance 2 types of losses: cross-entropy for classification tasks and MSE for regression tasks
loss_scale=30,
#learning rate
lr=0.005,
n_epochs=2000,
batch_size=16,
#supports ADAM and SGD
method='SGD',
#momentum in SGD.
momentum=0,
#early stop option.
n_epochs_stop=50,)
```
Let's look at the training curves for this model.
```
mlhpt_holtwinters2.plot()
```
Let's use our trained model to predict the best parameters for the `air_passengers` time series.
```
pred=mlhpt_holtwinters2.pred(air_passengers_ts)
pred['parameters'].iloc[0]
```
# PageRank
In this notebook, you'll build on your knowledge of eigenvectors and eigenvalues by exploring the PageRank algorithm.
The notebook is in two parts, the first is a worksheet to get you up to speed with how the algorithm works - here we will look at a micro-internet with fewer than 10 websites and see what it does and what can go wrong.
The second is an assessment which will test your application of eigentheory to this problem by writing code and calculating the page rank of a large network representing a sub-section of the internet.
## Part 1 - Worksheet
### Introduction
PageRank (developed by Larry Page and Sergey Brin) revolutionized web search by generating a
ranked list of web pages based on the underlying connectivity of the web. The PageRank algorithm is
based on an ideal random web surfer who, when reaching a page, goes to the next page by clicking on a
link. The surfer has equal probability of clicking any link on the page and, when reaching a page with no
links, has equal probability of moving to any other page by typing in its URL. In addition, the surfer may
occasionally choose to type in a random URL instead of following the links on a page. The PageRank is
the ranked order of the pages from the most to the least probable page the surfer will be viewing.
```
# Before we begin, let's load the libraries.
%pylab notebook
import numpy as np
import numpy.linalg as la
from readonly.PageRankFunctions import *
np.set_printoptions(suppress=True)
```
### PageRank as a linear algebra problem
Let's imagine a micro-internet, with just 6 websites (**A**vocado, **B**ullseye, **C**atBabel, **D**romeda, **e**Tings, and **F**aceSpace).
Each website links to some of the others, and this forms a network as shown,

The design principle of PageRank is that important websites will be linked to by important websites.
This somewhat recursive principle will form the basis of our thinking.
Imagine we have 100 *Procrastinating Pat*s on our micro-internet, each viewing a single website at a time.
Each minute the Pats follow a link on their website to another site on the micro-internet.
After a while, the websites that are most linked to will have more Pats visiting them, and in the long run, each minute for every Pat that leaves a website, another will enter keeping the total numbers of Pats on each website constant.
The PageRank is simply the ranking of websites by how many Pats they have on them at the end of this process.
We represent the number of Pats on each website with the vector,
$$\mathbf{r} = \begin{bmatrix} r_A \\ r_B \\ r_C \\ r_D \\ r_E \\ r_F \end{bmatrix}$$
And say that the number of Pats on each website in minute $i+1$ is related to those at minute $i$ by the matrix transformation
$$ \mathbf{r}^{(i+1)} = L \,\mathbf{r}^{(i)}$$
with the matrix $L$ taking the form,
$$ L = \begin{bmatrix}
L_{A→A} & L_{B→A} & L_{C→A} & L_{D→A} & L_{E→A} & L_{F→A} \\
L_{A→B} & L_{B→B} & L_{C→B} & L_{D→B} & L_{E→B} & L_{F→B} \\
L_{A→C} & L_{B→C} & L_{C→C} & L_{D→C} & L_{E→C} & L_{F→C} \\
L_{A→D} & L_{B→D} & L_{C→D} & L_{D→D} & L_{E→D} & L_{F→D} \\
L_{A→E} & L_{B→E} & L_{C→E} & L_{D→E} & L_{E→E} & L_{F→E} \\
L_{A→F} & L_{B→F} & L_{C→F} & L_{D→F} & L_{E→F} & L_{F→F} \\
\end{bmatrix}
$$
where the columns represent the probability of leaving a website for any other website, and sum to one.
The rows determine how likely you are to enter a website from any other, though these need not add to one.
The long time behaviour of this system is when $ \mathbf{r}^{(i+1)} = \mathbf{r}^{(i)}$, so we'll drop the superscripts here, and that allows us to write,
$$ L \,\mathbf{r} = \mathbf{r}$$
which is an eigenvalue equation for the matrix $L$, with eigenvalue 1 (this is guaranteed by the probabalistic structure of the matrix $L$).
Complete the matrix $L$ below; we've left out the column for which websites the *FaceSpace* website (F) links to.
Remember, this is the probability to click on another website from this one, so each column should add to one (by scaling by the number of links).
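For example, website **A** links to **B**, **C** and **D**, so each of those receives probability $1/3$ and the first column of $L$ reads $(0, 1/3, 1/3, 1/3, 0, 0)^T$.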
```
# Replace the ??? here with the probability of clicking a link to each website when leaving Website F (FaceSpace).
L = np.array([[0, 1/2, 1/3, 0, 0, 0 ],
[1/3, 0, 0, 0, 1/2, 0 ],
[1/3, 1/2, 0, 1, 0, 1/2 ],
[1/3, 0, 1/3, 0, 1/2, 1/2 ],
[0, 0, 0, 0, 0, 0 ],
[0, 0, 1/3, 0, 0, 0 ]])
```
In principle, we could use a linear algebra library, as below, to calculate the eigenvalues and vectors.
And this would work for a small system. But this gets unmanageable for large systems.
And since we only care about the principal eigenvector (the one with the largest eigenvalue, which will be 1 in this case), we can use the *power iteration method* which will scale better, and is faster for large systems.
Use the code below to peek at the PageRank for this micro-internet.
```
eVals, eVecs = la.eig(L) # Gets the eigenvalues and vectors
order = np.absolute(eVals).argsort()[::-1] # Orders them by their eigenvalues
eVals = eVals[order]
eVecs = eVecs[:,order]
r = eVecs[:, 0] # Sets r to be the principal eigenvector
100 * np.real(r / np.sum(r)) # Make this eigenvector sum to one, then multiply by 100 Procrastinating Pats
```
We can see from this list, the number of Procrastinating Pats that we expect to find on each website after long times.
Putting them in order of *popularity* (based on this metric), the PageRank of this micro-internet is:
**C**atBabel, **D**romeda, **A**vocado, **F**aceSpace, **B**ullseye, **e**Tings
Referring back to the micro-internet diagram, is this what you would have expected?
Convince yourself that based on which pages seem important given which others link to them, that this is a sensible ranking.
Let's now try to get the same result using the Power-Iteration method that was covered in the video.
This method will be much better at dealing with large systems.
First let's set up our initial vector, $\mathbf{r}^{(0)}$, so that we have our 100 Procrastinating Pats equally distributed on each of our 6 websites.
```
r = 100 * np.ones(6) / 6 # Sets up this vector (6 entries of 1/6 × 100 each)
r # Shows its value
```
Next, let's update the vector to the next minute, with the matrix $L$.
Run the following cell multiple times, until the answer stabilises.
```
r = L @ r # Apply matrix L to r
r # Shows its value
# Re-run this cell multiple times to converge to the correct answer.
```
We can automate applying this matrix multiple times as follows,
```
r = 100 * np.ones(6) / 6 # Sets up this vector (6 entries of 1/6 × 100 each)
for i in np.arange(100) : # Repeat 100 times
r = L @ r
r
```
Or even better, we can keep running until we get to the required tolerance.
```
r = 100 * np.ones(6) / 6 # Sets up this vector (6 entries of 1/6 × 100 each)
lastR = r
r = L @ r
i = 0
while la.norm(lastR - r) > 0.01 :
lastR = r
r = L @ r
i += 1
print(str(i) + " iterations to convergence.")
r
```
See how the PageRank order is established fairly quickly, and the vector converges on the value we calculated earlier after a few tens of repeats.
Congratulations! You've just calculated your first PageRank!
### Damping Parameter
The system we just studied converged fairly quickly to the correct answer.
Let's consider an extension to our micro-internet where things start to go wrong.
Say a new website is added to the micro-internet: *Geoff's* Website.
This website is linked to by *FaceSpace* and only links to itself.

Intuitively, only *FaceSpace*, which is in the bottom half of the PageRank, links to this website (it is one of the three sites *FaceSpace* links to),
so we might expect *Geoff's* site to have a correspondingly low PageRank score.
Build the new $L$ matrix for the expanded micro-internet, and use Power-Iteration on the Procrastinating Pat vector.
See what happens…
```
# We'll call this one L2, to distinguish it from the previous L.
L2 = np.array([[0, 1/2, 1/3, 0, 0, 0, 0 ],
[1/3, 0, 0, 0, 1/2, 0, 0 ],
[1/3, 1/2, 0, 1, 0, 1/3, 0 ],
[1/3, 0, 1/3, 0, 1/2, 1/3, 0 ],
[0, 0, 0, 0, 0, 1/3, 0 ],
[0, 0, 1/3, 0, 0, 0, 0 ],
[0, 0, 0, 0, 0, 0, 1 ]])
r = 100 * np.ones(7) / 7 # Sets up this vector (7 entries of 1/7 × 100 each)
lastR = r
r = L2 @ r
i = 0
while la.norm(lastR - r) > 0.01 :
lastR = r
r = L2 @ r
i += 1
print(str(i) + " iterations to convergence.")
r
```
That's no good! *Geoff* seems to be taking all the traffic on the micro-internet, and somehow coming at the top of the PageRank.
This behaviour can be understood, because once a Pat gets to *Geoff's* Website, they can't leave, as all links head back to Geoff.
To combat this, we can add a small probability that the Procrastinating Pats don't follow any link on a webpage, but instead visit a website on the micro-internet at random.
We'll say the probability of them following a link is $d$ and the probability of choosing a random website is therefore $1-d$.
We can use a new matrix to work out where the Pat's visit each minute.
$$ M = d \, L + \frac{1-d}{n} \, J $$
where $J$ is an $n\times n$ matrix where every element is one.
If $d$ is one, we have the case we had previously, whereas if $d$ is zero, we will always visit a random webpage and therefore all webpages will be equally likely and equally ranked.
For this extension to work best, $1-d$ should be somewhat small - though we won't go into a discussion about exactly how small.
Let's retry this PageRank with this extension.
```
d = 0.5 # Feel free to play with this parameter after running the code once.
M = d * L2 + (1-d)/7 * np.ones([7, 7]) # np.ones() is the J matrix, with ones for each entry.
r = 100 * np.ones(7) / 7 # Sets up this vector (7 entries of 1/7 × 100 each)
lastR = r
r = M @ r
i = 0
while la.norm(lastR - r) > 0.01 :
lastR = r
r = M @ r
i += 1
print(str(i) + " iterations to convergence.")
r
```
This is certainly better, the PageRank gives sensible numbers for the Procrastinating Pats that end up on each webpage.
This method still predicts Geoff has a high ranking webpage however.
This could be seen as a consequence of using a small network. We could also get around the problem by not counting self-links when producing the L matrix (and if a website has no outgoing links, make it link to all websites equally).
We won't look further down this route, as this is in the realm of improvements to PageRank, rather than eigenproblems.
You are now in a good position, having gained an understanding of PageRank, to produce your own code to calculate the PageRank of a website with thousands of entries.
Good Luck!
## Part 2 - Assessment
In this assessment, you will be asked to produce a function that can calculate the PageRank for an arbitrarily large probability matrix.
This, the final assignment of the course, will give less guidance than previous assessments.
You will be expected to utilise code from earlier in the worksheet and re-purpose it to your needs.
### How to submit
Edit the code in the cell below to complete the assignment.
Once you are finished and happy with it, press the *Submit Assignment* button at the top of this notebook.
Please don't change any of the function names, as these will be checked by the grading script.
If you have further questions about submissions or programming assignments, here is a [list](https://www.coursera.org/learn/linear-algebra-machine-learning/discussions/weeks/1/threads/jB4klkn5EeibtBIQyzFmQg) of Q&A. You can also raise an issue on the discussion forum. Good luck!
```
# PACKAGE
# Here are the imports again, just in case you need them.
# There is no need to edit or submit this cell.
import numpy as np
import numpy.linalg as la
from readonly.PageRankFunctions import *
np.set_printoptions(suppress=True)
# GRADED FUNCTION
# Complete this function to provide the PageRank for an arbitrarily sized internet.
# I.e. the principal eigenvector of the damped system, using the power iteration method.
# (Normalisation doesn't matter here)
# The functions inputs are the linkMatrix, and d the damping parameter - as defined in this worksheet.
# (The damping parameter, d, will be set by the function - no need to set this yourself.)
def pageRank(linkMatrix, d):
    n = linkMatrix.shape[0]
    # Damped matrix: M = d L + (1-d)/n J (adding the scalar broadcasts over every entry).
    M = d * linkMatrix + (1 - d) / n
    # Start with 100 Pats spread evenly over the n websites.
    r = 100 * np.ones(n) / n
    lastR = r
    r = M @ r
    # Power iteration: keep applying M until r stops changing (to within tolerance).
    while la.norm(lastR - r) > 0.01:
        lastR = r
        r = M @ r
    return r
```
## Test your code before submission
To test the code you've written above, run the cell (select the cell above, then press the play button [ ▶| ] or press shift-enter).
You can then use the code below to test out your function.
You don't need to submit this cell; you can edit and run it as much as you like.
```
# Use the following function to generate internets of different sizes.
generate_internet(5)
# Test your PageRank method against the built in "eig" method.
# You should see yours is a lot faster for large internets
L = generate_internet(10)
pageRank(L, 1)
# Do note, this is calculating the eigenvalues of the link matrix, L,
# without any damping. It may give different results than your pageRank function.
# If you wish, you could modify this cell to include damping.
# (There is no credit for this though)
eVals, eVecs = la.eig(L) # Gets the eigenvalues and vectors
order = np.absolute(eVals).argsort()[::-1] # Orders them by their eigenvalues
eVals = eVals[order]
eVecs = eVecs[:,order]
r = eVecs[:, 0]
100 * np.real(r / np.sum(r))
# You may wish to view the PageRank graphically.
# This code will draw a bar chart, for each (numbered) website on the generated internet,
# The height of each bar will be the score in the PageRank.
# Run this code to see the PageRank for each internet you generate.
# Hopefully you should see what you might expect
# - there are a few clusters of important websites, but most on the internet are rubbish!
%pylab notebook
r = pageRank(generate_internet(100), 0.9)
plt.bar(arange(r.shape[0]), r);
```
# Four Qubit Chip Design
Creates a complete quantum chip and exports it
### Preparations
The next cell enables [module automatic reload](https://ipython.readthedocs.io/en/stable/config/extensions/autoreload.html?highlight=autoreload). Your notebook will be able to pick up code updates made to the qiskit-metal (or other) module code.
```
%reload_ext autoreload
%autoreload 2
```
Import key libraries and open the Metal GUI. Also we configure the notebook to enable overwriting of existing components
```
import numpy as np
from collections import OrderedDict
from qiskit_metal import designs, draw
from qiskit_metal import MetalGUI, Dict, Headings
design = designs.DesignPlanar()
gui = MetalGUI(design)
# if you disable the next line, then you will need to delete a component [<component>.delete()] before recreating it
design.overwrite_enabled = True
```
Import components that will be necessary for the design
```
from qiskit_metal.qlibrary.qubits.transmon_pocket_cl import TransmonPocketCL
from qiskit_metal.qlibrary.tlines.meandered import RouteMeander
from qiskit_metal.qlibrary.tlines.anchored_path import RouteAnchors
from qiskit_metal.qlibrary.tlines.pathfinder import RoutePathfinder
from qiskit_metal.qlibrary.terminations.open_to_ground import OpenToGround
from qiskit_metal.qlibrary.terminations.launchpad_wb import LaunchpadWirebond
from qiskit_metal.qlibrary.terminations.launchpad_wb_coupled import LaunchpadWirebondCoupled
```
## Let's design the core of the chip
Set up the design-wide default settings for trace width and trace gap. These can be customized later for individual transmission lines.
```
design.variables['cpw_width'] = '10 um'
design.variables['cpw_gap'] = '6 um'
design._chips['main']['size']['size_x'] = '9mm'
design._chips['main']['size']['size_y'] = '6.5mm'
```
We need 4 transmons with 3 connection pads each and a chargeline. Let's explore the options of one transmon
```
TransmonPocketCL.get_template_options(design)
```
We want to change the `pad_width` for these transmons, as well as define the 3 connection pads and chargeline.
To apply the same modifications to all 4 transmons, we define a single option-dictionary to pass to all transmons at the moment of creation
```
transmon_options = dict(
connection_pads=dict(
a = dict(loc_W=+1, loc_H=-1, pad_width='70um', cpw_extend = '50um'),
b = dict(loc_W=-1, loc_H=-1, pad_width='125um', cpw_extend = '50um'),
c = dict(loc_W=-1, loc_H=+1, pad_width='110um', cpw_extend = '50um')
),
gds_cell_name='FakeJunction_01',
cl_off_center = '-50um',
cl_pocket_edge = '180'
)
```
We can now create the 4 transmons by specifying the desired coordinates and rotations.
```
offset_tm = 69  # we offset the transmon slightly from the center-line
q1 = TransmonPocketCL(design, 'Q1', options = dict(
pos_x='+2420um', pos_y=f'{offset_tm}um', **transmon_options))
q2 = TransmonPocketCL(design, 'Q2', options = dict(
pos_x='0um', pos_y='-857.6um', orientation = '270', **transmon_options))
q3 = TransmonPocketCL(design, 'Q3', options = dict(
pos_x='-2420um', pos_y=f'{offset_tm}um', orientation = '180', **transmon_options))
q4 = TransmonPocketCL(design, 'Q4', options = dict(
pos_x='0um', pos_y='+857.6um', orientation = '90', **transmon_options))
gui.rebuild()
gui.autoscale()
```
Let's now connect the transmons with transmission lines. We want to have an "exact length" transmission line, so we will use the `RouteMeander`. Let's first observe the default options
```
RouteMeander.get_template_options(design)
```
We want to globally override the default lead (straight initial segment leaving the transmon) and the default fillet (corner rounding radius). Let's collect this information in one dictionary
```
fillet='99.99um'
cpw_options = Dict(
lead=Dict(
start_straight='100um',
end_straight='250um'),
fillet=fillet
)
```
We then want each transmission line to be connected to different pins and to have different lengths and asymmetry w.r.t their centerline. Let's collect this information in other dictionaries. Before doing that, to manage the dictionaries in a simpler way, we redefine the `RouteMeander` signature by wrapping it into a convenience method named `connect`
```
def connect(cpw_name: str, pin1_comp_name: str, pin1_comp_pin: str, pin2_comp_name: str, pin2_comp_pin: str,
length: str, asymmetry='0 um'):
"""Connect two pins with a CPW."""
myoptions = Dict(
pin_inputs=Dict(
start_pin=Dict(
component=pin1_comp_name,
pin=pin1_comp_pin),
end_pin=Dict(
component=pin2_comp_name,
pin=pin2_comp_pin)),
total_length=length)
myoptions.update(cpw_options)
myoptions.meander.asymmetry = asymmetry
return RouteMeander(design, cpw_name, myoptions)
```
We can now proceed and define the meanders following the signature: `connect(cpw_name, pin1_comp_name, pin1_comp_pin, pin2_comp_name, pin2_comp_pin, length, asymmetry)`
```
asym = 500
cpw1 = connect('cpw1', 'Q1', 'c', 'Q4', 'b', '9000um', f'-{asym-1.25*offset_tm}um')
cpw2 = connect('cpw2', 'Q3', 'b', 'Q4', 'c', '9000um', f'+{asym-1.25*offset_tm}um')
cpw3 = connect('cpw3', 'Q3', 'c', 'Q2', 'b', '9000um', f'-{asym+0.75*offset_tm}um')
cpw4 = connect('cpw4', 'Q1', 'b', 'Q2', 'c', '9000um', f'+{asym+0.75*offset_tm}um')
gui.rebuild()
gui.autoscale()
```
## Let's now connect the core elements to the launchpads
First we set up the launchpad locations and orientations
```
# V1 - Corners
p1_c = LaunchpadWirebond(design, 'P1_C', options = dict(pos_x='3545um', pos_y='2812um', orientation='270', lead_length='0um'))
p2_c = LaunchpadWirebond(design, 'P2_C', options = dict(pos_x='3545um', pos_y='-2812um', orientation='90', lead_length='0um'))
p3_c = LaunchpadWirebond(design, 'P3_C', options = dict(pos_x='-3545um', pos_y='-2812um', orientation='90', lead_length='0um'))
p4_c = LaunchpadWirebond(design, 'P4_C', options = dict(pos_x='-3545um', pos_y='2812um', orientation='270', lead_length='0um'))
# V2
p1_q = LaunchpadWirebondCoupled(design, 'P1_Q', options = dict(pos_x='4020um', pos_y='0', orientation='180', lead_length='30um'))
p2_q = LaunchpadWirebondCoupled(design, 'P2_Q', options = dict(pos_x='-990um', pos_y='-2812um', orientation='90', lead_length='30um'))
p3_q = LaunchpadWirebondCoupled(design, 'P3_Q', options = dict(pos_x='-4020um', pos_y='0', orientation='0', lead_length='30um'))
p4_q = LaunchpadWirebondCoupled(design, 'P4_Q', options = dict(pos_x='990um', pos_y='2812um', orientation='270', lead_length='30um'))
gui.rebuild()
gui.autoscale()
```
Then we route. First the V2 launchpads - Exchange Coupler Lines to Edges
```
asym = 150
cpw_options = Dict(
lead=Dict(
start_straight='430um',
end_straight='0um'),
fillet=fillet
)
ol1 = connect('ol1', 'Q1', 'a', 'P1_Q', 'tie', '8.6 mm', f'+{asym}um')
ol3 = connect('ol3', 'Q3', 'a', 'P3_Q', 'tie', '8.6 mm', f'+{asym}um')
asym = 200
cpw_options = Dict(
lead=Dict(
start_straight='535um',
end_straight='0um'),
fillet=fillet
)
ol2 = connect('ol2', 'Q2', 'a', 'P2_Q', 'tie', '8.6 mm', f'+{asym}um')
ol4 = connect('ol4', 'Q4', 'a', 'P4_Q', 'tie', '8.6 mm', f'+{asym}um')
gui.rebuild()
gui.autoscale()
```
Finally we route the V1 launchpads - Charge Lines to Corners
We create the charge lines connecting the qubits to the corner launchpads
```
from collections import OrderedDict
jogsA_in = OrderedDict()
jogsA_in[0] = ["L", '200um']
options_line_cl1 = {'pin_inputs':
{'start_pin': {'component': 'Q1', 'pin': 'Charge_Line'},
'end_pin': {'component': 'P1_C', 'pin': 'tie'}},
'lead': {'start_straight': '120um', 'end_straight': '225um','start_jogged_extension': jogsA_in},
'fillet': fillet
}
cl1 = RouteAnchors(design, 'line_cl1', options_line_cl1)
options_line_cl3 = {'pin_inputs':
{'start_pin': {'component': 'Q3', 'pin': 'Charge_Line'},
'end_pin': {'component': 'P3_C', 'pin': 'tie'}},
'lead': {'start_straight': '120um', 'end_straight': '225um', 'start_jogged_extension': jogsA_in},
'fillet': fillet
}
cl3 = RouteAnchors(design, 'line_cl3', options_line_cl3)
gui.rebuild()
gui.autoscale()
jogsB_in = OrderedDict()
jogsB_in[0] = ["L", '300um']
anchors2c = OrderedDict()
anchors2c[0] = np.array([2, -2.5])
options_line_cl2 = {'pin_inputs':
{'start_pin': {'component': 'Q2', 'pin': 'Charge_Line'},
'end_pin': {'component': 'P2_C', 'pin': 'tie'}},
'lead': {'start_straight': '200um', 'end_straight': '225um',
'start_jogged_extension': jogsB_in},
'anchors': anchors2c,
'fillet': fillet
}
cl2 = RouteAnchors(design, 'line_cl2', options_line_cl2)
anchors4c = OrderedDict()
anchors4c[0] = np.array([-2, 2.5])
options_line_cl4 = {'pin_inputs':
{'start_pin': {'component': 'Q4', 'pin': 'Charge_Line'},
'end_pin': {'component': 'P4_C', 'pin': 'tie'}},
'lead': {'start_straight': '200um', 'end_straight': '225um',
'start_jogged_extension': jogsB_in},
'anchors': anchors4c,
'fillet': fillet
}
cl4 = RouteAnchors(design, 'line_cl4', options_line_cl4)
gui.rebuild()
gui.autoscale()
gui.rebuild() # rebuild the design and plot
gui.autoscale() #resize GUI to see QComponent
# Get a list of all the qcomponents in QDesign and then zoom on them.
all_component_names = design.components.keys()
gui.zoom_on_components(all_component_names)
#Save screenshot as a .png formatted file.
gui.screenshot()
# Screenshot the canvas only as a .png formatted file.
gui.figure.savefig('shot.png')
from IPython.display import Image, display
_disp_ops = dict(width=500)
display(Image('shot.png', **_disp_ops))
# Closing the Qiskit Metal GUI
gui.main_window.close()
```
# Assignment 3
## Implementation: EM and Gaussian mixtures
```
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal as mv_normal
import matplotlib.mlab as mlab
from scipy.stats import chi2
from matplotlib.patches import Ellipse
```
We start off by loading the training data:
```
train_data = np.loadtxt('data/EMGaussian.train')
test_data = np.loadtxt('data/EMGaussian.test')
```
We will define a helper function that will help us compute the Gaussian pdf. This method will be used to plot the contours as well.
```
def mv_gauss(X, Y, mu, cov):
sigma_x = np.sqrt(cov[0,0])
sigma_y = np.sqrt(cov[1,1])
sigma_xy = np.sqrt(cov[0,1])
mu_x = mu[0]
mu_y = mu[1]
return mlab.bivariate_normal(X, Y, sigma_x, sigma_y, mu_x, mu_y, sigma_xy)
# Credit to:
# http://www.nhsilbert.net/source/2014/06/bivariate-normal-ellipse-plotting-in-python/
def plot_cov_ellipse(cov, pos, volume=.5, ax=None, fc='none', ec=[0,0,0], a=1, lw=1):
"""
Plots an ellipse enclosing *volume* based on the specified covariance
matrix (*cov*) and location (*pos*). Additional keyword arguments are passed on to the
ellipse patch artist.
Parameters
----------
cov : The 2x2 covariance matrix to base the ellipse on
pos : The location of the center of the ellipse. Expects a 2-element
sequence of [x0, y0].
volume : The volume inside the ellipse; defaults to 0.5
ax : The axis that the ellipse will be plotted on. Defaults to the
current axis.
"""
def eigsorted(cov):
vals, vecs = np.linalg.eigh(cov)
order = vals.argsort()[::-1]
return vals[order], vecs[:,order]
if ax is None:
ax = plt.gca()
vals, vecs = eigsorted(cov)
theta = np.degrees(np.arctan2(*vecs[:,0][::-1]))
kwrg = {'facecolor':fc, 'edgecolor':ec, 'alpha':a, 'linewidth':lw}
# Width and height are "full" widths, not radius
width, height = 2 * np.sqrt(chi2.ppf(volume,2)) * np.sqrt(vals)
ellip = Ellipse(xy=pos, width=width, height=height, angle=theta, **kwrg)
ax.add_artist(ellip)
```
### Implementation of the K-means algorithm
```
class K_means:
def __init__(self, k=4, n_dims=2):
self.k = k
self.n_dims = n_dims
def train(self, train_data):
# Initialize the cluster means
self.means = np.random.rand(self.k, self.n_dims) * np.max(train_data, axis=0)
n_iter = 0
# Matrix where each row is a z_n assignment vector associated with a data point
old_Z = np.zeros(shape=(train_data.shape[0], self.k))
self.Z = np.zeros(shape=(train_data.shape[0], self.k))
while(not self._converged(old_Z, n_iter)):
old_Z = np.array(self.Z)
self.Z = np.zeros(shape=(train_data.shape[0], self.k))
# First phase, we evaluate the value of the latent cluster assignment variables
for i, train_point in enumerate(train_data):
distances = np.linalg.norm(self.means - train_point, axis=1)**2
self.Z[i][np.argmin(distances)] = 1
# Second phase, the values of the cluster means are computed
self.means = self.Z.T.dot(train_data) / np.sum(self.Z.T, axis=1).reshape(self.k, 1)
n_iter += 1
def assign_cluster(self, data):
# Will hold the cluster that each data point belongs to
clusters = np.zeros(data.shape[0], dtype=int)
for i, x in enumerate(data):
distances = np.linalg.norm(self.means - x, axis=1)**2
clusters[i] = np.argmin(distances)
return clusters
# Helper function that checks the convergence of the K-means algorithm
def _converged(self, old_Z, n_iter):
if n_iter == 0:
return False
elif np.array_equal(old_Z, self.Z):
return True
else:
return False
kmeans = K_means()
kmeans.train(train_data)
means1 = kmeans.means
clusters1 = kmeans.assign_cluster(train_data)
kmeans.train(train_data)
means2 = kmeans.means
clusters2 = kmeans.assign_cluster(train_data)
kmeans.train(train_data)
means3 = kmeans.means
clusters3 = kmeans.assign_cluster(train_data)
```
#### Graphical representation of the data
```
plt.scatter(train_data[:,0], train_data[:,1], marker='x', c=clusters1, alpha=0.4)
plt.scatter(means1[:,0], means1[:,1], marker='v', color='red', alpha=0.8)
plt.title('K-means')
plt.show()
```
### EM algorithm for a Gaussian mixture with covariance matrix proportional to identity matrix
```
class EM_GMM_isotropic:
def __init__(self, k=4, n_dims=2):
self.k = k
self.n_dims = n_dims
def train(self, train_data, means, clusters, MAX_ITER = 100):
# We start off by initializing our gaussian mixture parameters with the parameters given to us
self.means = means
self.sigmas2 = np.ones(self.k)
# posterior probabilities or the weights N x K matrix
self.taus = np.zeros(shape=(train_data.shape[0], self.k))
self.pi = np.bincount(clusters) / clusters.shape[0]
n_iter = 0
while(n_iter < MAX_ITER):
# E step
            for i in range(self.k):
cov = self.sigmas2[i] * np.eye(self.n_dims)
self.taus[:, i] = self.pi[i] * mv_normal.pdf(train_data, self.means[i], cov, allow_singular=True)
# normalize the taus to get posterior probabilities
self.taus = (self.taus.T / np.sum(self.taus, axis=1)).T
# M step
# Compute the new means and covariance matrices
            for i in range(self.k):
# We compute the divisor in a variable because we need it in every other computation later on
tau_sum = np.sum(self.taus[:, i])
# First the mean for cluster i
self.means[i] = np.sum(self.taus[:, i].reshape(self.taus.shape[0], 1) * train_data, axis=0)
self.means[i] /= tau_sum
# Now we compute the new sigmas^2
accum = 0
                for n in range(train_data.shape[0]):
distance = train_data[n] - self.means[i]
accum += self.taus[n,i] * np.linalg.norm(distance)**2
self.sigmas2[i] = accum/( 2* tau_sum)
self.pi[i] = tau_sum / train_data.shape[0]
n_iter += 1
def assign_cluster(self, data):
taus = np.zeros(shape=(data.shape[0], self.k))
        for i in range(self.k):
cov = self.sigmas2[i] * np.eye(2)
taus[:, i] = self.pi[i] * mv_normal.pdf(data, self.means[i], cov, True)
clusters = np.zeros(data.shape[0], dtype=int)
for i, x in enumerate(data):
clusters[i] = np.argmax(taus[i, :])
return clusters
def normalized_log_likelihood(self, data):
like = np.zeros(shape=(data.shape[0], self.k))
        for i in range(self.k):
cov = self.sigmas2[i] * np.eye(2)
like[:, i] = self.pi[i] * mv_normal.pdf(data, self.means[i], cov, True)
loglike = np.log(np.sum(like, axis=1))
loglike = np.sum(loglike) / data.shape[0]
return loglike
```
#### Graphical representation of the data
```
kmeans = K_means(k=4)
kmeans.train(train_data)
means = kmeans.means
clusters = kmeans.assign_cluster(train_data)
gmm = EM_GMM_isotropic(k=4)
gmm.train(train_data, means, clusters, MAX_ITER=500)
```
We plot the training data and test data together with colors to represent their estimated class
```
gmm_clusters_train = gmm.assign_cluster(train_data)
gmm_cluster_test = gmm.assign_cluster(test_data)
plt.scatter(train_data[:,0], train_data[:,1], marker='x', c=gmm_clusters_train, alpha=0.4)
plt.scatter(test_data[:,0], test_data[:,1], marker='x', c=gmm_cluster_test, alpha=0.4)
plt.scatter(gmm.means[:,0], gmm.means[:,1], marker='v', color='red', alpha=0.8)
delta = 0.5
x = np.arange(-10.0, 10, delta)
y = np.arange(-10.0, 10, delta)
X, Y = np.meshgrid(x, y)
for (mu, sigma) in zip(gmm.means, gmm.sigmas2):
cov = sigma * np.eye(2)
plot_cov_ellipse(cov, mu, volume=0.9, a=0.9, lw=1)
plt.title('EM for GMM with Isotropic Gaussians Training Data + Test Data')
plt.show()
```
We see that the ellipses containing 90% of the mass are circles. (The axes are on different scales, which is why they appear oval.) This is because we assumed that the Gaussians in the mixtures were **isotropic**.
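We can verify this directly from the fitted parameters (a quick check of our own): each covariance matrix is $\sigma_i^2 I$, so the per-cluster variances fully describe the four circles.
```
# Each fitted covariance is sigmas2[i] * identity; the variances say it all.
print(gmm.sigmas2)
```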
```
test_loglik = gmm.normalized_log_likelihood(test_data)
train_loglik = gmm.normalized_log_likelihood(train_data)
print('The test log likelihood: ' + str(test_loglik))
print('The training data log likelihood: ' + str(train_loglik))
```
### EM algorithm for a Gaussian mixture with general covariance matrix
```
class EM_GMM:
def __init__(self, k=4, n_dims=2):
self.k = k
self.n_dims = n_dims
def train(self, train_data, means, clusters, MAX_ITER = 100):
# We start off by initializing our gaussian mixture parameters with the parameters given to us
self.means = means
self.covs = [np.eye(self.n_dims)] * self.k
# compute the sample covariance of each cluster
        for i in range(self.k):
self.covs[i] = np.cov(train_data[np.where(clusters==i)[0],:], rowvar=False)
# posterior probabilities or the weights N x K matrix
self.taus = np.zeros(shape=(train_data.shape[0], self.k))
self.pi = np.bincount(clusters) / clusters.shape[0]
n_iter = 0
while(n_iter < MAX_ITER):
# E step
            for i in range(self.k):
self.taus[:, i] = self.pi[i] * mv_normal.pdf(train_data, self.means[i], self.covs[i], True)
# normalize the taus to get posterior probabilities
self.taus = (self.taus.T / np.sum(self.taus, axis=1)).T
# M step
# Compute the new means and covariance matrices
            for i in range(self.k):
tau_sum = np.sum(self.taus[:, i])
# First the mean for cluster i
self.means[i] = (np.sum(self.taus[:, i].reshape(self.taus.shape[0], 1) * train_data, axis=0) / tau_sum)
distance = train_data - self.means[i]
self.covs[i] = (distance.T.dot(self.taus[:, i].reshape(self.taus.shape[0], 1) * distance) / tau_sum)
self.pi[i] = tau_sum / train_data.shape[0]
n_iter += 1
def assign_cluster(self, data):
taus = np.zeros(shape=(data.shape[0], self.k))
        for i in range(self.k):
taus[:, i] = self.pi[i] * mv_normal.pdf(data, self.means[i], self.covs[i], True)
clusters = np.zeros(data.shape[0], dtype=int)
for i, x in enumerate(data):
clusters[i] = np.argmax(taus[i, :])
return clusters
def normalized_log_likelihood(self, data):
like = np.zeros(shape=(data.shape[0], self.k))
        for i in range(self.k):
like[:, i] = self.pi[i] * mv_normal.pdf(data, self.means[i], self.covs[i], True)
loglike = np.log(np.sum(like, axis=1))
loglike = np.sum(loglike) / data.shape[0]
return loglike
```
#### Graphical representation of the data
```
gmm = EM_GMM(k=4)
gmm.train(train_data, means, clusters, MAX_ITER=2000)
```
We plot the training data and test data together with colors to represent their estimated class
```
gmm_clusters_train = gmm.assign_cluster(train_data)
gmm_cluster_test = gmm.assign_cluster(test_data)
plt.scatter(train_data[:,0], train_data[:,1], marker='x', c=gmm_clusters_train, alpha=0.4)
plt.scatter(test_data[:,0], test_data[:,1], marker='x', c=gmm_cluster_test, alpha=0.4)
delta = 0.5
x = np.arange(-10.0, 10, delta)
y = np.arange(-10.0, 10, delta)
X, Y = np.meshgrid(x, y)
for (mu, cov) in zip(gmm.means, gmm.covs):
plot_cov_ellipse(cov, mu, volume=0.8, a=0.9, lw=1)
plt.title('EM for GMM Training Data + Test Data')
plt.show()
```
We notice in this case that our model fits the data much better. This is because we removed the constraint that the Gaussians be isotropic and instead assume a more general form for the covariance matrices.
```
test_loglik = gmm.normalized_log_likelihood(test_data)
train_loglik = gmm.normalized_log_likelihood(train_data)
print('The test log likelihood: ' + str(test_loglik))
print('The training data log likelihood: ' + str(train_loglik))
```
For EM with isotropic gaussians we get the following log likelihoods:
`
The test log likelihood: -5.38819545252
The training data log likelihood: -5.29104864112
`
For EM with Gaussians with general covariance matrices we get:
`
The test log likelihood: -4.81795630691
The training data log likelihood: -4.65543134984
`
We have that the log-likelihood is higher in the latter case. This is to be expected because a mixture with general covariance matrices will fit our data better as we can see on the scatter plots.
```
%matplotlib inline
```
Cross Compilation and RPC
=========================
**Author**: `Ziheng Jiang <https://github.com/ZihengJiang/>`_, `Lianmin Zheng <https://github.com/merrymercy/>`_
This tutorial introduces cross compilation and remote device
execution with RPC in TVM.
With cross compilation and RPC, you can **compile program on your
local machine then run it on the remote device**. It is useful when
the resource of remote devices is limited, like Raspberry Pi and mobile
platforms. In this tutorial, we will take Raspberry Pi for CPU example
and Firefly-RK3399 for opencl example.
Build TVM Runtime on Device
---------------------------
The first step is to build tvm runtime on the remote device.
<div class="alert alert-info"><h4>Note</h4><p>All instructions in both this section and next section should be
executed on the target device, e.g. Raspberry Pi. And we assume it
has Linux running.</p></div>
Since we do compilation on the local machine, the remote device is only used
for running the generated code. We only need to build tvm runtime on
the remote device.
.. code-block:: bash
git clone --recursive https://github.com/dmlc/tvm
cd tvm
make runtime -j2
After building runtime successfully, we need to set environment variables
in :code:`~/.bashrc` file. We can edit :code:`~/.bashrc`
using :code:`vi ~/.bashrc` and add the line below (Assuming your TVM
directory is in :code:`~/tvm`):
.. code-block:: bash
export PYTHONPATH=$PYTHONPATH:~/tvm/python
To update the environment variables, execute :code:`source ~/.bashrc`.
Set Up RPC Server on Device
---------------------------
To start an RPC server, run the following command on your remote device
(Which is Raspberry Pi in this example).
.. code-block:: bash
python -m tvm.exec.rpc_server --host 0.0.0.0 --port=9090
If you see the line below, it means the RPC server started
successfully on your device.
.. code-block:: bash
INFO:root:RPCServer: bind to 0.0.0.0:9090
Declare and Cross Compile Kernel on Local Machine
-------------------------------------------------
<div class="alert alert-info"><h4>Note</h4><p>Now we go back to the local machine, which has a full TVM installed
(with LLVM).</p></div>
Here we will declare a simple kernel on the local machine:
```
import numpy as np
import tvm
from tvm import rpc
from tvm.contrib import util
n = tvm.convert(1024)
A = tvm.placeholder((n,), name='A')
B = tvm.compute((n,), lambda i: A[i] + 1.0, name='B')
s = tvm.create_schedule(B.op)
```
Then we cross compile the kernel.
The target should be 'llvm -target=armv7l-linux-gnueabihf' for
Raspberry Pi 3B, but we use 'llvm' here to make this tutorial runnable
on our webpage building server. See the detailed note in the following block.
```
local_demo = True
if local_demo:
target = 'llvm'
else:
target = 'llvm -target=armv7l-linux-gnueabihf'
func = tvm.build(s, [A, B], target=target, name='add_one')
# save the lib at a local temp folder
temp = util.tempdir()
path = temp.relpath('lib.tar')
func.export_library(path)
```
<div class="alert alert-info"><h4>Note</h4><p>To run this tutorial with a real remote device, change :code:`local_demo`
to False and replace :code:`target` in :code:`build` with the true
target triple of your device. The target triple might be
different for different devices. For example, it is
:code:`'llvm -target=armv7l-linux-gnueabihf'` for Raspberry Pi 3B and
:code:`'llvm -target=aarch64-linux-gnu'` for RK3399.
Usually, you can query the target by executing :code:`gcc -v` on your
device, and looking for the line starting with :code:`Target:`
(though it may still be a loose configuration).
Besides :code:`-target`, you can also set other compilation options
like:
* -mcpu=<cpuname>
Specify a specific chip in the current architecture to generate code for. By default this is inferred from the target triple and autodetected to the current architecture.
* -mattr=a1,+a2,-a3,...
Override or control specific attributes of the target, such as whether SIMD operations are enabled or not. The default set of attributes is set by the current CPU.
To get the list of available attributes, you can do:
.. code-block:: bash
llc -mtriple=<your device target triple> -mattr=help
These options are consistent with `llc <http://llvm.org/docs/CommandGuide/llc.html>`_.
It is recommended to set target triple and feature set to contain specific
feature available, so we can take full advantage of the features of the
board.
You can find more details about cross compilation attributes from
`LLVM guide of cross compilation <https://clang.llvm.org/docs/CrossCompilation.html>`_.</p></div>
Run CPU Kernel Remotely by RPC
------------------------------
We show how to run the generated cpu kernel on the remote device.
First we obtain an RPC session from remote device.
```
if local_demo:
remote = rpc.LocalSession()
else:
# The following is my environment, change this to the IP address of your target device
host = '10.77.1.162'
port = 9090
remote = rpc.connect(host, port)
```
Upload the lib to the remote device, then invoke a device local
compiler to relink them. Now `func` is a remote module object.
```
remote.upload(path)
func = remote.load_module('lib.tar')
# create arrays on the remote device
ctx = remote.cpu()
a = tvm.nd.array(np.random.uniform(size=1024).astype(A.dtype), ctx)
b = tvm.nd.array(np.zeros(1024, dtype=A.dtype), ctx)
# the function will run on the remote device
func(a, b)
np.testing.assert_equal(b.asnumpy(), a.asnumpy() + 1)
```
When you want to evaluate the performance of the kernel on the remote
device, it is important to avoid the overhead of network.
:code:`time_evaluator` will return a remote function that runs the
function :code:`number` times, measures the cost per run on the remote
device and returns the measured cost. Network overhead is excluded.
```
time_f = func.time_evaluator(func.entry_name, ctx, number=10)
cost = time_f(a, b).mean
print('%g secs/op' % cost)
```
Run OpenCL Kernel Remotely by RPC
---------------------------------
As for remote OpenCL devices, the workflow is almost the same as above.
You can define the kernel, upload files, and run by RPC.
<div class="alert alert-info"><h4>Note</h4><p>Raspberry Pi does not support OpenCL, the following code is tested on
Firefly-RK3399. You may follow this `tutorial <https://gist.github.com/mli/585aed2cec0b5178b1a510f9f236afa2>`_
to setup the OS and OpenCL driver for RK3399.
Also we need to build the runtime with OpenCL enabled on rk3399 board. In the tvm
root directory, execute</p></div>
.. code-block:: bash
cp cmake/config.cmake .
sed -i "s/USE_OPENCL OFF/USE_OPENCL ON/" config.cmake
make runtime -j4
The following function shows how we run OpenCL kernel remotely
```
def run_opencl():
# NOTE: This is the setting for my rk3399 board. You need to modify
# them according to your environment.
target_host = "llvm -target=aarch64-linux-gnu"
opencl_device_host = '10.77.1.145'
opencl_device_port = 9090
    # create schedule for the above "add one" compute declaration
s = tvm.create_schedule(B.op)
xo, xi = s[B].split(B.op.axis[0], factor=32)
s[B].bind(xo, tvm.thread_axis("blockIdx.x"))
s[B].bind(xi, tvm.thread_axis("threadIdx.x"))
func = tvm.build(s, [A, B], "opencl", target_host=target_host)
remote = rpc.connect(opencl_device_host, opencl_device_port)
# export and upload
path = temp.relpath('lib_cl.tar')
func.export_library(path)
remote.upload(path)
func = remote.load_module('lib_cl.tar')
# run
ctx = remote.cl()
a = tvm.nd.array(np.random.uniform(size=1024).astype(A.dtype), ctx)
b = tvm.nd.array(np.zeros(1024, dtype=A.dtype), ctx)
func(a, b)
np.testing.assert_equal(b.asnumpy(), a.asnumpy() + 1)
print("OpenCP test passed!")
```
Summary
-------
This tutorial provides a walk through of cross compilation and RPC
features in TVM.
- Set up RPC server on the remote device.
- Set up target device configuration to cross compile kernel on the
local machine.
- Upload and run the kernel remotely by RPC API.
# Pyspark
Using pyspark from a Jupyter notebook is quite straightforward when using a local spark instance. This can be installed trivially using conda, i.e.,
```
conda install pyspark
```
Once this is done, a local spark instance can be launched easily from within the notebook.
```
from pyspark import SparkContext
sc = SparkContext('local', 'test')
```
## Example: counting characters
As an example, we read a file that contains a DNA sequence (unrealistically long). We first check some properties of the file, and show the first few lines. We want to count the number of nucleotides, i.e., the total number of occurrences of `A`, `C`, `G`, and `T`.
```
!wc Data/large_dna.txt
!head -3 Data/large_dna.txt
```
Read data from a text file; the resulting data is stored in an RDD.
```
data = sc.textFile('Data/large_dna.txt')
```
The RDD has as many elements as the data file has lines. The order of the elements is the same as that of the lines in the file.
```
data.count()
data.take(3)
```
Define a function that computes the number of nucleotides in a string, returning the result as a tuple. Note that this function is not the optimal implementation, but it is straightforward.
```
def count_nucl(seq):
return tuple(seq.count(nucl) for nucl in 'ACGT')
```
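As an aside, a single-pass variant (a sketch of our own, not needed for the rest of the example) avoids scanning the string once per nucleotide:
```
from collections import Counter

def count_nucl_fast(seq):
    # Counter scans the string once; missing nucleotides default to 0.
    counts = Counter(seq)
    return tuple(counts[nucl] for nucl in 'ACGT')
```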
This function can be applied to each element in the RDD independently; in Spark terminology, it is a transformation. Note that the transformation is lazy: it will only be computed when the result values are required.
```
counts = data.map(count_nucl)
```
Next, we define a function that computes the sum of the elements of two tuples, and returns a new tuple.
```
def sum_nucl(t1, t2):
return tuple(x + y for x, y in zip(t1, t2))
total_count = counts.reduce(sum_nucl)
total_count
```
### Alternative approach
An alternative approach is to construct an RDD with key/value pairs.
```
data = sc.textFile('Data/large_dna.txt')
```
First, we create a list of nucleotides for each element in the RDD.
```
nucleotides = data.map(list)
```
For each nucleotide in each element of the RDD, we create a key/value pair: the key is the nucleotide, the value is 1. Using the `flatMap` method ensures that the end result is an RDD with key/value pairs as a flat structure.
```
nucl_counts = nucleotides.flatMap(lambda x: ((n, 1) for n in x))
nucl_counts.take(5)
```
The `countByKey` method will count all RDD elements that have the same key.
```
for key, value in nucl_counts.countByKey().items():
print(f'{key}: {value}')
```
## Example: counting signs
```
import numpy as np
```
RDDs can also be constructed starting from iterables such as numpy arrays.
```
data = sc.parallelize(np.random.uniform(-1.0, 1.0, (1000,)))
```
We want to count the number of positive and negative values, and compute the sum of all positive and negative numbers in the RDD. The first step is to transform the RDD into key/value pairs where the key is `'pos'` for numbers that are strictly positive, `'neg'` otherwise. The corresponding values are the original numbers.
```
signs = data.map(lambda x: ('pos', x) if x > 0 else ('neg', x))
signs.take(5)
```
As in the previous example, counting can be done by key.
```
counts = signs.countByKey()
for key, value in counts.items():
print(f'{key}: {value}')
```
To compute the sums, we can perform a reduction by key, using a lambda function to compute the pairwise sum.
```
sums = signs.reduceByKey(lambda x, y: x + y)
sums.take(2)
for key, value in sums.collect():
print(f'{key}: {value}')
```
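Counting and summing can also be done in a single pass with `aggregateByKey`. Here is a sketch (our own addition) that produces a `(count, sum)` pair per key:
```
# zero value, per-element merge, and per-partition merge for (count, sum)
count_sum = signs.aggregateByKey(
    (0, 0.0),
    lambda acc, x: (acc[0] + 1, acc[1] + x),
    lambda a, b: (a[0] + b[0], a[1] + b[1])
)
for key, (count, total) in count_sum.collect():
    print(f'{key}: count={count}, sum={total}')
```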
# Using starry process as a prior
Most of the tutorials here focus on doing inference on the statistical properties of star spots from large ensemble analyses. But what if we know (or think we know) the properties of the spots of a given star? Then we can use the GP to constrain the actual surface map of the body. This tutorial shows how to compute the mean and covariance of the GP in both spherical harmonic space and pixel space; these can be used as informative priors when mapping individual stars.
```
try:
from IPython import get_ipython
get_ipython().run_line_magic("run", "notebook_config.py")
except:
import warnings
warnings.warn("Can't execute `notebook_config.py`.")
from IPython.display import display, Markdown
from starry_process.defaults import defaults
```
## Setup
```
from starry_process import StarryProcess
import numpy as np
import matplotlib.pyplot as plt
from tqdm.auto import tqdm
import theano
import theano.tensor as tt
```
Let's instantiate a `StarryProcess` with all parameters set to their default values.
```
sp = StarryProcess()
```
## Prior in spherical harmonic space
Computing the GP prior in spherical harmonic space is easy. The GP mean is given by
```
mean = sp.mean_ylm.eval()
mean.shape
```
where its length is just the number of spherical harmonic coefficients at the default maximum degree of the expansion,
$$
N = (l_\mathrm{max} + 1)^2 = (15 + 1)^2 = 256
$$
We can plot this as a function of coefficient index:
```
plt.plot(mean)
plt.ylim(-0.02, 0.045)
plt.xlabel("flattened spherical harmonic index")
plt.ylabel("GP mean")
plt.show()
```
This very regular pattern corresponds to the 2-band structure of the process: a band of spots at $\pm 30^\circ$ latitude. We'll see in the next section what this actually looks like in pixel space.
The GP covariance may be computed from
```
cov = sp.cov_ylm.eval()
cov.shape
```
It's a matrix, which we can also visualize. We'll limit the plot to the first 8 spherical harmonic degrees (81 coefficients) since it's a pretty big matrix:
```
fig, ax = plt.subplots(1, 2)
im = ax[0].imshow(cov[:81, :81])
plt.colorbar(im, ax=ax[0])
ax[0].set_title("covariance")
im = ax[1].imshow(np.log10(np.abs(cov[:81, :81])), vmin=-15)
plt.colorbar(im, ax=ax[1])
ax[1].set_title("$\log_{10}|\mathrm{covariance}|$")
plt.show()
```
The structure certainly isn't trivial: it encodes everything about the size, location, contrast, and number of spots.
Now that we have the GP mean vector ``mean`` and the GP covariance matrix ``cov``, we effectively have a prior for doing inference. This is useful when mapping stellar surfaces with the ``starry`` code, which accepts a spherical harmonic mean vector and covariance matrix as a prior (see [here](https://luger.dev/starry/v1.0.0/notebooks/EclipsingBinary_Linear.html#Linear-solve)).
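For instance, we can draw sample vectors of spherical harmonic coefficients directly from this prior (a quick illustration of our own, using plain numpy):
```
# Draw 5 Ylm coefficient vectors from the multivariate normal prior
samples = np.random.multivariate_normal(mean, cov, size=5)
samples.shape  # (5, 256): five draws of 256 coefficients each
```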
## Prior in pixel space
For some applications (particularly those not using ``starry``), it may be useful to compute the prior in pixel space. This is helpful if one is attempting to map the stellar surface directly in the pixel basis (i.e., the model is computed on a gridded stellar surface, and the model parameters are the actual pixel intensities). Since there is a linear relationship between spherical harmonic coefficients and pixels, it is very easy to convert between the two.
To visualize the GP mean in pixel space, let's create a grid of latitude-longitude points in degrees:
```
lat = np.linspace(-90, 90, 50)
lon = np.linspace(-180, 180, 100)
```
Let's turn this into a vector of ``(lat, lon)`` tuples...
```
latlon = np.transpose(np.meshgrid(lat, lon))
```
and feed it into ``sp.mean_pix`` to compute the process mean:
```
mean = sp.mean_pix(latlon).eval()
mean.shape
```
The mean computed by ``StarryProcess`` is flattened, so we can unravel it back into the dimensions of our grid to visualize it:
```
plt.imshow(mean.reshape(50, 100), origin="lower", extent=(-180, 180, -90, 90))
plt.colorbar()
plt.xlabel("longitude [degrees]")
plt.ylabel("latitude [degrees]")
plt.show()
```
The prior mean corresponds to dark bands at mid-latitudes. Even though ``StarryProcess`` models circular spots, it is a longitudinally isotropic process, so there's no preferred longitude at which to place the spots. The prior mean is therefore just a spot that's been "smeared out" longitudinally. All of the information about how spots emerge from this pattern is encoded in the covariance matrix (see below).
You can experiment with passing different values for the spot latitude parameters when instantiating the ``StarryProcess`` to see how that affects the mean.
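For example (a sketch of our own; `mu` and `sigma` are assumed here to be the latitude mode and spread in degrees, so check the `StarryProcess` docstring for the exact keywords), a spot band closer to the equator looks like this:
```
# Hypothetical: move the spot band to ~10 degrees latitude and recompute the mean
sp_eq = StarryProcess(mu=10.0, sigma=5.0)
mean_eq = sp_eq.mean_pix(latlon).eval()
plt.imshow(mean_eq.reshape(50, 100), origin="lower", extent=(-180, 180, -90, 90))
plt.colorbar()
plt.show()
```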
The covariance may be computed from
```
cov = sp.cov_pix(latlon).eval()
cov.shape
```
Again, this is flattened. Let's attempt to visualize it (again restricting to the first few hundred coefficients):
```
plt.imshow(cov[:500, :500])
plt.colorbar()
plt.show()
```
That looks pretty wonky! In general, it's much harder to visualize covariances in pixel space, since it's inherently 4-d! We can settle instead for visualizing the *variance*, which is 2d, and tells us how much scatter there is at every point on the grid when we sample from the prior:
```
plt.imshow(np.diag(cov).reshape(50, 100))
plt.colorbar()
plt.show()
```
We see the same banded structure as before, but now we have *positive* values in the bands and values close to zero outside of the bands. This is exactly what we'd expect: the variance is high within the bands (that's where all the spots live, and where we expect the samples to differ from each other) and zero outside (where the surface should be close to the unspotted mean level).
<a href="https://colab.research.google.com/github/ahmedhisham73/deep_learningtuts/blob/master/DataAugmentation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!wget --no-check-certificate \
https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip \
-O /tmp/cats_and_dogs_filtered.zip
```
We will start testing on the cats vs dogs dataset
```
import os
import zipfile
import tensorflow as tf
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.preprocessing.image import ImageDataGenerator
```
Creating training and validation image data generators with subdirectories
```
local_zip = '/tmp/cats_and_dogs_filtered.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp')
zip_ref.close()
base_dir = '/tmp/cats_and_dogs_filtered'
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
# Directory with our training cat pictures
train_cats_dir = os.path.join(train_dir, 'cats')
# Directory with our training dog pictures
train_dogs_dir = os.path.join(train_dir, 'dogs')
# Directory with our validation cat pictures
validation_cats_dir = os.path.join(validation_dir, 'cats')
# Directory with our validation dog pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs')
```
Creating the deep neural network
```
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(150, 150, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
```
Creating the optimizer and compiling the model
```
model.compile(loss='binary_crossentropy',
optimizer=RMSprop(lr=1e-4),
metrics=['acc'])
```
Setting up the data generators and training the model
```
# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
# Flow training images in batches of 20 using train_datagen generator
train_generator = train_datagen.flow_from_directory(
train_dir, # This is the source directory for training images
target_size=(150, 150), # All images will be resized to 150x150
batch_size=20,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
# Flow validation images in batches of 20 using test_datagen generator
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
history = model.fit_generator(
train_generator,
steps_per_epoch=100, # 2000 images = batch_size * steps
epochs=100,
validation_data=validation_generator,
validation_steps=50, # 1000 images = batch_size * steps
verbose=2)
```
Plotting training/validation accuracy and loss
```
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training_accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation_accuracy')
plt.title('Training and validation accuracy')
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training_Loss')
plt.plot(epochs, val_loss, 'b', label='Validation_Loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
```
When does overfitting occur?
Overfitting refers to a model that models the training data too well.
Overfitting happens when a model learns the detail and noise in the training data to the extent that it negatively impacts the performance of the model on new data. This means that the noise or random fluctuations in the training data are picked up and learned as concepts by the model. The problem is that these concepts do not apply to new data and negatively impact the model's ability to generalize.
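The cells below repeat the experiment with image augmentation enabled in the training generator. As a quick sanity check first (our own addition, not part of the original exercise), we can preview a few augmented variants of a single training image:
```
# Preview four augmented variants of one training image
import os
import matplotlib.pyplot as plt
from tensorflow.keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array

aug = ImageDataGenerator(rotation_range=40, width_shift_range=0.2,
                         height_shift_range=0.2, shear_range=0.2,
                         zoom_range=0.2, horizontal_flip=True, fill_mode='nearest')
sample_path = os.path.join(train_cats_dir, os.listdir(train_cats_dir)[0])
img = img_to_array(load_img(sample_path, target_size=(150, 150)))
img = img.reshape((1,) + img.shape)
for i, batch in enumerate(aug.flow(img, batch_size=1)):
    plt.subplot(1, 4, i + 1)
    plt.imshow(batch[0].astype('uint8'))
    plt.axis('off')
    if i == 3:
        break
plt.show()
```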
```
!wget --no-check-certificate \
https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip \
-O /tmp/cats_and_dogs_filtered.zip
import os
import zipfile
import tensorflow as tf
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.preprocessing.image import ImageDataGenerator
local_zip = '/tmp/cats_and_dogs_filtered.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp')
zip_ref.close()
base_dir = '/tmp/cats_and_dogs_filtered'
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
# Directory with our training cat pictures
train_cats_dir = os.path.join(train_dir, 'cats')
# Directory with our training dog pictures
train_dogs_dir = os.path.join(train_dir, 'dogs')
# Directory with our validation cat pictures
validation_cats_dir = os.path.join(validation_dir, 'cats')
# Directory with our validation dog pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs')
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(150, 150, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy',
optimizer=RMSprop(lr=1e-4),
metrics=['acc'])
# This code has changed. Now instead of the ImageGenerator just rescaling
# the image, we also rotate and do other operations
# Updated to do image augmentation
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
test_datagen = ImageDataGenerator(rescale=1./255)
# Flow training images in batches of 20 using train_datagen generator
train_generator = train_datagen.flow_from_directory(
train_dir, # This is the source directory for training images
target_size=(150, 150), # All images will be resized to 150x150
batch_size=20,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
# Flow validation images in batches of 20 using test_datagen generator
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
history = model.fit_generator(
train_generator,
steps_per_epoch=100, # 2000 images = batch_size * steps
epochs=100,
validation_data=validation_generator,
validation_steps=50, # 1000 images = batch_size * steps
verbose=2)
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training_accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation_accuracy')
plt.title('Training and validation accuracy')
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training_Loss')
plt.plot(epochs, val_loss, 'b', label='Validation_Loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
```
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
# ONNX Runtime: Tutorial for Nuphar execution provider
**Accelerating model inference via compiler, using Docker Images for ONNX Runtime with Nuphar**
This example shows how to accelerate model inference using Nuphar, an execution provider that leverages just-in-time compilation to generate optimized executables.
For more background about Nuphar, please check [Nuphar-ExecutionProvider.md](https://github.com/microsoft/onnxruntime/blob/master/docs/execution_providers/Nuphar-ExecutionProvider.md) and its [build instructions](https://www.onnxruntime.ai/docs/how-to/build.html#nuphar).
#### Tutorial Roadmap:
1. Prerequisites
2. Create and run inference on a simple ONNX model, and understand how ***compilation*** works in Nuphar.
3. Create and run inference on a model using ***LSTM***, run symbolic shape inference, convert LSTM ops to Scan, and check Nuphar speedup.
4. ***Quantize*** the LSTM model and check speedup in Nuphar (CPU with AVX2 support is required).
5. Working on real models from onnx model zoo: ***BERT squad***, ***GPT-2*** and ***Bidirectional Attention Flow ([BiDAF](https://arxiv.org/pdf/1611.01603))***.
6. ***Ahead-Of-Time (AOT) compilation*** to save just-in-time compilation cost on model load.
7. Performance tuning for single thread inference.
## 1. Prerequisites
Please make sure you have installed the following Python packages. Besides, a C++ compiler/linker is required for ahead-of-time compilation. Please make sure you have g++ if running on Linux, or Visual Studio 2017 on Windows.
For simplicity, you may use [Nuphar docker image](https://github.com/microsoft/onnxruntime/blob/master/dockerfiles/README.md) from Microsoft Container Registry.
```
import cpufeature
import hashlib
import numpy as np
import onnx
from onnx import helper, numpy_helper
import os
from timeit import default_timer as timer
import shutil
import subprocess
import sys
import tarfile
import urllib.request
def is_windows():
return sys.platform.startswith('win')
if is_windows():
    assert shutil.which('cl.exe'), 'Please make sure MSVC compiler and linker are in PATH.'
else:
assert shutil.which('g++'), 'Please make sure g++ is installed.'
def print_speedup(name, delta_baseline, delta):
print("{} speed-up {:.2f}%".format(name, 100*(delta_baseline/delta - 1)))
print(" Baseline: {:.3f} s, Current: {:.3f} s".format(delta_baseline, delta))
def create_cache_dir(cache_dir):
# remove any stale cache files
if os.path.exists(cache_dir):
shutil.rmtree(cache_dir)
os.makedirs(cache_dir, exist_ok=True)
def md5(file_name):
hash_md5 = hashlib.md5()
with open(file_name, "rb") as f:
for chunk in iter(lambda: f.read(4096), b""):
hash_md5.update(chunk)
return hash_md5.hexdigest()
```
And Nuphar package in onnxruntime is required too. Please make sure you are using Nuphar enabled build.
```
import onnxruntime
from onnxruntime.nuphar.model_editor import convert_to_scan_model
from onnxruntime.nuphar.model_quantizer import convert_matmul_model
from onnxruntime.nuphar.rnn_benchmark import generate_model, perf_test
from onnxruntime.tools.symbolic_shape_infer import SymbolicShapeInference
```
## 2. Create and run inference on a simple ONNX model
Let's start with a simple model: Y = ((X + X) * X + X) * X + X
```
model = onnx.ModelProto()
opset = model.opset_import.add()
opset.domain = 'onnx'
opset.version = 7 # ONNX opset 7 is required for LSTM op later
model.ir_version = onnx.IR_VERSION
graph = model.graph
X = 'input'
Y = 'output'
# declare graph input/output with shape [seq, batch, 1024]
dim = 1024
model.graph.input.add().CopyFrom(helper.make_tensor_value_info(X, onnx.TensorProto.FLOAT, ['seq', 'batch', dim]))
model.graph.output.add().CopyFrom(helper.make_tensor_value_info(Y, onnx.TensorProto.FLOAT, ['seq', 'batch', dim]))
# create nodes: Y = ((X + X) * X + X) * X + X
num_nodes = 5
for i in range(num_nodes):
n = helper.make_node('Mul' if i % 2 else 'Add',
[X, X if i == 0 else 'out_'+str(i-1)],
['out_'+str(i) if i < num_nodes - 1 else Y],
'node'+str(i))
model.graph.node.add().CopyFrom(n)
# save the model
simple_model_name = 'simple.onnx'
onnx.save(model, simple_model_name)
```
We will use the Nuphar execution provider to run inference for the model that we created above, and use the settings string to check the generated code.
Because of the redirection of output, we dump the lowered code from a subprocess to a log file:
```
code_to_run = '''
import onnxruntime
s = 'codegen_dump_lower:verbose'
providers = [('NupharExecutionProvider', {'nuphar_settings': s}), 'CPUExecutionProvider']
sess = onnxruntime.InferenceSession('simple.onnx', providers=providers)
'''
log_file = 'simple_lower.log'
with open(log_file, "w") as f:
subprocess.run([sys.executable, '-c', code_to_run], stdout=f, stderr=f)
```
The lowered log is similar to C source code, but the whole file is too lengthy to show here. Let's just check the last few lines, which are the most important:
```
with open(log_file) as f:
log_lines = f.readlines()
log_lines[-10:]
```
The compiled code showed that the nodes of Add/Mul were fused into a single function, and vectorization was applied in the loop. The fusion was automatically done by the compiler in the Nuphar execution provider, and did not require any manual model editing.
Next, let's run inference on the model and compare the accuracy and performance with numpy:
```
seq = 128
batch = 16
input_data = np.random.rand(seq, batch, dim).astype(np.float32)
sess = onnxruntime.InferenceSession(simple_model_name)
simple_feed = {X:input_data}
simple_output = sess.run([], simple_feed)
np_output = ((((input_data + input_data) * input_data) + input_data) * input_data) + input_data
assert np.allclose(simple_output[0], np_output)
simple_repeats = 100
start_ort = timer()
for i in range(simple_repeats):
sess.run([], simple_feed)
end_ort = timer()
start_np = timer()
for i in range(simple_repeats):
np_output = ((((input_data + input_data) * input_data) + input_data) * input_data) + input_data
end_np = timer()
print_speedup('Fusion', end_np - start_np, end_ort - start_ort)
```
## 3. Create and run inference on a model using LSTM
Now, let's take one step further to work on a 4-layer LSTM model, created from onnxruntime.nuphar.rnn_benchmark module.
```
lstm_model = 'LSTMx4.onnx'
input_dim = 256
hidden_dim = 1024
generate_model('lstm', input_dim, hidden_dim, bidirectional=False, layers=4, model_name=lstm_model)
```
**IMPORTANT**: Nuphar generates code before knowing shapes of input data, unlike other execution providers that do runtime shape inference. Thus, shape inference information is critical for compiler optimizations in Nuphar. To do that, we run symbolic shape inference on the model. Symbolic shape inference is based on the ONNX shape inference, and enhanced by sympy to better handle Shape/ConstantOfShape/etc. ops using symbolic computation.
**IMPORTANT**: When running multi-threaded inference, Nuphar currently uses TVM's parallel schedule, which has its own thread pool that's compatible with OpenMP and MKLML. The TVM thread pool has not been integrated with the ONNX Runtime thread pool, so intra_op_num_threads won't control it. Please make sure the build is with OpenMP or MKLML, and use OMP_NUM_THREADS to control the thread pool.
```
onnx.save(SymbolicShapeInference.infer_shapes(onnx.load(lstm_model)), lstm_model)
```
Now, let's check baseline performance on the generated model, using CPU execution provider.
```
sess_baseline = onnxruntime.InferenceSession(lstm_model, providers=['CPUExecutionProvider'])
seq = 128
input_data = np.random.rand(seq, 1, input_dim).astype(np.float32)
lstm_feed = {sess_baseline.get_inputs()[0].name:input_data}
lstm_output = sess_baseline.run([], lstm_feed)
```
To run RNN models in the Nuphar execution provider efficiently, LSTM/GRU/RNN ops need to be converted to Scan ops. This is because Scan is more flexible and supports quantized RNNs.
```
lstm_scan_model = 'Scan_LSTMx4.onnx'
convert_to_scan_model(lstm_model, lstm_scan_model)
```
After conversion, let's compare performance and accuracy with baseline:
```
sess_nuphar = onnxruntime.InferenceSession(lstm_scan_model)
output_nuphar = sess_nuphar.run([], lstm_feed)
assert np.allclose(lstm_output[0], output_nuphar[0])
lstm_repeats = 10
start_lstm_baseline = timer()
for i in range(lstm_repeats):
sess_baseline.run([], lstm_feed)
end_lstm_baseline = timer()
start_nuphar = timer()
for i in range(lstm_repeats):
sess_nuphar.run([], lstm_feed)
end_nuphar = timer()
print_speedup('Nuphar Scan', end_lstm_baseline - start_lstm_baseline, end_nuphar - start_nuphar)
```
## 4. Quantize the LSTM model
Let's get more speed-ups from Nuphar by quantizing the floating point GEMM/GEMV in LSTM model to int8 GEMM/GEMV.
**NOTE:** For inference speed of quantized models, a CPU with AVX2 instructions is preferred.
```
cpufeature.CPUFeature['AVX2'] or 'No AVX2, quantization model might be slow'
```
We can use onnxruntime.nuphar.model_quantizer to quantize floating point GEMM/GEMVs. Assuming a GEMM/GEMV takes the form of input * weights, the weights are statically quantized per-column, and the inputs are dynamically quantized per-row.
```
lstm_quantized_model = 'Scan_LSTMx4_int8.onnx'
convert_matmul_model(lstm_scan_model, lstm_quantized_model)
```
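To build some intuition for what this sets up, here is a small numpy sketch (our own illustration, not the actual Nuphar implementation) of symmetric per-row int8 quantization and its round-trip error:
```
# Symmetric per-row int8 quantization: one scale per row of the input matrix
def quantize_rows(x, bits=8):
    scale = np.max(np.abs(x), axis=1, keepdims=True) / (2**(bits - 1) - 1)
    q = np.round(x / scale).astype(np.int8)
    return q, scale

x = np.random.randn(4, 16).astype(np.float32)
q, scale = quantize_rows(x)
x_dequant = q.astype(np.float32) * scale
print(np.max(np.abs(x - x_dequant)))  # small round-trip error
```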
Now run the quantized model, and check accuracy. Please note that quantization may cause accuracy loss, so we relax the comparison threshold a bit.
```
sess_quantized = onnxruntime.InferenceSession(lstm_quantized_model)
output_quantized = sess_quantized.run([], lstm_feed)
assert np.allclose(lstm_output[0], output_quantized[0], rtol=1e-3, atol=1e-3)
```
Now check quantized model performance:
```
start_quantized = timer()
for i in range(lstm_repeats):
sess_quantized.run([], lstm_feed)
end_quantized = timer()
print_speedup('Quantization', end_nuphar - start_nuphar, end_quantized - start_quantized)
```
To check RNN quantization performance, please use rnn_benchmark.perf_test.
```
rnn_type = 'lstm' # could be 'lstm', 'gru' or 'rnn'
num_threads = cpufeature.CPUFeature['num_physical_cores'] # no hyper thread
input_dim = 80 # size of input dimension
hidden_dim = 512 # size of hidden dimension in cell
bidirectional = True # specify RNN being bidirectional
layers = 6 # number of stacked RNN layers
seq_len = 40 # length of sequence
batch_size = 1 # size of batch
original_ms, scan_ms, int8_ms = perf_test(rnn_type, num_threads, input_dim, hidden_dim, bidirectional, layers, seq_len, batch_size)
print_speedup('Nuphar Quantization speed up', original_ms / 1000, int8_ms / 1000)
```
## 5. Working on real models
### 5.1 BERT Squad
BERT (Bidirectional Encoder Representations from Transformers) applies Transformers to language modelling. With Nuphar, we may fuse and compile the model to accelerate inference on CPU.
#### Download model and test data
```
# download BERT squad model
cwd = os.getcwd()
bert_model_url = 'https://github.com/onnx/models/raw/master/text/machine_comprehension/bert-squad/model/bertsquad-10.tar.gz'
bert_model_local = os.path.join(cwd, 'bertsquad-10.tar.gz')
if not os.path.exists(bert_model_local):
urllib.request.urlretrieve(bert_model_url, bert_model_local)
with tarfile.open(bert_model_local, 'r') as f:
f.extractall(cwd)
```
#### Run symbolic shape inference
Note that this model has computations like `min(100000, seq_len)`, which could be simplified to `seq_len` if we know `seq_len` is not going to be too big. We can do this by setting int_max. Besides, auto_merge is used to make sure that all nodes in the entire model get their shapes inferred, by merging symbolic dims when broadcasting.
```
bert_model_dir = os.path.join(cwd, 'bertsquad-10')
bert_model = os.path.join(bert_model_dir, 'bertsquad10.onnx')
bert_model_with_shape_inference = os.path.join(bert_model_dir, 'bertsquad10_shaped.onnx')
# run symbolic shape inference
onnx.save(SymbolicShapeInference.infer_shapes(onnx.load(bert_model), auto_merge=True, int_max=100000), bert_model_with_shape_inference)
```
#### Run inference on original model, using CPU execution provider, with maximum optimization
```
sess_options = onnxruntime.SessionOptions()
sess_options.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_ENABLE_ALL
sess_baseline = onnxruntime.InferenceSession(bert_model, sess_options=sess_options, providers=['CPUExecutionProvider'])
# load test data
test_data_dir = os.path.join(bert_model_dir, 'test_data_set_1')
tps = [onnx.load_tensor(os.path.join(test_data_dir, 'input_{}.pb'.format(i))) for i in range(len(sess_baseline.get_inputs()))]
bert_feed = {tp.name:numpy_helper.to_array(tp) for tp in tps}
bert_output_baseline = sess_baseline.run([], bert_feed)
bert_repeats = 20
start_bert_baseline = timer()
for i in range(bert_repeats):
sess_baseline.run([], bert_feed)
end_bert_baseline = timer()
```
#### Run inference on the model with symbolic shape inference, using Nuphar execution provider
First let's check accuracy:
```
sess = onnxruntime.InferenceSession(bert_model_with_shape_inference)
output = sess.run([], bert_feed)
assert all([np.allclose(o, ob, atol=1e-4) for o, ob in zip(output, bert_output_baseline)])
```
Then check speed:
```
start_nuphar = timer()
for i in range(bert_repeats):
sess.run([], bert_feed)
end_nuphar = timer()
print_speedup('Nuphar BERT squad', end_bert_baseline - start_bert_baseline, end_nuphar - start_nuphar)
```
### 5.2 GPT-2 with fixed batch size
GPT-2 is a language model using Generative Pre-Trained Transformer for text generation. With Nuphar, we may fuse and compile the model to accelerate inference on CPU.
#### Download model and test data
```
# download GPT-2 model
cwd = os.getcwd()
gpt2_model_url = 'https://github.com/onnx/models/raw/master/text/machine_comprehension/gpt-2/model/gpt2-10.tar.gz'
gpt2_model_local = os.path.join(cwd, 'gpt2-10.tar.gz')
if not os.path.exists(gpt2_model_local):
urllib.request.urlretrieve(gpt2_model_url, gpt2_model_local)
with tarfile.open(gpt2_model_local, 'r') as f:
f.extractall(cwd)
```
#### Change batch dimension to fixed value, and run symbolic shape inference
The GPT-2 model from the model zoo has a symbolic batch dimension. By replacing it with a fixed value, the compiler can generate better code.
```
gpt2_model_dir = os.path.join(cwd, 'GPT2')
gpt2_model = os.path.join(gpt2_model_dir, 'model.onnx')
# edit batch dimension from symbolic to int value for better codegen
mp = onnx.load(gpt2_model)
mp.graph.input[0].type.tensor_type.shape.dim[0].dim_value = 1
onnx.save(mp, gpt2_model)
gpt2_model_with_shape_inference = os.path.join(gpt2_model_dir, 'model_shaped.onnx')
# run symbolic shape inference
onnx.save(SymbolicShapeInference.infer_shapes(onnx.load(gpt2_model), auto_merge=True), gpt2_model_with_shape_inference)
```
#### Run inference and compare accuracy/performance to CPU provider
```
sess_options = onnxruntime.SessionOptions()
sess_options.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_ENABLE_ALL
sess_baseline = onnxruntime.InferenceSession(gpt2_model, sess_options=sess_options, providers=['CPUExecutionProvider'])
# load test data
input_name = [i.name for i in sess_baseline.get_inputs()][0] # This model only has one input
test_data_dir = os.path.join(gpt2_model_dir, 'test_data_set_0')
tp = onnx.load_tensor(os.path.join(test_data_dir, 'input_0.pb'))
gpt2_feed = {input_name:numpy_helper.to_array(tp)}
gpt2_output_baseline = sess_baseline.run([], gpt2_feed)
gpt2_repeats = 100
start_gpt2_baseline = timer()
for i in range(gpt2_repeats):
sess_baseline.run([], gpt2_feed)
end_gpt2_baseline = timer()
sess = onnxruntime.InferenceSession(gpt2_model_with_shape_inference)
output = sess.run([], gpt2_feed)
assert all([np.allclose(o, ob, atol=1e-4) for o, ob in zip(output, gpt2_output_baseline)])
start_nuphar = timer()
for i in range(gpt2_repeats):
output = sess.run([], gpt2_feed)
end_nuphar = timer()
print_speedup('Nuphar GPT-2', end_gpt2_baseline - start_gpt2_baseline, end_nuphar - start_nuphar)
```
### 5.3 BiDAF with quantization
BiDAF is a machine comprehension model that uses LSTMs. The inputs to this model are paragraphs of contexts and queries, and the outputs are start/end indices of words in the contexts that answers the queries.
First let's download the model:
```
# download BiDAF model
cwd = os.getcwd()
bidaf_url = 'https://github.com/onnx/models/raw/master/text/machine_comprehension/bidirectional_attention_flow/model/bidaf-9.tar.gz'
bidaf_local = os.path.join(cwd, 'bidaf-9.tar.gz')
if not os.path.exists(bidaf_local):
urllib.request.urlretrieve(bidaf_url, bidaf_local)
with tarfile.open(bidaf_local, 'r') as f:
f.extractall(cwd)
```
Now let's check the performance of the CPU provider:
```
bidaf_dir = os.path.join(cwd, 'bidaf')
bidaf = os.path.join(bidaf_dir, 'bidaf.onnx')
sess_baseline = onnxruntime.InferenceSession(bidaf, providers=['CPUExecutionProvider'])
# load test data
test_data_dir = os.path.join(cwd, 'bidaf', 'test_data_set_3')
tps = [onnx.load_tensor(os.path.join(test_data_dir, 'input_{}.pb'.format(i))) for i in range(len(sess_baseline.get_inputs()))]
bidaf_feed = {tp.name:numpy_helper.to_array(tp) for tp in tps}
bidaf_output_baseline = sess_baseline.run([], bidaf_feed)
```
The context in this test data:
```
' '.join(list(bidaf_feed['context_word'].reshape(-1)))
```
The query:
```
' '.join(list(bidaf_feed['query_word'].reshape(-1)))
```
And the answer:
```
' '.join(list(bidaf_feed['context_word'][bidaf_output_baseline[0][0]:bidaf_output_baseline[1][0]+1].reshape(-1)))
```
Now put all steps together:
```
# editing
bidaf_converted = 'bidaf_mod.onnx'
onnx.save(SymbolicShapeInference.infer_shapes(onnx.load(bidaf)), bidaf_converted)
convert_to_scan_model(bidaf_converted, bidaf_converted)
# When quantizing, there's an only_for_scan option to quantize only the GEMV inside Scan ops.
# This is useful when the input dims of the LSTMs are much bigger than the hidden dims.
# BiDAF has several LSTMs with input dims of 800/1400/etc., while the hidden dim is 100.
# So unlike the LSTMx4 model above, we use only_for_scan here
convert_matmul_model(bidaf_converted, bidaf_converted, only_for_scan=True)
# inference and verify accuracy
sess = onnxruntime.InferenceSession(bidaf_converted)
output = sess.run([], bidaf_feed)
assert all([np.allclose(o, ob) for o, ob in zip(output, bidaf_output_baseline)])
```
Check performance after all these steps:
```
bidaf_repeats = 100
start_bidaf_baseline = timer()
for i in range(bidaf_repeats):
sess_baseline.run([], bidaf_feed)
end_bidaf_baseline = timer()
start_nuphar = timer()
for i in range(bidaf_repeats):
sess.run([], bidaf_feed)
end_nuphar = timer()
print_speedup('Nuphar quantized BiDAF', end_bidaf_baseline - start_bidaf_baseline, end_nuphar - start_nuphar)
```
The benefit of quantization in BiDAF is not as great as in the LSTM sample above, because BiDAF has relatively small hidden dimensions, which limits the gain from optimization inside Scan ops. However, this model still benefits from fusion/vectorization/etc.
## 6. Ahead-Of-Time (AOT) compilation
Nuphar runs just-in-time (JIT) compilation when loading models. The compilation may lead to a slow cold start. We can use the create_shared script to build a dll from the JIT code and accelerate model loading.
```
start_jit = timer()
sess = onnxruntime.InferenceSession(bidaf_converted)
end_jit = timer()
'JIT took {:.3f} seconds'.format(end_jit - start_jit)
# use settings to enable JIT cache
bidaf_cache_dir = os.path.join(bidaf_dir, 'cache')
create_cache_dir(bidaf_cache_dir)
settings = 'nuphar_cache_path:{}'.format(bidaf_cache_dir)
providers = [('NupharExecutionProvider', {'nuphar_settings': settings}), 'CPUExecutionProvider']
sess = onnxruntime.InferenceSession(bidaf_converted, providers=providers)
```
Now the object files of the JIT code are stored in the cache directory; let's link them into a dll:
```
bidaf_cache_versioned_dir = os.path.join(bidaf_cache_dir, os.listdir(bidaf_cache_dir)[0])
# use onnxruntime.nuphar.create_shared module to create dll
subprocess.run([sys.executable, '-m', 'onnxruntime.nuphar.create_shared', '--input_dir', bidaf_cache_versioned_dir], check=True)
os.listdir(bidaf_cache_versioned_dir)
```
Check the model loading speed-up with AOT dll:
```
start_aot = timer()
settings = 'nuphar_cache_path:{}'.format(bidaf_cache_dir)
providers = [('NupharExecutionProvider', {'nuphar_settings': settings}), 'CPUExecutionProvider']
sess = onnxruntime.InferenceSession(bidaf_converted, providers=providers)
end_aot = timer()
print_speedup('AOT', end_jit - start_jit, end_aot - start_aot)
```
Moreover, Nuphar AOT also supports:
* Generating the JIT cache with AVX/AVX2/AVX-512 and building an AOT dll that includes support for all of these CPUs, which makes deployment easier when targeting different CPUs in one package.
* Baking the model checksum into the AOT dll to validate the model against a given AOT dll.
```
# create object files for different CPUs
cache_dir = os.path.join(os.getcwd(), 'lstm_cache')
model_name = lstm_quantized_model
model_checksum = md5(model_name)
repeats = lstm_repeats
feed = lstm_feed
time_baseline = end_lstm_baseline - start_lstm_baseline
multi_isa_so = 'avx_avx2_avx512.so'
create_cache_dir(cache_dir)
settings = 'nuphar_cache_path:{}'.format(cache_dir)
for isa in ['avx512', 'avx2', 'avx']:
settings_with_isa = settings + ', nuphar_codegen_target:' + isa
providers = [('NupharExecutionProvider', {'nuphar_settings': settings_with_isa}), 'CPUExecutionProvider']
sess = onnxruntime.InferenceSession(model_name, providers=providers)
cache_versioned_dir = os.path.join(cache_dir, os.listdir(cache_dir)[0])
# link object files to AOT dll
subprocess.run([sys.executable, '-m', 'onnxruntime.nuphar.create_shared', '--input_dir', cache_versioned_dir, '--input_model', model_name, '--output_name', multi_isa_so], check=True)
# now load the model with AOT dll
# NOTE: when nuphar_codegen_target is not set, it defaults to current CPU ISA
settings = 'nuphar_cache_path:{}, nuphar_cache_so_name:{}, nuphar_cache_model_checksum:{}, nuphar_cache_force_no_jit:on'.format(cache_dir, multi_isa_so, model_checksum)
providers = [('NupharExecutionProvider', {'nuphar_settings': settings}), 'CPUExecutionProvider']
sess = onnxruntime.InferenceSession(model_name, providers=providers)
# force to a different ISA which is a subset of current CPU
# NOTE: if an incompatible ISA is used, exception on invalid instructions would be thrown
for valid_isa in ['avx2', 'avx']:
settings_with_isa = 'nuphar_cache_path:{}, nuphar_cache_so_name:{}, nuphar_cache_model_checksum:{}, nuphar_codegen_target:{}, nuphar_cache_force_no_jit:on'.format(cache_dir, multi_isa_so, model_checksum, valid_isa)
providers = [('NupharExecutionProvider', {'nuphar_settings': settings_with_isa}), 'CPUExecutionProvider']
sess = onnxruntime.InferenceSession(model_name, providers=providers)
start_nuphar = timer()
for i in range(repeats):
sess.run([], feed)
end_nuphar = timer()
print_speedup('{} in {}'.format(model_name, valid_isa), time_baseline, end_nuphar - start_nuphar)
```
## 7. Performance tuning for single-thread inference
By default, Nuphar enables a parallel schedule for lower inference latency with multiple threads, when built with MKLML or OpenMP. For some models, you may want to run single-thread inference for better throughput with multiple concurrent inference threads, and turning off the parallel schedule may make single-thread inference a bit faster.
```
# set OMP_NUM_THREADS to 1 for single thread inference
# this would make the sessions below run in a single thread
os.environ['OMP_NUM_THREADS'] = '1'
sess = onnxruntime.InferenceSession(bidaf_converted)
start_baseline = timer()
for i in range(bidaf_repeats):
    sess.run([], bidaf_feed)
end_baseline = timer()
# use NUPHAR_PARALLEL_MIN_WORKLOADS=0 to turn off parallel schedule, using settings string
# it can be set from environment variable too: os.environ['NUPHAR_PARALLEL_MIN_WORKLOADS'] = '0'
settings = 'nuphar_parallel_min_workloads:0'
providers = [('NupharExecutionProvider', {'nuphar_settings': settings}), 'CPUExecutionProvider']
sess = onnxruntime.InferenceSession(bidaf_converted, providers=providers)
start = timer()
for i in range(bidaf_repeats):
    sess.run([], bidaf_feed)
end = timer()
print_speedup('Single thread perf w/o parallel schedule', end_baseline - start_baseline, end - start)
del os.environ['OMP_NUM_THREADS']
```
# 19. Gradient Boosting Regression
[](https://colab.research.google.com/github/rhennig/EMA6938/blob/main/Notebooks/19.GradientBoostingRegression.ipynb)
In this notebook, we will use a gradient boosted trees model for regression of $({\bf X}, {\bf y})$ data to obtain a function $f({\bf x})$ that best models the labels $y$.
A gradient boosted trees model sequentially adds decision trees, each one trained to predict the residuals of the current ensemble.
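To make the residual-fitting idea concrete, here is a minimal hand-rolled sketch (the data and settings below are made up for illustration):
```
# Hand-rolled boosting sketch: each new tree fits the residuals of the
# current ensemble, and its shrunken prediction is added to the model.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X_demo = rng.uniform(0, 2, (200, 1))
y_demo = np.sin(3 * X_demo[:, 0]) + rng.normal(0, 0.1, 200)

learning_rate = 0.3
prediction = np.full_like(y_demo, y_demo.mean())  # start from a constant model
for _ in range(20):
    residuals = y_demo - prediction
    tree = DecisionTreeRegressor(max_depth=2).fit(X_demo, residuals)
    prediction += learning_rate * tree.predict(X_demo)

print('training MSE:', np.mean((y_demo - prediction) ** 2))
```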
To illustrate the behavior of gradient boosting for regression, we will fit a simple one-dimensional function to the same data set that we previously used for linear regression, decision tree regression, and random forest regression.
```
# Import the numpy, panda, sklearn, and matplotlib libraries
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import cross_validate
from sklearn.model_selection import GridSearchCV
from sklearn.tree import plot_tree
plt.rc('xtick', labelsize=18)
plt.rc('ytick', labelsize=18)
```
### Create a one-dimensional dataset for regression
```
# Generate a data set for machine learning
np.random.seed(seed=5)
x=np.linspace(0, 2, 300)
x=x+np.random.normal(0,.3,x.shape)
y=np.cos(x)+2*np.sin(x)+3*np.cos(x*2)+np.random.normal(0,1,x.shape)
# Split the dataset into 80% for training and 20% for testing
x = x.reshape((x.size,1))
X_train,X_test,y_train,y_test = train_test_split(x, y, train_size=0.8, shuffle=True)
# Plot the training and testing dataset
fig,ax=plt.subplots(figsize=(8,8))
ax.scatter(X_train, y_train, color='blue', label='Training')
ax.scatter(X_test, y_test, color='orange', label='Testing')
ax.set_xlabel('X Values',fontsize=20)
ax.set_ylabel('cos(x)+2sin(x)+3cos(2x)',fontsize=20)
ax.set_title('Training and testing data',fontsize=25)
plt.legend(fontsize=20)
plt.show()
```
### Train the Gradient Boosting Regression Model
```
# Fitting Gradient Boosting Regression to the dataset
regressor = GradientBoostingRegressor()
regressor.fit(X_train, y_train)
# Regressor score is the coefficient of determination of the prediction
print('Training score =', np.round(regressor.score(X_train,y_train),3))
print('Testing score =', np.round(regressor.score(X_test,y_test),3))
y_train_pred = regressor.predict(X_train)
training_mse = mean_squared_error(y_train, y_train_pred)
y_test_pred = regressor.predict(X_test)
testing_mse = mean_squared_error(y_test, y_test_pred)
print('Training RMSE = ', np.round(np.sqrt(training_mse),3))
print('Testing RMSE = ', np.round(np.sqrt(testing_mse),3))
```
The training and testing scores are closer to each other than for the previously trained decision tree and random forest models. This indicates that gradient boosted trees are less prone to overfitting.
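Since `cross_validate` was imported above, we can corroborate this with a quick cross-validated check (a sketch; the fold count is arbitrary):
```
# Cross-validated sanity check of the train/test gap (5 folds, arbitrary)
cv_results = cross_validate(GradientBoostingRegressor(), X_train, y_train,
                            scoring='neg_mean_squared_error', cv=5,
                            return_train_score=True)
print('CV train RMSE =', np.round(np.sqrt(-cv_results['train_score']).mean(), 3))
print('CV test RMSE  =', np.round(np.sqrt(-cv_results['test_score']).mean(), 3))
```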
Let us visualize the model and data to see the results.
### Visualization of Model Performance
```
# Calculate predictions of the gradient boosting regression model
X_model = np.linspace(np.min(x), np.max(x), 10000)
X_model = X_model.reshape((X_model.size,1))
y_model_pred = regressor.predict(X_model)
y_truth = np.cos(X_model)+2*np.sin(X_model)+3*np.cos(X_model*2)
# Plot the whole dataset
fig,ax=plt.subplots(figsize=(24,8))
ax.scatter(X_train, y_train, label='Data')
ax.scatter(X_test, y_test, color='orange', label='Testing')
ax.plot(X_model, y_model_pred, color='red', label='Model')
ax.plot(X_model, y_truth, color='green', label='Truth')
ax.set_xlabel('x-Values', fontsize=20)
ax.set_ylabel('y-Values', fontsize=20)
ax.set_title('Performance', fontsize=25)
ax.legend(loc='upper right', fontsize=20)
plt.show()
```
As with decision tree and random forest models, gradient boosted trees also result in **piecewise constant** models.
Let's check the predicted $y$ and true $y$ values using a scatter plot.
```
fig,ax=plt.subplots(figsize=(8,8))
ax.scatter(y_test, y_test_pred, color="orange")
ax.scatter(y_train, y_train_pred, color="blue")
ax.set_xlabel('Truth', fontsize=20)
ax.set_ylabel('Prediction', fontsize=20)
plt.show()
```
### Hyperparameter Optimization with Cross-Validation
To address overfitting, we should optimize the hyperparameters for gradient boosted trees.
The two main hyperparameters for gradient boosted trees are `learning_rate` and `n_estimators`.
1. The `learning_rate` is usually denoted as α.
- It determines how fast the model learns. Each tree added modifies the overall model. The learning rate modifies the magnitude of the modification.
 - The lower the learning rate, the slower the model learns. The advantage of a slower learning rate is that the model becomes more robust and generalized; in statistical learning, models that learn slowly often perform better.
 - However, learning slowly comes at a cost: it takes more time to train the model, which brings us to the other significant hyperparameter.
2. The `n_estimators` hyperparameter determines the number of trees used in the model. If the learning rate is low, we need more trees to train the model. Be very careful selecting the number of trees, as too many trees create the risk of overfitting.
In cell `[4]` above, set the hyperparameter:
`regressor = GradientBoostingRegressor(hyperparameter = value)`
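For example (the values below are illustrative, not tuned):
```
# Illustrative, untuned values: a lower learning rate paired with more trees
regressor = GradientBoostingRegressor(learning_rate=0.05, n_estimators=500)
regressor.fit(X_train, y_train)
print('Testing score =', np.round(regressor.score(X_test, y_test), 3))
```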
### Grid Search for Optimal Hyperparameters
Instead of optimizing hyperparameters one by one, we will use a grid search for the optimization of some of the hyperparameters of the gradient boosting model with cross-validation. The optimal values of the hyperparameters depend on each other. The grid search varies all the parameters together, which helps ensure that we obtain a near-optimal model.
```
# List possible hyperparameters
regressor.get_params().keys()
# Grid search cross-validation
# Hyperparameter range initialization for tuning
parameters={"learning_rate" : [0.1, 0.3, 0.5],
"n_estimators" : [20, 40, 80]}
grid_search = GridSearchCV(regressor,param_grid=parameters,
scoring='neg_mean_squared_error',cv=3,verbose=1)
grid_search.fit(X_train, y_train)
# Optimal hyperparameters
tuned_parameters = grid_search.best_params_
print(tuned_parameters)
tuned_regressor = GradientBoostingRegressor(**tuned_parameters)
tuned_regressor.fit(X_train, y_train)
print('Training score =', np.round(tuned_regressor.score(X_train,y_train),3))
print('Testing score =', np.round(tuned_regressor.score(X_test,y_test),3))
y_train_pred = tuned_regressor.predict(X_train)
training_mse = mean_squared_error(y_train, y_train_pred)
y_test_pred = tuned_regressor.predict(X_test)
testing_mse = mean_squared_error(y_test, y_test_pred)
print('Training RMSE = ', np.round(np.sqrt(training_mse),3))
print('Testing RMSE = ', np.round(np.sqrt(testing_mse),3))
```
The tuned model performs very similarly to the model with default parameters.
- It predicts similar training and testing errors.
### Visualization of Model Performance
```
# Calculate predictions of the tuned gradient boosting regression model
X_model = np.linspace(np.min(x), np.max(x), 1000)
X_model = X_model.reshape((X_model.size,1))
y_model_pred = tuned_regressor.predict(X_model)
y_truth = np.cos(X_model)+2*np.sin(X_model)+3*np.cos(X_model*2)
# Plot the whole dataset
fig,ax=plt.subplots(figsize=(24,8))
ax.scatter(X_train, y_train, label='Data')
ax.scatter(X_test, y_test, color='orange', label='Testing')
ax.plot(X_model, y_model_pred, color='red', label='Model')
ax.plot(X_model, y_truth, color='green', label='Truth')
ax.set_xlabel('x-Values', fontsize=20)
ax.set_ylabel('y-Values', fontsize=20)
ax.set_title('Performance', fontsize=25)
ax.legend(loc='upper right', fontsize=20)
plt.show()
fig,ax=plt.subplots(figsize=(8,8))
ax.scatter(y_test, y_test_pred, color="orange")
ax.scatter(y_train, y_train_pred, color="blue")
ax.set_xlabel('Truth', fontsize=20)
ax.set_ylabel('Prediction', fontsize=20)
plt.show()
```
# Deploy and Distribute TensorFlow
In this notebook you will learn how to deploy TensorFlow models to TensorFlow Serving (TFS), using the REST API or the gRPC API, and how to train a model across multiple devices.
## Imports
```
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import sklearn
import sys
import tensorflow as tf
from tensorflow import keras
import time
print("python", sys.version)
for module in mpl, np, pd, sklearn, tf, keras:
print(module.__name__, module.__version__)
assert sys.version_info >= (3, 5) # Python ≥3.5 required
assert tf.__version__ >= "2.0" # TensorFlow ≥2.0 required
```

## Exercise 1 – Deploying a Model to TensorFlow Serving
## Save/Load a `SavedModel`
```
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full / 255.
X_test = X_test / 255.
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(100, activation="relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics=["accuracy"])
model.fit(X_train, y_train, epochs=10, validation_data=(X_valid, y_valid))
MODEL_NAME = "my_fashion_mnist"
!rm -rf {MODEL_NAME}
import time
model_version = int(time.time())
model_path = os.path.join(MODEL_NAME, str(model_version))
os.makedirs(model_path)
tf.saved_model.save(model, model_path)
for root, dirs, files in os.walk(MODEL_NAME):
indent = ' ' * root.count(os.sep)
print('{}{}/'.format(indent, os.path.basename(root)))
for filename in files:
print('{}{}'.format(indent + ' ', filename))
!saved_model_cli show --dir {model_path}
!saved_model_cli show --dir {model_path} --tag_set serve
!saved_model_cli show --dir {model_path} --tag_set serve \
--signature_def serving_default
!saved_model_cli show --dir {model_path} --all
```
**Warning**: as you can see, the method name is empty. This is [a bug](https://github.com/tensorflow/tensorflow/issues/25235), hopefully it will be fixed shortly. In the meantime, you must use `keras.experimental.export()` instead of `tf.saved_model.save()`:
```
!rm -rf {MODEL_NAME}
model_path = keras.experimental.export(model, MODEL_NAME).decode("utf-8")
!saved_model_cli show --dir {model_path} --all
```
Let's write a few test instances to a `npy` file so we can pass them easily to our model:
```
X_new = X_test[:3]
np.save("my_fashion_mnist_tests.npy", X_new, allow_pickle=False)
input_name = model.input_names[0]
input_name
```
And now let's use `saved_model_cli` to make predictions for the instances we just saved:
```
!saved_model_cli run --dir {model_path} --tag_set serve \
--signature_def serving_default \
--inputs {input_name}=my_fashion_mnist_tests.npy
```
## TensorFlow Serving
Install [Docker](https://docs.docker.com/install/) if you don't have it already. Then run:
```bash
docker pull tensorflow/serving
docker run -it --rm -p 8501:8501 \
-v "`pwd`/my_fashion_mnist:/models/my_fashion_mnist" \
-e MODEL_NAME=my_fashion_mnist \
tensorflow/serving
```
Once you are finished using it, press Ctrl-C to shut down the server.
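While the server is running, you can sanity-check that the model loaded correctly by querying TensorFlow Serving's model-status REST endpoint (the URL assumes the port and model name used above):
```
import requests
status = requests.get('http://localhost:8501/v1/models/my_fashion_mnist')
print(status.json())  # reports the state of each loaded model version
```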
```
import json
input_data_json = json.dumps({
"signature_name": "serving_default",
"instances": X_new.tolist(),
})
print(input_data_json[:200] + "..." + input_data_json[-200:])
```
Now let's use TensorFlow Serving's REST API to make predictions:
```
import requests
SERVER_URL = 'http://localhost:8501/v1/models/my_fashion_mnist:predict'
response = requests.post(SERVER_URL, data=input_data_json)
response.raise_for_status()
response = response.json()
response.keys()
y_proba = np.array(response["predictions"])
y_proba.round(2)
```
### Using Serialized Examples
```
serialized = []
for image in X_new:
image_data = tf.train.FloatList(value=image.ravel())
features = tf.train.Features(
feature={
"image": tf.train.Feature(float_list=image_data),
}
)
example = tf.train.Example(features=features)
serialized.append(example.SerializeToString())
[data[:100]+b'...' for data in serialized]
def parse_images(serialized):
expected_features = {
"image": tf.io.FixedLenFeature([28 * 28], dtype=tf.float32)
}
examples = tf.io.parse_example(serialized, expected_features)
return tf.reshape(examples["image"], (-1, 28, 28))
parse_images(serialized)
serialized_inputs = keras.layers.Input(shape=[], dtype=tf.string)
images = keras.layers.Lambda(lambda serialized: parse_images(serialized))(serialized_inputs)
y_proba = model(images)
ser_model = keras.models.Model(inputs=[serialized_inputs], outputs=[y_proba])
SER_MODEL_NAME = "my_ser_fashion_mnist"
!rm -rf {SER_MODEL_NAME}
ser_model_path = keras.experimental.export(ser_model, SER_MODEL_NAME).decode("utf-8")
!saved_model_cli show --dir {ser_model_path} --all
```
```bash
docker run -it --rm -p 8500:8500 -p 8501:8501 \
-v "`pwd`/my_ser_fashion_mnist:/models/my_ser_fashion_mnist" \
-e MODEL_NAME=my_ser_fashion_mnist \
tensorflow/serving
```
```
import base64
import json
ser_input_data_json = json.dumps({
"signature_name": "serving_default",
"instances": [{"b64": base64.b64encode(data).decode("utf-8")}
for data in serialized],
})
print(ser_input_data_json[:200] + "..." + ser_input_data_json[-200:])
import requests
SER_SERVER_URL = 'http://localhost:8501/v1/models/my_ser_fashion_mnist:predict'
response = requests.post(SER_SERVER_URL, data=ser_input_data_json)
response.raise_for_status()
response = response.json()
response.keys()
y_proba = np.array(response["predictions"])
y_proba.round(2)
!python3 -m pip install --no-deps tensorflow-serving-api
import grpc
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc
channel = grpc.insecure_channel('localhost:8500')
predict_service = prediction_service_pb2_grpc.PredictionServiceStub(channel)
request = predict_pb2.PredictRequest()
request.model_spec.name = SER_MODEL_NAME
request.model_spec.signature_name = "serving_default"
input_name = ser_model.input_names[0]
request.inputs[input_name].CopyFrom(tf.compat.v1.make_tensor_proto(serialized))
result = predict_service.Predict(request, 10.0)
result
output_name = ser_model.output_names[0]
output_name
shape = [dim.size for dim in result.outputs[output_name].tensor_shape.dim]
shape
y_proba = np.array(result.outputs[output_name].float_val).reshape(shape)
y_proba.round(2)
```

## Exercise 2 – Distributed Training
```
keras.backend.clear_session()
distribution = tf.distribute.MirroredStrategy()
with distribution.scope():
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(100, activation="relu"),
keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics=["accuracy"])
model.fit(X_train, y_train, epochs=10, validation_data=(X_valid, y_valid), batch_size=25)
```
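As a quick sanity check, you can ask the strategy how many replicas it is synchronizing across (this will be 1 on a single-device machine):
```
# Number of devices the MirroredStrategy replicates the model over
print(distribution.num_replicas_in_sync)
```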
```
from local.torch_basics import *
from local.test import *
from local.core import *
from local.layers import *
from local.data.all import *
from local.optimizer import *
from local.learner import *
from local.metrics import *
from local.text.all import *
from local.callback.rnn import *
from local.callback.all import *
from local.notebook.showdoc import *
```
# Transfer learning in text
> How to fine-tune a language model and train a classifier
## Finetune a pretrained Language Model
First we get our data and tokenize it.
```
path = untar_data(URLs.IMDB_SAMPLE)
df = pd.read_csv(path/'texts.csv')
df_tok,count = tokenize_df(df, 'text')
```
Then we put it in a `DataSource`. For a language model, we don't have targets, so there is only one transform to numericalize the texts. Note that `tokenize_df` returns the count of the words in the corpus to make it easy to create a vocabulary.
```
splits = RandomSplitter()(range_of(df_tok))
vocab = make_vocab(count)
dsrc = DataSource(df_tok, [[attrgetter("text"), Numericalize(vocab)]], splits=splits, dl_type=LMDataLoader)
```
Then we use that `DataSource` to create a `DataBunch`. Here the subclass of `TfmdDL` we need to use is `LMDataLoader`, which will concatenate all the texts in a source (with a shuffle at each epoch for the training set), split them into `bs` chunks, then read continuously through them.
```
dbunch = dsrc.databunch(bs=64, seq_len=72, after_batch=Cuda)
dbunch.show_batch()
```
Then we have a convenience method to directly grab a `Learner` from it, using the `AWD_LSTM` architecture.
```
learn = language_model_learner(dbunch, AWD_LSTM, vocab, metrics=[accuracy, Perplexity()], path=path, opt_func = partial(Adam, wd=0.1)).to_fp16()
learn.freeze()
learn.fit_one_cycle(1, 1e-2, moms=(0.8,0.7,0.8))
learn.unfreeze()
learn.fit_one_cycle(4, 1e-2, moms=(0.8,0.7,0.8))
```
Once we have fine-tuned the pretrained language model to this corpus, we save the encoder since we will use it for the classifier.
```
learn.show_results()
learn.save_encoder('enc1')
```
## Use it to train a classifier
For classification, we need to use two sets of transforms: one to numericalize the texts and the other to encode the labels as categories.
```
splits = RandomSplitter()(range_of(df_tok))
dsrc = DataSource(df_tok, splits=splits, tfms=[
[attrgetter("text"), Numericalize(vocab)],
[attrgetter("label"), Categorize()]], dl_type=SortedDL)
```
We once again use a subclass of `TfmdDL` for the dataloaders, since we want to sort the texts (sortish for the training set) by order of length. We also use `pad_input` to create batches from texts of different lengths.
```
dbunch = dsrc.databunch(before_batch=pad_input, after_batch=Cuda)
dbunch.show_batch(max_n=2)
```
Then we once again have a convenience function to create a classifier from this `DataBunch` with the `AWD_LSTM` architecture.
```
learn = text_classifier_learner(dbunch, AWD_LSTM, vocab, metrics=[accuracy], path=path, opt_func=Adam, drop_mult=0.5)
learn = learn.load_encoder('enc1')
```
Then we can train with gradual unfreezing and differential learning rates.
```
learn.fit_one_cycle(4, moms=(0.8,0.7,0.8))
learn.unfreeze()
learn.opt = learn.create_opt()
learn.fit_one_cycle(8, slice(1e-5,1e-3), moms=(0.8,0.7,0.8))
learn.show_results(max_n=5)
```

# A simple pipeline using hypergroup to perform community detection and network analysis
A social network of a [karate club](https://en.wikipedia.org/wiki/Zachary%27s_karate_club) was studied by Wayne W. Zachary [1] for a period of three years from 1970 to 1972. The network captures 34 members of a karate club, documenting 78 pairwise links between members who interacted outside the club. During the study a conflict arose between the administrator "John A" and instructor "Mr. Hi" (pseudonyms), which led to the split of the club into two. Half of the members formed a new club around Mr. Hi; members from the other part found a new instructor or gave up karate. Based on the collected data, Zachary correctly assigned all but one member of the club to the groups they actually joined after the split.
[1] W. Zachary, An information flow model for conflict and fission in small groups, Journal of Anthropological Research 33, 452-473 (1977)
## Data Preparation
### Import packages: SAS Wrapper for Analytic Transfer and open source libraries
```
import swat
import time
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import matplotlib.cm as cmx
# Also import networkx used for rendering a network
import networkx as nx
%matplotlib inline
```
### Connect to Cloud Analytic Services in SAS Viya
```
s = swat.CAS('http://cas.mycompany.com:8888') # REST API
```
### Load the action set for hypergroup
```
s.loadactionset('hypergroup')
```
### Load data into CAS
Data set used from https://en.wikipedia.org/wiki/Zachary%27s_karate_club.
```
df = pd.DataFrame.from_records([[2,1],[3,1],[3,2],[4,1],[4,2],[4,3],[5,1],[6,1],[7,1],[7,5],[7,6],[8,1],[8,2],[8,3],[8,4],[9,1],[9,3],[10,3],[11,1],[11,5],[11,6],[12,1],[13,1],[13,4],[14,1],[14,2],[14,3],[14,4],[17,6],[17,7],[18,1],[18,2],[20,1],[20,2],[22,1],[22,2],[26,24],[26,25],[28,3],[28,24],[28,25],[29,3],[30,24],[30,27],[31,2],[31,9],[32,1],[32,25],[32,26],[32,29],[33,3],[33,9],[33,15],[33,16],[33,19],[33,21],[33,23],[33,24],[33,30],[33,31],[33,32],[34,9],[34,10],[34,14],[34,15],[34,16],[34,19],[34,20],[34,21],[34,23],[34,24],[34,27],[34,28],[34,29],[34,30],[34,31],[34,32],[34,33]],
columns=['FROM','TO'])
df['SOURCE'] = df['FROM'].astype(str)
df['TARGET'] = df['TO'].astype(str)
df.head()
```
**Hypergroup** doesn't support numeric source and target columns - so make sure to cast them as varchars.
```
if s.tableexists('karate').exists:
s.CASTable('KARATE').droptable()
dataset = s.upload(df,
importoptions=dict(filetype='csv',
vars=[dict(type='double'),
dict(type='double'),
dict(type='varchar'),
dict(type='varchar')]),
casout=dict(name='KARATE', promote=True)).casTable
```
## Data Exploration
### Get to know your data (what are the variables?)
```
dataset.head(5)
dataset.summary()
```
### Graph rendering utility
```
def renderNetworkGraph(filterCommunity=-1, size=18, sizeVar='_HypGrp_',
colorVar='', sizeMultipler=500, nodes_table='nodes',
edges_table='edges'):
''' Build an array of node positions and related colors based on community '''
nodes = s.CASTable(nodes_table)
if filterCommunity >= 0:
nodes = nodes.query('_Community_ EQ %F' % filterCommunity)
nodes = nodes.to_frame()
nodePos = {}
nodeColor = {}
nodeSize = {}
communities = []
i = 0
for nodeId in nodes._Value_:
nodePos[nodeId] = (nodes._AllXCoord_[i], nodes._AllYCoord_[i])
if colorVar:
nodeColor[nodeId] = nodes[colorVar][i]
if nodes[colorVar][i] not in communities:
communities.append(nodes[colorVar][i])
nodeSize[nodeId] = max(nodes[sizeVar][i],0.1)*sizeMultipler
i += 1
communities.sort()
# Build a list of source-target tuples
edges = s.CASTable(edges_table)
if filterCommunity >= 0:
edges = edges.query('_SCommunity_ EQ %F AND _TCommunity_ EQ %F' %
(filterCommunity, filterCommunity))
edges = edges.to_frame()
edgeTuples = []
for i, p in enumerate(edges._Source_):
edgeTuples.append( (edges._Source_[i], edges._Target_[i]) )
# Add nodes and edges to the graph
plt.figure(figsize=(size,size))
graph = nx.DiGraph()
graph.add_edges_from(edgeTuples)
# Size mapping
getNodeSize=[nodeSize[v] for v in graph]
# Color mapping
    jet = plt.get_cmap('jet')
    getNodeColor = None
if colorVar:
getNodeColor=[nodeColor[v] for v in graph]
cNorm = colors.Normalize(vmin=min(communities), vmax=max(communities))
scalarMap = cmx.ScalarMappable(norm=cNorm, cmap=jet)
# Using a figure here to work-around the fact that networkx doesn't
# produce a labelled legend
f = plt.figure(1)
ax = f.add_subplot(1,1,1)
for community in communities:
ax.plot([0],[0], color=scalarMap.to_rgba(community),
label='Community %s' % '{:2.0f}'.format(community), linewidth=10)
# Render the graph
nx.draw_networkx_nodes(graph, nodePos, node_size=getNodeSize,
node_color=getNodeColor, cmap=jet)
nx.draw_networkx_edges(graph, nodePos, width=1, alpha=0.5)
nx.draw_networkx_labels(graph, nodePos, font_size=11, font_family='sans-serif')
if len(communities) > 0:
plt.legend(loc='upper left', prop={'size':11})
plt.title('Zachary Karate Club social network', fontsize=30)
plt.axis('off')
plt.show()
```
### Execute community and hypergroup detection
```
# Create output table objects
edges = s.CASTable('edges', replace=True)
nodes = s.CASTable('nodes', replace=True)
dataset[['SOURCE', 'TARGET']].hyperGroup(
createOut = 'never',
allGraphs = True,
edges = edges,
vertices = nodes
)
renderNetworkGraph(size=10, sizeMultipler=2000)
```
>**Note:** Network of the Zachary Karate Club, with node size reflecting node degree. Node 1 stands for the instructor, node 34 for the president.
```
dataset[['SOURCE', 'TARGET']].hyperGroup(
createOut = 'never',
allGraphs = True,
community = True,
edges = edges,
vertices = nodes
)
```
How many hypergroups and communities do we have?
```
nodes.distinct()
nodes.summary()
```
### Basic community analysis
What are the biggest communities?
```
topKOut = s.CASTable('topKOut', replace=True)
nodes[['_Community_']].topk(
aggregator = 'N',
topK = 4,
casOut = topKOut
)
topKOut = topKOut.sort_values('_Rank_').head(10)
topKOut.columns
nCommunities = len(topKOut)
ind = np.arange(nCommunities) # the x locations for the groups
plt.figure(figsize=(8,4))
p1 = plt.bar(ind + 0.2, topKOut._Score_, 0.5, color='orange', alpha=0.75)
plt.ylabel('Vertices', fontsize=12)
plt.xlabel('Community', fontsize=12)
plt.title('Number of nodes for the top %s communities' % '{:2.0f}'.format(nCommunities))
plt.xticks(ind + 0.2, topKOut._Fmtvar_)
plt.show()
```
>**Note:** This shows that the biggest communities have up to 18 vertices.
What nodes belong to community 1?
```
nodes.query('_Community_ EQ 1').head(5)
```
What edges do we have?
```
edges.head(5)
```
### Render the network graph
```
renderNetworkGraph(size=10, colorVar='_Community_', sizeMultipler=2000)
```
### Analyze node centrality
How important is a user in the network?
```
dataset[['SOURCE', 'TARGET']].hyperGroup(
createOut = 'never',
community = True,
centrality = True,
mergeCommSmallest = True,
allGraphs = True,
graphPartition = True,
scaleCentralities = 'central1', # Returns centrality values closer to 1 in the center
edges = edges,
vertices = nodes
)
nodes.head()
```
Betweenness centrality quantifies the number of times a node acts as a bridge along the shortest path between two other nodes. As such, it describes the importance of a node in the network.
```
renderNetworkGraph(size=10, colorVar='_Community_', sizeVar='_Betweenness_')
```
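As an optional cross-check outside CAS, networkx (already imported for rendering) can compute betweenness centrality directly from the edge list; absolute values may differ from the CAS results due to scaling:
```
# Optional cross-check: betweenness centrality computed with networkx
g = nx.Graph([(str(a), str(b)) for a, b in zip(df['FROM'], df['TO'])])
bc = nx.betweenness_centrality(g)
sorted(bc.items(), key=lambda kv: -kv[1])[:5]  # the five most central members
```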
### Filter communities
Show only community 1.
```
renderNetworkGraph(1, size=10, sizeVar='_CentroidAngle_', sizeMultipler=5)
s.close()
```
>Falko Schulz ▪ Principal Software Developer ▪ Business Intelligence Visualization R&D ▪ SAS® Institute ▪ [falko.schulz@sas.com](mailto:falko.schulz@sas.com) ▪ http://www.sas.com
# pyplearnr demo
Here I demonstrate pyplearnr, a wrapper for building/training/validating scikit learn pipelines using GridSearchCV or RandomizedSearchCV.
Quick keyword arguments give access to optional feature selection (e.g. SelectKBest), scaling (e.g. standard scaling), use of feature interactions, and data transformations (e.g. PCA, t-SNE) before being fed to a classifier/regressor.
After building the pipeline, data can be used to perform a nested (stratified if classification) k-folds cross-validation and output an object containing data from the process, including the best model.
Various default pipeline step parameters for the grid-search are available for quick iteration over different pipelines, with the option to ignore/override them in a flexible way.
This is an on-going project that I intend to update with more models and pre-processing options and also with corresponding defaults.
## Titanic dataset example
Here I use the Titanic dataset I've cleaned and pickled in a separate tutorial.
### Import data
```
import pandas as pd
df = pd.read_pickle('trimmed_titanic_data.pkl')
df.info()
```
By "cleaned" I mean I've derived titles (e.g. "Mr.", "Mrs.", "Dr.", etc) from the passenger names, imputed the missing Age values using polynomial regression with grid-searched 10-fold cross-validation, filled in the 3 missing Embarked values with the mode, and removed all fields that could be considered an id for that individual.
Thus, there is no missing/null data.
## Set categorical features as type 'category'
In order to one-hot encode categorical data, it's best to set the features that are considered categorical:
```
simulation_df = df.copy()
categorical_features = ['Survived','Pclass','Sex','Embarked','Title']
for feature in categorical_features:
simulation_df[feature] = simulation_df[feature].astype('category')
simulation_df.info()
```
## One-hot encode categorical features
```
simulation_df = pd.get_dummies(simulation_df,drop_first=True)
simulation_df.info()
```
Now we have 17 features.
### Split into input/output data
```
# Set output feature
output_feature = 'Survived_1'
# Get all column names
column_names = list(simulation_df.columns)
# Get input features
input_features = [x for x in column_names if x != output_feature]
# Split into features and responses
X = simulation_df[input_features].copy()
y = simulation_df[output_feature].copy()
```
### Null model
```
simulation_df['Survived_1'].value_counts().values/float(simulation_df['Survived_1'].value_counts().values.sum())
```
Thus, the null accuracy is ~62% if we always predict death.
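For reference, the same baseline can be reproduced with scikit-learn's DummyClassifier (a quick sketch):
```
# Null model: always predict the majority class (death)
from sklearn.dummy import DummyClassifier
null_model = DummyClassifier(strategy='most_frequent').fit(X, y)
print(null_model.score(X, y))  # ~0.62, matching the value counts above
```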
### Import pyplearnr and initialize optimized pipeline collection
```
%matplotlib inline
%load_ext autoreload
import sys
import os
sys.path.append("./pyplearnr")
optimized_pipelines = {}
%%time
%autoreload
import numpy as np
import pyplearnr as ppl
reload(ppl)
kfcv = ppl.NestedKFoldCrossValidation(outer_loop_fold_count=3,
inner_loop_fold_count=3)
pipeline_schematic = [
{'scaler': {
'none': {},
'standard': {},
'min_max': {},
'normal': {}
}},
{'estimator': {
'knn': {
'n_neighbors': range(1,31),
'weights': ['uniform','distance']
}}}
]
pipelines = ppl.PipelineBuilder().build_pipeline_bundle(pipeline_schematic)
print 'Number of pipelines: %d'%(len(pipelines)), '\n'
kfcv.fit(X.values, y.values, pipelines, scoring_metric='auc')
kfcv.fit(X.values, y.values, pipelines,
best_inner_fold_pipeline_inds = {0:59})
kfcv.fit(X.values, y.values, pipelines, best_outer_fold_pipeline=59)
%autoreload
kfcv.plot_best_pipeline_scores(number_size=10,markersize=8, figsize=(9,3), box_line_thickness=1)
%autoreload
kfcv.plot_contest(color_by='scaler', markersize=3)
%autoreload
kfcv.fit(X.values, y.values, pipelines,
best_inner_fold_pipeline_inds = {1:6})
kfcv.fit(X.values, y.values, pipelines, best_outer_fold_pipeline=8)
%autoreload
%matplotlib inline
kfcv.plot_best_pipeline_scores(number_size=18, markersize=14)
%autoreload
%matplotlib inline
kfcv.plot_contest(number_size=8, markersize=7, all_folds=True, figsize=(10,40),
color_by='scaler', box_line_thickness=2)
kfcv.pipelines[29]
# cmap = pylab.cm.viridis
# print cmap.__doc__
worst_pipelines = [85, 67, 65, 84, 69, 83]
for pipeline_ind in worst_pipelines:
print pipeline_ind, kfcv.pipelines[pipeline_ind]
print '\n'
worst_pipelines = [86, 75, 84, 79, 85, 83]
for pipeline_ind in worst_pipelines:
print pipeline_ind, kfcv.pipelines[pipeline_ind]
print '\n'
worst_pipelines = [77, 61, 81, 83, 74, 82, 84]
for pipeline_ind in worst_pipelines:
print pipeline_ind, kfcv.pipelines[pipeline_ind]
best_pipelines = [89, 93, 2, 91, 4, 3]
for pipeline_ind in best_pipelines:
print pipeline_ind, kfcv.pipelines[pipeline_ind]
print '\n'
best_pipelines = [91, 93, 5, 43, 4, 100]
for pipeline_ind in best_pipelines:
print pipeline_ind, kfcv.pipelines[pipeline_ind]
print '\n'
best_pipelines = [5, 4, 91, 3, 55, 49, 2]
for pipeline_ind in best_pipelines:
print pipeline_ind, kfcv.pipelines[pipeline_ind]
%%time
%autoreload
import numpy as np
import pyplearnr as ppl
reload(ppl)
kfcv = ppl.NestedKFoldCrossValidation(outer_loop_fold_count=3,
inner_loop_fold_count=3)
pipeline_bundle_schematic = [
{'scaler': {
'standard': {},
'normal': {},
'min_max': {},
'binary': {}
}},
{'estimator': {
'knn': {
'n_neighbors': range(1,30)
},
# 'svm': {
# 'C': np.array([1.00000000e+00])
# }
}}
]
pipelines = ppl.PipelineBuilder().build_pipeline_bundle(pipeline_bundle_schematic)
print 'Number of pipelines: %d'%(len(pipelines)), '\n'
kfcv.fit(X.values, y.values, pipelines, scoring_metric='accuracy')
kfcv.fit(X.values, y.values, pipelines,
best_inner_fold_pipeline_inds = {1:24, 2:55})
kfcv.fit(X.values, y.values, pipelines, best_outer_fold_pipeline=55)
%autoreload
%matplotlib inline
kfcv.plot_best_pipeline_scores()
%autoreload
%matplotlib inline
kfcv.plot_contest()
best_pipelines = [91, 44, 89, 45, 3, 90]
for pipeline_ind in best_pipelines:
print pipeline_ind, kfcv.pipelines[pipeline_ind]
print '\n'
best_pipelines = [21, 18, 40, 38, 36, 35, 24]
for pipeline_ind in best_pipelines:
print pipeline_ind, kfcv.pipelines[pipeline_ind]
print '\n'
best_pipelines = [55, 39, 41, 42, 47, 40, 114, 110]
for pipeline_ind in best_pipelines:
print pipeline_ind, kfcv.pipelines[pipeline_ind]
%autoreload
kfcv.print_report()
kfcv.fit(X.values, y.values, pipelines,
best_inner_fold_pipeline_inds = {2:18})
kfcv.fit(X.values, y.values, pipelines, best_outer_fold_pipeline=18)
%autoreload
kfcv.print_report()
best_inner_fold_pipelines = {
2: 9
}
kfcv.fit(X.values, y.values, pipelines,
best_inner_fold_pipeline_inds = best_inner_fold_pipelines)
best_outer_fold_pipeline = 45
kfcv.fit(X.values, y.values, pipelines,
best_outer_fold_pipeline = best_outer_fold_pipeline)
```
# Regression
```
%%time
%autoreload
import numpy as np
import pyplearnr as ppl
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest
reload(ppl)
data = pd.read_csv('Advertising.csv',index_col=0)
# Start with all features
feature_cols = ['TV','Radio','Newspaper']
# Split data
X = data[feature_cols]
y = data.Sales
kfcv = ppl.NestedKFoldCrossValidation(outer_loop_fold_count=5,
inner_loop_fold_count=3)
pipeline_bundle_schematic = [
{'scaler': {
'none': {},
'standard': {}
}},
{'pre_estimator': {
'polynomial_features': {
'degree': range(1,5)
}
}},
{'estimator': {
'linear_regression': {},
}}
]
pipelines = ppl.PipelineBuilder().build_pipeline_bundle(pipeline_bundle_schematic)
print 'Number of pipelines: %d'%(len(pipelines)), '\n'
kfcv.fit(X.values, y.values, pipelines, scoring_metric='rmse')
kfcv.fit(X.values, y.values, pipelines, scoring_metric='rmse', best_outer_fold_pipeline=1)
%autoreload
kfcv.print_report()
%autoreload
kfcv.print_report()
%%time
%autoreload
import itertools
estimators = ['knn','logistic_regression','svm',
'multilayer_perceptron','random_forest','adaboost']
feature_interaction_options = [True,False]
feature_selection_options = [None,'select_k_best']
scaling_options = [None,'standard','normal','min_max','binary']
transformations = [None,'pca']
pipeline_steps = [feature_interaction_options,feature_selection_options,scaling_options,
transformations,estimators]
pipeline_options = list(itertools.product(*pipeline_steps))
optimized_pipelines = {}
for pipeline_step_combo in pipeline_options:
model_name = []
feature_interactions = pipeline_step_combo[0]
if feature_interactions:
model_name.append('interactions')
feature_selection_type = pipeline_step_combo[1]
if feature_selection_type:
model_name.append('select')
scale_type = pipeline_step_combo[2]
if scale_type:
model_name.append(scale_type)
transform_type = pipeline_step_combo[3]
if transform_type:
model_name.append(transform_type)
estimator = pipeline_step_combo[4]
model_name.append(estimator)
model_name = '_'.join(model_name)
print model_name
# Set pipeline keyword arguments
optimized_pipeline_kwargs = {
'feature_selection_type': feature_selection_type,
'scale_type': scale_type,
'transform_type': transform_type
}
# Initialize pipeline
optimized_pipeline = ppl.PipelineOptimization(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'cv': 10,
'num_parameter_combos': None,
'n_jobs': -1,
'random_state': None,
'suppress_output': True,
'use_default_param_dist': True,
'param_dist': None,
'test_size': 0.2 # 20% saved as test set
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save optimized pipeline
optimized_pipelines[model_name] = optimized_pipeline
```
### KNN with and without pre-processing and various options
#### Basic KNN
Here we do a K-nearest neighbors (KNN) classification with stratified 10-fold (default) cross-validation and a grid search over the default range of 1 to 30 nearest neighbors with either "uniform" or "distance" weights:
```
%%time
estimator = 'knn'
# Set pipeline keyword arguments
optimized_pipeline_kwargs = {
'feature_selection_type': None,
'scale_type': None,
'transform_type': None
}
# Initialize pipeline
optimized_pipeline = ppl.PipelineOptimization(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'cv': 10,
'num_parameter_combos': None,
'n_jobs': -1,
'random_state': 6,
'suppress_output': True,
'use_default_param_dist': True,
'param_dist': None,
'test_size': 0.2 # 20% saved as test set
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[estimator] = optimized_pipeline
```
Note the default OptimizedPipeline parameters and those for its fit() method.
The OptimizedPipeline class contains all of the data associated with the nested stratified k-folds cross-validation.
After use of the fit() method, this includes the data, its test/train splits (based on the test_size percentage keyword argument), the GridSearchCV or RandomizedGridSearchCV object, the Pipeline object that has been retrained using all of the data with the best parameters, test/train scores, and validation metrics/reports.
A report can be printed immediately after the fit by setting the suppress_output keyword argument to False.
Printing the OptimizedPipeline instance also shows the report:
```
print optimized_pipeline
```
The report lists the steps in the pipeline, their optimized settings, the test/training accuracy (or L2 regression score), the grid search parameters, and the best parameters.
If the estimator used is a classifier it also includes the confusion matrix, normalized confusion matrix, and a classification report containing precision/recall/f1-score for each class.
It turns out that the best settings for this optimized pipeline are 12 neighbors and the use of the 'uniform' weight.
Note how I've set the random_state keyword argument to 6 so that the models can be compared using the same test/train split.
#### Default pipeline step grid parameters
The default parameters to grid-search over for k-nearest neighbors are 1 to 30 neighbors and either the 'uniform' or 'distance' weight.
The defaults for the pre-processing steps, classifiers, and regressors can be viewed by using the get_default_pipeline_step_parameters() method with the number of features as the input:
```
pre_processing_grid_parameters,classifier_grid_parameters,regression_grid_parameters = \
optimized_pipeline.get_default_pipeline_step_parameters(X.shape[0])
classifier_grid_parameters['knn']
```
#### KNN with custom pipeline step grid parameters
These default parameters can be ignored by setting the use_default_param_dist keyword argument to False.
The param_dist keyword argument can be used to override specific defaults (if use_default_param_dist is set to True) or as the sole source of parameters (if use_default_param_dist is set to False).
Here is a demonstration of generating the default parameters, with those given in param_dist overriding them:
```
%%time
estimator_name = 'knn'
model_name = 'custom_override_%s'%(estimator_name)
# Set custom parameters
param_dist = {
'estimator__n_neighbors': range(30,500)
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'param_dist': param_dist,
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
```
Note how the n_neighbors parameter was 30 to 499 instead of 1 to 30.
Here's an example of only using param_dist for parameters:
```
%%time
model_name = 'from_scratch_%s'%(estimator_name)
# Set custom parameters
param_dist = {
'estimator__n_neighbors': range(10,30)
}
estimator = 'knn'
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': False,
'param_dist': param_dist,
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
```
Note how the estimator\_\_weights parameter isn't set for the KNN estimator.
### KNN with scaling
The currently supported scaling options are standard, normal, min-max, and binary using scikit-learn's StandardScaler, Normalizer, MinMaxScaler, and Binarizer, respectively. These are set by the pipeline initialization kwarg 'scale_type' like this:
```
%%time
estimator = 'knn'
scaling_options = ['standard','normal','min_max','binary']
for scaling_option in scaling_options:
model_name = '%s_%s'%(scaling_option,estimator_name)
optimized_pipeline_kwargs = {
'scale_type': scaling_option
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': True,
'suppress_output': True
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
```
Let's compare the pipelines so far:
```
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
```
Binary scaling fed into a KNN classifier appears to have the best training score.
#### KNN with custom min-max and binary scaling settings
MinMaxScaler scales each feature value to between 0 and 1 by default. Different scaling ranges can be gridded over by setting the 'scaler\_\_feature_range' keyword argument in param_dist.
Binarizer sets each value to 0 or 1 depending on a threshold. The default for pyplearnr is 0.5. This can be changed by setting 'scaler\_\_threshold' using param_dist.
Here is an example of setting both:
```
%%time
reload(ppl)
estimator = 'knn'
scaling_options = ['min_max','binary']
param_dists = {
'min_max': {
'scaler__feature_range': [(1,2),(3,4)]
},
'binary': {
'scaler__threshold': np.arange(0,1,0.1)
}
}
for scaling_option in scaling_options:
model_name = 'custom_%s_%s'%(scaling_option,estimator_name)
optimized_pipeline_kwargs = {
'scale_type': scaling_option
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': True,
'suppress_output': True,
'param_dist': param_dists[scaling_option]
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
```
Switching the range for min_max scaling boosted it to rank 1 for pipeline training scores:
```
print optimized_pipelines['custom_min_max_knn']
```
The range of 1 to 2 for the MinMaxScaler appeared to be the best.
### KNN with feature selection using SelectKBest with f_classif
Currently only one form of feature selection, SelectKBest with f_classif, is supported. This is set using the 'feature_selection_type' keyword argument.
```
%%time
reload(ppl)
estimator = 'knn'
model_name = 'select_%s'%(estimator_name)
optimized_pipeline_kwargs = {
'feature_selection_type': 'select_k_best'
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': True,
'suppress_output': True
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
```
Feature selection combined with KNN had a mid-level training score:
```
print optimized_pipelines['select_knn']
```
SelectKBest with f_classif chose 5 features as the best to use in the model.
The features selected by SelectKBest can be accessed normally, using the mask obtained from the get_support() method on the columns:
```
feature_selection_mask = optimized_pipelines['select_knn'].pipeline.named_steps['feature_selection'].get_support()
print np.array(X.columns)[feature_selection_mask]
```
Thus, Pclass 3, being male, and the titles Miss, Mr, and Mrs were considered the most important features by SelectKBest using f_classif.
#### Setting custom feature selection
By default, the grid for the number of selected features ranges from 1 to all of them. This can be restricted to particular values by setting 'feature_selection\_\_k' in param_dist:
```
%%time
reload(ppl)
estimator = 'knn'
model_name = 'custom_select_%s'%(estimator_name)
optimized_pipeline_kwargs = {
'feature_selection_type': 'select_k_best'
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
param_dist = {
'feature_selection__k': [5,7,8]
}
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': True,
'suppress_output': True,
'param_dist': param_dist
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
print optimized_pipelines['custom_select_knn']
```
### KNN using feature interactions
Feature products of different degrees can be used as additional features by setting the 'feature_interaction' OptimizedPipeline keyword argument to True:
```
%%time
reload(ppl)
estimator = 'knn'
model_name = 'interaction_%s'%(estimator_name)
optimized_pipeline_kwargs = {
'feature_interactions': True
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': True,
'suppress_output': True
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
print optimized_pipelines['interaction_knn']
```
The optimal interaction degree (the number of features multiplied together at once) was found to be 1, i.e. no interactions.
#### KNN using custom number of feature interactions
The 'feature_interactions__degree' dictates the number of interactions. The default setting is to try no interactions (degree 1) and 2 interactions. Setting this in param_dist allows custom numbers:
```
%%time
reload(ppl)
estimator = 'knn'
model_name = 'custom_interaction_%s'%(estimator_name)
optimized_pipeline_kwargs = {
'feature_interactions': True
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
param_dist = {
'feature_interactions__degree': [2,3,4]
}
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': True,
'suppress_output': True,
'param_dist': param_dist
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
print optimized_pipelines['custom_interaction_knn']
```
### KNN with pre-processing transforms
Currently Principal Component Analysis (PCA) and t-distributed stochastic neighbor embedding (t-SNE) are supported as pre-processing options.
#### KNN with PCA pre-processing
```
%%time
reload(ppl)
estimator = 'knn'
model_name = 'pca_%s'%(estimator_name)
optimized_pipeline_kwargs = {
'transform_type': 'pca'
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': True,
'suppress_output': True
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
print optimized_pipelines['pca_knn']
```
We can look at the transformed data after PCA normally:
```
transformed_data = optimized_pipelines['pca_knn'].pipeline.named_steps['transform'].transform(X.values)
column_names = ['PCA_%d'%(feature_ind+1) for feature_ind in range(transformed_data.shape[1])]
pca_df = pd.DataFrame(transformed_data,columns=column_names)
pca_df.plot(x='PCA_1',y='PCA_2',style='ro')
```
This is currently a very manual process and would be difficult with more and more processing steps. I'm thinking of automating this with a class containing all optimized pipelines in the future.
Any of the parameters displayed in the pipeline section of the report (iterated_power, random_state, whiten, n_components, etc.) can be set in param_dist via 'transform\_\_setting' as done previously.
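For instance, a hypothetical param_dist gridding over a couple of PCA settings might look like this (the particular values here are made up for illustration):
```
param_dist = {
    'transform__n_components': [2, 4, 8],
    'transform__whiten': [True, False]
}
```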
#### KNN with t-SNE pre-processing
The t-SNE algorithm can be used as a pre-processing algorithm as well by setting the 'transform_type' keyword argument to 't-sne':
```
%%time
reload(ppl)
estimator = 'knn'
model_name = 't-sne_%s'%(estimator_name)
optimized_pipeline_kwargs = {
'transform_type': 't-sne'
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': True,
'suppress_output': True
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
```
This t-SNE step takes longer than most in pyplearnr unfortunately. It also resulted in the worst score. I'll try to optimize this in the future.
### Reducing the number of grid combinations
Setting the 'num_parameter_combos' fit() method keyword argument to an integer will limit the number of grid combinations to perform using RandomizedSearchCV instead of GridSearchCV:
```
%%time
reload(ppl)
estimator = 'knn'
model_name = 'less_combos_%s'%(estimator_name)
optimized_pipeline_kwargs = {}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'use_default_param_dist': True,
'suppress_output': True,
'num_parameter_combos': 5
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
```
This is a good way to speed up computations and give you an idea as to how long a particular pipeline takes to train.
Here's the corresponding report:
```
print optimized_pipelines['less_combos_knn']
```
The best parameter combination, of those attempted by RandomizedSearchCV, was 12 nearest neighbors with the 'uniform' weight.
### Other models
This code currently supports K-nearest neighbors, logistic regression, support vector machines, multilayer perceptrons, random forest, and adaboost:
```
%%time
classifiers = ['knn','logistic_regression','svm',
'multilayer_perceptron','random_forest','adaboost']
for estimator in classifiers:
# Set pipeline keyword arguments
optimized_pipeline_kwargs = {}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'random_state': 6,
'suppress_output': True,
'use_default_param_dist': True
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
optimized_pipelines[estimator] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black')
```
Logistic regression, random forest, multilayer perceptron, and adaboost outperform KNN, even with all of the attempted pre-processing so far.
### Putting it all together
Different combinations of these options can be strung together simultaneously to iterate over multiple models:
```
%%time
import itertools
estimators = ['knn','logistic_regression','svm',
'multilayer_perceptron','random_forest','adaboost']
feature_interaction_options = [True,False]
feature_selection_options = [None,'select_k_best']
scaling_options = [None,'standard','normal','min_max','binary']
transformations = [None,'pca']
pipeline_steps = [feature_interaction_options,feature_selection_options,scaling_options,
transformations,estimators]
pipeline_options = list(itertools.product(*pipeline_steps))
optimized_pipelines = {}
for pipeline_step_combo in pipeline_options:
model_name = []
feature_interactions = pipeline_step_combo[0]
if feature_interactions:
model_name.append('interactions')
feature_selection_type = pipeline_step_combo[1]
if feature_selection_type:
model_name.append('select')
scale_type = pipeline_step_combo[2]
if scale_type:
model_name.append(scale_type)
transform_type = pipeline_step_combo[3]
if transform_type:
model_name.append(transform_type)
estimator = pipeline_step_combo[4]
model_name.append(estimator)
model_name = '_'.join(model_name)
print model_name
# Set pipeline keyword arguments
optimized_pipeline_kwargs = {
'feature_selection_type': feature_selection_type,
'scale_type': scale_type,
'feature_interactions': feature_interactions,
'transform_type': transform_type
}
# Initialize pipeline
optimized_pipeline = ppl.OptimizedPipeline(estimator,**optimized_pipeline_kwargs)
# Set pipeline fitting parameters
fit_kwargs = {
'cv': 10,
'num_parameter_combos': None,
'n_jobs': -1,
'random_state': None,
'suppress_output': True,
'use_default_param_dist': True,
'param_dist': None,
'test_size': 0.2 # 20% saved as test set
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save optimized pipeline
optimized_pipelines[model_name] = optimized_pipeline
# Visualize results
pipeline_keys = optimized_pipelines.keys()
test_scores = [optimized_pipelines[key].test_score_ for key in pipeline_keys]
ax = pd.Series(test_scores,index=pipeline_keys).sort_values().plot(kind='barh',color='black',figsize=(10,40))
print optimized_pipelines['min_max_pca_multilayer_perceptron']
len(optimized_pipelines.keys())
```
Out of 240 possible pipelines, the best one, with a test score of 0.899, appears to be min-max scaling between 0 and 1 fed into PCA and then into a multilayer perceptron with one hidden layer of size 5.
It took roughly 3 hours to find it.
### Predicting survival with the optimal model
All one has to do to make a prediction is use the .predict method of the pipeline in the .pipeline field.
Here's an example of predicting whether I would survive on the Titanic. I'm 32, would probably have one family member with me, might be Pclass1 (I'd hope), male, have a Ph.D (if that's what they mean by Dr.). I'm using the median Fare for Pclass 1 and randomly chose a city to have embarked from:
```
personal_stats = [32,1,0,df[df['Pclass']==1]['Fare'].median(),0,0,1,1,0,1,0,0,0,0,0,0]
zip(personal_stats,X.columns)
optimized_pipelines['min_max_pca_multilayer_perceptron'].pipeline.predict(personal_stats)
```
Looks like I died!
Let's look at my predicted probability of surviving:
```
optimized_pipelines['min_max_pca_multilayer_perceptron'].pipeline.predict_proba(personal_stats)
```
I would have a 0.77% chance of survival.
## Summary
I've shown how to use pyplearnr to try out 240 different pipeline combinations validated with stratified 10-folds cross-validation using a combination of simple keyword arguments with some additional customization options. Also, I've shown how to access the model parameters, predict survival, and check the actual predicted probability according to the optimized pipeline.
Please let me know if you have any questions or suggestions about how to improve this tool, my code, the approach I'm taking, etc.
```
%%time
%matplotlib inline
import pyplearnr as ppl
repeated_k_folds = []
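# Repeat the default KNN pipeline optimization 100 times to see how train/test scores vary across random splits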
for i in range(100):
# Alert user of step number
print('Step %d/%d'%(i+1,100))
# Set custom parameters
param_dist = {}
estimator = 'knn'
# Initialize pipeline
optimized_pipeline = ppl.PipelineOptimization(estimator)
# Set pipeline fitting parameters
fit_kwargs = {
'use_default_param_dist': True,
'param_dist': param_dist,
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
repeated_k_folds.append(optimized_pipeline)
data = {
'train scores': [pipeline_optimization.train_score_
for pipeline_optimization in repeated_k_folds],
'test scores': [pipeline_optimization.test_score_
for pipeline_optimization in repeated_k_folds],
}
repeated_kfcv_df = pd.DataFrame(data)
repeated_kfcv_df['test scores'].plot(kind='hist',bins=8,color='grey')
repeated_kfcv_df['train scores'].plot(kind='hist',bins=8,color='white')
%%time
reload(ppl)
%matplotlib inline
import pyplearnr as ppl
repeated_five_folds = []
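# Same experiment again, but now explicitly with 5-fold cross-validation (cv=5 below)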
for i in range(100):
# Alert user of step number
print('Step %d/%d'%(i+1,100))
# Set custom parameters
param_dist = {}
estimator = 'knn'
# Initialize pipeline
optimized_pipeline = ppl.PipelineOptimization(estimator)
# Set pipeline fitting parameters
fit_kwargs = {
'use_default_param_dist': True,
'param_dist': param_dist,
'cv': 5,
'suppress_output': True
}
# Fit data
optimized_pipeline.fit(X,y,**fit_kwargs)
# Save
repeated_five_folds.append(optimized_pipeline)
data = {
'train scores': [pipeline_optimization.train_score_
for pipeline_optimization in repeated_five_folds],
'test scores': [pipeline_optimization.test_score_
for pipeline_optimization in repeated_five_folds],
}
repeated_fivefcv_df = pd.DataFrame(data)
repeated_kfcv_df['test scores'].plot(kind='hist',bins=8,color='grey')
repeated_fivefcv_df['test scores'].plot(kind='hist',bins=8,color='red')
repeated_kfcv_df['train scores'].plot(kind='hist',bins=8,color='white')
repeated_fivefcv_df['train scores'].plot(kind='hist',bins=8,color='blue')
repeated_fivefcv_df['test scores'].plot(kind='hist',bins=8,color='red')
repeated_kfcv_df['test scores'].plot(kind='hist',bins=8,color='grey')
repeated_kfcv_df['train scores'].plot(kind='hist',bins=8,color='white')
repeated_fivefcv_df['train scores'].plot(kind='hist',bins=8,color='blue')
import sys
sys.path.append('/Users/cmshymansky/documents/code/library/pairplotr')
import pairplotr as ppr
repeated_fivefcv_df.info()
reload(ppr)
ppr.compare_data(repeated_fivefcv_df,bins=8,marker_size=10,plot_medians=True)
reload(ppr)
ppr.compare_data(repeated_fivefcv_df,bins=8,marker_size=10,plot_medians=True)
repeated_fivefcv_df['train scores'].describe()
from matplotlib import pylab as plt
ax = plt.subplot(111)
print ax
# repeated_fivefcv_df.plot(ax=ax,x='train scores',y='test scores',style='bo')
repeated_kfcv_df.plot(ax=ax,x='train scores',y='test scores',style='ro')
print dir(repeated_k_folds[0].grid_search)
all_scores = []
for x in repeated_k_folds[0].grid_search.grid_scores_:
all_scores.extend(list(x.cv_validation_scores))
    print max(x.cv_validation_scores), x.mean_validation_score
print repeated_k_folds[0].grid_search.cv_results_
pd.Series(all_scores).plot(kind='hist',color='grey',bins=8)
def get_bootstrapped_datasets(orig_data_set, num_samples=100, points_per_sample=50):
import random
data_sets = []
for i in range(num_samples):
sample = [random.choice(orig_data_set) for x in range(points_per_sample)]
data_sets.append(sample)
return data_sets
def cdf(aList, x):
''' 'aList' must be sorted (low to high) '''
returnVal=0
for v in aList:
if v<=x:
returnVal+=1
return returnVal/float(len(aList))
def inv_cdf(aList, percentile):
''' 'percentile' is between 0 and 1.
'aList' must be sorted (low to high)
'''
returnVal = 0
for i in xrange(len(aList)):
if cdf(aList, aList[i])>=percentile:
returnVal = aList[i]
break
return returnVal
def conf_interval(data_set, alpha=0.05):
data_set.sort()
low_end = inv_cdf(data_set, alpha)
high_end = inv_cdf(data_set, 1-alpha)
return (low_end, high_end)
from matplotlib import pylab as plt
bootstrapped_samples = get_bootstrapped_datasets(repeated_fivefcv_df['test scores'].values)
avg_vals = [float(sum(l))/len(l) for l in bootstrapped_samples]
conf_10000 = conf_interval(avg_vals)
pd.Series(avg_vals).hist(bins=10, normed=True)
plt.axvspan(conf_10000[0],conf_10000[1],alpha=0.5,color='red')
from sklearn.learning_curve import learning_curve
import numpy as np
fig, ax = plt.subplots(1,1, figsize=(16, 6))
fig.subplots_adjust(left=0.0625, right=0.95, wspace=0.1)
N, train_lc, val_lc = learning_curve(optimized_pipeline.pipeline,
X, y, cv=5,
train_sizes=np.linspace(0.3, 1, 25))
ax.plot(N, np.mean(train_lc, 1), color='blue', label='training score')
ax.plot(N, np.mean(val_lc, 1), color='red', label='validation score')
ax.hlines(np.mean([train_lc[-1], val_lc[-1]]), N[0], N[-1],
color='gray', linestyle='dashed')
ax.set_ylim(0, 1)
ax.set_xlim(N[0], N[-1])
ax.set_xlabel('training size')
ax.set_ylabel('score')
ax.legend(loc='best')
# ax[i].plot(N, np.mean(train_lc, 1), color='blue', label='training score')
# ax[i].plot(N, np.mean(val_lc, 1), color='red', label='validation score')
# ax[i].hlines(np.mean([train_lc[-1], val_lc[-1]]), N[0], N[-1],
# color='gray', linestyle='dashed')
# ax[i].set_ylim(0, 1)
# ax[i].set_xlim(N[0], N[-1])
# ax[i].set_xlabel('training size')
# ax[i].set_ylabel('score')
# ax[i].set_title('degree = {0}'.format(degree), size=14)
# ax[i].legend(loc='best')
train_lc
# Set output feature
output_feature = 'diabetes'
# Get input features
input_features = [x for x in X_interaction.columns if x != output_feature]
# Split into features and responses
X = X_interaction.copy()
y = test_df[output_feature].copy()
reload(ppl)
ppl.OptimizationBundle().get_options()
%%time
estimator = 'knn'
# Initialize pipeline
optimized_pipeline = ppl.PipelineOptimization(estimator)
# Fit data
optimized_pipeline.fit(X,y,random_state=6)
import numpy as np
from sklearn.cross_validation import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
import sklearn.metrics as sklearn_metrics
X_array = X.copy().values
y_array = y.copy().values
param_grid = {
    'estimator__n_neighbors': range(1, 31),
'estimator__weights': ['uniform', 'distance']
}
X_train, X_val, y_train, y_val = \
train_test_split(X_array,y_array,test_size=0.2,random_state=6,stratify=y_array)
from sklearn.model_selection import StratifiedKFold
kfolds_kwargs = dict(
n_splits=10,
shuffle=True,
random_state=6
)
skf = StratifiedKFold(**kfolds_kwargs)
fold_optimizations = {}
for fold_ind, data_inds in enumerate(skf.split(X_train, y_train)):
fold_optimizations[fold_ind] = {}
train_index, test_index = data_inds[0],data_inds[1]
X_train_inner, X_test_inner = X_array[train_index], X_array[test_index]
y_train_inner, y_test_inner = y_array[train_index], y_array[test_index]
pipeline = Pipeline([('estimator',KNeighborsClassifier(n_neighbors=11,weights='distance'))])
pipeline.fit(X_train_inner,y_train_inner)
y_pred_inner = pipeline.predict(X_test_inner)
confusion_matrix = sklearn_metrics.confusion_matrix(y_test_inner, y_pred_inner)
score = confusion_matrix.trace()/float(confusion_matrix.sum())
fold_optimizations[fold_ind]['confusion_matrix'] = confusion_matrix
fold_optimizations[fold_ind]['score'] = confusion_matrix.trace()/float(confusion_matrix.sum())
fold_optimizations[fold_ind]['pipeline'] = pipeline
print np.array([fold_optimizations[fold_ind]['score'] for fold_ind in fold_optimizations]).mean()
y_pred = pipeline.predict(X_val)
test_confusion_matrix = sklearn_metrics.confusion_matrix(y_val, y_pred)
score = test_confusion_matrix.trace()/float(test_confusion_matrix.sum())
print score
# TRAIN: [1 3] TEST: [0 2]
# TRAIN: [0 2] TEST: [1 3]
fold_optimizations
print dir(optimized_pipeline.grid_search.best_estimator_)
dir(folds[0].named_steps['estimator'])
```
# xarray use case: Neural network training
**tl;dr**
1. This notebook is an example of reading from a climate model netCDF file to train a neural network. Neural networks (for use in parameterization research) require random columns of several stacked variables at a time.
2. Experiments in this notebook show:
1. Reading from raw climate model output files is super slow (1s per batch... need speeds on the order of ms)
2. open_mfdataset is half as fast as opening the same dataset with open_dataset
3. Pure h5py is much faster than reading the same dataset using xarray (even using the h5 backend)
3. Currently, I revert to preformatting the dataset (flatten time, lat, lon). This gets the reading speed down to milliseconds per batch.
**Conclusions**
Reading straight from the raw netCDF files (with all dimensions intact) is handy and might be necessary for later applications (using continuous time slices or lat-lon regions for RNNs or CNNs).
However, at the moment this is many orders of magnitude too slow. Preprocessing seems required.
What would be a good way of speeding this up without too extensive post processing?
```
import xarray as xr
import numpy as np
xr.__version__
```
## Load an example dataset
I uploaded a sample dataset here: http://doi.org/10.5281/zenodo.2559313
The files are around 1 GB in size. Let's download them.
NOTE: I have all my data on an SSD
```
# Modify this path!
DATADIR = '/local/S.Rasp/tmp/'
!wget -P $DATADIR https://zenodo.org/record/2559313/files/sample_SPCAM_1.nc
!wget -P $DATADIR https://zenodo.org/record/2559313/files/sample_SPCAM_2.nc
!wget -P $DATADIR https://zenodo.org/record/2559313/files/sample_SPCAM_concat.nc
!ls -lh $DATADIR/sample_SPCAM*
```
The files are typical climate model output files. `sample_SPCAM_1.nc` and `sample_SPCAM_2.nc` are two contiguous output files. `sample_SPCAM_concat.nc` is the concatenated version of the two files.
```
%%time
ds = xr.open_mfdataset(DATADIR + 'sample_SPCAM_1.nc')
ds
```
## Random columns for machine learning parameterizations
For the work on ML parameterizations that a few of us are doing now, we would like to work one column at a time. One simple example would be predicting the temperature and humidity tendencies (TPHYSTND and PHQ) from the temperature and humidity profiles (TAP and QAP).
This means we would like to give the neural network a stacked vector containing the inputs (2 x 30 levels) and ask it to predict the outputs (also 2 x 30 levels).
In NN training, we usually train on a batch of data at a time. Batches typically have a few hundred samples (columns in our case). It is really important that the samples in a batch are not correlated but rather represent a random sample of the entire dataset.
To achieve this we will write a data generator that loads the batches by randomly selecting along the time, lat and lon dimensions.
```
class DataGenerator(object):
"""
Data generator that randomly (if shuffle = True) picks columns from the dataset and returns them in
batches. For each column the input variables and output variables will be stacked.
"""
def __init__(self, fn_or_ds, batch_size=128, input_vars=['TAP', 'QAP'], output_vars=['TPHYSTND', 'PHQ'],
shuffle=True, engine='netcdf4'):
self.ds = xr.open_mfdataset(fn_or_ds, engine=engine) if type(fn_or_ds) is str else fn_or_ds
self.batch_size = batch_size
self.input_vars = input_vars
self.output_vars = output_vars
self.ntime, self.nlat, self.nlon = self.ds.time.size, self.ds.lat.size, self.ds.lon.size
        self.ntot = self.ntime * self.nlat * self.nlon
self.n_batches = self.ntot // batch_size
self.indices = np.arange(self.ntot)
if shuffle:
self.indices = np.random.permutation(self.indices)
def __getitem__(self, index):
time_indices, lat_indices, lon_indices = np.unravel_index(
self.indices[index*self.batch_size:(index+1)*self.batch_size], (self.ntime, self.nlat, self.nlon)
)
X, Y = [], []
for itime, ilat, ilon in zip(time_indices, lat_indices, lon_indices):
X.append(
np.concatenate(
[self.ds[v].isel(time=itime, lat=ilat, lon=ilon).values for v in self.input_vars]
)
)
Y.append(
np.concatenate(
[self.ds[v].isel(time=itime, lat=ilat, lon=ilon).values for v in self.output_vars]
)
)
return np.array(X), np.array(Y)
```
### Multi-file dataset
Let's start by using the split dataset `sample_SPCAM_1.nc` and `sample_SPCAM_2.nc`.
```
gen = DataGenerator(DATADIR + 'sample_SPCAM_[1-2].nc')
# This is how we get one batch of inputs and corresponding outputs
x, y = gen[0]
x.shape, y.shape
# A little test function to check the timing.
def test(g, n):
for i in range(n):
x, y = g[i]
%%time
test(gen, 10)
# does shuffling make a big difference? Compare against shuffle=False
gen = DataGenerator(DATADIR + 'sample_SPCAM_[1-2].nc', shuffle=False)
%time test(gen, 10)
```
So it takes more than one second to read one batch. This is way too slow to train a neural network in a reasonable amount of time. Shuffling doesn't seem to be a huge problem, but even without shuffling I am probably accessing the data in a different order than saved on disc.
Let's check what actually takes that long.
```
%load_ext line_profiler
%lprun -f gen.__getitem__ test(gen, 10)
```
Output:
```
Timer unit: 1e-06 s
Total time: 24.5229 s
File: <ipython-input-57-78b9d254df3b>
Function: __getitem__ at line 18
Line # Hits Time Per Hit % Time Line Contents
==============================================================
18 def __getitem__(self, index):
19 10 17.0 1.7 0.0 time_indices, lat_indices, lon_indices = np.unravel_index(
20 10 267.0 26.7 0.0 self.indices[index*self.batch_size:(index+1)*self.batch_size], (self.ntime, self.nlat, self.nlon)
21 )
22
23 10 10.0 1.0 0.0 X, Y = [], []
24 1290 4642.0 3.6 0.0 for itime, ilat, ilon in zip(time_indices, lat_indices, lon_indices):
25 1280 1399.0 1.1 0.0 X.append(
26 1280 1721.0 1.3 0.0 np.concatenate(
27 1280 12256070.0 9575.1 50.0 [self.ds[v].isel(time=itime, lat=ilat, lon=ilon).values for v in self.input_vars]
28 )
29 )
30 1280 2393.0 1.9 0.0 Y.append(
31 1280 1750.0 1.4 0.0 np.concatenate(
32 1280 12253415.0 9573.0 50.0 [self.ds[v].isel(time=itime, lat=ilat, lon=ilon).values for v in self.output_vars]
33 )
34 )
35
36 10 1218.0 121.8 0.0 return np.array(X), np.array(Y)
```
### Using the concatenated dataset
Let's see whether it makes a difference to use the pre-concatenated dataset.
```
ds = xr.open_dataset(f'{DATADIR}sample_SPCAM_concat.nc')
gen = DataGenerator(ds, shuffle=True)
%time test(gen, 10)
ds = xr.open_mfdataset(f'{DATADIR}sample_SPCAM_concat.nc')
gen = DataGenerator(ds, shuffle=True)
%time test(gen, 10)
```
So yes, it approximately halves the time but only if the single dataset is NOT opened with `open_mfdataset`.
### With h5py engine
Let's see whether using the h5py backend makes a difference
```
import h5netcdf
ds.close()
ds = xr.open_dataset(f'{DATADIR}sample_SPCAM_concat.nc', engine='h5netcdf')
gen = DataGenerator(ds)
%%time
test(gen, 10)
```
It doesn't seem to speed things up.
```
ds.close()
```
### Using plain h5py
Let's write a version of the data generator that uses plain h5py for data loading.
```
import h5py

class DataGeneratorH5(object):
def __init__(self, fn, batch_size=128, input_vars=['TAP', 'QAP'], output_vars=['TPHYSTND', 'PHQ'], shuffle=True):
self.ds = xr.open_dataset(fn)
self.batch_size = batch_size
self.input_vars = input_vars
self.output_vars = output_vars
self.ntime, self.nlat, self.nlon = self.ds.time.size, self.ds.lat.size, self.ds.lon.size
        self.ntot = self.ntime * self.nlat * self.nlon
self.n_batches = self.ntot // batch_size
self.indices = np.arange(self.ntot)
if shuffle:
self.indices = np.random.permutation(self.indices)
# Close xarray dataset and open h5py object
self.ds.close()
self.ds = h5py.File(fn, 'r')
def __getitem__(self, index):
time_indices, lat_indices, lon_indices = np.unravel_index(
self.indices[index*self.batch_size:(index+1)*self.batch_size], (self.ntime, self.nlat, self.nlon)
)
X, Y = [], []
for itime, ilat, ilon in zip(time_indices, lat_indices, lon_indices):
X.append(
np.concatenate(
[self.ds[v][itime, :, ilat, ilon] for v in self.input_vars]
)
)
Y.append(
np.concatenate(
[self.ds[v][itime, :, ilat, ilon] for v in self.output_vars]
)
)
return np.array(X), np.array(Y)
gen = DataGeneratorH5(f'{DATADIR}sample_SPCAM_concat.nc')
%%time
test(gen, 10)
gen.ds.close()
```
So this is significantly faster than xarray.
## Use in a simple neural network
How would we actually use this data generator for network training...
Note that this neural network will not actually learn much because we didn't normalize the input data. But we only care about computational performance here, right?
```
import tensorflow as tf
from tensorflow.keras.layers import *
from tensorflow.keras.models import Sequential
tf.keras.__version__
model = Sequential([
Dense(128, input_shape=(60,), activation='relu'),
Dense(60),
])
model.summary()
model.compile('adam', 'mse')
# Load the xarray version using the concatenated dataset
ds = xr.open_dataset(f'{DATADIR}sample_SPCAM_concat.nc')
gen = DataGenerator(ds, shuffle=True)
model.fit_generator(iter(gen), steps_per_epoch=gen.n_batches)
```
So as you can see, it would take around 1 hour to go through one epoch (i.e. the entire dataset once). This is crazy slow since we only used 2 days of data. The full dataset contains a year of data...
## Pre-processing the dataset
What I have resorted to in order to solve this issue is to pre-stack the data, pre-shuffle it and save it to disk in a convenient layout.
These files contain exactly the same information for the required input (features) and output (targets) variables.
The files only have two dimensions: sample, which is the shuffled, flattened combination of the time, lat and lon dimensions, and lev, which is the stacked vertical coordinate.
The preprocessing for these two files only takes a few seconds, but for an entire year of data the preprocessing alone can take around an hour.
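The following is a minimal sketch of what such a preprocessing step could look like with xarray (illustrative only; the exact script that produced the files below is not shown):
```
import numpy as np
import xarray as xr

ds = xr.open_dataset('sample_SPCAM_concat.nc')
# Stack the two input variables along the level axis (2 x 30 levels -> 60 features)
features = xr.concat([ds['TAP'], ds['QAP']], dim='lev')
# Flatten (time, lat, lon) into a single sample dimension
features = features.stack(sample=('time', 'lat', 'lon')).transpose('sample', 'lev')
# Pre-shuffle once, so that contiguous reads later already yield random batches
perm = np.random.permutation(features.sample.size)
features = features.isel(sample=perm).reset_index('sample', drop=True)
features.rename('features').to_netcdf('preproc_features.nc')
```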
```
!wget -P $DATADIR https://zenodo.org/record/2559313/files/preproc_features.nc
!wget -P $DATADIR https://zenodo.org/record/2559313/files/preproc_targets.nc
!ls -lh $DATADIR/preproc*
ds = xr.open_dataset(f'{DATADIR}preproc_features.nc')
ds
# Write a new data generator
class DataGeneratorPreproc(object):
"""
Data generator that randomly (if shuffle = True) picks columns from the dataset and returns them in
batches. For each column the input variables and output variables will be stacked.
"""
def __init__(self, feature_fn, target_fn, batch_size=128, shuffle=True, engine='netcdf4'):
self.feature_ds = xr.open_dataset(feature_fn, engine=engine)
self.target_ds = xr.open_dataset(target_fn, engine=engine)
self.batch_size = batch_size
self.ntot = self.feature_ds.sample.size
self.n_batches = self.ntot // batch_size
self.indices = np.arange(self.ntot)
if shuffle:
self.indices = np.random.permutation(self.indices)
def __getitem__(self, index):
batch_indices = self.indices[index*self.batch_size:(index+1)*self.batch_size]
X = self.feature_ds.features.isel(sample=batch_indices)
Y = self.target_ds.targets.isel(sample=batch_indices)
return X, Y
gen = DataGeneratorPreproc(f'{DATADIR}preproc_features.nc', f'{DATADIR}preproc_targets.nc')
x, y = gen[0]
x.shape, y.shape
%%time
test(gen, 10)
gen = DataGeneratorPreproc(f'{DATADIR}preproc_features.nc', f'{DATADIR}preproc_targets.nc', shuffle=False)
%%time
test(gen, 10)
gen.feature_ds.close(); gen.target_ds.close()
gen = DataGeneratorPreproc(f'{DATADIR}preproc_features.nc', f'{DATADIR}preproc_targets.nc', engine='h5netcdf')
%%time
test(gen, 10)
```
So these are the sort of times that are required for training a neural network.
### Pure h5py version
```
class DataGeneratorPreprocH5(object):
"""
Data generator that randomly (if shuffle = True) picks columns from the dataset and returns them in
batches. For each column the input variables and output variables will be stacked.
"""
def __init__(self, feature_fn, target_fn, batch_size=128):
self.feature_ds = xr.open_dataset(feature_fn)
self.target_ds = xr.open_dataset(target_fn)
self.batch_size = batch_size
self.ntot = self.feature_ds.sample.size
self.n_batches = self.ntot // batch_size
# Close xarray dataset and open h5py object
self.feature_ds.close()
self.feature_ds = h5py.File(feature_fn, 'r')
self.target_ds.close()
self.target_ds = h5py.File(target_fn, 'r')
def __getitem__(self, index):
X = self.feature_ds['features'][index*self.batch_size:(index+1)*self.batch_size, :]
Y = self.target_ds['targets'][index*self.batch_size:(index+1)*self.batch_size, :]
return X, Y
gen.feature_ds.close(); gen.target_ds.close()
gen = DataGeneratorPreprocH5(f'{DATADIR}preproc_features.nc', f'{DATADIR}preproc_targets.nc')
%%time
test(gen, 10)
```
So again, the pure h5py version is an order of magnitude faster than the xarray version.
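As a quick sanity check (a sketch, assuming the Keras model defined above is still in scope), the fast generator can be plugged into training exactly like the slow one was earlier:
```
gen = DataGeneratorPreprocH5(f'{DATADIR}preproc_features.nc', f'{DATADIR}preproc_targets.nc')
model.fit_generator(iter(gen), steps_per_epoch=gen.n_batches)
```
With batch times in the millisecond range, one epoch over this two-day sample should now take on the order of seconds to minutes instead of an hour.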
## End
# Multi Armed Bandit Problem
The multi-armed bandit (MAB) problem is one of the classical problems in reinforcement learning. A multi-armed bandit is actually a slot machine, a gambling game played in a casino where you pull the arm (lever) and get a payout (reward) based on some randomly generated probability distribution. A single slot machine is called a one-armed bandit; when there are multiple slot machines, they are called multi-armed bandits or k-armed bandits. Multi-armed bandits are shown below,

As each slot machine gives us a reward from its own probability distribution, our goal is to find out which slot machine will give us the maximum cumulative reward over time. So at each time step t, the agent performs an action, i.e. pulls an arm of the slot machine, and receives a reward; the goal of our agent is to maximize the cumulative reward.
We define the value of an arm Q(a) as the average reward received by pulling the arm,
$$Q(a) = \frac{\text{sum of rewards received from the arm}}{\text{total number of times the arm was pulled}}$$
For example, if an arm has been pulled 4 times yielding rewards 1, 0, 1 and 1, then Q(a) = 3/4 = 0.75.
So the optimal arm is the one which gives us the maximum cumulative reward, i.e.
$$Q(a^*) = \max_{a} Q(a)$$
The goal of our agent is to find the optimal arm and also to minimize the regret, which can be defined as the cost of not knowing which of the k arms is optimal. Now how do we find the best arm? Should we keep exploring all the arms, or exploit the arm which has already given us the maximum cumulative reward? This is the exploration-exploitation dilemma. We will now see how to deal with this dilemma using the following exploration strategies:
1. Epsilon-greedy policy
2. Softmax exploration
3. Upper Confidence bound algorithm
4. Thompson sampling technique
First let us import the libraries,
```
import gym_bandits
import gym
import numpy as np
import math
import random
env = gym.make("BanditTenArmedGaussian-v0")
```
### Epsilon-Greedy Policy
With probability epsilon we pick a random arm (exploration), otherwise we pick the arm with the highest Q value (exploitation):
```
def epsilon_greedy(epsilon):
rand = np.random.random()
if rand < epsilon:
action = env.action_space.sample()
else:
action = np.argmax(Q)
return action
```
Now, let us initialize all the necessary variables
```
# number of rounds (iterations)
num_rounds = 20000
# Count of number of times an arm was pulled
count = np.zeros(10)
# Sum of rewards of each arm
sum_rewards = np.zeros(10)
# Q value which is the average reward
Q = np.zeros(10)
```
Start pulling the arm!!!!!!!!
```
for i in range(num_rounds):
# Select the arm using epsilon greedy
arm = epsilon_greedy(0.5)
# Get the reward
observation, reward, done, info = env.step(arm)
# update the count of that arm
count[arm] += 1
# Sum the rewards obtained from the arm
sum_rewards[arm]+=reward
# calculate Q value which is the average rewards of the arm
Q[arm] = sum_rewards[arm]/count[arm]
print( 'The optimal arm is {}'.format(np.argmax(Q)))
```
### Softmax Exploration
Instead of a hard greedy choice, softmax exploration selects each arm with a probability proportional to $e^{Q(a)/\tau}$, where the temperature $\tau$ controls the amount of exploration:
```
def softmax(tau):
total = sum([math.exp(val/tau) for val in Q])
probs = [math.exp(val/tau)/total for val in Q]
threshold = random.random()
cumulative_prob = 0.0
for i in range(len(probs)):
cumulative_prob += probs[i]
if (cumulative_prob > threshold):
return i
return np.argmax(probs)
# number of rounds (iterations)
num_rounds = 20000
# Count of number of times an arm was pulled
count = np.zeros(10)
# Sum of rewards of each arm
sum_rewards = np.zeros(10)
# Q value which is the average reward
Q = np.zeros(10)
for i in range(num_rounds):
# Select the arm using softmax
arm = softmax(0.5)
# Get the reward
observation, reward, done, info = env.step(arm)
# update the count of that arm
count[arm] += 1
# Sum the rewards obtained from the arm
sum_rewards[arm]+=reward
# calculate Q value which is the average rewards of the arm
Q[arm] = sum_rewards[arm]/count[arm]
print( 'The optimal arm is {}'.format(np.argmax(Q)))
```
### Upper Confidence Bound
UCB follows the principle of optimism in the face of uncertainty: after playing every arm once, it selects the arm that maximizes
$$Q(a) + \sqrt{\frac{2 \log t}{N(a)}}$$
where $t$ is the total number of pulls so far and $N(a)$ is the number of times arm $a$ was pulled:
```
def UCB(iters):
    ucb = np.zeros(10)
    # explore each of the 10 arms once during the first 10 rounds
    if iters < 10:
        return iters
    else:
        for arm in range(10):
            # calculate the upper bound
            upper_bound = math.sqrt((2*math.log(sum(count))) / count[arm])
            # add the upper bound to the Q value
            ucb[arm] = Q[arm] + upper_bound
        # return the arm which has the maximum value
        return (np.argmax(ucb))
# number of rounds (iterations)
num_rounds = 20000
# Count of number of times an arm was pulled
count = np.zeros(10)
# Sum of rewards of each arm
sum_rewards = np.zeros(10)
# Q value which is the average reward
Q = np.zeros(10)
for i in range(num_rounds):
# Select the arm using UCB
arm = UCB(i)
# Get the reward
observation, reward, done, info = env.step(arm)
# update the count of that arm
count[arm] += 1
# Sum the rewards obtained from the arm
sum_rewards[arm]+=reward
# calculate Q value which is the average rewards of the arm
Q[arm] = sum_rewards[arm]/count[arm]
print( 'The optimal arm is {}'.format(np.argmax(Q)))
```
### Thompson Sampling
Thompson sampling maintains a Beta posterior over each arm's probability of giving a positive reward, draws one sample from each arm's posterior, and pulls the arm with the highest sample:
```
def thompson_sampling(alpha,beta):
samples = [np.random.beta(alpha[i]+1,beta[i]+1) for i in range(10)]
return np.argmax(samples)
# number of rounds (iterations)
num_rounds = 20000
# Count of number of times an arm was pulled
count = np.zeros(10)
# Sum of rewards of each arm
sum_rewards = np.zeros(10)
# Q value which is the average reward
Q = np.zeros(10)
# initialize alpha and beta values
alpha = np.ones(10)
beta = np.ones(10)
for i in range(num_rounds):
# Select the arm using thompson sampling
arm = thompson_sampling(alpha,beta)
# Get the reward
observation, reward, done, info = env.step(arm)
# update the count of that arm
count[arm] += 1
# Sum the rewards obtained from the arm
sum_rewards[arm]+=reward
# calculate Q value which is the average rewards of the arm
Q[arm] = sum_rewards[arm]/count[arm]
# If it is a positive reward increment alpha
if reward >0:
alpha[arm] += 1
# If it is a negative reward increment beta
else:
beta[arm] += 1
print( 'The optimal arm is {}'.format(np.argmax(Q)))
```
# Sensor invariance of signal bouts
We assume that the bouts in the signal are caused by encounters of a plume filament with high gas concentration.
The aim of this figure is to show that the sensor bouts are sensor invariant, that is, encountering them is (by and large) independent of the sensor used. As we will show, it's particularly the bout onsets that allow to identify corresponding bouts of gas concentration across all sensors.
### Preliminaries
```
import sys
import os
#add path to the directory containing the plumy module to PYTHONPATH
from matplotlib import cm
plumy_path = os.path.abspath(os.path.join(os.path.pardir, os.path.pardir))
sys.path.append(os.path.join(plumy_path))
toplevel_path = os.path.abspath(os.path.join(os.path.pardir, os.path.pardir, os.path.pardir))
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
from tqdm.auto import tqdm
from plumy.utils import DataSelector
from plumy.utils import HDFDataSelector
from plumy.utils import ZipFileDataSelector
from plumy.bouts import *
import plumy.bouts
import importlib
importlib.reload(plumy.bouts)
plt.rc('text', usetex=False)
mpl.rcParams['savefig.dpi'] = 150 # for print, go to 600
from __future__ import unicode_literals
rem_dupes = True # Drop duplicate timestamps
resample = True # Signal resampling active
path = os.path.join(toplevel_path,'WTD_upload') # path to dataset
ds = DataSelector(path, drop_duplicates = rem_dupes, resample = resample, verbose = False, use_HDFcache=True)
path = os.path.join(toplevel_path, 'WTD_upload.zip')
dsz = ZipFileDataSelector(path, drop_duplicates = rem_dupes, resample = resample, verbose=False, use_HDFcache=True)
ds = dsz
path = os.path.join(toplevel_path, 'WTD_upload.zip_HDFcache')
dsh = HDFDataSelector(path, drop_duplicates = rem_dupes, resample = resample, verbose=False)
ds = dsh
sensornames = ["TGS2611", # Sensor 1
"TGS2612", # Sensor 2
"TGS2610", # Sensor 3
"TGS2602", # Sensor 4
"TGS2600a", # Sensor 5
"TGS2600b", # Sensor 6
"TGS2620a", # Sensor 7
"TGS2620b"] # Sensor 8
```
### Load the data
```
gas = 1
voltage = 5
speed = 1
trial = 'all'
print("using Gas: {}, Voltage: {}, Fan Speed: {}, Trial #{}.".format(
DataSelector.GasNames[gas],
DataSelector.SensorVoltages[voltage], DataSelector.AltFanSpeeds[speed], trial))
data = []
for dist in tqdm(range(1,7)):
data.append(ds.select(gas,dist,voltage,speed))
sensornames_bynumber = ['Sensor{}'.format(i) for i in range(1,9) ]
distance = 5 # middle row because bouts overlap less here (on the first board it's mayhem)
ebcs = []
halflife = 40.
smooth_std = 30.
for i,sn in enumerate(sensornames_bynumber):
ebcss = make_boutcounters(data, sensorname=sn, boardname='Board5', halflife=halflife, smooth_std=smooth_std,
ampthresh=None, use_data_baseline=True)
ebcs.append(ebcss)
```
### Analysis
#### Artifacts on sensor 1 (TGS 2611)
Unfortunately, the signals from sensor 1 on board 5 contain artefacts that we were not able to correct. Below we show one example, but the artefacts actually exist in all recordings from that sensor on that board that we looked at. Thus, we exclude sensor 1 from further analysis.
```
sensor = 0
distance = 0
trial = 19
e = ebcs[sensor][distance][trial]
s = e.signal
timeax = np.arange(0, len(s)*0.01, 0.01)
f = plt.figure(figsize=(4,2))
ax = f.gca()
ax.plot(timeax, s)
ax.set_xlim(0,timeax[-1])
ax.set_xlabel('time [s]')
ax.set_ylabel('response [a.u.]')
ax.set_frame_on(False)
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
```
#### No response on sensor 2 (TGS 2612)
Sensor 2 shows only a very small (if any) response to the stimulus that we analyse here. See the analysis below - the response to the gas should kick in around t=60s. Hence, we do not show sensor 2 in the actual figure for the paper.
```
sensor = 1
distance = 0
trial = 19
e = ebcs[sensor][distance][trial]
s = e.signal
timeax = np.arange(0, len(s)*0.01, 0.01)
f = plt.figure(figsize=(4,2))
ax = f.gca()
s = []
for i in range(20): # loop over trials
e = ebcs[sensor][distance][i]
ax.plot(timeax, e.signal)
ax.set_xlim(0,timeax[-1])
ax.set_xlabel('time [s]')
ax.set_ylabel('response [a.u.]')
ax.set_title("sensor 2 (TGS 2612), all trials")
ax.set_frame_on(False)
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
```
### Compare bout occurrence across all sensors
```
trial = 19
distance = 3
f = plt.figure(figsize=(8,4))
gs = mpl.gridspec.GridSpec(6,2, wspace=0.5, hspace=0.4)
ax = f.add_subplot(gs[:,0])
yticks = []
maxy = 800
for i in range(2,8): #sensor1 artifacts, sensor2 no response
signal = ebcs[i][distance][trial].signal
signal = signal.rolling(300, win_type='gaussian').mean(std=smooth_std)
signal = signal.dropna()
s = signal.values - signal[signal.index[0]]
if i == 3: #sensor 4, scale down by factor 10 to get approx. same amplitude
s = s / 10.
s = s + (i-2)* 100. + 30
ax.plot(signal.index, s, color='k')
yticks.append(s[0])
#panel decoration
ax.set_ylim(0,maxy)
ax.set_xlim(0, signal.index[-1])
ax.set_frame_on(False)
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.set_yticks(yticks)
ax.set_yticklabels([sensornames[si] for si in range(2,8)])
#ax.set_yticklabels(["Sensor {}".format(si) for si in xrange(3,9)])
ax.set_xticklabels(["{:d}".format(int(xtl/1000)) for xtl in ax.get_xticks()])
ax.set_xlabel("time [s]")
#scalebar
ax.plot([20,20], [670,770], color='k', lw=2)
ax.text(8000,720, "∆V 100 mV\n(TGS2610: 1000 mV)", fontsize=7)
#bouts
ax = f.add_subplot(gs[:,1])
yticks = []
maxy = 800
for i in range(2,8): #sensor1 artifacts, sensor2 no response
offset = (i-2) + 0.1
if i == 3:
scale = 1.
else:
scale=10.
line = plumy.bouts.plot_bout(ebcs[i][distance][trial], ax, offset, scale)
data = line[0].get_data()[1]
yticks.append(data[0])
#decorate panel
ax.set_yticks(yticks)
ax.set_yticklabels([sensornames[si] for si in range(2,8)])
#ax.set_yticklabels(["Sensor {}".format(si) for si in xrange(3,9)])
ax.set_xlim(0, len(data)/100)
ax.set_ylim(-1,7)
ax.set_xlabel('time [s]')
#add scalebar
ax.plot([20,20], [6.5,7], color='k', lw=1)
ax.text(30,6.5, "1 a.u.\n(TGS2602: 0.1 a.u.)", fontsize=7)
f.text(0.05,0.87,"A)", fontsize=12, weight='bold')
f.text(0.5,0.87,"B)", fontsize=12, weight='bold')
f.savefig('Figures/Fig. 6 - sensor invariance.png',dpi=600)
```
Sensor 2 is not shown because it doesn't show any discernible response to the stimulus.
The response of Sensor 3 shows the most bouts; it is probably the most sensitive to the signal. Sensors 7 and 8 are most likely of the same type, their responses being very similar (but not identical). Sensors 5 and 6 also show very similar responses, with Sensor 6 having a slightly higher amount of noise.
### Bouts only, no other signal
```
f = plt.figure(figsize=(8,3))
ax = f.add_subplot(111)
color_iter = iter(cm.rainbow(np.linspace(0,1,6)))
for i in range(2,8):
ebc = ebcs[i][distance][trial]
col = next(color_iter)
s = ebc.smooth_time_deriv_ewma()
p = ebc.filtered_posneg
for j in p.T.astype(int):
lp = ax.plot(np.arange(j[0], j[1]), (s[j[0]:j[1]] - s[j[0]]), color=col)
lp[0].set_label(sensornames[i])
ax.set_frame_on(False)
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
plt.legend(frameon=False, fontsize=8)
```
Not so intuitive, because everything overlaps. Normalising for maximum height does not really help; it rather makes things even harder to survey. Therefore, pick sensors with clean responses and compare these.
e.g., try sensors 3 + 4 and 7 + 8: sensors 3 + 4 are of different types but show nice responses, while sensors 7 + 8 are putatively the same sensor type. The latter is also true for sensors 5 + 6, but their response is noisy and the overlap is not as good.
```
f = plt.figure(figsize=(6,3.5))
gs = mpl.gridspec.GridSpec(2,1, hspace=0.4)
pairs = [[6,7], [2,3]]
markers = ['x','+']
lines = ['-','-']
color_iters = [iter(cmm([0.2,0.8], alpha=0.9)) for cmm in [cm.RdBu, cm.PuOr]]
yticks = [[0,0.5,1],[0.0, 0.5, 1.0]]
for i,pair in enumerate(pairs):
ax = f.add_subplot(gs[i])
for pi,pa in enumerate(pair):
ebc = ebcs[pa][distance][trial]
s = ebc.smooth_time_deriv_ewma()
p = ebc.filtered_posneg
color = next(color_iters[i])
#normalise by max height
max_height = 0
for j in p.T.astype(int):
height = s[j[1]] - s[j[0]]
if height > max_height:
max_height = height
print(max_height)
for j in p.T.astype(int):
lp = ax.plot(np.arange(j[0], j[1])/100., (s[j[0]:j[1]] - s[j[0]])/max_height, linestyle=lines[pi], linewidth=.6, color=color)
lpl = ax.plot((j[0]/100., j[0]/100.), (0,1), linestyle='-', linewidth=.2, color=color)
lpm = ax.plot(j[0]/100., 1, linestyle='none', marker=markers[pi], markersize=4, color=color)
lp[0].set_label(sensornames[pa])
# ax.set_frame_on(True)
ax.set_frame_on(False)
# for sp in ["top", "bottom", "right"]:
# ax.spines[sp].set_visible(False)
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.set_xlim(0,320)
ax.set_ylim(-0.01,1.05)
ax.set_yticks(yticks[i])
lg = plt.legend(loc="upper right", frameon=False, fontsize=8)
lg.set_frame_on(False)
ax.set_xticks(range(0,251,50))
ax.set_xticklabels([])
ax.set_xlabel('time [s]')
ax.set_xticklabels(range(0,251,50))
ax.set_ylabel('bout amp. [a.u.]')
f.text(0.015,0.89, "A)", fontweight='bold')
f.text(0.015,0.44, "B)", fontweight='bold')
f.savefig("Figures/Fig. 7 - Bout coincidence.png", dpi=600)
```
### Test for event correlation
In order to quantify the similarity of bouts across sensors we adopt an approach first described by Schreiber et al (2003) that aims at measuring the similarity of event series. This approach is based on convolving a time series of discrete events (in the original study, neuronal action potentials) with a gaussian kernel, thus creating a continuous time series. The similarity of two time series is then quantified by the pearson correlation of these continuous series.
Here, we apply this measure of event series to the bout onsets as discrete events. Fig. S2 depicts the bout onset times for the signals in Fig. 5 (Acetaldehyde, source distance 1.18 m, trial 19). We convolved these time series with gaussian kernels of width $\sigma=2s$. We then computed the pairwise correlation coefficients between the generated continuous time series. This analysis was done for all 20 trials that were present in the data set for Acetaldehyde, measured in 1.18 m distance from the source. The average correlation between all time series was $c=0.38$ ± $0.21$ (standard deviation).
To check against a random background, we scrambled the trials, i.e., computed correlations between time series that were randomly chosen from different trials. Here we obtained $c=0.17$ ± $0.15$. Fig. S3 depicts the histograms of pairwise correlations obtained in matched and randomised trials. A 2-sample Kolmogorov-Smirnov test confirmed that the correlations observed in pairs of matched trials are significantly different from randomised trials ($p = 2.1 \times 10^{-29}$).
#### References
Schreiber, S., Fellous, J. M., Whitmer, D., Tiesinga, P., and Sejnowski, T. J. (2003). A new correlation-based measure of spike timing reliability. Neurocomputing 52-54, 925–931. doi:10.1016/S0925-2312(02)00838-X.
```
#windowsize is 10 sigma
#padding 5 sigma at the end to avoid truncating late events
def convolve_with_gaussian(train, sigma, rng=None):
    if rng is None:
        rng = [0, 26000 + 5 * sigma]
    # one histogram bin per sample; `rng` avoids shadowing the builtin `range`
    hist, bins = np.histogram(train, bins=rng[1] - rng[0], range=rng)
ts = pd.Series(hist)
signal = ts.rolling(10*sigma, win_type='gaussian', center=True).mean(std=sigma)
signal = signal.dropna().values
retval = np.concatenate((np.zeros(5*sigma), signal)) #pad 5 sigma at the start that have been dropped as NAN
return retval
distance = 3
trial = 19
f = plt.figure(figsize=(5,4))
gs = mpl.gridspec.GridSpec(1,1, left=0.3)
ax = f.add_subplot(gs[0])
sigs = []
sigma = 200
for i in range(2,8):
bouts = ebcs[i][distance][trial].filtered_posneg
train = bouts[0]
sig_smooth = convolve_with_gaussian(train, sigma)
sigs.append(sig_smooth)
for ons in train/100.:
ax.plot([ons,ons], [i-0.25,i+0.25], color='k')
xaxis = np.arange(len(sig_smooth))/100.
ax.plot(xaxis, i-0.5+sig_smooth/max(sig_smooth), color=[0.5,0.5,0.5,0.5])
sigs = np.array(sigs)
ax.set_frame_on(False)
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.set_yticks(range(2,8))
ax.set_yticklabels(sensornames[2:8])
ax.set_xlabel('time [s]')
ax.set_xlim(0,260)
f.savefig('Figures/Fig. S4 - Event onsets.png', dpi=600)
sigma = 200
corrs_pertrial = []
for trial in range(20):
sigs = []
for i in range(2,8):
bouts = ebcs[i][distance][trial].filtered_posneg
train = bouts[0]
sigs.append(convolve_with_gaussian(train, sigma))
sigs = np.array(sigs)
corr = np.corrcoef(sigs)
all_corrs = []
for i in range(corr.shape[0]):
for j in range(i+1, corr.shape[1]):
all_corrs.append(corr[i,j])
corrs_pertrial.extend(all_corrs)
corrs_random = []
for trial in range(20):
trialperm = np.random.permutation(20)
sigs = []
for i in range(2,8):
bouts = ebcs[i][distance][trialperm[i]].filtered_posneg
train = bouts[0]
sigs.append(convolve_with_gaussian(train, sigma))
sigs = np.array(sigs)
corr = np.corrcoef(sigs)
all_corrs = []
for i in range(corr.shape[0]):
for j in range(i+1, corr.shape[1]):
all_corrs.append(corr[i,j])
corrs_random.extend(all_corrs)
f = plt.figure(figsize=(5,2.8))
gs = mpl.gridspec.GridSpec(1,1, bottom=0.2)
ax = f.add_subplot(gs[0])
bins = np.linspace(-1,1,30)
ax.plot(bins, np.histogram(corrs_pertrial, bins=30, range=(-1,1))[0], label='matched trials', color='k',zorder=1)
ax.plot(bins, np.histogram(corrs_random, bins=30, range=(-1,1))[0], label='random trials', color='gray', zorder=0, ls=":")
plt.legend(frameon=False, loc="upper left", fontsize=8)
ax.set_frame_on(False)
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.set_ylabel("number of pairs")
ax.set_xlabel("correlation")
ax.set_ylim(-3, ax.get_ylim()[1])
ax.set_xlim(-.5, 1.)
print(u"matched trials: corr = {:.2f} ± {:.2f}".format(np.mean(corrs_pertrial), np.std(corrs_pertrial)))
print(u"random trials: corr = {:.2f} ± {:.2f}".format(np.mean(corrs_random), np.std(corrs_random)))
import scipy.stats as ss
p = ss.ks_2samp(corrs_pertrial, corrs_random)
print("Kolmogorov-Smirnov 2 sample test: p = {:.2g}".format(p.pvalue))
f.savefig('Figures/Fig. S5 - Event correlation statistics.png', dpi=600)
```
```
import codecs
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
```
Load the data in TensorFlow.
```
root = "../"
training_data_folder = '%straining_data/web-radio/output/rec' % root
embDir = '%sembeddings' % root
what = 'artist'
uri_file = '%s/%s.emb.u' % (embDir, what)
vector_file = '%s/%s.emb.v' % (embDir, what)
# header_file = '%s/%s.emb.h' % (embDir, what)
training_file = '%s/%s.dat' % (training_data_folder, what)
vectors = np.array([line.strip().split(' ') for line in codecs.open(vector_file, 'r', 'utf-8')])
# heads = np.array([line.strip() for line in codecs.open(header_file, 'r', 'utf-8')])
uris = np.array([line.strip() for line in codecs.open(uri_file, 'r', 'utf-8')])
train_array = np.array([line.strip().split(' ') for line in codecs.open(training_file, 'r', 'utf-8')])
pd.DataFrame(train_array, columns=['seed', 'target', 'score']).head()
```
Data pre-processing: I want to substitute the seed and target with their embeddings
```
def get_embs(x):
    # map a URI to its embedding; unknown URIs get a flag vector of -2s
    v = vectors[np.argwhere(uris == x)]
    if v.size == 0:
        result = -2. * np.ones(vectors[0].size)
    else:
        result = v[0][0]
    return result.astype('float32')

col1 = np.array([get_embs(xi) for xi in train_array[:, 0]])
col2 = np.array([get_embs(xi) for xi in train_array[:, 1]])
# append three constant features to each embedding
col1 = np.concatenate((col1, [12., 45., 73.] * np.ones((train_array.shape[0], 3))), axis=1)
col2 = np.concatenate((col2, [12., 45., 73.] * np.ones((train_array.shape[0], 3))), axis=1)
col3 = np.array(train_array[:, 2]).astype('float32')
col3 = col3.reshape((col3.size, 1))
training_vector = np.concatenate((col1, col2, col3), axis=1)
training_vector.shape
```
Split test and train
```
train, test = train_test_split(training_vector, train_size=0.7)
train_vector = train[:, :-1]
train_label = train[:, -1]
train_label = train_label.reshape((len(train_label), 1))
test_vector = test[:, :-1]
test_label = test[:, -1]
test_label = test_label.reshape((len(test_label), 1))
print('Train')
print(train_vector.shape)
print(train_label.shape)
print('Test')
print(test_vector.shape)
print(test_label.shape)
# Parameters
learning_rate = 0.1
num_steps = 1000
batch_size = 64
display_step = 100
# Network Parameters
n_hidden_1 = 256 # 1st layer number of neurons
n_hidden_2 = 256 # 2nd layer number of neurons
num_input = train_vector[0].size
num_output = int(num_input / 2)
num_output_wrap = train_label[0].size
# tf Graph input
X = tf.placeholder(tf.float32, [None, num_input], name="X")
Y = tf.placeholder(tf.float32, [None, num_output_wrap], name="Y")
```
Neural network
```
# Create model
def neural_net(x):
with tf.name_scope('hidden_1') as scope:
# Hidden fully connected layer with 256 neurons
w1 = tf.Variable(tf.random_normal([num_input, n_hidden_1]), name='w')
b1 = tf.Variable(tf.random_normal([n_hidden_1]), name='b')
layer_1 = tf.add(tf.matmul(x, w1), b1, name='o')
with tf.name_scope('hidden_2') as scope:
# Hidden fully connected layer with 256 neurons
w2 = tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2]), name='w')
b2 = tf.Variable(tf.random_normal([n_hidden_2]), name='b')
layer_2 = tf.add(tf.matmul(layer_1, w2), b2, name='o')
with tf.name_scope('out_layer') as scope:
# Output fully connected layer with a neuron for each class
wo = tf.Variable(tf.random_normal([n_hidden_2, num_output]), name='w')
bo = tf.Variable(tf.random_normal([num_output]), name='b')
out_layer = tf.add(tf.matmul(layer_2, wo), bo, name="o")
with tf.name_scope('u_norm') as scope:
row_sum = tf.reduce_sum(out_layer, axis=1, keepdims=True)
return tf.divide(out_layer, row_sum)
def weighted_l2(a, b, w):
with tf.name_scope('weighted_l2') as scope:
# https://stackoverflow.com/a/8861999/1218213
q = tf.subtract(a, b, name="q")
# return np.sqrt((w * q * q).sum())
pow_q = tf.cast(tf.pow(q, 2), tf.float32, name="q-power")
return tf.reduce_sum(tf.multiply(w, pow_q), axis=1, name="o", keepdims=True)
def compute_penalty(expected, taken, total):
with tf.name_scope('penalty') as scope:
penalty = tf.divide(tf.subtract(expected, taken), total)
return tf.cast(penalty, tf.float32)
def neural_net_wrap(x, previous_out):
with tf.name_scope('nn_wrapper') as scope:
lt = previous_out.shape.as_list()[0] # vertical size of the tensor
lh = previous_out[0].shape.as_list()[0] # horizontal size of the tensor
seed, target = tf.split(x, [lh, lh], axis=1)
bs = tf.equal(seed, -2.)
bt = tf.equal(target, -2.)
_ones = tf.ones_like(previous_out, tf.float32)
max_distance = weighted_l2(_ones, _ones * -1., previous_out)
bad_mask = tf.logical_or(bs, bt)
good_mask = tf.logical_not(bad_mask)
bs_count = tf.count_nonzero(tf.logical_not(bs), axis=1, keepdims=True)
good_count = tf.count_nonzero(good_mask, axis=1, keepdims=True)
_zeros = tf.zeros_like(previous_out, tf.float32)
_seed = tf.where(good_mask, seed, _zeros)
_target = tf.where(good_mask, target, _zeros)
# distance
d = weighted_l2(_seed, _target, previous_out)
# how much info I am not finding
penalty = compute_penalty(bs_count, good_count, lh)
multiplier = tf.subtract(1., penalty)
# score
s = tf.divide(tf.subtract(max_distance, d), max_distance)
return tf.multiply(s, multiplier)
# Construct model
intermediate = neural_net(X)
logits = neural_net_wrap(X, intermediate)
logits.shape
# Define loss and optimizer
# loss_op = MSE
loss_op = tf.reduce_mean(tf.square(tf.subtract(logits, Y)))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)
# Evaluate model (with test logits, for dropout to be disabled)
correct_pred = tf.less(tf.subtract(logits, Y), 0.1)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()
def next_batch(num, data, labels):
"""
Return a total of `num` random samples and labels.
"""
idx = np.arange(0, len(data))
np.random.shuffle(idx)
idx = idx[:num]
data_shuffle = data[idx]
labels_shuffle = labels[idx]
return data_shuffle, labels_shuffle
with tf.Session() as sess:
writer = tf.summary.FileWriter("output", sess.graph)
# Run the initializer
sess.run(init)
print("Start learning")
for step in range(1, num_steps + 1):
batch_x, batch_y = next_batch(batch_size, train_vector, train_label)
# Run optimization op (backprop)
sess.run(train_op, feed_dict={X: batch_x, Y: batch_y})
if step % display_step == 0 or step == 1:
# Calculate batch loss and accuracy
preds, my_weights, loss, acc = sess.run([logits, intermediate, loss_op, accuracy],
feed_dict={X: batch_x, Y: batch_y})
print("Step " + str(step) + ", Minibatch Loss= " + \
"{:.4f}".format(loss) + ", Training Accuracy= " + \
"{:.3f}".format(acc))
# print("Predictions %s VS %s" % (preds[0], batch_y[0]))
np.set_printoptions(precision=2)
print("My weights %s" % np.mean(my_weights, axis=0))
print("Optimization Finished!")
print("Testing Accuracy:",
sess.run(accuracy, feed_dict={X: test_vector, Y: test_label}))
writer.close()
```
```
from sklearn import linear_model
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
import scipy.signal
import scipy.stats
from astropy.stats import LombScargle
%matplotlib inline
plt.style.use('seaborn')
# in order to use custom modules in parent path
import os
import sys
nb_dir = os.path.split(os.getcwd())[0]
if nb_dir not in sys.path:
sys.path.append(nb_dir)
from mfilter.implementations.simulate import SimulateSignal
from mfilter.types.arrays import Array
from mfilter.types.frequencyseries import FrequencySamples
from mfilter.implementations.regressions import *
```
$$ \int_{-\infty}^{\infty} \frac{\tilde{x}(f)\tilde{h}(f)}{S_n(f)} e^{2\pi i f t_0} df$$
```
n_samples = 100
freq = [0.5/8000, 0.001, 0.01, 0.1]
weights=[1.5, 0.4, 0.4, 0.4]
config="mix1"
pos_start_peaks = 0
n_peaks = 1
simulated = SimulateSignal(n_samples, freq, weights=weights, noise_level=0.1,
dwindow="tukey", underlying_delta=50)
weights2= np.array(weights) / 2
simulated2 = SimulateSignal(n_samples, freq, weights=weights2, noise_level=0.2,
dwindow="tukey", underlying_delta=50)
times = simulated.get_times(configuration=config)
# data = simulated.get_data(pos_start_peaks=pos_start_peaks, n_peaks=n_peaks,
# with_noise=True,
# configuration=config)
noise = simulated.get_noise(None)
temp = simulated.get_data(pos_start_peaks=n_samples//4, n_peaks=2, with_noise=False,
configuration=config)
temp = abs(temp)
temp2 = simulated.get_data(pos_start_peaks=n_samples//4, n_peaks=0.5,
configuration=config)
temp2 = abs(temp2)
temp3 = simulated2.get_data(pos_start_peaks=n_samples//4, n_peaks=1, with_noise=False,
configuration=config)
temp3 = abs(temp3)
temp4 = simulated2.get_data(pos_start_peaks=n_samples//4, n_peaks=1.5, with_noise=False,
configuration=config)
temp4 = abs(temp4)
data = temp + noise
# templates with same energy (any value, here we take the energy of the data)
E = np.sum(data**2)
E_n = np.sum(noise**2)
print("ratio E.signal/E.noise: ", E/E_n)
# E = E_n
#temp *= E / np.sum(temp**2)
#temp2 *= E / np.sum(temp2**2)
plt.figure(figsize=(15, 4))
plt.plot(times, data, label='data')
plt.plot(times, temp, label='template 1')
plt.plot(times, temp2, label='template 2')
plt.plot(times, temp3, label='template 3')
plt.plot(times, temp4, label='template 4')
plt.legend()
plt.figure()
plt.plot(times, noise, 'k', label="noise")
samples_per_peak = 5
T = max(times) - min(times)
fs = n_samples / T
print("sampling rate is: ", (n_samples / T))
df = 1 / T / samples_per_peak
max_freq = 2 * max(freq) + 20 * df
freqs = FrequencySamples(Array(times),
minimum_frequency=0,
maximum_frequency=max_freq)
print(len(freqs))
def lomb_psd(values, freqs, times):
lomb = LombScargle(times, values, normalization="standard")
if freqs.has_zero:
zero_idx = freqs.zero_idx
psd = np.zeros(len(freqs))
if zero_idx == 0:
psd[1:] = lomb.power(freqs.data[1:])
psd[0] = 0.0000001
else:
neg_freq, pos_freq = freqs.split_by_zero()
right_psd = lomb.power(pos_freq)
left_psd = lomb.power(np.abs(neg_freq))
psd[:zero_idx] = left_psd
psd[zero_idx] = 0.000001
psd[zero_idx+1:] = right_psd
else:
psd = lomb.power(np.abs(freqs.data))
return psd
psd = lomb_psd(noise, freqs, times)
psd_data = lomb_psd(data, freqs, times)
print(len(freqs), len(times))
plt.figure()
plt.plot(freqs.data, psd)
plt.plot(freqs.data, psd_data, 'g')
# plt.xlim([0, 0.2])
# plt.ylim([0, 0.02])
T
# say that my psd of noise is the square of the ft of the noise
#psd = (abs(noise_ft)**2)
#print(np.where(psd == 0.0))
#psd[0] = 0.000000001
reg = ElasticNetRegression(alpha=0.001, l1_ratio=0.7)
reg = RidgeRegression(alpha=0.01)
F = Dictionary(times, freqs)
data_ft = reg.get_ft(data, F)
temp_ft = reg.get_ft(temp, F)
temp2_ft = reg.get_ft(temp2, F)
temp3_ft = reg.get_ft(temp3, F)
temp4_ft = reg.get_ft(temp4, F)
noise_ft = reg.get_ft(noise, F)
plt.plot(freqs.data, abs(noise_ft)**2)
plt.plot(freqs.data, psd , 'g', alpha=0.5)
print("lambda is: ", np.sqrt(1 / df**2), np.sqrt(df))
def get_z(ft, temp_ft, psd, F):
corr = ft * temp_ft.conjugate() / psd
return np.dot(F.matrix, corr).real
z_data = get_z(data_ft, temp_ft, psd, F)
z_data2 = get_z(data_ft, temp2_ft, psd, F)
z_data3 = get_z(data_ft, temp3_ft, psd, F)
z_data4 = get_z(data_ft, temp4_ft, psd, F)
z_noise = get_z(noise_ft, temp_ft, psd, F)
z_noise2 = get_z(noise_ft, temp2_ft, psd, F)
z_noise3 = get_z(noise_ft, temp3_ft, psd, F)
z_noise4 = get_z(noise_ft, temp4_ft, psd, F)
plt.plot(times - times[n_samples//2], np.roll(z_data, n_samples//2))
# plt.plot(times - times[n_samples//2], np.roll(noise, n_samples//2), 'r', alpha=0.4)
# plt.plot(times - times[n_samples//2], np.random.normal(0, 0.05, n_samples), 'g', alpha=0.5)
h1 = get_z(temp_ft, temp_ft, psd, F)
plt.plot(times - times[n_samples//2], np.roll(np.sqrt(abs(h1)), n_samples//2))
def get_sigma(temp_ft, psd):
return np.sum(temp_ft * temp_ft.conjugate() / psd)
sigma_temp = get_sigma(temp_ft, psd)
sigma2_temp = get_sigma(temp2_ft, psd)
sigma3_temp = get_sigma(temp3_ft, psd)
sigma4_temp = get_sigma(temp4_ft, psd)
print("var is ", sigma_temp, sigma2_temp, sigma3_temp, sigma4_temp)
snr_data = z_data / np.sqrt(sigma_temp.real)
snr_data2 = z_data2 / np.sqrt(sigma2_temp.real)
snr_data3 = z_data3 / np.sqrt(sigma3_temp.real)
snr_data4 = z_data4 / np.sqrt(sigma4_temp.real)
snr_noise = z_noise / np.sqrt(sigma_temp.real)
snr_noise2 = z_noise2 / np.sqrt(sigma2_temp.real)
snr_noise3 = z_noise3 / np.sqrt(sigma3_temp.real)
snr_noise4 = z_noise4 / np.sqrt(sigma4_temp.real)
fig, [ax1, ax2] = plt.subplots(1, 2, figsize=(15,4))
ax1.plot(times - times[n_samples//2], np.roll(snr_data, n_samples//2), 'r')
ax1.plot(times - times[n_samples//2], np.roll(snr_data2, n_samples//2), 'b')
ax1.plot(times - times[n_samples//2], np.roll(snr_data3, n_samples//2), 'g')
ax1.plot(times - times[n_samples//2], np.roll(snr_data4, n_samples//2), 'k')
ax2.plot(times - times[n_samples//2], np.roll(snr_noise, n_samples//2), 'r')
ax2.plot(times - times[n_samples//2], np.roll(snr_noise2, n_samples//2), 'b')
ax2.plot(times - times[n_samples//2], np.roll(snr_noise3, n_samples//2), 'g', alpha=0.5)
ax2.plot(times - times[n_samples//2], np.roll(snr_noise4, n_samples//2), 'k', alpha=0.5)
np.sum(data_ft.conjugate() * temp_ft / psd) / np.sqrt(sigma_temp)
lomb = LombScargle(times, snr_data2, normalization="standard")
if freqs.has_zero:
zero_idx = freqs.zero_idx
neg_freq, pos_freq = freqs.split_by_zero()
right_psd = lomb.power(pos_freq)
left_psd = lomb.power(np.abs(neg_freq))
psd_z = np.zeros(len(freqs))
psd_z[:zero_idx] = left_psd
psd_z[zero_idx] = 0.000001
psd_z[zero_idx+1:] = right_psd
else:
psd_z = lomb.power(np.abs(freqs.data))
plt.plot(freqs.data, psd_z)
norm = sp.stats.norm(0, (sigma_temp.real)**(1/2))
print("for only noise: ", (1 - norm.cdf(max(snr_noise))))
print("for data: ", (1 - norm.cdf(max(snr_data))))
norm = sp.stats.norm(0, (sigma2_temp.real)**(1/2))
print("for only noise: ", (1 - norm.cdf(max(snr_noise2))))
print("for data: ", (1 - norm.cdf(max(snr_data2))))
norm = sp.stats.norm(0, (sigma3_temp.real)**(1/2))
print("for only noise: ", (1 - norm.cdf(max(snr_noise3))))
print("for data: ", (1 - norm.cdf(max(snr_data3))))
norm = sp.stats.norm(0, (sigma4_temp.real)**(1/2))
print("for only noise: ", (1 - norm.cdf(max(snr_noise4))))
print("for data: ", (1 - norm.cdf(max(snr_data4))))
# or using same sigma as the original noise
norm = sp.stats.norm(0, 0.)
plt.figure(figsize=(15, 4))
plt.plot(times, np.dot(F.matrix, data_ft / np.sqrt(psd)).real, 'k')
plt.plot(times, np.roll(np.dot(F.matrix, temp_ft / np.sqrt(psd)).real, snr_data.argmax() + 1), '.-', label="temp1")
# plt.plot(times, np.roll(np.dot(F.matrix, temp2_ft / np.sqrt(psd)).real, snr_data2.argmax() + 1), 'o--', label="temp2")
# plt.plot(times, np.roll(np.dot(F.matrix, temp3_ft / np.sqrt(psd)).real, snr_data3.argmax()+1), label="temp3")
# plt.plot(times, np.roll(np.dot(F.matrix, temp4_ft / np.sqrt(psd)).real, snr_data4.argmax()+1), label="temp4")
plt.legend()
plt.plot((times + times[n_samples//3] + times[snr_data.argmax()]) % max(times), temp)
plt.plot(times, np.roll(temp, n_samples//3 + snr_data.argmax()))
max(times)
norm.stats()
nn = np.random.normal(0, 5, 500)
dtt = 0.4
tt = np.arange(500) * dtt
plt.plot(tt, nn)
nn_ft = sp.fft(nn)
freq = np.fft.fftfreq(500, d=dtt)
f, nn_psd = sp.signal.welch(nn, 1/dtt, return_onesided=False, nfft=500)
plt.plot(freq, np.abs(nn_ft)**2 / 500)
plt.plot(f, nn_ft / nn_psd)
nn_r = np.fft.ifft(nn_ft / nn_psd)
plt.plot(tt, nn_r.real)
plt.plot(tt, nn_r.imag, 'r')
print(np.std(nn_r.real), np.std(nn))
```
# Oscillations
Two practicals
## 1. Finding $g$ by using a pendulumn
$\begin{aligned}
T = 2\pi \sqrt{\frac{l}{g}}
\end{aligned}$
- $T$: period in s
- $l$: length in m
- $g$: acceleration due to gravity, in $m/s^2$
Linearisation of the square-root relationship (as $T \propto \sqrt{l}$):
$\begin{aligned}
T^2 &= (2 \pi)^2 \frac{l}{g}\\
\underbrace{T^2}_{y} &= \overbrace{\frac{4 \pi^2}{g}}^{m} \underbrace{l}_{x}
\end{aligned}$
### Method
- 10 periods, 5 times
- min. 5 lengths
| String length / cm ± 0.05cm |
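For the analysis, a minimal sketch of the fit (the lengths and periods below are made-up illustrative values, not real measurements): plot $T^2$ against $l$, then $g = \frac{4\pi^2}{\text{gradient}}$.
```
import numpy as np
import matplotlib.pyplot as plt

l = np.array([0.20, 0.40, 0.60, 0.80, 1.00])  # string lengths / m (illustrative)
T = np.array([0.91, 1.27, 1.56, 1.80, 2.01])  # mean period from timing 10 oscillations / s (illustrative)

m, c = np.polyfit(l, T**2, 1)  # gradient m = 4*pi^2 / g
g = 4 * np.pi**2 / m
print("g = %.2f m/s^2" % g)

plt.plot(l, T**2, 'o')
plt.plot(l, m * l + c)
plt.xlabel("l / m")
plt.ylabel("$T^2$ / $s^2$")
plt.show()
```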
Wave transports energy from 1 location to another.
**Travelling waves:**
```
Motion of waves -->
/ \ Longitudinal
Transverse | <---->
\ /
oscillations
```
```
import matplotlib.pyplot as plt
```
## Wave equation
$\begin{aligned}
v &= \lambda f = \frac{\lambda}{T}\\
f &= \frac{1}{T}
\end{aligned}$
## Simple Harmonic Motion
$\underbrace{a}_{\text{acceleration}} = -\overbrace{k}^{\text{proportionality factor}} \underbrace{x}_{\text{displacement}}$
$\omega = 2\pi f = \frac{2 \pi}{T}$
$\begin{aligned}
x(t) &= \underbrace{A}_{\text{Amplitude}} \cos(\underbrace{\omega}_{\text{angular frequency}} t + \underbrace{\varphi}_{\text{phase shift}})\\
v(t) &= -\omega A \sin(\omega t + \varphi)\\
a(t) &= -\omega^2 A \cos(\omega t + \varphi)\\
\iff a(t) &= -\underbrace{k}_{\omega^2} x(t)
\end{aligned}$
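A quick sketch of these three quantities over one period ($A$, $\omega$ and $\varphi$ chosen arbitrarily for illustration; $v$ and $a$ are rescaled so all three fit on one axis):
```
import numpy as np
import matplotlib.pyplot as plt

A, omega, phi = 1.0, 2 * np.pi, 0.0  # arbitrary illustrative values (f = 1 Hz)
t = np.linspace(0, 1, 200)

x = A * np.cos(omega * t + phi)
v = -omega * A * np.sin(omega * t + phi)
a = -omega**2 * A * np.cos(omega * t + phi)

plt.plot(t, x, label="x(t)")
plt.plot(t, v / omega, label="v(t) / $\\omega$")
plt.plot(t, a / omega**2, label="a(t) / $\\omega^2$")  # note a = -omega^2 * x
plt.xlabel("t / s")
plt.legend()
plt.show()
```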
compression & rarefaction
**Non-polarised** light: oscillation at all angles about the axis of propagation
**Polarised** light: only oscillates in one plane
Polarised sources are: screens (phone, computer), reflected light.
When totally polarised light is incident on an analyser, the intensity of light let through is $$I = I_0 \cos^2 \underbrace{\theta}_{\text{angle between polarised light and analyser}}.$$
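A short sketch of this relationship, plotting transmitted intensity against analyser angle (taking $I_0 = 1$ for illustration):
```
import numpy as np
import matplotlib.pyplot as plt

I0 = 1.0
theta = np.linspace(0, np.pi, 200)
I = I0 * np.cos(theta)**2  # Malus's law

plt.plot(np.degrees(theta), I)
plt.xlabel("analyser angle / degrees")
plt.ylabel("$I / I_0$")
plt.show()
```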
## Measuring speed of sound
* oscilloscope
* signal generator
* ultrasound emitter - receiver
1. Connect signal generator and oscilloscope.
2. Set up emitter and receiver.
3. Starting from 0cm (ignoring offset inside devices), measure phase change with cursors.
4. Move receiver away by 5cm until 25cm.
5. Record data, plot time against distance.
6. Using $\text{speed} = \frac{\text{distance}}{\text{time}}$, we know $\text{speed of sound} = \frac{1}{\text{gradient}}$ of the graph (see the sketch below).
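A minimal sketch of that final step, with made-up illustrative readings:
```
import numpy as np

d = np.array([0.00, 0.05, 0.10, 0.15, 0.20, 0.25])  # receiver distance / m
t = np.array([0.0, 1.5e-4, 2.9e-4, 4.4e-4, 5.8e-4, 7.3e-4])  # measured delay / s (illustrative)

gradient, _ = np.polyfit(d, t, 1)  # time against distance
print("speed of sound = %.0f m/s" % (1 / gradient))  # ~342 m/s for these values
```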
# Wave behaviour (4.4)
**Reflection**: a wave bouncing off a surface
Incident
ray Normal
\ | _ /
_\| /|
\ | /
\- -/
_____\|/_____
Angle Angle of
of reflection
incidence
$\text{Angle } i = \text{Angle } r$
**Refraction**: the change in direction of a wave when it passes a boundary between 2 media.
Eg: from air to water; from water to glass
Medium 1 Medium 2
| .
|_.
|.|
_ _ _ _|_>_ _ _
<. |
_. |
. | |
Ray |
Angles: $\theta_i$ (incidence), $\theta_{\text{refracted}}$ (refraction);
refractive index of medium 1: $n_1$; of medium 2: $n_2$.
## Practical: Investigating Snell/Descartes Law
* Changing incident angle
* measuring angle of refraction
* Plot one against the other
Goal: find $n_2$.
$\begin{aligned}
&n_1 \sin\theta_1 = n_2 \sin\theta_2\\
&\sin\theta_2 = \frac{n_1}{n_2} \sin\theta_1
\end{aligned}$

No refracted ray can exist when $\sin\theta_2$ would have to exceed 1, i.e. when

$\begin{aligned}
&\frac{n_1}{n_2} \sin\theta_1 > 1\\
\iff &\theta_1 > \arcsin\frac{n_2}{n_1}
\end{aligned}$
The angle at which the refracted ray passes along the boundary is called the **critical angle** $\theta_c$. When the incident angle is $< \theta_c$: refraction + reflection; when $\geq \theta_c$: no refracted ray (T.I.R. = total internal reflection).
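A quick numerical check for a water-air boundary ($n_1 = 1.33$, $n_2 = 1.00$, standard textbook values):
```
import numpy as np

n1, n2 = 1.33, 1.00  # water -> air
theta_c = np.degrees(np.arcsin(n2 / n1))
print("critical angle = %.1f degrees" % theta_c)  # ~48.8; TIR for larger incident angles
```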
When a wave is refracted, the speed changes but the frequency remains constant. Since $v = \lambda f$, $\lambda$ must change, so refraction changes the wavelength.
## Diffraction
Waves diffract when they go through an aperture, especially when wavelength $\approx$ aperture.
```
import matplotlib.pyplot as plt
import numpy as np
t = np.linspace(0, 6)
w1 = .5 * np.sin(t)
w2 = 1. * np.sin(t + np.pi)
w = w1 + w2
plt.figure()
plt.plot(t, w1)
plt.plot(t, w2)
plt.plot(t, w, '--')
plt.legend(["$w_1$", "$w_2$", "sum of both waves"])
plt.xlabel("t / s")
plt.title("Superposition of waves")
plt.show()
```
(gravity)=
# Gravity
## Newton's Law of Universal Gravitation
The gravitational force \\(F\\) that body A exerts on body B has a magnitude that is directly proportional to the mass of body A \\(m_a\\) and the mass of body B \\(m_b\\), and inversely proportional to the square of the distance between their centres \\(R\\).
Mathematically:
\\[F=\frac{Gm_am_b}{R^2}\\]
where \\(G\\) is the gravitational constant, equal to \\(6.674\times 10^{-11}Nm^2kg^{-2}\\).
Therefore, the gravitational force exerted on a mass \\(m\\) by a planet of mass \\(m_p\\) is:
\\[F=\frac{Gm_pm}{R_p^2}\\]
where \\(m_p\\) and \\(R_p\\) is the mass and radius of a planet respectively.
The local gravitational acceleration \\(g\\) is:
\\[a=\frac{F}{m}=\frac{Gm_p}{R_p^2}=g\\]
In terms of density \\(\rho\\):
\\[g=\frac{4\pi\rho R_p^3G}{3R_p^2}=\frac{4\pi\rho R_pG}{3}\\]
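A one-line numerical check of the surface value for Earth (using the standard mass and radius of the Earth):
```
G = 6.674e-11   # gravitational constant / N m^2 kg^-2
mE = 5.972e24   # mass of the Earth / kg
RE = 6.371e6    # radius of the Earth / m

g = G * mE / RE**2
print("g = %.2f m/s^2" % g)  # ~9.82
```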
## Gravitational Potential Energy
The definition of change in energy is that a force \\(F\\) moves a body from position 1 \\(R_1\\) to position 2 \\(R_2\\). In the case of change in gravitational potential energy \\(GPE\\) on Earth:
\\[F=-\frac{Gm_Em}{R^2}\\]
where \\(m_E\\) and \\(R\\) are the mass of the Earth and distance from Earth's centre respectively,
\\[GPE_2-GPE_1=-\int_{R_1}^{R_2}F(R)dR=Gm_Em(\frac{1}{R_1}-\frac{1}{R_2})\\]
Taking \\(R_2\\) to be infinitely far away from the Earth:
\\[GPE(\infty)-GPE(R_1)=Gm_Em(\frac{1}{R_1}-0)=\frac{Gm_Em}{R_1}\\]
Rearranging for \\(GPE(R_1)\\):
\\[GPE(R_1)=-\frac{Gm_Em}{R_1}\\]
For a body at an elevation \\(h\\) close to the surface, \\(GPE\\) is roughly equal to \\(mgh\\).
## Escape Velocity
To fully escape the gravitational field of a planet, the object must reach "infinity" where \\(GPE=0\\). Thus, taking position 2 as infinity:
\\[KE(1)+GPE(R_E)=KE(2)+GPE(\infty)\\]
To find the minimum velocity required, we set \\(KE(2)=0\\), and since \\(GPE(\infty)=0\\):
\\[KE(1)+GPE(R_E)=0\\]
\\[\frac{mv_e^2}{2}-\frac{Gm_Em}{R_E}=0\\]
Rearranging for escape velocity \\(v_e\\):
\\[v_e=\sqrt{\frac{2Gm_E}{R_E}}\\]
### Maxwell-Boltzmann distribution
At absolute temperature \\(T\\), the mean energy of monoatomic molecules is given by:
\\[\frac{3}{2}k_BT\\]
where \\(k_B\\) is the Boltzmann's constant equal to \\(1.38\times10^{-23}m^2kgs^{-2}K^{-1}\\).
Ignoring potential energy and assuming that the energy of the monoatomic molecules is only from their kinetic energy, we can equate the above equation with the kinetic energy equation:
\\[\frac{mv_{mean}^2}{2}=\frac{3}{2}k_BT\\]
where \\(m\\) is the mass of that monoatomic molecule, and \\(v_{mean}\\) is the mean velocity of the monoatomic molecules at temperature \\(T\\).
Rearranging for \\(v_{mean}\\):
\\[v_{mean}=\sqrt{\frac{3k_BT}{m}}\\]
The probability of a molecule having a velocity higher than some value \\(v\\) is given by:
\\[\frac{v}{v_{mean}}e^{-1.27(\frac{v}{v_{mean}})^2}\\]
## Tutorial Problem 5.3
What is the probability of Helium atoms escaping from the Moon's surface?
Given that the mass and radius of the Moon are \\(7.35\times10^{22}kg\\) and \\(1.74\times10^6m\\) respectively, average surface temperature on the Moon is \\(400K\\), and mass of a helium atom is \\(6.54\times10^{-27}kg\\).
```
import numpy as np
import matplotlib.pyplot as plt
def escape(m, R, G=6.674e-11): # function for calculating escape velocity given mass and radius
return np.sqrt((2*G*m)/R)
mM = 7.35e22 # mass of Moon (kg)
RM = 1.74e6 # radius of Moon (m)
ve = escape(mM, RM)
print("Escape velocity of the Moon is %.f m/s" % (ve))
def find_vmean(m, T, kB=1.38e-23):
return np.sqrt((3*kB*T)/m)
mHe = 6.54e-27 # mass of helium atom (kg)
TM = 400 # surface temperature on the Moon (K)
vm = find_vmean(mHe, TM)
print("Mean velocity of He atoms on the Moon is %.f m/s" % (vm))
def maxwell_boltzmann(v, v_mean): # maxwell-boltzmann distribution
return v/v_mean * np.exp(-1.27*(v/v_mean)**2)
v = np.linspace(0, 5000, 1001)
prob = maxwell_boltzmann(v, vm) # array of probabilities at different velocities
# plot probability distribution
fig = plt.figure(figsize=(10,8))
plt.plot(v, prob, 'k')
plt.plot(v[475], prob[475], 'ro', label='velocity >= %.fm/s, probability = %.2f' % (v[475], prob[475]))
plt.xlabel('velocity (m/s)')
plt.ylabel('probability')
plt.title('Probability distribution of He molecules with a velocity greater than some value v', fontsize=14)
plt.legend(loc='upper right', fontsize=12)
plt.grid(True)
plt.show()
```
### References
Course notes from Lecture 5 of the module ESE 95011 Mechanics
<h1>CS4618: Artificial Intelligence I</h1>
<h1>Gradient Descent</h1>
<h2>
Derek Bridge<br>
School of Computer Science and Information Technology<br>
University College Cork
</h2>
<h1>Initialization</h1>
$\newcommand{\Set}[1]{\{#1\}}$
$\newcommand{\Tuple}[1]{\langle#1\rangle}$
$\newcommand{\v}[1]{\pmb{#1}}$
$\newcommand{\cv}[1]{\begin{bmatrix}#1\end{bmatrix}}$
$\newcommand{\rv}[1]{[#1]}$
$\DeclareMathOperator{\argmax}{arg\,max}$
$\DeclareMathOperator{\argmin}{arg\,min}$
$\DeclareMathOperator{\dist}{dist}$
$\DeclareMathOperator{\abs}{abs}$
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interactive
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import add_dummy_feature
from sklearn.linear_model import SGDRegressor
```
<h1>Acknowledgement</h1>
<ul>
<li>I based 5 of the diagrams on ones to be found in A. Géron: <i>Hands-On Machine Learning with Scikit-Learn, Keras &
TensorFlow (2nd edn)</i>, O'Reilly, 2019
</li>
</ul>
<h1>Gradient Descent</h1>
<ul>
<li><b>Gradient Descent</b> is a generic method for finding optimal solutions to problems that involve
minimizing a loss function.
</li>
<li>It is a <em>search</em> in the model's <b>parameter space</b> for values of the parameters that minimize
the loss function.
</li>
<li>Conceptually:
<ul>
<li>
It starts with an initial guess for the values of the parameters.
</li>
<li>
Then repeatedly:
<ul>
<li>It updates the parameter values — hopefully to reduce the loss.
</li>
</ul>
</li>
</ul>
<img src="images/fog.jpg" alt="" />
</li>
<li>
Ideally, it keeps doing this until <b>convergence</b> — changes to the parameter values do not result
in lower loss.
</li>
<li>The key to this algorithm is how to update the parameter values.</li>
</ul>
<h2>The update rule</h2>
<ul>
<li>To update the parameter values to reduce the loss:
<ul>
<li>Compute the gradient vector.
<ul>
<li>But this points 'uphill' and we want to go 'downhill'.</li>
<li>And we want to make 'baby steps' (see later), so we use a <b>learning rate</b>,
$\alpha$, which is between 0 and 1.
</li>
</ul>
</li>
<li>So subtract $\alpha$ times the gradient vector from $\v{\beta}$.</li>
</ul>
$$\v{\beta} \gets \v{\beta} - \alpha\nabla_{\v{\beta}}J(\v{X}, \v{y}, \v{\beta})$$
Or
$$\v{\beta} \gets \v{\beta} - \frac{\alpha}{m}\v{X}^T(\v{X}\v{\beta} - \v{y})$$
</li>
<li>(BTW, this is vectorized. Naive loop implementations are wrong: they lose the
<em>simultaneous</em> update of the $\v{\beta}_j$.)
</li>
</ul>
<h2>Gradient descent algorithm</h2>
<ul>
<li>Pseudocode (in fact, this is for <b>batch gradient descent</b>, see later):
<ul style="background: lightgrey; list-style: none">
<li>initialize $\v{\beta}$ randomly
<li>
repeat until convergence
<ul>
<li>
$\v{\beta} \gets \v{\beta} - \frac{\alpha}{m}\v{X}^T(\v{X}\v{\beta} - \v{y})$
</li>
</ul>
</li>
</ul>
</li>
<h2>Baby steps</h2>
<ul>
<li>We'll use an example with a single feature/single parameter $\beta_1$ in order to visualize.</li>
<li>We update $\beta_1$ gradually, one baby step at a time, unitl the algorithm converges on minimum loss:
<figure>
<img src="images/baby_steps1.png" />
</figure>
</li>
<li>The size of the steps is determined by <!--a <b>hyperparameter</b> called--> the learning rate.
<!--
<ul>
<li>(Hyperparamters are explained in CS4619)</li>
</ul>
-->
</li>
<li>If the learning rate is too small, it will take many updates until convergence:
<figure>
<img src="images/baby_steps2.png" />
</figure>
</li>
<li>If the learning rate is too big, the algorithm might jump across the valley — it may even end up with
higher loss than before, making the next step bigger.
<ul>
<li>This might make the algorithm <b>diverge</b>.
</li>
</ul>
<figure>
<img src="images/baby_steps3.png" />
</figure>
</li>
</ul>
<h2>Why we need to scale for Gradient Descent</h2>
<ul>
<li>If we are doing OLS regression using the Normal Equation, we do not need to scale the features.
But if we are doing OLS regression using Gradient Descent, we do need to scale the features.
</li>
<li>If features have different ranges, it affects the shape of the 'bowl'.</li>
<li>E.g. features 1 and 2 have similar ranges of values — a 'bowl':
<figure>
<img src="images/scaled.png" />
</figure>
<ul>
<li>The algorithm goes straight towards the minimum.</li>
</ul>
</li>
<li>E.g. feature 1 has smaller values than feature 2 — an elongated 'bowl':
<figure>
<img src="images/unscaled.png" />
</figure>
<ul>
<li>Since feature 1 has smaller values, it takes a larger change in $\v{\beta}_1$ to affect
the loss function, which is why it is elongated.
</li>
<li>It takes more steps to get to the minimum — steeply down but not really towards the
goal, followed by a long march down a nearly flat valley.
</li>
<li>It makes it more difficult to choose a value for the learning rate that avoids divergence:
a value that suits one feature may not suit another.
</li>
</ul>
</li>
</ul>
<h2>Variants of Gradient Descent</h2>
<ul>
<li>There are, in fact, three variants:
<ul>
<li>Batch Gradient Descent;</li>
<li>Stochastic Gradient Descent; and</li>
<li>Mini-batch Gradient Descent.</li>
</ul>
</li>
</ul>
<h1>Batch Gradient Descent</h1>
<ul>
<li>The pseudocode we saw earlier (repeated here for convenience) is Batch Gradient Descent:
<ul style="background: lightgrey; list-style: none">
<li>initialize $\v{\beta}$ randomly
<li>
repeat until convergence
<ul>
<li>
$\v{\beta} \gets \v{\beta} - \frac{\alpha}{m}\v{X}^T(\v{X}\v{\beta} - \v{y})$
</li>
</ul>
</li>
</ul>
</li>
<li>Why is it called <em>Batch</em> Gradient Descent?
<ul>
<li>The update involves a calculation over the <em>entire</em> training set $\v{X}$
on every iteration.
</li>
<li>This can be slow for large training sets.</li>
</ul>
</li>
</ul>
<h2>Batch Gradient Descent in numpy</h2>
<ul>
<li>For the hell of it, let's implement it ourselves.</li>
<li>Again for the purposes of this explanation, we will use the entire dataset as our training set.</li>
</ul>
```
# Loss function for OLS regression (assumes X contains all 1s in its first column)
def J(X, y, beta):
return np.mean((X.dot(beta) - y) ** 2) / 2.0
def batch_gradient_descent_for_ols_linear_regression(X, y, alpha, num_iterations):
m, n = X.shape
beta = np.random.randn(n)
Jvals = np.zeros(num_iterations)
for iter in range(num_iterations):
beta -= (1.0 * alpha / m) * X.T.dot(X.dot(beta) - y)
Jvals[iter] = J(X, y, beta)
return beta, Jvals
# Use pandas to read the CSV file
df = pd.read_csv("../datasets/dataset_corkA.csv")
# Get the feature-values and the target values
X = df[["flarea", "bdrms", "bthrms"]].values
y = df["price"].values
# Scale it
scaler = StandardScaler()
X = scaler.fit_transform(X)
# Add the extra column to X
X = add_dummy_feature(X)
# Run the Batch Gradient Descent
beta, Jvals = batch_gradient_descent_for_ols_linear_regression(X, y, alpha = 0.03, num_iterations = 500)
# Display beta
beta
```
<ul>
<li>Bear in mind that the coefficients it finds are on the scaled data.</li>
</ul>
<ul>
<li>It's a good idea to plot the values of the loss function against the number of iterations.
</li>
<li>For OLS regression done using Batch Gradient Descent, if the loss ever increases, then:
<ul>
<li>
the code might be incorrect; or
</li>
<li>
the value of $\alpha$ is too big and is causing divergence.
</li>
</ul>
</li>
</ul>
```
fig = plt.figure(figsize=(8,6))
plt.title("$J$ during learning")
plt.xlabel("Number of iterations")
plt.xlim(1, Jvals.size)
plt.ylabel("$J$")
plt.ylim(3500, 50000)
xvals = np.linspace(1, Jvals.size, Jvals.size)
plt.scatter(xvals, Jvals)
plt.show()
```
<ul>
<li>The algorithm gives us the problem of choosing the number of iterations.</li>
<li>An alternative is to use a very large number of iterations but exit when the gradient vector
becomes tiny:
<ul>
<li>when its norm becomes smaller than <b>tolerance</b>, $\eta$.</li>
</ul>
</li>
</ul>
<ul>
<li>Here's an interactive version that allows you to choose the value of $\alpha$ and to decide
whether to scale the data or not.
</li>
</ul>
```
def bgd(scale=True, alpha=0.03):
# Get the feature-values and the target values
X = df[["flarea", "bdrms", "bthrms"]].values
y = df["price"].values
# Scale the data, if requested
if scale:
X = StandardScaler().fit_transform(X)
# Add the extra column to X
X = add_dummy_feature(X)
# Run the Batch Gradient Descent
beta, Jvals = batch_gradient_descent_for_ols_linear_regression(X, y, alpha, num_iterations = 3000)
# Display beta
print("beta: ", beta)
# Plot loss
fig = plt.figure(figsize=(8,6))
plt.title("$J$ during learning")
plt.xlabel("Number of iterations")
plt.xlim(1, Jvals.size)
plt.ylabel("$J$")
plt.ylim(3500, 50000)
xvals = np.linspace(1, Jvals.size, Jvals.size)
plt.scatter(xvals, Jvals)
plt.show()
interactive_plot = interactive(bgd, {'manual': True},
scale=True, alpha=[("0.00009", 0.00009), ("0.0009", 0.0009), ("0.009", 0.009), ("0.09", 0.09), ("0.9", 0.9)])
interactive_plot
```
<ul>
<li>
Some people suggest a variant of Batch Gradient Descent in which the value of $\alpha$ is decreased
over time, i.e. its value in later iterations is smaller
<ul>
<li>Why do they suggest this? </li>
<li>And why isn't it necessary?
</li>
</ul>
</li>
<li>(But, we'll revisit this idea in Stochastic Gradient Descent.)</li>
</ul>
<h1>Stochastic Gradient Descent</h1>
<ul>
<li>As we saw, in each iteration, Batch Gradient Descent does a calculation on the entire
training set, which, for large training sets, may be slow.
</li>
<li><b>Stochastic Gradient Descent (SGD)</b>:
<ul>
<li>On each iteration, it picks just <em>one</em> training example $\v{x}$ at random and computes
the gradients on just that
one example
$$\v{\beta} \gets \v{\beta} - \alpha\v{x}^T(\v{x}\v{\beta} - y)$$
</li>
</ul>
</li>
<li>This gives huge speed-up.</li>
<li>It enables us to train on huge training sets since only one example needs to be in memory in each iteration.
</li>
<li>But, because it is stochastic (the randomness), the loss will not necessarily decrease on each iteration:
<ul>
<li><em>On average</em>, the loss decreases, but in any one iteration, loss may go up or down.</li>
<li>Eventually, it will get close to the minimum, but it will continue to go up and down a bit.
<ul>
<li>So, once you stop it, the $\v{\beta}$ will be close to the best, but not
necessarily optimal.
</li>
</ul>
</li>
</ul>
</li>
</ul>
<h2>SGD in scikit-learn</h2>
<ul>
<li>The <code>fit</code> method of scikit-learn's <code>SGDRegressor</code> class is doing
what we have described:
<ul>
<li>You must scale the features but it inserts the extra column of 1s for us.</li>
<li>You can supply a <code>learning_rate</code> and lots of other things
(in the code below, we'll just use the defaults).
</li>
</ul>
</li>
<li>(Again, we'll train on the whole dataset.)</li>
</ul>
```
# Get the feature-values and the target values
X = df[["flarea", "bdrms", "bthrms"]].values
y = df["price"].values
# Scale it
scaler = StandardScaler()
X = scaler.fit_transform(X)
# Create the SGDRegressor and fit the model
sgd = SGDRegressor()
sgd.fit(X, y)
```
<h2>SGD in numpy</h2>
<ul>
<li>For the hell of it, let's implement a simple version ourselves</li>
<li>(Again, we'll train on the whole dataset.)</li>
</ul>
```
def stochastic_gradient_descent_for_ols_linear_regression(X, y, alpha, num_epochs):
m, n = X.shape
beta = np.random.randn(n)
Jvals = np.zeros(num_epochs * m)
for epoch in range(num_epochs):
for i in range(m):
rand_idx = np.random.randint(m)
xi = X[rand_idx:rand_idx + 1]
yi = y[rand_idx:rand_idx + 1]
beta -= alpha * xi.T.dot(xi.dot(beta) - yi)
Jvals[epoch * m + i] = J(X, y, beta)
return beta, Jvals
```
<ul>
<li>(One common alternative to the code above is to shuffle between epochs and remove the randomness within the
inner loop.)
</li>
</ul>
```
# Get the feature-values and the target values
X = df[["flarea", "bdrms", "bthrms"]].values
y = df["price"].values
# Scale it
scaler = StandardScaler()
X = scaler.fit_transform(X)
# Add the extra column to X
X = add_dummy_feature(X)
# Run the Stochastic Gradient Descent
beta, Jvals = stochastic_gradient_descent_for_ols_linear_regression(X, y, alpha = 0.03, num_epochs = 50)
# Display beta
beta
fig = plt.figure(figsize=(8,6))
plt.title("$J$ during learning")
plt.xlabel("Number of iterations")
plt.xlim(1, Jvals.size)
plt.ylabel("$J$")
plt.ylim(3500, 50000)
xvals = np.linspace(1, Jvals.size, Jvals.size)
plt.scatter(xvals, Jvals)
plt.show()
```
<ul>
<li>Quite a bumpy ride!</li>
<li>So, let's try <b>simulated annealing</b>.</li>
</ul>
<h2>Simulated Annealing</h2>
<ul>
<li>As we discussed, SGD does not settle at the minimum.</li>
<li>One solution is to gradually reduce the learning rate:
<ul>
<li>Updates start out 'large' so you make progress.</li>
<li>But, over time, updates get smaller, allowing SGD to settle at or near the global minimum.</li>
</ul>
</li>
<li>The function that determines how to reduce the learning rate is called the <b>learning schedule</b>.
<ul>
<li>Reduce it too quickly and you may not converge on or near to the global minimum.</li>
<li>Reduce it too slowly and you may still bounce around a lot and, if stopped after too few iterations,
may end up
with a suboptimal solution.
</li>
</ul>
</li>
</ul>
```
def learning_schedule(t):
return 5 / (t + 50)
def stochastic_gradient_descent_for_ols_linear_regression_with_simulated_annealing(X, y, num_epochs):
m, n = X.shape
beta = np.random.randn(n)
Jvals = np.zeros(num_epochs * m)
for epoch in range(num_epochs):
for i in range(m):
rand_idx = np.random.randint(m)
xi = X[rand_idx:rand_idx + 1]
yi = y[rand_idx:rand_idx + 1]
alpha = learning_schedule(epoch * m + i)
beta -= alpha * xi.T.dot(xi.dot(beta) - yi)
Jvals[epoch * m + i] = J(X, y, beta)
return beta, Jvals
# Run the Stochastic Gradient Descent
beta, Jvals = stochastic_gradient_descent_for_ols_linear_regression_with_simulated_annealing(X, y, num_epochs = 50)
# Display beta
beta
fig = plt.figure(figsize=(8,6))
plt.title("$J$ during learning")
plt.xlabel("Number of iterations")
plt.xlim(1, Jvals.size)
plt.ylabel("$J$")
plt.ylim(3500, 50000)
xvals = np.linspace(1, Jvals.size, Jvals.size)
plt.scatter(xvals, Jvals)
plt.show()
```
<h1>Mini-Batch Gradient Descent</h1>
<ul>
<li>Batch Gradient Descent computes gradients from the full training set.</li>
<li>Stochastic Gradient Descent computes gradients from just one example.</li>
<li>Mini-Batch Gradient Descent lies between the two:
<ul>
<li>It computes gradients from a small randomly-selected subset of the training set, called a
<b>mini-batch</b>.
</li>
</ul>
</li>
<li>Since it lies between the two:
<ul>
<li>It may bounce less and get closer to the global minimum than SGD…
<ul>
<li>…although both of them can reach the global minimum with a good learning schedule.</li>
</ul>
</li>
<li>Its time and memory costs lie between the two.</li>
</ul>
</li>
</ul>
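<ul>
    <li>For illustration, here is a minimal numpy sketch of Mini-Batch Gradient Descent for OLS regression
        (not part of the lecture's own code: it reuses the loss function <code>J</code> and the scaled
        <code>X</code> and <code>y</code> defined above, and the batch size of 32 is an arbitrary choice).
    </li>
</ul>
```
def mini_batch_gradient_descent_for_ols_linear_regression(X, y, alpha, num_epochs, batch_size=32):
    m, n = X.shape
    beta = np.random.randn(n)
    num_batches = m // batch_size
    Jvals = np.zeros(num_epochs * num_batches)
    for epoch in range(num_epochs):
        shuffled = np.random.permutation(m)
        for b in range(num_batches):
            # gradient computed on a small random subset (the mini-batch) only
            idx = shuffled[b * batch_size:(b + 1) * batch_size]
            Xb, yb = X[idx], y[idx]
            beta -= (1.0 * alpha / batch_size) * Xb.T.dot(Xb.dot(beta) - yb)
            Jvals[epoch * num_batches + b] = J(X, y, beta)
    return beta, Jvals
```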
<h1>The Normal Equation versus Gradient Descent</h1>
<ul>
<li>Efficiency/scaling-up to large training sets:
<ul>
<li>Normal Equation:
<ul>
<li>is linear in $m$, so can handle large training sets efficiently if they fit into
main memory;
</li>
<li>but it has to compute the inverse (or psueudo-inverse) of a $n \times n$ matrix, which takes
time between quadratic and cubic in $n$, and so is only feasible for smallish $n$ (up to
a few thousand).
</li>
</ul>
</li>
<li>Gradient Descent:
<ul>
<li>SGD scales really well to huge $m$;</li>
<li>All three Gradient Descent methods can handle huge $n$ (even 100s of 1000s).</li>
</ul>
</li>
</ul>
</li>
<li>Finding the global minimum for OLS regression:
<ul>
<li>Normal Equation: guaranteed to find the global minimum.</li>
<li>Gradient Descent: all a bit dependent on number of iterations, learning rate, learning schedule.</li>
</ul>
</li>
<li>Feature scaling:
<ul>
<li>Normal Equation: scaling is not needed.
</li>
<li>Gradient Descent: scaling <em>is</em> needed.</li>
</ul>
</li>
<li>Finally, Gradient Descent is a general method, whereas the Normal Equation is only for OLS regression.</li>
</ul>
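<ul>
    <li>As a point of comparison, the Normal Equation solution can be computed directly. Below is a minimal
        numpy sketch (it assumes <code>X</code> already contains the extra column of 1s, as above):
    </li>
</ul>
```
# Closed-form OLS solution via the pseudo-inverse: no iterations, no learning rate, no scaling needed
beta_ne = np.linalg.pinv(X.T.dot(X)).dot(X.T).dot(y)
print("beta from the Normal Equation:", beta_ne)
```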
<h1>Non-Convex Functions</h1>
<ul>
<li>The loss function for OLS regression is convex and it has a slope that never changes abruptly.
<ul>
<li>This gives us good 'guarantees' about reaching the minimum
(depending on such things as running for long enough, using a learning rate that isn't too high,
and whether we are using Batch, Mini-Batch or Stochastic Gradient Descent).
</li>
</ul>
</li>
<li>But Gradient Descent is a generic method: you can use it to find the minima of other loss functions.</li>
<li>But not all loss functions are convex, which can cause problems for Gradient Descent:
<figure>
<img src="images/local_minima.png" />
</figure>
<ul>
<li>The algorithm might converge to a local minimum, instead of the global minimum.</li>
<li>It may take a long time to cross a plateau.</li>
</ul>
</li>
<li>What do we do about this?
<ul>
<li>One thing is to prefer Stochastic Gradient Descent (or Mini-Batch Gradient Descent):
because of the way they 'bounce around', they might even escape a
local minimum, and might even get to the global minimum.
</li>
<li>In this context, simulated annealing is also useful: updates start out 'large' allowing these
algorithms to make
progress and even escape local minima; but, over time, updates get smaller, allowing
these algorithms to settle at or near the global minimum.
</li>
<li>But, if you are using simulated annealing and reduce the learning rate too quickly, you may
    still get stuck in a local minimum.
</li>
</ul>
</li>
</ul>
# Photometric monitoring
## Setup
```
%load_ext autoreload
%autoreload 2
import glob as glob
import matplotlib as mpl
import matplotlib.patheffects as PathEffects
import matplotlib.pyplot as plt
import matplotlib.transforms as transforms
import numpy as np
import pandas as pd
import seaborn as sns
import corner
import json
import pathlib
import pickle
import utils
import warnings
from astropy import constants as const
from astropy import units as uni
from astropy.io import ascii, fits
from astropy.time import Time
from mpl_toolkits.axes_grid1 import ImageGrid
# Default figure dimensions
FIG_WIDE = (11, 5)
FIG_LARGE = (8, 11)
# Figure style
sns.set(style="ticks", palette="colorblind", color_codes=True, context="talk")
params = utils.plot_params()
plt.rcParams.update(params)
```
## [Download data](https://www.dropbox.com/sh/74sihxztgd82jjz/AADgB_f5RYc3De3IEioUGAfha?dl=1)
Unzip this into a folder named `data` in the same level as this notebook
## Load
```
dirpath = "data/photometric_act"
mid_transit_times = {
"Transit 1": "2016-06-22 08:18:00",
"Transit 2": "2017-06-10 07:05:00",
"Transit 3": "2018-06-04 07:24:00",
"Transit 4": "2018-06-21 06:56",
"Transit 5": "2018-08-22 03:30:00",
}
# Load processed data
df_stell_data = pd.read_csv(
f"{dirpath}/HATP23_lc_norm_v3.csv",
names=["t_HJD", "t_UT", "f"],
parse_dates=[1],
infer_datetime_format=True,
)
# Load model data
df_stell_model = pd.read_csv(
f"{dirpath}/HATP23_GP_model_Prot7_v3.csv", names=["t_HJD", "f", "f_err"]
)
```
## Plot
```
fig, ax = plt.subplots(figsize=FIG_WIDE)
ax.plot(df_stell_data["t_HJD"], df_stell_data["f"], "r.", alpha=0.5, mew=0)
ax.plot(df_stell_model["t_HJD"], df_stell_model["f"], color="grey")
f_d = df_stell_model["f"] - df_stell_model["f_err"]
f_u = df_stell_model["f"] + df_stell_model["f_err"]
ax.fill_between(df_stell_model["t_HJD"], f_d, f_u, alpha=0.3, lw=0, color="grey")
p_kwargs = {"ls": "--", "c": "darkgrey", "lw": 1.0}
trans = transforms.blended_transform_factory(ax.transData, ax.transAxes)
for transit_name, t0 in mid_transit_times.items():
t_mid = Time(t0).jd - 2.4e6
ax.axvline(t_mid, **p_kwargs)
ax.annotate(
transit_name,
xy=(t_mid, 0.1),
xycoords=trans,
ha="right",
rotation=90.0,
fontsize=12,
)
# Save
ax.set_ylim(0.88, 0.98)
ax.set_xlabel("Date (HJD - 2400000)")
ax.set_ylabel("Flux relative to comparisons")
fig.tight_layout()
fig.set_size_inches(FIG_WIDE)
utils.savefig("../paper/figures/photometric_act/phot_mon_full.pdf")
```
## How to forecast time series in BigQuery ML
This notebook accompanies the article
[How to do time series forecasting in BigQuery](https://towardsdatascience.com/how-to-do-time-series-forecasting-in-bigquery-af9eb6be8159)
## Install library and extensions if needed
You don't need to do this if you use AI Platform Notebooks
```
#!pip install google-cloud-bigquery
%load_ext google.cloud.bigquery
```
## Helper plot functions
```
import matplotlib.pyplot as plt
import pandas as pd
def plot_historical_and_forecast(input_timeseries, timestamp_col_name, data_col_name, forecast_output=None, actual=None):
input_timeseries = input_timeseries.sort_values(timestamp_col_name)
plt.figure(figsize=(20,6))
plt.plot(input_timeseries[timestamp_col_name], input_timeseries[data_col_name], label = 'Historical')
plt.xlabel(timestamp_col_name)
plt.ylabel(data_col_name)
if forecast_output is not None:
forecast_output = forecast_output.sort_values('forecast_timestamp')
forecast_output['forecast_timestamp'] = pd.to_datetime(forecast_output['forecast_timestamp'])
x_data = forecast_output['forecast_timestamp']
y_data = forecast_output['forecast_value']
confidence_level = forecast_output['confidence_level'].iloc[0] * 100
low_CI = forecast_output['confidence_interval_lower_bound']
upper_CI = forecast_output['confidence_interval_upper_bound']
# Plot the data, set the linewidth, color and transparency of the
# line, provide a label for the legend
plt.plot(x_data, y_data, alpha = 1, label = 'Forecast', linestyle='--')
# Shade the confidence interval
plt.fill_between(x_data, low_CI, upper_CI, color = '#539caf', alpha = 0.4, label = str(confidence_level) + '% confidence interval')
# actual
if actual is not None:
actual = actual.sort_values(timestamp_col_name)
plt.plot(actual[timestamp_col_name], actual[data_col_name], label = 'Actual', linestyle='--')
# Display legend
plt.legend(loc = 'upper center', prop={'size': 16})
```
## Plot the time series
The first step, as with any machine learning problem, is to gather the training data and explore it. Assume that we have the data on rentals until mid-June of 2015 and we'd like to predict for the rest of the month. We can gather the past 6 weeks of data using:
```
%%bigquery df
SELECT
CAST(EXTRACT(date from start_date) AS TIMESTAMP) AS date
, COUNT(*) AS numrentals
FROM
`bigquery-public-data`.london_bicycles.cycle_hire
WHERE start_station_name LIKE '%Hyde%' -- all stations in Hyde Park
GROUP BY date
HAVING date BETWEEN '2015-05-01' AND '2015-06-15'
ORDER BY date
```
```
plot_historical_and_forecast(df, 'date', 'numrentals');
```
## Train ARIMA model
We can use this data to train an ARIMA model, telling BigQuery which column is the data column and which one the timestamp column:
```
!bq ls ch09eu || bq mk --location EU ch09eu
```
```
%%bigquery
CREATE OR REPLACE MODEL ch09eu.numrentals_forecast
OPTIONS(model_type='ARIMA',
time_series_data_col='numrentals',
time_series_timestamp_col='date') AS
SELECT
CAST(EXTRACT(date from start_date) AS TIMESTAMP) AS date
, COUNT(*) AS numrentals
FROM
`bigquery-public-data`.london_bicycles.cycle_hire
WHERE start_station_name LIKE '%Hyde%' -- all stations in Hyde Park
GROUP BY date
HAVING date BETWEEN '2015-05-01' AND '2015-06-15'
```
We can get the forecast data using:
```
%%bigquery fcst
SELECT * FROM ML.FORECAST(MODEL ch09eu.numrentals_forecast,
STRUCT(14 AS horizon, 0.9 AS confidence_level))
```
```
plot_historical_and_forecast(df, 'date', 'numrentals', fcst);
```
```
%%bigquery actual
SELECT
CAST(EXTRACT(date from start_date) AS TIMESTAMP) AS date
, COUNT(*) AS numrentals
FROM
`bigquery-public-data`.london_bicycles.cycle_hire
WHERE start_station_name LIKE '%Hyde%' -- all stations in Hyde Park
GROUP BY date
HAVING date BETWEEN '2015-06-16' AND '2015-07-01'
ORDER BY date
```
```
plot_historical_and_forecast(df, 'date', 'numrentals', fcst, actual);
```
## Forecasting a bunch of series
So far, I have been forecasting the overall rental volume for all the bicycle stations in Hyde Park. How do we predict the rental volume for each individual station? Use the time_series_id_col:
```
%%bigquery
CREATE OR REPLACE MODEL ch09eu.numrentals_forecast
OPTIONS(model_type='ARIMA',
time_series_data_col='numrentals',
time_series_timestamp_col='date',
time_series_id_col='start_station_name') AS
SELECT
start_station_name
, CAST(EXTRACT(date from start_date) AS TIMESTAMP) AS date
, COUNT(*) AS numrentals
FROM
`bigquery-public-data`.london_bicycles.cycle_hire
WHERE start_station_name LIKE '%Hyde%' -- all stations in Hyde Park
GROUP BY start_station_name, date
HAVING date BETWEEN '2015-01-01' AND '2015-06-15'
```
Note that instead of training the series on 45 days (May 1 to June 15), I'm now training on a longer time period.
That's because the aggregate time series will tend to be smoother and much easier to predict than the time series
for individual stations. So, we have to show the model a longer trendline.
```
%%bigquery
SELECT *
FROM ML.ARIMA_COEFFICIENTS(MODEL ch09eu.numrentals_forecast)
ORDER BY start_station_name
```
```
%%bigquery fcst
SELECT
*
FROM ML.FORECAST(MODEL ch09eu.numrentals_forecast,
STRUCT(14 AS horizon, 0.9 AS confidence_level))
ORDER By start_station_name, forecast_timestamp
```
```
%%bigquery df
SELECT
start_station_name
, CAST(EXTRACT(date from start_date) AS TIMESTAMP) AS date
, COUNT(*) AS numrentals
FROM
`bigquery-public-data`.london_bicycles.cycle_hire
WHERE start_station_name LIKE '%Hyde%' -- all stations in Hyde Park
GROUP BY start_station_name, date
HAVING date BETWEEN '2015-05-01' AND '2015-06-15' -- this is just for plotting, hence we'll keep this 45 days.
```
```
%%bigquery actual
SELECT
start_station_name
, CAST(EXTRACT(date from start_date) AS TIMESTAMP) AS date
, COUNT(*) AS numrentals
FROM
`bigquery-public-data`.london_bicycles.cycle_hire
WHERE start_station_name LIKE '%Hyde%' -- all stations in Hyde Park
GROUP BY start_station_name, date
HAVING date BETWEEN '2015-06-16' AND '2015-07-01'
```
As you would expect, the aggregated time series over all the stations is much smoother and more predictable than the time series of just one station (the one station data will be more noisy). So, some forecasts will be better than others.
```
%%bigquery stations
SELECT DISTINCT start_station_name
FROM
`bigquery-public-data`.london_bicycles.cycle_hire
WHERE start_station_name LIKE '%Hyde%' -- all stations in Hyde Park
ORDER by start_station_name ASC
```
```
stations
station = stations['start_station_name'].iloc[3] # Hyde Park Corner
print(station)
plot_historical_and_forecast(df[df['start_station_name']==station],
'date', 'numrentals',
fcst[fcst['start_station_name']==station],
actual[actual['start_station_name']==station]);
station = stations['start_station_name'].iloc[6] # Serpentine Car Park,
print(station)
plot_historical_and_forecast(df[df['start_station_name']==station],
'date', 'numrentals',
fcst[fcst['start_station_name']==station],
actual[actual['start_station_name']==station]);
station = stations['start_station_name'].iloc[4] # Knightsbridge
print(station)
plot_historical_and_forecast(df[df['start_station_name']==station],
'date', 'numrentals',
fcst[fcst['start_station_name']==station],
actual[actual['start_station_name']==station]);
```
## Evaluation
As you can see from the graphs above, the predictions accuracy varies by station. Can we gauge how good the prediction for a station is going to be?
```
%%bigquery
SELECT * FROM ML.EVALUATE(MODEL ch09eu.numrentals_forecast)
ORDER BY variance DESC
```
Note that Hyde Park Corner (#0 on the list) is expected to be worse than Serpentine Corner (#5 on the list). That does pan out. But we expected Knightsbridge (#10) to be the best overall, but it appears that this is a case where cycling activity really picked up in an unexpected way.
Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
# Huggingface SageMaker-SDK - GPT2 Fine-tuning example
1. [Introduction](#Introduction)
2. [Development Environment and Permissions](#Development-Environment-and-Permissions)
1. [Installation](#Installation)
2. [Permissions](#Permissions)
3. [Uploading data to sagemaker_session_bucket](#Uploading-data-to-sagemaker_session_bucket)
3. [Fine-tuning & starting Sagemaker Training Job](#Fine-tuning-\&-starting-Sagemaker-Training-Job)
1. [Creating an Estimator and start a training job](#Creating-an-Estimator-and-start-a-training-job)
2. [Download fine-tuned model from s3](#Download-fine-tuned-model-from-s3)
3. [Text Generation on Local](#Text-Generation-on-Local)
# Introduction
This notebook is a modified version of Hugging Face's [run_clm.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_clm.py), adapted to work with Japanese data.
**Apart from the changes needed to handle Japanese data, nothing was modified to run it on SageMaker.**
The data used is the [Japanese wikiHow summarization dataset](https://github.com/Katsumata420/wikihow_japanese).
This demo runs a SageMaker training job using Amazon SageMaker's HuggingFace Estimator.
_**NOTE: This demo has been tested on a SageMaker Notebook instance.**_
# Development Environment and Permissions
## Installation
This notebook uses SageMaker's `conda_pytorch_p36` kernel.
**_Note: To run inference tests in this notebook, you may need to upgrade PyTorch (if your installed version is old)._**
```
!pip install --upgrade pip
!pip install --upgrade torch
!pip install "sagemaker>=2.48.1" "transformers==4.9.2" "datasets[s3]==1.11.0" --upgrade
!pip install sentencepiece
from transformers import T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained("rinna/japanese-gpt2-medium")
tokenizer.do_lower_case = True # due to some bug of tokenizer config loading
```
## Permissions
If you use SageMaker in a local environment, you need access to an IAM role with the permissions SageMaker requires. See [the documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) for details.
```
import sagemaker
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it not exists
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
role = sagemaker.get_execution_role()
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```
# Data preparation
Run `create_wikihow_dataset.ipynb` beforehand to prepare the Japanese wikiHow summarization dataset.
```
import pandas as pd
from tqdm import tqdm
train = pd.read_json('./wikihow_japanese/data/output/train.jsonl', orient='records', lines=True)
train
dev = pd.read_json('./wikihow_japanese/data/output/dev.jsonl', orient='records', lines=True)
dev
with open('train.txt', 'w') as output_file:
for row in tqdm(train.itertuples(), total=train.shape[0]):
src = row.src
tgt = row.tgt
tokens = tokenizer.tokenize(src)
src = "".join(tokens).replace('▁', '')
text = '<s>' + src + '[SEP]' + tgt + '</s>'
output_file.write(text + '\n')
with open('dev.txt', 'w') as output_file:
for row in tqdm(dev.itertuples(), total=dev.shape[0]):
src = row.src
tgt = row.tgt
tokens = tokenizer.tokenize(src)
src = "".join(tokens).replace('▁', '')
text = '<s>' + src + '[SEP]' + tgt + '</s>'
output_file.write(text + '\n')
```
## Uploading data to `sagemaker_session_bucket`
Upload the data to S3.
```
s3_prefix = 'samples/datasets/wikihow'
input_train = sess.upload_data(
path='train.txt',
key_prefix=f'{s3_prefix}/train'
)
input_validation = sess.upload_data(
path='dev.txt',
key_prefix=f'{s3_prefix}/valid'
)
# S3 upload paths for the data
print(input_train)
print(input_validation)
```
# Fine-tuning & starting Sagemaker Training Job
To create a `HuggingFace` training job, you need a `HuggingFace` Estimator.
The Estimator handles the end-to-end Amazon SageMaker training and deployment tasks. In the Estimator you define which fine-tuning script to use as the `entry_point`, which `instance_type` to use, which `hyperparameters` to pass, and so on.
```python
huggingface_estimator = HuggingFace(
entry_point='train.py',
source_dir='./scripts',
base_job_name='huggingface-sdk-extension',
instance_type='ml.p3.2xlarge',
instance_count=1,
transformers_version='4.4',
pytorch_version='1.6',
py_version='py36',
role=role,
hyperparameters={
'epochs': 1,
'train_batch_size': 32,
'model_name':'distilbert-base-uncased'
}
)
```
When you create a SageMaker training job, SageMaker launches and manages the EC2 instances needed to run the `huggingface` container.
It uploads the fine-tuning script `train.py`, downloads the data from the `sagemaker_session_bucket` into the container at `/opt/ml/input/data`, and then runs the training job.
```python
/opt/conda/bin/python train.py --epochs 1 --model_name distilbert-base-uncased --train_batch_size 32
```
The `hyperparameters` defined in the `HuggingFace` estimator are passed as named arguments.
SageMaker also exposes useful properties of the training environment through various environment variables, for example:
* `SM_MODEL_DIR`: a string representing the path where the training job writes model artifacts. After training, the artifacts in this directory are uploaded to S3 for model hosting.
* `SM_NUM_GPUS`: an integer representing the number of GPUs available on the host.
* `SM_CHANNEL_XXXX`: a string representing the path to the directory containing the input data for the given channel. For example, if you specify two input channels named `train` and `test` in the HuggingFace estimator's `fit` call, the environment variables `SM_CHANNEL_TRAIN` and `SM_CHANNEL_TEST` are set.
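For illustration, here is a sketch of how a training script typically reads these values. This is a common pattern in SageMaker examples, not code from this repository; the argument names are illustrative:
```python
import argparse
import os

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    # hyperparameters defined in the estimator arrive as named arguments
    parser.add_argument("--epochs", type=int, default=1)
    parser.add_argument("--train_batch_size", type=int, default=32)
    # properties of the training environment come from SageMaker's environment variables
    parser.add_argument("--model_dir", type=str, default=os.environ.get("SM_MODEL_DIR"))
    parser.add_argument("--train_dir", type=str, default=os.environ.get("SM_CHANNEL_TRAIN"))
    args, _ = parser.parse_known_args()
    print(args)
```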
To run this training job locally, define `instance_type='local'`, or `instance_type='local_gpu'` for GPU (the GPU case needs additional setup; see the [SageMaker documentation](https://sagemaker.readthedocs.io/en/stable/overview.html#local-mode)).
**_Note: this does not work inside SageMaker Studio._**
```
# requirements.txt is executed before the training job runs (use it to add libraries to the container)
# The file is based on: https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/requirements.txt
# The one difference: we pin transformers >= 4.8.0, since the HuggingFace container's version lags behind upstream, so we upgrade it.
!pygmentize ./scripts/requirements.txt
# The code executed by the training job
# Change: AutoTokenizer -> T5Tokenizer
!pygmentize ./scripts/run_clm.py
from sagemaker.huggingface import HuggingFace
# hyperparameters, which are passed into the training job
hyperparameters={
'model_name_or_path':'rinna/japanese-gpt2-medium',
'train_file': '/opt/ml/input/data/train/train.txt',
'validation_file': '/opt/ml/input/data/validation/dev.txt',
'do_train': 'True',
'do_eval': 'True',
'num_train_epochs': 10,
'per_device_train_batch_size': 1,
'per_device_eval_batch_size': 1,
'use_fast_tokenizer': 'False',
'save_steps': 1000,
'save_total_limit': 1,
'output_dir':'/opt/ml/model',
}
```
## Creating an Estimator and start a training job
```
# estimator
huggingface_estimator = HuggingFace(
role=role,
entry_point='run_clm.py',
source_dir='./scripts',
instance_type='ml.p3.8xlarge',
instance_count=1,
volume_size=200,
transformers_version='4.6',
pytorch_version='1.7',
py_version='py36',
hyperparameters=hyperparameters,
)
# starting the train job with our uploaded datasets as input
huggingface_estimator.fit({'train': input_train, 'validation': input_validation})
# Approximate run time with ml.p3.8xlarge and 10 epochs
# Training seconds: 3623
# Billable seconds: 3623
```
## Download fine-tuned model from s3
```
import os
OUTPUT_DIR = './output/'
if not os.path.exists(OUTPUT_DIR):
os.makedirs(OUTPUT_DIR)
from sagemaker.s3 import S3Downloader
# Download the trained model
S3Downloader.download(
s3_uri=huggingface_estimator.model_data, # s3 uri where the trained model is located
local_path='.', # local path where *.targ.gz is saved
sagemaker_session=sess # sagemaker session used for training the model
)
# Extract into OUTPUT_DIR
!tar -zxvf model.tar.gz -C output
```
## Text Generation on Local
```
import torch
from transformers import AutoModelForCausalLM, T5Tokenizer
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = T5Tokenizer.from_pretrained("rinna/japanese-gpt2-medium")
tokenizer.do_lower_case = True # due to some bug of tokenizer config loading
model = AutoModelForCausalLM.from_pretrained('output/')
model.to(device)
model.eval()
def generate_summary(body, num_gen=5):
input_text = '<s>'+body+'[SEP]'
input_ids = tokenizer.encode(input_text, return_tensors='pt').to(device)
out = model.generate(input_ids, do_sample=True, top_p=0.95, top_k=40,
num_return_sequences=num_gen, max_length=1024, bad_words_ids=[[1], [5]])
    print('='*5, 'Original text', '='*5)
    print(body)
    print('-'*5, 'Summary', '-'*5)
for sent in tokenizer.batch_decode(out):
sent = sent.split('</s>')[1]
sent = sent.replace('</s>', '')
print(sent)
body = dev.src[0]
generate_summary(body)
body = dev.src[1]
generate_summary(body)
body = dev.src[2]
generate_summary(body)
```
# Passive Aggressive Regressor with Scale
This code template is for regression analysis using a simple PassiveAggressiveRegressor, a member of the passive-aggressive family of algorithms, with feature rescaling via `scale`. Passive-aggressive algorithms are a group of algorithms for large-scale learning.
### Required Packages
```
import warnings
import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import scale
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
from sklearn.linear_model import PassiveAggressiveRegressor
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ""
```
List of features required for model training.
```
#x_values
features=[]
```
Target feature for prediction.
```
#y_value
target=''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and the head function to display the first few rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
It is the process of reducing the number of input variables when developing a predictive model. Used to reduce the number of input variables to both reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X=df[features]
Y=df[target]
```
### Data Preprocessing
Since most of the machine learning models in the sklearn library do not handle string categorical data or null values, we have to explicitly remove or replace them. The snippet below defines functions that remove null values if any exist and encode string categorical data as integer classes.
```
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
```
Calling preprocessing functions on the feature and target set.
```
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
```
### Data Scaling
`sklearn.preprocessing.scale(X, *, axis=0, with_mean=True, with_std=True, copy=True)`
Standardize a dataset along any axis. Center to the mean and component-wise scale to unit variance.
Read more at [scikit-learn.org](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.scale.html)
```
x_train=scale(x_train)
x_test=scale(x_test)
```
### Model
The passive-aggressive algorithms are a family of algorithms for large-scale learning. They are similar to the Perceptron in that they do not require a learning rate. However, contrary to the Perceptron, they include a regularization parameter C.
> **C** ->Maximum step size (regularization). Defaults to 1.0.
> **max_iter** ->The maximum number of passes over the training data (aka epochs). It only impacts the behavior in the fit method, and not the partial_fit method.
> **tol**->The stopping criterion. If it is not None, the iterations will stop when (loss > previous_loss - tol).
> **early_stopping**->Whether to use early stopping to terminate training when validation. score is not improving. If set to True, it will automatically set aside a fraction of training data as validation and terminate training when validation score is not improving by at least tol for n_iter_no_change consecutive epochs.
> **validation_fraction**->The proportion of training data to set aside as validation set for early stopping. Must be between 0 and 1. Only used if early_stopping is True.
> **n_iter_no_change**->Number of iterations with no improvement to wait before early stopping.
> **shuffle**->Whether or not the training data should be shuffled after each epoch.
> **loss**->The loss function to be used: epsilon_insensitive: equivalent to PA-I in the reference paper. squared_epsilon_insensitive: equivalent to PA-II in the reference paper.
> **epsilon**->If the difference between the current prediction and the correct label is below this threshold, the model is not updated.
```
model = PassiveAggressiveRegressor(random_state=123)
model.fit(x_train,y_train)
```
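For illustration, a sketch (not used for the results above) that sets the parameters described in the list explicitly, including early stopping, could look like this:
```
# Sketch: a PassiveAggressiveRegressor configured explicitly (illustrative values).
model_es = PassiveAggressiveRegressor(
    C=1.0,                        # maximum step size (regularization)
    max_iter=1000,                # maximum passes over the training data
    tol=1e-3,                     # stopping criterion on the loss
    early_stopping=True,          # hold out part of the training data as validation
    validation_fraction=0.1,
    n_iter_no_change=5,
    shuffle=True,
    loss='epsilon_insensitive',   # PA-I variant
    epsilon=0.1,
    random_state=123,
)
model_es.fit(x_train, y_train)
```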
#### Model Accuracy
We will use the trained model to make a prediction on the test set.Then use the predicted value for measuring the accuracy of our model.
score: The score function returns the coefficient of determination R2 of the prediction.
```
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
> **r2_score**: The **r2_score** function computes the coefficient of determination, i.e. the proportion of the variance in the target that is explained by the model.
> **mae**: The **mean absolute error** function calculates the total amount of error (the average absolute distance between the real data and the predicted data) of our model.
> **mse**: The **mean squared error** function squares the errors before averaging them, penalizing the model for large errors.
```
y_pred=model.predict(x_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred)))
```
#### Prediction Plot
First, we plot the actual test observations against the record number.
Then we overlay the model's predictions for the same records so the two can be compared visually.
```
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),y_pred[0:20], color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
```
#### Creator: Arpit Somani , Github: [Profile](https://github.com/arpitsomani8)
# Analysis of trained models and training logs
This notebook shows how to load, process, and analyze logs that are automatically generated during training. It also demonstrates how to make plots to examine performance of a single model or compare performance of multiple models.
Prerequisites:
- To run this example live, you must train at least two models to generate the trained log directories and set the paths below.
Each log directory contains the following:
- args.txt: the arguments fed into regression.py to train the model
- split: the train-tune-test split used to train the model
- final_evaluation.txt: final evaluation metrics (MSE, Pearson's r, Spearman's r, and r^2) on each of the split sets
- predictions: the model's score predictions for every variant in each of the split sets
- the trained model itself: see the inference notebook for more information on how to use this
This codebase provides several convenient functions for loading this log data.
```
# reload modules before executing code in order to make development and debugging easier
%load_ext autoreload
%autoreload 2
# this jupyter notebook is running inside of the "notebooks" directory
# for relative paths to work properly, we need to set the current working directory to the root of the project
# for imports to work properly, we need to add the code folder to the system path
import os
from os.path import abspath, join, isdir, basename
import sys
if not isdir("notebooks"):
# if there's a "notebooks" directory in the cwd, we've already set the cwd so no need to do it again
os.chdir("..")
module_path = abspath("code")
if module_path not in sys.path:
sys.path.append(module_path)
import numpy as np
import pandas as pd
import sklearn.metrics as skm
from scipy.stats import pearsonr, spearmanr
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
import analysis as an
```
# Define the log directories
To run this script live, you must train at least two models. As an example, we are using the avGFP linear regression and fully connected models, trained using the arguments in `pub/regression_args/avgfp_main_lr.txt` and `pub/regression_args/avgfp_main_fc.txt`. You can use these or train your own models. For comparing performance of many trained models, you must write your own function to collect the log directory names; using them with this example is then relatively straightforward.
```
log_dir_lr = "output/training_logs/log_local_local_2020-09-22_22-02-33_avgfp_lr_lr0.0001_bs128_DKPQxV5s"
log_dir_fc = "output/training_logs/log_local_local_2020-09-22_22-02-36_avgfp_fc-3xh100_lr0.0001_bs32_RbLfpQvW"
log_dirs = [log_dir_lr, log_dir_fc]
```
# Loading score predictions (single model)
The utility function uses the dataset tsv as a base and adds columns for the set name (train, tune, test, etc) and the predicted score.
```
ds_lr = an.load_predictions(log_dir_lr)
ds_lr.head()
```
# Loading evaluation metrics (single model)
```
metrics_lr = an.load_metrics(log_dir_lr)
metrics_lr
```
Sometimes it is convenient to have access to other aspects of the model, such as the learning rate and batch size. You can load the regression arguments as a dictionary using `an.load_args()`. Or, you can use `an.load_metrics_and_args` to load both the metrics and arguments in a single dataframe. The combined dataframe is set up so that each row can be a different model, which helps with comparisons between models.
```
met_args_lr = an.load_metrics_and_args(log_dir_lr)
met_args_lr
```
# Evaluating a single model
The dataframe contains variants from all sets (train, tune, test, etc), so if you are interested in a single set, you must select just those variants.
```
# before creating the test-set-only dataframe, add a column with the absolute error, used below
ds_lr["abs_err"] = np.abs(ds_lr["score"] - ds_lr["prediction"])
# create a subset view of the dataframe containing only test set variants
ds_lr_stest = ds_lr.loc[ds_lr.set_name == "stest"]
```
## Scatterplot of predicted vs. true score
```
fig, ax = plt.subplots(1)
sns.scatterplot(x="score", y="prediction", data=ds_lr_stest, ax=ax)
# draw a line of equivalence
x0, x1 = ax.get_xlim()
y0, y1 = ax.get_ylim()
lims = [max(x0, y0), min(x1, y1)]
ax.plot(lims, lims, ':k')
ax.set(ylabel="Predicted score", xlabel="True score", title="Predicted score vs. score (Linear regression)")
plt.show()
plt.close(fig)
```
## Mean absolute error by number of mutations
```
# plot the mean absolute error vs. number of mutations
# can do this more easily with pandas groupby, apply
grouped_mean = ds_lr_stest.groupby("num_mutations", as_index=False).mean()
fig, ax = plt.subplots(1)
sns.stripplot(x="num_mutations", y="abs_err", data=grouped_mean[grouped_mean.num_mutations < 13], ax=ax)
ax.set(ylabel="Mean absolute error", xlabel="Number of mutations", title="Mean absolute error by number of mutations")
plt.show()
plt.close(fig)
```
## Additional evaluation metrics
The regression training script automatically computes a few metrics, but you can also use the true and predicted scores to compute your own. Here, let's recompute Pearson's correlation coefficient and compare it to the same metric computed during training.
```
my_pearsonr = pearsonr(ds_lr_stest["score"], ds_lr_stest["prediction"])[0]
my_pearsonr
# the pearsonr from the metrics dataframe
met_args_lr.loc[0, "stest_pearsonr"]
```
There's a small amount of floating point imprecision, but otherwise the values are identical.
```
np.isclose(my_pearsonr, met_args_lr.loc[0, "stest_pearsonr"])
```
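The same approach works for any other metric. For instance, a short sketch recomputing Spearman's correlation and the mean squared error from the predictions dataframe, using the imports from the top of this notebook:
```
# Additional metrics computed directly from the test-set predictions.
my_spearmanr = spearmanr(ds_lr_stest["score"], ds_lr_stest["prediction"])[0]
my_mse = skm.mean_squared_error(ds_lr_stest["score"], ds_lr_stest["prediction"])
my_spearmanr, my_mse
```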
# Loading score predictions and metrics (multiple models)
The functions used above also accept lists of log directories. For loading predictions, you can optionally specify column names, otherwise the column names will be automatically labeled by number.
```
ds = an.load_predictions(log_dirs, col_names=["lr", "fc"])
ds.head()
```
Loading metrics is also straightforward. Note that `an.load_metrics()` does not support multiple log dirs, only `an.load_metrics_and_args()`.
```
metrics = an.load_metrics_and_args(log_dirs)
metrics
```
# Comparing multiple models
Make multiple scatterplots for different models. Note again, we must subset the dataframe to select our desired train/tune/test set.
```
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(12, 4))
for i, pred_col in enumerate(["lr", "fc"]):
ax = sns.scatterplot(x="score", y=pred_col, data=ds[ds.set_name == "stest"], ax=axes[i])
# draw a line of equivalence
x0, x1 = ax.get_xlim()
y0, y1 = ax.get_ylim()
lims = [max(x0, y0), min(x1, y1)]
ax.plot(lims, lims, ':k')
ax.set(ylabel="Predicted score", xlabel="True score", title="Predicted score vs. score ({})".format(pred_col))
plt.show()
plt.close(fig)
```
Compare performance metrics between models.
```
metrics["parsed_net_file"] = metrics["net_file"].apply(lambda nf: basename(nf).split(".")[0])
fig, ax = plt.subplots(1)
ax = sns.stripplot(x="parsed_net_file", y="stest_pearsonr", data=metrics)
ax.set(xlabel="Network", ylabel="Pearson's r", title="Performance (test set)")
plt.show()
plt.close(fig)
```
```
import numpy as np
import pandas as pd
#import matplotlib.pylab as plt
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import silhouette_score
from sklearn import cluster
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
import seaborn as sns
sns.set()
from sklearn.neighbors import NearestNeighbors
from yellowbrick.cluster import KElbowVisualizer
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.model_selection import train_test_split
from mpl_toolkits.mplot3d import Axes3D
from sklearn.metrics import accuracy_score
%matplotlib inline
plt.rcParams['figure.figsize'] = (16, 9)
plt.style.use('ggplot')
```
## Visualize the data and drop the columns that are not needed
```
dfRead = pd.read_csv('Suma_todasLasSesiones.csv')
df = dfRead.drop(['Sesion','Id'], axis=1)
#df = df[df['Fsm']!=0]
```
## Data filtering
## Histogram of the grades
```
plt.rcParams['figure.figsize'] = (16, 9)
plt.style.use('ggplot')
datos = df.drop(['Nota'],1).hist()
plt.grid(True)
plt.show()
```
## Create the data for the clusters and the categories
```
clusters = df[['Nota']]
X = df.drop(['Nota'],1)
## Normalize the data so that it lies in the (0, 1) range
scaler = MinMaxScaler(feature_range=(0, 1))
x = scaler.fit_transform(X)
```
## Define the methods to use for clustering
```
def clusterDBscan(x):
db = cluster.DBSCAN(eps=0.175, min_samples=5)
db.fit(x)
return db.labels_
def clusterKMeans(x, n_clusters):
return cluster.k_means(x, n_clusters=n_clusters)[1]
```
## Define functions for dimensionality reduction, in case they are needed
```
def reducir_dim(x, ndim):
pca = PCA(n_components=ndim)
return pca.fit_transform(x)
def reducir_dim_tsne(x, ndim):
pca = TSNE(n_components=ndim)
return pca.fit_transform(x)
```
## Plot the candidate numbers of clusters based on the silhouette score
```
def calculaSilhoutter(x, clusters):
res=[]
fig, ax = plt.subplots(1,figsize=(20, 5))
for numCluster in range(2, 7):
res.append(silhouette_score(x, clusterKMeans(x,numCluster )))
ax.plot(range(2, 7), res)
ax.set_xlabel("n clusters")
ax.set_ylabel("silouhette score")
ax.set_title("K-Means")
calculaSilhoutter(x, clusters)
```
## Plot the candidate numbers of clusters based on the elbow method
```
model = KMeans()
visualizer = KElbowVisualizer(model, k=(2,7), metric='calinski_harabasz', timings=False)
visualizer.fit(x) # Fit the data to the visualizer
visualizer.show()
clus_km = clusterKMeans(x, 3)
clus_db = clusterDBscan(x)
def reducir_dataset(x, how):
if how == "pca":
res = reducir_dim(x, ndim=2)
elif how == "tsne":
res = reducir_dim_tsne(x, ndim=2)
else:
return x[:, :2]
return res
results = pd.DataFrame(np.column_stack([reducir_dataset(x, how="tsne"), clusters, clus_km, clus_db]), columns=["x", "y", "clusters", "clus_km", "clus_db"])
def mostrar_resultados(res):
"""Muestra los resultados de los algoritmos
"""
fig, ax = plt.subplots(1, 3, figsize=(20, 5))
sns.scatterplot(data=res, x="x", y="y", hue="clusters", ax=ax[0], legend="full")
ax[0].set_title('Ground Truth')
sns.scatterplot(data=res, x="x", y="y", hue="clus_km", ax=ax[1], legend="full")
ax[1].set_title('K-Means')
sns.scatterplot(data=res, x="x", y="y", hue="clus_db", ax=ax[2], legend="full")
ax[2].set_title('DBSCAN')
mostrar_resultados(results)
kmeans = KMeans(n_clusters=3,init = "k-means++")
kmeans.fit(x)
labels = kmeans.predict(x)
X['Cluster_Km']=labels
dfRead['Cluster_Km']=labels
X.groupby('Cluster_Km').mean()
```
## DBSCAN
```
neigh = NearestNeighbors(n_neighbors=2)
nbrs = neigh.fit(x)
distances, indices = nbrs.kneighbors(x)
distances = np.sort(distances, axis=0)
distances = distances[:,1]
plt.plot(distances)
plt.ylim(0,0.25)
dbscan = cluster.DBSCAN(eps=0.175, min_samples=5)
dbscan.fit(x)
clusterDbscan = dbscan.labels_
X['Cluster_DB']=clusterDbscan
dfRead['Cluster_DB']=clusterDbscan
X.groupby('Cluster_DB').mean()
dfRead
```
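As an optional follow-up (an addition for illustration, not part of the original analysis), the DBSCAN result can also be scored with the silhouette coefficient after excluding the noise points labeled -1:
```
# Silhouette score for DBSCAN clusters, ignoring noise points (label -1).
mask = clusterDbscan != -1
if mask.sum() > 0 and len(np.unique(clusterDbscan[mask])) > 1:
    print(silhouette_score(x[mask], clusterDbscan[mask]))
```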
# Abnormality Detection in Musculoskeletal Radiographs
The objective is to build a machine learning model that can detect an abnormality in X-ray radiographs. Such models can help provide healthcare access in parts of the world where access to skilled radiologists is limited. According to a study on the Global Burden of Disease and the worldwide impact of all diseases, “musculoskeletal conditions affect more than 1.7 billion people worldwide. They are the 2nd greatest cause of disabilities, and have the 4th greatest impact on the overall health of the world population when considering both death and disabilities”. (www.usbji.org, n.d.)
This project implements a deep neural network using DenseNet169, inspired by the Stanford paper (Rajpurkar et al., 2018).
## XR_WRIST Study Type
## Phase 3: Data Preprocessing
As per the paper, I normalized each image to have the same mean and standard deviation as the images in the ImageNet training set. The paper scales the variable-sized images to 320 x 320, but I chose to scale to 224 x 224. I then augmented the data during training by applying random lateral inversions and rotations of up to 30 degrees using Keras' `ImageDataGenerator`.
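For reference, a minimal sketch of what normalizing to the ImageNet statistics would look like (the per-channel mean/std values below are the commonly used ImageNet numbers; note that the preprocessing code later in this notebook instead standardizes with the statistics of the loaded images):
```
import numpy as np

# Commonly used per-channel ImageNet statistics (RGB, for images scaled to [0, 1]).
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def normalize_imagenet(img):
    # img: HxWx3 uint8 image; returns a float32 normalized image.
    return (img.astype(np.float32) / 255.0 - IMAGENET_MEAN) / IMAGENET_STD
```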
```
from keras.applications.densenet import DenseNet169, DenseNet121, preprocess_input
from keras.preprocessing.image import ImageDataGenerator, load_img, image
from keras.models import Sequential, Model, load_model
from keras.layers import Conv2D, MaxPool2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras.callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint, Callback
from keras import regularizers
import pandas as pd
from tqdm import tqdm
import os
import numpy as np
import random
from keras.optimizers import Adam
import keras.backend as K
import cv2
import matplotlib.pyplot as plt
```
### 3.1 Data preprocessing
```
#Utility function to find the list of files in a directory excluding the hidden files.
def listdir_nohidden(path):
for f in os.listdir(path):
if not f.startswith('.'):
yield f
```
### 3.1.1 Creating a csv file containing path to image & csv
```
def create_images_metadata_csv(category,study_types):
"""
This function creates a csv file containing the path of images, label.
"""
image_data = {}
study_label = {'positive': 1, 'negative': 0}
#study_types = ['XR_ELBOW','XR_FINGER','XR_FOREARM','XR_HAND','XR_HUMERUS','XR_SHOULDER','XR_WRIST']
#study_types = ['XR_ELBOW']
i = 0
image_data[category] = pd.DataFrame(columns=['Path','Count', 'Label'])
for study_type in study_types: # Iterate throught every study types
DATA_DIR = 'data/MURA-v1.1/%s/%s/' % (category, study_type)
patients = list(os.walk(DATA_DIR))[0][1] # list of patient folder names
for patient in tqdm(patients): # for each patient folder
for study in os.listdir(DATA_DIR + patient): # for each study in that patient folder
if(study != '.DS_Store'):
label = study_label[study.split('_')[1]] # get label 0 or 1
path = DATA_DIR + patient + '/' + study + '/' # path to this study
for j in range(len(list(listdir_nohidden(path)))):
image_path = path + 'image%s.png' % (j + 1)
image_data[category].loc[i] = [image_path,1, label] # add new row
i += 1
image_data[category].to_csv(category+"_image_data.csv",index = None, header=False)
#New function create image array by study level
def getImagesInArrayNew(train_dataframe):
images = []
labels = []
for i, data in tqdm(train_dataframe.iterrows()):
img = cv2.imread(data['Path'])
# #random rotation
# angle = random.randint(-30,30)
# M = cv2.getRotationMatrix2D((img_width/2,img_height/2),angle,1)
# img = cv2.warpAffine(img,M,(img_width,img_height))
#resize
img = cv2.resize(img,(img_width,img_height))
img = img[...,::-1].astype(np.float32)
images.append(img)
labels.append(data['Label'])
images = np.asarray(images).astype('float32')
#normalization
mean = np.mean(images[:, :, :])
std = np.std(images[:, :, :])
images[:, :, :] = (images[:, :, :] - mean) / std
labels = np.asarray(labels)
return {'images': images, 'labels': labels}
```
#### 3.1.1.1 Variables intialization
```
img_width, img_height = 224, 224
#Keras ImageDataGenerator to load, transform the images of the dataset
BASE_DATA_DIR = 'data/'
IMG_DATA_DIR = 'MURA-v1.1/'
```
### 3.1.2 XR_WRIST ImageDataGenerators
I am going to build a model for every study type and ensemble them, so I am preparing the data per study type for the model to be trained on.
```
train_data_dir = BASE_DATA_DIR + IMG_DATA_DIR + 'train/XR_WRIST'
valid_data_dir = BASE_DATA_DIR + IMG_DATA_DIR + 'valid/XR_WRIST'
train_datagen = ImageDataGenerator(
rotation_range=30,
horizontal_flip=True
)
test_datagen = ImageDataGenerator(
rotation_range=30,
horizontal_flip=True
)
study_types = ['XR_WRIST']
create_images_metadata_csv('train',study_types)
create_images_metadata_csv('valid',study_types)
valid_image_df = pd.read_csv('valid_image_data.csv', names=['Path','Count', 'Label'])
train_image_df = pd.read_csv('train_image_data.csv', names=['Path', 'Count','Label'])
dd={}
dd['train'] = train_image_df
dd['valid'] = valid_image_df
valid_dict = getImagesInArrayNew(valid_image_df)
train_dict = getImagesInArrayNew(train_image_df)
train_datagen.fit(train_dict['images'],augment=True)
test_datagen.fit(valid_dict['images'],augment=True)
validation_generator = test_datagen.flow(
x=valid_dict['images'],
y=valid_dict['labels'],
batch_size = 1
)
train_generator = train_datagen.flow(
x=train_dict['images'],
y=train_dict['labels']
)
```
### 3.2 Building a model
As per the MURA paper, I replaced the fully connected layer with one that has a single output, then applied a sigmoid nonlinearity. In the paper, they optimize a weighted binary cross-entropy loss:
$$L(X, y) = -w_{T,1} \cdot y \log p(Y = 1|X) - w_{T,0} \cdot (1 - y) \log p(Y = 0|X)$$
where $p(Y = 1|X)$ is the probability that the network assigns to the positive label, $w_{T,1} = |N_T| / (|A_T| + |N_T|)$ and $w_{T,0} = |A_T| / (|A_T| + |N_T|)$, with $|A_T|$ and $|N_T|$ the number of abnormal and normal images of study type $T$ in the training set, respectively.
I chose to use the default binary cross-entropy instead. The network is trained with Adam using default parameters, a batch size of 8, and an initial learning rate of 0.0001 that is decayed by a factor of 10 each time the validation loss plateaus after an epoch.
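For completeness, a minimal sketch of the paper's weighted loss in Keras could look as follows (an illustration only; it is not the loss used below):
```
import keras.backend as K

def weighted_bce(w_t1, w_t0):
    # w_t1 = |N_T| / (|A_T| + |N_T|), w_t0 = |A_T| / (|A_T| + |N_T|)
    def loss(y_true, y_pred):
        # Clip predictions away from 0/1 to keep the logs finite.
        y_pred = K.clip(y_pred, K.epsilon(), 1.0 - K.epsilon())
        return -K.mean(w_t1 * y_true * K.log(y_pred)
                       + w_t0 * (1.0 - y_true) * K.log(1.0 - y_pred))
    return loss

# Usage sketch: model.compile(loss=weighted_bce(w_t1, w_t0), optimizer='adam')
```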
### 3.2.1 Model parameters
```
#model parameters for training
#K.set_learning_phase(1)
nb_train_samples = len(train_dict['images'])
nb_validation_samples = len(valid_dict['images'])
epochs = 10
batch_size = 8
steps_per_epoch = nb_train_samples//batch_size
print(steps_per_epoch)
n_classes = 1
def build_model():
base_model = DenseNet169(input_shape=(None, None,3),
weights='imagenet',
include_top=False,
pooling='avg')
# i = 0
# total_layers = len(base_model.layers)
# for layer in base_model.layers:
# if(i <= total_layers//2):
# layer.trainable = True
# i = i+1
x = base_model.output
predictions = Dense(n_classes,activation='sigmoid')(x)
model = Model(inputs=base_model.input, outputs=predictions)
return model
model = build_model()
#Compiling the model
model.compile(loss="binary_crossentropy", optimizer='adam', metrics=['acc', 'mse'])
#callbacks for early stopping incase of reduced learning rate, loss unimprovement
early_stop = EarlyStopping(monitor='val_loss', patience=8, verbose=1, min_delta=1e-4)
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=1, verbose=1, min_lr=0.0001)
callbacks_list = [early_stop, reduce_lr]
```
### 3.2.2 Training the Model
```
#train the module
model_history = model.fit_generator(
train_generator,
epochs=epochs,
workers=0,
use_multiprocessing=False,
steps_per_epoch = nb_train_samples//batch_size,
validation_data=validation_generator,
validation_steps=nb_validation_samples //batch_size,
callbacks=callbacks_list
)
model.save("densenet_mura_rs_v3_xr_wrist.h5")
```
### 3.2.3 Visualizing the model
```
#There was a bug in keras to use pydot in the vis_utils class. In order to fix the bug, i had to comment out line#55 in vis_utils.py file and reload the module
#~/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/keras/utils
from keras.utils import plot_model
from keras.utils.vis_utils import *
import keras
import importlib
importlib.reload(keras.utils.vis_utils)
import pydot
plot_model(model, to_file='images/densenet_archi_xr_shoulder_v3.png', show_shapes=True)
```
### 3.3 Performance Evaluation
```
#Now that we have trained our model, we can see the metrics from the training process
plt.figure(0)
plt.plot(model_history.history['acc'],'r')
plt.plot(model_history.history['val_acc'],'g')
plt.xticks(np.arange(0, 5, 1))
plt.rcParams['figure.figsize'] = (8, 6)
plt.xlabel("Num of Epochs")
plt.ylabel("Accuracy")
plt.title("Training Accuracy vs Validation Accuracy")
plt.legend(['train','validation'])
plt.figure(1)
plt.plot(model_history.history['loss'],'r')
plt.plot(model_history.history['val_loss'],'g')
plt.xticks(np.arange(0, 5, 1))
plt.rcParams['figure.figsize'] = (8, 6)
plt.xlabel("Num of Epochs")
plt.ylabel("Loss")
plt.title("Training Loss vs Validation Loss")
plt.legend(['train','validation'])
plt.figure(2)
plt.plot(model_history.history['mean_squared_error'],'r')
plt.plot(model_history.history['val_mean_squared_error'],'g')
plt.xticks(np.arange(0, 5, 1))
plt.rcParams['figure.figsize'] = (8, 6)
plt.xlabel("Num of Epochs")
plt.ylabel("MSE")
plt.title("Training Loss vs Validation Loss")
plt.legend(['train','validation'])
plt.show()
#Now we evaluate the trained model with the validation dataset and make a prediction.
#The class predicted will be the class with maximum value for each image.
ev = model.evaluate_generator(validation_generator, steps=(nb_validation_samples //batch_size)+1, workers=0, use_multiprocessing=False)
ev[1]
#pred = model.predict_generator(validation_generator, steps=1, batch_size=1, use_multiprocessing=False, max_queue_size=25, verbose=1)
validation_generator.reset()
#pred = model.predict_generator(validation_generator,steps=nb_validation_samples)
pred_batch = model.predict_on_batch(valid_dict['images'])
predictions = []
for p in pred_batch:
if(p > 0.5):
predictions+=[1]
else:
predictions+=[0]
error = np.sum(np.not_equal(predictions, valid_dict['labels'])) / valid_dict['labels'].shape[0]
pred = predictions
print('Confusion Matrix')
from sklearn.metrics import confusion_matrix, classification_report, cohen_kappa_score
import seaborn as sn
cm = confusion_matrix( pred ,valid_dict['labels'])
plt.figure(figsize = (30,20))
sn.set(font_scale=1.4) #for label size
sn.heatmap(cm, annot=True, annot_kws={"size": 20},cmap="YlGnBu") # font size
plt.show()
print()
print('Classification Report')
print(classification_report(valid_dict['labels'], pred, target_names=["0","1"]))
from sklearn.metrics import confusion_matrix, classification_report, cohen_kappa_score
cohen_kappa_score(valid_dict['labels'], pred)
```
### ROC Curve
```
from sklearn.metrics import roc_curve
fpr_keras, tpr_keras, thresholds_keras = roc_curve(valid_dict['labels'], pred_batch)
from sklearn.metrics import auc
auc_keras = auc(fpr_keras, tpr_keras)
plt.figure(1)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr_keras, tpr_keras, label='Keras (area = {:.3f})'.format(auc_keras))
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve')
plt.legend(loc='best')
plt.show()
plt.figure(2)
plt.xlim(0.0, 0.2)
plt.ylim(0.65, 0.9)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr_keras, tpr_keras, label='Keras (area = {:.3f})'.format(auc_keras))
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve (zoomed in at top left)')
plt.legend(loc='best')
plt.show()
```
# D-optimal experiment design: comparing ABPG and Frank-Wolfe
Solve the D-Optimal experiment design problem
$$
\begin{array}{ll}
\textrm{minimize} & F(x):=-\log\left(\det\left(\sum_{i=1}^n x_i V_i V_i^T\right)\right) \\
\textrm{subject to} & \sum_{i=1}^n x_i = 1, \\
& x_i\geq 0, \quad i=1,\ldots,n
\end{array}
$$
where $V_i\in R^m$ for $i=1,\ldots,n$.
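As a quick reference (a standard derivation, not code from the package), each Frank-Wolfe iteration for this objective solves a linear minimization over the simplex, which reduces to selecting the coordinate with the most negative partial derivative:
$$
H(x) = \sum_{i=1}^n x_i V_i V_i^T, \qquad
\frac{\partial F}{\partial x_i} = -V_i^T H(x)^{-1} V_i,
$$
$$
j = \arg\max_i \; V_i^T H(x^k)^{-1} V_i, \qquad
x^{k+1} = (1-\gamma_k)\, x^k + \gamma_k\, e_j,
$$
where $e_j$ is the $j$-th standard basis vector and $\gamma_k$ is the step size.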
Methods compared:
* Original Frank-Wolfe method
* Frank-Wolfe method with away steps
* Bregman Proximal Gradient (BPG) method with adaptive line search
* Accelerated Bregman Proximal Gradient (ABPG) method with gain adaption
```
cd C:\\github\accbpg
import numpy as np
import accbpg
def compare_FW_ABPG(m, n, Nmax, Nskip):
f, h, L, x0Kh = accbpg.D_opt_design(m, n)
x0KY = accbpg.D_opt_KYinit(f.H)
x0Mx = (1-1e-3)*x0KY + 1e-3*x0Kh
_, F_FWKh, _, _, T_FWKh = accbpg.D_opt_FW(f.H, x0Kh, 1e-8, maxitrs=Nmax, verbskip=Nskip)
_, F_FWKY, _, _, T_FWKY = accbpg.D_opt_FW(f.H, x0KY, 1e-8, maxitrs=Nmax, verbskip=Nskip)
_, F_WAKh, _, _, T_WAKh = accbpg.D_opt_FW_away(f.H, x0Kh, 1e-8, maxitrs=Nmax, verbskip=Nskip)
_, F_WAKY, _, _, T_WAKY = accbpg.D_opt_FW_away(f.H, x0KY, 1e-8, maxitrs=Nmax, verbskip=Nskip)
_, F_LSKh, _, T_LSKh = accbpg.BPG(f, h, L, x0Kh, maxitrs=Nmax, linesearch=True, ls_ratio=1.5, verbskip=Nskip)
_, F_LSKY, _, T_LSKY = accbpg.BPG(f, h, L, x0Mx, maxitrs=Nmax, linesearch=True, ls_ratio=1.5, verbskip=Nskip)
_, F_ABKh, _, _, _, T_ABKh = accbpg.ABPG_gain(f, h, L, x0Kh, gamma=2, maxitrs=Nmax, ls_inc=1.5, ls_dec=1.5, restart=True, verbskip=Nskip)
_, F_ABKY, _, _, _, T_ABKY = accbpg.ABPG_gain(f, h, L, x0Mx, gamma=2, maxitrs=Nmax, ls_inc=1.5, ls_dec=1.5, restart=True, verbskip=Nskip)
f_vals = [F_FWKh, F_FWKY, F_WAKh, F_WAKY, F_LSKh, F_LSKY, F_ABKh, F_ABKY]
t_vals = [T_FWKh, T_FWKY, T_WAKh, T_WAKY, T_LSKh, T_LSKY, T_ABKh, T_ABKY]
return f_vals, t_vals
import matplotlib
import matplotlib.pyplot as plt
# Plot required number of iterations and time
matplotlib.rcParams.update({'font.size': 14, 'font.family': 'serif'})
matplotlib.rcParams.update({'text.usetex': True})
labels = [r"FW", r"FW KY", r"FW-away", r"FW-away KY", r"BPG-LS", r"BPG-LS KY", r"ABPG-g", r"ABPG-g KY"]
linestyles=['g-', 'g--', 'k-', 'k--', 'b-.', 'b:', 'r-', 'r--']
F1, T1 = compare_FW_ABPG(100, 10000, 100000, 10000)
plt.subplots(1, 2, figsize=(10, 4))
ax1 = plt.subplot(1, 2, 1)
accbpg.plot_comparisons(ax1, F1, labels, x_vals=[], plotdiff=True, yscale="log", xscale="linear", xlim=[-2000, 100000], ylim=[1e-6, 100],
xlabel=r"Iteration number $k$", ylabel=r"$F(x_k)-F_\star$", legendloc="no", linestyles=linestyles)
ax2 = plt.subplot(1, 2, 2)
T1 = [ts - ts.min() + 1e-3 for ts in T1]
accbpg.plot_comparisons(ax2, F1, labels, x_vals=T1, plotdiff=True, yscale="log", xscale="linear", xlim=[-1, 60], ylim=[1e-6, 100],
xlabel=r"Time (sec)", ylabel=r"$F(x_k)-F_\star$", legendloc=0, linestyles=linestyles)
plt.tight_layout(h_pad=0.5, w_pad=2)
#plt.savefig("C:\github\\accbpg\\figures\\Dopt_compareFW_m100n10000.pdf", bbox_inches="tight")
F2, T2 = compare_FW_ABPG(100, 1000, 100000, 10000)
plt.subplots(1, 2, figsize=(10, 4))
ax1 = plt.subplot(1, 2, 1)
accbpg.plot_comparisons(ax1, F2, labels, x_vals=[], plotdiff=True, yscale="log", xscale="linear", xlim=[-400, 40000], ylim=[1e-6, 100],
xlabel=r"Iteration number $k$", ylabel=r"$F(x_k)-F_\star$", legendloc="no", linestyles=linestyles)
ax2 = plt.subplot(1, 2, 2)
accbpg.plot_comparisons(ax2, F2, labels, x_vals=T2, plotdiff=True, yscale="log", xscale="linear", xlim=[-1, 30], ylim=[1e-6, 100],
xlabel=r"Time (sec)", ylabel=r"$F(x_k)-F_\star$", legendloc="lower right", linestyles=linestyles)
plt.tight_layout(h_pad=0.5, w_pad=2)
#plt.savefig("C:\github\\accbpg\\figures\\Dopt_compareFW_m100n1000.pdf", bbox_inches="tight")
F3, T3 = compare_FW_ABPG(300, 1000, 100000, 10000)
plt.subplots(1, 2, figsize=(10, 4))
ax1 = plt.subplot(1, 2, 1)
accbpg.plot_comparisons(ax1, F3, labels, x_vals=[], plotdiff=True, yscale="log", xscale="linear", xlim=[-100, 5000], ylim=[1e-8,100],
xlabel=r"Iteration number $k$", ylabel=r"$F(x_k)-F_\star$", legendloc=0, linestyles=linestyles)
ax2 = plt.subplot(1, 2, 2)
accbpg.plot_comparisons(ax2, F3, labels, x_vals=T3, plotdiff=True, yscale="log", xscale="linear", xlim=[-0.2, 60], ylim=[1e-8, 100],
xlabel=r"Time (sec)", ylabel=r"$F(x_k)-F_\star$", legendloc=0, linestyles=linestyles)
plt.tight_layout(h_pad=0.5, w_pad=2)
#plt.savefig("C:\github\\accbpg\\figures\\Dopt_compareFW_m300n1000.pdf", bbox_inches="tight")
#F4, T4 = compare_FW_ABPG(800, 2500, 100000, 10000)
F4, T4 = compare_FW_ABPG(400, 1000, 100000, 10000)
plt.subplots(1, 2, figsize=(10, 4))
ax1 = plt.subplot(1, 2, 1)
accbpg.plot_comparisons(ax1, F4, labels, x_vals=[], plotdiff=True, yscale="log", xscale="linear", xlim=[-50, 2000], ylim=[1e-6, 100],
xlabel=r"Iteration number $k$", ylabel=r"$F(x_k)-F_\star$", legendloc="no", linestyles=linestyles)
ax2 = plt.subplot(1, 2, 2)
accbpg.plot_comparisons(ax2, F4, labels, x_vals=T4, plotdiff=True, yscale="log", xscale="linear", xlim=[-1, 60], ylim=[1e-6, 100],
xlabel=r"Time (sec)", ylabel=r"$F(x_k)-F_\star$", legendloc="lower right", linestyles=linestyles)
plt.tight_layout(h_pad=0.5, w_pad=2)
#plt.savefig("C:\github\\accbpg\\figures\\Dopt_compareFW_m400n1000.pdf", bbox_inches="tight")
F5, T5 = compare_FW_ABPG(500, 1000, 100000, 10000)
plt.subplots(1, 2, figsize=(10, 4))
ax1 = plt.subplot(1, 2, 1)
accbpg.plot_comparisons(ax1, F5, labels, x_vals=[], plotdiff=True, yscale="log", xscale="linear", xlim=[-100, 1500], ylim=[1e-6, 2],
xlabel=r"Iteration number $k$", ylabel=r"$F(x_k)-F_\star$", legendloc="no", linestyles=linestyles)
ax2 = plt.subplot(1, 2, 2)
accbpg.plot_comparisons(ax2, F5, labels, x_vals=T5, plotdiff=True, yscale="log", xscale="linear", xlim=[-1, 12], ylim=[1e-6, 2],
xlabel=r"Time (sec)", ylabel=r"$F(x_k)-F_\star$", legendloc="lower center", linestyles=linestyles)
plt.tight_layout(h_pad=0.5, w_pad=2)
#plt.savefig("C:\github\\accbpg\\figures\\Dopt_compareFW_m500n1000.pdf", bbox_inches="tight")
F6, T6 = compare_FW_ABPG(350, 1000, 100000, 10000)
plt.subplots(1, 2, figsize=(10, 4))
ax1 = plt.subplot(1, 2, 1)
accbpg.plot_comparisons(ax1, F6, labels, x_vals=[], plotdiff=True, yscale="log", xscale="linear", xlim=[-50, 2500], ylim=[1e-6, 100],
xlabel=r"Iteration number $k$", ylabel=r"$F(x_k)-F_\star$", legendloc="no", linestyles=linestyles)
ax2 = plt.subplot(1, 2, 2)
accbpg.plot_comparisons(ax2, F6, labels, x_vals=T6, plotdiff=True, yscale="log", xscale="linear", xlim=[-0.1, 10], ylim=[1e-6, 100],
xlabel=r"Time (sec)", ylabel=r"$F(x_k)-F_\star$", legendloc="lower right", linestyles=linestyles)
plt.tight_layout(h_pad=0.5, w_pad=2)
#plt.savefig("C:\github\\accbpg\\figures\\Dopt_compareFW_m350n1000.pdf", bbox_inches="tight")
```
# <center> Practical assignments in digital signal processing </center>
# <center> Lab 4 </center>
# <center> Acoustic features </center>
```
from glob import glob
import hashlib
import IPython.display as ipd
import os
import librosa
import librosa.display
import librosa.filters
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import scipy
import scipy.fft
from sklearn.decomposition import PCA
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
# Function for plotting an audio signal.
def draw_waveform(wav, sr, figsize=(14, 5)):
    # Plot the audio signal in the time domain
plt.figure(figsize=figsize)
plt.grid(True)
librosa.display.waveplot(wav, sr=sr)
plt.show()
# For this assignment we will need the yes/no dataset.
# You can read about the dataset here: https://www.openslr.org/1/
# Let's download it:
!rm -f waves_yesno.tar.gz
!wget -q https://www.openslr.org/resources/1/waves_yesno.tar.gz
# Extract it:
!tar -xzf waves_yesno.tar.gz
# P.S. If for some reason the data did not download,
# you can get it here: https://www.openslr.org/1/
# Load one of the files:
wav, sr = librosa.load("waves_yesno/0_1_0_1_1_1_0_0.wav")
draw_waveform(wav, sr)
ipd.Audio(wav, rate=sr)
```
As you can hear, two words are spoken in this dataset (yes/no in Hebrew). Each file consists of 8 utterances. The word labels are given in the file names.
```
# Build a spectrogram of the loaded file:
stft = librosa.stft(wav)
stft_db = librosa.amplitude_to_db(abs(stft))
plt.figure(figsize=(15, 10))
librosa.display.specshow(stft_db, sr=sr, x_axis='time', y_axis='hz');
```
# Task 0.1: Spectrogram analysis (0.5 points)
1. Look at the spectrogram and try to find features that distinguish an utterance of "yes" from "no".
2. In which frequencies is the main energy of this speech signal located?
1. An utterance of "yes" can be recognized by high signal energy in the high-frequency region (from 1000 to 3000 Hz), while "no" is characterized only by the low-frequency region (below 1000 Hz).
2. The energy of this speech signal lies in the range (0–4000) Hz, with the main part in the range (0–1000) Hz.
# Task 1: Mel scale (1 point)
Plot the spectrogram on the [mel scale](https://en.wikipedia.org/wiki/Mel_scale).
Use the formula introduced by Douglas O'Shaughnessy.
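For reference, O'Shaughnessy's formula, which the `mel` function below implements, maps a frequency $f$ in Hertz to mels as
$$
m(f) = 2595 \, \log_{10}\left(1 + \frac{f}{700}\right).
$$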
```
def mel(spec):
# spec — stft spectrogram
mel_spec = 2595.0 * np.log10(1.0 + spec / 700.0)
return mel_spec
def test_mel():
x = np.random.randint(100, size=(1000, 100))
x_mel = mel(x)
x_hz = 700.0 * (10.0 ** (x_mel / 2595.0) - 1.0)
assert np.allclose(x, x_hz), "TEST Hertz -> Mel -> Hertz failed."
print("All OK!")
test_mel()
```
# Mel filters
Filter Banks (fbanks) are among the most popular acoustic features.
fbanks are computed by applying several triangular filters (number of filters = number of fbanks) to the mel spectrogram. To avoid performing two operations on the spectrogram, the conversion to the mel scale followed by filtering in the mel scale can be replaced by converting the mel filters to the Hertz scale and applying them to the Hertz spectrogram.
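In matrix form, writing $M$ for the filter matrix converted to the Hertz scale and $|S|^2$ for the power spectrogram, the whole computation reduces to a single matrix product:
$$
\mathrm{fbank} = M \, |S|^2.
$$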
## Task 2 (3 points)
Implement a function that computes fbanks.
```
def mel_filters(sr, n_fft, n_mels):
"""
    Build triangular mel filters in the Hertz scale.
:param sr — sample rate
:param n_fft — length of the FFT window
:param n_mels — number of filters
:return mel filters matrix of shape [n_mels, n_fft // 2 + 1]
"""
# Initialize the weights
weights = np.zeros((n_mels, 1 + n_fft // 2))
# Center freqs of each FFT bin
fft_freqs = np.linspace(0, sr / 2, 1 + n_fft // 2)
# "Center freqs" of mel bands — uniformly spaced between limits
mel_freqs = np.linspace(mel(0.0), mel(sr / 2), n_mels + 2)
mel_freqs = 700.0 * (10.0 ** (mel_freqs / 2595.0) - 1.0)
f_diff = np.diff(mel_freqs)
ramps = np.subtract.outer(mel_freqs, fft_freqs)
for i in range(n_mels):
# lower and upper slopes for all bins...
lower = -ramps[i] / f_diff[i]
upper = ramps[i + 2] / f_diff[i + 1]
# ...then intersect them with each other and zero:
weights[i] = np.maximum(0, np.minimum(lower, upper))
enorm = 2.0 / (mel_freqs[2:n_mels + 2] - mel_freqs[:n_mels])
weights *= enorm[:, np.newaxis]
return weights
assert mel_filters(32, 46, 4).shape == (4, 24) and \
mel_filters(65, 45, 5).shape == (5, 23), "Wrong shape."
assert np.allclose(mel_filters(16, 8, 4),
librosa.filters.mel(16, 8, n_mels=4, htk=True))
assert np.allclose(mel_filters(8600, 512, 40),
librosa.filters.mel(8600, 512, n_mels=40, htk=True))
print("All OK!")
def get_fbanks(wav: np.ndarray, sr: int, window_ms=25, step_mc=10, n_fbanks=40):
# wav — input signal
# sr — sample rate
# window_ms — window length in milliseconds
# step_ms — stft step in milliseconds
# n_fbanks — number of filters
# return fbank matrix [n_fbanks, time]
n_fft = window_ms * sr // 1000
hop_length = step_mc * sr // 1000
wav_padded = np.pad(wav, int(n_fft // 2), mode="reflect")
window = scipy.signal.get_window("hann", Nx=n_fft)
spectrogram = np.zeros((n_fft // 2 + 1, len(wav) // hop_length + 1),
dtype=np.complex64)
for i in range(spectrogram.shape[1]):
j = i * hop_length
spectrogram[:, i] = np.abs(
scipy.fft.fft(wav_padded[j:j + n_fft] * window)[:1 + n_fft // 2]
) ** 2
mel_basis = mel_filters(sr=sr, n_fft=n_fft, n_mels=n_fbanks)
return np.dot(mel_basis, spectrogram)
def test_fbank(wav, sr, window_ms=25, step_mc=10, n_fbanks=40):
n_fft = window_ms * sr // 1000
hop_length = step_mc * sr // 1000
fbanks_lib = librosa.feature.melspectrogram(wav, sr, n_fft=n_fft,
hop_length=hop_length,
n_mels=n_fbanks, htk=True)
fbanks = get_fbanks(wav, sr, window_ms=window_ms,
step_mc=step_mc, n_fbanks=n_fbanks)
if fbanks_lib.shape != fbanks.shape:
print("TEST FAILED")
print(f"Shape {fbanks_lib.shape} != {fbanks.shape}.")
if not np.allclose(fbanks_lib, fbanks):
print("TEST FAILED")
print(f"Average diff is {np.mean(np.abs(fbanks_lib - fbanks))}.")
return -1
print("TEST PASSED")
return 0
assert test_fbank(wav[:sr * 1], sr) == 0, "1 sec wav test failed."
assert test_fbank(wav, sr) == 0 , "All wav tests failed."
print("All OK!")
fbanks = get_fbanks(wav, sr)
plt.figure(figsize=(14, 10))
librosa.display.specshow(fbanks, sr=sr, x_axis='time')
plt.ylabel("Filter number")
plt.show()
```
## Task 3 (3 points)
Implement the computation of [MFCC](https://en.wikipedia.org/wiki/Mel-frequency_cepstrum).
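For reference, the `dct` helper below implements the orthonormal DCT-II (equivalent to `scipy.fft.dct(..., type=2, norm='ortho')`):
$$
X_k = \sqrt{\frac{2}{N}} \; c_k \sum_{n=0}^{N-1} S_n \cos\left(\frac{\pi (n + \tfrac{1}{2}) k}{N}\right), \qquad
c_0 = \frac{1}{\sqrt{2}}, \quad c_k = 1 \text{ for } k \geq 1,
$$
and the MFCCs are the first `n_mfcc` rows of this transform applied to the log mel spectrogram.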
```
def dct(S):
"""DCT (type 2)"""
N = S.shape[0]
k = np.arange(N).reshape(1, -1)
n = np.arange(N).reshape(-1, 1)
M = 2 * np.dot(S.T, np.cos(np.pi * (n + 0.5) * k / N)).T
M[0, :] /= np.sqrt(2)
M /= np.sqrt(2 * N)
return M
def get_mfcc(wav: np.ndarray, sr: int, window_ms=25, step_mc=10, n_mfcc=13):
# wav — input signal
# sr — sample rate
# window_ms — window length in milliseconds
# step_ms — stft step in milliseconds
# n_mfcc — number of filters
# return mfcc matrix [n_mfcc, time]
n_fft = window_ms * sr // 1000
hop_length = step_mc * sr // 1000
# Get mel-spectrogram
mel_spec = get_fbanks(wav, sr, window_ms=window_ms, step_mc=step_mc)
# Convert power to decibels
magnitude = np.abs(mel_spec)
log_spec = 10.0 * np.log10(np.maximum(1e-10, magnitude))
log_spec = np.maximum(log_spec, log_spec.max() - 80.0)
# Apply discrete cosine transform (type 2)
mfcc = dct(log_spec)[:n_mfcc]
return mfcc
def test_mfcc(wav, sr, window_ms=25, step_mc=10, n_mfcc=13):
n_fft = window_ms * sr // 1000
hop_length = step_mc * sr // 1000
mfcc_lib = librosa.feature.mfcc(wav, sr, n_fft=n_fft, hop_length=hop_length,
n_mels=40, n_mfcc=n_mfcc, htk=True)
mfcc = get_mfcc(wav, sr, window_ms=window_ms,
step_mc=step_mc, n_mfcc=n_mfcc)
if mfcc_lib.shape != mfcc.shape:
print("TEST FAILED")
print(f"Shape {mfcc_lib.shape} != {mfcc.shape}.")
if not np.allclose(mfcc_lib, mfcc, atol=1e-4):
print("TEST FAILED")
print(f"Average diff is {np.mean(np.abs(mfcc_lib - mfcc))}.")
return -1
print("TEST PASSED")
return 0
assert test_mfcc(wav[:sr * 1], sr) == 0, "1 sec wav test failed."
assert test_mfcc(wav, sr) == 0 , "All wav tests failed."
print("All OK!")
mfcc = get_mfcc(wav, sr)
plt.figure(figsize=(15, 10))
librosa.display.specshow(mfcc, sr=sr, x_axis='time')
plt.ylabel("Filter number")
plt.show()
```
# Word classification
Let's build a simple system that classifies the words yes/no.
Load the whole dataset:
```
def load_yn_dataset(directory):
X, labels = [], []
for f in glob(directory + "/*.wav"):
name = os.path.basename(f)[:-4]
y = [int(l) for l in name.split("_")]
x, _ = librosa.load(f)
X.append(x)
labels.append(y)
return X, labels
X, Y = load_yn_dataset("waves_yesno/")
```
Hold out 20% for testing:
```
X_train, X_test, Y_train, Y_test = train_test_split(
X, Y, test_size=0.2, random_state=1
)
# 6-th sample of X_test is corrupted, let's remove it:
X_test = X_test[:6] + X_test[7:]
Y_test = Y_test[:6] + Y_test[7:]
```
## Task 4* (1 point)
A Voice Activity Detector (VAD) determines whether the current frame contains speech.
Implement a simple VAD.
Tune the VAD so that word boundaries are detected well.
```
def moving_average(data, window):
"""Moving average filter"""
return np.convolve(data, np.ones(window), "same") / window
def detect_va(x, sr=22050):
"""Voice activity detector"""
mfcc = get_mfcc(x, sr=sr)
# Moving average filter and level adjusting:
mfcc_smoothed = moving_average(mfcc[1], 10) - 125
vad = np.zeros_like(mfcc_smoothed)
vad[mfcc_smoothed > 0] = 1
return vad
plt.figure(figsize=(14, 3))
plt.plot(detect_va(X_train[6]));
# train_VA: 1 — voice, 0 - silence
# test_VA: 1 - voice, 0 - silence
train_VA = [detect_va(x) for x in X_train]
test_VA = [detect_va(x) for x in X_test]
def test_VAD(VA, Y):
def check_diff(diff, num_words):
if diff.sum() != 0:
print("VAD detected speech at the beginning (or end) of audio.")
return -1
if not (diff > 0).sum() == num_words:
print("Wrong number of words. Each audio contains 8 words.")
return -2
return 0
for i, (va, y) in enumerate(zip(VA, Y)):
diff = va[1:] - va[:-1]
assert check_diff(diff, len(y)) == 0, f"Bad {i}-th example."
test_VAD(train_VA, Y_train)
test_VAD(test_VA, Y_test)
```
## Task 5* (2 points)
Train a classifier that determines which word was spoken. Use the VAD to split the input files into separate words. The classification can be done, for example, with an SVM on the averaged features of the extracted words, or in any other way you find convenient.
```
def prepare_dataset(x, vad, y):
train, target = [], []
for i, (xi, vai, yi) in enumerate(zip(x, vad, y)):
mfcc = get_mfcc(xi, sr=22050)
# Get indices of VAD changing values:
indices = np.where(vai[:-1] != vai[1:])[0] + 1
# Extract speech parts from signal:
for j in range(0, len(indices), 2):
sample = np.mean(mfcc[:, indices[j]:indices[j + 1]], axis=1)
train.append(sample)
target.append(yi[j // 2])
return np.array(train), np.array(target)
x_train, y_train = prepare_dataset(X_train, train_VA, Y_train)
x_test, y_test = prepare_dataset(X_test, test_VA, Y_test)
print(f"Train set class balance: " \
f"{len(y_train[y_train == 1]) / len(y_train) * 100:.2f}%")
print(f"Test set class balance: " \
f"{len(y_test[y_test == 1]) / len(y_test) * 100:.2f}%")
```
The classes are well balanced, so we can use accuracy as the evaluation metric.
```
clf = make_pipeline(StandardScaler(),
LinearSVC(random_state=44, tol=1e-5))
clf.fit(x_train, y_train)
print(f"Accuracy: {accuracy_score(y_test, clf.predict(x_test)) * 100:.2f}%.")
```
Wow, the model achieved 100% accuracy! Let's apply PCA and check whether the samples can really be perfectly separated by some decision boundary.
```
pca = PCA(n_components=2)
y_pca = pca.fit_transform(x_train)
plt.figure(figsize=(14, 6))
plt.scatter(y_pca[:, 0], y_pca[:, 1], c=y_train)
plt.grid()
plt.show()
```
The data really is perfectly separable, so it's alright to have 100% accuracy.
# Federated PyTorch TinyImageNet Tutorial
## Using low-level Python API
# Long-Living entities update
* We may now have a Director running on another machine.
* We use the Federation API to communicate with the Director.
* The Federation object should hold a Director's client (for user service).
* Keep in mind that several API instances may be connected to one Director.
* We do not consider for now how the Director is started.
* But it knows the data shape and target shape for the DataScience problem in the Federation.
* The Director holds the list of connected Envoys; we do not need to specify it anymore.
* The Director and Envoys are responsible for encrypting connections; we do not need to worry about certs.
* Yet we MUST have a cert to communicate with the Director.
* We MUST know the FQDN of a Director.
* Director communicates data and target shape to the Federation interface object.
* Experiment API may use this info to construct a dummy dataset and a `shard descriptor` stub.
```
!pip install torchvision==0.8.1
```
## Connect to the Federation
```
# Create a federation
from openfl.interface.interactive_api.federation import Federation
# please use the same identificator that was used in signed certificate
client_id = 'api'
cert_dir = 'cert'
director_node_fqdn = 'localhost'
# 1) Run with API layer - Director mTLS
# If the user wants to enable mTLS, they must provide the CA root chain and a signed key pair to the federation interface
# cert_chain = f'{cert_dir}/root_ca.crt'
# api_certificate = f'{cert_dir}/{client_id}.crt'
# api_private_key = f'{cert_dir}/{client_id}.key'
# federation = Federation(client_id=client_id, director_node_fqdn=director_node_fqdn, director_port='50051',
# cert_chain=cert_chain, api_cert=api_certificate, api_private_key=api_private_key)
# --------------------------------------------------------------------------------------------------------------------
# 2) Run with TLS disabled (trusted environment)
# Federation can also determine local fqdn automatically
federation = Federation(client_id=client_id, director_node_fqdn=director_node_fqdn, director_port='50051', tls=False)
federation.target_shape
shard_registry = federation.get_shard_registry()
shard_registry
# First, request a dummy_shard_desc that holds information about the federated dataset
dummy_shard_desc = federation.get_dummy_shard_descriptor(size=10)
dummy_shard_dataset = dummy_shard_desc.get_dataset('train')
sample, target = dummy_shard_dataset[0]
print(sample.shape)
print(target.shape)
```
## Creating a FL experiment using Interactive API
```
from openfl.interface.interactive_api.experiment import TaskInterface, DataInterface, ModelInterface, FLExperiment
```
### Register dataset
```
import torchvision
from torchvision import transforms as T
normalize = T.Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]
)
augmentation = T.RandomApply(
[T.RandomHorizontalFlip(),
T.RandomRotation(10),
T.RandomResizedCrop(64)],
p=.8
)
training_transform = T.Compose(
[T.Lambda(lambda x: x.convert("RGB")),
augmentation,
T.ToTensor(),
normalize]
)
valid_transform = T.Compose(
[T.Lambda(lambda x: x.convert("RGB")),
T.ToTensor(),
normalize]
)
from torch.utils.data import Dataset
class TransformedDataset(Dataset):
"""Image Person ReID Dataset."""
def __init__(self, dataset, transform=None, target_transform=None):
"""Initialize Dataset."""
self.dataset = dataset
self.transform = transform
self.target_transform = target_transform
def __len__(self):
"""Length of dataset."""
return len(self.dataset)
def __getitem__(self, index):
img, label = self.dataset[index]
label = self.target_transform(label) if self.target_transform else label
img = self.transform(img) if self.transform else img
return img, label
class TinyImageNetDataset(DataInterface):
def __init__(self, **kwargs):
self.kwargs = kwargs
@property
def shard_descriptor(self):
return self._shard_descriptor
@shard_descriptor.setter
def shard_descriptor(self, shard_descriptor):
"""
Describe per-collaborator procedures or sharding.
This method will be called during a collaborator initialization.
Local shard_descriptor will be set by Envoy.
"""
self._shard_descriptor = shard_descriptor
self.train_set = TransformedDataset(
self._shard_descriptor.get_dataset('train'),
transform=training_transform
)
self.valid_set = TransformedDataset(
self._shard_descriptor.get_dataset('val'),
transform=valid_transform
)
def get_train_loader(self, **kwargs):
"""
Output of this method will be provided to tasks with optimizer in contract
"""
return DataLoader(
self.train_set, num_workers=8, batch_size=self.kwargs['train_bs'], shuffle=True
)
def get_valid_loader(self, **kwargs):
"""
Output of this method will be provided to tasks without optimizer in contract
"""
return DataLoader(self.valid_set, num_workers=8, batch_size=self.kwargs['valid_bs'])
def get_train_data_size(self):
"""
Information for aggregation
"""
return len(self.train_set)
def get_valid_data_size(self):
"""
Information for aggregation
"""
return len(self.valid_set)
fed_dataset = TinyImageNetDataset(train_bs=64, valid_bs=64)
```
### Describe the model and optimizer
```
import os
import glob
from torch.utils.data import Dataset, DataLoader
from PIL import Image
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
"""
MobileNetV2 model
"""
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.model = torchvision.models.mobilenet_v2(pretrained=True)
self.model.requires_grad_(False)
self.model.classifier[1] = torch.nn.Linear(in_features=1280, \
out_features=200, bias=True)
def forward(self, x):
x = self.model.forward(x)
return x
model_net = Net()
params_to_update = []
for param in model_net.parameters():
if param.requires_grad == True:
params_to_update.append(param)
optimizer_adam = optim.Adam(params_to_update, lr=1e-4)
def cross_entropy(output, target):
"""Binary cross-entropy metric
"""
return F.cross_entropy(input=output,target=target)
```
### Register model
```
from copy import deepcopy
framework_adapter = 'openfl.plugins.frameworks_adapters.pytorch_adapter.FrameworkAdapterPlugin'
model_interface = ModelInterface(model=model_net, optimizer=optimizer_adam, framework_plugin=framework_adapter)
# Save the initial model state
initial_model = deepcopy(model_net)
```
## Define and register FL tasks
```
task_interface = TaskInterface()
import torch
import tqdm
# The Interactive API supports registering functions defined in the main module or imported.
def function_defined_in_notebook(some_parameter):
print(f'Also I accept a parameter and it is {some_parameter}')
# Task interface currently supports only standalone functions.
@task_interface.add_kwargs(**{'some_parameter': 42})
@task_interface.register_fl_task(model='net_model', data_loader='train_loader', \
device='device', optimizer='optimizer')
def train(net_model, train_loader, optimizer, device, loss_fn=cross_entropy, some_parameter=None):
device = torch.device('cuda')
if not torch.cuda.is_available():
device = 'cpu'
function_defined_in_notebook(some_parameter)
train_loader = tqdm.tqdm(train_loader, desc="train")
net_model.train()
net_model.to(device)
losses = []
for data, target in train_loader:
data, target = torch.tensor(data).to(device), torch.tensor(
target).to(device)
optimizer.zero_grad()
output = net_model(data)
loss = loss_fn(output=output, target=target)
loss.backward()
optimizer.step()
losses.append(loss.detach().cpu().numpy())
return {'train_loss': np.mean(losses),}
@task_interface.register_fl_task(model='net_model', data_loader='val_loader', device='device')
def validate(net_model, val_loader, device):
device = torch.device('cuda')
net_model.eval()
net_model.to(device)
val_loader = tqdm.tqdm(val_loader, desc="validate")
val_score = 0
total_samples = 0
with torch.no_grad():
for data, target in val_loader:
samples = target.shape[0]
total_samples += samples
data, target = torch.tensor(data).to(device), \
torch.tensor(target).to(device, dtype=torch.int64)
output = net_model(data)
pred = output.argmax(dim=1,keepdim=True)
val_score += pred.eq(target).sum().cpu().numpy()
return {'acc': val_score / total_samples,}
```
## Time to start a federated learning experiment
```
# create an experiment in the federation
experiment_name = 'tinyimagenet_test_experiment'
fl_experiment = FLExperiment(federation=federation, experiment_name=experiment_name)
# The following command zips the workspace and python requirements to be transferred to collaborator nodes
fl_experiment.start(
model_provider=model_interface,
task_keeper=task_interface,
data_loader=fed_dataset,
rounds_to_train=5,
opt_treatment='CONTINUE_GLOBAL'
)
# If the user wants to stop the IPython session, they can later reconnect and check how the experiment is going
# fl_experiment.restore_experiment_state(model_interface)
fl_experiment.stream_metrics(tensorboard_logs=False)
```
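Once training completes, it is often useful to pull the aggregated model back into the notebook and compare it against the initial snapshot we saved earlier. A minimal sketch, assuming this OpenFL release exposes `get_best_model()` on the experiment object:
```
# Assumption: FLExperiment.get_best_model() is available in this OpenFL release.
best_model = fl_experiment.get_best_model()
# Sanity check: the fine-tuned classifier weights should differ from the snapshot.
before = initial_model.state_dict()['model.classifier.1.weight']
after = best_model.state_dict()['model.classifier.1.weight']
print('Classifier weights changed:', not torch.equal(before, after))
```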
# Acquire Sentinel-2 MSI Data for California
This notebook is used for gathering data on California from the Sentinel-2 satellites. Specifically, we are looking to acquire the surface reflectance data (atmosphere corrected - level 2a), as that is what we used for baseline model testing and evaluation with the BigEarthNet data. We will gather data from a variety of geographic areas, but most of our focus will be on the agricultural regions of California (e.g., the Central Valley).
```
## ee package
!pip install earthengine-api --upgrade
!pip install Shapely
!pip install folium
import time
from math import sin, cos, sqrt, atan2, radians
import pandas as pd
import ee #https://developers.google.com/earth-engine/guides/python_install
from shapely.geometry import box
import folium
import matplotlib.pyplot as plt
import numpy as np
from scipy.signal import find_peaks
def authenticate():
# Trigger the authentication flow.
ee.Authenticate()
class MSICalifornia():
def __init__(self, center_lat=43.771114, center_lon=-116.736866, edge_len=0.005, year=2019):
'''
Parameters:
center_lat: latitude for the location coordinate
center_lon: longitude for the location coordinate
edge_len: edge length in degrees for the rectangle given the location coordinates
year: year the satellite data should pull images for
'''
# Initialize the library.
ee.Initialize()
        # Error handle parameter issues
        # (note: no exit() calls are needed after raise - that code is unreachable)
        if center_lat >= -90 and center_lat <= 90:
            self.center_lat = center_lat
        else:
            raise ValueError('Please enter a float value for latitude between -90 and 90')
        if center_lon >= -180 and center_lon <= 180:
            self.center_lon = center_lon
        else:
            raise ValueError('Please enter a float value for longitude between -180 and 180')
        if (type(edge_len) == float and (edge_len <= 0.5 and edge_len >= 0.005)):
            self.edge_len = edge_len
        else:
            raise ValueError('Please enter a float value for edge length between 0.005 and 0.5')
        # (valid range is 2017 to the year prior to the current year)
        if ((type(year) == int) and (year >= 2017 and year <= int(time.strftime("%Y")) - 1)):
            self.year = year
        else:
            raise ValueError(
                'Please enter an integer value for year >= 2017 and less than the current year')
# initialize remaining variables
self.label = []
self.comment = dict()
self.image = ee.Image()
self.simple_image = ee.Image()
self.base_asset_directory = None
# Create the bounding box using GEE API
self.aoi_ee = self.__create_bounding_box_ee()
# Estimate the area of interest
self.dist_lon = self.__calc_distance(
self.center_lon - self.edge_len / 2, self.center_lat, self.center_lon + self.edge_len / 2, self.center_lat)
self.dist_lat = self.__calc_distance(
self.center_lon, self.center_lat - self.edge_len / 2, self.center_lon, self.center_lat + self.edge_len / 2)
print('The selected area is approximately {:.2f} km by {:.2f} km'.format(
self.dist_lon, self.dist_lat))
self.model_projection = "EPSG:3857"
def __create_bounding_box_ee(self):
'''Creates a rectangle for pulling image information using center coordinates and edge_len'''
return ee.Geometry.Rectangle([self.center_lon - self.edge_len / 2, self.center_lat - self.edge_len / 2, self.center_lon + self.edge_len / 2, self.center_lat + self.edge_len / 2])
def __create_bounding_box_shapely(self):
'''Returns a box for coordinates to plug in as an image add-on layer'''
return box(self.center_lon - self.edge_len / 2, self.center_lat - self.edge_len / 2, self.center_lon + self.edge_len / 2, self.center_lat + self.edge_len / 2)
@staticmethod
def __calc_distance(lon1, lat1, lon2, lat2):
'''Calculates the distance between 2 coordinates'''
# Reference: https://stackoverflow.com/questions/19412462/getting-distance-between-two-points-based-on-latitude-longitude
# approximate radius of earth in km
R = 6373.0
lon1 = radians(lon1)
lat1 = radians(lat1)
lon2 = radians(lon2)
lat2 = radians(lat2)
dlon = lon2 - lon1
dlat = lat2 - lat1
a = sin(dlat / 2)**2 + cos(lat1) * cos(lat2) * sin(dlon / 2)**2
c = 2 * atan2(sqrt(a), sqrt(1 - a))
distance = R * c
return distance
def pull_Sentinel2_data(self):
        # 10 of the 13 spectral bands are retained. Band 10 has no surface reflectance per
        # http://bigearth.net/static/documents/BigEarthNet_IGARSS_2019.pdf
        # Also, the baseline model only used the 10 m and 20 m bands, so bands 1 and 9
        # are dropped (matching the band list used for export below).
        band_names = ['B2', 'B3', 'B4', 'B5',
                      'B6', 'B7', 'B8', 'B8A',
                      'B11', 'B12']
        random_month = np.random.randint(1, 13)
        start_date = f'{self.year}-{random_month:02d}-01'
        if random_month != 12:
            end_date = f'{self.year}-{random_month + 1:02d}-01'
        else:
            end_date = f'{self.year + 1}-01-01'
self.Sentinel_MSI = (ee.ImageCollection('COPERNICUS/S2_SR')
.filterDate(start_date, end_date)
.filterBounds(self.aoi_ee)
.select(band_names)
.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 1))
.median().clip(self.aoi_ee))
return random_month
def plot_map(self):
        '''Plot a folium map using the GEE API - the map includes the area-of-interest box and the Sentinel-2 RGB layer'''
def add_ee_layer(self, ee_object, vis_params, show, name):
            '''Checks whether the EE object is an Image, ImageCollection, Geometry or FeatureCollection
            and adds it to the folium map accordingly'''
try:
if isinstance(ee_object, ee.image.Image):
map_id_dict = ee.Image(ee_object).getMapId(vis_params)
folium.raster_layers.TileLayer(
tiles=map_id_dict['tile_fetcher'].url_format,
attr='Google Earth Engine',
name=name,
overlay=True,
control=True,
show=show
).add_to(self)
elif isinstance(ee_object, ee.imagecollection.ImageCollection):
ee_object_new = ee_object.median()
map_id_dict = ee.Image(ee_object_new).getMapId(vis_params)
folium.raster_layers.TileLayer(
tiles=map_id_dict['tile_fetcher'].url_format,
attr='Google Earth Engine',
name=name,
overlay=True,
control=True,
show=show
).add_to(self)
elif isinstance(ee_object, ee.geometry.Geometry):
folium.GeoJson(
data=ee_object.getInfo(),
name=name,
overlay=True,
control=True
).add_to(self)
elif isinstance(ee_object, ee.featurecollection.FeatureCollection):
ee_object_new = ee.Image().paint(ee_object, 0, 2)
map_id_dict = ee.Image(ee_object_new).getMapId(vis_params)
folium.raster_layers.TileLayer(
tiles=map_id_dict['tile_fetcher'].url_format,
attr='Google Earth Engine',
name=name,
overlay=True,
control=True,
show=show
).add_to(self)
except:
print("Could not display {}".format(name))
# Add EE drawing method to folium.
folium.Map.add_ee_layer = add_ee_layer
myMap = folium.Map(location=[self.center_lat, self.center_lon], zoom_start=11)
#aoi_shapely = self.__create_bounding_box_shapely()
#folium.GeoJson(aoi_shapely, name="Area of Interest").add_to(myMap)
# Add Sentinel-2 RGB quarterly layers
start = time.time()
visParams = {'max': 3000}
        # Add the MSI RGB layer for the selected month
myMap.add_ee_layer(self.Sentinel_MSI.select(['B2','B3','B4']), visParams, show=False, name="Sentinel2A")
end = time.time()
print("ADDED S2 RGB LAYERS \t\t--> " + str(round((end - start) / 60, 2)) + " min")
return myMap
def write_image_google_drive(self, filename):
'''Writes predicted image out as an image to Google Drive as individual TIF files
They will need to be combined after the fact'''
bands = ['B2', 'B3', 'B4', 'B5',
'B6', 'B7', 'B8', 'B8A',
'B11', 'B12']
tasks = []
for band in bands:
tasks.append(ee.batch.Export.image.toDrive(
crs=self.model_projection,
region=self.aoi_ee,
image=self.Sentinel_MSI.select(band),
description=f'{filename}_msi_{band}',
scale = 10,
maxPixels=1e13))
print(f"Writing To Google Drive filename = {filename}.tif")
for t in tasks:
t.start()
authenticate()
```
## Degree to distance calculation
- One degree of latitude equals approximately 364,080 feet (69 miles), one minute equals 6,068 feet (1.15 miles), and one second equals 101 feet.
- One degree of longitude equals 288,200 feet (54.6 miles), one minute equals 4,800 feet (0.91 mile), and one second equals 80 feet.
- 1.60934 km per mile
- 9748 square kilometers per squared degree
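The snippet below is a quick sanity check of these conversion figures (approximations only; longitude spacing shrinks toward the poles):
```
# Rough degree-to-distance sanity check using the figures listed above.
KM_PER_MILE = 1.60934
deg_lat_km = 69 * KM_PER_MILE      # ~111 km per degree of latitude
deg_lon_km = 54.6 * KM_PER_MILE    # ~88 km per degree of longitude (mid-latitudes)
edge = 0.25
print(f'{edge} deg is about {edge * deg_lat_km:.1f} km N-S and {edge * deg_lon_km:.1f} km E-W')
print(f'~{deg_lat_km * deg_lon_km:.0f} square km per square degree')
```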
```
# Latitude and Longitude of center point
edge_len = 0.25
# Grab Central Valley region of California
# Fresno to Bakersfield
#lat_range = np.arange(35.125,35.375,edge_len)
#lon_range = np.arange(-119.875,-119.625,edge_len)
#lat_range = np.arange(35.125,35.375,edge_len)
#lon_range = np.arange(-119.125,-118.875,edge_len)
# Sacramento to Merced
#lat_range = np.arange(37.125,37.375,edge_len)
#lon_range = np.arange(-121.125,-120.875,edge_len)
# Calexico Region
#lat_range = np.arange(32.625,33.375,edge_len)
#lon_range = np.arange(-115.875,-114.875,edge_len)
# North of Sacramento
lat_range = np.arange(39.875, 40.125,edge_len)
lon_range = np.arange(-122.125,-121.875,edge_len)
year = 2019
# Iterate over range of lats and longs
for lat in lat_range:
for lon in lon_range:
print(f'Evaluating irrigation at {lat}, {lon}')
# Instantiate the model
model = MSICalifornia(
center_lat=lat,
center_lon=lon,
edge_len=edge_len,
year=year)
month = model.pull_Sentinel2_data()
base_filename = f'S2SR_{month}_{year}_{lat}_{lon}'
model.write_image_google_drive(base_filename)
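# (Assumption) Earth Engine export tasks can be polled while they run, e.g.:
# for task in ee.batch.Task.list():
#     print(task.status()['state'])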
%%time
model.plot_map()
rgb_img = model.Sentinel_MSI.select(['B2', 'B3', 'B4', 'B5',
'B6', 'B7', 'B8', 'B8A',
'B11', 'B12'])
rgb_img.getInfo()
model.Sentinel_MSI.bandTypes().getInfo()
```
#Price Momentum Factor Algorithm
By Gil Wassermann
Strategy taken from "130/30: The New Long-Only" by Andrew Lo and Pankaj Patel
Part of the Quantopian Lecture Series:
* www.quantopian.com/lectures
* github.com/quantopian/research_public
Notebook released under the Creative Commons Attribution 4.0 License. Please do not remove this attribution.
Let us imagine that we are traders at a large bank, watching our screens as stock prices fluctuate up and down. Suddenly, everyone around us is buying one particular security. Demand has increased, so the stock price increases. We panic. Is there some information that we missed out on? Are we out of the loop? In our panic, we blindly decide to buy some shares so we do not miss the boat on the next big thing. Demand further increases as a result of the hype surrounding the stock, driving the price up even more.
Now let us take a step back. From the observational perspective of a quant, the price of the security is increasing because of the animal spirits of investors. In essence, the price is going up because the price is going up. As quants, if we can identify these irrational market forces, we can profit from them.
In this notebook we will go step-by-step through the construction of an algorithm to find and trade equities experiencing momentum in price.
First, let us import all the necessary libraries for our algorithm.
```
import numpy as np
import pandas as pd
from scipy.signal import argrelmin, argrelmax
import statsmodels.api as sm
import talib
import matplotlib.pyplot as plt
from quantopian.pipeline.classifiers.morningstar import Sector
from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import Latest
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.research import run_pipeline
from quantopian.pipeline.data import morningstar
from quantopian.pipeline.factors import CustomFactor
```
#Price Momentum
In this notebook, we will use indicators outlined in "130/30: The New Long-Only" by Andrew Lo and Pankaj Patel and combine them to create a single factor. It should be clarified that we are looking for long-term momentum as opposed to intra-day momentum. These indicators are:
* Slope of the 52-Week Trendline (20-day Lag)
* Percent Above 260-Day Low (20-day Lag)
* 4/52-Week Oscillator (20-day Lag)
* 39-Week Return (20-day Lag)
##Lag
One thing that all of the indicators have in common is that they are calculated using a 20-day lag. This lag is a way of smoothing out the stock signal so that we can filter out noise and focus on concrete, underlying trends. To calculate the lag, we take our desired data series and compute its 20-day simple moving average, which is the arithmetic mean of a window of the series' last 20 entries.
Let's see an example of this for the closing price of Apple (AAPL) stock from August 2014 to August 2015. We will abstract out the lag calculation into a helper function because we will be needing it so often in the algorithm.
NB: we remove the first 20 entries of the results as these will always be undefined (here, NaN) because there is not a 20-day window with which to calculate the lag. We also have a check to determine if the entire row of data is NaN, as this can cause issues with the TA-Lib library.
```
# check if entire column is NaN. If yes, return True
def nan_check(col):
if np.isnan(np.sum(col)):
return True
else:
return False
# helper to calculate lag
def lag_helper(col):
# TA-Lib raises an error if whole colum is NaN,
# so we check if this is true and, if so, skip
# the lag calculation
if nan_check(col):
return np.nan
# 20-day simple moving average
else:
return talib.SMA(col, 20)[20:]
AAPL_frame = get_pricing('AAPL', start_date='2014-08-08', end_date='2015-08-08', fields='close_price')
# convert to np.array for helper function and save index of timeseries
AAPL_index = AAPL_frame.index
AAPL_frame = AAPL_frame.as_matrix()
# calculate lag
AAPL_frame_lagged = lag_helper(AAPL_frame)
plt.plot(AAPL_index, AAPL_frame, label='Close')
plt.plot(AAPL_index[20:], AAPL_frame_lagged, label='Lagged Close')
plt.legend(loc=2)
plt.xlabel('Date')
plt.title('Close Prices vs Close Prices (20-Day Lag)')
plt.ylabel('AAPL Price');
```
As you can see from the graph, the lagged closing prices follow the same general pattern as the unlagged prices, but do not experience such extreme peaks and troughs. For the rest of the notebook we will use lagged prices, as we are interested in long-term trends.
## Slope of 52-Week Trendline
One of the oldest indicators of price momentum is the trendline. The basic idea is to create a bounding line around stock prices that predicts when the price should pivot. A trendline that predicts a ceiling is called a resistance trendline, and one that predicts a floor is a support trendline.
To calculate a support trendline here, we take a lagged series and find its pronounced local minima (here, a local minimum is defined as a data point lower than the five preceding and five following points). We then connect the first local minimum and the last local minimum with a straight line. For a resistance trendline, the process is the same, except it uses local maxima. This is just one of many methodologies for calculating trendlines.
Let us code up a function to return the gradient of the trendline. We will include a boolean variable `support` that, when set to `True` gives a support trendline and when set to `False` gives a resistance trendline. Let us have a look at the same dataset of AAPL stock and plot its trendlines.
NB: The y-intercepts used here are purely aesthetic and have no meaning as the indicator itself is only based on the slope of the trendline
```
# Custom Factor 1 : Slope of 52-Week trendline
def trendline_function(col, support):
# NaN check for speed
if nan_check(col):
return np.nan
# lag transformation
col = lag_helper(col)
# support trendline
if support:
# get local minima
minima_index = argrelmin(col, order=5)[0]
# make sure line can be drawn
if len(minima_index) < 2:
return np.nan
else:
# return gradient
return (col[minima_index[-1]] - col[minima_index[0]]) / (minima_index[-1] - minima_index[0])
    # resistance trendline
else:
# get local maxima
maxima_index = argrelmax(col, order=5)[0]
if len(maxima_index) < 2:
return np.nan
else:
return (col[maxima_index[-1]] - col[maxima_index[0]]) / (maxima_index[-1] - maxima_index[0])
# make the lagged frame the default
AAPL_frame = AAPL_frame_lagged
# use day count rather than dates to ensure straight lines
days = list(range(0,len(AAPL_frame),1))
# get points to plot
points_low = [(101.5 + (trendline_function(AAPL_frame, True)*day)) for day in days]
points_high = [94 + (trendline_function(AAPL_frame, False)*day) for day in days]
# create graph
plt.plot(days, points_low, label='Support')
plt.plot(days, points_high, label='Resistance')
plt.plot(days, AAPL_frame, label='Lagged Closes')
plt.xlim([0, max(days)])
plt.xlabel('Days Elapsed')
plt.ylabel('AAPL Price')
plt.legend(loc=2);
```
As you can see, at the beginning of the time frame these lines seem to describe the pivot points of the curve well. Therefore, it appears that betting against the stock when its price nears the resistance line and betting on the stock when its price nears the support line is a decent strategy. One issue is that these trendlines change over time. Even at the end of the above graph, it appears that the lines need to be redrawn in order to accommodate new prevailing price trends.
Now let us create our factor. In order to maintain flexibility between the types of trendlines, we need a way to pass the variable `support` into our Pipeline calculation. To do this we create a function that returns a `CustomFactor` class that *can* take a variable that is in the scope of our indicator.
Also, we have abstracted out the trendline calculation so that we can use the builtin Numpy function `apply_along_axis` instead of creating and appending the results of the trendline calculation for each column to a list, which is a slower process.
```
def create_trendline_factor(support):
class Trendline(CustomFactor):
# 52 week + 20d lag
window_length = 272
inputs=[USEquityPricing.close]
def compute(self, today, assets, out, close):
out[:] = np.apply_along_axis(trendline_function, 0, close, support)
return Trendline
temp_pipe_1 = Pipeline()
trendline = create_trendline_factor(support=True)
temp_pipe_1.add(trendline(), 'Trendline')
results_1 = run_pipeline(temp_pipe_1, '2015-08-08', '2015-08-08')
results_1.head(20)
```
## Percent Above 260-Day Low
This indicator is relatively self-explanatory. Whereas the trendline metric gives a more in-depth picture of price momentum (as the line itself shows how this momentum has evolved over time), this metric is fairly blunt. It is calculated as the price of a stock today less the minimum price in a retrospective 260-day window, all divided by that minimum price.
Let us have a look at a visualization of this metric for the same window of AAPL stock.
```
# Custom Factor 2 : % above 260 day low
def percent_helper(col):
if nan_check(col):
return np.nan
else:
col = lag_helper(col)
return (col[-1] - min(col)) / min(col)
print 'Percent above 260-day Low: %f%%' % (percent_helper(AAPL_frame) * 100)
# create the graph
plt.plot(days, AAPL_frame)
plt.axhline(min(AAPL_frame), color='r', label='260-Day Low')
plt.axhline(AAPL_frame[-1], color='y', label='Latest Price')
plt.fill_between(days, AAPL_frame)
plt.xlabel('Days Elapsed')
plt.ylabel('AAPL Price')
plt.xlim([0, max(days)])
plt.title('Percent Above 260-Day Low')
plt.legend();
```
Now we will create the `CustomFactor` for this metric. We will use the same abstraction process as above for run-time efficiency.
```
class Percent_Above_Low(CustomFactor):
# 260 days + 20 lag
window_length = 280
inputs=[USEquityPricing.close]
    def compute(self, today, assets, out, close):
out[:] = np.apply_along_axis(percent_helper, 0, close)
temp_pipe_2 = Pipeline()
temp_pipe_2.add(Percent_Above_Low(), 'Percent Above Low')
results_2 = run_pipeline(temp_pipe_2, '2015-08-08', '2015-08-08')
results_2.head(20)
```
NB: There are a lot of 0's here for this output. Although this might seem odd at first, it makes sense when we consider that there are many securities on a downwards trend. These stocks would be prime candidates to give a value of 0 as their current price is as low as it has ever been in this lookback window.
##4/52-Week Price Oscillator
This is calculated as the average close price over the last 4 weeks divided by the average close price over the last 52 weeks, minus 1. To understand what this value measures, let us consider what happens to the oscillator in different scenarios. This particular oscillator gives a sense of the relative performance between the previous four weeks and the previous year. A value given by this oscillator could be "0.05", which would indicate that the stock's recent closes are outperforming its previous year's performance by 5%. A positive value is an indicator of momentum, as more recent performance is stronger than normal; the larger the number, the more momentum.
As close prices cannot be negative, this oscillator is bounded between -1 and positive infinity. Let us create a graph to show how, given a particular 52-week average, the value of the oscillator is affected by the four-week average.
```
# set 52-week average
av_52w = 100.
# create list of possible last four-week averages
av_4w = xrange(0,200)
# create list of oscillator values
osc = [(x / av_52w) - 1 for x in av_4w]
# draw graph
plt.plot(av_4w, osc)
plt.axvline(100, color='r', label='52-Week Average')
plt.xlabel('Four-Week Average')
plt.ylabel('4/52 Oscillator')
plt.legend();
```
Now let us create a Pipeline factor and observe some values.
```
# Custom Factor 3: 4/52 Price Oscillator
def oscillator_helper(col):
if nan_check(col):
return np.nan
else:
col = lag_helper(col)
return np.nanmean(col[-20:]) / np.nanmean(col) - 1
class Price_Oscillator(CustomFactor):
inputs = [USEquityPricing.close]
window_length = 272
def compute(self, today, assets, out, close):
out[:] = np.apply_along_axis(oscillator_helper, 0, close)
temp_pipe_3 = Pipeline()
temp_pipe_3.add(Price_Oscillator(), 'Price Oscillator')
results_3 = run_pipeline(temp_pipe_3, '2015-08-08', '2015-08-08')
results_3.head(20)
```
Once again, let us use AAPL stock as an example.
```
# get two averages
av_4w = np.nanmean(AAPL_frame[-20:])
av_52w = np.nanmean(AAPL_frame)
# create the graph
plt.plot(days, AAPL_frame)
plt.fill_between(days[-20:], AAPL_frame[-20:])
plt.axhline(av_4w, color='y', label='Four-week Average' )
plt.axhline(av_52w, color='r', label='Year-long Average')
plt.ylim([80,140])
plt.xlabel('Days Elapsed')
plt.ylabel('AAPL Price')
plt.title('4/52 Week Oscillator')
plt.legend();
```
The section shaded blue under the graph represents the last four weeks of close prices. The fact that this average (shown by the yellow line) is greater than the year-long average (shown by the red line), means that the 4/52 week oscillator for this date will be positive. This fact is backed by our pipeline output, which gives the value of the metric to be 9.4%.
##39-Week Return
This is calculated as the difference in price between today and 39 weeks prior, all over the price 39 weeks prior.
Although returns might seem too ubiquitous a metric to be useful or special, the important thing to highlight here is the chosen window length. By choosing a larger window length (here, 39 weeks) as opposed to daily returns, we see larger fluctuations in value. This is because a larger time window exposes the metric to larger trends and higher volatility.
In the graph below, we illustrate this point by plotting returns calculated over different time windows. To do this we will look at AAPL close prices between 2002 and 2016. We will also mark important dates in the history of Apple in order to highlight this metric's descriptive power for larger trends.
NB: 39-week return is not a metric that is event driven. The inclusion of these dates is illustrative as opposed to predictive.
```
# create a new longer frame of AAPL close prices
AAPL_frame = get_pricing('AAPL', start_date='2002-08-08', end_date='2016-01-01', fields='close_price')
# use dates as index
AAPL_index = AAPL_frame.index[20:]
AAPL_frame = lag_helper(AAPL_frame.as_matrix())
# 1d returns
AAPL_1d_returns = ((AAPL_frame - np.roll(AAPL_frame, 1))/ np.roll(AAPL_frame,1))[1:]
# 1w returns
AAPL_1w_returns = ((AAPL_frame - np.roll(AAPL_frame, 5))/ np.roll(AAPL_frame, 5))[5:]
# 1m returns
AAPL_1m_returns = ((AAPL_frame - np.roll(AAPL_frame, 30))/ np.roll(AAPL_frame, 30))[30:]
# 39w returns
AAPL_39w_returns = ((AAPL_frame - np.roll(AAPL_frame, 215))/ np.roll(AAPL_frame, 215))[215:]
# plot close prices
plt.plot(AAPL_index[1:], AAPL_1d_returns, label='1-day Returns')
plt.plot(AAPL_index[5:], AAPL_1w_returns, label='1-week Returns')
plt.plot(AAPL_index[30:], AAPL_1m_returns, label='1-month Returns')
plt.plot(AAPL_index[215:], AAPL_39w_returns, label='39-week Returns')
# show events
# iPhone release
plt.axvline('2007-07-29')
# iPod mini 2nd gen. release
plt.axvline('2005-02-23')
# iPad release
plt.axvline('2010-04-03')
# iPhone 5 release
plt.axvline('2012-09-21')
# Apple Watch
plt.axvline('2015-04-24')
# labels
plt.xlabel('Days')
plt.ylabel('Returns')
plt.title('Returns')
plt.legend();
```
There are a few important characteristics to note on the graph above.
Firstly, as we expected, the amplitude of the signal of returns with a longer window length is larger.
Secondly, these new releases, many of which were announced several months in advance, all lie in or adjacent to a peak in the 39-week return series. Therefore, it would seem that this window length is a useful tool for capturing information on larger trends.
Now let us create the custom factor and run the Pipeline.
```
# Custom Factor 4: 39-week Returns
def return_helper(col):
if nan_check(col):
return np.nan
else:
col = lag_helper(col)
return (col[-1] - col[-215]) / col[-215]
class Return_39_Week(CustomFactor):
inputs = [USEquityPricing.close]
window_length = 235
def compute(self, today, assets, out, close):
out[:] = np.apply_along_axis(return_helper, 0, close)
temp_pipe_4 = Pipeline()
temp_pipe_4.add(Return_39_Week(), '39 Week Return')
results_4 = run_pipeline(temp_pipe_4, '2015-08-08','2015-08-08')
results_4.head(20)
```
##Aggregation
Let us create the full Pipeline. Once again we will need a proxy for the S&P 500 for the ordering logic. Also, given the large window lengths needed for the algorithm, we will employ the trick of multiple outputs per factor. This is explained in detail here (https://www.quantopian.com/posts/new-feature-multiple-output-pipeline-custom-factors). Instead of having to process several data frames, we only need to deal with one large one and then apply our helper functions. This will speed up our computation considerably in the backtester.
```
# This factor creates the synthetic S&P500
class SPY_proxy(CustomFactor):
inputs = [morningstar.valuation.market_cap]
window_length = 1
def compute(self, today, assets, out, mc):
out[:] = mc[-1]
# using helpers to boost speed
class Pricing_Pipe(CustomFactor):
inputs = [USEquityPricing.close]
outputs = ['trendline', 'percent', 'oscillator', 'returns']
window_length=280
def compute(self, today, assets, out, close):
out.trendline[:] = np.apply_along_axis(trendline_function, 0, close[-272:], True)
out.percent[:] = np.apply_along_axis(percent_helper, 0, close)
out.oscillator[:] = np.apply_along_axis(oscillator_helper, 0, close[-272:])
out.returns[:] = np.apply_along_axis(return_helper, 0, close[-235:])
def Data_Pull():
    # create the pipeline for the data pull
Data_Pipe = Pipeline()
# create SPY proxy
Data_Pipe.add(SPY_proxy(), 'SPY Proxy')
# run all on same dataset for speed
trendline, percent, oscillator, returns = Pricing_Pipe()
# add the calculated values
Data_Pipe.add(trendline, 'Trendline')
Data_Pipe.add(percent, 'Percent')
Data_Pipe.add(oscillator, 'Oscillator')
Data_Pipe.add(returns, 'Returns')
return Data_Pipe
results = run_pipeline(Data_Pull(), '2015-08-08', '2015-08-08')
results.head(20)
```
We will now use the Lo/Patel ranking logic described in the Traditional Value notebook (https://www.quantopian.com/posts/quantopian-lecture-series-long-slash-short-traditional-value-case-study) in order to combine these descriptive metrics into a single factor.
NB: `standard_frame_compute` and `composite_score` have been combined into a single function called `aggregate_data`.
```
# limit effect of outliers
def filter_fn(x):
if x <= -10:
x = -10.0
elif x >= 10:
x = 10.0
return x
# combine data
def aggregate_data(df):
# basic clean of dataset to remove infinite values
df = df.replace([np.inf, -np.inf], np.nan)
df = df.dropna()
# need standardization params from synthetic S&P500
df_SPY = df.sort(columns='SPY Proxy', ascending=False)
# create separate dataframe for SPY
# to store standardization values
df_SPY = df_SPY.head(500)
# get dataframes into numpy array
df_SPY = df_SPY.as_matrix()
# store index values
index = df.index.values
    # get data into a numpy array for speed
df = df.as_matrix()
# get one empty row on which to build standardized array
df_standard = np.empty(df.shape[0])
for col_SPY, col_full in zip(df_SPY.T, df.T):
# summary stats for S&P500
mu = np.mean(col_SPY)
sigma = np.std(col_SPY)
col_standard = np.array(((col_full - mu) / sigma))
# create vectorized function (lambda equivalent)
fltr = np.vectorize(filter_fn)
col_standard = (fltr(col_standard))
        # scale so that the combined score stays within [-10, 10]
        col_standard = (col_standard / df.shape[1])
# attach calculated values as new row in df_standard
df_standard = np.vstack((df_standard, col_standard))
# get rid of first entry (empty scores)
df_standard = np.delete(df_standard, 0, 0)
# sum up transformed data
df_composite = df_standard.sum(axis=0)
# put into a pandas dataframe and connect numbers
# to equities via reindexing
df_composite = pd.Series(data=df_composite, index=index)
# sort descending
df_composite.sort(ascending=False)
return df_composite
ranked_scores = aggregate_data(results)
ranked_scores
```
##Stock Choice
Now that we have our ranking system, let us have a look at the histogram of the ranked scores. This will allow us to see general trends in the metric and diagnose any issues with our ranking system as a factor. The red lines give our cut-off points for our trading baskets.
```
# histogram
ranked_scores.hist()
# baskets
plt.axvline(ranked_scores[26], color='r')
plt.axvline(ranked_scores[-6], color='r')
plt.xlabel('Ranked Scores')
plt.ylabel('Frequency')
plt.title('Histogram of Ranked Scores of Stock Universe');
```
Although there does appear to be some positive skew, this looks to be a robust metric as the tails of this distribution are very thin. A thinner tail means that our ranking system has identified special characteristics about our stock universe possessed by only a few equities. More thorough statistical analysis would have to be conducted in order to see if this strategy could generate good alpha returns. This robust factor analysis will be covered in a later notebook.
Please see the attached algorithm for a full implementation!
*The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory or other services by Quantopian.*
*In addition, the content of the website neither constitutes investment advice nor offers any opinion with respect to the suitability of any security or any specific investment. Quantopian makes no guarantees as to accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.*
<img alt="QuantRocket logo" src="https://www.quantrocket.com/assets/img/notebook-header-logo.png">
© Copyright Quantopian Inc.<br>
© Modifications Copyright QuantRocket LLC<br>
Licensed under the [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/legalcode).
<a href="https://www.quantrocket.com/disclaimer/">Disclaimer</a>
# Universe Selection
by Gil Wassermann, Maxwell Margenot
Selecting the product space in which an algorithm trades can be as important as, if not more than, the strategy itself. In this lecture, we will walk through the basics of constructing a universe.
## What is a Universe?
On a high level, universe selection is the process of choosing the pool of securities upon which your algorithm will trade. For example, an algorithm designed to play with the characteristics of a universe consisting of technology equities may perform exceptionally well in that universe with the tradeoff of falling flat in other sectors. Experimenting with different universes by tweaking their components is an essential part of developing a trading strategy.
Using Pipeline and the full US Stock dataset, we have access to over 8000 securities to choose from each day. However, the securities within this basket are markedly different. Some are different asset classes, some belong to different sectors and super-sectors, some employ different business models, some practice different management styles, and so on. By defining a universe, a trader can narrow in on securities with one or more of these attributes in order to craft a strategy that is most effective for that subset of the population.
Without a properly-constructed universe, your algorithm may be exposed to risks that you just aren't aware of. For example, it could be possible that your universe selection methodology only selects a stock basket whose constituents do not trade very often. Let's say that your algorithm wants to place an order of 100,000 shares for a company that only trades 1,000 shares on a given day. The inability to fill this order or others might prevent you from achieving the optimal weights for your portfolio, thereby undermining your strategy. These risks can be controlled for by careful and thoughtful universe selection.
In Zipline, universes are often implemented as a Pipeline screen. If you are not familiar with Pipeline, feel free to check out the [Pipeline Tutorial](https://www.quantrocket.com/code/?filter=zipline). Below is an example implementation of a universe that limits Pipeline output to the 500 securities with the largest revenue each day. This can be seen as a naive implementation of the Fortune500.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from zipline.pipeline.data import master
from zipline.pipeline import Pipeline
from zipline.pipeline.data import USEquityPricing
from zipline.research import run_pipeline
from zipline.pipeline.data import sharadar
from zipline.pipeline.factors import CustomFactor
revenue = sharadar.Fundamentals.slice(dimension='ARQ', period_offset=0).REVENUE.latest
pipe = Pipeline(
columns={
'Revenue': revenue
},
screen=revenue.top(500)
)
res = run_pipeline(pipe, start_date='2016-01-04', end_date='2016-01-04', bundle='usstock-1d-bundle')
print("There are %d assets in this universe." % len(res))
res.head(10) # print 10 constituents
```
This is a good start, but again, it is a very naive universe. Normally, high revenue is a characteristic of a healthy, thriving company, but there are many other things that play into the construction of a good universe. While this idea has a reasonable economic basis, more analysis has to be conducted to determine the efficacy of this universe. There may be more subtle things occurring independently of the revenue of its constituent companies.
For the rest of this notebook, we will design our own universe, profile it and check its performance. Let's create the Lectures500!
## Lectures500
### Sector Exposure
If I create a universe that only looks at equities in the technology sector, my algorithm will have an extreme sector bias. Companies in the same industry sector are affected by similar macroeconomic trends and therefore their performance tends to be correlated. In the case of particular strategies, we may find the benefits of working exclusively within a particular sector greater than the downside risks, but this is not suitable for creating a general-purpose, quality universe.
Let's have a look at the sector breakdown of the Lectures500.
```
# Rename our universe to Lectures500
Lectures500 = revenue.top(500)
def get_sectors(day, universe, bundle):
pipe = Pipeline(columns={'Sector': master.SecuritiesMaster.usstock_Sector.latest}, screen=universe)
# Drop the datetime level of the index, since we only have one day of data
return run_pipeline(pipe, start_date=day, end_date=day, bundle=bundle).reset_index(level=0, drop=True)
def calculate_sector_counts(sectors):
counts = (sectors.groupby('Sector').size())
return counts
lectures500_sectors = get_sectors('2016-01-04', Lectures500, 'usstock-1d-bundle')
lectures500_counts = calculate_sector_counts(lectures500_sectors)
def plot_sector_counts(sector_counts):
bar = plt.subplot2grid((10,12), (0,0), rowspan=10, colspan=6)
pie = plt.subplot2grid((10,12), (0,6), rowspan=10, colspan=6)
# Bar chart
sector_counts.plot(
kind='bar',
color='b',
rot=30,
ax=bar,
)
bar.set_title('Sector Exposure - Counts')
# Pie chart
sector_counts.plot(
kind='pie',
colormap='Set3',
autopct='%.2f %%',
fontsize=12,
ax=pie,
)
pie.set_ylabel('') # This overwrites default ylabel, which is None :(
pie.set_title('Sector Exposure - Proportions')
plt.tight_layout();
plot_sector_counts(lectures500_counts)
```
From the above plots it is clear that there is a mild sector bias towards the consumer discretionary industry. Any big events that affect companies in this sector will have a large effect on this universe and any algorithm that uses it.
One option is to equal-weight the sectors, so that equities from each industry sector make up an identical proportion of the final universe. This, however, comes with its own disadvantages. In a sector-equal Lectures500, the universe would include some lower-revenue real estate equities at the expense of higher-revenue consumer discretionary equities.
### Turnover
Another thing to consider when designing a universe is the rate at which the universe changes. Turnover is a way of measuring this rate of change. Turnover is defined as the number of equities that enter or exit the universe in a particular time window.
Let us imagine a universe with a turnover of 0. This universe would be completely unchanged by market movements. Stocks inappropriate for the universe would never be removed, and stocks that should be included would never enter.
Conversely, imagine a universe that changes every one of its constituents every day. An algorithm built on this universe will be forced to sell its entire portfolio every day. This incurs transaction costs which erode returns.
When creating a universe, there is an inherent tradeoff between stagnation and sensitivity to the market.
Let's have a look at the turnover for the Lectures500!
```
res = run_pipeline(Pipeline(columns={'Lectures500' : Lectures500}), start_date='2015-01-01', end_date='2016-01-01', bundle='usstock-1d-bundle')
res = res.unstack().fillna(False).astype(int)
def calculate_daily_turnover(unstacked):
return (unstacked
.diff() # Get 1/0 (True/False) showing where values changed from previous day.
.abs() # take absolute value so that any turnover is a 1
.iloc[1:] # Drop first row, which is meaningless after diff().
.groupby(axis=1, level=0)
.sum()) # Group by universe and count number of 1 values in each row.
def plot_daily_turnover(unstacked):
# Calculate locations where the inclusion state of an asset changed.
turnover = calculate_daily_turnover(unstacked)
# Write the data to an axis.
ax = turnover.plot(figsize=(14, 8))
# Add style to the axis.
ax.grid(False)
ax.set_title('Changes per Day')
ax.set_ylabel('Number of Added or Removed Assets')
def print_daily_turnover_stats(unstacked):
turnover = calculate_daily_turnover(unstacked)
print(turnover.describe().loc[['mean', 'std', '25%', '50%', '75%', 'min', 'max']])
plot_daily_turnover(res)
print_daily_turnover_stats(res)
```
#### Smoothing
A good way to reduce turnover is through smoothing functions. Smoothing is the process of taking noisy data and aggregating it in order to analyze its underlying trends. When applied to universe selection, a good smoothing function prevents equities at the universe boundary from entering and exiting frequently.
One example of a potential smoothing function is a filter that finds equities that have passed the Lectures500 criteria for 16 or more days out of the past 21 days. We will call this filter `AtLeast16`. This aggregation of many days of data lends a certain degree of flexibility to the edges of our universe. If, for example, Equity XYZ is very close to the boundary for inclusion, in a given month, it may flit in and out of the Lectures500 day after day. However, with the `AtLeast16` filter, Equity XYZ is allowed to enter and exit the daily universe a maximum of 5 times before it is excluded from the smoothed universe.
Let's apply a smoothing function to our universe and see its effect on turnover.
```
from zipline.pipeline.filters import AtLeastN
Lectures500 = AtLeastN(inputs=[Lectures500],
window_length=21,
N=16,)
res_smoothed = run_pipeline(Pipeline(columns={'Lectures500 Smoothed' : Lectures500}),
start_date='2015-01-01',
end_date='2016-01-01',
bundle='usstock-1d-bundle')
res_smoothed = res_smoothed.unstack().fillna(False).astype(int)
plot_daily_turnover(res_smoothed)
print_daily_turnover_stats(res_smoothed)
```
Looking at the metrics, we can see that the smoothed universe has a lower turnover than the original Lectures500. Since this is a good characteristic, we will add this logic to the universe.
NB: Smoothing can also be accomplished by downsampling.
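For example, pipeline terms support a `downsample` method that recomputes a filter only at a fixed frequency and holds its value in between, which caps turnover by construction. A minimal sketch, assuming `downsample` is available on pipeline terms in this Zipline/QuantRocket version:
```
# Assumption: pipeline terms expose .downsample() in this version.
Lectures500_monthly = revenue.top(500).downsample('month_start')
```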
---
**Next Lecture:** [The Capital Asset Pricing Model and Arbitrage Pricing Theory](Lecture30-CAPM-and-Arbitrage-Pricing-Theory.ipynb)
[Back to Introduction](Introduction.ipynb)
---
*This presentation is for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by Quantopian, Inc. ("Quantopian") or QuantRocket LLC ("QuantRocket"). Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. In preparing the information contained herein, neither Quantopian nor QuantRocket has taken into account the investment needs, objectives, and financial circumstances of any particular investor. Any views expressed and data illustrated herein were prepared based upon information believed to be reliable at the time of publication. Neither Quantopian nor QuantRocket makes any guarantees as to their accuracy or completeness. All information is subject to change and may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances.*
# Job Sequencing with Integer Lengths
# Hamiltonian
We get a Hamiltonian from the paper below.
https://arxiv.org/abs/1302.5843
$\displaystyle H = H_A + H_B$
$\displaystyle H_A = A \sum_{i=1}^N \left( 1 - \sum_\alpha x_{i,\alpha} \right)^2 + A\sum_{\alpha=1}^m \left( \sum_{n=1}^M ny_{n,\alpha} + \sum_i L_i \left( x_{i,\alpha} - x_{i,1}\right) \right)^2$
$\displaystyle H_B = B \sum_i L_ix_{i,1}$
# A slight change to the Hamiltonian
We made a small change to the Hamiltonian because the original one does not give good answers.
①We split the single coefficient $A$ of $H_A$ into two coefficients $A_1, A_2$.
②We added a new term $\displaystyle A_1\sum_{\alpha}\left( 1 - \sum_n y_{n,\alpha} \right)^2$ to $H_A$
$\displaystyle H_A = A_1\sum_i \left( 1 - \sum_\alpha x_{i,\alpha} \right)^2 + A_1\sum_{\alpha}\left( 1 - \sum_n y_{n,\alpha} \right)^2 + A_2\sum_\alpha \left( \sum_n ny_{n,\alpha} + \sum_i L_i \left( x_{i,\alpha} - x_{i,1}\right) \right)^2$
$\displaystyle = A_1\sum_i \left( -2 \sum_\alpha x_{i,\alpha} + \left( \sum_\alpha x_{i,\alpha} \right)^2 \right) + A_1\sum_\alpha \left( -2 \sum_n y_{n,\alpha} + \left( \sum_n y_{n,\alpha} \right)^2 \right)$
$\displaystyle + A_2\sum_\alpha \left( \left( \sum_n ny_{n,\alpha} \right)^2 + 2\left( \sum_n ny_{n,\alpha} \right)\left(\sum_i L_i \left( x_{i,\alpha} - x_{i,1}\right)\right) +\left(\sum_i L_i \left( x_{i,\alpha} - x_{i,1}\right)\right)^2 \right) + Const.$
$\displaystyle = A_1\sum_i \sum_\alpha \left( -x_{i,\alpha}^2 + \sum_{\beta \left( \gt \alpha \right)} 2x_{i,\alpha}x_{i, \beta} \right) + A_1\sum_\alpha \sum_n \left( -y_{n,\alpha}^2 + \sum_{m \left( \gt n \right) } 2y_{n,\alpha}y_{m, \alpha} \right) + A_2\sum_\alpha \sum_n \left( n^2y_{n, \alpha}^2 + \sum_{m \left( \gt n \right) } 2nmy_{n,\alpha}y_{m, \alpha} \right) $
$\displaystyle + A_2\sum_\alpha \sum_i \left( \left( \sum_n 2nL_i y_{n,\alpha} \left( x_{i,\alpha} - x_{i,1}\right) \right) + L_i^2 \left( x_{i,\alpha} - x_{i,1}\right)^2 + \sum_{j \left( \gt i \right) } 2L_iL_j \left( x_{i,\alpha} - x_{i,1}\right) \left( x_{j,\alpha} - x_{j,1}\right) \right) + Const.$
$\displaystyle =\sum_\alpha \sum_i \left( - A_1x_{i,\alpha}^2 + A_2L_i^2 \left( x_{i,\alpha} - x_{i,1}\right)^2 + \sum_{\beta \left( \gt \alpha \right) } 2A_1x_{i,\alpha}x_{i, \beta} + \sum_{j \left( \gt i \right) } 2A_2L_iL_j \left( x_{i,\alpha} - x_{i,1}\right) \left( x_{j,\alpha} - x_{j,1}\right)+ \sum_n 2A_2nL_i y_{n,\alpha} \left( x_{i,\alpha} - x_{i,1}\right) \right)$
$\displaystyle + \sum_\alpha \sum_n \left( \left( -A_1 + A_2n^2 \right) y_{n, \alpha}^2 + \sum_{m \left( \gt n \right) } 2\left(A_1+ A_2nm \right) y_{n,\alpha}y_{m, \alpha} \right) + Const.$
# QUBO class
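In the implementation below, the two families of binary variables are flattened into one vector of qubits: job variable $x_{i,\alpha}$ lives at index $u_{i,\alpha} = i \cdot m + \alpha$ and slack variable $y_{n,\alpha}$ at index $v_{n,\alpha} = Nm + \alpha \cdot \Delta_{max} + (n-1)$, where $N$ is the number of jobs, $m$ the number of machines and $\Delta_{max}$ the maximum allowed load difference.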
```
import blueqat.wq as wq
import numpy as np
class Qubo():
def __init__(self, jobs, n_machine, max_delta, A1, A2, B):
self.__jobs = jobs
self.__n_jobs = len(jobs)
self.__n_machine = n_machine
self.__max_delta = max_delta
self.__A1 = A1
self.__A2 = A2
self.__B = B
self.__index_offset = self.__n_jobs * self.__n_machine
def __calc_sum_alpha_n_m(self, qubo, alpha, n):
A1 = self.__A1
A2 = self.__A2
for m in range(n + 1, self.__max_delta + 1):
v_n_alpha = self.__index_offset + alpha * self.__max_delta + n - 1
v_m_alpha = self.__index_offset + alpha * self.__max_delta + m - 1
qubo[v_n_alpha][v_m_alpha] += 2 * (A2 * n * m + A1)
def __calc_sum_alpha_n(self, qubo, alpha):
A1 = self.__A1
A2 = self.__A2
for n in range(1, self.__max_delta + 1):
v_n_alpha = self.__index_offset + alpha * self.__max_delta + n - 1
qubo[v_n_alpha][v_n_alpha] += (A2 * n ** 2 - A1)
self.__calc_sum_alpha_n_m(qubo, alpha, n)
def __calc_sum_alpha_i_beta(self, qubo, alpha, i):
A1 = self.__A1
for beta in range(alpha + 1, self.__n_machine):
u_i_alpha = i * self.__n_machine + alpha
u_i_beta = i * self.__n_machine + beta
qubo[u_i_alpha][u_i_beta] += 2 * A1
def __calc_sum_alpha_i_j(self, qubo, alpha, i):
A2 = self.__A2
Li = self.__jobs[i]
for j in range(i + 1, self.__n_jobs):
Lj = self.__jobs[j]
u_i_alpha = i * self.__n_machine + alpha
u_j_alpha = j * self.__n_machine + alpha
u_i_0 = i * self.__n_machine
u_j_0 = j * self.__n_machine
qubo[u_i_alpha][u_j_alpha] += 2 * A2 * Li * Lj
qubo[u_i_alpha][u_j_0] += -2 * A2 * Li * Lj
qubo[u_i_0][u_j_alpha] += -2 * A2 * Li * Lj
qubo[u_i_0][u_j_0] += 2 * A2 * Li * Lj
def __calc_sum_alpha_i_n(self, qubo, alpha, i):
A2 = self.__A2
Li = self.__jobs[i]
u_i_alpha = i * self.__n_machine + alpha
u_i_0 = i * self.__n_machine
for n in range(1, self.__max_delta + 1):
v_n_alpha = self.__index_offset + alpha * self.__max_delta + n - 1
qubo[u_i_alpha][v_n_alpha] += 2 * A2 * n * Li
qubo[u_i_0][v_n_alpha] += -2 * A2 * n * Li
def __calc_sum_alpha_i(self,qubo, alpha):
A1 = self.__A1
A2 = self.__A2
for i in range(self.__n_jobs):
u_i_alpha = i * self.__n_machine + alpha
u_i_0 = i * self.__n_machine
Li = self.__jobs[i]
qubo[u_i_alpha][u_i_alpha] += -A1 + A2 * Li ** 2
qubo[u_i_0][u_i_0] += A2 * Li ** 2
qubo[u_i_0][u_i_alpha] += -2 * A2 * Li ** 2
self.__calc_sum_alpha_i_beta(qubo, alpha, i)
self.__calc_sum_alpha_i_j(qubo, alpha, i)
self.__calc_sum_alpha_i_n(qubo, alpha, i)
def __calc_constraint_func(self,qubo):
for alpha in range(self.__n_machine):
self.__calc_sum_alpha_i(qubo, alpha)
self.__calc_sum_alpha_n(qubo, alpha)
def __calc_objective_func(self,qubo):
B = self.__B
for i in range(self.__n_jobs):
u_i_0 = i * self.__n_machine
Li = self.__jobs[i]
qubo[u_i_0][u_i_0] += B * Li
def __calc_qubo(self, qubo):
self.__calc_constraint_func(qubo)
self.__calc_objective_func(qubo)
def get_qubo(self):
size = self.__n_machine * (self.__n_jobs + self.__max_delta)
qubo = np.zeros((size, size))
self.__calc_qubo(qubo)
return qubo
def show_answer(self, solution):
print(f"Solution is {solution}")
assigned_job_sizes = np.zeros(self.__n_machine, dtype=int)
for i in range(self.__n_jobs):
assigned = False
for alpha in range(self.__n_machine):
                u_i_alpha = i * self.__n_machine + alpha
if(solution[u_i_alpha] > 0):
print(f"Job{i} has been assigned to the machine{alpha}.")
assigned_job_sizes[alpha] += self.__jobs[i]
assigned = True
if assigned == False:
print(f"Job{i} has not been assigned.")
for alpha in range(self.__n_machine):
print(f"Total size of jobs assigned to machine{alpha} is {assigned_job_sizes[alpha]}.")
```
Let's solve it. We choose $A_1, A_2, B$ by looking at the overall balance of each term.
```
jobs = [1,1,2,2,5,5,7] # the numbers are lengths(Li) of jobs
n_machine = 3
max_delta = 7 # maximum allowed difference M1 - M_alpha; choose this yourself
A1 = 1
A2 = (A1 / max(jobs) ** 2) * 0.9
B = (A1 / max(jobs)) * 0.5
qubo = Qubo(jobs, n_machine, max_delta, A1, A2, B)
annealer = wq.Opt()
annealer.qubo = qubo.get_qubo()
for _ in range(10):
solution = annealer.sa()
qubo.show_answer(solution)
print()
```
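As a sanity check, we can evaluate the QUBO energy of a returned solution directly; under this minimization, lower energies correspond to better assignments. A short sketch, assuming `solution` still holds the last 0/1 vector returned above:
```
# Assumption: `solution` is the last 0/1 list returned by annealer.sa() above.
x = np.array(solution)
Q = qubo.get_qubo()
energy = x @ Q @ x  # sum over i, j of Q[i][j] * x_i * x_j
print(f'QUBO energy: {energy}')
```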
# Week 10 Discussion
## Infographic
* [Racial Discrimination in Auto Insurance Prices][propublica]
[propublica]: https://www.propublica.org/article/minority-neighborhoods-higher-car-insurance-premiums-methodology
## Links
* [Learn X in Y Minutes, X = JavaScript][js-intro] -- a brief intro to JavaScript
* [MDN JavaScript Guide][js-guide] -- a detailed guide to JavaScript
* [MDN Learning Materials][web-intro] -- more information about web development
* [UC Berkeley Library's GeoData][geodata]
Please fill out TA evals!
[js-intro]: https://learnxinyminutes.com/docs/javascript/
[js-guide]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide
[web-intro]: https://developer.mozilla.org/en-US/docs/Learn
[geodata]: https://geodata.lib.berkeley.edu/
## Web Visualization
Web browsers are ubiquitous and support interactivity (via JavaScript), so the web is an excellent platform for visualizations.
Popular JavaScript libraries used for web visualizations:
<table><tr>
<th>Library</th><th>Based On</th><th>Python Support</th><th>Description</th>
</tr><tr>
<td>[D3.js](https://d3js.org/)</td><td>-</td><td>[mpld3](http://mpld3.github.io/)</td>
<td>
Short for Data-Driven Documents, D3 allows you to bind data to HTML tags.
In other words, you can use data to control the structure and style of a
web page.
</td>
</tr><tr>
<td>[Vega](https://vega.github.io/vega/)</td><td>D3.js</td><td>-</td>
<td>
A visualization grammar (the same idea as ggplot) built on top of D3. You
write a description of what you want in JSON, and Vega produces a D3
visualization.
</td>
</tr><tr>
<td>[Vega Lite](https://vega.github.io/vega-lite/)</td><td>Vega</td><td>[altair](https://altair-viz.github.io/)</td>
<td>
A visualization grammar for _common statistical graphics_ built on top of
Vega. You write a JSON description which is translated to Vega and then D3.
</td>
</tr><tr>
<td>[plotly.js](https://plot.ly/javascript/)</td><td>D3.js</td><td>[plotly](https://plot.ly/python/)</td>
<td>
A visualization library that supports the Python, R, Julia, and MATLAB
plotly packages. Although this is an open-source library, development
is controlled by Plotly (a private company).
</td>
</tr><tr>
<td>[BokehJS](http://bokeh.pydata.org/en/latest/docs/dev_guide/bokehjs.html)</td><td>-</td><td>[bokeh](http://bokeh.pydata.org/)</td>
<td>
A visualization library designed to be used from other (non-JavaScript)
languages. You write Python, R, or Scala code to produce visualizations.
</td>
</tr><tr>
<td>[Leaflet](http://leafletjs.com/)</td><td>-</td><td>[folium](https://github.com/python-visualization/folium)</td>
<td>
An interactive maps library that can display GeoJSON data.
</td>
</tr></table>
Also worth mentioning is the [pygal](http://www.pygal.org/en/stable/) package, which produces SVG plots that can be viewed in a web browser but do not require any JavaScript library.
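As a taste of pygal, here is a minimal sketch (not from the original discussion):
```
import pygal  # pip install pygal

chart = pygal.Bar(title="Example")
chart.add("Series A", [1, 3, 2])
chart.render_to_file("example.svg")  # open the SVG in any browser
```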
## Static Visualizations
```
import pandas as pd
dogs = pd.read_feather("data/dogs.feather")
dogs.head()
```
To display Bokeh plots in a Jupyter notebook, you must first call the setup function `output_notebook()`. You don't have to do this if you're going to save your plots to HTML instead.
```
import bokeh.io # conda install bokeh
bokeh.io.output_notebook()
```
Now we can make a plot. The `bokeh.charts` submodule has functions to create common statistical plots. You can also use functions in the `bokeh.models` submodule to fine-tune plots.
Bokeh's plotting functions work with data frames in [tidy](http://vita.had.co.nz/papers/tidy-data.pdf) form.
```
from bokeh.plotting import figure, show
#colormap = {'setosa': 'red', 'versicolor': 'green', 'virginica': 'blue'}
#colors = [colormap[x] for x in flowers['species']]
p = figure(title = "Dogs", width = 300, height = 300)
p.xaxis.axis_label = "Datadog Score"
p.yaxis.axis_label = "Popularity"
p.scatter("datadog", "popularity", source = dogs, fill_alpha = 0.2)
show(p)
# Optional: save the plot to a standalone HTML file.
#bokeh.io.output_file("MY_PLOT.html")
```
## Maps
```
import folium
# Make a map.
m = folium.Map(location = [45.5236, -122.6750])
# Optional: set up a Figure to control the size of the map.
fig = folium.Figure(width = 600, height = 200)
fig.add_child(m)
# Optional: save the map to a standalone HTML file.
# fig.save("MY_MAP.html")
```
The dataset about recent restaurant inspections in Yolo County is available [here](http://anson.ucdavis.edu/~nulle/yolo_food.feather)
```
food = pd.read_feather("data/yolo_food.feather")
food.head()
food.shape
food = food[food.lat.notna() & food.lng.notna()]
m = folium.Map(location = [38.54, -121.74], zoom_start = 11)
cols = ["FacilityName", "lat", "lng"]
for name, lat, lng in food[cols].itertuples(index = False):
popup = folium.Popup(name, parse_html = True)
folium.Marker([float(lat), float(lng)], popup = popup).add_to(m)
fig = folium.Figure(width = 800, height = 400)
fig.add_child(m)
```
Folium can also display boundaries stored in GeoJSON files. See the README for more info.
You can convert shapefiles to GeoJSON with geopandas.
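A minimal sketch of that conversion (the file names here are illustrative):
```
import geopandas as gpd  # conda install geopandas

gdf = gpd.read_file("shapefiles/sf_neighborhoods.shp")
gdf.to_file("shapefiles/sf_neighborhoods.geojson", driver="GeoJSON")
```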
```
m = folium.Map(location = [37.76, -122.44], zoom_start = 12)
m.choropleth("shapefiles/sf_neighborhoods.geojson", fill_opacity = 0.2, fill_color = "green")
fig = folium.Figure(width = 800, height = 400)
fig.add_child(m)
```
## Interactive Visualizations
In order to make a visualization interactive, you need to run some code when the user clicks on a widget. The code can run _client-side_ on the user's machine, or _server-side_ on your server.
For client-side interactivity:
* Your code must be written in JavaScript.
* You can host your visualization on any web server. No special setup is needed.
* Your visualization will use the user's CPU and memory.
For server-side interactivity:
* Your code can be written in any language the server supports. This may require special setup.
* Your visualization will use the server's CPU and memory.
* You can update the data in real-time.
* You can save data submitted by the user.
Shiny is a server-side framework for R. There are lots of server-side frameworks for Python. Two of the most popular are [Django][django] and [Flask][flask].
[django]: https://www.djangoproject.com/
[flask]: http://flask.pocoo.org/
### Client-side
Client-side interactivity is cheaper to get started with because you can use a free web server (like GitHub Pages).
Let's make the dogs plot interactive so that the user can select which variables get plotted. Unfortunately, Bokeh charts don't work with interactivity, so we have to build the plot with simpler functions. We'll lose the color-coding, although you could still add that back with a bit more work.
```
dogs.head()
import bokeh.layouts
from bokeh.models import ColumnDataSource, CustomJS, widgets
from bokeh.plotting import figure, show
original = ColumnDataSource(dogs)
source = ColumnDataSource({"x": dogs["datadog"], "y": dogs["popularity"]})
plt = figure(title = "Dogs", tools = [])
plt.xaxis.axis_label = "datadog"
plt.yaxis.axis_label = "popularity"
plt.scatter("x", "y", source = source, fill_alpha = 0.2)
# Callback for x selector box.
callback_x = CustomJS(args = {"original": original, "source": source, "axis": plt.xaxis[0]}, code = """
// This is the JavaScript code that will run when the x selector box is changed.
// You can use the alert() function to "print" values.
//alert(cb_obj.value);
axis.axis_label = cb_obj.value;
source.data['x'] = original.data[cb_obj.value];
source.change.emit();
""")
# Callback for y selector box.
callback_y = CustomJS(args = {"original": original, "source": source, "axis": plt.yaxis[0]}, code = """
// This is the JavaScript code that will run when the y selector box is changed.
axis.axis_label = cb_obj.value;
source.data['y'] = original.data[cb_obj.value];
source.change.emit();
""")
# Set up selector boxes.
numeric_cols = ["datadog", "popularity", "lifetime_cost", "longevity"]
sel_x = widgets.Select(title = "x-axis", options = numeric_cols, value = "datadog")
sel_y = widgets.Select(title = "y-axis", options = numeric_cols, value = "popularity")
sel_x.js_on_change("value", callback_x)
sel_y.js_on_change("value", callback_y)
# Position the selector boxes to the right of the plot.
layout = bokeh.layouts.column(sel_x, sel_y)
layout = bokeh.layouts.row(plt, layout)
show(layout)
```
### Server-side
Server-side interactivity is a lot more flexible. Flask is a simple framework with great documentation, so it's easy to get started with.
The core of a flask website (or "app") is a script with functions that return the text that should be displayed on each page.
See `hello_app.py` for an example flask website.
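A minimal app, for reference (a sketch along the lines of `hello_app.py`; the actual file may differ):
```
from flask import Flask

app = Flask(__name__)

# Each route maps a URL to a function that returns the page's text.
@app.route("/")
def index():
    return "Hello, world!"

if __name__ == "__main__":
    app.run(debug=True)
```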
#### Example: Query Slack
As an example, let's make a flask website that displays recent messages from the class' Slack.
First you need to [get a Slack API token][slack-apps]. Make sure it has the `channels:read` and `channels:history` permissions.
Then you can use the `slackclient` package to query the Slack API.
[slack-apps]: https://api.slack.com/apps
```
from slackclient import SlackClient
with open("flask/slack_token") as f:
slack_token = f.readline().strip()
sc = SlackClient(slack_token)
```
We'll display messages from the `#flask` channel.
Slack tracks channels by ID, not name, so we need to get the channel ID.
Use `channels.list` to get a list of public channels:
```
channels = sc.api_call("channels.list")
channels = channels["channels"]
chan_id = next(x["id"] for x in channels if x["name"] == "flask")
chan_id
```
Now let's get the history of the channel:
```
history = sc.api_call("channels.history", channel = chan_id)
messages = pd.DataFrame(history["messages"])
messages
```
These steps are turned into a flask website in `slack_app.py`.
| true |
code
| 0.621799 | null | null | null | null |
|
# Particle Swarm Optimization
>Research and understanding of the algorithm.
The idea of this notebook is to walk through the implementation of the particle swarm optimization algorithm at a general level, to get a feel for the logic, the intuition, and the parameters it involves.
```
import random
import math
import matplotlib.pyplot as plt
```
First, we define an objective function.
```
# objective function
def objective_function(x):
y = x[0]**3 + x[1]**2
return y
```
Next, we define the parameters.
```
bounds = [(-4, 4), (-4, 4)]  # lower and upper bounds of the variables
nv = 2  # number of variables
mm = -1  # minimization problem: mm = -1; maximization problem: mm = 1
# Optional parameters (to optimize the performance of PSO we need to tune these parameters)
particle_size = 100  # number of particles
iterations = 500  # maximum number of iterations
w = 0.95  # inertia constant
c1 = 2  # cognitive constant
c2 = 2  # social constant
```
Now let's go through the algorithm.
```
# Algorithm
class Particle:
def __init__(self, bounds):
        self.particle_position = []  # particle position
        self.particle_velocity = []  # particle velocity
        self.local_best_particle_position = []  # best position found by the particle
        self.fitness_local_best_particle_position = initial_fitness  # initial objective value of the particle's best position
        self.fitness_particle_position = initial_fitness  # objective value at the particle's current position
        for i in range(nv):
            self.particle_position.append(random.uniform(bounds[i][0], bounds[i][1]))  # generate a random initial position
            self.particle_velocity.append(random.uniform(-1, 1))  # generate a random initial velocity
def evaluate(self, objective_function):
self.fitness_particle_position = objective_function(self.particle_position)
        if mm == -1:
            if self.fitness_particle_position < self.fitness_local_best_particle_position:
                self.local_best_particle_position = self.particle_position  # update the local best
                self.fitness_local_best_particle_position = self.fitness_particle_position  # update the fitness of the local best
        if mm == 1:
            if self.fitness_particle_position > self.fitness_local_best_particle_position:
                self.local_best_particle_position = self.particle_position  # update the local best
                self.fitness_local_best_particle_position = self.fitness_particle_position  # update the fitness of the local best
def update_velocity(self, global_best_particle_position):
for i in range(nv):
r1 = random.random()
r2 = random.random()
cognitive_velocity = c1*r1*(self.local_best_particle_position[i] - self.particle_position[i])
social_velocity = c2*r2*(global_best_particle_position[i] - self.particle_position[i])
self.particle_velocity[i] = w*self.particle_velocity[i] + cognitive_velocity + social_velocity
def update_position(self, bounds):
for i in range(nv):
self.particle_position[i] = self.particle_position[i] + self.particle_velocity[i]
            # Check and repair the position to satisfy the upper bound
            if self.particle_position[i] > bounds[i][1]:
                self.particle_position[i] = bounds[i][1]
            # Check and repair the position to satisfy the lower bound
            if self.particle_position[i] < bounds[i][0]:
                self.particle_position[i] = bounds[i][0]
class PSO():
def __init__(self, objective_function, bounds, particle_size, iterations):
fitness_global_best_particle_position = initial_fitness
global_best_particle_position = []
swarm_particle = []
for i in range(particle_size):
swarm_particle.append(Particle(bounds))
A = []
for i in range(iterations):
for j in range(particle_size):
swarm_particle[j].evaluate(objective_function)
if mm == -1:
if swarm_particle[j].fitness_particle_position < fitness_global_best_particle_position:
global_best_particle_position = list(swarm_particle[j].particle_position)
fitness_global_best_particle_position = float(swarm_particle[j].fitness_particle_position)
if mm == 1:
if swarm_particle[j].fitness_particle_position > fitness_global_best_particle_position:
global_best_particle_position = list(swarm_particle[j].particle_position)
fitness_global_best_particle_position = float(swarm_particle[j].fitness_particle_position)
for j in range(particle_size):
swarm_particle[j].update_velocity(global_best_particle_position)
swarm_particle[j].update_position(bounds)
            A.append(fitness_global_best_particle_position)  # record the best fitness
print('Optimal solution:', global_best_particle_position)
print('Objective function value:', fitness_global_best_particle_position)
print('Evolutionary process of the objective function value:')
plt.plot(A)
if mm == -1:
    initial_fitness = float("inf")  # for a minimization problem
if mm == 1:
    initial_fitness = -float("inf")  # for a maximization problem
# Run PSO
PSO(objective_function, bounds, particle_size, iterations)
```
## Points to remember
+ Population: swarm
+ Solutions: particles
+ Value assigned to each particle: fitness
+ The particles move through the search space.
+ The movements of the particles are guided by:
    - knowledge of their own best position in the search space;
    - knowledge of the best position of the whole swarm.
+ As better positions are discovered, they guide the movements of the swarm.
+ We iterate until a solution is found. (A solution is not always found.)
+ The goal is to minimize or maximize a cost function.
+ Advantage: fast convergence.
+ Disadvantage: it can get trapped in a local minimum instead of the global minimum.
+ It does not compute derivatives like other optimizers do, so it can be used on non-differentiable functions.
### References:
http://ijcsi.org/papers/IJCSI-9-6-2-264-271.pdf
https://www.youtube.com/watch?v=JhgDMAm-imI
https://www.youtube.com/watch?v=7uZcuaUvwq0&t=134s
| true |
code
| 0.505859 | null | null | null | null |
|
```
#export
from fastai2.data.all import *
from fastai2.text.core import *
from nbdev.showdoc import *
#default_exp text.models.awdlstm
#default_cls_lvl 3
```
# AWD-LSTM
> AWD LSTM from [Smerity et al.](https://arxiv.org/pdf/1708.02182.pdf)
## Basic NLP modules
On top of the pytorch or the fastai [`layers`](/layers.html#layers), the language models use some custom layers specific to NLP.
```
#export
def dropout_mask(x, sz, p):
"Return a dropout mask of the same type as `x`, size `sz`, with probability `p` to cancel an element."
return x.new(*sz).bernoulli_(1-p).div_(1-p)
t = dropout_mask(torch.randn(3,4), [4,3], 0.25)
test_eq(t.shape, [4,3])
assert ((t == 4/3) + (t==0)).all()
#export
class RNNDropout(Module):
"Dropout with probability `p` that is consistent on the seq_len dimension."
def __init__(self, p=0.5): self.p=p
def forward(self, x):
if not self.training or self.p == 0.: return x
return x * dropout_mask(x.data, (x.size(0), 1, x.size(2)), self.p)
dp = RNNDropout(0.3)
tst_inp = torch.randn(4,3,7)
tst_out = dp(tst_inp)
for i in range(4):
for j in range(7):
if tst_out[i,0,j] == 0: assert (tst_out[i,:,j] == 0).all()
else: test_close(tst_out[i,:,j], tst_inp[i,:,j]/(1-0.3))
#export
import warnings
#export
class WeightDropout(Module):
"A module that warps another layer in which some weights will be replaced by 0 during training."
def __init__(self, module, weight_p, layer_names='weight_hh_l0'):
self.module,self.weight_p,self.layer_names = module,weight_p,L(layer_names)
for layer in self.layer_names:
#Makes a copy of the weights of the selected layers.
w = getattr(self.module, layer)
self.register_parameter(f'{layer}_raw', nn.Parameter(w.data))
self.module._parameters[layer] = F.dropout(w, p=self.weight_p, training=False)
def _setweights(self):
"Apply dropout to the raw weights."
for layer in self.layer_names:
raw_w = getattr(self, f'{layer}_raw')
self.module._parameters[layer] = F.dropout(raw_w, p=self.weight_p, training=self.training)
def forward(self, *args):
self._setweights()
with warnings.catch_warnings():
#To avoid the warning that comes because the weights aren't flattened.
warnings.simplefilter("ignore")
return self.module.forward(*args)
def reset(self):
for layer in self.layer_names:
raw_w = getattr(self, f'{layer}_raw')
self.module._parameters[layer] = F.dropout(raw_w, p=self.weight_p, training=False)
if hasattr(self.module, 'reset'): self.module.reset()
module = nn.LSTM(5,7).cuda()
dp_module = WeightDropout(module, 0.4)
wgts = getattr(dp_module.module, 'weight_hh_l0')
tst_inp = torch.randn(10,20,5).cuda()
h = torch.zeros(1,20,7).cuda(), torch.zeros(1,20,7).cuda()
x,h = dp_module(tst_inp,h)
new_wgts = getattr(dp_module.module, 'weight_hh_l0')
test_eq(wgts, getattr(dp_module, 'weight_hh_l0_raw'))
assert 0.2 <= (new_wgts==0).sum().float()/new_wgts.numel() <= 0.6
#export
class EmbeddingDropout(Module):
"Apply dropout with probabily `embed_p` to an embedding layer `emb`."
def __init__(self, emb, embed_p):
self.emb,self.embed_p = emb,embed_p
def forward(self, words, scale=None):
if self.training and self.embed_p != 0:
size = (self.emb.weight.size(0),1)
mask = dropout_mask(self.emb.weight.data, size, self.embed_p)
masked_embed = self.emb.weight * mask
else: masked_embed = self.emb.weight
if scale: masked_embed.mul_(scale)
return F.embedding(words, masked_embed, ifnone(self.emb.padding_idx, -1), self.emb.max_norm,
self.emb.norm_type, self.emb.scale_grad_by_freq, self.emb.sparse)
enc = nn.Embedding(10, 7, padding_idx=1)
enc_dp = EmbeddingDropout(enc, 0.5)
tst_inp = torch.randint(0,10,(8,))
tst_out = enc_dp(tst_inp)
for i in range(8):
assert (tst_out[i]==0).all() or torch.allclose(tst_out[i], 2*enc.weight[tst_inp[i]])
#export
class AWD_LSTM(Module):
"AWD-LSTM inspired by https://arxiv.org/abs/1708.02182"
initrange=0.1
def __init__(self, vocab_sz, emb_sz, n_hid, n_layers, pad_token=1, hidden_p=0.2, input_p=0.6, embed_p=0.1,
weight_p=0.5, bidir=False):
store_attr(self, 'emb_sz,n_hid,n_layers,pad_token')
self.bs = 1
self.n_dir = 2 if bidir else 1
self.encoder = nn.Embedding(vocab_sz, emb_sz, padding_idx=pad_token)
self.encoder_dp = EmbeddingDropout(self.encoder, embed_p)
self.rnns = nn.ModuleList([self._one_rnn(emb_sz if l == 0 else n_hid, (n_hid if l != n_layers - 1 else emb_sz)//self.n_dir,
bidir, weight_p, l) for l in range(n_layers)])
self.encoder.weight.data.uniform_(-self.initrange, self.initrange)
self.input_dp = RNNDropout(input_p)
self.hidden_dps = nn.ModuleList([RNNDropout(hidden_p) for l in range(n_layers)])
self.reset()
def forward(self, inp, from_embeds=False):
bs,sl = inp.shape[:2] if from_embeds else inp.shape
if bs!=self.bs: self._change_hidden(bs)
output = self.input_dp(inp if from_embeds else self.encoder_dp(inp))
new_hidden = []
for l, (rnn,hid_dp) in enumerate(zip(self.rnns, self.hidden_dps)):
output, new_h = rnn(output, self.hidden[l])
new_hidden.append(new_h)
if l != self.n_layers - 1: output = hid_dp(output)
self.hidden = to_detach(new_hidden, cpu=False, gather=False)
return output
def _change_hidden(self, bs):
self.hidden = [self._change_one_hidden(l, bs) for l in range(self.n_layers)]
self.bs = bs
def _one_rnn(self, n_in, n_out, bidir, weight_p, l):
"Return one of the inner rnn"
rnn = nn.LSTM(n_in, n_out, 1, batch_first=True, bidirectional=bidir)
return WeightDropout(rnn, weight_p)
def _one_hidden(self, l):
"Return one hidden state"
nh = (self.n_hid if l != self.n_layers - 1 else self.emb_sz) // self.n_dir
return (one_param(self).new_zeros(self.n_dir, self.bs, nh), one_param(self).new_zeros(self.n_dir, self.bs, nh))
def _change_one_hidden(self, l, bs):
if self.bs < bs:
nh = (self.n_hid if l != self.n_layers - 1 else self.emb_sz) // self.n_dir
return tuple(torch.cat([h, h.new_zeros(self.n_dir, bs-self.bs, nh)], dim=1) for h in self.hidden[l])
if self.bs > bs: return (self.hidden[l][0][:,:bs].contiguous(), self.hidden[l][1][:,:bs].contiguous())
return self.hidden[l]
def reset(self):
"Reset the hidden states"
[r.reset() for r in self.rnns if hasattr(r, 'reset')]
self.hidden = [self._one_hidden(l) for l in range(self.n_layers)]
```
This is the core of an AWD-LSTM model, with an embedding of `vocab_sz` by `emb_sz`, and `n_layers` LSTMs (potentially `bidir`) stacked, the first one going from `emb_sz` to `n_hid`, the last one from `n_hid` to `emb_sz` and all the inner ones from `n_hid` to `n_hid`. `pad_token` is passed to the PyTorch embedding layer. The dropouts are applied as such:
- the embeddings are wrapped in `EmbeddingDropout` of probability `embed_p`;
- the result of this embedding layer goes through an `RNNDropout` of probability `input_p`;
- each LSTM has `WeightDropout` applied with probability `weight_p`;
- between two of the inner LSTMs, an `RNNDropout` is applied with probability `hidden_p`.
The module returns the output of the last LSTM, after updating its hidden states. Since no dropout is applied on this last output, it is the output that should be fed to a decoder (in the case of a language model).
```
tst = AWD_LSTM(100, 20, 10, 2)
x = torch.randint(0, 100, (10,5))
r = tst(x)
test_eq(tst.bs, 10)
test_eq(len(tst.hidden), 2)
test_eq([h_.shape for h_ in tst.hidden[0]], [[1,10,10], [1,10,10]])
test_eq([h_.shape for h_ in tst.hidden[1]], [[1,10,20], [1,10,20]])
test_eq(r.shape, [10,5,20])
test_eq(r[:,-1], tst.hidden[-1][0][0]) #hidden state is the last timestep in raw outputs
#hide
#test bs change
x = torch.randint(0, 100, (6,5))
r = tst(x)
test_eq(tst.bs, 6)
# hide
# cuda
tst = AWD_LSTM(100, 20, 10, 2, bidir=True).to('cuda')
tst.reset()
x = torch.randint(0, 100, (10,5)).to('cuda')
r = tst(x)
x = torch.randint(0, 100, (6,5), device='cuda')
r = tst(x)
#export
def awd_lstm_lm_split(model):
"Split a RNN `model` in groups for differential learning rates."
groups = [nn.Sequential(rnn, dp) for rnn, dp in zip(model[0].rnns, model[0].hidden_dps)]
groups = L(groups + [nn.Sequential(model[0].encoder, model[0].encoder_dp, model[1])])
return groups.map(params)
splits = awd_lstm_lm_split
#export
awd_lstm_lm_config = dict(emb_sz=400, n_hid=1152, n_layers=3, pad_token=1, bidir=False, output_p=0.1,
hidden_p=0.15, input_p=0.25, embed_p=0.02, weight_p=0.2, tie_weights=True, out_bias=True)
#export
def awd_lstm_clas_split(model):
"Split a RNN `model` in groups for differential learning rates."
groups = [nn.Sequential(model[0].module.encoder, model[0].module.encoder_dp)]
groups += [nn.Sequential(rnn, dp) for rnn, dp in zip(model[0].module.rnns, model[0].module.hidden_dps)]
groups = L(groups + [model[1]])
return groups.map(params)
#export
awd_lstm_clas_config = dict(emb_sz=400, n_hid=1152, n_layers=3, pad_token=1, bidir=False, output_p=0.4,
hidden_p=0.3, input_p=0.4, embed_p=0.05, weight_p=0.5)
```
## QRNN
```
#export
class AWD_QRNN(AWD_LSTM):
"Same as an AWD-LSTM, but using QRNNs instead of LSTMs"
def _one_rnn(self, n_in, n_out, bidir, weight_p, l):
from fastai2.text.models.qrnn import QRNN
rnn = QRNN(n_in, n_out, 1, save_prev_x=(not bidir), zoneout=0, window=2 if l == 0 else 1, output_gate=True, bidirectional=bidir)
rnn.layers[0].linear = WeightDropout(rnn.layers[0].linear, weight_p, layer_names='weight')
return rnn
def _one_hidden(self, l):
"Return one hidden state"
nh = (self.n_hid if l != self.n_layers - 1 else self.emb_sz) // self.n_dir
return one_param(self).new_zeros(self.n_dir, self.bs, nh)
def _change_one_hidden(self, l, bs):
if self.bs < bs:
nh = (self.n_hid if l != self.n_layers - 1 else self.emb_sz) // self.n_dir
return torch.cat([self.hidden[l], self.hidden[l].new_zeros(self.n_dir, bs-self.bs, nh)], dim=1)
if self.bs > bs: return self.hidden[l][:, :bs]
return self.hidden[l]
model = AWD_QRNN(vocab_sz=10, emb_sz=20, n_hid=16, n_layers=2, bidir=False)
x = torch.randint(0, 10, (7,5))
y = model(x)
test_eq(y.shape, (7, 5, 20))
# hide
# test bidir=True
model = AWD_QRNN(vocab_sz=10, emb_sz=20, n_hid=16, n_layers=2, bidir=True)
x = torch.randint(0, 10, (7,5))
y = model(x)
test_eq(y.shape, (7, 5, 20))
#export
awd_qrnn_lm_config = dict(emb_sz=400, n_hid=1552, n_layers=4, pad_token=1, bidir=False, output_p=0.1,
hidden_p=0.15, input_p=0.25, embed_p=0.02, weight_p=0.2, tie_weights=True, out_bias=True)
#export
awd_qrnn_clas_config = dict(emb_sz=400, n_hid=1552, n_layers=4, pad_token=1, bidir=False, output_p=0.4,
hidden_p=0.3, input_p=0.4, embed_p=0.05, weight_p=0.5)
```
## Export -
```
#hide
from nbdev.export import notebook2script
notebook2script()
```
| true |
code
| 0.748469 | null | null | null | null |
|
# Convolutional Neural Networks
```
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, datasets, Sequential
```
### 1. Convolution with custom weights
**In TensorFlow:**
- $C_{in}$ = number of input channels = number of channels per kernel
- $C_{out}$ = number of kernels = number of output channels
$$X:[b, h, w, C_{in}],\quad W:[k, k, C_{in}, C_{out}]$$
$$\Downarrow$$
$$O:[b, h', w', C_{out}]$$
```
x = tf.random.normal([2, 5, 5, 3])  # input: 5*5, 3 channels
w = tf.random.normal([3, 3, 3, 4])  # 4 kernels of size 3*3
# Set the stride to 1 and the padding to 0
# The padding argument has the format: padding=[[0,0],[top,bottom],[left,right],[0,0]]
out = tf.nn.conv2d(x, w, strides=1, padding=[[0, 0], [0, 0], [0, 0], [0, 0]])
out.shape
# padding of 1 on every side
out = tf.nn.conv2d(x, w, strides=1, padding=[[0, 0], [1, 1], [1, 1], [0, 0]])
out.shape
# stride of 1, with padding chosen so that output and input have the same size
# note that padding='SAME' only keeps the size unchanged when strides=1
out = tf.nn.conv2d(x, w, strides=1, padding='SAME')
out.shape
# when s > 1, setting padding='SAME' shrinks the output height and width by a factor of 1/s
# the height/width are first padded to 6, the smallest integer divisible by 3,
# then reduced by a factor of 3, giving 2x2
out = tf.nn.conv2d(x, w, strides=3, padding='SAME')
out.shape
# tf.nn.conv2d does not compute a bias vector, so the bias must be added manually
b = tf.zeros([4])
out = out + b
```
### 2. The convolutional layer class
- In `TensorFlow`, `API` names follow a convention: names starting with a capital letter generally denote classes, while all-lowercase names generally denote functions.
```
# When the kernel height and width are equal:
# create a convolutional layer with 4 kernels of size 3x3, stride 1, and 'SAME' padding
layer = layers.Conv2D(4, kernel_size=3, strides=1, padding='SAME')
# When the kernel height and width differ:
layer = layers.Conv2D(4, kernel_size=(3, 4), strides=(1, 2), padding="SAME")
layer = layers.Conv2D(4, kernel_size=3, strides=1, padding='SAME')
out = layer(x)  # forward pass
out.shape
# return the list of W and b
# layer.trainable_variables
# layer.kernel  # layer.weights
# layer.bias
```
### 3. LeNet-5 in practice
```
(X_train, y_train), (X_test, y_test) = datasets.mnist.load_data()
X_train = tf.convert_to_tensor(X_train, dtype=tf.float32)
y_train = tf.convert_to_tensor(y_train, dtype=tf.int32)
X_test = tf.convert_to_tensor(X_test, dtype=tf.float32)
y_test = tf.convert_to_tensor(y_test, dtype=tf.int32)
network = Sequential([
    layers.Conv2D(6, kernel_size=3, strides=1),   # first convolutional layer: 6 kernels of size 3x3
    layers.MaxPooling2D(pool_size=2, strides=2),  # pooling layer that halves the height and width
    layers.ReLU(),
    layers.Conv2D(16, kernel_size=3, strides=1),  # second convolutional layer: 16 kernels of size 3x3
    layers.MaxPooling2D(pool_size=2, strides=2),  # pooling layer that halves the height and width
    layers.ReLU(),
    layers.Flatten(),                             # flatten layer, so the dense layers can process the output
    layers.Dense(120, activation='relu'),
    layers.Dense(84, activation='relu'),
    layers.Dense(10)
])
# build the network once, giving the shape of the input X; the 4 is an arbitrary batch size
network.build(input_shape=(4, 28, 28, 1))
network.summary()
from tensorflow.keras import losses, optimizers
# insert a channel dimension => [b, 28, 28, 1]
X_train = tf.expand_dims(X_train, axis=3)
X_train.shape
# with from_logits=True, the softmax activation is folded into the loss function
# create an instance of the loss class; in the actual computation, just call the instance
criteon = losses.CategoricalCrossentropy(from_logits=True)
optimizer = optimizers.SGD(lr=0.01)
for epoch in range(5):
    # open a gradient-recording context
    with tf.GradientTape() as tape:
        # forward pass to get the predicted distribution over the 10 classes, [b, 784] => [b, 10]
        out = network(X_train)
        # one-hot encode the true labels, [b] => [b, 10]
        y_train_onehot = tf.one_hot(y_train, depth=10)
        # compute the cross-entropy loss (a scalar)
        loss = criteon(y_train_onehot, out)
        print("losses: ", loss)
    # compute the gradients automatically
    grads = tape.gradient(loss, network.trainable_variables)
    # update the parameters automatically
    optimizer.apply_gradients(zip(grads, network.trainable_variables))
```
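For larger datasets it is usually better to iterate over mini-batches instead of feeding the whole training set in one step. A batched variant of the loop above could look like this (a sketch reusing the same `network`, `criteon` and `optimizer`; the batch size of 128 is an arbitrary choice):
```
# build a shuffled, batched pipeline from the training tensors
train_db = tf.data.Dataset.from_tensor_slices((X_train, y_train)).shuffle(1000).batch(128)
for epoch in range(5):
    for x, y in train_db:
        with tf.GradientTape() as tape:
            out = network(x)
            y_onehot = tf.one_hot(y, depth=10)
            loss = criteon(y_onehot, out)
        grads = tape.gradient(loss, network.trainable_variables)
        optimizer.apply_gradients(zip(grads, network.trainable_variables))
```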
**Testing**
```
X_test = tf.expand_dims(X_test, axis=3)
X_test.shape
y_predict = network(X_test)
y_predict.shape
# the model output has not gone through softmax
y_predict[0]
y_predict = tf.argmax(y_predict, axis=1)
y_predict[:100]
y_predict2 = network(X_test)
y_predict2.shape
```
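To finish the evaluation, the test accuracy can be computed from the predicted labels (a sketch; `y_predict` holds the argmax labels computed above):
```
# count the predictions that match the true labels, then normalize
correct = tf.reduce_sum(tf.cast(tf.equal(tf.cast(y_predict, tf.int32), y_test), tf.int32))
accuracy = correct / y_test.shape[0]
print(accuracy)
```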
| true |
code
| 0.65321 | null | null | null | null |
|
```
%matplotlib inline
```
Word Embeddings: Encoding Lexical Semantics
===========================================
Word embeddings are dense vectors of real numbers, one per word in your
vocabulary. In NLP, it is almost always the case that your features are
words! But how should you represent a word in a computer? You could
store its ascii character representation, but that only tells you what
the word *is*, it doesn't say much about what it *means* (you might be
able to derive its part of speech from its affixes, or properties from
its capitalization, but not much). Even more, in what sense could you
combine these representations? We often want dense outputs from our
neural networks, where the inputs are $|V|$ dimensional, where
$V$ is our vocabulary, but often the outputs are only a few
dimensional (if we are only predicting a handful of labels, for
instance). How do we get from a massive dimensional space to a smaller
dimensional space?
How about instead of ascii representations, we use a one-hot encoding?
That is, we represent the word $w$ by
\begin{align}\overbrace{\left[ 0, 0, \dots, 1, \dots, 0, 0 \right]}^\text{|V| elements}\end{align}
where the 1 is in a location unique to $w$. Any other word will
have a 1 in some other location, and a 0 everywhere else.
There is an enormous drawback to this representation, besides just how
huge it is. It basically treats all words as independent entities with
no relation to each other. What we really want is some notion of
*similarity* between words. Why? Let's see an example.
Suppose we are building a language model. Suppose we have seen the
sentences
* The mathematician ran to the store.
* The physicist ran to the store.
* The mathematician solved the open problem.
in our training data. Now suppose we get a new sentence never before
seen in our training data:
* The physicist solved the open problem.
Our language model might do OK on this sentence, but wouldn't it be much
better if we could use the following two facts:
* We have seen mathematician and physicist in the same role in a sentence. Somehow they
have a semantic relation.
* We have seen mathematician in the same role in this new unseen sentence
as we are now seeing physicist.
and then infer that physicist is actually a good fit in the new unseen
sentence? This is what we mean by a notion of similarity: we mean
*semantic similarity*, not simply having similar orthographic
representations. It is a technique to combat the sparsity of linguistic
data, by connecting the dots between what we have seen and what we
haven't. This example of course relies on a fundamental linguistic
assumption: that words appearing in similar contexts are related to each
other semantically. This is called the `distributional
hypothesis <https://en.wikipedia.org/wiki/Distributional_semantics>`__.
Getting Dense Word Embeddings
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
How can we solve this problem? That is, how could we actually encode
semantic similarity in words? Maybe we think up some semantic
attributes. For example, we see that both mathematicians and physicists
can run, so maybe we give these words a high score for the "is able to
run" semantic attribute. Think of some other attributes, and imagine
what you might score some common words on those attributes.
If each attribute is a dimension, then we might give each word a vector,
like this:
\begin{align}q_\text{mathematician} = \left[ \overbrace{2.3}^\text{can run},
\overbrace{9.4}^\text{likes coffee}, \overbrace{-5.5}^\text{majored in Physics}, \dots \right]\end{align}
\begin{align}q_\text{physicist} = \left[ \overbrace{2.5}^\text{can run},
\overbrace{9.1}^\text{likes coffee}, \overbrace{6.4}^\text{majored in Physics}, \dots \right]\end{align}
Then we can get a measure of similarity between these words by doing:
\begin{align}\text{Similarity}(\text{physicist}, \text{mathematician}) = q_\text{physicist} \cdot q_\text{mathematician}\end{align}
Although it is more common to normalize by the lengths:
\begin{align}\text{Similarity}(\text{physicist}, \text{mathematician}) = \frac{q_\text{physicist} \cdot q_\text{mathematician}}
{\| q_\text{physicist} \| \| q_\text{mathematician} \|} = \cos (\phi)\end{align}
Where $\phi$ is the angle between the two vectors. That way,
extremely similar words (words whose embeddings point in the same
direction) will have similarity 1. Extremely dissimilar words should
have similarity -1.
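For instance, the hand-crafted vectors above (truncated to three attributes) can be compared with PyTorch's built-in cosine similarity:
```
import torch
import torch.nn.functional as F

q_mathematician = torch.tensor([2.3, 9.4, -5.5])
q_physicist = torch.tensor([2.5, 9.1, 6.4])
# cosine_similarity works on batches, hence the unsqueeze(0)
print(F.cosine_similarity(q_physicist.unsqueeze(0), q_mathematician.unsqueeze(0)))
```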
You can think of the sparse one-hot vectors from the beginning of this
section as a special case of these new vectors we have defined, where
each word basically has similarity 0, and we gave each word some unique
semantic attribute. These new vectors are *dense*, which is to say their
entries are (typically) non-zero.
But these new vectors are a big pain: you could think of thousands of
different semantic attributes that might be relevant to determining
similarity, and how on earth would you set the values of the different
attributes? Central to the idea of deep learning is that the neural
network learns representations of the features, rather than requiring
the programmer to design them herself. So why not just let the word
embeddings be parameters in our model, and then be updated during
training? This is exactly what we will do. We will have some *latent
semantic attributes* that the network can, in principle, learn. Note
that the word embeddings will probably not be interpretable. That is,
although with our hand-crafted vectors above we can see that
mathematicians and physicists are similar in that they both like coffee,
if we allow a neural network to learn the embeddings and see that both
mathematicians and physicists have a large value in the second
dimension, it is not clear what that means. They are similar in some
latent semantic dimension, but this probably has no interpretation to
us.
In summary, **word embeddings are a representation of the *semantics* of
a word, efficiently encoding semantic information that might be relevant
to the task at hand**. You can embed other things too: part of speech
tags, parse trees, anything! The idea of feature embeddings is central
to the field.
Word Embeddings in Pytorch
~~~~~~~~~~~~~~~~~~~~~~~~~~
Before we get to a worked example and an exercise, a few quick notes
about how to use embeddings in Pytorch and in deep learning programming
in general. Similar to how we defined a unique index for each word when
making one-hot vectors, we also need to define an index for each word
when using embeddings. These will be keys into a lookup table. That is,
embeddings are stored as a $|V| \times D$ matrix, where $D$
is the dimensionality of the embeddings, such that the word assigned
index $i$ has its embedding stored in the $i$'th row of the
matrix. In all of my code, the mapping from words to indices is a
dictionary named word\_to\_ix.
The module that allows you to use embeddings is torch.nn.Embedding,
which takes two arguments: the vocabulary size, and the dimensionality
of the embeddings.
To index into this table, you must use torch.LongTensor (since the
indices are integers, not floats).
```
# Author: Robert Guthrie
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
torch.manual_seed(1)
word_to_ix = {"hello": 0, "world": 1}
embeds = nn.Embedding(2, 5) # 2 words in vocab, 5 dimensional embeddings
lookup_tensor = torch.tensor([word_to_ix["hello"]], dtype=torch.long)
hello_embed = embeds(lookup_tensor)
print(hello_embed)
```
An Example: N-Gram Language Modeling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Recall that in an n-gram language model, given a sequence of words
$w$, we want to compute
\begin{align}P(w_i | w_{i-1}, w_{i-2}, \dots, w_{i-n+1} )\end{align}
Where $w_i$ is the ith word of the sequence.
In this example, we will compute the loss function on some training
examples and update the parameters with backpropagation.
```
CONTEXT_SIZE = 2
EMBEDDING_DIM = 10
# We will use Shakespeare Sonnet 2
test_sentence = """When forty winters shall besiege thy brow,
And dig deep trenches in thy beauty's field,
Thy youth's proud livery so gazed on now,
Will be a totter'd weed of small worth held:
Then being asked, where all thy beauty lies,
Where all the treasure of thy lusty days;
To say, within thine own deep sunken eyes,
Were an all-eating shame, and thriftless praise.
How much more praise deserv'd thy beauty's use,
If thou couldst answer 'This fair child of mine
Shall sum my count, and make my old excuse,'
Proving his beauty by succession thine!
This were to be new made when thou art old,
And see thy blood warm when thou feel'st it cold.""".split()
# we should tokenize the input, but we will ignore that for now
# build a list of tuples. Each tuple is ([ word_i-2, word_i-1 ], target word)
trigrams = [([test_sentence[i], test_sentence[i + 1]], test_sentence[i + 2])
for i in range(len(test_sentence) - 2)]
# print the first 3, just so you can see what they look like
print(trigrams[:3])
vocab = set(test_sentence)
word_to_ix = {word: i for i, word in enumerate(vocab)}
class NGramLanguageModeler(nn.Module):
def __init__(self, vocab_size, embedding_dim, context_size):
super(NGramLanguageModeler, self).__init__()
self.embeddings = nn.Embedding(vocab_size, embedding_dim)
self.linear1 = nn.Linear(context_size * embedding_dim, 128)
self.linear2 = nn.Linear(128, vocab_size)
def forward(self, inputs):
embeds = self.embeddings(inputs).view((1, -1))
out = F.relu(self.linear1(embeds))
out = self.linear2(out)
log_probs = F.log_softmax(out, dim=1)
return log_probs
losses = []
loss_function = nn.NLLLoss()
model = NGramLanguageModeler(len(vocab), EMBEDDING_DIM, CONTEXT_SIZE)
optimizer = optim.SGD(model.parameters(), lr=0.001)
for epoch in range(10):
total_loss = 0
for context, target in trigrams:
# Step 1. Prepare the inputs to be passed to the model (i.e, turn the words
# into integer indices and wrap them in tensors)
context_idxs = torch.tensor([word_to_ix[w] for w in context], dtype=torch.long)
# Step 2. Recall that torch *accumulates* gradients. Before passing in a
# new instance, you need to zero out the gradients from the old
# instance
model.zero_grad()
# Step 3. Run the forward pass, getting log probabilities over next
# words
log_probs = model(context_idxs)
# Step 4. Compute your loss function. (Again, Torch wants the target
# word wrapped in a tensor)
loss = loss_function(log_probs, torch.tensor([word_to_ix[target]], dtype=torch.long))
# Step 5. Do the backward pass and update the gradient
loss.backward()
optimizer.step()
# Get the Python number from a 1-element Tensor by calling tensor.item()
total_loss += loss.item()
losses.append(total_loss)
print(losses) # The loss decreased every iteration over the training data!
```
Exercise: Computing Word Embeddings: Continuous Bag-of-Words
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The Continuous Bag-of-Words model (CBOW) is frequently used in NLP deep
learning. It is a model that tries to predict words given the context of
a few words before and a few words after the target word. This is
distinct from language modeling, since CBOW is not sequential and does
not have to be probabilistic. Typically, CBOW is used to quickly train
word embeddings, and these embeddings are used to initialize the
embeddings of some more complicated model. Usually, this is referred to
as *pretraining embeddings*. It almost always helps performance a couple
of percent.
The CBOW model is as follows. Given a target word $w_i$ and an
$N$ context window on each side, $w_{i-1}, \dots, w_{i-N}$
and $w_{i+1}, \dots, w_{i+N}$, referring to all context words
collectively as $C$, CBOW tries to minimize
\begin{align}-\log p(w_i | C) = -\log \text{Softmax}(A(\sum_{w \in C} q_w) + b)\end{align}
where $q_w$ is the embedding for word $w$.
Implement this model in Pytorch by filling in the class below. Some
tips:
* Think about which parameters you need to define.
* Make sure you know what shape each operation expects. Use .view() if you need to
reshape.
```
CONTEXT_SIZE = 2 # 2 words to the left, 2 to the right
raw_text = """We are about to study the idea of a computational process.
Computational processes are abstract beings that inhabit computers.
As they evolve, processes manipulate other abstract things called data.
The evolution of a process is directed by a pattern of rules
called a program. People create programs to direct processes. In effect,
we conjure the spirits of the computer with our spells.""".split()
# By deriving a set from `raw_text`, we deduplicate the array
vocab = set(raw_text)
vocab_size = len(vocab)
word_to_ix = {word: i for i, word in enumerate(vocab)}
data = []
for i in range(2, len(raw_text) - 2):
context = [raw_text[i - 2], raw_text[i - 1],
raw_text[i + 1], raw_text[i + 2]]
target = raw_text[i]
data.append((context, target))
print(data[:5])
class CBOW(nn.Module):
def __init__(self):
pass
def forward(self, inputs):
pass
# create your model and train. here are some functions to help you make
# the data ready for use by your module
def make_context_vector(context, word_to_ix):
idxs = [word_to_ix[w] for w in context]
return torch.tensor(idxs, dtype=torch.long)
make_context_vector(data[0][0], word_to_ix) # example
```
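If you want to check your work against something, here is one possible sketch of the model (not the only valid solution; the embedding dimension of 10 is an arbitrary choice):
```
EMBEDDING_DIM = 10

class CBOWSketch(nn.Module):
    def __init__(self, vocab_size, embedding_dim):
        super(CBOWSketch, self).__init__()
        self.embeddings = nn.Embedding(vocab_size, embedding_dim)
        self.linear = nn.Linear(embedding_dim, vocab_size)

    def forward(self, inputs):
        # sum the embeddings of all context words: (2N, D) -> (1, D)
        embeds = self.embeddings(inputs).sum(dim=0).view(1, -1)
        out = self.linear(embeds)
        return F.log_softmax(out, dim=1)

model = CBOWSketch(vocab_size, EMBEDDING_DIM)
log_probs = model(make_context_vector(data[0][0], word_to_ix))
```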
| true |
code
| 0.697532 | null | null | null | null |
|
# Game of Life
[Game of Life](https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life), introduced by John H. Conway in 1970, is a 2D cellular automaton that simulates a world populated by cells. The world is a 2D square grid that is, in principle, infinite. Each grid position represents a cell that can be either alive or dead. The game is played over a number of generations. To compute the next generation, each grid position is considered independently. The rules are straightforward:
* If a cell in generation $t$ is alive,
  * it is alive in generation $t + 1$ if it has either two or three live neighbours in generation $t$;
* it is dead in generation $t + 1$ otherwise.
* If a cell in generation $t$ is dead,
  * it is alive in generation $t + 1$ if it has exactly three live neighbours in generation $t$;
* it is dead in generation $t + 1$ otherwise.
Each cell has eight neighbours. Typically, the Game of Life world is represented by an $n \times n$ array, and periodic boundary conditions are applied to simulate an infinite world.
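These rules translate directly into a small function for a single cell (a pure-Python sketch, independent of the array-based implementation developed below):
```
def next_state(alive, nr_live_neighbours):
    """Next state of a single cell under the Game of Life rules."""
    if alive:
        return 1 if nr_live_neighbours in (2, 3) else 0
    return 1 if nr_live_neighbours == 3 else 0
```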
## Required imports
```
from IPython.display import HTML
from collections import Counter
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
%matplotlib inline
import numpy as np
```
## World representation
A Game of Life world will be represented by an array of integers. Each array element represents a cell that can either be dead (0) or alive (1). First, we define a class that represents a world, and that is initialized from a given numpy array. This will serve as a base class for classes that implement specific initializations. Typically, those should override `__init__`. The `World` base class defines all methods to compute the next generation and get information on the world's state, as well as a string representation.
```
class World:
'''Class representing a Game of Life world, intended to be subclassed
for specific initialization strategies.'''
def __init__(self, cells):
'''Initialize a world using the cells provided
Parameters
----------
cells : np.ndarray
2D numpy array representing the world, 1 represents a cell that
is alive, 0 represents a dead cell.
'''
self._world = np.copy(cells.astype(np.int8))
self._tmp_world = np.empty_like(self._world)
@property
def shape(self):
'''Get the shape of the world
Returns
-------
tuple
shape of the world as a 2-tuple of int
'''
return self._world.shape
@property
def nr_alive(self):
'''Get the number of cells that are alive
Returns
-------
int
number of cells alive in the world
'''
return np.sum(self._world)
@property
def cells(self):
'''Get the world as a 2D numpy array
Returns
-------
np.ndarray
2D numpy array of 0 and 1 int values, where 1 represents
a cell that is alive, and 0 one that is dead
'''
return np.copy(self._world)
@property
def fraction_alive(self):
'''Get the fraction of cells that are alive in the world
Returns
-------
float
fraction of cells that are alive
'''
return np.sum(self._world)/(self.shape[0]*self.shape[1])
def is_alive(self, i, j):
return self._world[i, j] == 1
    def nr_neighbours(self, i, j):
up = (i + self.shape[0] - 1) % self.shape[0]
down = (i + 1) % self.shape[0]
left = (j + self.shape[1] - 1) % self.shape[1]
right = (j + 1) % self.shape[1]
return (self._world[up, left] + self._world[up, j] +
self._world[up, right] +
self._world[i, left] + self._world[i, right] +
self._world[down, left] + self._world[down, j] +
self._world[down, right])
def next_generation(self):
'''Compute the world's next generation
'''
for i in range(self.shape[0]):
for j in range(self.shape[1]):
                nr_nb = self.nr_neighbours(i, j)
if self.is_alive(i, j):
self._tmp_world[i, j] = 1 if nr_nb == 2 or nr_nb == 3 else 0
else:
self._tmp_world[i, j] = 1 if nr_nb == 3 else 0
self._world = self._tmp_world
def __repr__(self):
return '\n'.join(' '.join(f'{self._world[i, j]:1d}'
for j in range(self.shape[1]))
for i in range(self.shape[0]))
```
### Random world
The `RandomWorld` class inherits from the `World` base class, and initializes an $n \times n$ world randomly, so that a fraction $f_{\rm alive}$ of the cells is alive.
```
class RandomWorld(World):
'''Class representing a world that is initialized randomly so that a given
fraction of cells is alive. Note this is not necessarily exact.'''
def __init__(self, n, f_alive):
'''Create a random world with a give fraction of cells that are alive.
Parameters
----------
n : int
size of the n*n world
f_alive : float
fraction of cells that are alive (between 0.0 and 1.0)
'''
super().__init__(np.random.choice(np.array([0, 1], dtype=np.int8),
(n, n), p=(1 - f_alive, f_alive)))
```
Create a world and run a generation.
```
world = RandomWorld(10, 0.4)
world
world.next_generation()
print(world)
```
### Patch world
A second, interesting way to initialize a world is from a numpy array representing a $p_0 \times p_1$ patch in the $n \times n$ world, where, obviously, $p_0 \le n$ and $p_1 \le n$.
```
class PatchWorld(World):
'''Class that is initialized with a patch given as a 2D numpy array. All
other cells are dead.'''
def __init__(self, n, patch):
'''Create a random world with a give initial patch, all
other cells will be dead.
Parameters
----------
n : int
size of the n*n world
patch : np.ndarray
2D numpy array containing the part of the world to be
initialized; patch.shape[0] <= n, patch.shape[1] <= n,
and patch should contain 1 for a cell that is alive, 0
for a cell that is dead
'''
world = np.zeros((n, n))
world[0:patch.shape[0], 0:patch.shape[1]] = patch
super().__init__(world)
world = PatchWorld(10, np.array([[1, 0, 0], [1, 1, 0]]))
world
```
## Simulation runner
We define a class to conveniently perform a complete simulation. At most `max_gen` generations are computed, but the computation stops as soon as a cycle is detected.
```
class WorldRunner:
'''Class to run a simulation of the given world over a maximum of
generations. The simulation will stop as soon as a cycle is detected.'''
def __init__(self, world, max_gen, early_stopping=True):
'''Initialize the run with the initial world and the maximum
number of generations to simulate.
Parameters
----------
world : World
initial world to run the simulation on
max_gen : int
maximum number of generations to simulate
early_stopping : bool
if True, stop when a cycle is detected, otherwise,
continue form max_gen generations
'''
self._world = world
self._max_gen = max_gen
self._early_stopping = early_stopping
self._cycle_length = None
self._hist = [self._world.cells]
@property
def max_gen(self):
'''Get the maximum generation for this simulation
Returns
-------
int
maximum number of generations for this run
'''
return self._max_gen
@property
def nr_generations(self):
'''Get the number of generations computed, note that this may be less than
the maximum number of generations if a cycle was detected.
Returns
-------
int
number of generations computed in this run
'''
return len(self._hist) - 1
def has_cycle(self):
'''Check whether a cycle was detected.
Returns
-------
bool
True if a cycle was detected, False otherwise
'''
return self._cycle_length is not None
@property
def cycle_length(self):
'''Get the cycle length, if any.
Returns
-------
int
length of the detected cycle, None if no cycle was found.
'''
return self._cycle_length
@property
def history(self):
'''Get the world history.
Returns
-------
list
a list of the generations of this world, represented as 2D
numpy arrays.
'''
return self._hist
def _has_cycle(self):
for gen in range(-2, -len(self._hist), -1):
if np.all(self._hist[-1] == self._hist[gen]):
self._cycle_length = -gen - 1
return True
return False
def run(self):
'''Run the simulation for the world.
'''
for _ in range(1, self.max_gen + 1):
self._world.next_generation()
self._hist.append(self._world.cells)
if self._has_cycle() and self._early_stopping:
break
```
Create a world, and run it for a number of generations, then check on the properties.
```
world = RandomWorld(10, 0.3)
runner = WorldRunner(world, 100)
runner.run()
```
The current state of the world can be checked.
```
world
world.fraction_alive
```
Check whether a cycle has been detected, what the cycle length is, and after how many generations it occurred.
```
runner.has_cycle()
runner.cycle_length
runner.nr_generations
```
## Simulation visualization
To gain insight into the Game of Life dynamics, it is useful to visualize the consecutive generations of a world. This can be done by using the `FuncAnimation` function provided by matplotlib. Given the setup this function requires, it is convenient to wrap its creation in a class.
```
class WorldView:
'''Class for creating an animation of the world's history.'''
def __init__(self, world_runner):
'''Initialize the view object.
Parameters
----------
world_runner : WorldRunner
runner that has completed a simulation to visualize.
'''
self._world_runner = world_runner
self._nr_gen = world_runner.nr_generations
self._figure, self._axes = plt.subplots()
self._axes.get_xaxis().set_visible(False)
self._axes.get_yaxis().set_visible(False)
@property
def figure(self):
return self._figure
def create_animation(self):
'''Create an animation.
Returns
-------
function
function that will visualize the simulation.
'''
return FuncAnimation(self.figure, self.create_animate(),
init_func=self.create_init(),
frames=self._world_runner.nr_generations)
def create_animate(self):
def animate(i):
self._axes.imshow(self._world_runner.history[i])
return animate
def create_init(self):
def init():
self._axes.imshow(self._world_runner.history[0])
return init
world_size = 10
f_alive = 0.3
max_generations = 100
world = RandomWorld(world_size, f_alive)
world_runner = WorldRunner(world, max_generations)
world_runner.run()
world_runner.nr_generations
world_view = WorldView(world_runner)
animation = world_view.create_animation()
HTML(animation.to_jshtml(default_mode='once'))
world
world_runner.cycle_length
```
## Simulation statistics
First, we define a class that is an iterator over randomly initialized worlds. All worlds will have the same given size, and fraction of cells that are alive.
```
class RandomWorldGenerator:
'''Iterator over randomly initialized worlds.'''
def __init__(self, nr_worlds, size, f_alive):
'''Create an iterator over a given number of worlds, each of the same
size, and (approximately) the same number of cells that are alive.
Parameters
---------
nr_worlds : int
number of worlds to generate
size : int
world size
f_alive : float
            fraction of cells that are alive
'''
self._nr_worlds = nr_worlds
self._size = size
self._f_alive = f_alive
def __iter__(self):
self._current = 0
return self
def __next__(self):
if self._current < self._nr_worlds:
self._current += 1
return RandomWorld(self._size, self._f_alive)
else:
raise StopIteration
for world in RandomWorldGenerator(3, 5, 0.3):
print(world, end='\n\n')
```
Next, we define a function to perform a number of simulations, and gather statistics on the number of cells that are alive for each generation.
```
def compute_avg_live_cells(world_generator, max_gen):
nr_alive = np.zeros(max_gen + 1)
nr_worlds = 0
for world in world_generator:
nr_worlds += 1
world_runner = WorldRunner(world, max_gen, early_stopping=False)
world_runner.run()
for i, generation in enumerate(world_runner.history):
nr_alive[i] += np.sum(generation)
return nr_alive/(nr_worlds*generation.shape[0]*generation.shape[1])
nr_generations = 100
stats = compute_avg_live_cells(RandomWorldGenerator(nr_worlds=50, size=20, f_alive=0.1), max_gen=nr_generations)
_ = plt.plot(range(nr_generations + 1), stats)
```
A second experiment is to check, over all initial world configurations consisting of a $p \times p$ patch, where $p \le n$ and $n$ is the size of the world, which cycle lengths occur. For a $p \times p$ patch, there are $2^{p^2}$ initial configurations.
```
class PatchGenerator:
    '''Iterator class over all worlds that are initialized from all combinations of
    alive/dead cells in a p by p patch, while all other cells are dead. The number of
    such worlds is 2^(p*p).'''
def __init__(self, size, patch_size):
        '''Initialize the iterator for a given patch size on a given board size
Parameters
----------
size : int
size of the world
patch_size : int
size of the patch, should be less than or equal to size
'''
if size < patch_size:
raise ValueError('patch size should be less or equal to world size')
self._size = size
self._patch_size = patch_size
self._patch_idx = None
def __iter__(self):
self._patch_idx = 0
return self
def _create_patch(self):
patch = np.empty((self._patch_size, self._patch_size))
for i in range(self._patch_size):
for j in range(self._patch_size):
patch[i, j] = 1 if self._patch_idx & (1 << (i*self._patch_size + j)) else 0
return patch
def __next__(self):
if self._patch_idx >= 2**(self._patch_size**2):
raise StopIteration
world = PatchWorld(self._size, self._create_patch())
self._patch_idx += 1
return world
patch_generator = PatchGenerator(3, 2)
for world in patch_generator:
print(world, end='\n\n')
def compute_cycle_count(world_generator, max_gen):
    '''Function to compute statistics on the number of worlds that lead
    to cycles of various lengths
    Parameters
    ----------
    world_generator : iterator
        Iterator that returns initialized worlds
    max_gen : int
        Maximum number of generations to simulate per world
    Returns
    -------
    collections.Counter
        count for each cycle length, for the number of worlds that
        contain only dead cells, and for worlds for which no cycle
        was detected.
'''
cycle_count = Counter()
nr_worlds = 0
for world in world_generator:
nr_worlds += 1
world_runner = WorldRunner(world, max_gen)
world_runner.run()
if world.nr_alive > 0:
if world_runner.has_cycle():
cycle_count[world_runner.cycle_length] += 1
else:
cycle_count['no cycle'] += 1
else:
cycle_count['dead'] += 1
return cycle_count
cycle_count = compute_cycle_count(PatchGenerator(5, 2), 10)
for cycle_length in cycle_count:
print(f'{cycle_length}: {cycle_count[cycle_length]}')
```
| true |
code
| 0.815049 | null | null | null | null |
|
# MNE
Open-source Python software for exploring, visualizing, and analyzing human neurophysiological data: MEG, EEG, sEEG, ECoG, and more.
<https://martinos.org/mne>
---
```
import numpy as np
!pip install mne
from mne.datasets import eegbci
from mne.io import concatenate_raws, read_raw_edf
subject = 1
runs = [6, 10, 14] # motor imagery: hands vs feet
raw_fnames = eegbci.load_data(subject, runs)
raw = concatenate_raws([read_raw_edf(f, preload=True) for f in raw_fnames])
```
---
Plot the data using
```python
raw.plot(start=..., duration=..., n_channels=..., scalings='auto')
```
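For example, with illustrative values:
```python
raw.plot(start=0, duration=10, n_channels=20, scalings='auto')
```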
---
```
# Apply band-pass filter
raw.filter(7., 30., fir_design='firwin', skip_by_annotation='edge')
```
---
### Divide into epochs
```
from mne import Epochs, pick_types, events_from_annotations
events, _ = events_from_annotations(raw, event_id=dict(T1=2, T2=3))
picks = pick_types(raw.info, meg=False, eeg=True, stim=False, eog=False,
exclude='bads')
```
Have a look at `events` and `picks`.
```
event_id = dict(hands=2, feet=3)
tmin, tmax = -1, 4
epochs = Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks,
baseline=None, preload=True)
```
---
Consider only 1 second for each epoch
```
epochs_design = epochs.copy().crop(tmin=1., tmax=2.)
```
---
Create a new variable `y` (**label**) from `events` (or from `epochs_design.events`)
`y`:
- 0: event T1
- 1: event T2
```
#y =...
```
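One possible sketch: the event codes here are 2 (T1) and 3 (T2), so subtracting 2 maps them to the labels 0 and 1:
```python
y = epochs_design.events[:, -1] - 2
```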
---
Get **data** from `epochs_design`, using the method `get_data()`
Have a look to the data, using `shape`
```
#X=...
X.shape
```
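A sketch of the missing line, using the method mentioned above:
```python
X = epochs_design.get_data()  # shape: (n_epochs, n_channels, n_times)
```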
----
# SCIKIT-LEARN
Machine learning in Python
<https://scikit-learn.org>
---
Split data and labels into random train and test subsets using
`train_test_split` from `sklearn.model_selection`.
Have a look at the data.
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)
X_test.shape
```
---
## Feature extraction:
**Common Spatial Pattern (CSP)**
- Zoltan J. Koles. The quantitative extraction and topographic mapping of the abnormal components in the clinical EEG. Electroencephalography and Clinical Neurophysiology, 79(6):440–447, December 1991.
- https://en.wikipedia.org/wiki/Common_spatial_pattern
```
from mne.decoding import CSP
csp = CSP(n_components=4, reg=None, log=True, norm_trace=False)
```
---
Use of **CSP**
- 'train' the decoder using the `fit()` method.
- transform the data using the `tranform()` method
Have a look at the data.
```
# csp.fit(...)
# X_train_csp=...
# X_test_csp=...
```
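A sketch of these steps:
```python
csp.fit(X_train, y_train)
X_train_csp = csp.transform(X_train)
X_test_csp = csp.transform(X_test)
X_train_csp.shape  # (n_epochs, n_components)
```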
---
Create a linear discriminant classifier
```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
lda = LinearDiscriminantAnalysis()
```
- Train the classifier using the `fit()` method
- Classify the test set using the `predict()`method
- Estimate accuracy
```
```
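A sketch of these steps, assuming the CSP features computed above:
```python
lda.fit(X_train_csp, y_train)
y_pred = lda.predict(X_test_csp)
print((y_pred == y_test).mean())  # accuracy on the test set
```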
---
Repeat the process using the `knn` classifier
```python
from sklearn.neighbors import KNeighborsClassifier
knn=KNeighborsClassifier(k)
```
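A sketch, with an arbitrary choice of `k`:
```python
k = 5
knn = KNeighborsClassifier(n_neighbors=k)
knn.fit(X_train_csp, y_train)
print(knn.score(X_test_csp, y_test))  # accuracy on the test set
```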
| true |
code
| 0.785226 | null | null | null | null |
|
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges).
# Challenge Notebook
## Problem: Given sorted arrays A, B, merge B into A in sorted order.
* [Constraints](#Constraints)
* [Test Cases](#Test-Cases)
* [Algorithm](#Algorithm)
* [Code](#Code)
* [Unit Test](#Unit-Test)
* [Solution Notebook](#Solution-Notebook)
## Constraints
* Does A have enough space for B?
* Yes
* Can the inputs have duplicate array items?
* Yes
* Can we assume the inputs are valid?
* No
* Do the inputs also include the actual size of A and B?
* Yes
* Can we assume this fits memory?
* Yes
## Test Cases
* A or B is None -> Exception
* index of last A or B < 0 -> Exception
* A or B is empty
* General case
* A = [1, 3, 5, 7, 9, None, None, None]
* B = [4, 5, 6]
* A = [1, 3, 4, 5, 5, 6, 7, 9]
## Algorithm
Refer to the [Solution Notebook](http://nbviewer.jupyter.org/github/donnemartin/interactive-coding-challenges/blob/master/sorting_searching/merge_into/merge_into_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.
## Code
```
class Array(object):
def merge_into(self, source, dest, source_end_index, dest_end_index):
result = []
if source is None or dest is None:
raise TypeError
if source_end_index < 0 or dest_end_index < 0:
raise ValueError
s_idx = 0
e_idx = 0
while s_idx < source_end_index and e_idx < dest_end_index:
if source[s_idx] < dest[e_idx]:
result.append(source[s_idx])
s_idx += 1
else:
result.append(dest[e_idx])
e_idx += 1
while s_idx < source_end_index:
result.append(source[s_idx])
s_idx += 1
while e_idx < dest_end_index:
result.append(dest[e_idx])
e_idx += 1
return result
```
## Unit Test
**The following unit test is expected to fail until you solve the challenge.**
```
# %load test_merge_into.py
import unittest
class TestArray(unittest.TestCase):
def test_merge_into(self):
array = Array()
self.assertRaises(TypeError, array.merge_into, None, None, None, None)
self.assertRaises(ValueError, array.merge_into, [1], [2], -1, -1)
a = [1, 2, 3]
self.assertEqual(array.merge_into(a, [], len(a), 0), [1, 2, 3])
a = [1, 2, 3]
self.assertEqual(array.merge_into(a, [], len(a), 0), [1, 2, 3])
a = [1, 3, 5, 7, 9, None, None, None]
b = [4, 5, 6]
expected = [1, 3, 4, 5, 5, 6, 7, 9]
self.assertEqual(array.merge_into(a, b, 5, len(b)), expected)
print('Success: test_merge_into')
def main():
test = TestArray()
test.test_merge_into()
if __name__ == '__main__':
main()
```
## Solution Notebook
Review the [Solution Notebook]() for a discussion on algorithms and code solutions.
| true |
code
| 0.477493 | null | null | null | null |
|
```
import csv
import numpy as np
from google.colab import drive
import pandas as pd
import json
import ast
import matplotlib.pyplot as plt
import collections
```
# Main Functions
```
def reverse_counts(counts, size=20):
"""
Reverses the keys of a dictionary (i.e. the characters in all the keys are reversed)
Parameters:
counts (dict): dictionary containing the measurement results
size (int): the number of qubits measured
Returns:
reverse_counts (dict): dictionary with keys in reverse order
"""
intermediate = {}
for key, value in counts.items():
rev_key = ""
for i in range(size):
rev_key = rev_key + key[size-i-1]
intermediate[key] = rev_key
reverse_counts = dict([(intermediate.get(k), v) for k, v in counts.items()])
return reverse_counts
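# Quick sanity check on illustrative values (not from a real job):
# each bitstring key is reversed, while the shot counts are unchanged
example_counts = {'01': 5, '10': 3}
print(reverse_counts(example_counts, size=2))  # {'10': 5, '01': 3}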
def get_delegated_OTP_keys(permutation, x_key, z_key, num_qubits=14, syndrome_cnots = [[14, 0], [14, 2], [14, 4], [14, 6], [15, 1], [15, 2], [15, 5], [15, 6], [16, 3], [16, 4], [16, 5], [16, 6], [17, 7], [17, 9], [17, 11], [17, 13], [18, 8], [18, 9], [18, 12], [18, 13], [19, 10], [19, 11], [19, 12], [19, 13]]):
"""
Get delegated, post-processed, classical one-time pad keys for a program
Parameters:
permutation ([int]): permutation key
x_key ([int]): X part of the non-delegated one-time pad key
z_key ([int]): Z part of the non-delegated one-time pad key
num_qubits (int): number of data qubits
syndrome_cnots ([[int,int]]): all cnot gates used to derive error syndromes
Returns:
delegated_x_key ([int]): classically processed and delegated X part of one-time pad key
delegated_z_key ([int]): classically processed and delegated Z part of one-time pad key
"""
permuted_cnots = []
for gate in syndrome_cnots:
permuted_cnots.append([gate[0],permutation.index(gate[1])])
new_x_key = x_key[:]
new_z_key = z_key[:]
    for cnot in permuted_cnots:
        # Propagate the Pauli key through each CNOT: an X on the control
        # spreads to the target, and a Z on the target spreads to the control
        a = new_x_key[cnot[0]]
        b = new_z_key[cnot[0]]
        c = new_x_key[cnot[1]]
        d = new_z_key[cnot[1]]
        new_x_key[cnot[0]] = a
        new_z_key[cnot[0]] = b + d
        new_x_key[cnot[1]] = a + c
        new_z_key[cnot[1]] = d
    # Hadamard operator delegation: H conjugation swaps the X and Z key parts
    for i in range(num_qubits, num_qubits + int(num_qubits/7*3)):
        new_x_key[i], new_z_key[i] = new_z_key[i], new_x_key[i]
delegated_x_key = [i%2 for i in new_x_key]
delegated_z_key = [i%2 for i in new_z_key]
return delegated_x_key, delegated_z_key
def apply_OTP_and_unpermute(counts, permutation, x_key, z_key, num_qubits=14):
"""
Classical processing of quantum measurement outcomes
Includes applying the delegated one-time pad and unpermuting the circuit
Parameters:
counts (dict): all the measurement outcomes for a job
permutation([int]): permutation key
x_key ([int]): x gates part of one-time pad key
z_key ([int]): z gates part of one-time pad key
num_qubits (int): number of data qubits
Returns:
unpermuted_steane(dict): classically post processed measurement outcomes
"""
processed_results = {}
for key, value in counts.items():
new_key = ""
for i in range(num_qubits + int(num_qubits/7*3)):
val = int(key[i])
k2_val = int(x_key[i])
if k2_val == 1 and val == 0:
new_key = new_key + "1"
elif k2_val == 1 and val == 1:
new_key = new_key + "0"
else:
new_key = new_key + str(val)
processed_results[new_key] = value
unpermuted_steane = {}
for key, value in processed_results.items():
new_key = ""
for i in range(num_qubits):
new_key = new_key+ key[permutation.index(i)]
syndrome_keys=""
for j in range(int(num_qubits/7*3)):
syndrome_keys = syndrome_keys + key[-int(int(num_qubits/7*3)-j)]
new_key = new_key + syndrome_keys
unpermuted_steane[new_key] = value
return unpermuted_steane
def check_correctness(counts, codeword_combos, syndrome = '000000', num_shots = 8192, num_qubits = 14):
"""
Gets the correct measurement outcome rates of a job
Parameters:
counts (dict): all processed measurement outcomes
codeword_combos ([str]): all codewords
syndrome (str): the correct no error syndrome
num_shots (int): the number of times the computation was run
num_qubits (int): the number of data qubits
Returns:
bit_rate (float): rate of measurement outcomes that have no bit flips (i.e. no bit error)
phase_rate (float): rate of measurement outcomes that have no phase flips (i.e. no phase error)
all_rate (float): rate of measurement outcomes that have no bit or phase flips (i.e. no bit and phase error)
"""
bit_count = 0
phase_count = 0
all_count = 0
    for key, val in counts.items():
        # Weight each outcome by its shot count (val) rather than counting
        # distinct bitstrings, so the rates are true fractions of num_shots
        if key[:num_qubits] in codeword_combos:
            bit_count = bit_count + val
            if key[num_qubits:] == syndrome:
                all_count = all_count + val
        if key[num_qubits:] == syndrome:
            phase_count = phase_count + val
bit_rate = bit_count/num_shots
phase_rate = phase_count/num_shots
all_rate = all_count/num_shots
return bit_rate, phase_rate, all_rate
def get_average_rates(file_name, num_tests = 5, num_iterations= 10):
"""
Gets the average true positive and false positive rates for the different tests
For tests where the challenge input is equal to the password, the average true positive rate is found.
In all other cases, the average false positive is found.
Parameters:
file_name (str): the name of the file in which the rates for the individual rates were saved
num_tests (int): the number of different tests performed
num_iterations (int): the number of iterations each test was performed
Returns:
new_df (panda's DataFrame): contains the averages of all the tests
"""
    try:
        df = pd.read_csv(file_name)
    except Exception as err:
        print("Error: ", err)
        raise
new_df = pd.DataFrame()
for i in range(num_tests):
avgs = df[i*num_iterations:(i+1)*num_iterations].mean()
new_df[str(i)] = avgs
return new_df
def get_average_rates_from_random_tests(file_name, start_index, end_index):
"""
Gets the average true positive and false positive rates for tests that sample random challenge inputs
For tests where the challenge input is equal to the password, the average true positive rate is found.
In all other cases, the average false positive is found.
Parameters:
file_name (str): the name of the file in which the rates for the individual rates were saved
start_index (int): the location of where random tests starts according to data ordered in file_name
end_index (int): the location of where random tests ends according to data ordered in file_name
Returns:
new_df (panda's DataFrame): contains the averages of the random tests
"""
    try:
        df = pd.read_csv(file_name)
    except Exception as err:
        print("Error: ", err)
        raise
new_df = pd.DataFrame()
random_avgs = df[start_index:end_index].groupby(['is_p']).get_group(True).mean()
new_df["True Positive"] = random_avgs
random_avgs = df[start_index:end_index].groupby(['is_p']).get_group(False).mean()
new_df["False Positive"] = random_avgs
return new_df
```
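As a quick sanity check of the helpers above, here is `reverse_counts` applied to a toy 4-qubit counts dictionary (hypothetical values):
```
# Toy example using the reverse_counts helper defined above
toy_counts = {'0011': 5, '1000': 3}
print(reverse_counts(toy_counts, size=4))  # {'1100': 5, '0001': 3}
```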
# User Defined Values
```
drive.mount('/content/drive')
# set location for retrieving all the measurement outcome results and information
info_file = "/content/drive/My Drive/res/stripped_info.csv"
# set location for saving all the individual calculated error rates (i.e. bit, phase, and both bit and phase combined errors)
save_file = "/content/drive/My Drive/res/individual_error_rates.csv"
df = pd.read_csv(info_file)
all_key1 = df.challenge_key_1.to_list()
all_key2 = df.challenge_key_2.to_list()
is_point = df.is_point.to_list()
fields = ['#', 'is_p','no_bit_flip_percentage', 'no_phase_flip_percentage', 'no_error_percentage']
stats = pd.DataFrame(columns=fields)
first_steane_codewords = ['0000000','1010101','0110011','1100110','0001111','1011010','0111100','1101001']
second_steane_codewords = ['0000000', '1110000', '1001100', '0111100', '0101010', '1011010', '1100110', '0010110', '1101001', '0011001', '0100101', '1010101', '1000011', '0110011', '0001111', '1111111']
# the codewords of our Steane encoded program
codeword_combos = [x+y for x in first_steane_codewords for y in second_steane_codewords]
```
# Calculate Error Rates
## Option 1: Calculating Rates From File
Calculate the true positive and false positive rates for all test results from a file containing all the raw counts.
```
# set location of data file containing a list of raw counts only
# format of file: "["{'00000000000000000000':8192}"]"
data = "/content/drive/My Drive/res/secondary/raw_counts_data.txt"
raw_data = ""
with open(data) as f:
raw_data = f.read()
raw_data = ast.literal_eval(raw_data)
index = 0
for x in raw_data:
raw = ast.literal_eval(x)
counts = reverse_counts(raw)
key1 = ast.literal_eval(all_key1[index])
key2 = ast.literal_eval(all_key2[index])
xkey = key2[0] + [0]*6
zkey = key2[1] + [0]*6
x_key, z_key = get_delegated_OTP_keys(key1, xkey, zkey)
processed_counts = apply_OTP_and_unpermute(counts, key1, x_key, z_key)
    bit, phase, all_rate = check_correctness(processed_counts, codeword_combos)
    print(is_point[index], bit, phase, all_rate)
    stats.loc[index] = [index, is_point[index], bit, phase, all_rate]
    index = index + 1
stats.to_csv(save_file)
print(stats)
```
## Option 2: Calculating Rates from A Single Set of Measurement Outcomes
Calculate the true positive and false positive rates for all test results from a single job's measurement outcomes
```
# set the index of the job
index = 10
# set a single job's measurement counts
raw = {}
counts = reverse_counts(raw)
key1 = ast.literal_eval(all_key1[index])
key2 = ast.literal_eval(all_key2[index])
xkey = key2[0] + [0]*6
zkey = key2[1] + [0]*6
del_x_key, del_z_key = get_delegated_OTP_keys(key1, xkey, zkey)
processed_counts = apply_OTP_and_unpermute(counts, key1, del_x_key, del_z_key)
bit, phase, all_rate = check_correctness(processed_counts, codeword_combos)
# stats.loc[index] = [index, is_point[index], bit, phase, all_rate]
print(index, is_point[index], bit, phase, all_rate)
```
# Calculate Average Error Rates
```
df = get_average_rates(save_file, num_tests = 5, num_iterations= 10)
print(df)
df = get_average_rates_from_random_tests(save_file, 50, 60)
print(df)
```
# Graphing results example
```
# no error phase syndrome
phase_syndrome = '000000'
# set the post-processed measurement outcomes
counts_dict = {}
num_qubits = 14
# number of syndrome bits at the end of each key
num_syndrome = len(phase_syndrome)
d = collections.OrderedDict(sorted(counts_dict.items()))
count = 0
# set color of all the wrong measurement outcomes
colors = ['lightgray'] * len(d)
for key, val in d.items():
    if phase_syndrome == key[-num_syndrome:]:
        if key[:num_qubits] in codeword_combos:
            # set color of all the right measurement outcomes
            colors[count] = "black"
    count = count + 1
x_vals = list(d.keys())
y_vals = list(d.values())
plt.figure(figsize=(20,14))
for i in range(len(d)):
plt.bar(x_vals[i], y_vals[i], color=colors[i])
plt.xticks(fontsize=18, rotation=90)
plt.yticks(fontsize=18)
plt.xlabel('Measurement Values', fontsize=25)
plt.ylabel('Probability', fontsize=25)
plt.title('Quantum Computer without Err Mit', fontsize=30)
plt.show()
```
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
```
# Images are numpy arrays
Images are represented in ``scikit-image`` using standard ``numpy`` arrays. This allows maximum inter-operability with other libraries in the scientific Python ecosystem, such as ``matplotlib`` and ``scipy``.
Let's see how to build a grayscale image as a 2D array:
```
import numpy as np
from matplotlib import pyplot as plt
random_image = np.random.random([500, 500])
plt.imshow(random_image, cmap='gray')
plt.colorbar();
```
The same holds for "real-world" images:
```
from skimage import data
coins = data.coins()
print('Type:', type(coins))
print('dtype:', coins.dtype)
print('shape:', coins.shape)
plt.imshow(coins, cmap='gray');
```
A color image is a 3D array, where the last dimension has size 3 and represents the red, green, and blue channels:
```
cat = data.chelsea()
print("Shape:", cat.shape)
print("Values min/max:", cat.min(), cat.max())
plt.imshow(cat);
```
These are *just NumPy arrays*. E.g., we can make a red square by using standard array slicing and manipulation:
```
cat[10:110, 10:110, :] = [255, 0, 0] # [red, green, blue]
plt.imshow(cat);
```
Images can also include transparent regions by adding a 4th channel, called an *alpha layer*.
### Other shapes, and their (possible) meanings
|Image type|Coordinates|
|:---|:---|
|2D grayscale|(row, column)|
|2D multichannel|(row, column, channel)|
|3D grayscale (or volumetric) |(plane, row, column)|
|3D multichannel|(plane, row, column, channel)|
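For example, the array shape alone already tells you which kind of image you are holding:
```
vol = np.zeros((10, 200, 200))         # 3D grayscale: (plane, row, column)
vol_rgb = np.zeros((10, 200, 200, 3))  # 3D multichannel: (plane, row, column, channel)
print(vol.ndim, vol_rgb.ndim)          # 3 4
```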
## Displaying images using matplotlib
```
from skimage import data
img0 = data.chelsea()
img1 = data.rocket()
import matplotlib.pyplot as plt
f, (ax0, ax1) = plt.subplots(1, 2, figsize=(20, 10))
ax0.imshow(img0)
ax0.set_title('Cat', fontsize=18)
ax0.axis('off')
ax1.imshow(img1)
ax1.set_title('Rocket', fontsize=18)
ax1.set_xlabel(r'Launching position $\alpha=320$')
ax1.vlines([202, 300], 0, img1.shape[0], colors='magenta', linewidth=3, label='Side tower position')
ax1.plot([168, 190, 200], [400, 200, 300], color='white', linestyle='--', label='Side angle')
ax1.legend();
```
For more on plotting, see the [Matplotlib documentation](https://matplotlib.org/gallery/index.html#images-contours-and-fields) and [pyplot API](https://matplotlib.org/api/pyplot_summary.html).
## Data types and image values
In literature, one finds different conventions for representing image values:
```
0 - 255 where 0 is black, 255 is white
0 - 1 where 0 is black, 1 is white
```
``scikit-image`` supports both conventions--the choice is determined by the
data-type of the array.
E.g., here, I generate two valid images:
```
linear0 = np.linspace(0, 1, 2500).reshape((50, 50))
linear1 = np.linspace(0, 255, 2500).reshape((50, 50)).astype(np.uint8)
print("Linear0:", linear0.dtype, linear0.min(), linear0.max())
print("Linear1:", linear1.dtype, linear1.min(), linear1.max())
fig, (ax0, ax1) = plt.subplots(1, 2, figsize=(15, 15))
ax0.imshow(linear0, cmap='gray')
ax1.imshow(linear1, cmap='gray');
```
The library is designed in such a way that any data-type is allowed as input,
as long as the range is correct (0-1 for floating point images, 0-255 for unsigned bytes,
0-65535 for unsigned 16-bit integers).
You can convert images between different representations by using ``img_as_float``, ``img_as_ubyte``, etc.:
```
from skimage import img_as_float, img_as_ubyte
image = data.chelsea()
image_ubyte = img_as_ubyte(image)
image_float = img_as_float(image)
print("type, min, max:", image_ubyte.dtype, image_ubyte.min(), image_ubyte.max())
print("type, min, max:", image_float.dtype, image_float.min(), image_float.max())
print()
print("231/255 =", 231/255.)
```
Your code would then typically look like this:
```python
def my_function(any_image):
float_image = img_as_float(any_image)
# Proceed, knowing image is in [0, 1]
```
We recommend using the floating point representation, given that
``scikit-image`` mostly uses that format internally.
## Image I/O
Mostly, we won't be using input images from the scikit-image example data sets. Those images are typically stored in JPEG or PNG format. Since scikit-image operates on NumPy arrays, *any* image reader library that provides arrays will do. Options include imageio, matplotlib, pillow, etc.
scikit-image conveniently wraps many of these in the `io` submodule, and will use whichever of the libraries mentioned above are installed:
```
from skimage import io
image = io.imread('../images/balloon.jpg')
print(type(image))
print(image.dtype)
print(image.shape)
print(image.min(), image.max())
plt.imshow(image);
```
We also have the ability to load multiple images, or multi-layer TIFF images:
```
import os
ic = io.ImageCollection(os.pathsep.join(['../images/*.png', '../images/*.jpg']))
print('Type:', type(ic))
ic.files
import os
f, axes = plt.subplots(nrows=3, ncols=len(ic) // 3 + 1, figsize=(20, 5))
# subplots returns the figure and an array of axes
# we use `axes.ravel()` to turn these into a list
axes = axes.ravel()
for ax in axes:
ax.axis('off')
for i, image in enumerate(ic):
axes[i].imshow(image, cmap='gray')
axes[i].set_title(os.path.basename(ic.files[i]))
plt.tight_layout()
```
### Aside: `enumerate`
`enumerate` gives us each element in a container, along with its position.
```
animals = ['cat', 'dog', 'leopard']
for i, animal in enumerate(animals):
print('The animal in position {} is {}'.format(i, animal))
```
## <span class="exercize">Exercise: draw the letter H</span>
Define a function that takes as input an RGB image and a pair of coordinates (row, column), and returns a copy with a green letter H overlaid at those coordinates. The coordinates point to the top-left corner of the H.
The arms and strut of the H should have a width of 3 pixels, and the H itself should have a height of 24 pixels and width of 20 pixels.
Start with the following template:
```
def draw_H(image, coords, color=(0, 255, 0)):
out = image.copy()
out = ... # FIXME
return out
```
Test your function like so:
```
cat = data.chelsea()
cat_H = draw_H(cat, (50, -50))
plt.imshow(cat_H);
```
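If you get stuck, here is one possible solution (the exact pixel offsets are a design choice, as long as the arms and strut are 3 pixels wide and the H measures 24 × 20 pixels):
```
# One possible solution: two 24x3 vertical arms and a 3x20 horizontal strut
def draw_H(image, coords, color=(0, 255, 0)):
    out = image.copy()
    r, c = coords
    out[r:r + 24, c:c + 3] = color         # left arm
    out[r:r + 24, c + 17:c + 20] = color   # right arm
    out[r + 11:r + 14, c:c + 20] = color   # strut (roughly centered)
    return out

cat_H = draw_H(data.chelsea(), (50, -50))
plt.imshow(cat_H);
```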
## <span class="exercize">Exercise: visualizing RGB channels</span>
Display the different color channels of the image along (each as a gray-scale image). Start with the following template:
```
# --- read in the image ---
image = plt.imread('../images/Bells-Beach.jpg')
# --- assign each color channel to a different variable ---
r = ... # FIXME: grab channel from image...
g = ... # FIXME
b = ... # FIXME
# --- display the image and r, g, b channels ---
f, axes = plt.subplots(1, 4, figsize=(16, 5))
for ax in axes:
ax.axis('off')
(ax_r, ax_g, ax_b, ax_color) = axes
ax_r.imshow(r, cmap='gray')
ax_r.set_title('red channel')
ax_g.imshow(g, cmap='gray')
ax_g.set_title('green channel')
ax_b.imshow(b, cmap='gray')
ax_b.set_title('blue channel')
# --- Here, we stack the R, G, and B layers again
# to form a color image ---
ax_color.imshow(np.stack([r, g, b], axis=2))
ax_color.set_title('all channels');
```
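One possible way to fill in the FIXMEs above, using the fact that the color channels live along the last axis:
```
# Grab each channel by indexing the last axis
r, g, b = image[..., 0], image[..., 1], image[..., 2]
```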
Now, take a look at the following R, G, and B channels. How would their combination look? (Write some code to confirm your intuition.)
```
from skimage import draw
red = np.zeros((300, 300))
green = np.zeros((300, 300))
blue = np.zeros((300, 300))
r, c = draw.circle(100, 100, 100)
red[r, c] = 1
r, c = draw.circle(100, 200, 100)
green[r, c] = 1
r, c = draw.circle(200, 150, 100)
blue[r, c] = 1
f, axes = plt.subplots(1, 3)
for (ax, channel) in zip(axes, [red, green, blue]):
ax.imshow(channel, cmap='gray')
ax.axis('off')
# Solution
```
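To confirm your intuition, you can stack the three channels into one color image; the overlaps mix additively (red + green = yellow, all three = white):
```
# Stack the channels to see the additive color mixing
plt.imshow(np.stack([red, green, blue], axis=2));
```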
## Exercise: Convert to grayscale ("black and white")
The *relative luminance* of an image is the intensity of light coming from each point. Different colors contribute differently to the luminance: it's very hard to have a bright, pure blue, for example. So, starting from an RGB image, the luminance is given by:
$$
Y = 0.2126R + 0.7152G + 0.0722B
$$
Use Python's matrix multiplication, `@`, to convert an RGB image to a grayscale luminance image according to the formula above.
Compare your results to that obtained with `skimage.color.rgb2gray`.
Change the coefficients to 1/3 (i.e., take the mean of the red, green, and blue channels, to see how that approach compares with `rgb2gray`).
```
from skimage import color, io, img_as_float
image = img_as_float(io.imread('../images/balloon.jpg'))
gray = color.rgb2gray(image)
my_gray = ... # FIXME
# --- display the results ---
f, (ax0, ax1) = plt.subplots(1, 2, figsize=(10, 6))
ax0.imshow(gray, cmap='gray')
ax0.set_title('skimage.color.rgb2gray')
ax1.imshow(my_gray, cmap='gray')
ax1.set_title('my rgb2gray')
```
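One possible implementation of `my_gray` dots each RGB pixel with the coefficient vector using `@`:
```
# Dot every (R, G, B) pixel with the luminance coefficients
my_gray = image @ np.array([0.2126, 0.7152, 0.0722])
# For the comparison in the exercise, equal weights give the channel mean:
mean_gray = image @ np.array([1 / 3, 1 / 3, 1 / 3])
```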
## Bonus
If you would like to watch a stand-up comedy act about spreadsheets, see Matt Parker's routine from the Festival of the Spoken Nerd DVD. The video is 13 minutes long.
You can watch it here: https://www.youtube.com/watch?v=UBX2QQHlQ_I
# Goals
### 1. Learn to implement Inception A Block using monk
- Monk's Keras
- Monk's Pytorch
- Monk's Mxnet
### 2. Use network Monk's debugger to create complex blocks
### 3. Understand how syntactically different it is to implement the same using
- Traditional Keras
- Traditional Pytorch
- Traditional Mxnet
# Inception A Block
- Note: The block structure can have variations too, this is just an example
```
from IPython.display import Image
Image(filename='imgs/inception_a.png')
```
# Table of contents
[1. Install Monk](#1)
[2. Block basic Information](#2)
- [2.1) Visual structure](#2-1)
- [2.2) Layers in Branches](#2-2)
[3) Creating Block using monk visual debugger](#3)
- [3.0) Create the base sub-block](#3-0)
- [3.1) Create the first branch](#3-1)
- [3.2) Create the second branch](#3-2)
- [3.3) Create the third branch](#3-3)
- [3.4) Create the fourth branch](#3-4)
- [3.5) Merge the branches](#3-5)
- [3.6) Debug the merged network](#3-6)
- [3.7) Compile the network](#3-7)
- [3.8) Run data through the network](#3-8)
- [3.9) Visualize the network](#3-9)
[4) Creating Block Using MONK one line API call](#4)
- [Mxnet Backend](#4-1)
- [Pytorch Backend](#4-2)
- [Keras Backend](#4-3)
[5) Appendix](#5)
- [Study Material](#5-1)
- [Creating block using traditional Mxnet](#5-2)
- [Creating block using traditional Pytorch](#5-3)
- [Creating block using traditional Keras](#5-4)
<a id='1'></a>
# Install Monk
- git clone https://github.com/Tessellate-Imaging/monk_v1.git
- cd monk_v1/installation && pip install -r requirements_cu9.txt
- (Select the requirements file as per OS and CUDA version)
```
!git clone https://github.com/Tessellate-Imaging/monk_v1.git
```
# Imports
```
# Common
import numpy as np
import math
import netron
from collections import OrderedDict
from functools import partial
# Monk
import os
import sys
sys.path.append("monk_v1/monk/");
```
<a id='2'></a>
# Block Information
<a id='2-1'></a>
## Visual structure
```
from IPython.display import Image
Image(filename='imgs/inception_a.png')
```
<a id='2-2'></a>
## Layers in Branches
- Number of branches: 4
- Branch 1
    - conv_1x1 -> batchnorm -> relu
- Branch 2
- conv_1x1 -> batchnorm -> relu -> conv_5x5 -> batchnorm -> relu
- Branch 3
- conv_1x1 -> batchnorm -> relu -> conv_3x3 -> batchnorm -> relu -> conv_3x3 -> batchnorm -> relu
- Branch 4
- pooling -> conv_1x1 -> batchnorm -> relu
- Branches merged using
- Concatenation
(See Appendix to read blogs on inception networks)
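Note: since the branch outputs are concatenated along the channel axis (and, with suitable padding, every branch preserves the spatial size), the block's output channel count is simply the sum of each branch's final conv channels:
```
# Channel bookkeeping for the block above (pool branch uses 32 channels here)
branch_out_channels = [64, 64, 96, 32]
print("block output channels:", sum(branch_out_channels))  # 256
```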
<a id='3'></a>
# Creating Block using monk debugger
```
# Imports and setup a project
# To use pytorch backend - replace gluon_prototype with pytorch_prototype
# To use keras backend - replace gluon_prototype with keras_prototype
from gluon_prototype import prototype
# Create a sample project
gtf = prototype(verbose=1);
gtf.Prototype("sample-project-1", "sample-experiment-1");
```
<a id='3-0'></a>
## Create the Base block
```
# Create base Convolution->Batchnorm->ReLU block
def conv_bn_relu_block(output_channels=64, kernel_size=1, stride=1):
network = [];
network.append(gtf.convolution(output_channels=output_channels,
kernel_size=kernel_size,
stride=stride));
network.append(gtf.batch_normalization());
network.append(gtf.relu());
return network;
# Debug the block
branch_1 = conv_bn_relu_block();
network = [];
network.append(branch_1);
gtf.debug_custom_model_design(network);
```
<a id='3-1'></a>
## Create the first branch
```
def first_branch():
network = [];
network.append(conv_bn_relu_block(output_channels=64, kernel_size=1))
return network;
# Debug the branch
branch_1 = first_branch();
network = [];
network.append(branch_1);
gtf.debug_custom_model_design(network);
```
<a id='3-2'></a>
## Create the second branch
```
def second_branch():
network = [];
network.append(conv_bn_relu_block(output_channels=48, kernel_size=1));
network.append(conv_bn_relu_block(output_channels=64, kernel_size=5));
return network;
# Debug the branch
branch_2 = second_branch()
network = [];
network.append(branch_2);
gtf.debug_custom_model_design(network);
```
<a id='3-3'></a>
## Create the Third branch
```
def third_branch():
network = [];
network.append(conv_bn_relu_block(output_channels=64, kernel_size=1));
network.append(conv_bn_relu_block(output_channels=96, kernel_size=3));
network.append(conv_bn_relu_block(output_channels=96, kernel_size=3));
return network;
# Debug the branch
branch_3 = third_branch()
network = [];
network.append(branch_3);
gtf.debug_custom_model_design(network);
```
<a id='3-4'></a>
## Create the Fourth branch
```
def fourth_branch(pooling_branch_channels=32, pool_type="avg"):
network = [];
if(pool_type=="avg"):
network.append(gtf.average_pooling(kernel_size=3, stride=1, padding=1));
else:
network.append(gtf.max_pooling(kernel_size=3, stride=1, padding=1));
network.append(conv_bn_relu_block(output_channels=pooling_branch_channels, kernel_size=1));
return network;
# Debug the branch
branch_4 = fourth_branch()
network = [];
network.append(branch_4);
gtf.debug_custom_model_design(network);
```
<a id='3-5'></a>
## Merge the branches
```
def final_block(pooling_branch_channels=32, pool_type="avg"):
network = [];
#Create subnetwork and add branches
subnetwork = [];
branch_1 = first_branch()
branch_2 = second_branch()
branch_3 = third_branch()
branch_4 = fourth_branch(pooling_branch_channels=pooling_branch_channels,
pool_type=pool_type)
subnetwork.append(branch_1);
subnetwork.append(branch_2);
subnetwork.append(branch_3);
subnetwork.append(branch_4);
# Add merging element
subnetwork.append(gtf.concatenate());
# Add the subnetwork
network.append(subnetwork);
return network;
```
<a id='3-6'></a>
## Debug the merged network
```
final = final_block(pooling_branch_channels=32, pool_type="avg")
network = [];
network.append(final);
gtf.debug_custom_model_design(network);
```
<a id='3-7'></a>
## Compile the network
```
gtf.Compile_Network(network, data_shape=(3, 224, 224), use_gpu=False);
```
<a id='3-8'></a>
## Run data through the network
```
import mxnet as mx
x = np.zeros((1, 3, 224, 224));
x = mx.nd.array(x);
y = gtf.system_dict["local"]["model"].forward(x);
print(x.shape, y.shape)
```
<a id='3-9'></a>
## Visualize network using netron
```
gtf.Visualize_With_Netron(data_shape=(3, 224, 224))
```
<a id='4'></a>
# Creating the Block Using MONK One-Line API Call
<a id='4-1'></a>
## Mxnet backend
```
from gluon_prototype import prototype
gtf = prototype(verbose=1);
gtf.Prototype("sample-project-1", "sample-experiment-1");
network = [];
# Single line addition of blocks
network.append(gtf.inception_a_block(pooling_branch_channels=32, pool_type="avg"));
gtf.Compile_Network(network, data_shape=(3, 224, 224), use_gpu=False);
```
<a id='4-2'></a>
## Pytorch backend
- Only the import changes
```
#Change gluon_prototype to pytorch_prototype
from pytorch_prototype import prototype
gtf = prototype(verbose=1);
gtf.Prototype("sample-project-1", "sample-experiment-1");
network = [];
# Single line addition of blocks
network.append(gtf.inception_a_block(pooling_branch_channels=32, pool_type="avg"));
gtf.Compile_Network(network, data_shape=(3, 224, 224), use_gpu=False);
```
<a id='4-3'></a>
## Keras backend
- Only the import changes
```
#Change gluon_prototype to keras_prototype
from keras_prototype import prototype
gtf = prototype(verbose=1);
gtf.Prototype("sample-project-1", "sample-experiment-1");
network = [];
# Single line addition of blocks
network.append(gtf.inception_a_block(pooling_branch_channels=32, pool_type="avg"));
gtf.Compile_Network(network, data_shape=(3, 224, 224), use_gpu=False);
```
<a id='5'></a>
# Appendix
<a id='5-1'></a>
## Study links
- https://medium.com/@sh.tsang/review-inception-v3-1st-runner-up-image-classification-in-ilsvrc-2015-17915421f77c
- https://www.analyticsvidhya.com/blog/2018/10/understanding-inception-network-from-scratch/
- https://software.intel.com/en-us/articles/inception-v3-deep-convolutional-architecture-for-classifying-acute-myeloidlymphoblastic
- https://codelabs.developers.google.com/codelabs/cpb102-txf-learning/index.html#0
- https://cloud.google.com/tpu/docs/inception-v3-advanced
<a id='5-2'></a>
## Creating block using traditional Mxnet
- Code credits - https://mxnet.incubator.apache.org/
```
# Traditional-Mxnet-gluon
import mxnet as mx
from mxnet.gluon import nn
from mxnet.gluon.nn import HybridBlock, BatchNorm
from mxnet.gluon.contrib.nn import HybridConcurrent, Identity
from mxnet import gluon, init, nd
def _make_basic_conv(norm_layer=BatchNorm, norm_kwargs=None, **kwargs):
out = nn.HybridSequential(prefix='')
out.add(nn.Conv2D(use_bias=False, **kwargs))
out.add(norm_layer(epsilon=0.001, **({} if norm_kwargs is None else norm_kwargs)))
out.add(nn.Activation('relu'))
return out
def _make_branch(use_pool, norm_layer, norm_kwargs, *conv_settings):
out = nn.HybridSequential(prefix='')
if use_pool == 'avg':
out.add(nn.AvgPool2D(pool_size=3, strides=1, padding=1))
elif use_pool == 'max':
out.add(nn.MaxPool2D(pool_size=3, strides=2))
setting_names = ['channels', 'kernel_size', 'strides', 'padding']
for setting in conv_settings:
kwargs = {}
for i, value in enumerate(setting):
if value is not None:
kwargs[setting_names[i]] = value
out.add(_make_basic_conv(norm_layer, norm_kwargs, **kwargs))
return out
def make_A(pool_features, prefix=None, norm_layer=BatchNorm, norm_kwargs=None):
out = HybridConcurrent(axis=1, prefix=prefix)
with out.name_scope():
out.add(_make_branch(None, norm_layer, norm_kwargs,
(64, 1, None, None)))
out.add(_make_branch(None, norm_layer, norm_kwargs,
(48, 1, None, None),
(64, 5, None, 2)))
out.add(_make_branch(None, norm_layer, norm_kwargs,
(64, 1, None, None),
(96, 3, None, 1),
(96, 3, None, 1)))
out.add(_make_branch('avg', norm_layer, norm_kwargs,
(pool_features, 1, None, None)))
return out
# Invoke the block
block = make_A(32)
# Initialize network and load block on machine
ctx = [mx.cpu()];
block.initialize(init.Xavier(), ctx = ctx);
block.collect_params().reset_ctx(ctx)
block.hybridize()
# Run data through network
x = np.zeros((1, 64, 224, 224));
x = mx.nd.array(x);
y = block.forward(x);
print(x.shape, y.shape)
# Export Model to Load on Netron
block.export("final", epoch=0);
netron.start("final-symbol.json", port=8082)
```
<a id='5-3'></a>
## Creating block using traditional Pytorch
- Code credits - https://pytorch.org/
```
# Traiditional-Pytorch
import torch
from torch import nn
from torch.jit.annotations import List
import torch.nn.functional as F
class BasicConv2d(nn.Module):
def __init__(self, in_channels, out_channels, **kwargs):
super(BasicConv2d, self).__init__()
self.conv = nn.Conv2d(in_channels, out_channels, bias=False, **kwargs)
self.bn = nn.BatchNorm2d(out_channels, eps=0.001)
def forward(self, x):
x = self.conv(x)
x = self.bn(x)
return F.relu(x, inplace=True)
class InceptionA(nn.Module):
def __init__(self, in_channels, pool_features):
super(InceptionA, self).__init__()
self.branch1x1 = BasicConv2d(in_channels, 64, kernel_size=1)
self.branch5x5_1 = BasicConv2d(in_channels, 48, kernel_size=1)
self.branch5x5_2 = BasicConv2d(48, 64, kernel_size=5, padding=2)
self.branch3x3dbl_1 = BasicConv2d(in_channels, 64, kernel_size=1)
self.branch3x3dbl_2 = BasicConv2d(64, 96, kernel_size=3, padding=1)
self.branch3x3dbl_3 = BasicConv2d(96, 96, kernel_size=3, padding=1)
self.branch_pool = BasicConv2d(in_channels, pool_features, kernel_size=1)
def forward(self, x):
branch1x1 = self.branch1x1(x)
branch5x5 = self.branch5x5_1(x)
branch5x5 = self.branch5x5_2(branch5x5)
branch3x3dbl = self.branch3x3dbl_1(x)
branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl)
branch3x3dbl = self.branch3x3dbl_3(branch3x3dbl)
branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
branch_pool = self.branch_pool(branch_pool)
outputs = [branch1x1, branch5x5, branch3x3dbl, branch_pool]
return torch.cat(outputs, 1)
# Invoke the block
block = InceptionA(3, 32);
# Initialize network and load block on machine
layers = []
layers.append(block);
net = nn.Sequential(*layers);
# Run data through network
x = torch.randn(1, 3, 224, 224)
y = net(x)
print(x.shape, y.shape);
# Export Model to Load on Netron
torch.onnx.export(net, # model being run
x, # model input (or a tuple for multiple inputs)
"model.onnx", # where to save the model (can be a file or file-like object)
export_params=True, # store the trained parameter weights inside the model file
opset_version=10, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names = ['input'], # the model's input names
output_names = ['output'], # the model's output names
                  dynamic_axes={'input' : {0 : 'batch_size'},    # variable length axes
'output' : {0 : 'batch_size'}})
netron.start('model.onnx', port=9998);
```
<a id='5-4'></a>
## Creating block using traditional Keras
- Code credits: https://keras.io/
```
# Traditional-Keras
import keras
import keras.layers as kla
import keras.models as kmo
import tensorflow as tf
from keras.models import Model
backend = 'channels_last'
from keras import layers
def inception_a_block(input_tensor, filters, stage, block):
bn_axis = 3
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
branch_1 = layers.Conv2D(64, (1, 1),
kernel_initializer='he_normal')(input_tensor)
branch_1 = layers.BatchNormalization(axis=bn_axis)(branch_1)
branch_1 = layers.Activation('relu')(branch_1)
branch_2 = layers.Conv2D(48, (1, 1),
kernel_initializer='he_normal')(input_tensor)
branch_2 = layers.BatchNormalization(axis=bn_axis)(branch_2)
branch_2 = layers.Activation('relu')(branch_2)
branch_2 = layers.Conv2D(64, (5, 5),
kernel_initializer='he_normal', padding="same")(branch_2)
branch_2 = layers.BatchNormalization(axis=bn_axis)(branch_2)
branch_2 = layers.Activation('relu')(branch_2)
    branch_3 = layers.Conv2D(64, (1, 1),
kernel_initializer='he_normal')(input_tensor)
branch_3 = layers.BatchNormalization(axis=bn_axis)(branch_3)
branch_3 = layers.Activation('relu')(branch_3)
branch_3 = layers.Conv2D(96, (3, 3),
kernel_initializer='he_normal', padding="same")(branch_3)
branch_3 = layers.BatchNormalization(axis=bn_axis)(branch_3)
branch_3 = layers.Activation('relu')(branch_3)
branch_3 = layers.Conv2D(96, (3, 3),
kernel_initializer='he_normal', padding="same")(branch_3)
branch_3 = layers.BatchNormalization(axis=bn_axis)(branch_3)
branch_3 = layers.Activation('relu')(branch_3)
branch_4 = layers.AveragePooling2D(pool_size=(3, 3),
strides=(1, 1),
padding='same',
data_format=None)(input_tensor)
branch_4 = layers.Conv2D(filters, (1, 1),
kernel_initializer='he_normal')(branch_4)
branch_4 = layers.BatchNormalization(axis=bn_axis)(branch_4)
branch_4 = layers.Activation('relu')(branch_4)
x = layers.Concatenate()([branch_1, branch_2, branch_3, branch_4])
return x
def create_model(input_shape, filters, stage, block):
img_input = layers.Input(shape=input_shape);
x = inception_a_block(img_input, filters, stage, block)
return Model(img_input, x);
# Invoke the block
filters=32;
input_shape=(224, 224, 3);
model = create_model(input_shape, filters, 0, "0");
# Run data through network
x = tf.placeholder(tf.float32, shape=(1, 224, 224, 3))
y = model(x)
print(x.shape, y.shape)
# Export Model to Load on Netron
model.save("final.h5");
netron.start("final.h5", port=8082)
```
# Blood Glucose Predictions with LSTM network
### Imports
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from statsmodels.tools.eval_measures import rmse
from sklearn.preprocessing import MinMaxScaler
from keras.preprocessing.sequence import TimeseriesGenerator
from keras.models import Sequential
from keras.layers import Dense, LSTM, Dropout, TimeDistributed
from keras.callbacks import ModelCheckpoint, EarlyStopping
import warnings
import io
import math
warnings.filterwarnings("ignore")
```
This is what the DataFrame generated by simglucose looks like
```
df = pd.read_csv("adolescent#008.csv")
df.Time = pd.to_datetime(df.Time)
df = df[0:2480]  # the final data from some patients is not relevant (stays hypoglycemic for too long to be realistic)
df.set_index("Time")
df.head()
```
We are only interested in the patient's blood glucose over time
```
plt.figure(figsize=(16,8))
plt.title('Blood Glucose from adolescent 8')
plt.plot(df['CGM'])
plt.xlabel('Timestamp',fontsize=18)
plt.ylabel('BG (mg/dL)',fontsize=18)
plt.show()
```
Let's create a function to prepare the data for training and testing.
We take 20 as the input length because that is approximately the maximum time span over which carbs and insulin affect blood glucose.
```
def read_pacient(age="adolescent#0", number="08", extension=".csv", training_test_proportion=0.8,input_len=20, output_len=6):
# reading the file
df = pd.read_csv(age+number+extension)
df.Time = pd.to_datetime(df.Time)
    df = df[0:2480]  # the final data from some patients is not relevant (stays hypoglycemic for too long to be realistic)
df.set_index("Time")
    # Keep only the blood glucose readings from the sensor data
data = df.filter(['CGM'])
dataset = data.values
training_data_len = math.ceil( len(dataset) *training_test_proportion) # setting proportion for training and testing
# Scalling data from 0 - 1 to input in the neural network
scaler = MinMaxScaler(feature_range=(0, 1))
scaled_data = scaler.fit_transform(dataset)
train_data = scaled_data[0:training_data_len , : ]
x_train=[] # arrays of blood glucose with len of input_len
y_train = [] # arrays of blood glucose with len of output_len
for i in range(input_len,len(train_data)-output_len):
x_train.append(train_data[i-input_len:i,0]) # past blood glucose to learn
y_train.append(train_data[i:i+output_len,0]) # future blood glucose to predict
x_train, y_train = np.array(x_train), np.array(y_train) # converting to numpy array
'''
Reshape is necessary so the neural network can understand the data
Shape will be (number of predictions, input_len, number of features)
    Feature is which property we are using in the model; in this case it is only the patient's blood glucose
'''
x_train = np.reshape(x_train, (x_train.shape[0],x_train.shape[1],1))
not_scaled_test_data = dataset[training_data_len - input_len: , : ]
test_data = scaled_data[training_data_len - input_len: , : ]
x_test = [] # arrays of blood glucose with len of input_len
y_test = [] # arrays of blood glucose with len of output_len
continuous_ytest = [] # list with not scaled blood glucose from y_test not broken into arrays
'''
    In the test loop we predict output_len values, then the next output_len
    values, and so on, so that we can build a continuous plot of the
    predicted glucose
'''
i = input_len
while (i >= input_len and i < len(test_data)-output_len):
x_test.append(test_data[i-input_len:i,0])
y_test.append(not_scaled_test_data[i:i+output_len,0])
for bg in not_scaled_test_data[i:i+output_len,0]:
continuous_ytest.append(bg) # not for testing, just for plot purpose
i = i+output_len # jump output_len values in the future
x_test = np.array(x_test) # converting to numpy array
x_test = np.reshape(x_test, (x_test.shape[0],x_test.shape[1],1))
return scaler, x_train, y_train, x_test, y_test, continuous_ytest
```
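To make the sliding windows concrete, here is a toy sketch of the windowing scheme used above (hypothetical series; the real function works on scaled CGM values):
```
# Toy illustration of the input/output windows built in read_pacient
series = list(range(30))
input_len, output_len = 20, 6
x0 = series[0:input_len]                       # 20 past values fed to the model
y0 = series[input_len:input_len + output_len]  # the 6 future values to predict
print(x0, y0)
```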
Now, let's create a function that applies an LSTM model to our data
```
def make_predictions(scaler, x_train, y_train, x_test, y_test, batch_size=1, epochs=1):
# LSTM Model
model = Sequential()
model.add(LSTM(units=50, return_sequences=True,input_shape=(x_train.shape[1],1)))
model.add(LSTM(units=50, return_sequences=False))
model.add(Dropout(0.5))
model.add(Dense(units=y_train.shape[1]))
model.compile(optimizer="adam", loss='mse',metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs)
predictions = model.predict(x_test) # make predictions
predictions = np.reshape(predictions, (predictions.shape[0],predictions.shape[1])) # reshape just like y_test
predictions = scaler.inverse_transform(predictions) # reverse data
# Create a continuous data of predictions to plot with continuous_ytest
continuous_predictions = predictions[0]
for i in range(1,len(predictions)):
continuous_predictions = np.concatenate([continuous_predictions,predictions[i]])
rmse=np.sqrt(np.mean(((predictions-y_test)**2)))
return model, predictions, continuous_predictions, rmse
```
Finally, we can have a function to plot our results
```
def show_plots(continuous_ytest, continuous_predictions):
plt.figure(figsize=(16,8))
plt.title('Blood Glucose Prediction Model Result')
plt.plot(continuous_ytest, color = 'b')
plt.plot(continuous_predictions, color = 'r')
plt.xlabel('Timestamp',fontsize=18)
    plt.ylabel('BG (mg/dL)',fontsize=18)
plt.legend(['Real','Predictions'], loc='lower right')
plt.show()
```
Now just testing!
```
scaler, x_train, y_train, x_test, y_test, continuous_ytest = read_pacient() # default parameters are for patient number 8
model, predictions, continuous_predictions, rmse = make_predictions(scaler, x_train, y_train, x_test, y_test,batch_size=100, epochs=100)
```
## Results of training with patient number 08
```
show_plots(continuous_ytest, continuous_predictions)
print("Root-Mean-Squared Deviation {}".format(rmse))
```
We can now create and use a function to apply this model to other patients
```
def test_model(model, age="adolescent#0", number="01", extension=".csv",input_len=20, output_len=6):
# reading the file
df = pd.read_csv(age+number+extension)
df.Time = pd.to_datetime(df.Time)
    df = df[0:2480]  # the final data from some patients is not relevant (stays hypoglycemic for too long to be realistic)
df.set_index("Time")
    # Keep only the blood glucose readings from the sensor data
data = df.filter(['CGM'])
dataset = data.values
# Scalling data from 0 - 1 to input in the neural network
scaler = MinMaxScaler(feature_range=(0, 1))
scaled_data = scaler.fit_transform(dataset)
x_test = [] # arrays of blood glucose with len of input_len
y_test = [] # arrays of blood glucose with len of output_len
continuous_ytest = [] # list with not scaled blood glucose from y_test not broken into arrays
i = input_len
while (i >= input_len and i < len(dataset)-output_len):
x_test.append(scaled_data[i-input_len:i,0])
y_test.append(dataset[i:i+output_len,0])
for bg in dataset[i:i+output_len,0]:
continuous_ytest.append(bg) # not for testing, just for plot purpose
i = i+output_len # jump output_len values in the future
x_test = np.array(x_test) # converting to numpy array
x_test = np.reshape(x_test, (x_test.shape[0],x_test.shape[1],1))
predictions = model.predict(x_test) # make predictions
predictions = np.reshape(predictions, (predictions.shape[0],predictions.shape[1])) # reshape just like y_test
predictions = scaler.inverse_transform(predictions) # reverse data
# Create a continuous data of predictions to plot with continuous_ytest
continuous_predictions = predictions[0]
for i in range(1,len(predictions)):
continuous_predictions = np.concatenate([continuous_predictions,predictions[i]])
rmse=np.sqrt(np.mean(((predictions-y_test)**2)))
return rmse, continuous_ytest, continuous_predictions
rmse2, continuous_ytest2, continuous_predictions2 = test_model(model)
```
### Results with Patient number 01
```
show_plots(continuous_ytest2, continuous_predictions2)
print("Root-Mean-Squared Deviation {}".format(rmse2))
# note: test_model returns (rmse, ytest, predictions) -- unpack in that order
rmse3, continuous_ytest3, continuous_predictions3 = test_model(model, number="10")
```
### Results with Patient number 10
```
show_plots(continuous_ytest3, continuous_predictions3)
print("Root-Mean-Squared Deviation {}".format(rmse3))
rmse4, continuous_ytest4, continuous_predictions4 = test_model(model, number="07")
```
### Results with Patient number 07
```
show_plots(continuous_ytest4, continuous_predictions4)
print("Root-Mean-Squared Deviation {}".format(rmse4))
```
```
from fastai2.vision.all import *
torch.__version__  # show the torch version pulled in by fastai2
```
https://github.com/pytorch/pytorch/issues/34086
Code from
* https://github.com/pytorch/pytorch/blob/2f840b1662b487d5551d7230f8eb4d57645cfff5/test/test_autograd.py
* https://github.com/pytorch/pytorch/blob/9600ed9af3b84c000b7f54765495e96f29c4bf1d/torch/autograd/profiler.py
* https://github.com/pytorch/pytorch/issues/19420
* https://github.com/pytorch/pytorch/issues/19422
* https://github.com/pytorch/pytorch/search?q=export_chrome_trace&type=Issues
Forum
* https://discuss.pytorch.org/t/interpreting-data-from-torch-autograd-profiler-profile/34390/4
* https://discuss.pytorch.org/t/proper-way-to-enable-and-disable-autograd-profiler/89773
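For reference, the minimal round trip that the copied tests below exercise looks like this (a sketch):
```
import json
import tempfile

import torch

# Profile a trivial op and round-trip the trace through JSON
with torch.autograd.profiler.profile() as prof:
    torch.add(torch.ones(1), torch.ones(1))

with tempfile.NamedTemporaryFile(mode="w+") as f:
    prof.export_chrome_trace(f.name)  # writes via the file name
    events = json.load(f)             # our handle is still at position 0
print(type(events))
```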
## Import what we need and copy the profiler tests
```
import json
import sys
import tempfile
import time

from torch.autograd.profiler import (profile, format_time, EventList,
                                     FunctionEvent, FunctionEventAvg,
                                     record_function, emit_nvtx)
class Some():
    # Minimal stand-in for unittest.TestCase: the assert* methods just
    # return booleans instead of raising, so the copied tests can run here
    def assertTrue(self, v): return v
    def assertFalse(self, v): return not v
    def assertEqual(self, a, b): return a == b
def test_profiler_tracing(self):
t1, t2 = torch.ones(1), torch.ones(1)
with torch.autograd.profiler.profile() as prof:
torch.add(t1, t2)
with tempfile.NamedTemporaryFile(mode="w+") as f:
print("export to chrome")
prof.export_chrome_trace(f.name)
# read the trace and expect valid json
# if the JSON generated by export_chrome_trace is not valid, this will throw and fail the test.
parsed = json.load(f)
print(f"pintando json de chrome {f.name}")
print(json.dumps(parsed, indent=4, sort_keys=True))
# Same test but for cuda.
if not torch.cuda.is_available():
return
device = torch.device("cuda:0")
t1, t2 = torch.ones(1, device=device), torch.ones(1, device=device)
with torch.autograd.profiler.profile(use_cuda=True) as prof:
torch.add(t1, t2)
with tempfile.NamedTemporaryFile(mode="w+") as f:
prof.export_chrome_trace(f.name)
# Now validate the json
parsed = json.load(f)
print(f"pintando json de chrome {f.name}")
print(json.dumps(parsed, indent=4, sort_keys=True))
def test_profiler(self):
x = torch.randn(10, 10)
with profile() as p:
self.assertTrue(torch.autograd._profiler_enabled())
y = x * 2 + 4
self.assertFalse(torch.autograd._profiler_enabled())
last_end = 0
names = ['aten::mul', 'aten::to', 'aten::empty_strided', 'aten::copy_',
'aten::empty', 'aten::add', 'aten::to', 'aten::empty_strided',
'aten::copy_', 'aten::empty']
top_level_names = ['aten::mul', 'aten::add']
top_level_iter = iter(top_level_names)
self.assertEqual(len(p.function_events), len(names))
for info, expected_name in zip(p.function_events, names):
if info.cpu_interval.start > last_end:
print(top_level_iter)
top_level_name_expected = next(top_level_iter)
self.assertEqual(info.name, top_level_name_expected)
last_end = info.cpu_interval.end
self.assertEqual(info.name, expected_name)
def test_profiler_unboxed_only(self):
x = torch.rand(3, 4)
with torch.autograd.profiler.profile() as prof:
x.resize_([3, 2])
# @skipIfRocm
def test_profiler_custom_op(self):
inst = torch.classes._TorchScriptTesting._PickleTester([3, 4])
with torch.autograd.profiler.profile() as prof:
torch.ops._TorchScriptTesting.take_an_instance(inst)
found_event = False
for e in prof.function_events:
if e.name == '_TorchScriptTesting::take_an_instance':
found_event = True
self.assertTrue(found_event)
def test_profiler_propagation(self):
def foo(x):
with record_function("in_foo") as rf:
return x * 2
x = torch.rand(3, 4)
traced_foo = torch.jit.trace(foo, x)
def bar(x):
with record_function("in_bar") as rf:
# we expect that profiler will be able
# propagate across fork
fut = torch.jit._fork(traced_foo, x)
y = torch.jit._wait(fut)
# note: continuation (and rf's end) can
# be executed in a different thread
with record_function("in_bar_after_wait") as rf2:
y = y * 2
return y
traced_bar = torch.jit.trace(bar, x)
with profile() as p:
traced_bar(x)
found_foo = False
found_bar = False
found_bar_after_wait = False
for info in p.function_events:
if info.name == "in_foo":
self.assertFalse(found_foo)
found_foo = True
elif info.name == "in_bar":
self.assertFalse(found_bar)
found_bar = True
elif info.name == "in_bar_after_wait":
self.assertFalse(found_bar_after_wait)
found_bar_after_wait = True
self.assertTrue(found_foo)
self.assertTrue(found_bar)
self.assertTrue(found_bar_after_wait)
def test_record_function_callbacks(self):
x = torch.randn(10, 10)
with profile() as p:
with record_function("foo"):
y = x * 2 + 4
function_events = p.function_events
foo_event = [event for event in function_events if "foo" in event.name][0]
self.assertEqual(foo_event.count, 1)
def test_profiler_aggregation_fake(self):
events = EventList()
id = [0]
def get_id():
id[0] = id[0] + 1
return id[0]
# [[thread_id, [(start, end, id), ....]], ...]
# Using list instead of a dict so order is guaranteed for any Python
# version
threads = [
[1, [(0, 1, get_id()), (1, 2, get_id())]],
[0, [(0, 2, get_id()), (1, 2, get_id()), (1, 3, get_id())]],
]
for thread, ranges in threads:
for range in ranges:
assert(len(range) == 3)
events.append(
FunctionEvent(
id=range[2],
node_id=0,
name="",
thread=thread,
cpu_start=range[0],
cpu_end=range[1],
)
)
events.populate_cpu_children()
# Note that [1, 3] pushes out [0, 2] first. Then we record [1, 2]
# as a child of [1, 3]
res = [[], [], [], [], [4]]
def get_children_ids(event):
return [child.id for child in event.cpu_children]
assert([get_children_ids(event) for event in events] == res)
def test_profiler_aggregation_table(self):
"""
Test if the profiling result is aggregated for `str(prof)`
See: https://github.com/pytorch/pytorch/issues/37500
"""
x = torch.randn(1024)
with torch.autograd.profiler.profile() as prof:
torch.einsum("i->", x)
prof_str = str(prof)
prof_table = prof.table()
self.assertEqual(prof_table, prof_str)
def test_profiler_function_event_avg(self):
avg = FunctionEventAvg()
avg.add(FunctionEvent(id=0, node_id=0, name="foo", thread=0, cpu_start=10, cpu_end=15))
avg.add(FunctionEvent(id=1, node_id=0, name="foo", thread=0, cpu_start=20, cpu_end=30))
avg.add(avg)
self.assertEqual(avg.key, "foo")
# aggregate stats
self.assertEqual(avg.count, 4)
self.assertEqual(avg.cpu_time_total, 30)
self.assertEqual(avg.self_cpu_time_total, 30)
self.assertEqual(avg.cuda_time_total, 0)
# average stats
self.assertEqual(avg.cpu_time, 7.5)
self.assertEqual(avg.cuda_time_total, 0)
def test_profiler_shapes(self):
print("")
layer1 = torch.nn.Linear(20, 30)
layer2 = torch.nn.Linear(30, 40)
input = torch.randn(128, 20)
with profile(record_shapes=True) as prof:
layer2(layer1(input))
print(prof.function_events)
top_level_expected_events_and_shapes = [
(None, [[30, 20]]),
('aten::addmm', [[30], [128, 20], [20, 30], [], []]),
(None, [[40, 30]]),
('aten::addmm', [[40], [128, 30], [30, 40], [], []])
]
expected_iter = iter(top_level_expected_events_and_shapes)
last_end = 0
for event in prof.function_events:
if event.cpu_interval.start > last_end:
name_expected, input_shape_expected = next(expected_iter)
if name_expected is not None:
self.assertEqual(event.name, name_expected)
self.assertEqual(event.input_shapes, input_shape_expected)
last_end = event.cpu_interval.end
def test_profiler_no_cuda(self):
print("")
layer = torch.nn.Linear(20, 30)
x = torch.randn(128, 20)
with profile(use_cuda=False) as prof:
layer(x)
prof_str = str(prof)
print(prof_str)
self.assertTrue('cpu' in prof_str.lower())
self.assertTrue('cuda' not in prof_str.lower())
def test_profiler_aggregation_lstm(self):
print("")
rnn = torch.nn.LSTM(10, 20, 2)
total_time_s = 0
with profile(record_shapes=True) as prof:
for i in range(20):
input = torch.randn(5, 3, 10)
h = torch.randn(2, 3, 20)
c = torch.randn(2, 3, 20)
start = time.time()
rnn(input, (h, c))
end = time.time()
total_time_s += end - start
print(prof.table(
sort_by="self_cpu_time_total", row_limit=10, header="TEST"))
print(prof.key_averages(group_by_input_shape=True).table(
sort_by="self_cpu_time_total", row_limit=10))
total_time_us = total_time_s * 1000.0 * 1000.0 # make it us which is profiler default
print(
"Total time based on python measurements: ",
format_time(total_time_us)
)
print(
"CPU time measurement python side overhead: {:.2f}%".format(
(total_time_us / prof.self_cpu_time_total - 1.0) * 100.0
)
)
if sys.platform != "win32":
with tempfile.NamedTemporaryFile() as trace_file:
prof.export_chrome_trace(trace_file.name)
def test_memory_profiler(self):
def run_profiler(tensor_creation_fn, metric):
# collecting allocs / deallocs
with profile(profile_memory=True, record_shapes=True) as prof:
x = None
with record_function("test_user_scope_alloc"):
x = tensor_creation_fn()
with record_function("test_user_scope_dealloc"):
del x
stats = prof.key_averages(group_by_input_shape=True)
print(stats.table(sort_by=metric))
return stats
def check_metrics(stats, metric, allocs=None, deallocs=None):
stat_metrics = {}
for stat in stats:
stat_metrics[stat.key] = getattr(stat, metric)
if allocs is not None:
for alloc_fn in allocs:
self.assertTrue(alloc_fn in stat_metrics)
self.assertTrue(stat_metrics[alloc_fn] > 0)
if deallocs is not None:
for dealloc_fn in deallocs:
self.assertTrue(dealloc_fn in stat_metrics)
self.assertTrue(stat_metrics[dealloc_fn] < 0)
def create_cpu_tensor():
return torch.rand(10, 10)
def create_cuda_tensor():
return torch.rand(10, 10).cuda()
def create_mkldnn_tensor():
return torch.rand(10, 10, dtype=torch.float32).to_mkldnn()
print("Running CPU test")
stats = run_profiler(create_cpu_tensor, "cpu_memory_usage")
check_metrics(
stats,
"cpu_memory_usage",
allocs=[
"aten::empty",
"aten::rand",
"test_user_scope_alloc",
],
deallocs=[
"test_user_scope_dealloc",
]
)
if torch.cuda.is_available():
create_cuda_tensor()
print("Running CUDA test")
stats = run_profiler(create_cuda_tensor, "cuda_memory_usage")
check_metrics(
stats,
"cuda_memory_usage",
allocs=[
"test_user_scope_alloc",
"aten::to",
"aten::empty_strided",
],
deallocs=[
"test_user_scope_dealloc",
]
)
check_metrics(
stats,
"cpu_memory_usage",
allocs=[
"aten::rand",
"aten::empty",
]
)
if torch._C.has_mkldnn:
create_mkldnn_tensor()
print("Running MKLDNN test")
stats = run_profiler(create_mkldnn_tensor, "cpu_memory_usage")
check_metrics(
stats,
"cpu_memory_usage",
allocs=[
"test_user_scope_alloc",
"aten::rand",
"aten::empty",
"aten::to_mkldnn",
],
deallocs=[
"test_user_scope_dealloc",
]
)
# check partial overlap of tensor allocation with memory profiler
x = torch.rand(10, 10)
with profile(profile_memory=True, record_shapes=True) as prof:
del x
x = torch.rand(10, 10)
del x
stats = prof.key_averages(group_by_input_shape=True)
check_metrics(
stats,
"cpu_memory_usage",
allocs=[
"aten::rand",
"aten::empty",
]
)
def test_record_function(self):
x = torch.randn(10, 10)
def forward(x):
with record_function("outer"):
y = x * 2 + 4
with record_function("inner"):
y = y - 1
y = y / 1
forward(x)
with profile() as p:
forward(x)
events = p.function_events
important_events = [
'outer',
'aten::mul',
'aten::add',
'inner',
'aten::sub',
'aten::div'
]
idx = 0
for info in events:
if info.name == important_events[idx]:
idx = idx + 1
if idx == len(important_events):
break
self.assertEqual(idx, len(important_events))
# We can also use record_function to decorate arbitrary function
@record_function('my_func')
def f(x, y):
return x + y
with profile() as p:
f(1, 2)
self.assertTrue('my_func' in str(p))
o = Some()
o.test_profiler_tracing()
from torch.autograd.profiler import FunctionEvent, EventList
```

# Trying to generate a valid trace
```
import random
l = [FunctionEvent(f'XXXX thread-{666+10+i}', i, "name", i, i*1000, i*1200) for i in range(10)]
l2 = [FunctionEvent(9999+100+i, i if random.random() < 0.4 else i+222, "CUDA name", i, i*1000+100, i*1200-100) for i in range(10)]
l.extend(l2)
for i, e in enumerate(l):
    device, start, end = random.randint(8, 10), i*1000+10, i*1200-10
    e.append_kernel("add", device, start, end)
    start, end = i*1000+100, i*1200-100
    e.append_kernel(f"sub_add {i}", device, start, end)
ev=EventList(l)
with tempfile.NamedTemporaryFile(mode="w+") as f:
    print(f"written to {f.name}")
    ev.export_chrome_trace(f.name)
    # read the trace and expect valid json
    # if the JSON generated by export_chrome_trace is not valid, this will throw and fail the test.
    print(f.read())
    print("read")
    f.seek(0)  # rewind: the read above moved the file position to EOF
    parsed = json.load(f)
    print(json.dumps(parsed, sort_keys=True))
print(ev.table())
```
# 7.5 Creating a DataLoader from IMDb (Internet Movie Database)
- In this file, we use data from IMDb (Internet Movie Database) to build the Dataset and DataLoader for binary sentiment classification (0: negative, 1: positive).
Note: all files in this chapter assume they are run on Ubuntu. Take care when running them in environments with a different character encoding, such as Windows.
# 7.5 Learning goals
1. Be able to create a tsv file from text-format data files and build a DataLoader for torchtext from it
# Preparation
Following the instructions in the book, prepare the data used in this chapter.
# 1. Convert the IMDb dataset to tsv format
Download the dataset.
Note: torchtext has a built-in function for loading IMDb, but this time we build everything from scratch so that we can handle datasets that are not provided out of the box.
http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
The dataset contains 50,000 reviews (25,000 each for train and test). File names are determined by the data id and the rating (1-10).
A rating of 10 is the best. Reviews rated 4 or lower are classified as negative, and 7 or higher as positive.
```
# Convert the data into tsv-format files
import glob
import os
import io
import string
# Create the tsv file for the training data
f = open('./data/IMDb_train.tsv', 'w')
path = './data/aclImdb/train/pos/'
for fname in glob.glob(os.path.join(path, '*.txt')):
with io.open(fname, 'r', encoding="utf-8") as ff:
text = ff.readline()
        # Remove any tab characters
text = text.replace('\t', " ")
text = text+'\t'+'1'+'\t'+'\n'
f.write(text)
path = './data/aclImdb/train/neg/'
for fname in glob.glob(os.path.join(path, '*.txt')):
with io.open(fname, 'r', encoding="utf-8") as ff:
text = ff.readline()
        # Remove any tab characters
text = text.replace('\t', " ")
text = text+'\t'+'0'+'\t'+'\n'
f.write(text)
f.close()
# Create the test data
f = open('./data/IMDb_test.tsv', 'w')
path = './data/aclImdb/test/pos/'
for fname in glob.glob(os.path.join(path, '*.txt')):
with io.open(fname, 'r', encoding="utf-8") as ff:
text = ff.readline()
        # Remove any tab characters
text = text.replace('\t', " ")
text = text+'\t'+'1'+'\t'+'\n'
f.write(text)
path = './data/aclImdb/test/neg/'
for fname in glob.glob(os.path.join(path, '*.txt')):
with io.open(fname, 'r', encoding="utf-8") as ff:
text = ff.readline()
        # Remove any tab characters
text = text.replace('\t', " ")
text = text+'\t'+'0'+'\t'+'\n'
f.write(text)
f.close()
```
# 2. Define the preprocessing and tokenization functions
```
import string
import re
# The symbols below are replaced with spaces (except commas and periods)
print("Punctuation characters:", string.punctuation)
# !"#$%&'()*+,-./:;<=>?@[\]^_`{|}~
# Preprocessing
def preprocessing_text(text):
    # Remove the <br /> line-break tags
text = re.sub('<br />', '', text)
    # Replace symbols other than commas and periods with spaces
for p in string.punctuation:
if (p == ".") or (p == ","):
continue
else:
text = text.replace(p, " ")
    # Put spaces around periods and commas
text = text.replace(".", " . ")
text = text.replace(",", " , ")
return text
# Tokenizer (the data is English here, so we simply split on whitespace)
def tokenizer_punctuation(text):
return text.strip().split()
# Define a function that combines preprocessing and tokenization
def tokenizer_with_preprocessing(text):
text = preprocessing_text(text)
ret = tokenizer_punctuation(text)
return ret
# Check the behavior
print(tokenizer_with_preprocessing('I like cats.'))
```
# Creating the DataLoader
```
# Define the processing to apply to the content as the data is read
import torchtext
# Prepare a Field for both the text and the label
max_length = 256
TEXT = torchtext.data.Field(sequential=True, tokenize=tokenizer_with_preprocessing, use_vocab=True,
lower=True, include_lengths=True, batch_first=True, fix_length=max_length, init_token="<cls>", eos_token="<eos>")
LABEL = torchtext.data.Field(sequential=False, use_vocab=False)
# The arguments mean the following:
# init_token: token prepended to the beginning of every sentence
# eos_token: token appended to the end of every sentence
# read each tsv file from the "data" folder
train_val_ds, test_ds = torchtext.data.TabularDataset.splits(
path='./data/', train='IMDb_train.tsv',
test='IMDb_test.tsv', format='tsv',
fields=[('Text', TEXT), ('Label', LABEL)])
# sanity check
print('Number of train/validation examples:', len(train_val_ds))
print('First train/validation example:', vars(train_val_ds[0]))
import random
# split into training and validation data with torchtext.data.Dataset's split function
train_ds, val_ds = train_val_ds.split(
split_ratio=0.8, random_state=random.seed(1234))
# sanity check
print('Number of training examples:', len(train_ds))
print('Number of validation examples:', len(val_ds))
print('First training example:', vars(train_ds[0]))
```
# Build the vocabulary
```
# load a pretrained English word-vector model (fastText) with torchtext
from torchtext.vocab import Vectors
english_fasttext_vectors = Vectors(name='data/wiki-news-300d-1M.vec')
# inspect the word vectors
print("Dimensionality of a word vector:", english_fasttext_vectors.dim)
print("Number of words:", len(english_fasttext_vectors.itos))
# build the vocabulary, attaching the pretrained vectors
TEXT.build_vocab(train_ds, vectors=english_fasttext_vectors, min_freq=10)
# inspect the vocabulary vectors
print(TEXT.vocab.vectors.shape)  # 17,916 words, each represented by a 300-dimensional vector
TEXT.vocab.vectors
# check the word-to-index mapping of the vocabulary
TEXT.vocab.stoi
# create the DataLoaders (in torchtext these are simply called iterators)
train_dl = torchtext.data.Iterator(train_ds, batch_size=24, train=True)
val_dl = torchtext.data.Iterator(
val_ds, batch_size=24, train=False, sort=False)
test_dl = torchtext.data.Iterator(
test_ds, batch_size=24, train=False, sort=False)
# sanity check with a batch from the validation DataLoader
batch = next(iter(val_dl))
print(batch.Text)
print(batch.Label)
```
As shown above, the DataLoader stores word ids, so the distributed representations (embeddings) must be looked up from those ids on the deep-learning model side.
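As a minimal sketch (an illustration, not code from the book: it assumes PyTorch is available and reuses the `TEXT` field and a `batch` from the iterators above), the lookup can be done with an embedding layer initialized from the pretrained vectors:
```
import torch.nn as nn

# build an embedding layer from the fastText vectors attached to the vocabulary;
# freeze=True keeps the pretrained vectors fixed during training (an assumption —
# pass freeze=False to fine-tune them)
embedding = nn.Embedding.from_pretrained(TEXT.vocab.vectors, freeze=True)

text_ids = batch.Text[0]       # LongTensor of word ids, shape (batch_size, max_length)
vectors = embedding(text_ids)  # shape (batch_size, max_length, 300)
```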
The code up to this point is also saved separately as dataloader.py in the "utils" folder; from the next section onward we will import it from there.
That's all.
# Before we start...
This colab notebook is a minimal demo for faceswap-GAN v2.2. Since colab allows a maximum run time of 12 hrs, we will only train a lightweight model in this notebook. **The purpose of this notebook is not to train a model that produces high-quality results, but to give a quick overview of how faceswap-GAN works.**
The pipeline of faceswap-GAN v2.2 is described below:
1. Upload two videos for training.
2. Apply face extraction (preprocessing) on the two uploaded videos
3. Train a lightweight faceswap-GAN model. (This will take 10 ~ 12 hrs)
4. Apply video conversion to the uploaded videos.
# Step 1: Set runtime type to Python 3/GPU
Set the colab notebook to GPU instance through: **runtime -> change runtime type -> Python3 and GPU**
The following cells will show the system information of the current instance. Run the cells and check if it uses python >= 3.6 and has a GPU device.
```
import platform
print(platform.python_version())
from tensorflow.python.client import device_lib
device_lib.list_local_devices()
```
# Step 2: Git clone faceswap-GAN
```
!git clone https://github.com/shaoanlu/faceswap-GAN.git
%cd "faceswap-GAN"
```
# Step 3: Upload training videos
The user should upload two videos: **source video** and **target video**. The model will **transform the source face into the target face by default.**
- The videos better **contain only one person**.
- There is no limit on video length, but the longer it is, the more preprocessing / video conversion time it will take, which may exceed the 12-hour run time limit. (**Recommended video length: 30 secs ~ 2 mins.**)
```
from google.colab import files
# Upload source video
source_video = files.upload()
for fn_source_video, _ in source_video.items():
print(fn_source_video)
# Upload target video
target_video = files.upload()
for fn_target_video, _ in target_video.items():
print(fn_target_video)
```
# Step 4: Set maximum training iterations
The default of 25000 iterations requires ~10 hrs of training.
Iterations >= 27k may exceed the run time limit; iterations < 18k may yield a poorly-trained model.
```
global TOTAL_ITERS
TOTAL_ITERS = 34000
```
# Step 5: Everything is ready.
**Press Ctrl + F10 (or runtime -> run after)** to start the remaining process and leave this page alone. It will take 10 ~ 12 hours to finish training. The result video can be downloaded by running the last cell:
```python
files.download("OUTPUT_VIDEO.mp4")
# Some browsers do not support this line (e.g., Opera does not pop up a save dialog). Please use Firefox or Chrome.
```
Notice that **this page should not be closed or refreshed while running**.
```
%%capture
!pip install moviepy
!pip install keras_vggface
import imageio
imageio.plugins.ffmpeg.download()
import keras.backend as K
from detector.face_detector import MTCNNFaceDetector
import glob
from preprocess import preprocess_video
fd = MTCNNFaceDetector(sess=K.get_session(), model_path="./mtcnn_weights/")
!mkdir -p faceA/rgb
!mkdir -p faceA/binary_mask
!mkdir -p faceB/rgb
!mkdir -p faceB/binary_mask
save_interval = 5 # perform face detection every {save_interval} frames
save_path = "./faceA/"
preprocess_video(fn_source_video, fd, save_interval, save_path)
save_path = "./faceB/"
preprocess_video(fn_target_video, fd, save_interval, save_path)
print(str(len(glob.glob("faceA/rgb/*.*"))) + " face(s) extracted from source video: " + fn_source_video + ".")
print(str(len(glob.glob("faceB/rgb/*.*"))) + " face(s) extracted from target video: " + fn_target_video + ".")
```
## The following cells are from [FaceSwap_GAN_v2.2_train_test.ipynb](https://github.com/shaoanlu/faceswap-GAN/blob/master/FaceSwap_GAN_v2.2_train_test.ipynb)
## Import packages
```
from keras.layers import *
import keras.backend as K
import tensorflow as tf
import os
import cv2
import glob
import time
import numpy as np
from pathlib import PurePath, Path
from IPython.display import clear_output
import matplotlib.pyplot as plt
%matplotlib inline
```
## Configuration
```
K.set_learning_phase(1)
# Number of CPU cores
num_cpus = os.cpu_count()
# Input/Output resolution
RESOLUTION = 64 # 64x64, 128x128, 256x256
assert (RESOLUTION % 64) == 0, "RESOLUTION should be 64, 128, or 256."
# Batch size
batchSize = 4
# Use motion blurs (data augmentation)
# set True if training data contains images extracted from videos
use_da_motion_blur = False
# Use eye-aware training
# require images generated from prep_binary_masks.ipynb
use_bm_eyes = True
# Probability of random color matching (data augmentation)
prob_random_color_match = 0.5
da_config = {
"prob_random_color_match": prob_random_color_match,
"use_da_motion_blur": use_da_motion_blur,
"use_bm_eyes": use_bm_eyes
}
# Path to training images
img_dirA = './faceA/rgb'
img_dirB = './faceB/rgb'
img_dirA_bm_eyes = "./faceA/binary_mask"
img_dirB_bm_eyes = "./faceB/binary_mask"
# Path to saved model weights
models_dir = "./models"
# Architecture configuration
arch_config = {}
arch_config['IMAGE_SHAPE'] = (RESOLUTION, RESOLUTION, 3)
arch_config['use_self_attn'] = True
arch_config['norm'] = "hybrid" # instancenorm, batchnorm, layernorm, groupnorm, none
arch_config['model_capacity'] = "lite" # standard, lite
# Loss function weights configuration
loss_weights = {}
loss_weights['w_D'] = 0.1 # Discriminator
loss_weights['w_recon'] = 1. # L1 reconstruction loss
loss_weights['w_edge'] = 0.1 # edge loss
loss_weights['w_eyes'] = 30. # reconstruction and edge loss on eyes area
loss_weights['w_pl'] = (0.01, 0.1, 0.3, 0.1) # perceptual loss (0.003, 0.03, 0.3, 0.3)
# Init. loss config.
loss_config = {}
loss_config["gan_training"] = "mixup_LSGAN"
loss_config['use_PL'] = False
loss_config["PL_before_activ"] = True
loss_config['use_mask_hinge_loss'] = False
loss_config['m_mask'] = 0.
loss_config['lr_factor'] = 1.
loss_config['use_cyclic_loss'] = False
```
## Build the model
```
from networks.faceswap_gan_model import FaceswapGANModel
from data_loader.data_loader import DataLoader
from utils import showG, showG_mask, showG_eyes
model = FaceswapGANModel(**arch_config)
%%capture
!wget https://github.com/rcmalli/keras-vggface/releases/download/v2.0/rcmalli_vggface_tf_notop_resnet50.h5
#from keras_vggface.vggface import VGGFace
# VGGFace ResNet50
#vggface = VGGFace(include_top=False, model='resnet50', input_shape=(224, 224, 3))
from colab_demo.vggface_models import RESNET50
vggface = RESNET50(include_top=False, weights=None, input_shape=(224, 224, 3))
vggface.load_weights("rcmalli_vggface_tf_notop_resnet50.h5")
#from keras.applications.resnet50 import ResNet50
#vggface = ResNet50(include_top=False, input_shape=(224, 224, 3))
#vggface.summary()
model.build_pl_model(vggface_model=vggface, before_activ=loss_config["PL_before_activ"])
model.build_train_functions(loss_weights=loss_weights, **loss_config)
```
## Start training
```
# Create ./models directory
Path(f"models").mkdir(parents=True, exist_ok=True)
# Get filenames
train_A = glob.glob(img_dirA+"/*.*")
train_B = glob.glob(img_dirB+"/*.*")
train_AnB = train_A + train_B
assert len(train_A), "No image found in " + str(img_dirA)
assert len(train_B), "No image found in " + str(img_dirB)
print ("Number of images in folder A: " + str(len(train_A)))
print ("Number of images in folder B: " + str(len(train_B)))
def show_loss_config(loss_config):
for config, value in loss_config.items():
print(f"{config} = {value}")
def reset_session(save_path):
global model, vggface
global train_batchA, train_batchB
model.save_weights(path=save_path)
del model
del vggface
del train_batchA
del train_batchB
K.clear_session()
model = FaceswapGANModel(**arch_config)
model.load_weights(path=save_path)
#vggface = VGGFace(include_top=False, model='resnet50', input_shape=(224, 224, 3))
vggface = RESNET50(include_top=False, weights=None, input_shape=(224, 224, 3))
vggface.load_weights("rcmalli_vggface_tf_notop_resnet50.h5")
model.build_pl_model(vggface_model=vggface, before_activ=loss_config["PL_before_activ"])
train_batchA = DataLoader(train_A, train_AnB, batchSize, img_dirA_bm_eyes,
RESOLUTION, num_cpus, K.get_session(), **da_config)
train_batchB = DataLoader(train_B, train_AnB, batchSize, img_dirB_bm_eyes,
RESOLUTION, num_cpus, K.get_session(), **da_config)
# Start training
t0 = time.time()
# This try/except is meant to resume training if we disconnected from Colab
try:
gen_iterations
print(f"Resume training from iter {gen_iterations}.")
except:
gen_iterations = 0
errGA_sum = errGB_sum = errDA_sum = errDB_sum = 0
errGAs = {}
errGBs = {}
# Dictionaries are ordered in Python 3.6
for k in ['ttl', 'adv', 'recon', 'edge', 'pl']:
errGAs[k] = 0
errGBs[k] = 0
display_iters = 300
global TOTAL_ITERS
global train_batchA, train_batchB
train_batchA = DataLoader(train_A, train_AnB, batchSize, img_dirA_bm_eyes,
RESOLUTION, num_cpus, K.get_session(), **da_config)
train_batchB = DataLoader(train_B, train_AnB, batchSize, img_dirB_bm_eyes,
RESOLUTION, num_cpus, K.get_session(), **da_config)
while gen_iterations <= TOTAL_ITERS:
# Loss function automation
if gen_iterations == (TOTAL_ITERS//5 - display_iters//2):
clear_output()
loss_config['use_PL'] = True
loss_config['use_mask_hinge_loss'] = False
loss_config['m_mask'] = 0.0
reset_session(models_dir)
print("Building new loss funcitons...")
show_loss_config(loss_config)
model.build_train_functions(loss_weights=loss_weights, **loss_config)
print("Done.")
elif gen_iterations == (TOTAL_ITERS//5 + TOTAL_ITERS//10 - display_iters//2):
clear_output()
loss_config['use_PL'] = True
loss_config['use_mask_hinge_loss'] = True
loss_config['m_mask'] = 0.5
reset_session(models_dir)
print("Building new loss funcitons...")
show_loss_config(loss_config)
model.build_train_functions(loss_weights=loss_weights, **loss_config)
print("Complete.")
elif gen_iterations == (2*TOTAL_ITERS//5 - display_iters//2):
clear_output()
loss_config['use_PL'] = True
loss_config['use_mask_hinge_loss'] = True
loss_config['m_mask'] = 0.2
reset_session(models_dir)
print("Building new loss funcitons...")
show_loss_config(loss_config)
model.build_train_functions(loss_weights=loss_weights, **loss_config)
print("Done.")
elif gen_iterations == (TOTAL_ITERS//2 - display_iters//2):
clear_output()
loss_config['use_PL'] = True
loss_config['use_mask_hinge_loss'] = True
loss_config['m_mask'] = 0.4
loss_config['lr_factor'] = 0.3
reset_session(models_dir)
print("Building new loss funcitons...")
show_loss_config(loss_config)
model.build_train_functions(loss_weights=loss_weights, **loss_config)
print("Done.")
elif gen_iterations == (2*TOTAL_ITERS//3 - display_iters//2):
clear_output()
model.decoder_A.load_weights("models/decoder_B.h5") # swap decoders
model.decoder_B.load_weights("models/decoder_A.h5") # swap decoders
loss_config['use_PL'] = True
loss_config['use_mask_hinge_loss'] = True
loss_config['m_mask'] = 0.5
loss_config['lr_factor'] = 1
reset_session(models_dir)
print("Building new loss funcitons...")
show_loss_config(loss_config)
model.build_train_functions(loss_weights=loss_weights, **loss_config)
print("Done.")
elif gen_iterations == (8*TOTAL_ITERS//10 - display_iters//2):
clear_output()
loss_config['use_PL'] = True
loss_config['use_mask_hinge_loss'] = True
loss_config['m_mask'] = 0.1
loss_config['lr_factor'] = 0.3
reset_session(models_dir)
print("Building new loss funcitons...")
show_loss_config(loss_config)
model.build_train_functions(loss_weights=loss_weights, **loss_config)
print("Done.")
elif gen_iterations == (9*TOTAL_ITERS//10 - display_iters//2):
clear_output()
loss_config['use_PL'] = True
loss_config['use_mask_hinge_loss'] = False
loss_config['m_mask'] = 0.0
loss_config['lr_factor'] = 0.1
reset_session(models_dir)
print("Building new loss funcitons...")
show_loss_config(loss_config)
model.build_train_functions(loss_weights=loss_weights, **loss_config)
print("Done.")
if gen_iterations == 5:
print ("working.")
    # Train discriminators for one batch
data_A = train_batchA.get_next_batch()
data_B = train_batchB.get_next_batch()
errDA, errDB = model.train_one_batch_D(data_A=data_A, data_B=data_B)
errDA_sum +=errDA[0]
errDB_sum +=errDB[0]
# Train generators for one batch
data_A = train_batchA.get_next_batch()
data_B = train_batchB.get_next_batch()
errGA, errGB = model.train_one_batch_G(data_A=data_A, data_B=data_B)
errGA_sum += errGA[0]
errGB_sum += errGB[0]
for i, k in enumerate(['ttl', 'adv', 'recon', 'edge', 'pl']):
errGAs[k] += errGA[i]
errGBs[k] += errGB[i]
gen_iterations+=1
# Visualization
if gen_iterations % display_iters == 0:
clear_output()
# Display loss information
show_loss_config(loss_config)
print("----------")
print('[iter %d] Loss_DA: %f Loss_DB: %f Loss_GA: %f Loss_GB: %f time: %f'
% (gen_iterations, errDA_sum/display_iters, errDB_sum/display_iters,
errGA_sum/display_iters, errGB_sum/display_iters, time.time()-t0))
print("----------")
print("Generator loss details:")
print(f'[Adversarial loss]')
print(f'GA: {errGAs["adv"]/display_iters:.4f} GB: {errGBs["adv"]/display_iters:.4f}')
print(f'[Reconstruction loss]')
print(f'GA: {errGAs["recon"]/display_iters:.4f} GB: {errGBs["recon"]/display_iters:.4f}')
print(f'[Edge loss]')
print(f'GA: {errGAs["edge"]/display_iters:.4f} GB: {errGBs["edge"]/display_iters:.4f}')
if loss_config['use_PL'] == True:
print(f'[Perceptual loss]')
try:
print(f'GA: {errGAs["pl"][0]/display_iters:.4f} GB: {errGBs["pl"][0]/display_iters:.4f}')
except:
print(f'GA: {errGAs["pl"]/display_iters:.4f} GB: {errGBs["pl"]/display_iters:.4f}')
# Display images
print("----------")
wA, tA, _ = train_batchA.get_next_batch()
wB, tB, _ = train_batchB.get_next_batch()
print("Transformed (masked) results:")
showG(tA, tB, model.path_A, model.path_B, batchSize)
print("Masks:")
showG_mask(tA, tB, model.path_mask_A, model.path_mask_B, batchSize)
print("Reconstruction results:")
showG(wA, wB, model.path_bgr_A, model.path_bgr_B, batchSize)
errGA_sum = errGB_sum = errDA_sum = errDB_sum = 0
for k in ['ttl', 'adv', 'recon', 'edge', 'pl']:
errGAs[k] = 0
errGBs[k] = 0
# Save models
model.save_weights(path=models_dir)
```
## The following cells are from [FaceSwap_GAN_v2.2_video_conversion.ipynb](https://github.com/shaoanlu/faceswap-GAN/blob/master/FaceSwap_GAN_v2.2_video_conversion.ipynb)
## Video conversion
```
from converter.video_converter import VideoConverter
global model, vggface
global train_batchA, train_batchB
del model
del vggface
del train_batchA
del train_batchB
tf.reset_default_graph()
K.clear_session()
model = FaceswapGANModel(**arch_config)
model.load_weights(path=models_dir)
fd = MTCNNFaceDetector(sess=K.get_session(), model_path="./mtcnn_weights/")
vc = VideoConverter()
vc.set_face_detector(fd)
vc.set_gan_model(model)
options = {
# ===== Fixed =====
"use_smoothed_bbox": True,
"use_kalman_filter": True,
"use_auto_downscaling": False,
"bbox_moving_avg_coef": 0.65,
"min_face_area": 35 * 35,
"IMAGE_SHAPE": model.IMAGE_SHAPE,
# ===== Tunable =====
"kf_noise_coef": 1e-3,
"use_color_correction": "hist_match",
"detec_threshold": 0.8,
"roi_coverage": 0.9,
"enhance": 0.,
"output_type": 3,
"direction": "AtoB", # ==================== This line determines the transform direction ====================
}
if options["direction"] == "AtoB":
input_fn = fn_source_video
output_fn = "OUTPUT_VIDEO_AtoB.mp4"
elif options["direction"] == "BtoA":
input_fn = fn_target_video
output_fn = "OUTPUT_VIDEO_BtoA.mp4"
duration = None # None or a non-negative float tuple: (start_sec, end_sec). Duration of input video to be converted
vc.convert(input_fn=input_fn, output_fn=output_fn, options=options, duration=duration)
```
# Download result video
```
from google.colab import files
if options["direction"] == "AtoB":
files.download("OUTPUT_VIDEO_AtoB.mp4")
elif options["direction"] == "BtoA":
files.download("OUTPUT_VIDEO_BtoA.mp4")
```
# Timeseries anomaly detection using an Autoencoder
**Author:** [pavithrasv](https://github.com/pavithrasv)<br>
**Date created:** 2020/05/31<br>
**Last modified:** 2020/05/31<br>
**Description:** Detect anomalies in a timeseries using an Autoencoder.
## Introduction
This script demonstrates how you can use a reconstruction convolutional
autoencoder model to detect anomalies in timeseries data.
## Setup
```
import numpy as np
import pandas as pd
from tensorflow import keras
from tensorflow.keras import layers
from matplotlib import pyplot as plt
```
## Load the data
We will use the [Numenta Anomaly Benchmark(NAB)](
https://www.kaggle.com/boltzmannbrain/nab) dataset. It provides artificial
timeseries data containing labeled anomalous periods of behavior. Data are
ordered, timestamped, single-valued metrics.
We will use the `art_daily_small_noise.csv` file for training and the
`art_daily_jumpsup.csv` file for testing. The simplicity of this dataset
allows us to demonstrate anomaly detection effectively.
```
master_url_root = "https://raw.githubusercontent.com/numenta/NAB/master/data/"
df_small_noise_url_suffix = "artificialNoAnomaly/art_daily_small_noise.csv"
df_small_noise_url = master_url_root + df_small_noise_url_suffix
df_small_noise = pd.read_csv(
df_small_noise_url, parse_dates=True, index_col="timestamp"
)
df_daily_jumpsup_url_suffix = "artificialWithAnomaly/art_daily_jumpsup.csv"
df_daily_jumpsup_url = master_url_root + df_daily_jumpsup_url_suffix
df_daily_jumpsup = pd.read_csv(
df_daily_jumpsup_url, parse_dates=True, index_col="timestamp"
)
```
## Quick look at the data
```
print(df_small_noise.head())
print(df_daily_jumpsup.head())
```
## Visualize the data
### Timeseries data without anomalies
We will use the following data for training.
```
fig, ax = plt.subplots()
df_small_noise.plot(legend=False, ax=ax)
plt.show()
```
### Timeseries data with anomalies
We will use the following data for testing and see if the sudden jump up in the
data is detected as an anomaly.
```
fig, ax = plt.subplots()
df_daily_jumpsup.plot(legend=False, ax=ax)
plt.show()
```
## Prepare training data
Get data values from the training timeseries data file and normalize the
`value` data. We have a `value` for every 5 mins for 14 days.
- 24 * 60 / 5 = **288 timesteps per day**
- 288 * 14 = **4032 data points** in total
```
# Normalize and save the mean and std we get,
# for normalizing test data.
training_mean = df_small_noise.mean()
training_std = df_small_noise.std()
df_training_value = (df_small_noise - training_mean) / training_std
print("Number of training samples:", len(df_training_value))
```
### Create sequences
Create sequences combining `TIME_STEPS` contiguous data values from the
training data.
```
TIME_STEPS = 288
# Generated training sequences for use in the model.
def create_sequences(values, time_steps=TIME_STEPS):
output = []
for i in range(len(values) - time_steps):
output.append(values[i : (i + time_steps)])
return np.stack(output)
x_train = create_sequences(df_training_value.values)
print("Training input shape: ", x_train.shape)
```
## Build a model
We will build a convolutional reconstruction autoencoder model. The model will
take input of shape `(batch_size, sequence_length, num_features)` and return
output of the same shape. In this case, `sequence_length` is 288 and
`num_features` is 1.
```
model = keras.Sequential(
[
layers.Input(shape=(x_train.shape[1], x_train.shape[2])),
layers.Conv1D(
filters=32, kernel_size=7, padding="same", strides=2, activation="relu"
),
layers.Dropout(rate=0.2),
layers.Conv1D(
filters=16, kernel_size=7, padding="same", strides=2, activation="relu"
),
layers.Conv1DTranspose(
filters=16, kernel_size=7, padding="same", strides=2, activation="relu"
),
layers.Dropout(rate=0.2),
layers.Conv1DTranspose(
filters=32, kernel_size=7, padding="same", strides=2, activation="relu"
),
layers.Conv1DTranspose(filters=1, kernel_size=7, padding="same"),
]
)
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001), loss="mse")
model.summary()
```
## Train the model
Please note that we are using `x_train` as both the input and the target
since this is a reconstruction model.
```
history = model.fit(
x_train,
x_train,
epochs=50,
batch_size=128,
validation_split=0.1,
callbacks=[
keras.callbacks.EarlyStopping(monitor="val_loss", patience=5, mode="min")
],
)
```
Let's plot training and validation loss to see how the training went.
```
plt.plot(history.history["loss"], label="Training Loss")
plt.plot(history.history["val_loss"], label="Validation Loss")
plt.legend()
plt.show()
```
## Detecting anomalies
We will detect anomalies by determining how well our model can reconstruct
the input data.
1. Find MAE loss on training samples.
2. Find max MAE loss value. This is the worst our model has performed trying
to reconstruct a sample. We will make this the `threshold` for anomaly
detection.
3. If the reconstruction loss for a sample is greater than this `threshold`
value then we can infer that the model is seeing a pattern that it isn't
familiar with. We will label this sample as an `anomaly`.
```
# Get train MAE loss.
x_train_pred = model.predict(x_train)
train_mae_loss = np.mean(np.abs(x_train_pred - x_train), axis=1)
plt.hist(train_mae_loss, bins=50)
plt.xlabel("Train MAE loss")
plt.ylabel("No of samples")
plt.show()
# Get reconstruction loss threshold.
threshold = np.max(train_mae_loss)
print("Reconstruction error threshold: ", threshold)
```
### Compare reconstruction
Just for fun, let's see how our model has reconstructed the first sample.
This is the 288 timesteps from day 1 of our training dataset.
```
# Checking how the first sequence is learnt
plt.plot(x_train[0])
plt.plot(x_train_pred[0])
plt.show()
```
### Prepare test data
```
def normalize_test(values, mean, std):
values -= mean
values /= std
return values
df_test_value = (df_daily_jumpsup - training_mean) / training_std
fig, ax = plt.subplots()
df_test_value.plot(legend=False, ax=ax)
plt.show()
# Create sequences from test values.
x_test = create_sequences(df_test_value.values)
print("Test input shape: ", x_test.shape)
# Get test MAE loss.
x_test_pred = model.predict(x_test)
test_mae_loss = np.mean(np.abs(x_test_pred - x_test), axis=1)
test_mae_loss = test_mae_loss.reshape((-1))
plt.hist(test_mae_loss, bins=50)
plt.xlabel("test MAE loss")
plt.ylabel("No of samples")
plt.show()
# Detect all the samples which are anomalies.
anomalies = test_mae_loss > threshold
print("Number of anomaly samples: ", np.sum(anomalies))
print("Indices of anomaly samples: ", np.where(anomalies))
```
## Plot anomalies
We now know the samples of the data which are anomalies. With this, we will
find the corresponding `timestamps` from the original test data. We will be
using the following method to do that:
Let's say time_steps = 3 and we have 10 training values. Our `x_train` will
look like this:
- 0, 1, 2
- 1, 2, 3
- 2, 3, 4
- 3, 4, 5
- 4, 5, 6
- 5, 6, 7
- 6, 7, 8
- 7, 8, 9
All except the initial and the final time_steps - 1 data values will appear in
`time_steps` number of samples. So, if we know that the samples
[(3, 4, 5), (4, 5, 6), (5, 6, 7)] are anomalies, we can say that the data point
5 is an anomaly.
```
# data i is an anomaly if samples [(i - timesteps + 1) to (i)] are anomalies
anomalous_data_indices = []
for data_idx in range(TIME_STEPS - 1, len(df_test_value) - TIME_STEPS + 1):
    if np.all(anomalies[data_idx - TIME_STEPS + 1 : data_idx + 1]):  # slice end is exclusive, so +1 makes the range include data_idx
anomalous_data_indices.append(data_idx)
```
Let's overlay the anomalies on the original test data plot.
```
df_subset = df_daily_jumpsup.iloc[anomalous_data_indices]
fig, ax = plt.subplots()
df_daily_jumpsup.plot(legend=False, ax=ax)
df_subset.plot(legend=False, ax=ax, color="r")
plt.show()
```
# Visualizations
This tutorial illustrates the core visualization utilities available in Ax.
```
import numpy as np
from ax.service.ax_client import AxClient
from ax.modelbridge.cross_validation import cross_validate
from ax.plot.contour import interact_contour
from ax.plot.diagnostic import interact_cross_validation
from ax.plot.scatter import (
interact_fitted,
plot_objective_vs_constraints,
tile_fitted,
)
from ax.plot.slice import plot_slice
from ax.utils.measurement.synthetic_functions import hartmann6
from ax.utils.notebook.plotting import render, init_notebook_plotting
init_notebook_plotting()
```
## 1. Create experiment and run optimization
The visualizations require an experiment object and a model fit on the evaluated data. The routine below is a copy of the Service API tutorial, so the explanation here is omitted. Retrieving the experiment and model objects for each API paradigm is shown in the respective tutorials.
#### 1a. Define search space and evaluation function
```
noise_sd = 0.1
param_names = [f"x{i+1}" for i in range(6)] # x1, x2, ..., x6
def noisy_hartmann_evaluation_function(parameterization):
x = np.array([parameterization.get(p_name) for p_name in param_names])
noise1, noise2 = np.random.normal(0, noise_sd, 2)
return {
"hartmann6": (hartmann6(x) + noise1, noise_sd),
"l2norm": (np.sqrt((x ** 2).sum()) + noise2, noise_sd)
}
```
#### 1b. Create Experiment
```
ax_client = AxClient()
ax_client.create_experiment(
name="test_visualizations",
parameters=[
{
"name": p_name,
"type": "range",
"bounds": [0.0, 1.0],
}
for p_name in param_names
],
objective_name="hartmann6",
minimize=True,
outcome_constraints=["l2norm <= 1.25"]
)
```
#### 1c. Run the optimization and fit a GP on all data
```
for i in range(20):
parameters, trial_index = ax_client.get_next_trial()
# Local evaluation here can be replaced with deployment to external system.
ax_client.complete_trial(trial_index=trial_index, raw_data=noisy_hartmann_evaluation_function(parameters))
```
## 2. Contour plots
The plot below shows the response surface for `hartmann6` metric as a function of the `x1`, `x2` parameters.
The other parameters are fixed in the middle of their respective ranges, which in this example is 0.5 for all of them.
```
# this could alternately be done with `ax.plot.contour.plot_contour`
render(ax_client.get_contour_plot(param_x="x1", param_y="x2", metric_name='hartmann6'))
```
#### 2a. Interactive contour plot
The plot below allows toggling between different pairs of parameters to view the contours.
```
model = ax_client.generation_strategy.model
render(interact_contour(model=model, metric_name='hartmann6'))
```
## 3. Tradeoff plots
This plot illustrates the tradeoffs achievable for 2 different metrics. The plot takes the x-axis metric as input (usually the objective) and allows toggling among all other metrics for the y-axis.
This is useful to get a sense of the Pareto frontier (i.e., the best objective value achievable under different bounds on the constraint).
```
render(plot_objective_vs_constraints(model, 'hartmann6', rel=False))
```
## 4. Cross-validation plots
CV plots are useful to check how well the model predictions calibrate against the actual measurements. If all points are close to the dashed line, then the model is a good predictor of the real data.
```
cv_results = cross_validate(model)
render(interact_cross_validation(cv_results))
```
## 5. Slice plots
Slice plots show the metric outcome as a function of one parameter while fixing the others. They serve a similar function as contour plots.
```
render(plot_slice(model, "x2", "hartmann6"))
```
## 6. Tile plots
Tile plots are useful for viewing the effect of each arm.
```
render(interact_fitted(model, rel=False))
```
# Exploratory Data Analysis with Pandas
The main scope of this notebook is to perform an analysis of the reviews received for the applications (games) in Steam. Each row in the dataset represents a review made by one user (Author) about a specific application.
The goal will be to answer different possible research questions.
### Useful imports to analyze the dataset and analyze the results
```
import pandas as pd
import random
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import linregress, pearsonr, mannwhitneyu, shapiro, ttest_ind, normaltest
import seaborn as sea
from mpl_toolkits import mplot3d
import statsmodels.api as sm
import plotly.express as px
from sklearn import preprocessing
```
## Preparing the data
First we have to load the .csv files and merge them into a single dataframe. We will only select the columns of interest and skip rows that cause trouble (for example, mixed types in the same column due to the merge).
```
columns = ["app_id", "review_id","app_name", "language", "timestamp_created", "timestamp_updated", "recommended",
"votes_helpful", "votes_funny", "weighted_vote_score", "received_for_free", "steam_purchase", "author.steamid",
"author.num_reviews", "author.playtime_at_review"]
```
To parse the dates we'll use this simple function that converts a timestamp in seconds into a date
```
def dateparse(time_in_secs):
return pd.to_datetime(time_in_secs, unit='s')
df1 = pd.read_csv("./steam_reviews.csv", usecols = columns,
parse_dates=['timestamp_created', 'timestamp_updated'],
date_parser= dateparse)
df2 = pd.read_csv("./steam_reviews_1.csv", usecols = columns,
parse_dates=['timestamp_created', "timestamp_updated"], skiprows=[234060, 5938492, 8278792, 9163394],
date_parser=dateparse)
df3 = pd.read_csv("./steam_reviews_2.csv", usecols = columns,
parse_dates=['timestamp_created', 'timestamp_updated'], skiprows=[3895921, 3984228, 5893390, 1929921],
date_parser=dateparse)
```
To build the actual dataframe we concatenate the dataframes along the rows with the pandas method concat, which works much like numpy's concatenate.
```
df = pd.concat([df1, df2, df3], ignore_index=True)
```
Given that the dataframes consume a lot of memory, we suggest deleting the unused ones when possible.
```
# deleting unused data frames
del [[df1,df2, df3]]
```
## Let's take a first look at the dataframe
We can see how many records are stored by using the method _shape_
```
df.shape
```
Print the first **k** records with the method _head_
```
# print a few lines of the dataset to visualize the format
df.head(5)
```
Or even check the type of each column with _info_
```
df.info()
```
The _describe_ method will give us some statistics about the quantitative features of the dataframe
```
# print different statitics of each columns of the datasets
pd.set_option('display.float_format', lambda x: '%.3f' % x)
df.describe()[1:]
```
## Let's explore the dataset by finding simple insights into the reviews.
Some of the tasks we would like to complete are:
- Plot the number of reviews for each application in descending order.
- Find the applications with the best weighted vote score.
- Find the applications with the most and the least recommendations.
- How many of these applications were purchased, and how many were given for free?
### Building a sub-dataframe for the games
In order to answer this questions is useful to create a dataframe containing the information we need for each game:
- n_reviews: the total number of reviews for that application
- weighted_score_mean: the average the weighted score for the application
- recommended: the number of people who recommended this application
```
# first we count the number of reviews for each game by grouping and counting the distinct review_id values
n_reviews = df.groupby("app_name")["review_id"].count()
# Then we take the average Weighted vote score grouping by the app name
weighted_score_mean = df.groupby("app_name")["weighted_vote_score"].mean()
# Now we count the number of recommendations
recommendations = df.groupby("app_name")["recommended"].sum()
```
We separately store in a variable the number of copies for each game that were acquired for free or purchased
```
gratis = df.groupby("received_for_free").review_id.count()
# Create a new dataframe concatenating the columns
game_df = pd.concat([n_reviews, weighted_score_mean, recommendations], axis=1).reset_index()
# Rename columns in a clearer way
game_df.rename(columns={"review_id": "n_reviews", "weighted_vote_score": "weighted_score_mean"}, inplace=True)
```
Let's take a look at this new dataframe
```
game_df.head()
```
Now we would like to sort each column separately in order to find the best and worst games for each of them; this will also help us plot them later. We take the first 20 games.
```
# most reviewed
most_reviewed = game_df.sort_values(by="n_reviews", ascending=False)[:20]
# highest score
highest_score = game_df.sort_values(by="weighted_score_mean", ascending=False)[:20]
# most and least recommended
most_recommended = game_df.sort_values(by="recommended", ascending=False)[:20]
least_recommended = game_df.sort_values(by="recommended", ascending=True)[:20]
```
Which game has the most recommendations? And the least?
```
print(most_recommended["app_name"].iloc[0], 'has the most recommendations:', most_recommended["recommended"].iloc[0])
print(least_recommended["app_name"].iloc[0], 'has the least recommendations:', least_recommended["recommended"].iloc[0])
```
### Plotting the results
For this task we will use _cmap_ to have colorful barplots and we will implement a function to return a plot given the correct data.
```
# to use cmaps
rescale = lambda y: (y*3 - np.min(y)) / (np.max(y) - np.min(y))
#funtion to use cmap with plt.bar
def c_map(cmap_name,y):
return plt.get_cmap(cmap_name)(rescale(y))
# cmap color extractor
def colors_from_cmap(cmap_name, length):
colors = plt.get_cmap(cmap_name)(np.linspace(0.2, 1, length))
return colors
```
The actual function to plot the data
```
def cmap_barplot(x, y, size, title, xlab='', ylab='', color_map='viridis', num_colors=20):
    '''
    function to plot a horizontal barplot with a cmap
    inputs:
        x, y = columns of a dataframe (x = labels, y = values)
        size = a tuple with the figure size
        title = the figure title
        xlab, ylab = axis labels
        color_map = a string specifying the color map to apply
        num_colors = number of colors required
    '''
    fig = plt.figure(figsize=size);
    plt.title(title);
    plt.barh(x, y,
             color=colors_from_cmap(color_map, num_colors));
    plt.xlabel(xlab);
    plt.ylabel(ylab);
    plt.show();
```
#### Plot for the most reviewed games
```
cmap_barplot(most_reviewed["app_name"].iloc[::-1], most_reviewed["n_reviews"].iloc[::-1],
(15, 7), "Most reviewed games", "Number of reviews", 'Games', "Reds", 20)
```
#### Plot for the most recommended games
```
cmap_barplot(most_reviewed["app_name"].iloc[::-1], most_reviewed["recommended"].iloc[::-1],
(15, 7), "Most recommended games", "Number of recommendations", 'Games', 'Blues', 20)
```
#### Plot for the least recommended games
```
cmap_barplot(most_reviewed["app_name"], most_reviewed["recommended"],
(15, 7), "Least recommended games", "Number of recommendations", 'Games', 'Oranges', 20)
```
#### Highest scored games
```
cmap_barplot(most_reviewed["app_name"].iloc[::-1], most_reviewed["weighted_score_mean"].iloc[::-1],
(15, 7), "Highest scored games", "Average weighted score", 'Games', 'Greens', 20)
```
#### Pie plot of the purchased and for free games
```
fig = plt.figure(0, figsize=(8, 8))
plt.title("Applications purchased vs applications given for free");
plt.pie(labels = ["purchased", "free"],
x = gratis/df.shape[0],
colors=["turquoise", "darkslategrey"], explode = [0.1, 0.1]);
plt.title("Copies purchased vs copies given for free, normalized");
```
## What is the preferred time to review a game?
It might be useful to know the most common time to do a review and plot the number of reviews for given intervals of time.
For this task we will create a dataframe using only the columns that refer to the time a review has been made.
```
# extract the column of timestamps
time_df = df[['timestamp_created']]
#Change the type and store in a new column
time_df['Time_review'] = pd.to_datetime(time_df['timestamp_created'], unit='s').dt.strftime('%H:%M')
```
Now we count the number of occurrences for each unique time we have and find the most common. This operation could also be done with different time formats (e.g., considering only the hours without the minutes).
```
# find and print the max number of occurrences and the time associated
ordered_time = time_df['Time_review'].value_counts().reset_index()
most_common_time = np.array(ordered_time.head(1))[0]  # row: [time, count]
print(most_common_time[0], 'is the most common time with', most_common_time[1], 'occurrences')
ordered_time
```
### Plot the occurrences for an interval of time
```
def reviews_intervals(intervals):
'''
    Given a list of interval boundaries, this function counts the reviews
    falling in each interval and plots the frequencies as a barplot
'''
initial, final = intervals[::2], intervals[1::2]
intervals = pd.DataFrame({"Initial time": initial, "Final time": final})
for i in range(len(intervals)):
# create a new column for each interval and fill with 0 or 1 if the time review is the interval
time_df[intervals.iloc[i,0]+'-'+intervals.iloc[i,1]]=np.where((intervals.iloc[i,0] <= time_df['Time_review']) & (time_df['Time_review'] <= intervals.iloc[i,1]) , 1, 0)
#store the dataframe without the columns 'Time_review','timestamp_created'
nb_review_intervals = time_df.drop(['Time_review','timestamp_created'], axis=1)
nb_review_intervals.sum().plot.barh(title='Number of reviews for each interval', color=colors_from_cmap('autumn', intervals.shape[0]), figsize=(10,7));
# create the list of interval boundaries used in the homework example
intervals = ['06:00:00', '10:59:59', '11:00:00', '13:59:59','14:00:00', '16:59:59', '17:00:00', '19:59:59', '20:00:00', '23:59:59','00:00:00','02:59:59', '03:00:00', '05:59:59']
#apply the function 'reviews_intervals'
reviews_intervals(intervals)
```
## What are the most common languages?
Reviews come from all over the world, but what are the most common languages?
We can answer this by grouping and counting; then we'll extract some information from a dataset filtered to only the most relevant languages.
```
# sorted number of reviews in each language in descending order
reviews_per_language = df.groupby("language")["review_id"].count().sort_values(ascending=False)
#store and print top three languages
top_three_languages = reviews_per_language[:3].index.tolist()
print('The Top 3 languages are :',*top_three_languages)
```
Here we create the function we will use to filter the dataset by language.
```
def filter_by_language(df, languages):
#data frame filtered only with the reviews written in the provided languages
filtered_df = df[df["language"].isin(languages)]
return filtered_df
```
Next we use our function filter_by_language to retrieve the subset of the dataframe pertaining to the three top languages.
For these languages we want to see which reviews have been voted funny or helpful, so we create two new boolean variables. Here we consider a review funny or helpful if it has at least one vote, but this threshold can be changed via the variable `threshold`.
```
filtered_df = filter_by_language(df, top_three_languages)
n = len(filtered_df)
#i fix a threshold which represent the minimum number of vote to consider the review helpful or funny
threshold=1
# new dataframe in which we create two new boolean attributes to know if we have more votes than the threshold
filtered_df = filtered_df.assign(was_funny=lambda x: x["votes_funny"] >= threshold,
was_helpful=lambda x: x["votes_helpful"] >= threshold)
# compute the percentage of funny and helpful reviews
funny_or_not = filtered_df.groupby("was_funny")["review_id"].count() / n
helpful_or_not = filtered_df.groupby("was_helpful")["review_id"].count() / n
```
And now we plot the results.
#### Barplot for the reviews per language
```
cmap_barplot(reviews_per_language.index[::-1], reviews_per_language[::-1],
(15, 7), "Reviews per language", "# of reviews",
'languages', "cool", len(reviews_per_language.index))
```
#### Pie plot for funny or not reviews
```
fig_2 = plt.figure(figsize=(8, 8))
plt.pie(labels = ['Not Funny', 'Funny'],
x = funny_or_not,
colors=["crimson","violet"], explode = [0.1,0.1])
plt.title("Percentage of funny reviews for the top three languages")
plt.show()
```
#### Pie plot for helpful or not reviews
```
fig_2 = plt.figure(figsize=(8,8))
plt.pie(labels = ['Not Helpful','Helpful'],
x = helpful_or_not,
colors =["lime", "lightgreen"], explode = [0.1, 0.1])
plt.title("Percentage of helpful reviews for the top three languages")
plt.show()
```
## Insights about the authors of the reviews
The reviews' authors are users of the games who provide their opinions on them. Now we can check how often they write reviews.
First we retrieve the number of reviews submitted by each author, then sort them in descending order, retrieve the top 10 authors (in terms of number of contributions), and plot the results (after converting each id to a string).
```
#compute the number of review per reviewer "author.steamid"
author_df = df.groupby("author.steamid")["review_id"].count()
# store the top 10
top_10_reviewers = author_df.sort_values(ascending=False).iloc[:10]
#change the type to obtain labels (str)
authors_names = list(map(str, top_10_reviewers.index.tolist()))
```
#### Barplot of the reviewers
```
cmap_barplot(authors_names[::-1], top_10_reviewers[::-1],
(15, 7),"Most popular reviewers","Number of reviews",
"Steam ID", "YlGnBu", len(authors_names))
```
Let's find the top reviewer and analyze them more in depth by obtaining the names of all the applications they reviewed.
```
top_reviewer = authors_names[0]
print('The most popular reviewer has the id',top_reviewer)
top_reviewer = df[df["author.steamid"]==float(top_reviewer)]
print('The top reviewer wrote reviews about :\n')
for app in pd.unique(top_reviewer["app_name"]):
print('\t'+app)
```
And now we specifically save the information about the number of copies he purchased or received for free and whether he recommended them or not.
N.B.: We assume that a person wrote a review only if they played the game, so if they did not obtain it for free, they bought it.
```
free_or_not_top_reviewer = top_reviewer.groupby("received_for_free")["review_id"].count()
free_or_not_top_reviewer / len(top_reviewer)
```
As we can see, the number of games received for free is too small to infer anything meaningful, so we'll focus on the purchased games.
```
recommended_purchased = top_reviewer[top_reviewer["received_for_free"]==False].groupby("recommended")["review_id"].count()
recommended_purchased
```
And now we plot the results.
#### Pie plot of the recommended games purchased from the top reviewer
```
plt.figure(figsize=(10,10))
plt.pie(labels=["Not recommended", "Recommended"],
x = recommended_purchased,
colors=["darkgoldenrod","khaki" ], explode = [0.1, 0.1])
plt.title("Purchased games positive vs negative reviews");
```
##### Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License").
# DCGAN: An example with tf.keras and eager
<table class="tfo-notebook-buttons" align="left"><td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td><td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a></td></table>
This notebook demonstrates how to generate images of handwritten digits using [tf.keras](https://www.tensorflow.org/programmers_guide/keras) and [eager execution](https://www.tensorflow.org/programmers_guide/eager). To do this, we use Deep Convolutional Generative Adversarial Networks ([DCGAN](https://arxiv.org/pdf/1511.06434.pdf)).
On a Colab GPU (Tesla K80), the model takes around 40 seconds per epoch to train.
Below is the output generated after training the generator and discriminator models for 150 epochs.

```
# to generate gifs
!pip install imageio
```
## Import TensorFlow and enable eager execution
```
# Import TensorFlow >= 1.9 and enable eager execution
import tensorflow as tf
tf.enable_eager_execution()
import os
import time
import numpy as np
import glob
import matplotlib.pyplot as plt
import PIL
import imageio
from IPython import display
```
## Load the dataset
We are going to use the MNIST dataset to train the generator and the discriminator. The generator will then generate handwritten digits.
```
(train_images, train_labels), (_, _) = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')
# We are normalizing the images to the range of [-1, 1]
train_images = (train_images - 127.5) / 127.5
BUFFER_SIZE = 60000
BATCH_SIZE = 256
```
## Use tf.data to create batches and shuffle the dataset
```
train_dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
```
## Write the generator and discriminator models
* **Generator**
  * It is responsible for **creating images convincing enough to fool the discriminator**.
  * It consists of Conv2DTranspose (upsampling) layers. We start with a fully connected layer and upsample the image 2 times to reach the desired image size (the MNIST image size), which is (28, 28, 1).
  * We use **relu** activation in the generator, except for the **last layer**, which uses **tanh** activation (the discriminator below uses **leaky relu**).
* **Discriminator**
* **The discriminator is responsible for classifying the fake images from the real images.**
* In other words, the discriminator is given generated images(from the generator) and the real MNIST images. The job of the discriminator is to classify these images into fake(generated) and real(MNIST images).
* **Basically the generator should be good enough to fool the discriminator that the generated images are real**.
```
class Generator(tf.keras.Model):
def __init__(self):
super(Generator, self).__init__()
self.fc1 = tf.keras.layers.Dense(7*7*64, use_bias=False)
self.batchnorm1 = tf.keras.layers.BatchNormalization()
self.conv1 = tf.keras.layers.Conv2DTranspose(64, (5, 5), strides=(1, 1), padding='same', use_bias=False)
self.batchnorm2 = tf.keras.layers.BatchNormalization()
self.conv2 = tf.keras.layers.Conv2DTranspose(32, (5, 5), strides=(2, 2), padding='same', use_bias=False)
self.batchnorm3 = tf.keras.layers.BatchNormalization()
self.conv3 = tf.keras.layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False)
def call(self, x, training=True):
x = self.fc1(x)
x = self.batchnorm1(x, training=training)
x = tf.nn.relu(x)
x = tf.reshape(x, shape=(-1, 7, 7, 64))
x = self.conv1(x)
x = self.batchnorm2(x, training=training)
x = tf.nn.relu(x)
x = self.conv2(x)
x = self.batchnorm3(x, training=training)
x = tf.nn.relu(x)
x = tf.nn.tanh(self.conv3(x))
return x
class Discriminator(tf.keras.Model):
def __init__(self):
super(Discriminator, self).__init__()
self.conv1 = tf.keras.layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same')
self.conv2 = tf.keras.layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same')
self.dropout = tf.keras.layers.Dropout(0.3)
self.flatten = tf.keras.layers.Flatten()
self.fc1 = tf.keras.layers.Dense(1)
def call(self, x, training=True):
x = tf.nn.leaky_relu(self.conv1(x))
x = self.dropout(x, training=training)
x = tf.nn.leaky_relu(self.conv2(x))
x = self.dropout(x, training=training)
x = self.flatten(x)
x = self.fc1(x)
return x
generator = Generator()
discriminator = Discriminator()
```
## Define the loss functions and the optimizer
* **Discriminator loss**
* The discriminator loss function takes 2 inputs; **real images, generated images**
* real_loss is a sigmoid cross entropy loss of the **real images** and an **array of ones(since these are the real images)**
* generated_loss is a sigmoid cross entropy loss of the **generated images** and an **array of zeros(since these are the fake images)**
* Then the total_loss is the sum of real_loss and the generated_loss
* **Generator loss**
* It is a sigmoid cross entropy loss of the generated images and an **array of ones**
* The discriminator and the generator optimizers are different since we will train them separately.
```
def discriminator_loss(real_output, generated_output):
# [1,1,...,1] with real output since it is true and we want
# our generated examples to look like it
real_loss = tf.losses.sigmoid_cross_entropy(multi_class_labels=tf.ones_like(real_output), logits=real_output)
# [0,0,...,0] with generated images since they are fake
generated_loss = tf.losses.sigmoid_cross_entropy(multi_class_labels=tf.zeros_like(generated_output), logits=generated_output)
total_loss = real_loss + generated_loss
return total_loss
def generator_loss(generated_output):
return tf.losses.sigmoid_cross_entropy(tf.ones_like(generated_output), generated_output)
discriminator_optimizer = tf.train.AdamOptimizer(1e-4)
generator_optimizer = tf.train.AdamOptimizer(1e-4)
```
## Training
* We start by iterating over the dataset
* The generator is given **noise as an input** which when passed through the generator model will output a image looking like a handwritten digit
* The discriminator is given the **real MNIST images as well as the generated images(from the generator)**.
* Next, we calculate the generator and the discriminator loss.
* Then, we calculate the gradients of loss with respect to both the generator and the discriminator variables(inputs) and apply those to the optimizer.
## Generate Images
* After training, it's time to generate some images!
* We start by creating noise array as an input to the generator
* The generator will then convert the noise into handwritten images.
* Last step is to plot the predictions and **voila!**
```
EPOCHS = 150
noise_dim = 100
num_examples_to_generate = 100
# keeping the random vector constant for generation(prediction) so
# it will be easier to see the improvement of the gan.
random_vector_for_generation = tf.random_normal([num_examples_to_generate,
noise_dim])
def generate_and_save_images(model, epoch, test_input):
# make sure the training parameter is set to False because we
# don't want to train the batchnorm layer when doing inference.
predictions = model(test_input, training=False)
fig = plt.figure(figsize=(10,10))
for i in range(predictions.shape[0]):
plt.subplot(10, 10, i+1)
plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5, cmap='gray')
plt.axis('off')
# tight_layout minimizes the overlap between 2 sub-plots
plt.tight_layout()
plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
plt.show()
def train(dataset, epochs, noise_dim):
for epoch in range(epochs):
start = time.time()
for images in dataset:
# generating noise from a uniform distribution
noise = tf.random_normal([BATCH_SIZE, noise_dim])
with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
generated_images = generator(noise, training=True)
real_output = discriminator(images, training=True)
generated_output = discriminator(generated_images, training=True)
gen_loss = generator_loss(generated_output)
disc_loss = discriminator_loss(real_output, generated_output)
gradients_of_generator = gen_tape.gradient(gen_loss, generator.variables)
gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.variables)
generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.variables))
discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.variables))
if epoch % 10 == 0:
display.clear_output(wait=True)
generate_and_save_images(generator,
epoch + 1,
random_vector_for_generation)
print ('Time taken for epoch {} is {} sec'.format(epoch + 1,
time.time()-start))
# generating after the final epoch
generate_and_save_images(generator,
epochs,
random_vector_for_generation)
train(train_dataset, EPOCHS, noise_dim)
```
# Display an image using the epoch number
```
def display_image(epoch_no):
plt.figure(figsize=(15,15))
plt.imshow(np.array(PIL.Image.open('image_at_epoch_{:04d}.png'.format(epoch_no))))
plt.axis('off')
display_image(EPOCHS)
```
## Generate a GIF of all the saved images.
<!-- TODO(markdaoust): Remove the hack when Ipython version is updated -->
```
with imageio.get_writer('dcgan.gif', mode='I') as writer:
filenames = glob.glob('image*.png')
filenames = sorted(filenames)
for filename in filenames:
image = imageio.imread(filename)
writer.append_data(image)
# this is a hack to display the gif inside the notebook
os.system('mv dcgan.gif dcgan.gif.png')
display.Image(filename="dcgan.gif.png")
```
# Assignment #04
## Exercise #04-01: a glimpse in the C language
This exercise can be done on a linux machine only!
```{tip}
You can use MyBinder's terminal if you don't have Linux!
```
Here is the C code sample from the lecture:
```c
#include <stdio.h>
int main ()
{
int a = 2;
int b = 3;
int c = a + b;
printf ("Sum of two numbers : %d \n", c);
}
```
**Write this code in a C code file, compile and run it.**
**Now, replace the line ``int b = 3`` with ``char b[] = "Hello";``. Compile and run the program again (ignore warnings at compilation). Does the output match your expectations? Can you explain what happens? Compare this behavior to python's, and try to explain why this behavior can lead to faster execution times.**
(content:montecarlo)=
## Exercise #04-02: Monte-Carlo estimation of $\pi$
A simple way to estimate $\pi$ using a computer is based on a [Monte-Carlo](https://en.wikipedia.org/wiki/Monte_Carlo_method) method. By drawing a sample of N points with random 2D coordinates (x, y) in the ``[0, 1[`` range, the fraction of points that fall within the unit circle (the number of such points divided by N) gives an estimate of $\pi / 4$.
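For illustration only (the exercise below asks you to write both implementations and time them yourself, so treat this as a statement of the idea rather than a solution), a minimal numpy sketch of the estimator could look like this:
```
import numpy as np

def pi_monte_carlo(n):
    # draw n random points in the unit square [0, 1[ x [0, 1[
    x = np.random.random(n)
    y = np.random.random(n)
    # the fraction of points inside the quarter unit circle estimates pi / 4
    return 4 * np.count_nonzero(x**2 + y**2 < 1) / n

print(pi_monte_carlo(1_000_000))  # prints a value close to 3.1416
```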
**Provide two implementations of the monte-carlo estimation of $\pi$: a pure python version (standard library) and a vectorized version using numpy. Time their execution for N = [1e2, 1e3, ..., 1e7]. Optional: plot the numpy speed-up as a function of N.**
**Optional: try the numpy version with N=1e8 and above. Draw conclusions about a new trade-off that appears for large values of N.**
```{tip}
You can try to mimic ipython's ``%timeit`` in your code by running each function at least three times and taking the fastest execution of all three.
```
## Exercise #04-03: a new format based on fixed point binary numbers
Write a function which converts binary strings to decimal numbers. The function should handle unsigned (positive) numbers only. Examples:
- ``'101010'`` $\rightarrow$ ``42``
- ``'10000.1'`` $\rightarrow$ ``16.5``
- ``'1011101101.10101011'`` $\rightarrow$ ``749.66796875``
Now let's develop a new standard based on this representation. Dots cannot be represented by 0s and 1s, so that if we want the position of the dot to be flexible we need an additional memory slot to store this position. Let's define our new format as a 32 bits long sequence of bits, the first 5 bits (starting from the left) being used to give the position of the dot, and the remaining 27 bits used to represent the number. Examples:
- ``'10101010101010101010101010101010'`` $\rightarrow$ ``699050.65625``.
- ``'00000001100110011001100110011001'`` $\rightarrow$ ``0.19999999552965164``.
Explanation for example 1: the first five digits are `'10101'` which gives the number 21. The second part of the string therefore becomes a dot at position 21: ``'010101010101010101010.101010'``. This binary number is then converted to decimal.
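Building on the conversion function sketched above, decoding a 32-bit BSE string might look like this:
```
def bse_to_decimal(s):
    """Decode a 32-bit BSE string: 5 bits of dot position + 27 bits of digits."""
    assert len(s) == 32
    dot = int(s[:5], 2)  # position of the dot within the remaining 27 bits
    digits = s[5:]
    return bin_to_decimal(digits[:dot] + '.' + digits[dot:])

assert bse_to_decimal('10101010101010101010101010101010') == 699050.65625
```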
Let's name this standard "BSE" (for "best standard ever"), and try to convince the *Institute of Electrical and Electronics Engineers* to adopt it in place of the old IEEE 754 standard. We have to answer the following questions:
- what is the smallest number the BSE can represent? The largest?
- what is the maximal accuracy of the BSE? (in other words, what is the difference between the smallest positive number and zero?)
- what is the lowest accuracy of our standard? (in other words, what is the difference between the largest number we can represent and the second largest?)
- does the difference between two nearest representable numbers change when the dot position doesn't?
- now compute the precision of our format for a range of possible values of the BSE
- for these values, compare the BSE to the IEEE754 ``binary32`` format (or its numpy equivalent ``np.float32``) using [numpy.nextafter](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.nextafter.html).
- (optional: you can also use matplotlib and a log-log plot to produce a graphic similar to the [wikipedia page on IEEE 754](https://en.wikipedia.org/wiki/IEEE_754#Basic_and_interchange_formats))
Conclude. Do you think we should try to convince the *Institute of Electrical and Electronics Engineers* and [present them our results](https://xkcd.com/541/)?
```{warning}
The BSE format **is not** the IEEE754 format. The BSE is a fun format explaining *some* (but not all) of the underlying concepts behind floating point numbers. I'm just saying, because some people got confused during the exam and remembered the BSE better than the real floating point representation...
```
## Exercise #04-04: exponential error growth
The number `e` can be defined as the sum of the infinite series:
$$e = \sum_{n=0}^{\infty} \frac{1}{n!}$$
We are going to approximate this number by truncating the sum to a finite value. We use the **standard library** and its math module:
```
import math
n = 100
e1 = 0
for i in range(n + 1):
e1 += 1. / math.factorial(i)
e1
```
Close enough! Now let's compute it with the same values, but summed from n=100 to n=0:
```
e2 = 0
for i in range(n + 1)[::-1]:
e2 += 1. / math.factorial(i)
e2
```
Seems reasonable too! Are they different?
```
e1 - e2
```
**Which of the two values is closest to the actual e? Explain why this occurs, and what we can learn from this experiment.**
```
# for use in tutorial and development; do not include this `sys.path` change in production:
import sys ; sys.path.insert(0, "../")
```
# Vector embedding with `gensim`
Let's make use of deep learning, through a technique called *embedding*, to analyze the relatedness of the labels used for recipe ingredients.
Among the most closely related ingredients:
* Some are very close synonyms and should be consolidated to improve data quality
* Others are ingredients that frequently pair with the query, which is useful for recommendations
On the one hand, this approach is quite helpful for analyzing the NLP annotations that go into a knowledge graph.
On the other hand it can be used along with [`SKOS`](https://www.w3.org/2004/02/skos/) or similar vocabularies for ontology-based discovery within the graph, e.g., for advanced search UI.
## Curating annotations
We'll be working with the labels for ingredients that go into our KG.
Looking at the raw data, there are many cases where slightly different spellings are being used for the same entity.
As a first step let's define a list of synonyms to substitute, prior to running the vector embedding.
This will help produce better quality results.
Note that this kind of work comes under the general heading of *curating annotations* ... which is what we spend so much time doing in KG work.
It's similar to how *data preparation* is ~80% of the workload for data science teams, and for good reason.
```
SYNONYMS = {
"pepper": "black pepper",
"black pepper": "black pepper",
"egg": "egg",
"eggs": "egg",
"vanilla": "vanilla",
"vanilla extract": "vanilla",
"flour": "flour",
"all-purpose flour": "flour",
"onions": "onion",
"onion": "onion",
"carrots": "carrot",
"carrot": "carrot",
"potatoes": "potato",
"potato": "potato",
"tomatoes": "tomato",
"fresh tomatoes": "tomato",
"fresh tomato": "tomato",
"garlic": "garlic",
"garlic clove": "garlic",
"garlic cloves": "garlic",
}
```
## Analyze ingredient labels from 250K recipes
```
import csv
MAX_ROW = 250000 # 231638
max_context = 0
min_context = 1000
recipes = []
vocab = set()
with open("../dat/all_ind.csv", "r") as f:
reader = csv.reader(f)
next(reader, None) # remove file header
for i, row in enumerate(reader):
id = row[0]
ind_set = set()
# substitute synonyms
for ind in set(eval(row[3])):
if ind in SYNONYMS:
ind_set.add(SYNONYMS[ind])
else:
ind_set.add(ind)
if len(ind_set) > 1:
recipes.append([id, ind_set])
vocab.update(ind_set)
max_context = max(max_context, len(ind_set))
min_context = min(min_context, len(ind_set))
if i > MAX_ROW:
break
print("max context: {} unique ingredients per recipe".format(max_context))
print("min context: {} unique ingredients per recipe".format(min_context))
print("vocab size", len(list(vocab)))
```
Since we've performed this data preparation work, let's use `pickle` to save this larger superset of the recipes dataset to the `tmp.pkl` file:
```
import pickle
pickle.dump(recipes, open("tmp.pkl", "wb"))
recipes[:3]
```
Then we can restore the pickled Python data structure for usage later in other use cases.
The output shows the first few entries, to illustrate the format.
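Restoring that superset in a later session is then a one-liner:
```
import pickle

recipes = pickle.load(open("tmp.pkl", "rb"))
```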
Now reshape this data into a vector of vectors of ingredients per recipe, to use for training a [*word2vec*](https://arxiv.org/abs/1301.3781) vector embedding model:
```
vectors = []
for id, ind_set in recipes:
v = []
for ind in ind_set:
v.append(ind)
vectors.append(v)
vectors[:3]
```
We'll use the [`Word2Vec`](https://radimrehurek.com/gensim/models/word2vec.html) implementation in the `gensim` library (i.e., *deep learning*) to train an embedding model.
This approach tends to work best if the training data has at least 100K rows.
Let's also show how to serialize the *word2vec* results, saving them to the `tmp.w2v` file so they could be restored later for other use cases.
NB: there is work in progress which will replace `gensim` with `pytorch` instead.
```
import gensim
MIN_COUNT = 2
model_path = "tmp.w2v"
model = gensim.models.Word2Vec(vectors, min_count=MIN_COUNT, window=max_context)
model.save(model_path)
```
The `get_related()` function takes any ingredient as input, using the embedding model to find the most similar other ingredients – along with calculating [`levenshtein`](https://github.com/toastdriven/pylev) edit distances (string similarity) among these labels. Then it calculates *percentiles* for both metrics in [`numpy`](https://numpy.org/) and returns the results as a [`pandas`](https://pandas.pydata.org/) DataFrame.
```
import numpy as np
import pandas as pd
import pylev
def get_related (model, query, n=20, granularity=100):
"""return a DataFrame of the closely related items"""
try:
bins = np.linspace(0, 1, num=granularity, endpoint=True)
v = sorted(
model.wv.most_similar(positive=[query], topn=n),
key=lambda x: x[1],
reverse=True
)
df = pd.DataFrame(v, columns=["ingredient", "similarity"])
s = df["similarity"]
quantiles = s.quantile(bins, interpolation="nearest")
df["sim_pct"] = np.digitize(s, quantiles) - 1
df["levenshtein"] = [ pylev.levenshtein(d, query) / len(query) for d in df["ingredient"] ]
s = df["levenshtein"]
quantiles = s.quantile(bins, interpolation="nearest")
df["lev_pct"] = granularity - np.digitize(s, quantiles)
return df
except KeyError:
return pd.DataFrame(columns=["ingredient", "similarity", "percentile"])
```
Let's try this with `dried basil` as the ingredient to query, and review the top `50` most similar other ingredients returned as the DataFrame `df`:
```
pd.set_option("max_rows", None)
df = get_related(model, "dried basil", n=50)
df
```
Note how some of the most similar items, based on vector embedding, are *synonyms* or special forms of our query `dried basil` ingredient: `dried basil leaves`, `dry basil`, `dried sweet basil leaves`, etc. These tend to rank high in terms of string similarity too (i.e., low levenshtein distance).
Let's plot the similarity measures:
```
import matplotlib
import matplotlib.pyplot as plt
matplotlib.style.use("ggplot")
df["similarity"].plot(alpha=0.75, rot=0)
plt.show()
```
Notice the inflection points at approximately `0.56` and again at `0.47` in that plot.
We could use some statistical techniques (e.g., clustering) to segment the similarities into a few groups:
* highest similarity – potential synonyms for the query
* mid-range similarity – potential [hypernyms and hyponyms](https://en.wikipedia.org/wiki/Hyponymy_and_hypernymy) for the query
* long-tail similarity – other ingredients that pair well with the query
In this example, below a threshold of the 75th percentile for vector embedding similarity, the related ingredients are less about being synonyms and more about other foods that pair well with basil.
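As a quick sketch of that kind of segmentation (the 75th-percentile cutoff here is just an illustrative assumption):
```
# keep only the strongest candidates, i.e. likely synonyms
synonym_candidates = df[df["sim_pct"] >= 75]
synonym_candidates.head()
```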
Let's define another function `rank_related()` which ranks the related ingredients based on a combination of these two metrics.
This uses a cheap approximation of a [*pareto archive*](https://www.cs.bham.ac.uk/~jdk/multi/) for the ranking -- which comes in handy for recommender systems and custom search applications that must combine multiple ranking metrics:
```
from kglab import root_mean_square
def rank_related (df):
df2 = df.copy(deep=True)
df2["related"] = df2.apply(lambda row: root_mean_square([row[2], row[4]]), axis=1)
return df2.sort_values(by=["related"], ascending=False)
rank_related(df)
```
Notice how the "synonym" cases tend to move up to the top now?
Meanwhile, the "pairs well with" items are in the lower half of the ranked list: `fresh mushrooms`, `italian turkey sausage`, `cooked spaghetti`, `white kidney beans`, etc.
---
## Exercises
**Exercise 1:**
Build a report for a *human-in-the-loop* reviewer, using the `rank_related()` function while iterating over `vocab` to make algorithmic suggestions for possible synonyms.
**Exercise 2:**
How would you make algorithmic suggestions for a reviewer about which ingredients could be related to a query, e.g., using the `skos:broader` and `skos:narrower` relations in the [`skos`](https://www.w3.org/2004/02/skos/) vocabulary to represent *hypernyms* and *hyponyms* respectively?
This could extend the KG to provide a kind of thesaurus about recipe ingredients.
# Variational Autoencoder in TensorFlow
[Variational Autoencoders](https://arxiv.org/abs/1312.6114) (VAE) are a popular model that allows for unsupervised (and semi-supervised) learning. In this notebook, we'll implement a simple VAE on the MNIST dataset.
One of the primary goals of the VAE (and auto-encoders in general) is to reconstruct the original input. Why would we want to do that? At first glance, such a model seems silly: a simple identity function achieves the same thing with perfect results. However, with an autoencoder, we can learn a compressed representation in a smaller latent space, allowing us to learn features and structure of the data. Autoencoders are composed of two arms, the **encoder** and **decoder**, which convert values from the data space to the latent space and vice versa, respectively.
Importantly, since we're simply reconstructing the original input, we do *not* necessarily need labels to do our learning, as we have in previous examples. This is significant, as labels are often far more expensive to acquire than raw data, often prohibitively so. VAEs therefore allow us to leverage abundant unlabeled data. That said, VAEs are also able to take advantage of labels when available as well, either in a completely supervised or semi-supervised setting. Altogether, autoencoders can achieve impressive results on tasks like denoising, segmentation, and even predicting future images.
## Imports and Data
First, some package imports and loading of the data. This is similar to what we've done before, with the main difference being that we're going to use TensorFlow Slim, as a follow-up to [notebook 02A](https://github.com/kevinjliang/Duke-Tsinghua-MLSS-2017/blob/master/02A_TensorFlow-Slim.ipynb).
```
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
slim = tf.contrib.slim
# Import data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
```
## Encoder
The encoder deterministically transforms the data $x$ from the data space to the latent space of $z$. Since we're dealing with a *variational* autoencoder, we attempt to model the *distribution* of the latent space given the input, represented by $q(z|x)$. This isn't immediately obvious in the code implementation, but we assume a standard Gaussian prior on this distribution, and our encoder returns the mean and variance (actually log-variance) of this distribution. We use log-variance because our model returns a real number, while variances must be positive.
MNIST is a very simple dataset, so let's also keep the model simple: an MLP with 2 fully connected layers. We name the output `mu_logvar` as we will be interpreting the first half of the final 128-dimensional vector as the mean $\mu$ and the second half as the log-variance log($\sigma^2$).
```
def encoder(x):
"""Network q(z|x)"""
with slim.arg_scope([slim.fully_connected],
activation_fn=tf.nn.relu,
weights_initializer=tf.truncated_normal_initializer(0.0, 0.1)):
mu_logvar = slim.fully_connected(x, 128, scope='fc1')
mu_logvar = slim.fully_connected(mu_logvar, 128, activation_fn=None, scope='fc2')
return mu_logvar
```
Note that we use a couple features of TF-Slim here:
1. We use `slim.fully_connected()` to specify which layers we want to use, without having to worry about defining weight or bias variables beforehand.
2. We use `slim.arg_scope()` to specify default arguments so we can leave them out of the definitions of each of the fully connected layers. We can still override the `activation_fn` for the last layer though.
For this simple model, TF-Slim doesn't actually benefit us all that much, but for the sake of demonstration, we'll stick with it.
## Decoder
The decoder is the generative arm of the autoencoder. In the variational autoencoder, the image generation process is probabilistic: we draw a $z$ from the probability distribution output of the encoder and generate an output in the data domain. This reconstruction $\hat{x}$ is thus of the distribution $p(x|z)$.
Again, since MNIST is simple, we'll use a 2 layer MLP for the decoder. Importantly, since we are focusing on reconstruction, we make sure that the final output of the decoder $\hat{x}$ is the same dimensions as our input $x$.
```
def decoder(mu_logvar):
"""Network p(x|z)"""
with slim.arg_scope([slim.fully_connected],
activation_fn=tf.nn.relu,
weights_initializer=tf.truncated_normal_initializer(0.0, 0.1)):
# Interpret z as concatenation of mean and log variance
mu, logvar = tf.split(mu_logvar, num_or_size_splits=2, axis=1)
# Standard deviation must be positive
stddev = tf.sqrt(tf.exp(logvar))
# Draw a z from the distribution
epsilon = tf.random_normal(tf.shape(stddev))
z = mu + tf.multiply(stddev, epsilon)
x_hat = slim.fully_connected(z, 128, scope='fc1')
x_hat = slim.fully_connected(x_hat, 784, activation_fn=None, scope='fc2')
return x_hat
```
## Loss
Our model has two criteria we're training to optimize:
1. Reconstruction loss: As an **autoencoder**, we want to be able to reconstruct the original input. To evaluate how well the model has done that, we use a pixel-wise L2 distance metric. *Is this a good idea? What are the potential weaknesses of this approach?*
2. [KL Divergence](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence): Because this model is **variational**, we also include a KL penalty to impose a Gaussian prior on the latent space. The exact derivation of this term can be found in the original [Auto-Encoding Variational Bayes paper](https://arxiv.org/abs/1312.6114). *Is a standard Gaussian prior a good assumption? What are the potential weaknesses of this approach?*
Because this model has two losses (unlike the single loss we've had in previous classification examples), we also have an extra parameter $\lambda$ to tune how to balance the two losses. This parameter can actually be very significant and require considerable tuning. What you set it to depends on the dataset, model, and application. Here, $\lambda=1$ turns out to work pretty well.
We use the ADAM algorithm that we've used before for optimization.
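For reference, assuming a diagonal Gaussian posterior $q(z|x) = \mathcal{N}(\mu, \sigma^2)$ and a standard normal prior, the KL term has the closed form implemented in the code below:

$$D_{KL}\big(q(z|x) \,\|\, \mathcal{N}(0, I)\big) = -\frac{1}{2} \sum_{j} \left( 1 + \log \sigma_j^2 - \mu_j^2 - \sigma_j^2 \right)$$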
```
def optimizer(x_hat, x, mu_logvar):
"""Define loss functions (reconstruction, KL divergence) and optimizer"""
with tf.variable_scope('optimizer') as scope:
# Reconstruction loss
reconstruction = tf.reduce_sum(tf.squared_difference(x, x_hat))
# KL divergence
lam = 1
mu, logvar = tf.split(mu_logvar, num_or_size_splits=2, axis=1)
kl_d = lam * -0.5 * tf.reduce_sum(1.0 + logvar - tf.square(mu) - tf.exp(logvar))
# Total loss
loss = reconstruction + kl_d
# ADAM optimizer
train_step = tf.train.AdamOptimizer().minimize(loss)
return train_step, reconstruction, kl_d
```
## Visualization
It'll be nice to visualize the reconstructions that our model generates to see what it learns. This helper function plots the original inputs in one column and the reconstructions next to them in another column. I also may or may not have stolen it from Alex Lew, who included it in his [GAN notebook (03B)](https://github.com/kevinjliang/Duke-Tsinghua-MLSS-2017/blob/master/03B_Generative_Adversarial_Network.ipynb)...
```
def visualize_row(image, reconstruction, img_width=28, cmap='gray'):
"""
    Takes in a tensor of images of given width, and displays the originals and
    their reconstructions side by side, using `cmap` to map from numbers to colors.
"""
fig, ax = plt.subplots(1, 2)
image = np.reshape(image, [-1, img_width])
reconstruction = np.reshape(reconstruction, [-1, img_width])
ax[0].imshow(np.clip(image, 0, 1), cmap=cmap)
ax[1].imshow(np.clip(reconstruction, 0, 1), cmap=cmap)
plt.show()
```
## Define the graph and train
All of the functions we've written thus far are just that: functions. We still need to call them to assemble our TensorFlow computation graph. At this point, this should be becoming familiar.
One of the small differences is the inclusion of `tf.reset_default_graph()`, added to remedy a small, unfortunate side effect of using Jupyter and TensorFlow in conjunction, but you don't have to worry about it too much to understand the model. A more detailed explanation if you're interested below [1].
```
# Reset the graph
tf.reset_default_graph()
# Define input placeholder
x = tf.placeholder(tf.float32,[None, 784], name='x')
# Define VAE graph
with tf.variable_scope('encoder'):
mu_logvar = encoder(x)
with tf.variable_scope('decoder'):
x_hat = decoder(mu_logvar)
# Optimization
with tf.variable_scope('unlabeled') as scope:
train_step_unlabeled = optimizer(x_hat, x, mu_logvar)
```
<sub>*[1] The primary purpose of TensorFlow is to construct a computation graph connecting Tensors and operations. Each of these nodes must be assigned a unique name; if the user does not specify one, a unique name is automatically generated, like 'Placeholder_2', with the number at the end incrementing each time you create a new node of that type. Attempting to create a node with a name already found in the graph raises an error.*</sub>
<sub>*So how can this be problematic? In the Coding Environments notebook ([00B](https://github.com/kevinjliang/Duke-Tsinghua-MLSS-2017/blob/master/00B_Coding_Environments.ipynb)), it was mentioned that code from previously run cells persists. As such, if we're programming interactively and want to rebuild our graph after some updates, the new updated nodes we want to add collide with the names from our previous run, throwing an error. Why didn't we have to worry about this before? In the past, we haven't been naming our variables, so TensorFlow has been giving the nodes new unique names every time we update the graph and adding them to the collection of nodes from previous runs; the old nodes are never called, so they just sit there. However, TF-Slim does name the variables it generates, thus causing the problem. We can solve this by creating a new graph object before we define our computation graph, so every time we want to make modifications to the graph, we start anew.*</sub>
<sub>*If you're confused by that explanation, I wouldn't worry about it. It's not necessary for the program to run. It's there so we can re-run the cell defining the computation graph without restarting the entire kernel to clear memory of previous variables. In a traditionally written Python program (i.e. not IPython), you wouldn't need to do this.*</sub>
For training, we'll stay simple and train for 20000 iterations, visualizing our results with 5 digits from the validation set after every 1000 minibatches. Notice that this model is completely unsupervised: we never include the digit labels at any point in the process. Within a few thousand iterations, the model should start producing reasonable looking results:
```
with tf.Session() as sess:
# Initialize all variables
sess.run(tf.global_variables_initializer())
# Train VAE model
for i in range(20000):
batch = mnist.train.next_batch(100)
sess.run(train_step_unlabeled, feed_dict={x: batch[0]}) # No labels
# Visualize reconstructions every 1000 iterations
if i % 1000 == 0:
batch = mnist.validation.next_batch(5)
reconstructions = sess.run(x_hat, feed_dict={x: batch[0]})
print("Iteration {0}:".format(i))
visualize_row(batch[0], reconstructions)
```
# Katz Centrality
In this notebook, we will compute the Katz centrality of each vertex in our test dataset using both cuGraph and NetworkX. Additionally, NetworkX also contains a NumPy implementation that will be used. The NetworkX and cuGraph processes will be interleaved so that each step can be compared.
Notebook Credits
* Original Authors: Bradley Rees
* Created: 10/15/2019
* Last Edit: 08/16/2020
RAPIDS Versions: 0.14
Test Hardware
* GV100 32G, CUDA 10.2
## Introduction
Katz centrality is a measure of the relative importance of a vertex within the graph based on measuring the influence across the total number of walks between vertex pairs.
<img src="https://latex.codecogs.com/gif.latex?C_{katz}(i)&space;=&space;\sum_{k=1}^{\infty}&space;\sum_{j=1}^{n}&space;\alpha&space;^k(A^k)_{ji}" title="C_{katz}(i) = \sum_{k=1}^{\infty} \sum_{j=1}^{n} \alpha ^k(A^k)_{ji}" />
See [Katz on Wikipedia](https://en.wikipedia.org/wiki/Katz_centrality) for more details on the algorithm.
To compute the Katz centrality scores for a graph in cuGraph we use:<br>
__df = cugraph.katz_centrality(G,alpha=0.1, max_iter=100, tol=1.0e-6, nstart=None, normalized=True)__
G: cugraph.Graph object
alpha: float, Attenuation factor. default is 0.1
max_iter: int, The maximum number of iterations before an answer is returned.
This can be used to limit the execution time and do an early exit
before the solver reaches the convergence tolerance. If this value
is lower or equal to 0 cuGraph will use the default value, which is 100
    tol: float, Set the tolerance of the approximation; this parameter should be a small
magnitude value. The lower the tolerance the better the approximation. If
this value is 0.0f, cuGraph will use the default value which is 0.00001.
Setting too small a tolerance can lead to non-convergence due to numerical
roundoff. Usually values between 0.01 and 0.00001 are acceptable.
nstart:cuDataFrame, GPU Dataframe containing the initial guess for katz centrality.
Default is None
normalized:bool, If True normalize the resulting katz centrality values.
Default is True
Returns:
df: a cudf.DataFrame object with two columns:
df['vertex']: The vertex identifier for the vertex
df['katz_centrality']: The Katz centrality score for the vertex
The value of _alpha_ should be<br>
<img src="https://latex.codecogs.com/gif.latex?\alpha&space;<&space;\frac{1}{\lambda&space;_{max}&space;}" title="\alpha < \frac{1}{\lambda _{max} }" />
Currently the user is responsible for setting alpha appropriately.
### _NOTICE_
There is a difference between how cuGraph and how NetworkX computes the Katz centrality score. That difference leads to the scores not matching. cuGraph does not currently support the 'beta' and 'weight' parameters as seen in the corresponding networkX call. The cuGraph implementation is based on a relaxed version of Katz defined by Foster with a reduced computational complexity of O(n+m)
Foster, K.C., Muth, S.Q., Potterat, J.J. et al.
Computational & Mathematical Organization Theory (2001) 7: 275.
https://doi.org/10.1023/A:1013470632383
#### Some notes about vertex IDs...
* The current version of cuGraph requires that vertex IDs be representable as 32-bit integers, meaning graphs currently can contain at most 2^32 unique vertex IDs. However, this limitation is being actively addressed and a version of cuGraph that accommodates more than 2^32 vertices will be available in the near future.
* cuGraph will automatically renumber graphs to an internal format consisting of a contiguous series of integers starting from 0, and convert back to the original IDs when returning data to the caller. If the vertex IDs of the data are already a contiguous series of integers starting from 0, the auto-renumbering step can be skipped for faster graph creation times.
* To skip auto-renumbering, set the `renumber` boolean arg to `False` when calling the appropriate graph creation API (eg. `G.from_cudf_edgelist(gdf_r, source='src', destination='dst', renumber=False)`).
* For more advanced renumbering support, see the examples in `structure/renumber.ipynb` and `structure/renumber-2.ipynb`
### Test Data
We will be using the Zachary Karate club dataset
*W. W. Zachary, An information flow model for conflict and fission in small groups, Journal of
Anthropological Research 33, 452-473 (1977).*

Because the test data has vertex IDs starting at 1, the auto-renumber feature of cuGraph (mentioned above) will be used so the starting vertex ID is zero for maximum efficiency. The resulting data will then be auto-unrenumbered, making the entire renumbering process transparent to users.
### Prep
```
# Import needed libraries
import cugraph
import cudf
# NetworkX libraries
import networkx as nx
```
### Some Prep
```
# define the parameters
max_iter = 100 # The maximum number of iterations
tol = 0.00001 # tolerance
# Define the path to the test data
datafile='../data/karate-data.csv'
```
### Read in the data - GPU
cuGraph depends on cuDF for data loading and the initial Dataframe creation
The data file contains an edge list, which represents the connection of a vertex to another. The `source` to `destination` pairs is in what is known as Coordinate Format (COO). In this test case, the data is just two columns. However a third, `weight`, column is also possible
```
gdf = cudf.read_csv(datafile, delimiter='\t', names=['src', 'dst'], dtype=['int32', 'int32'] )
```
### Create a Graph
```
# create a Graph using the source (src) and destination (dst) vertex pairs from the Dataframe
G = cugraph.Graph()
G.from_cudf_edgelist(gdf, source='src', destination='dst')
# compute degree and get the max
degree = G.degrees()
lamda = degree['out_degree'].max()
print("The max degree is " + str(lamda))
```
### Call the Katz algorithm
```
alpha = 1 / lamda
# Call cugraph.katz_centrality to get the Katz scores
gdf_katz = cugraph.katz_centrality(G, alpha=alpha)
```
_It was that easy!_
----
Let's now look at the results
```
# Find the most important vertex using the scores
# This method should only be used for small graphs
def find_top_scores(_df) :
m = _df['katz_centrality'].max()
return _df.query('katz_centrality >= @m')
top_df = find_top_scores(gdf_katz)
top_df
# let's sort the data and look at the top 5 vertices
gdf_katz.sort_values(by='katz_centrality', ascending=False).head(5)
```
---
## Now compute using NetworkX
```
# Read the data; this also creates a NetworkX Graph
file = open(datafile, 'rb')
Gnx = nx.read_edgelist(file)
k_nx = nx.katz_centrality(Gnx, alpha=alpha, max_iter=max_iter, tol=tol)
k_nx_s = sorted(((value, key) for (key,value) in k_nx.items()), reverse=True)
k_nx_s[:5]
```
As mentioned, the scores are different but the ranking is the same.
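A hypothetical sanity check of that claim (assuming the string vertex labels read by NetworkX convert cleanly to the integer IDs used by cuGraph):
```
# compare the top-5 vertices from both libraries
top_cu = gdf_katz.sort_values(by='katz_centrality', ascending=False).head(5)
top_cu_ids = sorted(top_cu['vertex'].to_pandas().tolist())
top_nx_ids = sorted(int(key) for _, key in k_nx_s[:5])
print(top_cu_ids, top_nx_ids)
```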
```
# The Numpy version
k_nx_mp = nx.katz_centrality_numpy(Gnx, alpha=alpha)
sorted(((value, key) for (key,value) in k_nx_mp.items()), reverse=True)[:5]
```
___
Copyright (c) 2019-2020, NVIDIA CORPORATION.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
___
# Prepare and Deploy a TensorFlow Model to AI Platform for Online Serving
This Notebook demonstrates how to prepare a TensorFlow 2.x model and deploy it for serving with AI Platform Prediction. This example uses the pretrained [ResNet V2 101](https://tfhub.dev/google/imagenet/resnet_v2_101/classification/4) image classification model from [TensorFlow Hub](https://tfhub.dev/) (TF Hub).
The Notebook covers the following steps:
1. Downloading and running the ResNet module from TF Hub
2. Creating serving signatures for the module
3. Exporting the model as a SavedModel
4. Deploying the SavedModel to AI Platform Prediction
5. Validating the deployed model
## Setup
This Notebook was tested on **AI Platform Notebooks** using the standard TF 2.2 image.
### Import libraries
```
import base64
import os
import json
import requests
import time
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
import matplotlib.pyplot as plt
from typing import List, Optional, Text, Tuple
```
### Configure GCP environment settings
```
PROJECT_ID = '[your-google-project-id]' # Set your project Id
BUCKET = '[your-bucket-name]' # Set your bucket name Id
REGION = '[your-region]' # Set your region for deploying the model
MODEL_NAME = 'resnet_classifier'
MODEL_VERSION = 'v1'
GCS_MODEL_LOCATION = 'gs://{}/models/{}/{}'.format(BUCKET, MODEL_NAME, MODEL_VERSION)
THUB_MODEL_HANDLE = 'https://tfhub.dev/google/imagenet/resnet_v2_101/classification/4'
IMAGENET_LABELS_URL = 'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt'
IMAGES_FOLDER = 'test_images'
!gcloud config set project $PROJECT_ID
```
### Create a local workspace
```
LOCAL_WORKSPACE = '/tmp/workspace'
if tf.io.gfile.exists(LOCAL_WORKSPACE):
print("Removing previous workspace artifacts...")
tf.io.gfile.rmtree(LOCAL_WORKSPACE)
print("Creating a new workspace...")
tf.io.gfile.makedirs(LOCAL_WORKSPACE)
```
## 1. Loading and Running the ResNet Module
### 1.1. Download and instantiate the model
```
os.environ["TFHUB_DOWNLOAD_PROGRESS"] = 'True'
local_savedmodel_path = hub.resolve(THUB_MODEL_HANDLE)
print(local_savedmodel_path)
!ls -la {local_savedmodel_path}
model = hub.load(THUB_MODEL_HANDLE)
```
The expected input to most TF Hub TF2 image classification models, including ResNet 101, is a rank 4 tensor conforming to the following tensor specification: `tf.TensorSpec([None, height, width, 3], tf.float32)`. For the ResNet 101 model, the expected image size is `height x width = 224 x 224`. The color values for all channels are expected to be normalized to the [0, 1] range.
The output of the model is a batch of logits vectors. The indices into the logits are the `num_classes = 1001` classes from the ImageNet dataset. The mapping from indices to class labels can be found in the [labels file](https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt) with class 0 for "background", followed by 1000 actual ImageNet classes.
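As a quick smoke test of that contract (a sketch, not part of the original walkthrough), we can push a random batch through the model and check the output shape:
```
# float32 values in [0, 1], shaped [batch, 224, 224, 3]
dummy_batch = tf.random.uniform([1, 224, 224, 3], minval=0.0, maxval=1.0)
print(model(dummy_batch).shape)  # expected: (1, 1001)
```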
We will now test the model on a couple of JPEG images.
### 1.2. Display sample images
```
image_list = [tf.io.read_file(os.path.join(IMAGES_FOLDER, image_path))
for image_path in os.listdir(IMAGES_FOLDER)]
ncolumns = len(image_list) if len(image_list) < 4 else 4
nrows = int(len(image_list) // ncolumns)
fig, axes = plt.subplots(nrows=nrows, ncols=ncolumns, figsize=(10,10))
for axis, image in zip(axes.flat[0:], image_list):
decoded_image = tf.image.decode_image(image)
axis.set_title(decoded_image.shape)
axis.imshow(decoded_image.numpy())
```
### 1.3. Preprocess the testing images
The images need to be preprocessed to conform to the format expected by the ResNet101 model.
```
def _decode_and_scale(image, size):
image = tf.image.decode_image(image, expand_animations=False)
image_height = image.shape[0]
image_width = image.shape[1]
crop_size = tf.minimum(image_height, image_width)
offset_height = ((image_height - crop_size) + 1) // 2
offset_width = ((image_width - crop_size) + 1) // 2
image = tf.image.crop_to_bounding_box(image, offset_height, offset_width, crop_size, crop_size)
image = tf.cast(tf.image.resize(image, [size, size]), tf.uint8)
return image
size = 224
raw_images = tf.stack(image_list)
preprocessed_images = tf.map_fn(lambda x: _decode_and_scale(x, size), raw_images, dtype=tf.uint8)
preprocessed_images = tf.image.convert_image_dtype(preprocessed_images, tf.float32)
print(preprocessed_images.shape)
```
### 1.4. Run inference
```
predictions = model(preprocessed_images)
predictions
```
The model returns a batch of logits vectors. This is not a very user-friendly output, so we will convert it to a list of ImageNet class labels.
```
labels_path = tf.keras.utils.get_file(
'ImageNetLabels.txt',
IMAGENET_LABELS_URL)
imagenet_labels = np.array(open(labels_path).read().splitlines())
```
We will display the 5 highest ranked labels for each image
```
for prediction in list(predictions):
decoded = imagenet_labels[np.argsort(prediction.numpy())[::-1][:5]]
print(list(decoded))
```
## 2. Create Serving Signatures
The inputs and outputs of the model as used during model training may not be optimal for serving. For example, in a typical training pipeline, feature engineering is performed as a separate step preceding model training and hyperparameter tuning. When serving the model, it may be more optimal to embed the feature engineering logic into the serving interface rather than require a client application to preprocess data.
The ResNet V2 101 model from TF Hub is optimized for recomposition and fine tuning. Since there are no serving signatures in the model's metadata, it cannot be served with TF Serving as is.
```
list(model.signatures)
```
To make it servable, we need to add a serving signature(s) describing the inference method(s) of the model.
We will add two signatures:
1. **The default signature** - This will expose the default predict method of the ResNet101 model.
2. **Prep/post-processing signature** - Since the expected inputs to this interface require a relatively complex image preprocessing to be performed by a client, we will also expose an alternative signature that embeds the preprocessing and postprocessing logic and accepts raw unprocessed images and returns the list of ranked class labels and associated label probabilities.
The signatures are created by defining a custom module class derived from the `tf.Module` base class that encapsulates our ResNet model and extends it with a method implementing the image preprocessing and output postprocessing logic. The default method of the custom module is mapped to the default method of the base ResNet module to maintain the analogous interface.
The custom module will be exported as `SavedModel` that includes the original model, the preprocessing logic, and two serving signatures.
This technique can be generalized to other scenarios where you need to extend a TensorFlow model and you have access to the serialized `SavedModel` but you don't have access to the Python code implementing the model.
#### 2.1. Define the custom serving module
```
LABELS_KEY = 'labels'
PROBABILITIES_KEY = 'probabilities'
NUM_LABELS = 5
class ServingModule(tf.Module):
"""
A custom tf.Module that adds image preprocessing and output post processing to
a base TF 2 image classification model from TF Hub.
"""
def __init__(self, base_model, input_size, output_labels):
super(ServingModule, self).__init__()
self._model = base_model
self._input_size = input_size
self._output_labels = tf.constant(output_labels, dtype=tf.string)
def _decode_and_scale(self, raw_image):
"""
Decodes, crops, and resizes a single raw image.
"""
image = tf.image.decode_image(raw_image, dtype=tf.dtypes.uint8, expand_animations=False)
image_shape = tf.shape(image)
image_height = image_shape[0]
image_width = image_shape[1]
crop_size = tf.minimum(image_height, image_width)
offset_height = ((image_height - crop_size) + 1) // 2
offset_width = ((image_width - crop_size) + 1) // 2
image = tf.image.crop_to_bounding_box(image, offset_height, offset_width, crop_size, crop_size)
image = tf.image.resize(image, [self._input_size, self._input_size])
image = tf.cast(image, tf.uint8)
return image
def _preprocess(self, raw_inputs):
"""
Preprocesses raw inputs as sent by the client.
"""
# A mitigation for https://github.com/tensorflow/tensorflow/issues/28007
with tf.device('/cpu:0'):
images = tf.map_fn(self._decode_and_scale, raw_inputs, dtype=tf.uint8)
images = tf.image.convert_image_dtype(images, tf.float32)
return images
def _postprocess(self, model_outputs):
"""
Postprocesses outputs returned by the base model.
"""
probabilities = tf.nn.softmax(model_outputs)
indices = tf.argsort(probabilities, axis=1, direction='DESCENDING')
return {
LABELS_KEY: tf.gather(self._output_labels, indices, axis=-1)[:,:NUM_LABELS],
PROBABILITIES_KEY: tf.sort(probabilities, direction='DESCENDING')[:,:NUM_LABELS]
}
@tf.function(input_signature=[tf.TensorSpec([None, 224, 224, 3], tf.float32)])
def __call__(self, x):
"""
A pass-through to the base model.
"""
return self._model(x)
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def predict_labels(self, raw_images):
"""
Preprocesses inputs, calls the base model
and postprocesses outputs from the base model.
"""
# Call the preprocessing handler
images = self._preprocess(raw_images)
# Call the base model
logits = self._model(images)
# Call the postprocessing handler
outputs = self._postprocess(logits)
return outputs
serving_module = ServingModule(model, 224, imagenet_labels)
```
#### 2.2. Test the custom serving module
```
predictions = serving_module.predict_labels(raw_images)
predictions
```
## 3. Save the custom serving module as `SavedModel`
```
model_path = os.path.join(LOCAL_WORKSPACE, MODEL_NAME, MODEL_VERSION)
default_signature = serving_module.__call__.get_concrete_function()
preprocess_signature = serving_module.predict_labels.get_concrete_function()
signatures = {
'serving_default': default_signature,
'serving_preprocess': preprocess_signature
}
tf.saved_model.save(serving_module, model_path, signatures=signatures)
```
### 3.1. Inspect the `SavedModel`
```
!saved_model_cli show --dir {model_path} --tag_set serve --all
```
### 3.2. Test loading and executing the `SavedModel`
```
model = tf.keras.models.load_model(model_path)
model.predict_labels(raw_images)
```
## 4. Deploy the `SavedModel` to AI Platform Prediction
### 4.1. Copy the `SavedModel` to GCS
```
!gsutil cp -r {model_path} {GCS_MODEL_LOCATION}
!gsutil ls {GCS_MODEL_LOCATION}
```
### 4.2 Create a model in AI Platform Prediction
```
!gcloud ai-platform models create {MODEL_NAME} \
--project {PROJECT_ID} \
--regions {REGION}
!gcloud ai-platform models list --project {PROJECT_ID}
```
### 4.3 Create a model version
```
MACHINE_TYPE='n1-standard-8'
ACCELERATOR='count=1,type=nvidia-tesla-p4'
!gcloud beta ai-platform versions create {MODEL_VERSION} \
--model={MODEL_NAME} \
--origin={GCS_MODEL_LOCATION} \
--runtime-version=2.1 \
--framework=TENSORFLOW \
--python-version=3.7 \
--machine-type={MACHINE_TYPE} \
--accelerator={ACCELERATOR} \
--project={PROJECT_ID}
!gcloud ai-platform versions list --model={MODEL_NAME} --project={PROJECT_ID}
```
## 5. Validate the Deployed Model Version to AI Platform Prediction
```
import googleapiclient.discovery
service = googleapiclient.discovery.build('ml', 'v1')
name = 'projects/{}/models/{}/versions/{}'.format(PROJECT_ID, MODEL_NAME, MODEL_VERSION)
print("Service name: {}".format(name))
def caip_predict(instances, signature_name='serving_default'):
request_body={
'signature_name': signature_name,
'instances': instances}
response = service.projects().predict(
name=name,
body=request_body
).execute()
if 'error' in response:
raise RuntimeError(response['error'])
outputs = response['predictions']
return outputs
signature_name = 'serving_preprocess'
encoded_images = [{'b64': base64.b64encode(image.numpy()).decode('utf-8')}
for image in image_list]
caip_predict(encoded_images, signature_name=signature_name)
```
## License
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at [https://www.apache.org/licenses/LICENSE-2.0](https://www.apache.org/licenses/LICENSE-2.0)
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
# Part 12: Train an Encrypted NN on Encrypted Data
In this notebook, we're going to use all the techniques we've learned thus far to perform neural network training (and prediction) while both the model and the data are encrypted.
In particular, we present our custom Autograd engine which works on encrypted computations.
Authors:
- Andrew Trask - Twitter: [@iamtrask](https://twitter.com/iamtrask)
- Jason Paumier - Github: [@Jasopaum](https://github.com/Jasopaum)
- Théo Ryffel - Twitter: [@theoryffel](https://twitter.com/theoryffel)
# Step 1: Create Workers and Toy Data
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import syft as sy
# Set everything up
hook = sy.TorchHook(torch)
alice = sy.VirtualWorker(id="alice", hook=hook)
bob = sy.VirtualWorker(id="bob", hook=hook)
james = sy.VirtualWorker(id="james", hook=hook)
# A Toy Dataset
data = torch.tensor([[0,0],[0,1],[1,0],[1,1.]])
target = torch.tensor([[0],[0],[1],[1.]])
# A Toy Model
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(2, 2)
self.fc2 = nn.Linear(2, 1)
def forward(self, x):
x = self.fc1(x)
x = F.relu(x)
x = self.fc2(x)
return x
model = Net()
```
# Step 2: Encrypt the Model and Data
Encryption here comes in two steps. Since Secure Multi-Party Computation only works on integers, in order to operate over numbers with decimal points (such as weights and activations), we need to encode all of our numbers using Fixed Precision, which will give us several bits of decimal precision. We do this by calling .fix_precision().
We can then call .share() as we have for other demos, which will encrypt all of the values by sharing them between Alice and Bob. Note that we also set requires_grad to True, which also adds a special autograd method for encrypted data. Indeed, since Secure Multi-Party Computation doesn't work on float values, we can't use the usual PyTorch autograd. Therefore, we need to add a special AutogradTensor node that computes the gradient graph for backpropagation. You can print any of these elements to see that it includes an AutogradTensor.
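As a small illustration of the fixed-precision step (a sketch; the exact printed wrapper depends on the PySyft version):
```
# with the default precision_fractional=3, 0.1234 is stored internally as
# round(0.1234 * 10**3) = 123
x = torch.tensor([0.1234]).fix_precision()
print(x)
```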
```
# We encode everything
data = data.fix_precision().share(bob, alice, crypto_provider=james, requires_grad=True)
target = target.fix_precision().share(bob, alice, crypto_provider=james, requires_grad=True)
model = model.fix_precision().share(bob, alice, crypto_provider=james, requires_grad=True)
print(data)
```
# Step 3: Train
And now we can train using simple tensor logic.
```
opt = optim.SGD(params=model.parameters(),lr=0.1).fix_precision()
for iter in range(20):
# 1) erase previous gradients (if they exist)
opt.zero_grad()
# 2) make a prediction
pred = model(data)
# 3) calculate how much we missed
loss = ((pred - target)**2).sum()
# 4) figure out which weights caused us to miss
loss.backward()
# 5) change those weights
opt.step()
# 6) print our progress
print(loss.get().float_precision())
```
The loss indeed decreased!
## Impact of fixed precision
You might wonder how encrypting everything impacts the decreasing loss. Actually, because the theoretical computation is the same, the numbers are very close to non-encrypted training. You can verify this by running the same example without encryption and with a deterministic initialisation of the model like this one in the model `__init__`:
```
with torch.no_grad():
self.fc1.weight.set_(torch.tensor([[ 0.0738, -0.2109],[-0.1579, 0.3174]], requires_grad=True))
self.fc1.bias.set_(torch.tensor([0.,0.1], requires_grad=True))
self.fc2.weight.set_(torch.tensor([[-0.5368, 0.7050]], requires_grad=True))
self.fc2.bias.set_(torch.tensor([-0.0343], requires_grad=True))
```
The slight difference you might observe is due to the rounding of values performed while transforming to fixed precision. The default `precision_fractional` is 3 and if you get it down to 2 the divergence with clear text training increases, while it reduces if you choose `precision_fractional = 4`.
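For example, a hypothetical variation of the encoding step with a finer precision (`data_plain` stands for the clear-text tensor before encoding and is not defined above):
```
# assumption: fix_precision accepts precision_fractional in this PySyft version
data = data_plain.fix_precision(precision_fractional=4).share(
    bob, alice, crypto_provider=james, requires_grad=True)
```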
# Congratulations!!! - Time to Join the Community!
Congratulations on completing this notebook tutorial! If you enjoyed this and would like to join the movement toward privacy preserving, decentralized ownership of AI and the AI supply chain (data), you can do so in the following ways!
### Star PySyft on Github
The easiest way to help our community is just by starring the Repos! This helps raise awareness of the cool tools we're building.
- [Star PySyft](https://github.com/OpenMined/PySyft)
### Join our Slack!
The best way to keep up to date on the latest advancements is to join our community! You can do so by filling out the form at [http://slack.openmined.org](http://slack.openmined.org)
### Join a Code Project!
The best way to contribute to our community is to become a code contributor! At any time you can go to PySyft Github Issues page and filter for "Projects". This will show you all the top level Tickets giving an overview of what projects you can join! If you don't want to join a project, but you would like to do a bit of coding, you can also look for more "one off" mini-projects by searching for github issues marked "good first issue".
- [PySyft Projects](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3AProject)
- [Good First Issue Tickets](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
### Donate
If you don't have time to contribute to our codebase, but would still like to lend support, you can also become a Backer on our Open Collective. All donations go toward our web hosting and other community expenses such as hackathons and meetups!
[OpenMined's Open Collective Page](https://opencollective.com/openmined)
```
import numpy as np
import matplotlib.pyplot as plt
from sympy import S, solve
import plotutils as pu
%matplotlib inline
```
# numbers on a plane
Numbers can be a lot more interesting than just a value if you're just willing to shift your perspective a bit.
# integers
When we are dealing with integers we are dealing with all the whole numbers, zero and all the negative whole numbers. In math this set of numbers is often denoted with the symbol $\mathbb{Z}$. This is a *countable infinite* set and even though the numbers are a bit basic we can try to get some more insight into the structure of numbers.
# squares
If we take a number and multiply it with itself we get a *square number*. These are called square because we can easily plot them as squares in a plot.
```
def plot_rect(ax, p, fmt='b'):
x, y = p
ax.plot([0, x], [y, y], fmt) # horizontal line
ax.plot([x, x], [0, y], fmt) # vertical line
with plt.xkcd():
fig, axes = plt.subplots(1, figsize=(4, 4))
pu.setup_axes(axes, xlim=(-1, 4), ylim=(-1, 4))
for x in [1,2,3]: plot_rect(axes, (x, x))
```
However, what happens when we have a non-square number such as $5$? We can't easily plot this as two equal lengths; we'll have to turn it into a rectangle of $1 \times 5$ or $5 \times 1$.
```
with plt.xkcd():
fig, axes = plt.subplots(1, figsize=(4, 4))
pu.setup_axes(axes, xlim=(-1, 6), ylim=(-1, 6))
for x, y in [(1, 5), (5, 1)]:
plot_rect(axes, (x, y))
```
The first thing we notice is that we can take one thing and project it as two things. The fact that this happens is perfectly natural because we decided to take a single value and project it in two dimensions in a way that suits us. Nothing really weird about it, but it's still worth thinking about for a moment. Apparently the same value can be represented in more than one valid way, and the way we got there doesn't matter: we could either take the rectangle standing up or the one lying down.
Another interesting question to ask is whether we can get on the other sides of the axes. So far we have been happily plotting in the positive quadrant where $0 \le x$ and $0 \le y$ but what about the other three? Are they even reachable using just integer numbers?
We could make up some factor like $-1 \times -5$ and that would put us in the lower left. That would be equal to the same rectangles projected in the top right. And negative numbers would be either in the top left or bottom right. Although trivial, this is interesting because now we find that if we project a single dimension into two dimensions we sometimes get 1 possibility, sometimes 2, and usually 4.
If we project zero we just get zero. However if we project $1$ we get either $1 \times 1$ or $-1 \times -1$. If we project $5$ we get $5 \times 1$, $1 \times 5$, $-5 \times -1$ and $-1 \times -5$.
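We can enumerate these projections with a small sketch (`integer_projections` is a hypothetical helper, not part of the plotting code above):
```
def integer_projections(n):
    """All ordered integer pairs (a, b) with a * b == n, for n != 0."""
    pairs = []
    for a in range(-abs(n), abs(n) + 1):
        if a != 0 and n % a == 0:
            pairs.append((a, n // a))
    return pairs

print(integer_projections(5))  # 4 possibilities: (-5, -1), (-1, -5), (1, 5), (5, 1)
print(integer_projections(1))  # 2 possibilities: (-1, -1), (1, 1)
```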
```
#loading dataset
import pandas as pd
#visualisation
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
# data preprocessing
from sklearn.preprocessing import StandardScaler
# data splitting
from sklearn.model_selection import train_test_split
# data modeling
from sklearn.metrics import confusion_matrix,accuracy_score,roc_curve,classification_report,auc
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
```
### Importing the Dataset
```
df = pd.read_csv("Lumpy skin disease data.csv")
df.head()
df.describe()
df.info()
df.isna().sum(axis=0)
df.columns
```
### Dropping Unnecessary Columns
```
df.drop(columns=['region','country','reportingDate','X5_Ct_2010_Da','X5_Bf_2010_Da'],inplace=True)
df.head()
df.corr()
```
### Exploratory Data Analysis
```
plt.figure(figsize=(3,3),dpi=150)
plt.style.use('dark_background')
sns.countplot(x='lumpy', data = df)
plt.xlabel('Lumpiness classes')
plt.ylabel('count of each class')
plt.title('Lumpiness class distribution')
plt.figure(figsize=(15, 15))
heatmap = sns.heatmap(df.corr(), vmin= -1, vmax = 1, annot=True)
heatmap.set_title('Correlation Heatmap', fontdict={'fontsize':12})
```
### Partitioning the dataset into training and test sets
```
X=df.iloc[:,:-1]
y=df.iloc[:,-1]
print("//Independent features//")
print(X.head())
print("\n\n//Dependent feature//")
print(y.head())
```
### Train Test Split
```
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.3,random_state=0)
```
### Feature Scaling
```
scaler=StandardScaler()
X_train=scaler.fit_transform(X_train)
X_test=scaler.transform(X_test)
# Logistic Regression
lr=LogisticRegression()
lr_mdl=lr.fit(X_train,y_train)
lr_pred=lr.predict(X_test)
lr_con_matrix=confusion_matrix(y_test,lr_pred)
lr_acc=accuracy_score(y_test,lr_pred)
print("Confusion Matrix",'\n',lr_con_matrix)
print('\n')
print("Accuracy of Logistic Regression: ",lr_acc*100,'\n')
print(classification_report(y_test,lr_pred))
#Random Forest Classfier
rf = RandomForestClassifier()
rf.fit(X_train,y_train)
rf_pred = rf.predict(X_test)
rf_con_matrix = confusion_matrix(y_test, rf_pred)
rf_acc = accuracy_score(y_test, rf_pred)
print("Confusion Matrix\n",rf_con_matrix)
print("\n")
print("Accuracy of Random Forest:",rf_acc*100,'\n')
print(classification_report(y_test,rf_pred))
#DecisionTreeClassifier
dt = DecisionTreeClassifier()
dt.fit(X_train, y_train)
dt_pred = dt.predict(X_test)
dt_con_matrix = confusion_matrix(y_test, dt_pred)
dt_acc = accuracy_score(y_test, dt_pred)
print("Confusion Matrix\n",dt_con_matrix)
print("\n")
print("Accuracy of Decision Tree Classifier:",dt_acc*100,'\n')
print(classification_report(y_test,dt_pred))
y_score1 = lr.predict_proba(X_test)[:,1]
y_score2 = rf.predict_proba(X_test)[:,1]
y_score3 = dt.predict_proba(X_test)[:,1]
false_positive_rate1, true_positive_rate1, threshold1 = roc_curve(y_test, y_score1)
false_positive_rate2, true_positive_rate2, threshold2 = roc_curve(y_test, y_score2)
false_positive_rate3, true_positive_rate3, threshold3 = roc_curve(y_test, y_score3)
plt.figure(figsize=(5,5),dpi=150)
plt.title('Receiver Operating Characteristic (ROC) Curve')
plt.plot(false_positive_rate1,true_positive_rate1, color='red', label = "Logistic Regression")
plt.plot(false_positive_rate2,true_positive_rate2, color='blue', label = "Random Forest")
plt.plot(false_positive_rate3,true_positive_rate3, color='green', label = "Decision Tree")
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],linestyle='--')
plt.axis('tight')
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
mdl_evl = pd.DataFrame({'Model': ['Logistic Regression','Random Forest', 'Decision Tree'], 'Accuracy': [lr_acc*100,rf_acc*100,dt_acc*100]})
mdl_evl
pal=['red','blue','green']
fig, ax = plt.subplots(figsize=(20,10))
sns.barplot(x="Model",y="Accuracy",palette=pal,data=mdl_evl)
plt.title('Model Accuracy')
plt.xlabel('Model')
plt.ylabel('Accuracy')
```
So, according to the accuracy scores, the best model is Random Forest.
# Quantum pipeline using JAX backend
This performs an exact classical simulation.
```
from jax import numpy as np
def read_data(filename):
labels, sentences = [], []
with open(filename) as f:
for line in f:
labels.append([1, 0] if line[0] == '1' else [0, 1])
sentences.append(line[1:].strip())
return np.array(labels), sentences
train_labels, train_data = read_data('datasets/mc_train_data.txt')
dev_labels, dev_data = read_data('datasets/mc_dev_data.txt')
test_labels, test_data = read_data('datasets/mc_test_data.txt')
```
### Create diagrams
```
from lambeq.ccg2discocat import DepCCGParser
reader = DepCCGParser(possible_root_cats=['S[dcl]'])
raw_train_diagrams = reader.sentences2diagrams(train_data)
raw_dev_diagrams = reader.sentences2diagrams(dev_data)
raw_test_diagrams = reader.sentences2diagrams(test_data)
from discopy.rigid import Id
def remove_cups(diagram):
# Remove cups to reduce post-selection in the circuit, for faster execution
diags = []
for box, offset in zip(diagram.boxes, diagram.offsets):
if not box.dom: # word box
diags.insert(offset, box)
else: # cup (the only other type of box in these diagrams)
i = 0
off = offset
while off != len(diags[i].cod) - 1:
assert off > 0
off -= len(diags[i].cod)
i += 1
left, right = diags[i:i+2]
if len(left.cod) == 1:
new_diag = right >> (left.r.dagger() @ Id(right.cod[1:]))
else:
assert len(right.cod) == 1
new_diag = left >> (Id(left.cod[:-1]) @ right.l.dagger())
diags[i:i+2] = [new_diag]
assert len(diags) == 1
return diags[0]
train_diagrams = [remove_cups(diagram) for diagram in raw_train_diagrams]
dev_diagrams = [remove_cups(diagram) for diagram in raw_dev_diagrams]
test_diagrams = [remove_cups(diagram) for diagram in raw_test_diagrams]
train_diagrams[0].draw()
```
### Create circuits
```
from lambeq.circuit import IQPAnsatz
from lambeq.core.types import AtomicType
ansatz = IQPAnsatz({AtomicType.NOUN: 1, AtomicType.SENTENCE: 1},
n_layers=1, n_single_qubit_params=3)
train_circuits = [ansatz(diagram) for diagram in train_diagrams]
dev_circuits = [ansatz(diagram) for diagram in dev_diagrams]
test_circuits = [ansatz(diagram) for diagram in test_diagrams]
train_circuits[0].draw(figsize=(9, 12))
```
### Parameterise
```
from sympy import default_sort_key
all_circuits = train_circuits + dev_circuits + test_circuits
# sort the symbols since they are returned as a set
parameters = sorted(
{s for circ in all_circuits for s in circ.free_symbols},
key=default_sort_key)
from discopy.quantum import Circuit
from discopy.tensor import Tensor
from jax import jit
Tensor.np = np
def normalise(predictions):
# apply smoothing to predictions
predictions = np.abs(predictions) + 1e-9
return predictions / predictions.sum()
def make_pred_fn(circuits):
circuit_fns = [c.lambdify(*parameters) for c in circuits]
def predict(params):
outputs = Circuit.eval(*(c(*params) for c in circuit_fns))
return np.array([normalise(output.array) for output in outputs])
return predict
train_pred_fn = jit(make_pred_fn(train_circuits))
dev_pred_fn = jit(make_pred_fn(dev_circuits))
test_pred_fn = make_pred_fn(test_circuits)
```
### Train
```
from noisyopt import minimizeSPSA
import numpy
def make_cost_fn(pred_fn, labels):
def cost_fn(params, **kwargs):
predictions = pred_fn(params)
cost = -np.sum(labels * np.log(predictions)) / len(labels) # binary cross-entropy loss
costs.append(cost)
acc = np.sum(np.round(predictions) == labels) / len(labels) / 2 # half due to double-counting
accuracies.append(acc)
return cost
costs, accuracies = [], []
return cost_fn, costs, accuracies
train_cost_fn, train_costs, train_accs = make_cost_fn(train_pred_fn, train_labels)
dev_cost_fn, dev_costs, dev_accs = make_cost_fn(dev_pred_fn, dev_labels)
SEED = 0
rng = numpy.random.default_rng(SEED)
x0 = np.array(rng.random(len(parameters)))
numpy.random.seed(SEED)
result = minimizeSPSA(train_cost_fn, x0=x0, a=0.2, c=0.06, niter=80, callback=dev_cost_fn)
```
### Show results
```
import matplotlib.pyplot as plt
fig, ((ax_tl, ax_tr), (ax_bl, ax_br)) = plt.subplots(2, 2, sharex=True, sharey='row', figsize=(10, 6))
ax_tl.set_title('Training set')
ax_tr.set_title('Development set')
ax_bl.set_xlabel('Iterations')
ax_br.set_xlabel('Iterations')
ax_bl.set_ylabel('Accuracy')
ax_tl.set_ylabel('Loss')
colours = iter(plt.rcParams['axes.prop_cycle'].by_key()['color'])
ax_tl.plot(train_costs[1::2], color=next(colours)) # training evaluates twice per iteration
ax_bl.plot(train_accs[1::2], color=next(colours)) # so take every other entry
ax_tr.plot(dev_costs, color=next(colours))
ax_br.plot(dev_accs, color=next(colours))
# print test accuracy
test_cost_fn, _, test_accs = make_cost_fn(test_pred_fn, test_labels)
test_cost_fn(result.x)
print('Test accuracy:', test_accs[0])
```
# PerfForesightConsumerType
```
# Initial imports and notebook setup, click arrow to show
from HARK.ConsumptionSaving.ConsIndShockModel import PerfForesightConsumerType
from HARK.utilities import plotFuncs
from time import time  # note: time.clock was removed in Python 3.8
import matplotlib.pyplot as plt
import numpy as np
mystr = lambda number : "{:.4f}".format(number)
```
The module $\texttt{HARK.ConsumptionSaving.ConsIndShockModel}$ concerns consumption-saving models with idiosyncratic shocks to (non-capital) income. All of the models assume CRRA utility with geometric discounting, no bequest motive, and income shocks that are either fully transitory or fully permanent.
$\texttt{ConsIndShockModel}$ currently includes three models:
1. A very basic "perfect foresight" model with no uncertainty (shocks are zero).
2. A model with risk over transitory and permanent income shocks.
3. The model described in (2), with an interest rate for debt that differs from the interest rate for savings.
This notebook provides documentation for the first of these three models.
$\newcommand{\CRRA}{\rho}$
$\newcommand{\DiePrb}{\mathsf{D}}$
$\newcommand{\PermGroFac}{\Gamma}$
$\newcommand{\Rfree}{\mathsf{R}}$
$\newcommand{\DiscFac}{\beta}$
## Statement of the model
The $\texttt{PerfForesightConsumerType}$ class solves the problem of a consumer with Constant Relative Risk Aversion utility, with risk aversion coefficient ${\CRRA}$:
\begin{equation}
U(C) = \frac{C^{1-\CRRA}}{1-\CRRA},
\end{equation}
who has perfect foresight about everything except whether he will die between the end of period $t$ and the beginning of period $t+1$. Permanent labor income $P_t$ grows from period $t$ to period $t+1$ by factor $\PermGroFac_{t+1}$. The consumer faces no artificial borrowing constraint: He is able to borrow against his entire future stream of income.
At the beginning of period $t$, the consumer has market resources $M_t$ (which includes both market wealth and current income) and must choose how much to consume $C_t$ and how much to retain in a riskless asset $A_t$, which will earn return factor $\Rfree$. The agent's flow of future utility $U(C_{t+n})$ from consumption is geometrically discounted by factor $\DiscFac$ per period. The consumer only experiences future value if he survives, which occurs with probability $1-\DiePrb_{t+1}$.
For parallelism with the treatment of more complicated problems, we write the problem rather elaborately in Bellman form as:
\begin{eqnarray*}
V_t(M_t,P_t) &=& \max_{C_t}~U(C_t) ~+ \DiscFac (1 - \DiePrb_{t+1}) V_{t+1}(M_{t+1},P_{t+1}), \\
& s.t. & \\
A_t &=& M_t - C_t, \\
M_{t+1} &=& \Rfree A_t + Y_{t+1}, \\
Y_{t+1} &=& P_{t+1}, \\
P_{t+1} &=& \PermGroFac_{t+1} P_t.
\end{eqnarray*}
The parameters of the consumer's problem are the coefficient of relative risk aversion $\CRRA$, the intertemporal discount factor $\DiscFac$, an interest factor $\Rfree$, and age-varying sequences of the permanent income growth factor $\PermGroFac_t$ and survival probability $(1 - \DiePrb_t)$. [These lecture notes](http://econ.jhu.edu/people/ccarroll/public/lecturenotes/consumption/PerfForesightCRRA) show that under these assumptions the problem can be transformed into an equivalent problem stated in terms of *normalized* variables (represented in lower case); all real variables are divided by permanent income $P_t$ and value is divided by $P_t^{1-\CRRA}$. The Bellman form of the normalized model (see the lecture notes for details) is:
\begin{eqnarray*}
v_t(m_t) &=& \max_{c_t}~U(c_t) ~+ \DiscFac (1 - \DiePrb_{t+1}) \PermGroFac_{t+1}^{1-\CRRA} v_{t+1}(m_{t+1}), \\
& s.t. & \\
a_t &=& m_t - c_t, \\
m_{t+1} &=& a_t (\Rfree/\PermGroFac_{t+1}) + 1.
\end{eqnarray*}
## Solution method for PerfForesightConsumerType
Because of the assumptions of CRRA utility, no risk other than mortality, and no artificial borrowing constraint, the problem has a closed form solution in which consumption is a linear function of resources, and the utility-inverse of the value function is also linear (that is, $u^{-1}(v)$ is linear in $m$). Details of the mathematical solution of this model can be found in the lecture notes [PerfForesightCRRA](http://econ.jhu.edu/people/ccarroll/public/lecturenotes/consumption/PerfForesightCRRA).
The one period problem for this model is solved by the function $\texttt{solveConsPerfForesight}$, which creates an instance of the class $\texttt{ConsPerfForesightSolver}$. To construct an instance of the class $\texttt{PerfForesightConsumerType}$, several parameters must be passed to this constructor.
## Example parameter values
| Parameter | Description | Code | Example value | Time-varying? |
| :---: | --- | --- | --- | --- |
| $\DiscFac$ |Intertemporal discount factor | $\texttt{DiscFac}$ | $0.96$ | |
| $\CRRA $ |Coefficient of relative risk aversion | $\texttt{CRRA}$ | $2.0$ | |
| $\Rfree$ | Risk free interest factor | $\texttt{Rfree}$ | $1.03$ | |
| $1 - \DiePrb_{t+1}$ |Survival probability | $\texttt{LivPrb}$ | $[0.98]$ | $\surd$ |
|$\PermGroFac_{t+1}$|Permanent income growth factor|$\texttt{PermGroFac}$| $[1.01]$ | $\surd$ |
|$T$| Number of periods in this type's "cycle" |$\texttt{T_cycle}$| $1$ | |
|(none)| Number of times the "cycle" occurs |$\texttt{cycles}$| $0$ | |
Note that the survival probability and income growth factor have time subscripts; likewise, the example values for these parameters are *lists* rather than simply single floats. This is because those parameters are in principle *time-varying*: their values can depend on which period of the problem the agent is in (for example, mortality probability depends on age). All time-varying parameters *must* be specified as lists, even when the model is being solved for an infinite horizon case where in practice the parameter takes the same value in every period.
The last two parameters in the table specify the "nature of time" for this type: the number of (non-terminal) periods in this type's "cycle", and the number of times that the "cycle" occurs. *Every* subclass of $\texttt{AgentType}$ uses these two code parameters to define the nature of time. Here, $\texttt{T_cycle}$ has the value $1$, indicating that there is exactly one period in the cycle, while $\texttt{cycles}$ is $0$, indicating that the cycle is repeated an *infinite* number of times-- it is an infinite horizon model, with the same "kind" of period repeated over and over.
In contrast, we could instead specify a life-cycle model by setting $\texttt{cycles}$ to $1$ and specifying age-varying sequences of income growth and survival probability, with $\texttt{T_cycle}$ set to the number of periods in the life cycle. In all cases, the number of elements in each time-varying parameter should exactly equal $\texttt{T_cycle}$. (A hypothetical example appears after the dictionary below.)
The parameter $\texttt{AgentCount}$ specifies how many consumers there are of this *type*-- how many individuals have these exact parameter values and are *ex ante* homogeneous. This information is not relevant for solving the model, but is needed in order to simulate a population of agents, introducing *ex post* heterogeneity through idiosyncratic shocks. Of course, simulating a perfect foresight model is quite boring, as there are *no* idiosyncratic shocks other than death!
The cell below defines a dictionary that can be passed to the constructor method for $\texttt{PerfForesightConsumerType}$, with the values from the table here.
```
PerfForesightDict = {
# Parameters actually used in the solution method
"CRRA" : 2.0, # Coefficient of relative risk aversion
"Rfree" : 1.03, # Interest factor on assets
"DiscFac" : 0.96, # Default intertemporal discount factor
"LivPrb" : [0.98], # Survival probability
"PermGroFac" :[1.01], # Permanent income growth factor
# Parameters that characterize the nature of time
"T_cycle" : 1, # Number of periods in the cycle for this agent type
"cycles" : 0 # Number of times the cycle occurs (0 --> infinitely repeated)
}
```
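As a hypothetical illustration of the life-cycle configuration mentioned above (the particular growth and survival numbers here are made up for the example):
```
# A made-up 3-period life-cycle variant: cycles=1, T_cycle=3, and every
# time-varying parameter has exactly T_cycle elements
LifeCycleDict = dict(PerfForesightDict,
                     LivPrb = [0.99, 0.98, 0.97],
                     PermGroFac = [1.02, 1.01, 1.00],
                     T_cycle = 3,
                     cycles = 1)
```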
## Inspecting the solution
With the dictionary we have just defined, we can create an instance of $\texttt{PerfForesightConsumerType}$ by passing the dictionary to the class (as if the class were a function). This instance can then be solved by invoking its $\texttt{solve}$ method.
```
PFexample = PerfForesightConsumerType(**PerfForesightDict)
PFexample.cycles = 0
PFexample.solve()
```
The $\texttt{solve}$ method fills in the instance's attribute $\texttt{solution}$ as a time-varying list of solutions to each period of the consumer's problem. In this case, $\texttt{solution}$ will be a list with exactly one instance of the class $\texttt{ConsumerSolution}$, representing the solution to the infinite horizon model we specified.
```
print(PFexample.solution)
```
Each element of $\texttt{solution}$ has a few attributes. To see all of them, we can use the $\texttt{vars}$ built-in function. In particular, the consumption function for each period is stored in the attribute $\texttt{cFunc}$ of the corresponding element of $\texttt{ConsumerType.solution}$, so $\texttt{cFunc}$ is a (time-varying) list of consumption functions by age.
```
print(vars(PFexample.solution[0]))
```
The two most important attributes of a single period solution are the (normalized) consumption function $\texttt{cFunc}$ and the (normalized) value function $\texttt{vFunc}$; the marginal value function $\texttt{vPfunc}$ is also constructed. Let's plot those functions near the lower bound of the permissible state space (the attribute $\texttt{mNrmMin}$ tells us the lower bound of $m_t$ where the consumption function is defined).
```
print('Linear perfect foresight consumption function:')
mMin = PFexample.solution[0].mNrmMin
plotFuncs(PFexample.solution[0].cFunc,mMin,mMin+10.)
print('Perfect foresight value function:')
plotFuncs(PFexample.solution[0].vFunc,mMin+0.1,mMin+10.1)
```
## Solution Method
### Recursive Formula for $\kappa_{t}$
The paper [BufferStockTheory](https://www.econ2.jhu.edu/people/ccarroll/papers/BufferStockTheory/) has a few other results that are used in the solution code. One is [the recursive formula for the MPC](https://www.econ2.jhu.edu/people/ccarroll/papers/BufferStockTheory/#MPCnvrs). Starting with the last period, in which $\kappa_{T}=1$, the inverse MPC's (and therefore the MPC's themselves) can be constructed using the recursive formula:
\begin{align}
\kappa_{t}^{-1} & = & 1 + \kappa_{t+1}^{-1}(\Rfree \DiscFac)^{1/\CRRA}/\Rfree
\end{align}
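To make the recursion concrete, here is a minimal numerical sketch using the example parameter values from the table above, with mortality folded into the discount factor ($\DiscFac \rightarrow \DiscFac \cdot \texttt{LivPrb}$, as in the Bellman equation). The converged value should agree with the $\texttt{MPCmin}$ attribute of the solution:
```
# Iterate the recursive MPC formula backward from kappa_T = 1 (a sketch)
CRRA, Rfree, DiscFac, LivPrb = 2.0, 1.03, 0.96, 0.98
PatFac = (Rfree * DiscFac * LivPrb)**(1.0 / CRRA) / Rfree  # "return patience factor"
kappa_inv = 1.0  # inverse MPC in the terminal period
for t in range(1000):  # iterate far enough back to converge
    kappa_inv = 1.0 + kappa_inv * PatFac
print('Infinite horizon MPC:', 1.0 / kappa_inv)  # compare with PFexample.solution[0].MPCmin
```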
### Consumption Function
For the perfect foresight problem, there is a well-known [analytical solution]( http://econ.jhu.edu/people/ccarroll/public/lecturenotes/consumption/PerfForesightCRRA/#cFuncAnalytical) for the consumption function: Calling $o_{t}$ 'overall wealth' (including market wealth plus human wealth $h_{t}$) and designating the marginal propensity to consume in period $t$ by $\kappa_{t}$:
\begin{align}
\mathrm{c}_{t} & = o_{t}\kappa_{t}
\end{align}
and in our normalized model $o_{t} = m_{t}-1+h_{t}$ (the '-1' term subtracts off the normalized current income of 1 from market resources $m$ which were market wealth plus current income).
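As a minimal sketch of the closed form under the example parameters (infinite horizon, so $h_{t}$ is constant; note that $h = 1/(1-\PermGroFac/\Rfree)$ is normalized human wealth *including* current income, and the MPC is the limit of the recursion above):
```
# Closed-form perfect foresight consumption function (a sketch)
CRRA, Rfree, DiscFac, LivPrb, PermGroFac = 2.0, 1.03, 0.96, 0.98, 1.01
kappa = 1.0 - (Rfree * DiscFac * LivPrb)**(1.0 / CRRA) / Rfree  # infinite-horizon MPC
h = 1.0 / (1.0 - PermGroFac / Rfree)  # normalized human wealth, incl. current income
cFunc_byhand = lambda m: kappa * (m - 1.0 + h)  # c_t = kappa_t * o_t
print(cFunc_byhand(1.0))  # should match PFexample.solution[0].cFunc(1.0)
```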
### Value Function
A convenient feature of the perfect foresight problem is that the value function has a simple [analytical form](http://econ.jhu.edu/people/ccarroll/public/lecturenotes/consumption/PerfForesightCRRA/#vFuncAnalytical):
\begin{align}
\mathrm{v}_{t} & = \mathrm{u}(\mathrm{c}_{t}(m))\kappa_{t}^{-1}\\
&= \mathrm{u}(o_{t} \kappa_{t}) \kappa_{t}^{-1} \\
&= \mathrm{u}(o_{t})\kappa_{t}^{1-\rho} \kappa_{t}^{-1} \\
&= \mathrm{u}(o_{t})\kappa_{t}^{-\rho}
\end{align}
This means that the utility-inverse of the value function, ${\scriptsize \Lambda} \equiv \mathrm{u}^{-1}(\mathrm{v})$, is linear:
\begin{align}
\scriptsize \Lambda_{t} & = o_{t} \kappa_{t}^{-\rho/(1-\rho)}
\end{align}
When uncertainty or liquidity constraints are added to the problem, the ${\scriptsize \Lambda}$ function is no longer linear. But even in these cases, the utility-inverse of the value function is much better behaved (e.g., closer to linear; bounded over any feasible finite range of $m$) than the uninverted function (which, for example, approaches $-\infty$ as $m$ approaches its lower bound).
Our procedure will therefore generically be to construct the inverse value function, and to obtain the value function from it by uninverting. That is, we construct an interpolating approximation of $\scriptsize \Lambda_{t}$ and compute value on-the-fly from
\begin{align}
\mathrm{v}_{t}(m) & = \mathrm{u}({\scriptsize \Lambda_{t}}(m))
\end{align}
In this case, the interpolation is exact, not an approximation: We need only two points to construct a line, so we choose the minimum possible value of normalized market resources, $\texttt{mNrmMin}$, where $o_{t}=0$ so that $c_{t}=0$, and that minimum plus 1, where the inverted value function will have the value $\kappa_{t}^{-\rho/(1-\rho)}$. From these we construct $vFuncNvrs$ as a linear interpolating function (which automatically extrapolates to the whole number line).
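A standalone sketch of this two-point construction (HARK's own solver uses its interpolation classes; parameter values again from the example above):
```
# Build the inverted value function from two points, then "uninvert" (a sketch)
CRRA, Rfree, DiscFac, LivPrb, PermGroFac = 2.0, 1.03, 0.96, 0.98, 1.01
kappa = 1.0 - (Rfree * DiscFac * LivPrb)**(1.0 / CRRA) / Rfree
h = 1.0 / (1.0 - PermGroFac / Rfree)
mNrmMin = 1.0 - h  # o_t = 0 here, so c_t = 0 and the inverted value is 0
slope = kappa**(-CRRA / (1.0 - CRRA))  # inverted value at mNrmMin + 1
vFuncNvrs = lambda m: (m - mNrmMin) * slope  # linear, extends over the whole line
u = lambda c: c**(1.0 - CRRA) / (1.0 - CRRA)  # CRRA utility
vFunc_byhand = lambda m: u(vFuncNvrs(m))  # value computed on the fly by uninverting
print(vFunc_byhand(1.0))  # should match PFexample.solution[0].vFunc(1.0)
```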
## Checking Solution Conditions
The code performs tests for whether the supplied parameter values meet various conditions that determine the properties of the solution. Some conditions (like the Finite Human Wealth Condition) are required for the model to have a sensible solution, and if these conditions are violated the code generates a warning message. Other conditions govern characteristics of the model, like whether consumption is falling (whether the consumer is 'absolutely impatient'). All of these conditions can be checked manually using the syntax below. The function returns "False" if none of the key conditions has been violated.
```
PFexample.checkConditions(verbose=True,public_call=True)
```
An element of $\texttt{solution}$ also includes the (normalized) marginal value function $\texttt{vPfunc}$, and the lower and upper bounds of the marginal propensity to consume (MPC) $\texttt{MPCmin}$ and $\texttt{MPCmax}$. Note that with a linear consumption function, the MPC is constant, so its lower and upper bound are identical.
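For example (with a linear consumption function the two bounds should print the same number):
```
print(PFexample.solution[0].MPCmin, PFexample.solution[0].MPCmax)
```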
## Simulating the model
Suppose we wanted to simulate many consumers who share the parameter values that we passed to $\texttt{PerfForesightConsumerType}$-- an *ex ante* homogeneous *type* of consumers. To do this, our instance would have to know *how many* agents there are of this type, as well as their initial levels of assets $a_t$ and permanent income $P_t$.
### Setting Parameters
Let's fill in this information by passing another dictionary to $\texttt{PFexample}$ with simulation parameters. The table below lists the parameters that an instance of $\texttt{PerfForesightConsumerType}$ needs in order to successfully simulate its model using the $\texttt{simulate}$ method.
| Description | Code | Example value |
| :---: | --- | --- |
| Number of consumers of this type | $\texttt{AgentCount}$ | $10000$ |
| Number of periods to simulate | $\texttt{T_sim}$ | $120$ |
| Mean of initial log (normalized) assets | $\texttt{aNrmInitMean}$ | $-6.0$ |
| Stdev of initial log (normalized) assets | $\texttt{aNrmInitStd}$ | $1.0$ |
| Mean of initial log permanent income | $\texttt{pLvlInitMean}$ | $0.0$ |
| Stdev of initial log permanent income | $\texttt{pLvlInitStd}$ | $0.0$ |
| Aggregate productivity growth factor | $\texttt{PermGroFacAgg}$ | $1.0$ |
| Age after which consumers are automatically killed | $\texttt{T_age}$ | $None$ |
We have specified the model so that initial assets and permanent income are both distributed lognormally, with mean and standard deviation of the underlying normal distributions provided by the user.
The parameter $\texttt{PermGroFacAgg}$ exists for compatibility with more advanced models that employ aggregate productivity shocks; it can simply be set to 1.
In infinite horizon models, it might be useful to prevent agents from living extraordinarily long lives through a fortuitous sequence of mortality shocks. We have thus provided the option of setting $\texttt{T_age}$ to specify the maximum number of periods that a consumer can live before they are automatically killed (and replaced with a new consumer with initial state drawn from the specified distributions). This can be turned off by setting it to $\texttt{None}$.
The cell below puts these parameters into a dictionary, then gives them to $\texttt{PFexample}$. Note that all of these parameters *could* have been passed as part of the original dictionary; we omitted them above for simplicity.
```
# Create parameter values necessary for simulation
SimulationParams = {
"AgentCount" : 10000, # Number of agents of this type
"T_sim" : 120, # Number of periods to simulate
"aNrmInitMean" : -6.0, # Mean of log initial assets
"aNrmInitStd" : 1.0, # Standard deviation of log initial assets
"pLvlInitMean" : 0.0, # Mean of log initial permanent income
"pLvlInitStd" : 0.0, # Standard deviation of log initial permanent income
"PermGroFacAgg" : 1.0, # Aggregate permanent income growth factor
"T_age" : None, # Age after which simulated agents are automatically killed
}
PFexample(**SimulationParams) # This implicitly uses the assignParameters method of AgentType
```
To generate simulated data, we need to specify which variables we want to track the "history" of for this instance. To do so, we set the $\texttt{track_vars}$ attribute of our $\texttt{PerfForesightConsumerType}$ instance to be a list of strings with the simulation variables we want to track.
In this model, valid elements of $\texttt{track_vars}$ include $\texttt{mNrmNow}$, $\texttt{cNrmNow}$, $\texttt{aNrmNow}$, and $\texttt{pLvlNow}$. Because this model has no idiosyncratic shocks, our simulated data will be quite boring.
### Generating simulated data
Before simulating, the $\texttt{initializeSim}$ method must be invoked. This resets our instance back to its initial state, drawing a set of initial $\texttt{aNrmNow}$ and $\texttt{pLvlNow}$ values from the specified distributions and storing them in the attributes $\texttt{aNrmNow_init}$ and $\texttt{pLvlNow_init}$. It also resets this instance's internal random number generator, so that the same initial states will be set every time $\texttt{initializeSim}$ is called. In models with non-trivial shocks, this also ensures that the same sequence of shocks will be generated on every simulation run.
Finally, the $\texttt{simulate}$ method can be called.
```
# Create PFexample object
PFexample.track_vars = ['mNrmNow']
PFexample.initializeSim()
PFexample.simulate()
```
Each simulation variable $\texttt{X}$ named in $\texttt{track_vars}$ will have the *history* of that variable for each agent stored in the attribute $\texttt{X_hist}$ as an array of shape $(\texttt{T_sim},\texttt{AgentCount})$. To see that the simulation worked as intended, we can plot the mean of $m_t$ in each simulated period:
```
# Plot market resources over time
plt.plot(np.mean(PFexample.mNrmNow_hist,axis=1))
plt.xlabel('Time')
plt.ylabel('Mean normalized market resources')
plt.show()
```
A perfect foresight consumer can borrow against the PDV of his future income-- his human wealth-- and thus as time goes on, our simulated impatient agents approach the (very negative) steady state level of $m_t$ while being steadily replaced with consumers with roughly $m_t=1$.
The slight wiggles in the plotted curve are due to consumers randomly dying and being replaced; their replacement will have an initial state drawn from the distributions specified by the user. To see the current distribution of ages, we can look at the attribute $\texttt{t_age}$.
```
# Plot the CDF
N = PFexample.AgentCount
F = np.linspace(0.,1.,N)
plt.plot(np.sort(PFexample.t_age),F)
plt.xlabel('Current age of consumers')
plt.ylabel('Cumulative distribution')
plt.show()
```
The distribution is (discretely) exponential, with a point mass at 120 with consumers who have survived since the beginning of the simulation.
One might wonder why HARK requires users to call $\texttt{initializeSim}$ before calling $\texttt{simulate}$: Why doesn't $\texttt{simulate}$ just call $\texttt{initializeSim}$ as its first step? We have broken up these two steps so that users can simulate some number of periods, change something in the environment, and then resume the simulation.
When called with no argument, $\texttt{simulate}$ will simulate the model for $\texttt{T_sim}$ periods. The user can optionally pass an integer specifying the number of periods to simulate (which should not exceed $\texttt{T_sim}$).
In the cell below, we simulate our perfect foresight consumers for 80 periods, then seize a bunch of their assets (dragging their wealth even more negative), then simulate for the remaining 40 periods.
```
# Simulate 80 periods, seize assets, then simulate the remaining 40 periods
PFexample.initializeSim()
PFexample.simulate(80)
PFexample.aNrmNow += -5. # Adjust all simulated consumers' assets downward by 5
PFexample.simulate(40)
plt.plot(np.mean(PFexample.mNrmNow_hist,axis=1))
plt.xlabel('Time')
plt.ylabel('Mean normalized market resources')
plt.show()
```
# FMI Hirlam, MET Norway HARMONIE and NCEP GFS comparison demo
In this demo notebook we provide short comparison of using three different weather forecast models:
GFS -- http://data.planetos.com/datasets/noaa_gfs_pgrb2_global_forecast_recompute_0.25degree
HIRLAM -- http://data.planetos.com/datasets/fmi_hirlam_surface
HARMONIE -- http://data.planetos.com/datasets/metno_harmonie_metcoop
You can get more information about the datasets by opening the links to their detail pages, but their main difference is that GFS is a global, medium-range weather forecast model with lower resolution, while HIRLAM and HARMONIE are limited-area models, meaning they cover only a small part of the globe but, in return, provide higher resolution for all forecast fields.
First we compare the datasets by showing their spatial coverages, then we demonstrate their resolutions by showing forecast field as a discrete grid (so one can see the difference in grid cell size and resolved surface details) and finally we demonstrate plotting weather forecast for the same variable from three models.
We try to keep this demo short, but in case you are interested in creating a more interactive notebook, please refer to our other examples:
https://github.com/planet-os/demos/blob/master/notebooks/PlanetOS_WAve_Models.ipynb
https://github.com/planet-os/notebooks/blob/master/api-examples/GFS_public_full_demo_main.ipynb
Unlike previous notebooks, we have moved most of the parsing code to external library dh_py_access, which you should get automatically if you get this notebook by cloning the git repository.
If you have any questions, contact our team at https://data.planetos.com
At first, let's import some modules. If you do not have them, download them (ie. using pip or conda).
If you encounter some errors, make sure you have the same numpy, basemap and matplotlib versions.
```
%matplotlib notebook
import numpy as np
print ('numpy version is ', np.__version__)
import matplotlib.pyplot as plt
import mpl_toolkits.basemap
print ('mpl_toolkits.basemap version is ', mpl_toolkits.basemap.__version__)
from mpl_toolkits.basemap import Basemap
import warnings
import datetime
import dateutil.parser
import matplotlib
print ('Matplotlib version is ',matplotlib.__version__)
warnings.filterwarnings("ignore",category=matplotlib.cbook.mplDeprecation)
import xarray as xr
```
Import datahub parsing library
```
from API_client.python.lib.dataset import dataset
import dh_py_access.lib.datahub as datahub
from dh_py_access import package_api
# from dh_py_access.lib.dataset import dataset as dataset
# import dh_py_access.lib.datahub as datahub
# from dh_py_access import package_api
```
Now we define hirlam and harmonie namespaces. Add server address and our API key.
<font color='red'>Please add your API key below:</font>
```
server = 'http://api.planetos.com/v1/datasets/'
API_key = open('APIKEY').read().strip()
dh=datahub.datahub_main(API_key)
fmi_hirlam_surface=dataset('fmi_hirlam_surface',dh)
metno_harmonie_metcoop=dataset('metno_harmonie_metcoop',dh)
gfs=dataset('noaa_gfs_pgrb2_global_forecast_recompute_0.25degree',dh)
```
One can easily see what kind of variables are available in a given dataset by calling the following methods on a dataset instance:
1. long_names -- gives a long, human-readable name for each variable, which is unfortunately not standardised in any way
2. standard_names -- gives variable names as defined in the CF convention standard name table http://cfconventions.org/standard-names.html
3. variable_names -- gives the names by which you can actually query data from the API
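For example (a quick sketch; this assumes the three methods above are callable with no arguments on the dataset instances created earlier):
```
# Inspect what each dataset exposes (assuming zero-argument methods, per the list above)
for d in [fmi_hirlam_surface, metno_harmonie_metcoop, gfs]:
    print(d.datasetkey)
    print(d.variable_names())
```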
```
sample_var_names = {fmi_hirlam_surface:'Temperature_height_above_ground',
metno_harmonie_metcoop:'air_temperature_2m',
gfs:'tmp_m'}
today = datetime.datetime.today()
day_ago = today - datetime.timedelta(days=1)
reftime_start = datetime.datetime.strftime(day_ago, '%Y-%m-%dT') + '11:00:00'
reftime_end = datetime.datetime.strftime(day_ago, '%Y-%m-%dT') + '13:00:00'
def get_max_coverage_package(dataset, area_name, varfilter = 'temp'):
"""Download full coverage for limited area datasets"""
coords = dataset.get_dataset_boundaries()
ds_west = np.amin([i[0] for i in coords])
ds_east = np.amax([i[0] for i in coords])
ds_south = np.amin([i[1] for i in coords])
ds_north = np.amax([i[1] for i in coords])
temperature_variable = sample_var_names[dataset]
assert len(temperature_variable) >= 1, "something wrong {0}".format(temperature_variable)
assert type(temperature_variable) == str
return package_api.package_api(dh,dataset.datasetkey,temperature_variable,ds_west,ds_east,ds_south,ds_north,area_name=area_name)
area_name = 'maximum_04'
package_harmonie = get_max_coverage_package(metno_harmonie_metcoop, area_name=area_name)
package_fmi_hirlam = get_max_coverage_package(fmi_hirlam_surface, area_name=area_name)
package_harmonie.make_package()
package_fmi_hirlam.make_package()
package_harmonie.download_package()
package_fmi_hirlam.download_package()
data_harmonie = xr.open_dataset(package_harmonie.get_local_file_name())
data_fmi_hirlam = xr.open_dataset(package_fmi_hirlam.get_local_file_name(),decode_cf=False)
```
Take GFS for area of HARMONIE
```
left = np.amin(data_harmonie['longitude'].data)
right = np.amax(data_harmonie['longitude'].data)
bottom = np.amin(data_harmonie['latitude'].data)
top = np.amax(data_harmonie['latitude'].data)
package_gfs = package_api.package_api(dh,gfs.datasetkey,sample_var_names[gfs],left,right,bottom,top,area_name=area_name)
package_gfs.make_package()
package_gfs.download_package()
data_gfs = xr.open_dataset(package_gfs.get_local_file_name(),decode_cf=False)
```
## Dataset extent and resolution
Get some arbitrary field for demonstration; we use 2m temperature, and as you can see, variable names may differ a lot between datasets. Please note that the "get_tds_field" method is just for getting an arbitrary preview image; if you want to query data for a specific time and reftime, please refer to the examples for our raster API (shown in the other notebooks referenced above) or use the THREDDS server link given on the dataset detail pages.
### Extent
The easiest way to show dataset extent is to plot it on a map with proper projection. We do not show GFS here, because, well, it is global.
```
m = Basemap(projection='ortho',lon_0=10,lat_0=50,resolution='l')
hir_x,hir_y=np.meshgrid(data_fmi_hirlam['lon'],data_fmi_hirlam['lat'])
X_hir,Y_hir=m(hir_x,hir_y)
fig=plt.figure()
plt.subplot(221)
air2d = data_fmi_hirlam[sample_var_names[fmi_hirlam_surface]][0,0,:,:]
air2d = np.ma.masked_where(air2d>500,air2d)
m.pcolormesh(X_hir,Y_hir,air2d)  # plot the masked HIRLAM field on its native grid
m.drawcoastlines()
plt.subplot(222)
harm_x,harm_y=np.meshgrid(data_harmonie.longitude,data_harmonie.latitude)
X_harm,Y_harm=m(harm_x,harm_y)
m.pcolormesh(X_harm,Y_harm,data_harmonie[sample_var_names[metno_harmonie_metcoop]][0,0,:,:])
m.drawcoastlines()
plt.colorbar()
```
### Resolution
Let's zoom in a little to illustrate the difference in resolutions. By plotting the gridded data as a mesh, one can easily read the grid cell size off the figures. Plots are given for the Norwegian coast.
```
lon1,lon2 = 5,7
lat1,lat2 = 58,59
m2 = Basemap(projection='merc',llcrnrlat=lat1,urcrnrlat=lat2,\
llcrnrlon=lon1,urcrnrlon=lon2,lat_ts=58,resolution='i')
fig=plt.figure(figsize=(8,8))
plt.subplot(221)
## We cannot use the .sel() method on the HIRLAM data because it was opened
## with decode_cf=False, which was necessary because the file contains both
## missing_value and fill_value, see https://github.com/pydata/xarray/issues/1749
x1 = np.argmin(np.abs(data_fmi_hirlam.lon-360-lon1)).data
x2 = np.argmin(np.abs(data_fmi_hirlam.lon-360-lon2)).data+1
y1 = np.argmin(np.abs(data_fmi_hirlam.lat-lat1)).data
y2 = np.argmin(np.abs(data_fmi_hirlam.lat-lat2)).data+1
height = int(np.argmin(np.abs(data_fmi_hirlam.height_above_ground-2)).data)
hir_x,hir_y=np.meshgrid(data_fmi_hirlam.lon[x1:x2].data,data_fmi_hirlam.lat[y1:y2].data)
X,Y=m2(hir_x-360,hir_y)
air2d_hirlam=data_fmi_hirlam.variables[sample_var_names[fmi_hirlam_surface]].isel(time=0,height_above_ground=height,lon=slice(x1,x2),lat=slice(y1,y2))
m2.pcolormesh(X,Y,air2d_hirlam)
m2.drawcoastlines()
plt.colorbar()
plt.subplot(222)
X,Y=m2(harm_x,harm_y)
air2d_harm = data_harmonie[sample_var_names[metno_harmonie_metcoop]].isel(time=0).sel(height1=2,longitude=slice(lon1,lon2),latitude=slice(lat1,lat2))
X,Y=m2(air2d_harm.longitude.data,air2d_harm.latitude.data)
m2.pcolormesh(X,Y,air2d_harm)
m2.drawcoastlines()
plt.colorbar()
plt.subplot(223)
ggg = data_gfs[sample_var_names[gfs]].isel(time1=0).sel(height_above_ground2=2,lon=slice(lon1,lon2),lat=slice(lat2,lat1))
x,y=np.meshgrid(ggg.lon,ggg.lat)
X,Y=m2(x,y)
m2.pcolormesh(X,Y,ggg)
m2.drawcoastlines()
plt.colorbar()
```
Can you guess which model is on which map by just looking at these images?
### Forecast for a single location
First, get point data from all datasets for the given variable and for as long a time range as each forecast provides.
```
longitude= 25.60
latitude = 58.36
ds = dataset('noaa_rbsn_timeseries',dh)
obs_data = ds.get_station_data_as_pandas(['26233'],variables='temperature',start = reftime_start)
sample_point_data = [(k,k.get_json_data_in_pandas(**{'var':v,'lon':longitude,'lat':latitude,'count':1000,'reftime_start':reftime_start,'reftime_end':reftime_end})) for k,v in sample_var_names.items()]
fig = plt.figure(figsize=(11,6))
for ddd in sample_point_data:
zlevels = [2.]
for i in zlevels:
pdata = np.array(ddd[1][ddd[1]['z']==i][sample_var_names[ddd[0]]],dtype=np.float) - 273.15
if np.sum(np.isnan(pdata)) != pdata.shape[0]:
time = ddd[1][ddd[1]['z']==i]['time']
if 'gfs' in ddd[0].datasetkey:
time = time[:-95]
pdata = pdata[:-95]
plt.plot(time, pdata, label = ddd[0].datasetkey)
plt.plot(obs_data['26233'].index,obs_data['26233']['temperature'].values,label = 'observations')
plt.legend()
plt.grid()
fig.autofmt_xdate()
plt.title('2m temperature forecast in different weather models')
plt.show()
```
# Practical Deep Neural Network Performance Prediction for Hyperparameter Optimization
```
%matplotlib inline
from concurrent import futures
from functools import reduce, wraps
from IPython.display import display
import json
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import numpy as np
import os
import pandas as pd
from sklearn.utils import shuffle
import sys
import tensorflow as tf
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
tf.logging.set_verbosity(tf.logging.WARN)
print(tf.__version__)
```
## Model
```
N_hidden = 16
model_dir = 'model'
def model(n_hidden):
def model_loss(y, t):
t = tf.reshape(t, [-1])
mse = tf.reduce_mean(tf.square(y - t))
return mse
def training(loss):
optimizer = tf.train.AdamOptimizer()
train_step = optimizer.minimize(loss)
return train_step
x = tf.placeholder(tf.float32, shape=[None, None, 1])
t = tf.placeholder(tf.float32, shape=[None, 1])
n_batch = tf.placeholder(tf.int32, shape=[])
sequence_length = tf.placeholder(tf.int32, shape=[None])
output_keep_prob = tf.placeholder_with_default(1.0, shape=())
cell = tf.contrib.rnn.DropoutWrapper(
tf.nn.rnn_cell.LSTMCell(n_hidden),
output_keep_prob=output_keep_prob,
input_size=x.shape[-1],
variational_recurrent=True,
dtype=tf.float32)
zero_state = cell.zero_state(n_batch, tf.float32)
c_state = tf.placeholder(tf.float32, shape=[None, n_hidden])
h_state = tf.placeholder(tf.float32, shape=[None, n_hidden])
outputs, state = tf.nn.dynamic_rnn(
cell, x, initial_state=tf.nn.rnn_cell.LSTMStateTuple(c_state, h_state),
sequence_length=sequence_length, dtype=tf.float32)
h = tf.transpose(state.h)
W = tf.Variable(tf.truncated_normal([1, n_hidden], stddev=0.01))
b = tf.Variable(tf.zeros([1], dtype=tf.float32))
y = tf.sigmoid(tf.matmul(W, h) + b)
y = tf.reshape(y, [n_batch])
loss = model_loss(y, t)
train_step = training(loss)
init = tf.global_variables_initializer()
return x, t, n_batch, sequence_length, output_keep_prob, y, \
c_state, h_state, zero_state, state, loss, train_step, init
# Create model
(x, t, n_batch, s_len,
output_keep_prob, y, c_state,
h_state, zero_state, lstm_state,
loss, train_step, init) = model(n_hidden=N_hidden)
```
## Training
```
dataname = 'mnist'
batch_size = 16
epochs = 1000
output_keep_rate = 0.5
N_runs = 100
N_validation = 50
N_train = N_runs - N_validation
N_ensembles = 8
class EarlyStopping():
def __init__(self, sess, saver,
fname, patience=30, verbose=0):
self._saver = saver
self._sess = sess
self._fname = fname
self.patience = patience
self.verbose = verbose
self._loss = float('inf')
self._step = 0
def validate(self, loss):
if self._loss <= loss:
self._step += 1
if self._step > self.patience:
if self.verbose:
print('early stopping')
return True
else:
self._step = 0
self._loss = loss
self._saver.save(self._sess, self._fname)
return False
def prepare_data(df):
inputs = []
outputs = []
sequence_lengths = []
for i in range(1, df.shape[1]):
inputs.append(df.iloc[:, :i])
tmp = df.iloc[:, i:i + 1]
tmp.columns = [0]
outputs.append(tmp)
sequence_lengths.extend([i] * df.shape[0])
inputs = reduce(pd.DataFrame.append, inputs)
outputs = reduce(pd.DataFrame.append, outputs)
inputs.fillna(0, inplace=True)
outputs.fillna(0, inplace=True)
inputs.reset_index(inplace=True, drop=True)
outputs.reset_index(inplace=True, drop=True)
sequence_lengths = np.reshape(sequence_lengths, -1)
X = np.array(inputs).reshape(len(inputs), -1, 1)
Y = np.array(outputs).reshape(len(outputs), -1)
return X, Y, sequence_lengths
# Train data
df = pd.read_json('%s.json' % (dataname), orient='split')
display(df.head())
df.head().T.plot(title='Previous learning curves')
dlen = df.shape[1]
# Convert absolute accuracies to relative improvements:
# r_i = (a_i - a_{i-1}) / (1 - a_{i-1}); the inverse transform is applied at prediction time
tmp = df.copy()
for i in range(1, df.shape[1]):
tmp.iloc[:, i] = (df.iloc[:, i] - df.iloc[:, i - 1]) / (1 - df.iloc[:, i - 1])
tmp.fillna(0, inplace=True)
df = tmp
# Training
with tf.Session() as sess:
for e in list(range(N_ensembles)):
shuffled_idx = np.arange(N_runs)
np.random.shuffle(shuffled_idx)
sess.run(init)
saver = tf.train.Saver()
early_stopping = EarlyStopping(
sess, saver, "%s/%s/%d" % (model_dir, dataname, e))
df_t = df.iloc[shuffled_idx][:N_train]
tmp = np.array(df_t).reshape(-1)
X_train, Y_train, SL_train = prepare_data(df_t)
df_v = df.iloc[shuffled_idx][N_train:]
X_validation, Y_validation, SL_validation = prepare_data(df_v)
for epoch in range(epochs):
X_, Y_, SL_ = shuffle(X_train, Y_train, SL_train)
N_batches = X_.shape[0] // batch_size
for i in range(N_batches):
z = sess.run(zero_state, feed_dict={n_batch: batch_size})
start = i * batch_size
end = start + batch_size
sess.run([train_step, loss], feed_dict={
x: X_[start:end],
t: Y_[start:end],
s_len: SL_[start:end],
n_batch: batch_size,
output_keep_prob: output_keep_rate,
c_state: z[0],
h_state: z[1]
})
z = sess.run(zero_state, feed_dict={n_batch: len(X_validation)})
val_loss = loss.eval(session=sess, feed_dict={
x: X_validation,
t: Y_validation,
s_len: SL_validation,
n_batch: len(X_validation),
c_state: z[0],
h_state: z[1]
})
print('\rensemble: %s\tepoch: %s\tvalidation loss:%s' % (
e, epoch, val_loss), end='')
if early_stopping.validate(val_loss):
break
```
## Prediction
```
dataname = 'mnist'
validation_dataname = 'mnist_test'
ylim = [0.95, 1.0] # [0, 1]
dlen = 20 # 300
inputlen = 1
plot_ticks = np.array(range(dlen))
N_test_cases = 1 # 20
N_sigma = 2
N_ensembles = 8
models = list(range(N_ensembles))
max_workers = len(models)
saver = tf.train.Saver()
config = tf.ConfigProto(device_count={"GPU": 0})
class Predictor():
def __init__(self,
dataname,
validation_dataname,
N_sigma,
dlen,
inputlen,
input_df,
modelpath):
self.dataname = dataname
self.validation_dataname = validation_dataname
self.N_sigma = N_sigma
self.dlen = dlen
self.inputlen = inputlen
self.input_df = input_df
self.modelpath = modelpath
def __call__(self):
with tf.Session(config=config) as sess:
sess.run(init)
saver.restore(sess, self.modelpath)
sess.graph.finalize()
predicted = self.input_df.values.tolist()
z = sess.run(zero_state, feed_dict={n_batch: 1})
y_, z = sess.run([y, lstm_state], feed_dict={
x: np.array(predicted).reshape(1, -1, 1),
n_batch: 1,
s_len: [len(predicted)],
c_state: z[0],
h_state: z[1]
})
predicted.append(y_.reshape(-1)[0])
for _ in range(self.dlen - len(predicted)):
y_, z = sess.run([y, lstm_state], feed_dict={
x: np.array(predicted)[-1:].reshape(1, -1, 1),
n_batch: 1,
s_len: [1],
c_state: z[0],
h_state: z[1]
})
predicted.append(y_.reshape(-1)[0])
for i in range(1, len(predicted)):
predicted[i] = predicted[i - 1] + (1 - predicted[i - 1]) * predicted[i]
predicted = np.array(predicted)
return predicted
class MultiPredictor():
def __init__(self,
dataname,
validation_dataname,
N_sigma,
dlen,
inputlen,
N_ensembles,
max_workers):
self.dataname = dataname
self.validation_dataname = validation_dataname
self.N_sigma = N_sigma
self.dlen = dlen
self.inputlen = inputlen
self.N_ensembles = N_ensembles
self.max_workers = max_workers
self.models = ['%s/%s/%d' % (model_dir, self.dataname, e) for e in models]
self.executor = futures.ProcessPoolExecutor(max_workers=self.max_workers)
def predict(self, input_df):
predictions = []
fs = [self.executor.submit(
Predictor(self.dataname,
self.validation_dataname,
self.N_sigma,
self.dlen,
self.inputlen,
input_df,
m)) for m in self.models]
for future in futures.as_completed(fs):
predictions.append(future.result())
predictions = pd.DataFrame(predictions).iloc[:, input_df.shape[0]:]
return predictions
def __del__(self):
self.executor.shutdown()
def plot(mean, original):
plt.figure()
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.xticks(plot_ticks, plot_ticks + 1)
plt.ylim(ylim)
ax = plt.gca()
plt.plot(mean, color='red', label='Prediction')
original.T.plot(ax=ax, linestyle='dashed', color='gray', label='Ground truth')
plt.legend()
plt.grid()
plt.show()
plt.close()
predictor = MultiPredictor(dataname, validation_dataname, N_sigma, dlen,
inputlen, N_ensembles, max_workers)
pred_df = pd.read_json('%s.json' % (validation_dataname), orient='split')
for target_num in range(N_test_cases):
for i in range(dlen - inputlen):
input_df = pred_df.iloc[target_num, :inputlen + i]
tmp = input_df.copy()
        for j in range(1, input_df.shape[0]):  # use j to avoid shadowing the outer loop's i
            tmp[j] = (input_df[j] - input_df[j - 1]) / (1 - input_df[j - 1])
input_df = tmp
predictions = predictor.predict(input_df)
mean = predictions.mean()
std = predictions.std()
original = pred_df.iloc[target_num]
print('test case: %s\nnumber of inputs: %s\npredictive mean: %s\npredictive std: %s\nground truth: %s' % (
target_num, inputlen + i, mean.values[-1], std.values[-1], original.values[-1]))
plot(mean, original)
```
# Validating the 10m Eastern Africa Cropland Mask
## Description
Previously, in the `6_Accuracy_assessment_20m.ipynb` notebook, we performed preliminary validations on 20m-resolution test crop-masks, which were stored on disk as geotiffs. The final cropland extent mask, produced at 10m resolution, is stored in the datacube and requires a different method for validating.
> NOTE: A very big sandbox is required (256GiB RAM) to run this script.
This notebook will output a `confusion error matrix` containing Overall, Producer's, and User's accuracy, along with the F1 score for each class.
***
## Getting started
To run this analysis, run all the cells in the notebook, starting with the "Load packages" cell.
### Load Packages
```
import os
import sys
import glob
import rasterio
import datacube
import pandas as pd
import numpy as np
import seaborn as sn
import matplotlib.pyplot as plt
import geopandas as gpd
from sklearn.metrics import f1_score
from rasterstats import zonal_stats
```
## Analysis Parameters
* `product` : name of crop-mask we're validating
* `band`: the band of the crop-mask we want to load and validate. Can be either `'mask'` or `'filtered'`
* `grd_truth` : a shapefile containing crop/no-crop points to serve as the "ground-truth" dataset
```
product = "crop_mask_eastern"
band = 'mask'
grd_truth = 'data/validation_samples.shp'
```
### Load the datasets
`the cropland extent mask`
```
#connect to the datacube
dc = datacube.Datacube(app='feature_layers')
#load 10m cropmask
ds = dc.load(product=product, measurements=[band]).squeeze()
print(ds)
```
`Ground truth points`
```
#ground truth shapefile
ground_truth = gpd.read_file(grd_truth).to_crs('EPSG:6933')
# rename the class column to 'actual'
ground_truth = ground_truth.rename(columns={'Class':'Actual'})
# reclassify to int
ground_truth['Actual'] = np.where(ground_truth['Actual']=='non-crop', 0, ground_truth['Actual'])
ground_truth['Actual'] = np.where(ground_truth['Actual']=='crop', 1, ground_truth['Actual'])
ground_truth.head()
```
## Convert points into polygons
When the validation data was collected, 40x40m polygons were evaluated as either crop/non-crop rather than points, so we want to sample the raster using the same small polygons. We'll find the majority or 'mode' statistic within the polygon and use that to compare with the validation dataset.
```
#set radius (in metres) around points
radius = 20
#create circle buffer around points, then find envelope
ground_truth['geometry'] = ground_truth['geometry'].buffer(radius).envelope
```
### Calculate zonal statistics
We want to know what the majority pixel value is inside each validation polygon.
```
def custom_majority(x):
a=np.ma.MaskedArray.count(x)
b=np.sum(x)
c=b/a
if c>0.5:
return 1
if c<=0.5:
return 0
#calculate stats
stats = zonal_stats(ground_truth.geometry,
ds[band].values,
affine=ds.geobox.affine,
add_stats={'majority':custom_majority},
nodata=255)
#append stats to grd truth df
ground_truth['Prediction']=[i['majority'] for i in stats]
ground_truth.head()
```
***
## Create a confusion matrix
```
confusion_matrix = pd.crosstab(ground_truth['Actual'],
ground_truth['Prediction'],
rownames=['Actual'],
colnames=['Prediction'],
margins=True)
confusion_matrix
```
### Calculate User's and Producer's Accuracy
`Producer's Accuracy`
```
confusion_matrix["Producer's"] = [confusion_matrix.loc[0, 0] / confusion_matrix.loc[0, 'All'] * 100,
confusion_matrix.loc[1, 1] / confusion_matrix.loc[1, 'All'] * 100,
np.nan]
```
`User's Accuracy`
```
users_accuracy = pd.Series([confusion_matrix[0][0] / confusion_matrix[0]['All'] * 100,
confusion_matrix[1][1] / confusion_matrix[1]['All'] * 100]
).rename("User's")
confusion_matrix = confusion_matrix.append(users_accuracy)
```
`Overall Accuracy`
```
confusion_matrix.loc["User's","Producer's"] = (confusion_matrix.loc[0, 0] +
confusion_matrix.loc[1, 1]) / confusion_matrix.loc['All', 'All'] * 100
```
`F1 Score`
The F1 score is the harmonic mean of the precision and recall, where an F1 score reaches its best value at 1 (perfect precision and recall), and is calculated as:
$$
\begin{aligned}
\text{Fscore} = 2 \times \frac{\text{UA} \times \text{PA}}{\text{UA} + \text{PA}}.
\end{aligned}
$$
Where UA = Users Accuracy, and PA = Producer's Accuracy
```
fscore = pd.Series([(2*(confusion_matrix.loc["User's", 0]*confusion_matrix.loc[0, "Producer's"]) / (confusion_matrix.loc["User's", 0]+confusion_matrix.loc[0, "Producer's"])) / 100,
f1_score(ground_truth['Actual'].astype(np.int8), ground_truth['Prediction'].astype(np.int8), average='binary')]
).rename("F-score")
confusion_matrix = confusion_matrix.append(fscore)
```
### Tidy Confusion Matrix
* Limit decimal places,
* Add readable class names
* Remove non-sensical values
```
# round numbers
confusion_matrix = confusion_matrix.round(decimals=2)
# rename booleans to class names
confusion_matrix = confusion_matrix.rename(columns={0:'Non-crop', 1:'Crop', 'All':'Total'},
index={0:'Non-crop', 1:'Crop', 'All':'Total'})
#remove the nonsensical values in the table
confusion_matrix.loc["User's", 'Total'] = '--'
confusion_matrix.loc['Total', "Producer's"] = '--'
confusion_matrix.loc["F-score", 'Total'] = '--'
confusion_matrix.loc["F-score", "Producer's"] = '--'
confusion_matrix
```
### Export csv
```
confusion_matrix.to_csv('results/Eastern_10m_accuracy_assessment_confusion_matrix.csv')
```
***
## Additional information
**License:** The code in this notebook is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
Digital Earth Africa data is licensed under the [Creative Commons by Attribution 4.0](https://creativecommons.org/licenses/by/4.0/) license.
**Contact:** If you need assistance, please post a question on the [Open Data Cube Slack channel](http://slack.opendatacube.org/) or on the [GIS Stack Exchange](https://gis.stackexchange.com/questions/ask?tags=open-data-cube) using the `open-data-cube` tag (you can view previously asked questions [here](https://gis.stackexchange.com/questions/tagged/open-data-cube)).
If you would like to report an issue with this notebook, you can file one on [Github](https://github.com/digitalearthafrica/deafrica-sandbox-notebooks).
**Last modified:** Dec 2020
# Revisiting Food-Safety Inspections from the Chicago Dataset - A Tutorial (Part 2)
David Lewis, Russell Hofvendahl, Jason Trager
* I switched name order here and put my bio second at the bottom
## 0. Foreward
* probably touch this up
Sustainabilist often works on data that is related to quality assurance and control (QA/QC) inspections of public or private infrastructure. Typically, this infrastructure takes the form of solar energy systems or energy efficiency upgrades for buildings. These data sets almost exclusively belong to private entities that have commissioned a study to evaluate how safe and/or well-installed the infrastructure that they financed is. For this reason, it has been very difficult to put anything up in the public sphere about how our work is conducted and any public documentation of what kind of analysis we do.
Enter Epicodus, a coding bootcamp in Portland, OR. Several weeks ago, I met David and Russell - two eager students who were just learning how to code. They were attending the first meeting of CleanWeb Portland, which Sustainabilist organized. We were talking about the lack of public datasets in sustainability, and I mentioned how Chicago's food-safety data set was very similar to many of the QA/QC data sets that I have looked at. Just like that, a project was born.
The coding work demonstrated herein is 100% that of the student interns, under my guidance for how to structure, examine, and explore the data. The work was conducted using Google Collaboratory, iPython notebooks, and Anaconda’s scientific computing packages.
## 1. Review
* foreward?
* To prevent foodborne illness, inspectors enforce stringent food codes, sometimes with the help of predictive violation models
* We seek to expand the work of the CDPH, exploring high-resolution predictions and neural nets
* We want to to focus on helping restaurants prevent illness and avoid costly violations
* We cleaned and pre-processed data from the following sources (databases)
* ...(probably more stuff)
## 2. Feature engineering
* something on how the model works, what we're building it for, the thing about blinding the model to outcome and then comparing it to actual outcome
* how, by training the model to guess outcomes for canvass inspections, we're building a tool that can be fed the same parameters at any time to guess the outcome of a simulated canvass inspection
* Something on feature selection, why it makes sense to try out what we're trying out
* should we explain features here or below? idk
## 3. Food Inspection Features
* load inspections and select what we want from it to use as basis for model data
* Something on what this data is, where it comes from, why we're using it?
```
import numpy as np
import pandas as pd
import os.path
root_path = os.path.dirname(os.getcwd())
# Load food inspection data
inspections = pd.read_csv(os.path.join(root_path, "DATA/food_inspections.csv"))
# Create basis for model_data
data = inspections.loc[:, ["inspection_id", "license", "inspection_date", "facility_type"]]
```
### 3.1. Pass / Fail Flags
* pass fail flags denote inspection outcome, this is something that will be "covered" so model can guess it
* converted to individual presence/absence flags to help with something or other (what and why specifically?)
```
# Create pass / fail flags
data["pass_flag"] = inspections.results.apply(lambda x: 1 if x == "Pass" else 0)
data["fail_flag"] = inspections.results.apply(lambda x: 1 if x == "Fail" else 0)
```
### 3.2. Facility Risk Flags
* Facilities like restaurants pose greater risk than packaged food kiosks and are given higher risk levels
* Higher risk levels mean greater inspection frequency also (unsure if this is relevant)
* Again converted to numeric form to fit with (specs? what?)
```
# Create risk flags (these derive from the risk column -- assumed here to be
# named "risk" -- not from results, which only holds Pass/Fail values)
data["risk_1"] = inspections.risk.apply(lambda x: 1 if x == "Risk 1 (High)" else 0)
data["risk_2"] = inspections.risk.apply(lambda x: 1 if x == "Risk 2 (Medium)" else 0)
data["risk_3"] = inspections.risk.apply(lambda x: 1 if x == "Risk 3 (Low)" else 0)
```
### 3.3. Violation Data
* Violation data is also something the model will be guessing, another part of the inspection outcome
* The data consists of a bunch of rows (representing inspection outcomes) with binary values for whether a specific health code was violated in that inspection
* Merged on inspection ID (each row of data is matched and merged with a violation data row with same ID. rows with no matches are excluded.)
```
# Load violation data
values = pd.read_csv(os.path.join(root_path, "DATA/violation_values.csv"))
counts = pd.read_csv(os.path.join(root_path, "DATA/violation_counts.csv"))
# Merge with violation data, filtering missing data
data = pd.merge(data, values, on="inspection_id")
data = pd.merge(data, counts, on="inspection_id")
```
### 3.4. Past Fails
* Past fails refers to the previous inspection outcome for that license (as a binary flag)
* This is a strong predictor of inspection outcomes
* Past fails is something the model will have access to when predicting inspection outcomes, and will be used to guess the actual and current outcome.
* We first create a dataframe of past data by arranging inspections chronologically, grouping by license and shifting each group of inspections by 1, so that the data for each inspection lines up with the row of the next inspection (the first row for each license will be empty and the last inspection is not used). The pre-grouping order is preserved upon shifting.
* (A toy example illustrating the shift is given after the code below.)
* We can then simply attach the fail_flag column to our data as past fails, setting the empty first value as 0 (no previous fail)
```
# Sort inspections by date
data.sort_values(by="inspection_date", inplace=True)
# Find previous inspections by shifting each license group
past_data = data.groupby("license").shift(1)
# Add past fails, with 0 for first inspections
data["past_fail"] = past_data.fail_flag.fillna(0)
```
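To make the shift concrete, here is a toy illustration (the license numbers and outcomes are made up, not from the real dataset):
```
# Hypothetical mini-dataset: two licenses, chronologically interleaved
toy = pd.DataFrame({
    "license": [1, 1, 1, 2, 2],
    "inspection_date": ["2015-01-01", "2016-01-01", "2017-01-01",
                        "2015-06-01", "2016-06-01"],
    "fail_flag": [1, 0, 1, 0, 1]})
toy.sort_values(by="inspection_date", inplace=True)
# Shifting each license group down one row lines every inspection up
# with the data from that license's previous inspection
toy["past_fail"] = toy.groupby("license").shift(1).fail_flag.fillna(0)
print(toy)
```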
### 3.5. Past Violation Data
* individual past violation values might well be good for predicting individual violations (eg watch out mr. restaurant, you violated these codes last inspection so you're at risk for them)
* We can use the same past_data to get past violation values
* We'll modify the names to pv_1, etc
* If we drop inspection_id we can just tack them on to the end of the data using join
* first records are set to 0 (no past violation)
* For past_critical, past_serious and past_minor we can similarly just grab each column and add it as a new column in data
```
# Select past violation values, remove past inspection id
past_values = past_data[values.columns].drop("inspection_id", axis=1).add_prefix("p")
# Add past values to model data, with 0 for first records
data = data.join(past_values.fillna(0))
# Add past violation counts, with 0 for first records
data["past_critical"] = past_data.critical_count.fillna(0)
data["past_serious"] = past_data.serious_count.fillna(0)
data["past_minor"] = past_data.minor_count.fillna(0)
```
### 3.6. Time Since Last
* One potential risk factor is greater time since last inspection (do we say we got this from Chicago team or just give our own justification?)
* To calculate this we convert each inspection date to a python datetime, subtract the previous datetime from the later one to create a series of timedelta objects, and convert the result to years.
* The default is set to two years (used for first inspections, which have no previous record).
```
# Calculate time since previous inspection
deltas = pd.to_datetime(data.inspection_date) - pd.to_datetime(past_data.inspection_date)
# Add years since previous inspection (default to 2)
data["time_since_last"] = deltas.apply(lambda x: x.days / 365.25).fillna(2)
```
### 3.7. First Record
* Actually not sure why this would matter in predicting outcomes? (check)
* Maybe first records are more likely to fail?
* To get it we simply put 1s for rows where data is absent in the shifted past_data.
```
# Check if first record
data["first_record"] = past_data.inspection_id.map(lambda x: 1 if pd.isnull(x) else 0)
```
## 4. Business License Features
* These are the features derived from the busuiness license dataset
* What is a business license? other background info?
### 4.1. Matching Inspections with Licenses
* Load data, see publication 1
```
# Load business license data
licenses = pd.read_csv(os.path.join(root_path, "DATA/business_licenses.csv"))
```
* In order to link food inspections to the business licenses of the facilities inspected we create a table of matches, each linking an inspection to a license
* Many business licenses can be matched by license number to an inspection, but to account for license discrepancies we also match based on venue (street address and name)
* Due to formatting differences it was necessary to use only the street number
```
# Business licenses have numbers on end preventing simple match
# so using street number instead
def get_street_number(address):
return address.split()[0]
licenses["street_number"] = licenses.address.apply(get_street_number)
inspections["street_number"] = inspections.address.apply(get_street_number)
# Match based on DBA name and street number
venue_matches = pd.merge(inspections, licenses, left_on=["dba_name", "street_number"], right_on=["doing_business_as_name", "street_number"])
# Match based on license numbers
license_matches = pd.merge(inspections, licenses, left_on="license", right_on="license_number")
```
* To create the working matches dataset we then append venue and license matches and drop any duplicate inspection / business license matches.
```
# Join matches, reset index, drop duplicates
matches = venue_matches.append(license_matches, sort=False)
matches.reset_index(drop=True, inplace=True)
matches.drop_duplicates(["inspection_id", "id"], inplace=True)
# Restrict to matches where inspection falls within license period
matches = matches.loc[matches.inspection_date.between(matches.license_start_date, matches.expiration_date)]
```
### 4.2. Filtering by Category
* (This isn't a feature but is only convenient to do once we have the matches dataset. what to do?)
* Many non-retail establishments (e.g. schools, hospitals) follow different inspection schedules, so to ensure consistent data we filter matches to include only inspections of retail food establishments
* To do this we select the inspection IDs of all retail matches, drop any duplicates and merge these IDs with the model data
* By default, merge includes only rows with keys present in both datasets (an inner join)
```
# Select retail food establishment inspection IDs
retail = matches.loc[matches.license_description == "Retail Food Establishment", ["inspection_id"]]
retail.drop_duplicates(inplace=True)
# FILTER: ONLY CONSIDER INSPECTIONS MATCHED WITH RETAIL LICENSES
data = pd.merge(data, retail, on="inspection_id")
```
### 4.3. Calculating Age at Inspection
* What might age at inspection tell?
* One feature previously found significant in predicting inspection outcomes is the age of the facility
* To calculate this we first convert all dates to datetime objects
* We then group by license and within each group find the earliest license start date
* Finally we subtract this min date from the inspection date and merge the resulting age in with our model data
```
# Convert dates to datetime format
matches.inspection_date = pd.to_datetime(matches.inspection_date)
matches.license_start_date = pd.to_datetime(matches.license_start_date)
def get_age_data(group):
min_date = group.license_start_date.min()
deltas = group.inspection_date - min_date
group["age_at_inspection"] = deltas.apply(lambda x: x.days / 365.25)
return group[["inspection_id", "age_at_inspection"]]
# Calculate (3 mins), drop duplicates
age_data = matches.groupby("license").apply(get_age_data).drop_duplicates()
# Merge in age_at_inspection
data = pd.merge(data, age_data, on="inspection_id", how="left")
```
### 4.4. Calculating Category Data
* The Chicago team found the categories of licenses attributed to an establishment to be significant in predicting violation outcomes
* This data is derived from the license_description column of the business licenses dataset
* We will be noting the presence or absence of these categories as a series of binary flags
* To derive these features we first set up a dictionary linking the column entries to our desired snake-case column titles
* We then group matches by inspection id to gather all license descriptions for each inspection
* To generate the entries we apply our get_category_data method, using our dictionary to translate from license_description entries to column titles
* Finally we fill missing entries as 0 and merge the results in with our model data
```
# Translate categories to snake-case titles
categories = {
"Consumption on Premises - Incidental Activity": "consumption_on_premises_incidental_activity",
"Tobacco": "tobacco",
"Package Goods": "package_goods",
"Limited Business License": "limited_business_license",
"Outdoor Patio": "outdoor_patio",
"Public Place of Amusement": "public_place_of_amusement",
"Children's Services Facility License": "childrens_services_facility_license",
"Tavern": "tavern",
"Regulated Business License": "regulated_business_license",
"Filling Station": "filling_station",
"Caterer's Liquor License": "caterers_liquor_license",
"Mobile Food License": "mobile_food_license"
}
# Create binary markers for license categories
def get_category_data(group):
df = group[["inspection_id"]].iloc[[0]]
for category in group.license_description:
if category in categories:
df[categories[category]] = 1
return df
# group by inspection, get categories (2 mins)
category_data = matches.groupby("inspection_id").apply(get_category_data)
# Reset index, set absent categories to 0
category_data.reset_index(drop=True, inplace=True)
category_data.fillna(0, inplace=True)
# Merge in category data, fill nan with 0
data = pd.merge(data, category_data, on="inspection_id", how="left").fillna(0)
```
## 5. Crime Density
* (I'm not sure whether to separate these by dataset or lump them as density features)
* Local crime density is a candidate risk signal for inspection outcomes (note: expand on why we're including this)
* Kernel density estimation fits a smooth density surface over the locations of recent observations; sampling that surface at an inspection's coordinates gives a local density score
* For each inspection date we build a Gaussian kernel from the trailing 90 days of observations and evaluate it at the inspection locations (see get_kde below)
* (TODO: justify the choice of window and bandwidth parameters, which we haven't tuned yet)
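Before the pipeline code, here is a minimal sketch of the estimator itself on invented points (the coordinates and seed are made up for illustration):
```
# Toy sketch: fit a 2D Gaussian KDE on invented points and evaluate it
import numpy as np
from scipy import stats
rng = np.random.RandomState(0)
obs = rng.normal(size=(2, 100))            # rows: longitude, latitude
kernel = stats.gaussian_kde(obs)           # smooth density over the points
print(kernel(np.array([[0.0], [0.0]])))    # estimated density at one location
```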
```
# Load observation datasets
burglaries = pd.read_csv(os.path.join(root_path, "DATA/burglaries.csv"))
# Create datetime columns
inspections["datetime"] = pd.to_datetime(inspections.inspection_date)
burglaries["datetime"] = pd.to_datetime(burglaries.date)
# FILTER: consider only inspections since 2012
# Otherwise early inspections have few/no observations within window
inspections = inspections.loc[inspections.inspection_date >= "2012"]
from datetime import datetime, timedelta
from scipy import stats
def get_kde(observations, column_name, window, bandwidth):
# Sort chronologically and index by datetime
observations.sort_values("datetime", inplace=True)
observations.index = observations.datetime.values
    # Generate kernel from the trailing `window` days of observations
def get_kde_given_date(group):
stop = group.datetime.iloc[0]
start = stop - timedelta(days=window)
recent = observations.loc[start:stop]
x1 = recent.longitude
y1 = recent.latitude
values = np.vstack([x1, y1])
        kernel = stats.gaussian_kde(values, bw_method=bandwidth)
x2 = group.longitude
y2 = group.latitude
samples = np.vstack([x2, y2])
group[column_name] = kernel(samples)
return group[["inspection_id", column_name]]
# Group inspections by date, generate kernels, sample
return inspections.groupby("inspection_date").apply(get_kde_given_date)
# Calculate burglary density estimates
burglary_kde = get_kde(burglaries, "burglary_kde", 90, 1)
# FILTER: only consider data since 2012 (with good kde data)
data = pd.merge(data, burglary_kde, on="inspection_id")
```
## 6. Garbage Cart Density
* Why we're including this feature
* With our kernel density methods already defined...
```
# Load observation datasets
carts = pd.read_csv(os.path.join(root_path, "DATA/garbage_carts.csv"))
# Create datetime columns
carts["datetime"] = pd.to_datetime(carts.creation_date)
# Calculate garbage cart density estimates
cart_kde = get_kde(carts, "cart_kde", 90, 1)
# FILTER: only consider data since 2012 (with good kde data)
data = pd.merge(data, cart_kde, on="inspection_id")
```
## 7. Sanitation Complaint Density
* Why we're including this feature
* As with crime and garbage carts...
```
# Load observation datasets
complaints = pd.read_csv(os.path.join(root_path, "DATA/sanitation_complaints.csv"))
# Create datetime columns
complaints["datetime"] = pd.to_datetime(complaints.creation_date)
# Calculate sanitation complaint density estimates
complaint_kde = get_kde(complaints, "complaint_kde", 90, 1)
# FILTER: only consider data since 2012 (with good kde data)
data = pd.merge(data, complaint_kde, on="inspection_id")
```
## 8. Weather Features
* What these features are
* where they came from
* why we're including them
```
# Load weather data
weather = pd.read_csv(os.path.join(root_path, "DATA/weather.csv"))
# Merge weather data with model data
data = pd.merge(data, weather, on="inspection_id")
```
## 9. Next Steps
* (just made these up pretty quickly)
* Choosing a model
* tuning the model
* training the model (a neural net probably?)
* building the tool
* distributing the tool
* Russell Hofvendahl is a web application developer with a great fondness for data driven decision making. Russell is excited to explore the applications of data science and machine learning in improving human judgement.
* David Lewis is a seasoned corporate responsibility professional working to utilize technology to help improve the health and well being of human populations through environmental stewardship.
* Jason S. Trager, Ph.D. is the managing partner at Sustainabilist and an expert in process improvement for distributed systems. Jason’s work portfolio includes the creation of novel data-driven methods for improving contractor performance, machine learning to optimize value in energy efficiency sales, and equipment maintenance optimization methodologies.
# Building Models in PyMC3
Bayesian inference begins with specification of a probability model relating unknown variables to data. PyMC3 provides the basic building blocks for Bayesian probability models:
1. stochastic random variables
2. deterministic variables
3. factor potentials.
A **stochastic random variable** is a factor whose value is not completely determined by its parents, while the value of a **deterministic random variable** is entirely determined by its parents. Most models can be constructed using only these two variable types. The third quantity, the **factor potential**, is *not* a variable but simply a
log-likelihood term or constraint that is added to the joint log-probability to modify it.
## Example: Inferring patterns in UK coal mining disasters
To motivate this section, let's model a different dataset: a time series of recorded coal mining
disasters in the UK from 1851 to 1962.
Occurrences of disasters in the time series is thought to be derived from a
Poisson process with a large rate parameter in the early part of the time
series, and from one with a smaller rate in the later part. We are interested
in locating the change point in the series, which perhaps is related to changes
in mining safety regulations.
```
import numpy as np
year = np.arange(1851, 1962)
disasters_data = np.array([4, 5, 4, 0, 1, 4, 3, 4, 0, 6, 3, 3, 4, 0, 2, 6,
3, 3, 5, 4, 5, 3, 1, 4, 4, 1, 5, 5, 3, 4, 2, 5,
2, 2, 3, 4, 2, 1, 3, 2, 2, 1, 1, 1, 1, 3, 0, 0,
1, 0, 1, 1, 0, 0, 3, 1, 0, 3, 2, 2, 0, 1, 1, 1,
0, 1, 0, 1, 0, 0, 0, 2, 1, 0, 0, 0, 1, 1, 0, 2,
3, 3, 1, 1, 2, 1, 1, 1, 1, 2, 4, 2, 0, 0, 1, 4,
0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1])
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context('notebook')
fig, ax = plt.subplots(figsize=(12.5, 3.5))
n_count_data = len(disasters_data)
ax.bar(year, disasters_data, color="#348ABD")
ax.set_xlabel("Year")
ax.set_ylabel("Disasters")
ax.set_title("UK coal mining disasters, 1851-1962")
ax.set_xlim(1851, 1962);
```
We are going to use Poisson random variables for this type of count data. Denoting year $i$'s accident count by $y_i$,
$$ y_i \sim \text{Poisson}(\lambda) $$
The modeling problem revolves around estimating the values of the $\lambda$ parameters. Looking at the time series above, it appears that the rate declines later in the time series.
A ***changepoint model*** identifies a point (year) during the observation period (call it $\tau$) after which the parameter $\lambda$ drops to a lower value. So we are estimating two $\lambda$ parameters: one for the early period and another for the late period.
$$
\lambda =
\begin{cases}
\lambda_1 & \text{if } t \lt \tau \cr
\lambda_2 & \text{if } t \ge \tau
\end{cases}
$$
We need to assign prior probabilities to both $\lambda$ parameters. The gamma distribution not only provides a continuous density function for positive numbers, but it is also **conjugate** with the Poisson sampling distribution. We will specify suitably vague hyperparameters $\alpha$ and $\beta$ for both priors.
$$\begin{aligned}
\lambda_1 &\sim \text{Gamma}( \alpha, \beta ) \cr
\lambda_2 &\sim \text{Gamma}( \alpha, \beta )
\end{aligned}$$
Since we do not have any intuition about the location of the changepoint (prior to viewing the data), we will assign a discrete uniform prior over the 111 observed years.
$$\begin{aligned}
& \tau \sim \text{DiscreteUniform}(1851, 1961) \cr
& \Rightarrow P( \tau = k ) = \frac{1}{111}
\end{aligned}$$
## The FreeRV class
A stochastic variable is represented in PyMC3 by a `FreeRV` class. This structure adds functionality to Theano's `TensorVariable` class, by mixing in the PyMC `Factor` class. A `Factor` is used whenever a variable contributes a log-probability term to a model. Hence, you know a variable is a subclass of `Factor` whenever it has a `logp` method, as we saw in the previous section.
A `FreeRV` object has several important attributes:
`dshape`
: The variable's shape.
`dsize`
: The overall size of the variable.
`distribution`
: The probability density or mass function that describes the distribution of the variable's values.
`logp`
: The log-probability of the variable's current value given the values
of its parents.
`init_value`
: The initial value of the variable, used by many algorithms as a starting point for model fitting.
`model`
: The PyMC model to which the variable belongs.
### Creation of stochastic random variables
There are two ways to create stochastic random variables (`FreeRV` objects), which we will call the **automatic**, and **manual** interfaces.
#### Automatic
Stochastic random variables with standard distributions provided by PyMC3 can be created in a single line using special subclasses of the `Distribution` class. For example, as we have seen, the uniformly-distributed discrete variable $\tau$ in the coal mining disasters model is created using the automatic interface as follows:
```
from pymc3 import Model, DiscreteUniform
with Model() as disaster_model:
    switchpoint = DiscreteUniform('switchpoint', lower=0, upper=110)
```
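We can inspect a few of the attributes listed above on the variable we just created (a quick check; the exact values depend on the PyMC3 version):
```
switchpoint.dshape, switchpoint.dsize, switchpoint.init_value
```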
Similarly, the rate parameters can automatically be given exponential priors (the exponential is the special case of the gamma distribution with $\alpha = 1$, so only the rate parameter needs to be specified):
```
from pymc3 import Exponential
with disaster_model:
early_mean = Exponential('early_mean', lam=1)
late_mean = Exponential('late_mean', lam=1)
```
PyMC includes most of the probability density functions (for continuous variables) and probability mass functions (for discrete variables) used in statistical modeling. Continuous variables are represented by a specialized subclass of `Distribution` called `Continuous` and discrete variables by the `Discrete` subclass.
The main differences between these two subclasses are in the `dtype` attribute (`int64` for `Discrete` and `float64` for `Continuous`) and the `defaults` attribute, which determines which summary statistic to use for initial values when one is not specified ('mode' for `Discrete` and 'median', 'mean', and 'mode' for `Continuous`).
```
switchpoint.distribution.defaults
```
Sometimes we wish to use a particular statistical distribution, without using it as a variable in a model; for example, to generate random numbers from the distribution. The `dist` class method allows that.
```
Exponential.dist(1)
```
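For example, the resulting distribution object can generate random draws without being attached to a model. A short sketch (this assumes the distribution object's `random` method, which accepts a `size` argument):
```
Exponential.dist(1).random(size=5)
```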
#### Manual
The uniformly-distributed discrete stochastic variable `switchpoint` in the disasters model could alternatively be created from a function that computes its log-probability as follows:
```
from pymc3 import DensityDist
from pymc3.math import switch
with Model():
    def uniform_logp(value, lower=0, upper=110):
"""The switchpoint for the rate of disaster occurrence."""
return switch((value > upper) | (value < lower), -np.inf, -np.log(upper - lower + 1))
switchpoint = DensityDist('switchpoint', logp=uniform_logp, dtype='int64')
switchpoint.logp({'switchpoint':4})
switchpoint.logp({'switchpoint': 44})
switchpoint.logp({'switchpoint':-1})
```
A couple of things to notice: while the function specified for the `logp` argument can be an arbitrary Python function, it must use **Theano operators and functions** (in this case, `switch`) in its body. This is because one or more of the arguments passed to the function may be `TensorVariable` objects, which must be handled symbolically. Also, we passed the value to be evaluated by the `logp` function as a **dictionary**, rather than as a plain integer. By convention, values in PyMC3 are passed around as a data structure called a `Point`. Points in parameter space are represented by dictionaries with parameter names as the keys and the values of the parameters as the values.
To emphasize, the Python function passed to `DensityDist` should compute the *log*-density or *log*-probability of the variable. That is why the return value in the example above is `-log(upper-lower+1)` rather than `1/(upper-lower+1)`.
## The ObservedRV Class
Stochastic random variables whose values are observed (*i.e.* data likelihoods) are represented by a different class than unobserved random variables. An `ObservedRV` object is instantiated any time a stochastic variable is specified with data passed as the `observed` argument.
Otherwise, observed stochastic random variables are created via the same interfaces as unobserved: **automatic** or **manual**. As an example of an automatic instantiation, consider a Poisson data likelihood:
```
from pymc3 import Poisson
with disaster_model:
disasters = Poisson('disasters', mu=3, observed=[3,4,1,2,0,2,2])
```
A manual instantiation would be similar to that for a stochastic random variable, except `DensityDist` would receive an `observed` argument. Here is an example of an *exponential survival likelihood*:
```python
def logp(failure, value):
    # `lam` (the rate), `failure` and `t` are assumed defined elsewhere;
    # `log` is pymc3.math.log, so the expression stays symbolic
    return (failure * log(lam) - lam * value).sum()
x = DensityDist('x', logp, observed={'failure': failure, 'value': t})
```
Notice in this example that there are two vectors of observed data for the likelihood `x`, passed as a dictionary.
An important responsibility of `ObservedRV` is to automatically handle missing values in the data, when present. See the PyMC3 documentation for details.
## Deterministic Variables
A deterministic variable is one whose values are **completely determined** by the values of their parents. For example, in our disasters model, `rate` is a deterministic variable.
```python
with disaster_model:
    rate = pm.Deterministic('rate', switch(switchpoint >= np.arange(111), early_mean, late_mean))
```
so `rate`'s value can be computed exactly from the values of its parents `early_mean`, `late_mean` and `switchpoint`.
There are two types of deterministic variables in PyMC3
#### Anonymous deterministic variables
The easiest way to create a deterministic variable is to operate on or transform one or more variables in a model directly. For example, the simplest way to specify the `rate` variable above is as follows:
```
with disaster_model:
    rate = switch(switchpoint >= np.arange(111), early_mean, late_mean)
```
Or, let's say we wanted to use the mean of the `early_mean` and `late_mean` variables somewhere in our model:
```
with disaster_model:
mean_of_means = (early_mean + late_mean)/2
```
These are called *anonymous* variables because we did not wrap them with a call to `Deterministic`, which gives a variable a name as its first argument. We simply specified the variable as a Python (or, Theano) expression. This is therefore the simplest way to construct a deterministic variable. The only caveat is that the values generated by anonymous deterministics at every iteration of an MCMC algorithm, for example, are not recorded to the resulting trace. So, this approach is only appropriate for intermediate values in your model that you do not wish to obtain posterior estimates for, alongside the other variables in the model.
#### Named deterministic variables
To ensure that deterministic variables' values are accumulated during sampling, they should be instantiated using the **named deterministic** interface; this uses the `Deterministic` function to create the variable. Two things happen when a variable is created this way:
1. The variable is given a name (passed as the first argument)
2. The variable is appended to the model's list of random variables, which ensures that its values are tallied.
```
from pymc3 import Deterministic
with disaster_model:
    rate = Deterministic('rate', switch(switchpoint >= np.arange(111), early_mean, late_mean))
disaster_model.named_vars
```
## Factor Potentials
For some applications, we want to be able to modify the joint density by incorporating terms that don't correspond to probabilities of variables conditional on parents, for example:
$$p(x_0, x_2, \ldots x_{N-1}) \propto \prod_{i=0}^{N-2} \psi_i(x_i, x_{i+1})$$
In other cases we may want to add probability terms to existing models. For example, suppose we want to constrain the early mean to be greater than the late mean in the disaster model, so that the joint density becomes:
$$p(y,\tau,\lambda_1,\lambda_2) \propto p(y|\tau,\lambda_1,\lambda_2) p(\tau) p(\lambda_1) p(\lambda_2) I(\lambda_1 \gt \lambda_2)$$
We call such log-probability terms **factor potentials** (Jordan 2004). Bayesian
hierarchical notation doesn't accommodate these potentials.
### Creation of Potentials
A potential can be created via the `Potential` function, in a way very similar to `Deterministic`'s named interface:
```
from pymc3 import Potential
with disaster_model:
rate_constraint = Potential('rate_constraint', switch((late_mean - early_mean)>0, -np.inf, 0))
```
The function takes just a `name` as its first argument and an expression returning the appropriate log-probability as the second argument.
## Sampling with MCMC
PyMC's core business is using Markov chain Monte Carlo to fit virtually any probability model. This involves the assignment and coordination of a suite of **step methods**, each of which is responsible for updating one or more variables.
The user's interface to PyMC's sampling algorithms is the `sample` function:
```python
sample(draws=500, step=None, init='auto', n_init=200000, start=None, trace=None, chain_idx=0, chains=None, cores=None, tune=500, progressbar=True, model=None, random_seed=None, discard_tuned_samples=True, compute_convergence_checks=True)
```
`sample` assigns particular samplers to model variables, and generates samples from them. The `draws` argument
controls the total number of MCMC iterations. PyMC can automate most of the details of sampling, outside of the selection of the number of draws, using default settings for several parameters that control how the sampling is set up and conducted. However, users may manually intervene in the specification of the sampling by passing values to a number of keyword arguments for `sample`.
### Assigning step methods
The `step` argument allows users to assign a MCMC sampling algorithm to the entire model, or to a subset of the variables in the model. For example, if we wanted to use the Metropolis-Hastings sampler to fit our model, we could pass an instance of that step method to `sample` via the `step` argument:
```python
with my_model:
trace = sample(1000, step=Metropolis())
```
or if we only wanted to assign `Metropolis` to a parameter called `β`:
```python
with my_model:
trace = sample(1000, step=Metropolis(vars=[β]))
```
When `step` is not specified by the user, PyMC3 will assign step methods to variables automatically. To do so, each step method implements a class method called `competence`. This method returns a value from 0 (incompatible) to 3 (ideal), based on the attributes of the random variable in question. `sample` assigns the step method that returns the highest competence value to each of its unallocated stochastic random variables. In general:
* Binary variables will be assigned to `BinaryMetropolis` (Metropolis-Hastings for binary values)
* Discrete variables will be assigned to `Metropolis`
* Continuous variables will be assigned to `NUTS` (No U-turn Sampler)
### Starting values
The `start` argument allows for the specification of starting values for stochastic random variables in the model. MCMC algorithms begin by initializing all unknown quantities to arbitrary starting values. Though in theory the value can be any value under the support of the distribution describing the random variable, we can make sampling more difficult if an initial value is chosen in the extreme tail of the distribution, for example. If starting values are not passed by the user, default values are chosen from the mean, median or mode of the distribution.
One might be tempted to initialize an MCMC simulation at the maximum *a posteriori* (MAP) estimate:
```
with Model() as disaster_model:
switchpoint = Uniform('switchpoint', lower=year.min(), upper=year.max())
early_mean = Exponential('early_mean', lam=0.5)
late_mean = Exponential('late_mean', lam=0.5)
rate = switch(switchpoint >= year, early_mean, late_mean)
disasters = Poisson('disasters', rate, observed=disasters_data)
from pymc3 import find_MAP
with disaster_model:
start = find_MAP()
```
Except for small models, starting a sampler at the posterior mode is **not recommended**. As we saw in the introduction to Hamiltonian Monte Carlo, even though the probability density is highest around the mode, the volume of the posterior distribution is very low there. Hence, it is often not in (or near) the typical set.
However, for our small model things should work out okay.
```
start
from pymc3 import sample, Metropolis
with disaster_model:
trace = sample(step=Metropolis(), cores=2, start=start)
```
### Storing samples
Notice in the above call to `sample` that output is assigned to a variable we have called `trace`.
```
trace
```
This `MultiTrace` object is a data structure that stores the samples from an MCMC run in a tabular structure. By default, `sample` will create a new `MultiTrace` object that stores its samples in memory, as a NumPy `ndarray`. We can override the default behavior by specifying the `trace` argument. There are three options:
1. Selecting an alternative database backend instead of keeping samples in an `ndarray`. Passing either `"text"` or `"sqlite"`, for example, will save samples to text files or a SQLite database, respectively. An instance of a backend can also be passed.
2. Passing a list of variables will only record samples for the subset of variables specified in the list. These will be stored in memory.
3. An existing `MultiTrace` object. This will add samples to an existing backend.
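For instance, recording only a subset of variables (option 2) might look like the following sketch (not executed here):
```python
with disaster_model:
    subset_trace = sample(100, trace=[early_mean, late_mean])
```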
```
with disaster_model:
db_trace = sample(100, tune=0, cores=2, trace='sqlite')
# Cleaning up!
!rm mcmc.sqlite
```
I recommend converting MCMC sample output to an ArviZ `InferenceData` object. Output data are stored in a robust and flexible xarray `Dataset`, which allows for easy export to NetCDF for serialization.
```
from arviz import from_pymc3
model_output = from_pymc3(trace)
model_output
```
We will explore the output more in the next section, but the `InferenceData` object stores the posterior samples, the data that was used to fit the model, as well as a number of statistics related to the sampling procedure, which are useful for convergence diagnostic purposes.
```
type(model_output.posterior)
model_output.to_netcdf('trace.netcdf')
```
Serialized `InferenceData` objects can easily be re-imported.
```
from arviz import from_netcdf
imported_model_output = from_netcdf('trace.netcdf')
assert imported_model_output.posterior.equals(model_output.posterior)
#Clean up
!rm trace.netcdf
```
### Parallel sampling
Nearly all modern desktop computers have multiple CPU cores, and running multiple MCMC chains is an **embarrassingly parallel** computing task. It is therefore relatively simple to run chains in parallel in PyMC3. This is done by setting the `cores` argument in `sample` to some value between 2 and the number of cores on your machine (you can specify more chains than cores, but you will not gain efficiency by doing so). The default value of `cores` is `None`, which will select the number of CPUs on your machine, to a maximum of 4.
> Keep in mind that some chains might themselves be multithreaded via openmp or BLAS. In those cases it might be faster to set this to 1.
By default, PyMC3 will sample a minimum of 2 and a maximum of `cores` chains. However, the number of chains can be set independently of the number of cores by specifying the `chains` argument.
```
with disaster_model:
ptrace = sample(100, tune=100, chains=4, cores=2)
```
Running $n$ iterations with $c$ chains will result in $n \times c$ samples.
```
ptrace['early_mean'].shape
```
If you want to specify different arguments for each chain, a list of argument values can be passed to `sample` as appropriate. For example, if we want to initialize random variables to particular (*e.g.* dispersed) values, we can pass a list of dictionaries to `start`:
```
with disaster_model:
ptrace = sample(10, tune=100, cores=2, discard_tuned_samples=False, init=None,
start=[{'early_mean':0.1}, {'early_mean':10}])
[chain[:5] for chain in ptrace.get_values('early_mean', combine=False)]
```
Generating several chains is generally recommended because it aids in model checking, allowing statistics such as the potential scale reduction factor ($\hat{R}$) and effective sample size to be calculated, as we will see in the model checking section.
## Step methods
Step method classes handle individual stochastic variables, or sometimes groups of them. They are responsible for making the variables they handle take **single MCMC steps** conditional on the rest of the model. Each PyMC step method (usually subclasses of `ArrayStep`) implements a method called `astep()`, which is called iteratively by `sample`.
All step methods share an optional argument `vars` that allows a particular subset of variables to be handled by the step method instance. Particular step methods will have additional arguments for setting parameters and preferences specific to that sampling algorithm.
> NB: when a PyMC function or method has an argument called `vars` it is expecting a list of variables (*i.e.* the variables themselves), whereas arguments called `varnames` expect a list of variables names (*i.e.* strings)
### HamiltonianMC
The Hamiltonian Monte Carlo algorithm is implemented in the `HamiltonianMC` class. Being a gradient-based sampler, it is only suitable for **continuous random variables**. Several optional arguments can be provided by the user. The algorithm is **non-adaptive**, so the parameter values passed at instantiation are fixed at those values throughout sampling.
`HamiltonianMC` requires a scaling matrix parameter `scaling`, which is analogous to the variance parameter for the jump proposal distribution in Metropolis-Hastings, although it is used somewhat differently here. The matrix gives an approximate shape of the posterior distribution, so that `HamiltonianMC` does not make jumps that are too large in some directions and too small in other directions. It is important to set this scaling parameter to a reasonable value to facilitate efficient sampling. This is especially true for models that have many unobserved stochastic random variables or models with highly non-normal posterior distributions.
Fortunately, `HamiltonianMC` can often make good guesses for the scaling parameters. If you pass a point in parameter space (as a dictionary of variable names to parameter values, the same format as returned by `find_MAP`), it will look at the **local curvature** of the log posterior-density (the diagonal of the Hessian matrix) at that point to guess values for a good scaling vector. Also, the MAP estimate is often a good point to use to initiate sampling.
- `scaling`
: Scaling for momentum distribution. If a 1-dimensional array is passed, it is interpreted as a matrix diagonal.
- `step_scale`
: Size of steps to take, automatically scaled down by $1/n^{0.25}$. Defaults to .25.
- `path_length`
: Total length to travel during leapfrog. Defaults to 2.
- `is_cov`
: Flag for treating scaling as a covariance matrix/vector, if True. Treated as precision otherwise.
- `step_rand`
: A function which takes the step size and returns a new one, used to randomize the step size at each iteration.
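Putting this together, a typical manual setup might look like the following sketch (not executed here; it assumes the MAP point is a reasonable place to derive the scaling, as described above):
```python
from pymc3 import HamiltonianMC, find_MAP, sample

with disaster_model:
    start = find_MAP()
    # scaling guessed from the local curvature at `start`
    step = HamiltonianMC(scaling=start)
    trace = sample(1000, step=step, start=start)
```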
### NUTS
A disadvantage of the HMC sampler is that there are key hyperparameters that require tuning for sampling to proceed efficiently. Hoffman and Gelman (2014) developed an auto-tuning variant of HMC that takes care of selecting path lengths and step sizes.
`NUTS` is the No U-turn Sampler of Hoffman and Gelman (2014), an adaptive version of Hamiltonian MC that **automatically tunes** the step size and number of steps on the fly. NUTS uses a recursive algorithm to build a set of likely candidate points that spans a wide swath of the target distribution. True to its name, it stops automatically when it starts to double back and retrace its steps.
The algorithm employs **binary doubling**, which takes leapfrog steps alternating in direction with respect to the initial gradient. That is, one step is taken in the forward direction, two in the reverse direction, then four, eight, etc. The result is a balanced, binary tree with nodes comprised of Hamiltonian states.

Doubling process builds a balanced binary tree whose leaf nodes correspond to
position-momentum states. Doubling is halted when the subtrajectory from the
leftmost to the rightmost nodes of any balanced subtree of the overall binary tree starts to double back on itself

To ensure detailed balance, a slice variable is sampled from:
$$ u \sim \text{Uniform}(0, \exp[L(\theta) - 0.5 r \cdot r])$$
where $r$ is the initial momentum vector. The next sample is then chosen uniformly from the points in the remaining balanced tree.
In addition to the arguments to `HamiltonianMC`, `NUTS` takes additional parameters to control the tuning. The most important of these is the target acceptance rate for the Metropolis acceptance phase of the algorithm, `target_accept`.
Sometimes, if NUTS struggles to sample efficiently, raising this parameter above the default target rate of 0.8 will improve sampling (the original recommendation by Hoffman & Gelman was 0.6). However, setting the rate very high will also make the sampler more conservative, taking many small steps at every iteration.
```
with disaster_model:
trace_99 = sample(100, tune=200, cores=2, target_accept=0.99)
```
There is rarely a reason to use `HamiltonianMC` rather than `NUTS`. It is the default sampler for continuous variables in PyMC3.
### Metropolis
``Metropolis`` implements a Metropolis-Hastings step, as described in the theory section, and is designed to handle float- and integer-valued variables.
A `Metropolis` step method can be instantiated with any of several optional arguments:
- `S`
: This sets the proposal standard deviation or covariance matrix.
- `proposal_dist`
: A function that generates zero-mean random deviates used as proposals. Defaults to the normal distribution.
- `scaling`
: An initial scale factor for the proposal
- `tune_interval`
: The number of intervals between tuning updates to `scaling` factor.
When the step method is instantiated, the `proposal_dist` is parameterized with the value passed for `S`. While sampling, the value of `scaling` is used to scale the value proposed by `proposal_dist`, and this value is tuned throughout the MCMC run. During tuning, the acceptance ratio of the step method is examined, and this scaling factor
is updated accordingly. Tuning only occurs when the acceptance rate is **lower than 20%** or **higher than 50%**; rates between 20-50% are considered optimal for Metropolis-Hastings sampling. The default tuning interval (`tune_interval`) is 100 iterations.
Although tuning will continue throughout the sampling loop, it is important to verify that the
**diminishing tuning** condition of [Roberts and Rosenthal (2007)](http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.jap/1183667414) is satisfied: the
amount of tuning should decrease to zero, or tuning should become very infrequent.
`Metropolis` handles discrete variable types automatically by rounding the proposed values and casting them to integers.
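As a sketch (not executed here), a tuned Metropolis sampler could be assigned to specific variables like this; the argument values are illustrative only:
```python
from pymc3 import Metropolis, sample

with disaster_model:
    # re-tune the proposal scaling every 100 iterations
    step = Metropolis(vars=[early_mean, late_mean], tune_interval=100)
    trace = sample(1000, step=step)
```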
### BinaryMetropolis
While binary (boolean) variables can be handled by the `Metropolis` step method, sampling will be very inefficient. The `BinaryMetropolis` class is optimized to handle binary variables, which take only one of two possible values. The only tuneable parameter is the `scaling` argument, which is used to vary the Bernoulli probability:
p_jump = 1. - .5 ** self.scaling
This value is compared to pseudo-random numbers generated by the step method, to determine whether a 0 or 1 is proposed.
`BinaryMetropolis` will be automatically selected for random variables that are distributed as Bernoulli, or categorical with only 2 categories.
### Slice
Though the Metropolis-Hastings algorithm is easy to implement for a variety of models, its efficiency is poor. We have seen that it is possible to tune Metropolis samplers, but it would be nice to have a "black-box" method that works for arbitrary continuous distributions, which we may know little about a priori.
The **slice sampler** (Neal 2003) improves upon the Metropolis sampler by being both efficient and easy to program generally. The idea is to first sample an auxiliary variable $y$, given the current value of $x$, uniformly over the interval $(0, f(x))$, and then, conditional on this value for $y$, to sample $x$ uniformly on the *slice* $S = \{x : y < f(x)\}$.
The steps required to perform a single iteration of the slice sampler to update the current value of $x_i$ is as follows:
1. Sample $y$ uniformly on $(0, f(x_i))$.
2. Use this value $y$ to define a horizontal *slice* $S = \{x : y < f(x)\}$.
3. Establish an interval, $I=(x_a, x_b)$, around $x_i$ that contains most of the slice.
4. Sample $x_{i+1}$ from the region of the slice overlapping $I$.
Hence, slice sampling employs an **auxiliary variable** ($y$) that is not retained at the end of the iteration. Note that in practice one may operate on the log scale such that $g(x) = \log(f(x))$ to avoid floating-point underflow. In this case, the auxiliary variable becomes $z = \log(y) = g(x_i) - e$, where $e \sim \text{Exp}(1)$, resulting in the slice $S = \{x : z < g(x)\}$.
There are many ways of establishing and sampling from the interval $I$, with the only restriction being that the resulting Markov chain leaves $f(x)$ **invariant**. The objective is to include as much of the slice as possible, so that the potential step size can be large, but not (much) larger than the slice, so that the sampling of invalid points is minimized. Ideally, we would like it to be the slice itself, but it may not always be feasible to determine (and certainly not automatically).
In PyMC3, the `Slice` class implements the **univariate** slice sampler. It is suitable for univariate, continuous variables. There is a single user-defined parameter `w`, which sets the width of the initial slice. If not specified, it defaults to a width of 1.
```
from pymc3 import Slice
with disaster_model:
slice_trace = sample(2000, cores=2, step=Slice())
from arviz import plot_trace
plot_trace(slice_trace, var_names=['early_mean','late_mean']);
```
---
## To Learn More
- Hoffman MD, Gelman A. 2014. The No-U-turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo. The Journal of Machine Learning Research. 15(1):1593-1623.
- M.I. Jordan. 2004. Graphical models. Statist. Sci., 19(1):140–155.
- Neal, R. M. 2003. Slice sampling. The Annals of Statistics, 31(3), 705–767. doi:10.1111/1467-9868.00198
<a href="https://colab.research.google.com/github/unica-ml/ml/blob/master/notebooks/ml06.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Elements of Linear Discriminant Functions
This is the notebook associated with Part 4 of the ML course.
Let's start by importing some utility functions.
```
from matplotlib import pyplot as plt
import numpy as np
def plot_function(fun, grid_limits=([0, 0], [1, 1]),
background=False, resolution=0.02, alpha=1.0, loop=False):
"""Plot function on 2D space."""
x1_min, x1_max = grid_limits[0][0], grid_limits[1][0]
x2_min, x2_max = grid_limits[0][1], grid_limits[1][1]
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
x = np.array([xx1.ravel(), xx2.ravel()]).T
if loop:
scores = np.zeros(shape=(x.shape[0],))
for i in range(x.shape[0]):
scores[i] = fun(x[i, :])
else:
scores = fun(x)
Z = scores.reshape(xx1.shape)
if background: # plot decision function
plt.contourf(xx1, xx2, Z, cmap='jet', levels=50, alpha=alpha)
plt.colorbar()
else:
# plot decision boundary
plt.contourf(xx1, xx2, Z, levels=[-0.01, 0, 0.01], colors=('k',))
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
return
def plot_dataset(x, y, feat0=0, feat1=1):
colors = ['r.', 'b.', 'g.', 'k.', 'c.', 'm.']
class_labels = np.unique(y).astype(int)
for k in class_labels:
        plt.plot(x[y == k, feat0], x[y == k, feat1], colors[k % len(colors)])
```
Now let's code a simple class implementing a linear classifier $f(x)=w^T x + b$, and display its decision boundary on a bi-dimensional toy example.
Note that, if we set $h(x) = f(x) \cdot k$ (being $k$ a constant value), we obtain a linear classifier $h(x)$ with $w^\prime = kw$ and $b^\prime = kb$. While this classifier has the same decision boundary (in fact, $h(x)=0$ is equivalent to $f(x)=0$), it exhibits a different slope. For example, if $k>1$ the decision function will change more rapidly around each point $x$. You can compare the plots at the end of this section to note the difference.
```
from sklearn.datasets import make_blobs
class LinearClassifier:
"""Simple class implementing f(x) = w'x +b."""
def __init__(self, w, b):
self.w = w
self.b = b
@property
def w(self):
return self._w
@w.setter
def w(self, w):
self._w = np.array(w)
@property
def b(self):
return self._b
@b.setter
def b(self, b):
self._b = np.array(b)
def decision_function(self, x):
        return x.dot(self.w) + self.b
def predict(self, x):
return np.array(self.decision_function(x) >= 0)
x, y = make_blobs(n_samples=100, n_features=2, centers=2, random_state=3)
w = [1, 1]
b = 0.1
clf = LinearClassifier(w,b)
grid_limits = (x.min(axis=0)-0.5, x.max(axis=0)+0.5)
plt.figure(figsize=(13.5,3))
plt.subplot(1, 3, 1)
plot_dataset(x, y)
plot_function(clf.decision_function, background=False, grid_limits=grid_limits)
plt.axis('equal')
plt.title('Decision boundary at f(x)=0')
plt.xlabel(r'Feature $x_1$')
plt.ylabel(r'Feature $x_2$')
plt.subplot(1, 3, 2)
plot_dataset(x, y)
plot_function(clf.decision_function, background=True, grid_limits=grid_limits)
plt.clim([-20, 20])
plot_function(clf.decision_function, background=False, grid_limits=grid_limits)
plt.axis('equal')
plt.title('f(x)')
plt.xlabel(r'Feature $x_1$')
plt.ylabel(r'Feature $x_2$')
plt.subplot(1, 3, 3)
plot_dataset(x, y)
clf.w = 2*clf.w
clf.b = 2*clf.b
plot_function(clf.decision_function, background=True, grid_limits=grid_limits)
plt.clim([-20, 20])
plot_function(clf.decision_function, background=False, grid_limits=grid_limits)
plt.axis('equal')
plt.title('2*f(x)')
plt.xlabel(r'Feature $x_1$')
plt.ylabel(r'Feature $x_2$')
plt.show()
```
## Optimizing the Loss Function
We have described so far the basics of linear classification, namely, how samples are predicted by a linear classifier.
The question that remains to be addressed is how one can learn the parameters $\theta = (w,b)$ for a linear classifier from the training data $D=(x_i, y_i)_{i=1}^n$.
This is typically achieved by formulating the learning problem as an optimization problem:
$$ \theta^\star \in \arg\min_\theta L(D, \theta),$$
where the objective function $L(D, \theta)$ is a proxy function to evaluating the classification error. This problem is typically solved efficiently via gradient descent.
Depending on the choice of the objective function $L(D, \theta)$, one can implement many different learning algorithms. Note that this formulation also holds for nonlinear classification functions and more complex algorithms, including neural networks and deep-learning algorithms.
Let's start from something easy. First of all, let's assume that the loss function can be decomposed as the sum of the loss on each training point: $L(D, \theta) = \frac{1}{n}\sum_{i=1}^n \ell(y_i, f(x_i; \theta))$.
It is not difficult to see that, if we take $\ell$ to be 1 for wrong predictions and 0 otherwise, $L$ will correspond to the fraction of training points that are wrongly predicted (i.e., the training error). This is called the zero-one loss.
Below, we plot the zero-one loss along with the so-called hinge loss (i.e., its closest convex upper bound) as a function of $y f(x)$.
In fact, loss functions can normally be expressed as a function of the product $y f(x)$, given that, if $y f(x) \geq 0$, the point $x$ is correctly predicted ($y$ and $f$ agree in sign), otherwise it is misclassified.
Here are the equations:
- zero-one loss: $\ell(y, f(x)) = \begin{cases} 1, \; {\rm if} \; yf(x) < 0, \\ 0, \; {\rm otherwise.}\end{cases}$.
- hinge loss: $\ell(y, f(x)) = \max(0, 1-yf(x))$.
```
yf = np.linspace(-3, 3, num=100)
hinge = 1-yf
hinge[hinge<=0]=0
zero_one = yf < 0
plt.figure(figsize=(5,4))
plt.plot(yf, zero_one, 'b', label='zero-one loss')
plt.plot(yf, hinge, 'r', label='hinge loss')
plt.xlabel(r'$y \cdot f(x)$')
plt.ylabel(r'$\ell(y, f(x))$')
plt.title('Loss functions')
plt.legend()
plt.show()
```
Let's have a look at how these losses behave in the space of parameters $(w_1, w_2)$, assuming that $b=0$ (not optimized).
Every point in this space is a linear classifier (i.e., a hyperplane passing through the origin) and we report (using the color axis) the corresponding error on the training set (i.e., the training loss).
```
class Loss:
"""Class implementing basic loss functions."""
def __init__(self, clf, x, y):
self._clf = clf # classifier to be evaluated
self._x = x # training points
self._y = y # training labels
def zero_one_loss(self, w=None):
if w is not None:
self._clf.w = w
y = 2*self._y - 1 # convert {0,1} to {-1,+1}
scores = self._clf.decision_function(self._x)
return np.mean(y*scores < 0)
def hinge_loss(self, w=None):
if w is not None:
self._clf.w = w
y = 2*self._y - 1 # convert {0,1} to {-1,+1}
scores = self._clf.decision_function(self._x)
hinge = 1-y*scores
hinge[hinge <= 0] = 0
return np.mean(hinge)
clf = LinearClassifier(w=[1, 1],b=0)
loss = Loss(clf, x, y)
plt.figure(figsize=(12,4))
plt.subplot(1, 2, 1)
plot_function(loss.zero_one_loss, background=True,
loop=True, resolution=0.1,
grid_limits=([-10,-10], [10, 10]))
plt.xlabel(r'$w_1$')
plt.ylabel(r'$w_2$')
plt.title('zero-one loss')
plt.subplot(1, 2, 2)
plot_function(loss.hinge_loss, background=True,
loop=True, resolution=0.1,
grid_limits=([-10,-10], [10, 10]))
plt.xlabel(r'$w_1$')
plt.ylabel(r'$w_2$')
plt.title('hinge loss')
plt.show()
# we now fix w2=0 and let only w1 change
n_points=100
w1 = np.linspace(-10, 10, num=n_points)
w2 = np.zeros(shape=(n_points,))
w = np.vstack((w1,w2)).T
zero_one = np.zeros(shape=(n_points,))
hinge = np.zeros(shape=(n_points,))
for i in range(n_points):
zero_one[i] = loss.zero_one_loss(w[i, :])
hinge[i] = loss.hinge_loss(w[i, :])
plt.figure(figsize=(12,4))
plt.subplot(1, 2, 1)
plt.plot(w1, zero_one)
plt.xlabel(r'$w_1$')
plt.title(r'zero-one loss (at $w_2=0$)')
plt.subplot(1, 2, 2)
plt.plot(w1, hinge)
plt.xlabel(r'$w_1$')
plt.title(r'hinge loss (at $w_2=0$)')
plt.show()
```
Let's extend our class now by adding the derivative of the hinge, and let's run gradient descent to optimize the loss.
The hinge loss $\ell(y, f(x)) = \max(0, 1-yf(x))$ is not differentiable at the hinge, i.e., when $yf(x)=1$, but subgradients can be used.
In this case, we can assume that the gradient is zero at the hinge. Accordingly, we can set the gradient to zero when the loss is zero, and instead differentiate $1-yf(x)$ w.r.t. $w$ when the hinge loss is not null. We thus get:
$$\nabla_w \ell(y, f(x))=\begin{cases} 0, \; {\rm if} \; 1-yf(x) \leq 0, \\ -yx, \; {\rm otherwise.}\end{cases}$$
We also report the derivative w.r.t. $b$ for completeness:
$$\nabla_b \ell(y, f(x))=\begin{cases} 0, \; {\rm if} \; 1-yf(x) \leq 0, \\ -y, \; {\rm otherwise.}\end{cases}$$
Recall that these are derivatives of the loss computed for each training point. We will then need to average these values over all training points.
```
class LossGrad(Loss):
"""Extend previous class by adding the hinge loss gradient."""
def __init__(self, clf, x, y):
Loss.__init__(self, clf, x, y)
def hinge_loss_gradient(self, w=None):
if w is not None:
self._clf.w = w
y = 2*self._y - 1 # convert {0,1} to {-1,+1}
scores = self._clf.decision_function(self._x)
hinge = 1-y*scores
hinge[hinge <= 0] = 0
grad = np.zeros(shape=self._x.shape) # one grad per point
grad[hinge > 0, :] = self._x[hinge>0, :]
y = np.atleast_2d(y) # required to broadcast (on each column of grad)
grad *= -y.T
return np.mean(grad, axis=0)
# let's start optimizing. We start from w=[10,6]
n_iter = 20
w = np.zeros(shape=(n_iter+1, 2))
hinge = np.zeros(shape=(n_iter+1, )) # objective at w in each iter
w[0, :] = np.array([10., 6.]) # init
clf = LinearClassifier(w=w[0, :], b=0)
loss = LossGrad(clf, x, y)
hinge[0] = loss.hinge_loss(w=clf.w)
eta = 0.5 # gradient step size
for i in range(n_iter):
clf.w -= eta * loss.hinge_loss_gradient(w=clf.w)
w[i+1, :] = clf.w
hinge[i+1] = loss.hinge_loss(w=clf.w)
plt.figure(figsize=(15,3.5))
plt.subplot(1, 3, 1)
plot_function(loss.hinge_loss, background=True,
loop=True, resolution=0.1,
grid_limits=([-10,-10], [10, 10]))
plt.plot(w[:, 0], w[:, 1], 'rx:')
plt.xlabel(r'$w_1$')
plt.ylabel(r'$w_2$')
plt.title('hinge loss')
plt.subplot(1, 3, 2)
plt.plot(hinge)
plt.xlabel('Iteration')
plt.title('hinge loss (along the descent path)')
plt.subplot(1, 3, 3)
plot_dataset(x, y)
for i in range(n_iter+1):
clf.w = w[i, :]
plot_function(clf.decision_function, grid_limits=grid_limits)
plt.show()
```
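As a quick sanity check (a small addition, not part of the original example), we can measure the training accuracy of the final iterate:
```
# Training accuracy of the final classifier
clf.w = w[-1, :]
y_pred = clf.predict(x)
print('Training accuracy:', np.mean(y_pred == y))
```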
# Training LeNet using MNIST and Joey
In this notebook, we will construct and train LeNet using Joey, data from MNIST and the SGD with momentum PyTorch optimizer.
Let's start with importing the prerequisites:
```
import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import joey as ml
import numpy as np
import matplotlib.pyplot as plt
from devito import logger
```
In order to speed up processing, we'll not print performance messages coming from Devito.
```
logger.set_log_noperf()
```
`create_lenet()` returns a `Net` instance representing LeNet.
```
def create_lenet():
# Six 3x3 filters, activation RELU
layer1 = ml.Conv(kernel_size=(6, 3, 3),
input_size=(batch_size, 1, 32, 32),
activation=ml.activation.ReLU())
# Max 2x2 subsampling
layer2 = ml.MaxPooling(kernel_size=(2, 2),
input_size=(batch_size, 6, 30, 30),
stride=(2, 2))
# Sixteen 3x3 filters, activation RELU
layer3 = ml.Conv(kernel_size=(16, 3, 3),
input_size=(batch_size, 6, 15, 15),
activation=ml.activation.ReLU())
# Max 2x2 subsampling
layer4 = ml.MaxPooling(kernel_size=(2, 2),
input_size=(batch_size, 16, 13, 13),
stride=(2, 2),
strict_stride_check=False)
# Full connection (16 * 6 * 6 -> 120), activation RELU
layer5 = ml.FullyConnected(weight_size=(120, 576),
input_size=(576, batch_size),
activation=ml.activation.ReLU())
# Full connection (120 -> 84), activation RELU
layer6 = ml.FullyConnected(weight_size=(84, 120),
input_size=(120, batch_size),
activation=ml.activation.ReLU())
# Full connection (84 -> 10), output layer
layer7 = ml.FullyConnectedSoftmax(weight_size=(10, 84),
input_size=(84, batch_size))
# Flattening layer necessary between layer 4 and 5
layer_flat = ml.Flat(input_size=(batch_size, 16, 6, 6))
layers = [layer1, layer2, layer3, layer4,
layer_flat, layer5, layer6, layer7]
return (ml.Net(layers), layers)
```
A proper training iteration is carried out in `train()`. The inner `loss_grad` function returns, for each sample in the batch, the gradient of the cross-entropy loss with respect to the softmax output. Note that we pass a PyTorch optimizer to `net.backward()`. Joey will take care of using it to update the weights appropriately.
```
def train(net, input_data, expected_results, pytorch_optimizer):
outputs = net.forward(input_data)
    # Gradient of the cross-entropy loss w.r.t. the softmax outputs:
    # the predicted probability minus 1 at the true class index
    def loss_grad(layer, expected):
gradients = []
for b in range(len(expected)):
row = []
for i in range(10):
result = layer.result.data[i, b]
if i == expected[b]:
result -= 1
row.append(result)
gradients.append(row)
return gradients
net.backward(expected_results, loss_grad, pytorch_optimizer)
```
In this example, every batch will consist of 4 images and the training session will be capped at 100 iterations.
```
batch_size = 4
iterations = 100
```
Before starting training, we need to download MNIST data using PyTorch.
```
transform = transforms.Compose(
[transforms.Resize((32, 32)),
transforms.ToTensor(),
transforms.Normalize(0.5, 0.5)])
trainset = torchvision.datasets.MNIST(root='./mnist', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=False, num_workers=2)
classes = ('0', '1', '2', '3', '4', '5', '6', '7', '8', '9')
```
Afterwards, let's instantiate Joey's LeNet along with the SGD with momentum PyTorch optimizer.
```
devito_net, devito_layers = create_lenet()
optimizer = optim.SGD(devito_net.pytorch_parameters, lr=0.001, momentum=0.9)
```
We're almost ready! The last thing to do is saving our original parameters as they will be required for making later comparisons with PyTorch.
```
layer1_kernel = torch.tensor(devito_layers[0].kernel.data)
layer1_bias = torch.tensor(devito_layers[0].bias.data)
layer3_kernel = torch.tensor(devito_layers[2].kernel.data)
layer3_bias = torch.tensor(devito_layers[2].bias.data)
layer5_kernel = torch.tensor(devito_layers[5].kernel.data)
layer5_bias = torch.tensor(devito_layers[5].bias.data)
layer6_kernel = torch.tensor(devito_layers[6].kernel.data)
layer6_bias = torch.tensor(devito_layers[6].bias.data)
layer7_kernel = torch.tensor(devito_layers[7].kernel.data)
layer7_bias = torch.tensor(devito_layers[7].bias.data)
```
We can start the Joey training session now.
```
for i, data in enumerate(trainloader, 0):
images, labels = data
    images = images.double()
train(devito_net, images, labels, optimizer)
if i == iterations - 1:
break
```
Afterwards, let's create a PyTorch equivalent of Joey's LeNet, train it using the same initial weights and data and compare the results.
```
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 6, 3)
self.conv2 = nn.Conv2d(6, 16, 3)
self.fc1 = nn.Linear(16 * 6 * 6, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
x = F.max_pool2d(F.relu(self.conv2(x)), 2)
x = x.view(-1, self.num_flat_features(x))
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
def num_flat_features(self, x):
size = x.size()[1:]
num_features = 1
for s in size:
num_features *= s
return num_features
net = Net()
net.double()
with torch.no_grad():
net.conv1.weight[:] = layer1_kernel
net.conv1.bias[:] = layer1_bias
net.conv2.weight[:] = layer3_kernel
net.conv2.bias[:] = layer3_bias
net.fc1.weight[:] = layer5_kernel
net.fc1.bias[:] = layer5_bias
net.fc2.weight[:] = layer6_kernel
net.fc2.bias[:] = layer6_bias
net.fc3.weight[:] = layer7_kernel
net.fc3.bias[:] = layer7_bias
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
criterion = nn.CrossEntropyLoss()
for i, data in enumerate(trainloader, 0):
images, labels = data
optimizer.zero_grad()
outputs = net(images.double())
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
if i == iterations - 1:
break
layers = [devito_layers[0], devito_layers[2], devito_layers[5], devito_layers[6], devito_layers[7]]
pytorch_layers = [net.conv1, net.conv2, net.fc1, net.fc2, net.fc3]
max_error = 0
index = -1
for i in range(5):
kernel = layers[i].kernel.data
pytorch_kernel = pytorch_layers[i].weight.detach().numpy()
kernel_error = abs(kernel - pytorch_kernel) / abs(pytorch_kernel)
bias = layers[i].bias.data
pytorch_bias = pytorch_layers[i].bias.detach().numpy()
bias_error = abs(bias - pytorch_bias) / abs(pytorch_bias)
error = max(np.nanmax(kernel_error), np.nanmax(bias_error))
print('layers[' + str(i) + '] maximum relative error: ' + str(error))
if error > max_error:
max_error = error
index = i
print()
print('Maximum relative error is in layers[' + str(index) + ']: ' + str(max_error))
```
As we can see, the maximum relative error is low enough to consider the training session in Joey numerically correct.
# 📃 Solution for Exercise M6.04
The aim of this exercise is to:
* verify if a GBDT tends to overfit if the number of estimators is not
appropriate as previously seen for AdaBoost;
* use the early-stopping strategy to avoid adding unnecessary trees, to
get the best statistical performances.
We will use the California housing dataset to conduct our experiments.
```
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
data, target = fetch_california_housing(return_X_y=True, as_frame=True)
target *= 100 # rescale the target in k$
data_train, data_test, target_train, target_test = train_test_split(
data, target, random_state=0, test_size=0.5)
```
<div class="admonition note alert alert-info">
<p class="first admonition-title" style="font-weight: bold;">Note</p>
<p class="last">If you want a deeper overview regarding this dataset, you can refer to the
Appendix - Datasets description section at the end of this MOOC.</p>
</div>
Similarly to the previous exercise, create a gradient boosting decision tree
and create a validation curve to assess the impact of the number of trees
on the statistical performance of the model. Use the mean absolute error
to assess the statistical performance of the model.
```
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import validation_curve
gbdt = GradientBoostingRegressor()
param_range = np.unique(np.logspace(0, 1.8, num=30).astype(int))
train_scores, test_scores = validation_curve(
gbdt,
data_train,
target_train,
param_name="n_estimators",
param_range=param_range,
scoring="neg_mean_absolute_error",
n_jobs=-1,
)
train_errors, test_errors = -train_scores, -test_scores
import matplotlib.pyplot as plt
plt.errorbar(
param_range,
train_errors.mean(axis=1),
yerr=train_errors.std(axis=1),
label="Training score",
)
plt.errorbar(
param_range,
test_errors.mean(axis=1),
yerr=test_errors.std(axis=1),
label="Cross-validation score",
)
plt.legend()
plt.ylabel("Mean absolute error in k$\n(smaller is better)")
plt.xlabel("# estimators")
_ = plt.title("Validation curve for GBDT regressor")
```
Unlike AdaBoost, the gradient boosting model will always improve when
increasing the number of trees in the ensemble. However, it will reach a
plateau where adding new trees will just make fitting and scoring slower.
To avoid adding unnecessary new trees, gradient boosting offers an
early-stopping option. Internally, the algorithm will use an out-of-sample
set to compute the statistical performance of the model at each addition of a
tree. Thus, if the statistical performance is not improving for several
iterations, it will stop adding trees.
Now, create a gradient-boosting model with `n_estimators=1000`. This number
of trees will be too large. Change the parameter `n_iter_no_change` such
that the gradient boosting fitting will stop after adding 5 trees that do not
improve the overall statistical performance.
```
gbdt = GradientBoostingRegressor(n_estimators=1000, n_iter_no_change=5)
gbdt.fit(data_train, target_train)
gbdt.n_estimators_
```
We see that the number of trees used is far below 1000 with the current
dataset. Training the GBDT with the entire 1000 trees would have been
useless.
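As an extra check (not part of the original exercise), we can evaluate the early-stopped model on the held-out test set with the same metric used for the validation curve:
```
from sklearn.metrics import mean_absolute_error

target_predicted = gbdt.predict(data_test)
print(f"Test MAE: {mean_absolute_error(target_test, target_predicted):.2f} k$")
```
Note that the internal monitoring set for early stopping is carved out of the training data via the `validation_fraction` parameter (10% by default), so no additional split is required.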
# Logarithm
Here we analyse how accurate are the approximate functions for Logarithm
We compare two methods:
- Newton Raphson
- 6th order HouseHolder
We show how they perform in the context of encrypted computation, show that 6th order HouseHolder is better suited and discuss how to improve initialization of this method.
### Define a benchmark method
```
import os, sys
sys.path.insert(1, os.path.join(sys.path[0], '..'))
import torch as th
import matplotlib.pyplot as plt
import numpy as np
def benchmark(real_func, approx_func, interval, n_points=100, approx_kwargs={}, forward_transformer=lambda x:x, backward_transformer=lambda x:x):
"""
Benchmark an approximation function compared to an exact function.
Compute and plot the relative error.
Args:
real_func: exact reference function (e.g. th.log)
approx_func: approximation function to benchmark
interval: (start, stop) tuple defining the evaluation range
n_points: number of evaluation points
approx_kwargs: optional kwargs to provide to the approximation function
forward_transformer: optional input transformation to apply before calling approx_func
backward_transformer: optional output transformation to apply after calling approx_func
"""
start, stop = interval
points = np.linspace(start, stop, num=n_points)
real_values = []
approx_values = []
for x in points:
x = th.tensor([x])
real_value = real_func(x)
real_values.append(real_value.item())
x_syft = forward_transformer(x)
approx_value_syft = approx_func(x_syft, **approx_kwargs)
approx_value = backward_transformer(approx_value_syft)
approx_values.append(approx_value.item())
plt.figure(figsize=(15,4))
plt.subplot(121, title="Real and approximate logarithm")
real_values = np.array(real_values)
approx_values = np.array(approx_values)
plt.plot(points, real_values)
plt.plot(points, approx_values)
plt.subplot(122, title="Relative error")
norm_diff = 2 * np.abs(real_values - approx_values)/np.abs(real_values + approx_values)
plt.plot(points, norm_diff)
plt.show()
```
## 1. Using the Newton Raphson method
```
from funcs import log_newton, log_householder
```
## 1.A Approximation alone
We analyse here the loss incurred by the approximation using only normal pytorch tensors
```
if not hasattr(th, 'native_exp'):
th.native_exp = th.exp
def hook_exp(x, **kwargs):
return th.native_exp(x)
th.exp = hook_exp
th.Tensor.refresh = lambda x:x
benchmark(
th.log,
log_newton,
interval = (3, 15),
approx_kwargs={'iterations': 3}
)
```
This is great but it is limited to a small interval $[3, 15]$. On a full range interval $[0.1, 250]$ it behaves poorly. We show here the result with different numbers of iterations.
```
for it in [0, 1, 2, 3]:
benchmark(
th.log,
log_newton,
interval = (0.1, 250),
approx_kwargs={'iterations': it}
)
```
## 1.B Approximation with AdditiveSharingTensors
```
import syft as sy
hook = sy.TorchHook(th)
bob = sy.VirtualWorker(hook, id="bob")
alice = sy.VirtualWorker(hook, id="alice")
charlie = sy.VirtualWorker(hook, id="charlie")
crypto = sy.VirtualWorker(hook, id="crypto_provider")
th.Tensor.native_refresh = th.Tensor.refresh
benchmark(
th.log,
log_newton,
interval = (0.1, 250),
n_points=20,
approx_kwargs={'iterations': 2, 'exp_iterations': 8},
forward_transformer=lambda x: x.fix_precision(precision_fractional=5).share(alice, bob, crypto_provider=crypto),
backward_transformer=lambda x: x.get().float_precision()
)
```
Interestingly here, the approximation only works on a given range, roughly $[70:160]$
```
benchmark(
th.log,
log_newton,
interval = (70, 160),
n_points=20,
approx_kwargs={'iterations': 2, 'exp_iterations': 8},
forward_transformer=lambda x: x.fix_precision(precision_fractional=5).share(alice, bob, crypto_provider=crypto),
backward_transformer=lambda x: x.get().float_precision()
)
```
With more iterations $2 \rightarrow 8$, results are a bit better but are much more expensive to compute:
```
benchmark(
th.log,
log_newton,
interval = (70, 160),
n_points=20,
approx_kwargs={'iterations': 8, 'exp_iterations': 8},
forward_transformer=lambda x: x.fix_precision(precision_fractional=5).share(alice, bob, crypto_provider=crypto),
backward_transformer=lambda x: x.get().float_precision()
)
```
### Remarks
- The approximation and its range of validity depend on the initialization chosen
Here is an alternate initialization
```
from funcs import exp
def log_newton(x, iterations=2, exp_iterations=8):
"""Approximates the logarithm using the Newton Raphson method
Args:
iterations (int): number of iterations for Newton Raphson approximation.
exp_iterations (int): number of iterations for limit approximation of exp
.. inspired by https://github.com/facebookresearch/CrypTen
"""
# PREVIOUS:
y = x / 40 + 1.9 - 8 * exp(-2 * x - 0.3, iterations=exp_iterations)
# NEW:
#y = x / 120 - 20 * exp(-2 * x - 1.0, iterations=exp_iterations) + 3.0
for i in range(iterations):
h = [1 - x * exp((-y).refresh(), iterations=exp_iterations)]
for i in range(1, 5):
h.append(h[-1] * h[0])
y -= h[0] * (1 + h[0] + h[1] + h[2] + h[3] + h[4])
return y
```
The range of validity is now very different!
```
benchmark(
th.log,
log_newton,
interval = (0.1, 250),
n_points=20,
approx_kwargs={'iterations': 2, 'exp_iterations': 8},
forward_transformer=lambda x: x.fix_precision(precision_fractional=5).share(alice, bob, crypto_provider=crypto),
backward_transformer=lambda x: x.get().float_precision()
)
```
On $[5:23]$:
```
benchmark(
th.log,
log_newton,
interval = (5, 23),
n_points=20,
approx_kwargs={'iterations': 2, 'exp_iterations': 8},
forward_transformer=lambda x: x.fix_precision(precision_fractional=5).share(alice, bob, crypto_provider=crypto),
backward_transformer=lambda x: x.get().float_precision()
)
```
The reason for this is that Newton's method is really unstable; in section 2 we study the HouseHolder method, which is a better fit for this task.
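For intuition, here is a sketch of what both updates are doing, based on the update rules visible in `log_newton` above and in `ApproxModel` from section 3 below (the imported `log_householder` is assumed to match the latter). Both methods iterate on the residual $h = 1 - x e^{-y}$. Writing $y^* = \log x$, we have $x e^{-y} = 1 - h$ and

$$
y^* = y + \log(x e^{-y}) = y + \log(1 - h) = y - \left(h + \frac{h^2}{2} + \frac{h^3}{3} + \dots\right).
$$

The HouseHolder-style update $y \leftarrow y - h\left(1 + \frac{h}{2} + \frac{h^2}{3} + \frac{h^3}{4} + \frac{h^4}{5} + \frac{h^5}{6}\right)$ is exactly this series truncated after six terms, whereas the Newton variant uses $h\left(1 + h + h^2 + h^3 + h^4 + h^5\right)$, which over-corrects as soon as $h$ is not small — one way to see why it is less stable.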
# 2. Using the HouseHolder method
## 2.A Approximation alone
We analyse here the loss incurred by the approximation using only normal pytorch tensors
```
th.Tensor.refresh = lambda x:x
benchmark(
th.log,
log_householder,
interval = (0.1, 250),
n_points=20,
approx_kwargs={'iterations': 2, 'exp_iterations': 8}
)
```
Results are much better with this approximation, right?
What about adding AdditiveSharingTensors in the loop?
## 2.B Approximation with AdditiveSharingTensors
_We re-instantiate refresh as we work with AdditiveSharingTensors_
```
th.Tensor.refresh = th.Tensor.native_refresh
benchmark(
th.log,
log_householder,
interval = (0.1, 250),
n_points=20,
approx_kwargs={'iterations': 2, 'exp_iterations': 8},
forward_transformer=lambda x: x.fix_precision(precision_fractional=5).share(alice, bob, crypto_provider=crypto),
backward_transformer=lambda x: x.get().float_precision()
)
```
This is still very good!
One interesting question now is to see how the initialisation provided influences the global approximation. We'll investigate in the following part how to find the best initialisation.
# 3. Optimisation of the initialisation
```
import torch as th
import torch.nn as nn
from funcs import exp_limit
class ApproxModel(nn.Module):
def __init__(self):
super(ApproxModel, self).__init__()
self.w1 = nn.Parameter(th.tensor(1/120.))
self.b1 = nn.Parameter(th.tensor(3.))
self.alpha = nn.Parameter(th.tensor(-20.))
self.w2 = nn.Parameter(th.tensor(-2.))
self.b2 = nn.Parameter(th.tensor(-1.))
def forward(self, x):
y = x * self.w1 + self.b1 + self.alpha * exp_limit(x * self.w2 + self.b2)
for i in range(2):
h = [1 - x * exp_limit(-y)]
for i in range(1, 5):
h.append(h[-1] * h[0])
y -= h[0] * (1 + h[0] / 2 + h[1] / 3 + h[2] / 4 + h[3] / 5 + h[4] / 6)
return y
# Training settings
model = ApproxModel()
optimizer = th.optim.Adam(params=model.parameters(), lr=0.001)
n_points = 1000
batch_size = 100
# 1. Built the training set
# np.logspace(-3, 2.4) is a range from 0.001 to 250
data = th.tensor(np.logspace(-3, 2.4, num=n_points))
# permute data and reshape
data = data[th.randperm(n_points)].view(-1, 1)
# 2. compute the target
target = th.log(data)
for epoch in range(10000):
# randomly shuffle at each epoch
rand_idx = th.randperm(n_points)
for i in range(int(n_points/batch_size)):
if i == 1 and epoch % 100 == 0:
print(
round(1/model.w1.item(), 2),
round(model.b1.item(), 2),
round(model.alpha.item(), 2),
round(model.w2.item(), 2),
round(model.b2.item(), 2),
loss.item()
)
data_batch = data[rand_idx[i:i+batch_size]]
target_batch = target[rand_idx[i:i+batch_size]]
optimizer.zero_grad()
pred = model(data)
# the loss chosen is a normalized MSE
loss = (((pred - target)/(pred + target))**2).mean()
loss.backward()
optimizer.step()
```
The params seem to be converging, we will keep those one for our implementation. Note that the relative error is very small and is close to 10e-3.
# Ray Crash Course - Exercise Solutions
© 2019-2021, Anyscale. All Rights Reserved

This notebook discusses solutions for the exercises in the _crash course_.
## 01 Ray Crash Course - Tasks - Exercise 1
As currently written, the memory footprint of `estimate_pi` scales linearly with `N`, because it allocates two NumPy arrays of size `N`. This limits the size of `N` we can evaluate (as I confirmed by locking up my laptop...). However, this isn't actually necessary. We could do the same calculation in "blocks", for example `m` blocks of size `N/m`, and then combine the results. Furthermore, there are no dependencies between the calculations for those blocks, giving us further potential speed-up by parallelizing them with Ray.
Adapt `ray_estimate_pi` to use this technique. Pick some `N` value above which the calculation is done in blocks. Compare the performance of the old vs. new implementation.
As you do this exercise, you might ponder the fact that we often averaged multiple trials for a given `N` and then ask yourself, what's the difference between averaging `10` trials for `N = 1000` vs. `1` trial for `N = 10000`, for example?
First, import things we need and redefine functions and data we need from the notebook:
```
import numpy as np
import sys, time, statistics, math
import ray
sys.path.append('..')
from pi_calc import str_large_n
trials = 5
ray.init(ignore_reinit_error=True)
print(f'Dashboard URL: http://{ray.get_dashboard_url()}')
```
Here's `estimate_pi` again, but now we'll also return the counts, for reasons we'll discuss shortly.
```
def estimate_pi(num_samples):
xs = np.random.uniform(low=-1.0, high=1.0, size=num_samples) # Generate num_samples random samples for the x coordinate.
ys = np.random.uniform(low=-1.0, high=1.0, size=num_samples) # Generate num_samples random samples for the y coordinate.
xys = np.stack((xs, ys), axis=-1) # Like Python's "zip(a,b)"; creates np.array([(x1,y1), (x2,y2), ...]).
inside = xs*xs + ys*ys <= 1.0 # Creates a predicate over all the array elements.
xys_inside = xys[inside] # Selects only those "zipped" array elements inside the circle.
in_circle = xys_inside.shape[0] # Return the number of elements inside the circle.
approx_pi = 4.0*in_circle/num_samples # The Pi estimate.
return approx_pi, in_circle, num_samples
```
Here's the original `ray_estimate_pi`, but now it will also return the counts, not just $\pi$.
```
@ray.remote
def ray_estimate_pi(num_samples):
return estimate_pi(num_samples)
fmt = '{:10.5f} seconds: pi ~ {:7.6f}, stddev = {:5.4f}, error = {:5.4f}%'
```
Here's `ray_try_it`, but now we handle the additional returned values from `ray_estimate_pi`:
```
def ray_try_it(n, trials):
print('trials = {:5d}, N = {:s}: '.format(trials, str_large_n(n, padding=15)), end='') # str_large_n imported above.
start = time.time()
refs = [ray_estimate_pi.remote(n) for _ in range(trials)]
pis_counts = ray.get(refs)
pis = list(map(lambda t: t[0], pis_counts))
approx_pi = statistics.mean(pis)
stdev = 0.0 if trials == 1 else statistics.stdev(pis)
duration = time.time() - start
error = (100.0*abs(approx_pi-np.pi)/np.pi)
print(fmt.format(duration, approx_pi, stdev, error)) # str_large_n imported above.
return trials, n, duration, approx_pi, stdev, error
```
First, let's look at the "ponder" question at the end, just using the original implementation. We'll do a few runs of the following cell. Note that we're using large maximum `n` values here. If you are working on a slow machine or VM, consider deleting the last value `10000000` here and below:
```
for n in [1000, 10000, 100000, 1000000, 10000000]:
ray_try_it(n, round(10000000/n))
for n in [1000, 10000, 100000, 1000000, 10000000]:
ray_try_it(n, round(10000000/n))
for n in [1000, 10000, 100000, 1000000, 10000000]:
ray_try_it(n, round(10000000/n))
```
The standard deviation is misleading now, because the number of trials changes. The errors are roughly within an order of magnitude, due in part to expected statistical variation. Generally speaking, larger `N` and lower `trials` had lower errors. This may be due to the other big source of variation, the inevitable rounding error computing $\pi$ (`4 * inside_count/N`), one time per trial (`1` to `10,000` times). Experiments are supposed to eliminate as many extraneous variables as possible, so I would argue that sticking to one value for `trials` and varying `N` is more meaningful. In fact, in the implementation that follows, we'll eliminate the potential rounding error variation by keeping track of the inside and total counts, then computing $\pi$ once at the end.
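For intuition on the ponder question (a standard Monte Carlo argument, not derived in the lesson): each sample lands inside the circle with probability $p = \pi/4$, so the estimator $\hat{\pi} = 4 \cdot inside\_count/N$ has standard error $4\sqrt{p(1-p)/N} \approx 1.64/\sqrt{N}$. Pooling the counts, `10` trials of `N = 1000` and `1` trial of `N = 10000` use the same total number of samples and therefore yield estimates with the same variance.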
First, a function to return sample sizes for a given `N` and `m`.
```
def sample_sizes(N, m):
ranges = [(m*i, m*(i+1)) for i in range(math.ceil(N/m))]
if ranges[-1][1] > N:
ranges[-1] = (ranges[-1][0], N)
return list(map(lambda x: x[1]-x[0], ranges))
@ray.remote
def ray_estimate_pi_blocks(num_samples, m):
"""
Perform the estimate in blocks up to ``m`` samples in size. A more user-friendly solution would embed logic to
determine a reasonably good ``m`` value, but for our purposes, passing in ``m`` is more convenient.
"""
sizes = sample_sizes(num_samples, m)
refs = [ray_estimate_pi.remote(size) for size in sizes]
values = ray.get(refs) # Not using ray.wait() is okay; the tasks are all roughly the same size
inside_count = 0
total_count = 0
for _, icount, tcount in values: # Toss the pi value returned
inside_count += icount
total_count += tcount
return 4.0*inside_count/total_count, inside_count, total_count
```
Let's try it:
```
for m in [10000, 100000, 1000000]:
print(f'm = {m}:')
for n in [1000, 10000, 100000, 1000000, 10000000, 100000000]:
start = time.time()
approx_pi, inside_count, total_count = ray.get(ray_estimate_pi_blocks.remote(n, m))
duration = time.time() - start
print(f'{n:15}: duration = {duration:6.5} seconds, pi = {approx_pi:6.5}, # inside/outside = {inside_count:12}/{total_count}')
```
Let's compare to the original implementation:
```
for n in [1000, 10000, 100000, 1000000, 10000000, 100000000]:
start = time.time()
approx_pi, inside_count, total_count = ray.get(ray_estimate_pi.remote(n))
duration = time.time() - start
print(f'{n:15}: duration = {duration:6.5} seconds, pi = {approx_pi:6.5}, # inside/outside = {inside_count:12}/{total_count}')
```
Note that for larger `N`, the `ray_estimate_pi_blocks` times grow noticeably more slowly than the original implementation's, e.g., for the highest `N`, `100,000,000`, the durations are approximately `1.2` seconds vs. `9.6` seconds.
## 01 Ray Crash Course - Tasks - Exercise 2
What `N` value is needed to get a reliable estimate to five decimal places, `3.1415` (for some definition of "reliable")? If you have a powerful machine or a cluster, you could try a higher accuracy. You'll need to use the solution to Exercise 1 or you can make a guess based on the results we've already seen in this notebook.
To use the solution from Exercise 1, we'll need a modified `ray_try_it` to add the `m` blocks parameter:
```
def ray_try_it_blocks(n, m, trials):
print('trials = {:5d}, N = {:s}: '.format(trials, str_large_n(n, padding=15)), end='') # str_large_n imported above.
start = time.time()
refs = [ray_estimate_pi_blocks.remote(n, m) for _ in range(trials)]
pis_counts = ray.get(refs)
pis = list(map(lambda t: t[0], pis_counts))
approx_pi = statistics.mean(pis)
stdev = 0.0 if trials == 1 else statistics.stdev(pis)
duration = time.time() - start
error = (100.0*abs(approx_pi-np.pi)/np.pi)
print(fmt.format(duration, approx_pi, stdev, error)) # str_large_n imported above.
return trials, n, duration, approx_pi, stdev, error
```
Let's compute the error we would have to achieve for this accuracy.
```
target_error = 100*abs(3.1415 - np.pi)/np.pi
target_error
```
Okay, let's keep trying bigger `N` until we get to this number, but now we need to pick a definition of "reliable", because the results will depend on the number of `trials` we do. Also, some experiments will get "lucky" for relatively low `N` values.
> **WARNING:** This could take a while. You could choose a less accurate error goal if you have limited compute resources.
```
N = 100
error = 10.0
while error > target_error:
N *= 10
_, _, duration, approx_pi, _, error = ray_try_it_blocks(N, 1000000, trials)
if N > 100000000:
print("Stopping so we don't crash the machine...")
break
print(f'{N} samples is sufficient to get the error below {target_error}%')
```
You should run the previous cell several times. Some runs might succeed with `N = 100,000`, while more often it will be above 1M or 10M.
## 01 Ray Crash Course - Tasks - Exercise 3
For small computation problems, Ray adds enough overhead that its benefits are outweighed. You can see from the performance graphs in the lesson that smaller `N` or smaller trial values will likely cause the performance curves to cross. Try small values of `N` and small trial numbers. When do the lines cross? Try timing individual runs for small `N` around the crossing point. What can you infer from this "tipping point" about appropriate sizing of tasks, at least for your test environment?
First, here is more code from the notebook. Here is `try_it`, modified to handle the extra return values from the modified `estimate_pi`:
```
def try_it(n, trials):
print('trials = {:3d}, N = {:s}: '.format(trials, str_large_n(n, padding=12)), end='') # str_large_n imported above.
start = time.time()
pis_counts = [estimate_pi(n) for _ in range(trials)]
pis = list(map(lambda t: t[0], pis_counts))
approx_pi = statistics.mean(pis)
stdev = statistics.stdev(pis)
duration = time.time() - start
error = (100.0*abs(approx_pi-np.pi)/np.pi)
print(fmt.format(duration, approx_pi, stdev, error)) # str_large_n imported above.
return trials, n, duration, approx_pi, stdev, error
small_ns = [1, 10, 100, 1000, 10000, 100000]
data_ns = [try_it(n, trials) for n in small_ns]
ray_data_ns = [ray_try_it(n, trials) for n in small_ns]
np_data_ns = np.array(data_ns)
np_ray_data_ns = np.array(ray_data_ns)
from bokeh_util import two_lines_plot, means_stddevs_plot # Some plotting utilities in `./bokeh_util.py`.
from bokeh.plotting import show, figure
from bokeh.layouts import gridplot
two_lines = two_lines_plot(
"N vs. Execution Times (Smaller Is Better)", 'N', 'Time', 'No Ray', 'Ray',
np_data_ns[:,1], np_data_ns[:,2], np_ray_data_ns[:,1], np_ray_data_ns[:,2],
x_axis_type='log', y_axis_type='log')
show(two_lines, plot_width=800, plot_height=400)
```
(If you can't see it, click [here](../../images/Pi-small-Ns-vs-times.png).)
Let's calculate the `N` where they cross:
```
for i in range(len(small_ns)):
if data_ns[i][2] >= ray_data_ns[i][2]:  # compare the duration column (index 2), not the whole tuples
print(f'Crossing point: N = {small_ns[i]}')
```
## 02 Ray Crash Course - Actors - Exercise 1
You are asked these questions about the `Counter` vs. `RayCounter` performance:
> Ignoring pause = 0, can you explain why the Ray times are almost, but slightly larger than the non-ray times consistently? Study the implementations for `ray_counter_trial` and `RayCounter`. What code is synchronous and blocking vs. concurrent? In fact, is there _any_ code that is actually concurrent when you have just one instance of `Counter` or `RayCounter`?
Here is `ray_counter_trial` again, with comments about concurrency vs. synchronous blocking calls:
```
def ray_counter_trial(count_to, num_counters = 1, pause = 0.01):
print('ray: count_to = {:5d}, num counters = {:4d}, pause = {:5.3f}: '.format(count_to, num_counters, pause), end='')
start = time.time()
final_count_futures = []
# Actor instantiation blocks, but returns almost immediately. The actor creation overhead is low. It is a little bit larger
# than normal class instantiation, but insignificant for overall performance.
counters = [RayCounter.remote(pause) for _ in range(num_counters)]
for i in range(num_counters):
for n in range(count_to):
counters[i].next.remote() # Nonblocking, so will be faster for long pause scenarios...
final_count_futures.append(counters[i].get_count.remote())
ray.get(final_count_futures) # but block until all invocations are finished!
duration = time.time() - start
print('time = {:9.5f} seconds'.format(duration))
return count_to, num_counters, pause, duration
```
Both `next` methods, in `Counter` and `RayCounter`, call `time.sleep(pause)` before completing, but for `RayCounter` it runs asynchronously, while for `Counter` it blocks. You do have to block to get the current count, and if lots of async invocations of `next` are being processed, a call to `ray.get(actor.get_count.remote())` will block until all of them are finished.
Hence, the reason a single `RayCounter` instance never outperforms a `Counter` instance is that _all_ the code in `ray_counter_trial` becomes effectively _synchronous_ because of the single line `ray.get(final_count_futures)`. Since the Ray implementation adds extra overhead, it will always take a little longer.
The real benefit is running many counters concurrently. `ray_counter_trial` does this seamlessly, while `counter_trial` remains fully synchronous.
At the end of the exercise is this statement and question:
> Once past zero pauses, the Ray overhead is constant. It doesn't grow with the pause time. Can you explain why it doesn't grow?
The Ray overhead doesn't change because the number of Ray-related invocations doesn't change as the pause time grows. We still use one counter instance and ten invocations of it. Hence the overhead is a constant, even though the method invocations will take longer to complete, depending on the `pause` value.
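One way to see this with a rough cost model (an approximation, not measured in the lesson): with a single actor, the wall-clock time is roughly $T_{ray} \approx c + n\tau$, where $n$ is the number of `next` calls, $\tau$ is the pause, and $c$ is the fixed Ray dispatch overhead, while the synchronous version takes $T_{sync} \approx n\tau$. The gap $T_{ray} - T_{sync} \approx c$ is therefore independent of $\tau$.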
# 03 Ray Crash Course - Why Ray?
There were no exercises for this lesson.
# 04 Ray Crash Course - Python Multiprocessing with Ray
There were no exercises for this lesson.
# 05 Ray Crash Course - Ray Parallel Iterators - Exercises 1-3
Here we combine the solutions for the first three exercises. This code is also available as a complete, standalone Ray program in [word-count-exercises.py](word-count-exercises.py).
```
import glob, gzip, re, sys, os
import numpy as np
class WordCount:
"Wraps a dictionary of words and counts."
def __init__(self):
self.counts = {}
def __call__(self, word, increment):
count = increment
if word in self.counts:
count = self.counts[word]+increment
self.counts[word] = count
return (word, count)
def sort_counts(self, descending=True):
"Returns a generator of word-count pairs sorted by count."
return (wc for wc in sorted(self.counts.items(), key = lambda wc: wc[1], reverse=descending))
def unzip(f):
if f.endswith(".gz"):
return gzip.open(f)
else:
return open(f, 'r')
# Exercise 3: Remove stop words. Edit this set to taste!
stop_words1 = {
'that', 'the', 'this', 'an',
'and', 'or', 'but', 'of'
}
## All the single digits and ASCII letters:
l=[str(i) for i in range(10)]
l.extend([chr(i) for i in range(ord('a'), ord('z')+1)])
stop_words = stop_words1.union(set(l))
def is_stop_word(word):
"""
Treat all single-character words, blanks, and integers as stop words.
(Try adding floating point numbers.)
Otherwise, check for membership in a set of words.
We use a set because it provides O(1) lookup!
"""
w = word.strip()
if len(w) <= 1 or w.isdigit():
return True
return w in stop_words
def count_words(file_globs, top_n = 100, batch_window = 1024):
# The working directory of this application may be _different_
# than the Ray cluster's working directory. (In a real cluster,
# the files available will be different, too, but we'll ignore
# the problem here.) So, we need to pass absolute paths or our
# ray.util.iter.from_items won't find the files!
globs = [g for f in file_globs for g in glob.glob(f)]
file_list = list(map(lambda f: os.path.abspath(f), globs))
print(f'Processing {len(file_list)} files: {file_list}')
# Exercise 1: use combine instead of for_each(...).flatten(...).
# We replace two occurrences:
word_count = (
ray.util.iter.from_items(file_list, num_shards=4)
.combine(lambda f: unzip(f).readlines())
# Exercise 2: convert to lower case!
.combine(lambda line: re.split('\W+', line.lower())) # split into words.
# Exercise 3: remove stop words.
.filter(lambda word: not is_stop_word(word))
.for_each(lambda word: (word, 1))
.batch(batch_window)
)
# Combine the dictionaries of counts across shards with a sliding window
# of "batch_window" lines.
wordCount = WordCount()
for shard_counts in word_count.gather_async():
for word, count in shard_counts:
wordCount(word, count)
sorted_list_iterator = wordCount.sort_counts()
return [sorted_list_iterator.__next__() for i in range(top_n)]
%time word_counts = count_words(['../*.ipynb'], top_n=100) # The notebooks are now in the parent directory.
word_counts
```
# 05 Ray Crash Course - Ray Parallel Iterators - Exercise 4
Now let's run `count_words` on the `README.md` for the tutorial repo:
```
%time word_counts_readme = count_words(['../../README.md'], top_n=100) # The README is two directories up!
word_counts_readme
```
Now which words are most prominent?
```
ray.shutdown() # "Undo ray.init()". Terminate all the processes started in this notebook.
```
# Answers: Classes
Provided here are answers to the practice questions at the end of "Classes".
## Objects
**Objects Q1**.
```
# specific strings will differ
true_var = 'asdf123'.isalnum()
false_var = '!!!!'.isalnum()
```
**Objects Q2**.
```
days_summary = {}
for day in days_of_week:
days_summary[day] = site_days.count(day)
```
**Objects Q3**.
```
from random import choice
rand_int = choice(range(0,10))
```
## Classes
**Classes Q1**.
```
class ClassRoster():
def __init__(self, course):
self.students = []
self.course = course
def add_student(self, pid, name):
self.students.append({pid: name})
```
**Classes Q2**.
```
class ToDo():
def __init__(self):
self.to_do = []
def add_item(self, item, top=True):
if top:
self.to_do.insert(0, item)
else:
self.to_do.append(item)
def remove_item(self, item):
self.to_do.remove(item)
```
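A quick usage sketch (not part of the original answer key) showing the effect of the `top` flag:
```
todo = ToDo()
todo.add_item('buy milk')             # inserted at the top of the empty list
todo.add_item('pay rent')             # top=True by default, so it goes first
todo.add_item('call mom', top=False)  # appended to the end
todo.remove_item('buy milk')
print(todo.to_do)                     # ['pay rent', 'call mom']
```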
**Classes Q3**.
```
class NewYear():
zodiac_signs = {
'Ox' : [1937, 1949, 1961, 1973, 1985, 1997, 2009, 2021],
'Tiger' : [1938, 1950, 1962, 1974, 1986, 1998, 2010],
'Rabbit' : [1939, 1951, 1963, 1975, 1987, 1999, 2011, 2023],
'Dragon' : [1940, 1952, 1964, 1976, 1988, 2000, 2012, 2024],
'Snake' : [1941, 1953, 1965, 1977, 1989, 2001, 2013, 2025],
'Horse' : [1942, 1954, 1966, 1978, 1990, 2002, 2014, 2026],
'Goat/Sheep' : [1943, 1955, 1967, 1979, 1991, 2003, 2015, 2027],
'Monkey' : [1944, 1956, 1968, 1980, 1992, 2004, 2016, 2028],
'Rooster' : [1945, 1957, 1969, 1981, 1993, 2005, 2017, 2029],
'Dog' : [1946, 1958, 1970, 1982, 1994, 2006, 2018, 2030],
'Pig' : [1947, 1959, 1971, 1983, 1995, 2007, 2019, 2031],
'Rat' : [1936, 1948, 1960, 1972, 1984, 1996, 2008, 2020]
}
def __init__(self, year):
self.year = year
def return_sign(self):
for key in self.zodiac_signs:
if self.year in self.zodiac_signs[key]:
out = key
break # answer would be fine without break here
return 'You were born in the year of the ' + out + '!'
```
**Classes Q4**.
Part I.
```
class Kingdom():
def __init__(self, name, title):
self.name = name
self.title = title
def introduce(self):
return 'Hello, my name is ' + self.name + ', and I am a ' + self.title + '.'
```
Part II.
```
import random
class CourtJester(Kingdom):
headwear = "fool's cap"
def tell_a_joke(self):
joke_list = ['A clown held the door open for me yesterday. I thought it was a nice jester',
'How does the court jester address the King of Ducks? Mal’Lard',
'What did the court jester call the balding crown prince? The Heir Apparent with no Hair Apparent',
'What do you call a joke made by using sign language? A jester']
out_joke = random.choice(joke_list)
return out_joke
```
**Classes Q5**.
```
class StudentInfo():
def __init__(self, name, year, school, proj_grade):
self.name = name
self.year = year
self.school = school
self.proj_grade = proj_grade
def follow_up(self):
out = {}
if self.proj_grade <= 65:
out[self.name] = self.proj_grade
return out
```
# SYS 611: Dice Fighters Example (w/ Binomial Process Gen.)
Paul T. Grogan <pgrogan@stevens.edu>
This example shows how to model the dice fighters example in Python using a binomial process generator.
## Dependencies
This example is compatible with Python 2 environments through use of the `__future__` library function. Additionally, this example uses the `numpy` and `scipy.stats` libraries.
```
# import the python3 behavior for importing, division, and printing in python2
from __future__ import absolute_import, division, print_function
# import the numpy library and refer to it as `np`
import numpy as np
# import the scipy.stats library and refer to it as `stats`
import scipy.stats as stats
```
## Elementary State Variables
There are five elementary state variables defined below:
* `round_number`: Current round number
* `red_size`: Red force size
* `blue_size`: Blue force size
* `red_chance_hit`: Red team probability of landing a 'hit' on a blue team
* `blue_chance_hit`: Blue team probability of landing a 'hit' on a red team
All variables are defined with global scope and initialized to an initial value.
A helper function `print_state` formats the display of key state variables.
```
round_number = 0
red_size = 20
blue_size = 10
red_chance_hit = 1/6
blue_chance_hit = 3/6
def print_state():
print("Round: {:d} | Red: {:d}, Blue: {:d}".format(round_number, red_size, blue_size))
```
## Derived State Variables
There is one derived state variable defined below:
* `is_complete`: Determines if a game is complete.
```
def is_complete():
"""
Check if the game is complete, meaning at least one team has no forces remaining.
Return True if the game is complete, False otherwise.
"""
return (red_size <= 0 or blue_size <= 0)
```
## Process Generators
There are two process generator functions defined below:
* `generate_red_hits`: a process generator to determine how many hits the red team scores
* `generate_blue_hits`: a process generator to determine how many hits the blue team scores
These functions use the binomial inverse CDF function (called a PPF function in `scipy.stats`) following the inverse transform method (IVT) to generate the number of hits based on the number of forces remaining.
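Concretely, the inverse transform method draws $u \sim \mathrm{Uniform}(0,1)$ and returns the quantile $F^{-1}(u) = \min\{k : F(k) \ge u\}$, where $F$ is the binomial CDF, so a single uniform sample is enough to generate one binomial variate; `stats.binom.ppf` computes exactly this quantile.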
```
# define the generate_red_hits function
def generate_red_hits():
"""
Randomly generate the number of red hits on the blue team.
"""
# use the binomial PPF (inverse CDF) with a random sample and cast to an integer
return int(stats.binom.ppf(np.random.rand(), red_size, red_chance_hit))
# note: the code above could be replaced by a built-in numpy process generator:
# return np.random.binomial(red_size, red_chance_hit)
# define the generate_blue_hits function
def generate_blue_hits():
"""
Randomly generate the number of blue hits on the red team.
"""
# use the binomial PPF (inverse CDF) with a random sample and cast to an integer
return int(stats.binom.ppf(np.random.rand(), blue_size, blue_chance_hit))
# note: the code above could be replaced by a built-in numpy process generator:
# return np.random.binomial(blue_size, blue_chance_hit)
```
## State Transition Functions
There are three state transition functions defined below:
* `red_suffer_losses`: decreases the red force size by the number of blue hits
* `blue_suffer_losses`: decreases the blue force size by the number of red hits
* `next_round`: advances to the next round
```
def red_suffer_losses(opponent_hits):
"""
Decrease the red team size by the number of blue hits.
"""
# (note: red_size must be declared as a global variable to update in this function!)
global red_size
# update the red_size based on the number of opponent hits
red_size -= opponent_hits
def blue_suffer_losses(opponent_hits):
"""
Decrease the blue team size by the number of red hits.
"""
# (note: blue_size must be declared as a global variable to update in this function!)
global blue_size
# update the blue_size based on number of opponent hits
blue_size -= opponent_hits
def next_round():
"""
Advance to the next round.
"""
# (note: round_number must be declared as a global variable to update in this function!)
global round_number
# advance the round_number
round_number += 1
```
## Simulation Execution
The following script runs a complete dice fighters match.
```
round_number = 0
red_size = 20
blue_size = 10
red_chance_hit = 1/6
blue_chance_hit = 3/6
# main execution loop: continue while the game is not complete
while not is_complete():
# generate the number of red hits
red_hits = generate_red_hits()
# generate the number of blue hits
blue_hits = generate_blue_hits()
# red team suffers losses of blue hits
red_suffer_losses(blue_hits)
# blue team suffers losses of red hits
blue_suffer_losses(red_hits)
# advance to the next round
next_round()
# print out the current state for debugging
print_state()
# after main loop exits, check who won (whichever team still has fighters!)
if red_size > 0:
print("Red Wins")
elif blue_size > 0:
print("Blue Wins")
else:
print("Tie - Mutual Destruction!")
```
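Because every match is random, a single run tells us little about which side is favored. A minimal extension (not part of the original notebook) replays the match many times, using numpy's built-in binomial generator mentioned in the comments above, to estimate win probabilities:
```
def play_match(red=20, blue=10, p_red=1/6, p_blue=3/6):
    # replay one match and return the winner
    while red > 0 and blue > 0:
        red_hits = np.random.binomial(red, p_red)
        blue_hits = np.random.binomial(blue, p_blue)
        red -= blue_hits   # simultaneous exchange of fire
        blue -= red_hits
    if red > 0:
        return 'red'
    if blue > 0:
        return 'blue'
    return 'tie'

results = [play_match() for _ in range(10000)]
for outcome in ('red', 'blue', 'tie'):
    print(outcome, results.count(outcome) / len(results))
```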
# Sampled Softmax
For classification and prediction problems a typical criterion function is cross-entropy with softmax. If the number of output classes is high the computation of this criterion and the corresponding gradients could be quite costly. Sampled Softmax is a heuristic to speed up training in these cases. (see: [Adaptive Importance Sampling to Accelerate Training of a Neural Probabilistic Language Model](http://www.iro.umontreal.ca/~lisa/pointeurs/importance_samplingIEEEtnn.pdf), [Exploring the Limits of Language Modeling](https://arxiv.org/pdf/1602.02410v1.pdf), [What is Candidate Sampling](https://www.tensorflow.org/extras/candidate_sampling.pdf))
#### Select the notebook runtime environment devices / settings
Before we dive into the details we run some setup that is required for automated testing of this notebook.
```
import os
import cntk as C
# Select the right target device when this notebook is being tested:
if 'TEST_DEVICE' in os.environ:
if os.environ['TEST_DEVICE'] == 'cpu':
C.device.try_set_default_device(C.device.cpu())
else:
C.device.try_set_default_device(C.device.gpu(0))
```
## Basics
The softmax function is used in neural networks if we want to interpret the network output as a probability distribution over a set of classes $C$ with $|C|=N_C$.
Softmax maps an $N_C$-dimensional vector $z$, which has unrestricted values, to an $N_C$ dimensional vector $p$ with non-negative values that sum up to 1 so that they can be interpreted as probabilities. More precisely:
$$
\begin{align}
p_i &= softmax(z, i)\\
&= \frac{exp(z_i)}{\sum_{k\in C} exp(z_k)}\\
\end{align}
$$
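As a quick numeric illustration (plain numpy rather than CNTK, with the usual max-subtraction trick for numerical stability):
```
import numpy as np

z = np.array([1.0, 2.0, 3.0])
p = np.exp(z - z.max())  # subtracting the max leaves softmax unchanged but avoids overflow
p /= p.sum()
print(p)                 # [0.090 0.245 0.665] -- non-negative values summing to 1
```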
In what follows we assume that the input $z$ to the softmax is computed from some hidden vector $h$ of dimension $N_h$ in a specific way, namely:
$$ z = W h + b $$
where $W$ is a learnable weight matrix of dimension $(N_c, N_h)$ and $b$ is a learnable bias vector.
We restrict ourselves to this specific choice of $z$ because it helps in implementing an efficient sampled softmax.
In a typical use-case like for example a recurrent language model, the hidden vector $h$ would be the output of the recurrent layers and $C$ would be the set of words to predict.
As a training criterion, we use cross-entropy which is a function of the expected (true) class $t\in C$ and the probability predicted for it:
$$cross\_entropy := -log(p_t)$$
## Sampled Softmax from the outside
For the normal softmax the CNTK Python-api provides the function [cross_entropy_with_softmax](https://cntk.ai/pythondocs/cntk.ops.html?highlight=softmax#cntk.ops.cross_entropy_with_softmax). This takes as input the $N_C$-dimensional vector $z$. As mentioned for our sampled softmax implementation we assume that this z is computed by $ z = W h + b $. In sampled softmax this has to be part of the whole implementation of the criterion.
Below we show the code for `cross_entropy_with_sampled_softmax_and_embedding`. Let’s look at the signature first.
One fundamental difference to the corresponding function in the Python-api (`cross_entropy_with_softmax`) is that in the Python api function the input corresponds to $z$ and must have the same dimension as the target vector, while in `cross_entropy_with_sampled_softmax_and_embedding` the input corresponds to our hidden vector $h$, which can have any dimension (hidden_dim).
Actually, hidden_dim will be typically much lower than the dimension of the target vector.
We also have some additional parameters `num_samples, sampling_weights, allow_duplicates` that control the random sampling.
Another difference to the api function is that we return a triple (z, cross_entropy_on_samples, error_on_samples).
We will come back to the details of the implementation below.
```
from __future__ import print_function
from __future__ import division
# Creates a subgraph computing cross-entropy with sampled softmax.
def cross_entropy_with_sampled_softmax_and_embedding(
hidden_vector, # Node providing hidden input
target_vector, # Node providing the expected labels (as sparse vectors)
num_classes, # Number of classes
hidden_dim, # Dimension of the hidden vector
num_samples, # Number of samples to use for sampled softmax
sampling_weights, # Node providing weights to be used for the weighted sampling
allow_duplicates = True, # Boolean flag to control whether to use sampling with replacemement
# (allow_duplicates == True) or without replacement.
):
# define the parameters learnable parameters
b = C.Parameter(shape = (num_classes, 1), init = 0)
W = C.Parameter(shape = (num_classes, hidden_dim), init = C.glorot_uniform())
# Define the node that generates a set of random samples per minibatch
# Sparse matrix (num_samples * num_classes)
sample_selector = C.random_sample(sampling_weights, num_samples, allow_duplicates)
# For each of the samples we also need the probability that it is in the sampled set.
inclusion_probs = C.random_sample_inclusion_frequency(sampling_weights, num_samples, allow_duplicates) # dense row [1 * vocab_size]
log_prior = C.log(inclusion_probs) # dense row [1 * num_classes]
# Create a submatrix W_sampled of the weights W
W_sampled = C.times(sample_selector, W) # [num_samples * hidden_dim]
z_sampled = C.times_transpose(W_sampled, hidden_vector) + C.times(sample_selector, b) - C.times_transpose (sample_selector, log_prior)# [num_samples]
# Getting the weight vector for the true label. Dimension hidden_dim
W_target = C.times(target_vector, W) # [1 * hidden_dim]
z_target = C.times_transpose(W_target, hidden_vector) + C.times(target_vector, b) - C.times_transpose(target_vector, log_prior) # [1]
z_reduced = C.reduce_log_sum_exp(z_sampled)
# Compute the cross entropy that is used for training.
# We don't check whether any of the classes in the random samples coincides with the true label, so it might
# happen that the true class is counted
# twice in the normalizing denominator of sampled softmax.
cross_entropy_on_samples = C.log_add_exp(z_target, z_reduced) - z_target
# For applying the model we also output a node providing the input for the full softmax
z = C.times_transpose(W, hidden_vector) + b
z = C.reshape(z, shape = (num_classes))
zSMax = C.reduce_max(z_sampled)
error_on_samples = C.less(z_target, zSMax)
return (z, cross_entropy_on_samples, error_on_samples)
```
To give a better idea of what the inputs and outputs are and how this all differs from the normal softmax we give below a corresponding function using normal softmax:
```
# Creates subgraph computing cross-entropy with (full) softmax.
def cross_entropy_with_softmax_and_embedding(
hidden_vector, # Node providing hidden input
target_vector, # Node providing the expected labels (as sparse vectors)
num_classes, # Number of classes
hidden_dim # Dimension of the hidden vector
):
# Setup bias and weights
b = C.Parameter(shape = (num_classes, 1), init = 0)
W = C.Parameter(shape = (num_classes, hidden_dim), init = C.glorot_uniform())
z = C.reshape( C.times_transpose(W, hidden_vector) + b, (1, num_classes))
# Use cross_entropy_with_softmax
cross_entropy = C.cross_entropy_with_softmax(z, target_vector)
zMax = C.reduce_max(z)
zT = C.times_transpose(z, target_vector)
error_on_samples = C.less(zT, zMax)
return (z, cross_entropy, error_on_samples)
```
As you can see the main differences to the api function `cross_entropy_with_softmax` are:
* We include the mapping $ z = W h + b $ into the function.
* We return a triple (z, cross_entropy, error_on_samples) instead of just returning the cross entropy.
## A toy example
To explain how to integrate sampled softmax let us look at a toy example. In this toy example we first transform one-hot input vectors via some random projection into a lower dimensional vector $h$. The modeling task is to reverse this mapping using (sampled) softmax. Well, as already said this is a toy example.
```
import numpy as np
from math import log, exp, sqrt
from cntk.logging import ProgressPrinter
import timeit
# A class with all parameters
class Param:
# Learning parameters
learning_rate = 0.03
minibatch_size = 100
num_minbatches = 100
test_set_size = 1000
momentum_time_constant = 5 * minibatch_size
reporting_interval = 10
allow_duplicates = False
# Parameters for sampled softmax
use_sampled_softmax = True
use_sparse = True
softmax_sample_size = 10
# Details of data and model
num_classes = 50
hidden_dim = 10
data_sampling_distribution = lambda: np.repeat(1.0 / Param.num_classes, Param.num_classes)
softmax_sampling_weights = lambda: np.repeat(1.0 / Param.num_classes, Param.num_classes)
# Creates random one-hot vectors of dimension 'num_classes'.
# Returns a tuple with a list of one-hot vectors, and list with the indices they encode.
def get_random_one_hot_data(num_vectors):
indices = np.random.choice(
range(Param.num_classes),
size=num_vectors,
p = data_sampling_distribution()).reshape((1, num_vectors))
list_of_vectors = C.Value.one_hot(indices, Param.num_classes)
return (list_of_vectors, indices.flatten())
# Create a network that:
# * Transforms the input one hot-vectors with a constant random embedding
# * Applies a linear decoding with parameters we want to learn
def create_model(labels):
# random projection matrix
random_data = np.random.normal(scale = sqrt(1.0/Param.hidden_dim), size=(Param.num_classes, Param.hidden_dim)).astype(np.float32)
random_matrix = C.constant(shape = (Param.num_classes, Param.hidden_dim), value = random_data)
h = C.times(labels, random_matrix)
# Connect the latent output to (sampled/full) softmax.
if Param.use_sampled_softmax:
sampling_weights = np.asarray(softmax_sampling_weights(), dtype=np.float32)
sampling_weights.reshape((1, Param.num_classes))
softmax_input, ce, errs = cross_entropy_with_sampled_softmax_and_embedding(
h,
labels,
Param.num_classes,
Param.hidden_dim,
Param.softmax_sample_size,
softmax_sampling_weights(),
Param.allow_duplicates)
else:
softmax_input, ce, errs = cross_entropy_with_softmax_and_embedding(
h,
labels,
Param.num_classes,
Param.hidden_dim)
return softmax_input, ce, errs
def train(do_print_progress):
labels = C.input_variable(shape = Param.num_classes, is_sparse = Param.use_sparse)
z, cross_entropy, errs = create_model(labels)
# Setup the trainer
learning_rate_schedule = C.learning_rate_schedule(Param.learning_rate, C.UnitType.sample)
momentum_schedule = C.momentum_as_time_constant_schedule(Param.momentum_time_constant)
learner = C.momentum_sgd(z.parameters, learning_rate_schedule, momentum_schedule, True)
progress_writers = None
if do_print_progress:
progress_writers = [ProgressPrinter(freq=Param.reporting_interval, tag='Training')]
trainer = C.Trainer(z, (cross_entropy, errs), learner, progress_writers)
minbatch = 0
average_cross_entropy = compute_average_cross_entropy(z)
minbatch_data = [0] # store minibatch values
cross_entropy_data = [average_cross_entropy] # store cross_entropy values
# Run training
t_total= 0
# Run training
for minbatch in range(1,Param.num_minbatches):
# Specify the mapping of input variables in the model to actual minibatch data to be trained with
label_data, indices = get_random_one_hot_data(Param.minibatch_size)
arguments = ({labels : label_data})
# If do_print_progress is True, this will automatically print the progress using ProgressPrinter
# The printed loss numbers are computed using the sampled softmax criterion
t_start = timeit.default_timer()
trainer.train_minibatch(arguments)
t_end = timeit.default_timer()
t_delta = t_end - t_start
samples_per_second = Param.minibatch_size / t_delta
# We ignore the time measurements of the first two minibatches
if minbatch > 2:
t_total += t_delta
# For comparison also print result using the full criterion
if minbatch % Param.reporting_interval == int(Param.reporting_interval/2):
# memorize the progress data for plotting
average_cross_entropy = compute_average_cross_entropy(z)
minbatch_data.append(minbatch)
cross_entropy_data.append(average_cross_entropy)
if do_print_progress:
print("\nMinbatch=%d Cross-entropy from full softmax = %.3f perplexity = %.3f samples/s = %.1f"
% (minbatch, average_cross_entropy, exp(average_cross_entropy), samples_per_second))
# Number of samples we measured. First two minbatches were ignored
samples_measured = Param.minibatch_size * (Param.num_minbatches - 2)
overall_samples_per_second = samples_measured / t_total
return (minbatch_data, cross_entropy_data, overall_samples_per_second)
def compute_average_cross_entropy(softmax_input):
vectors, indices = get_random_one_hot_data(Param.test_set_size)
total_cross_entropy = 0.0
arguments = (vectors)
z = softmax_input.eval(arguments).reshape(Param.test_set_size, Param.num_classes)
for i in range(len(indices)):
log_p = log_softmax(z[i], indices[i])
total_cross_entropy -= log_p
return total_cross_entropy / len(indices)
# Computes log(softmax(z,index)) for a one-dimensional numpy array z in an numerically stable way.
def log_softmax(z, # numpy array
index # index into the array
):
max_z = np.max(z)
return z[index] - max_z - log(np.sum(np.exp(z - max_z)))
np.random.seed(1)
print("start...")
train(do_print_progress = True)
print("done.")
```
In the above code we use two different methods to report training progress:
1. Using a function that computes the average cross entropy on full softmax.
2. Using the built-in ProgressPrinter
ProgressPrinter reports how the value of the training criterion changes over time.
In our case the training criterion is cross-entropy from **sampled** softmax.
The same is true for the error rate computed by the progress printer: it is computed only for the true class vs. the sampled classes and will therefore underestimate the true error rate.
Therefore, while ProgressPrinter already gives us some idea of how training is going, if we want to compare the behavior of different sampling strategies (sample size, sampling weights, ...) we should not rely on numbers computed only from the sampled subset of classes.
## Importance sampling
Often we don't have a uniform distribution for the classes on the output side. The typical example is words as output classes, where e.g. 'the' will be much more frequent than most others.
In such cases one often uses a non-uniform distribution for drawing the samples in sampled softmax, increasing the sampling weight for the frequent classes. This is also called importance sampling.
In our example the sampling distribution is controlled by the weight array `softmax_sampling_weights`.
As an example, let's look at the case where the classes are distributed according to a Zipf-like distribution:
$$
p[i] \propto \frac{1}{i+5},
$$
In fact, we already use this distribution in our example.
How does the training behavior change if we switch from uniform sampling to sampling with the Zipfian distribution in sampled softmax?
```
# We want to plot the data
import matplotlib.pyplot as plt
%matplotlib inline
# Define weights of the Zipfian distribution
def zipf(index):
return 1.0 / (index + 5)
# Use the Zipfian distribution for the classes
def zipf_sampling_weights():
return np.asarray([ zipf(i) for i in range(Param.num_classes)], dtype=np.float32)
data_sampling_distribution = lambda: zipf_sampling_weights() / np.sum(zipf_sampling_weights())
print("start...")
# Train using uniform sampling (like before)
np.random.seed(1)
softmax_sampling_weights = lambda: np.repeat(1.0/Param.num_classes, Param.num_classes)
minibatch_data, cross_entropy_data, _ = train(do_print_progress = False)
# Train using importance sampling
np.random.seed(1)
softmax_sampling_weights = zipf_sampling_weights
minibatch_data2, cross_entropy_data2, _ = train(do_print_progress = False)
plt.plot(minibatch_data, cross_entropy_data, 'r--',minibatch_data, cross_entropy_data2, 'b--')
plt.xlabel('number of mini-batches')
plt.ylabel('cross entropy')
plt.show()
```
In the example above we compare uniform sampling (red) vs sampling with the same distribution the classes have (blue).
You will need to experiment to find the best settings for all the softmax parameters.
## What speedups to expect?
The speed difference between full softmax and sampled softmax, in terms of training instances processed per second, depends strongly on the concrete settings, namely:
* Number of classes. Typically the speed-up will increase the more output classes you have.
* Number of samples used in sampled softmax
* Dimension of the hidden layer input
* Minibatch size
* Hardware
Also you need to test how much you can reduce sample size without degradation of the result.
```
print("start...")
# Reset parameters
class Param:
# Learning parameters
learning_rate = 0.03
minibatch_size = 8
num_minbatches = 100
test_set_size = 1 # we are only interested in speed
momentum_time_constant = 5 * minibatch_size
reporting_interval = 1000000 # Switch off reporting to speed up
allow_duplicates = False
# Parameters for sampled softmax
use_sampled_softmax = True
use_sparse = True
softmax_sample_size = 10
# Details of data and model
num_classes = 50000
hidden_dim = 10
data_sampling_distribution = lambda: np.repeat(1.0 / Param.num_classes, Param.num_classes)
softmax_sampling_weights = lambda: np.repeat(1.0 / Param.num_classes, Param.num_classes)
sample_sizes = [5, 10, 100, 1000]
speed_with_sampled_softmax = []
# Get the speed with sampled softmax for different sizes
for sample_size in sample_sizes:
print("Measuring speed of sampled softmax for sample size %d ..." % (sample_size))
Param.use_sampled_softmax = True
Param.softmax_sample_size = sample_size
_, _, samples_per_second = train(do_print_progress = False)
speed_with_sampled_softmax.append(samples_per_second)
# Get the speed with full softmax
Param.use_sampled_softmax = False
print("Measuring speed of full softmax ...")
_, _, samples_per_second = train(do_print_progress = False)
speed_without_sampled_softmax = np.repeat(samples_per_second, len(sample_sizes))
# Plot the speed of sampled softmax (blue) as a function of sample sizes
# and compare it to the speed with full softmax (red).
plt.plot(sample_sizes, speed_without_sampled_softmax, 'r--',sample_sizes, speed_with_sampled_softmax, 'b--')
plt.xlabel('softmax sample size')
plt.ylabel('speed: instances / second')
plt.title("Speed 'sampled softmax' (blue) vs. 'full softmax' (red)")
plt.ylim(ymin=0)
plt.show()
```
### Data Source
Dataset is derived from Fannie Mae’s [Single-Family Loan Performance Data](http://www.fanniemae.com/portal/funding-the-market/data/loan-performance-data.html) with all rights reserved by Fannie Mae. This processed dataset is redistributed with permission and consent from Fannie Mae. For the full raw dataset visit [Fannie Mae]() to register for an account and to download
Instructions are available at the NVIDIA [RAPIDS demo site](https://rapidsai.github.io/demos/datasets/mortgage-data).
### Prerequisite
This notebook runs in a Dataproc cluster with GPU nodes, with [Spark RAPIDS](https://github.com/GoogleCloudDataproc/initialization-actions/tree/master/rapids) set up.
### Define ETL Process
Define data schema and steps to do the ETL process:
```
import time
from pyspark import broadcast
from pyspark.sql import SparkSession
from pyspark.sql.functions import *
from pyspark.sql.types import *
from pyspark.sql.window import Window
def _get_quarter_from_csv_file_name():
return substring_index(substring_index(input_file_name(), '.', 1), '_', -1)
_csv_perf_schema = StructType([
StructField('loan_id', LongType()),
StructField('monthly_reporting_period', StringType()),
StructField('servicer', StringType()),
StructField('interest_rate', DoubleType()),
StructField('current_actual_upb', DoubleType()),
StructField('loan_age', DoubleType()),
StructField('remaining_months_to_legal_maturity', DoubleType()),
StructField('adj_remaining_months_to_maturity', DoubleType()),
StructField('maturity_date', StringType()),
StructField('msa', DoubleType()),
StructField('current_loan_delinquency_status', IntegerType()),
StructField('mod_flag', StringType()),
StructField('zero_balance_code', StringType()),
StructField('zero_balance_effective_date', StringType()),
StructField('last_paid_installment_date', StringType()),
StructField('foreclosed_after', StringType()),
StructField('disposition_date', StringType()),
StructField('foreclosure_costs', DoubleType()),
StructField('prop_preservation_and_repair_costs', DoubleType()),
StructField('asset_recovery_costs', DoubleType()),
StructField('misc_holding_expenses', DoubleType()),
StructField('holding_taxes', DoubleType()),
StructField('net_sale_proceeds', DoubleType()),
StructField('credit_enhancement_proceeds', DoubleType()),
StructField('repurchase_make_whole_proceeds', StringType()),
StructField('other_foreclosure_proceeds', DoubleType()),
StructField('non_interest_bearing_upb', DoubleType()),
StructField('principal_forgiveness_upb', StringType()),
StructField('repurchase_make_whole_proceeds_flag', StringType()),
StructField('foreclosure_principal_write_off_amount', StringType()),
StructField('servicing_activity_indicator', StringType())])
_csv_acq_schema = StructType([
StructField('loan_id', LongType()),
StructField('orig_channel', StringType()),
StructField('seller_name', StringType()),
StructField('orig_interest_rate', DoubleType()),
StructField('orig_upb', IntegerType()),
StructField('orig_loan_term', IntegerType()),
StructField('orig_date', StringType()),
StructField('first_pay_date', StringType()),
StructField('orig_ltv', DoubleType()),
StructField('orig_cltv', DoubleType()),
StructField('num_borrowers', DoubleType()),
StructField('dti', DoubleType()),
StructField('borrower_credit_score', DoubleType()),
StructField('first_home_buyer', StringType()),
StructField('loan_purpose', StringType()),
StructField('property_type', StringType()),
StructField('num_units', IntegerType()),
StructField('occupancy_status', StringType()),
StructField('property_state', StringType()),
StructField('zip', IntegerType()),
StructField('mortgage_insurance_percent', DoubleType()),
StructField('product_type', StringType()),
StructField('coborrow_credit_score', DoubleType()),
StructField('mortgage_insurance_type', DoubleType()),
StructField('relocation_mortgage_indicator', StringType())])
_name_mapping = [
("WITMER FUNDING, LLC", "Witmer"),
("WELLS FARGO CREDIT RISK TRANSFER SECURITIES TRUST 2015", "Wells Fargo"),
("WELLS FARGO BANK, NA" , "Wells Fargo"),
("WELLS FARGO BANK, N.A." , "Wells Fargo"),
("WELLS FARGO BANK, NA" , "Wells Fargo"),
("USAA FEDERAL SAVINGS BANK" , "USAA"),
("UNITED SHORE FINANCIAL SERVICES, LLC D\\/B\\/A UNITED WHOLESALE MORTGAGE" , "United Seq(e"),
("U.S. BANK N.A." , "US Bank"),
("SUNTRUST MORTGAGE INC." , "Suntrust"),
("STONEGATE MORTGAGE CORPORATION" , "Stonegate Mortgage"),
("STEARNS LENDING, LLC" , "Stearns Lending"),
("STEARNS LENDING, INC." , "Stearns Lending"),
("SIERRA PACIFIC MORTGAGE COMPANY, INC." , "Sierra Pacific Mortgage"),
("REGIONS BANK" , "Regions"),
("RBC MORTGAGE COMPANY" , "RBC"),
("QUICKEN LOANS INC." , "Quicken Loans"),
("PULTE MORTGAGE, L.L.C." , "Pulte Mortgage"),
("PROVIDENT FUNDING ASSOCIATES, L.P." , "Provident Funding"),
("PROSPECT MORTGAGE, LLC" , "Prospect Mortgage"),
("PRINCIPAL RESIDENTIAL MORTGAGE CAPITAL RESOURCES, LLC" , "Principal Residential"),
("PNC BANK, N.A." , "PNC"),
("PMT CREDIT RISK TRANSFER TRUST 2015-2" , "PennyMac"),
("PHH MORTGAGE CORPORATION" , "PHH Mortgage"),
("PENNYMAC CORP." , "PennyMac"),
("PACIFIC UNION FINANCIAL, LLC" , "Other"),
("OTHER" , "Other"),
("NYCB MORTGAGE COMPANY, LLC" , "NYCB"),
("NEW YORK COMMUNITY BANK" , "NYCB"),
("NETBANK FUNDING SERVICES" , "Netbank"),
("NATIONSTAR MORTGAGE, LLC" , "Nationstar Mortgage"),
("METLIFE BANK, NA" , "Metlife"),
("LOANDEPOT.COM, LLC" , "LoanDepot.com"),
("J.P. MORGAN MADISON AVENUE SECURITIES TRUST, SERIES 2015-1" , "JP Morgan Chase"),
("J.P. MORGAN MADISON AVENUE SECURITIES TRUST, SERIES 2014-1" , "JP Morgan Chase"),
("JPMORGAN CHASE BANK, NATIONAL ASSOCIATION" , "JP Morgan Chase"),
("JPMORGAN CHASE BANK, NA" , "JP Morgan Chase"),
("JP MORGAN CHASE BANK, NA" , "JP Morgan Chase"),
("IRWIN MORTGAGE, CORPORATION" , "Irwin Mortgage"),
("IMPAC MORTGAGE CORP." , "Impac Mortgage"),
("HSBC BANK USA, NATIONAL ASSOCIATION" , "HSBC"),
("HOMEWARD RESIDENTIAL, INC." , "Homeward Mortgage"),
("HOMESTREET BANK" , "Other"),
("HOMEBRIDGE FINANCIAL SERVICES, INC." , "HomeBridge"),
("HARWOOD STREET FUNDING I, LLC" , "Harwood Mortgage"),
("GUILD MORTGAGE COMPANY" , "Guild Mortgage"),
("GMAC MORTGAGE, LLC (USAA FEDERAL SAVINGS BANK)" , "GMAC"),
("GMAC MORTGAGE, LLC" , "GMAC"),
("GMAC (USAA)" , "GMAC"),
("FREMONT BANK" , "Fremont Bank"),
("FREEDOM MORTGAGE CORP." , "Freedom Mortgage"),
("FRANKLIN AMERICAN MORTGAGE COMPANY" , "Franklin America"),
("FLEET NATIONAL BANK" , "Fleet National"),
("FLAGSTAR CAPITAL MARKETS CORPORATION" , "Flagstar Bank"),
("FLAGSTAR BANK, FSB" , "Flagstar Bank"),
("FIRST TENNESSEE BANK NATIONAL ASSOCIATION" , "Other"),
("FIFTH THIRD BANK" , "Fifth Third Bank"),
("FEDERAL HOME LOAN BANK OF CHICAGO" , "Fedral Home of Chicago"),
("FDIC, RECEIVER, INDYMAC FEDERAL BANK FSB" , "FDIC"),
("DOWNEY SAVINGS AND LOAN ASSOCIATION, F.A." , "Downey Mortgage"),
("DITECH FINANCIAL LLC" , "Ditech"),
("CITIMORTGAGE, INC." , "Citi"),
("CHICAGO MORTGAGE SOLUTIONS DBA INTERFIRST MORTGAGE COMPANY" , "Chicago Mortgage"),
("CHICAGO MORTGAGE SOLUTIONS DBA INTERBANK MORTGAGE COMPANY" , "Chicago Mortgage"),
("CHASE HOME FINANCE, LLC" , "JP Morgan Chase"),
("CHASE HOME FINANCE FRANKLIN AMERICAN MORTGAGE COMPANY" , "JP Morgan Chase"),
("CHASE HOME FINANCE (CIE 1)" , "JP Morgan Chase"),
("CHASE HOME FINANCE" , "JP Morgan Chase"),
("CASHCALL, INC." , "CashCall"),
("CAPITAL ONE, NATIONAL ASSOCIATION" , "Capital One"),
("CALIBER HOME LOANS, INC." , "Caliber Funding"),
("BISHOPS GATE RESIDENTIAL MORTGAGE TRUST" , "Bishops Gate Mortgage"),
("BANK OF AMERICA, N.A." , "Bank of America"),
("AMTRUST BANK" , "AmTrust"),
("AMERISAVE MORTGAGE CORPORATION" , "Amerisave"),
("AMERIHOME MORTGAGE COMPANY, LLC" , "AmeriHome Mortgage"),
("ALLY BANK" , "Ally Bank"),
("ACADEMY MORTGAGE CORPORATION" , "Academy Mortgage"),
("NO CASH-OUT REFINANCE" , "OTHER REFINANCE"),
("REFINANCE - NOT SPECIFIED" , "OTHER REFINANCE"),
("Other REFINANCE" , "OTHER REFINANCE")]
cate_col_names = [
"orig_channel",
"first_home_buyer",
"loan_purpose",
"property_type",
"occupancy_status",
"property_state",
"relocation_mortgage_indicator",
"seller_name",
"mod_flag"
]
# Numberic columns
label_col_name = "delinquency_12"
numeric_col_names = [
"orig_interest_rate",
"orig_upb",
"orig_loan_term",
"orig_ltv",
"orig_cltv",
"num_borrowers",
"dti",
"borrower_credit_score",
"num_units",
"zip",
"mortgage_insurance_percent",
"current_loan_delinquency_status",
"current_actual_upb",
"interest_rate",
"loan_age",
"msa",
"non_interest_bearing_upb",
label_col_name
]
all_col_names = cate_col_names + numeric_col_names
def read_perf_csv(spark, path):
return spark.read.format('csv') \
.option('nullValue', '') \
.option('header', 'false') \
.option('delimiter', '|') \
.schema(_csv_perf_schema) \
.load(path) \
.withColumn('quarter', _get_quarter_from_csv_file_name())
def read_acq_csv(spark, path):
return spark.read.format('csv') \
.option('nullValue', '') \
.option('header', 'false') \
.option('delimiter', '|') \
.schema(_csv_acq_schema) \
.load(path) \
.withColumn('quarter', _get_quarter_from_csv_file_name())
def _parse_dates(perf):
return perf \
.withColumn('monthly_reporting_period', to_date(col('monthly_reporting_period'), 'MM/dd/yyyy')) \
.withColumn('monthly_reporting_period_month', month(col('monthly_reporting_period'))) \
.withColumn('monthly_reporting_period_year', year(col('monthly_reporting_period'))) \
.withColumn('monthly_reporting_period_day', dayofmonth(col('monthly_reporting_period'))) \
.withColumn('last_paid_installment_date', to_date(col('last_paid_installment_date'), 'MM/dd/yyyy')) \
.withColumn('foreclosed_after', to_date(col('foreclosed_after'), 'MM/dd/yyyy')) \
.withColumn('disposition_date', to_date(col('disposition_date'), 'MM/dd/yyyy')) \
.withColumn('maturity_date', to_date(col('maturity_date'), 'MM/yyyy')) \
.withColumn('zero_balance_effective_date', to_date(col('zero_balance_effective_date'), 'MM/yyyy'))
def _create_perf_deliquency(spark, perf):
aggDF = perf.select(
col("quarter"),
col("loan_id"),
col("current_loan_delinquency_status"),
when(col("current_loan_delinquency_status") >= 1, col("monthly_reporting_period")).alias("delinquency_30"),
when(col("current_loan_delinquency_status") >= 3, col("monthly_reporting_period")).alias("delinquency_90"),
when(col("current_loan_delinquency_status") >= 6, col("monthly_reporting_period")).alias("delinquency_180")) \
.groupBy("quarter", "loan_id") \
.agg(
max("current_loan_delinquency_status").alias("delinquency_12"),
min("delinquency_30").alias("delinquency_30"),
min("delinquency_90").alias("delinquency_90"),
min("delinquency_180").alias("delinquency_180")) \
.select(
col("quarter"),
col("loan_id"),
(col("delinquency_12") >= 1).alias("ever_30"),
(col("delinquency_12") >= 3).alias("ever_90"),
(col("delinquency_12") >= 6).alias("ever_180"),
col("delinquency_30"),
col("delinquency_90"),
col("delinquency_180"))
joinedDf = perf \
.withColumnRenamed("monthly_reporting_period", "timestamp") \
.withColumnRenamed("monthly_reporting_period_month", "timestamp_month") \
.withColumnRenamed("monthly_reporting_period_year", "timestamp_year") \
.withColumnRenamed("current_loan_delinquency_status", "delinquency_12") \
.withColumnRenamed("current_actual_upb", "upb_12") \
.select("quarter", "loan_id", "timestamp", "delinquency_12", "upb_12", "timestamp_month", "timestamp_year") \
.join(aggDF, ["loan_id", "quarter"], "left_outer")
# calculate the 12 month delinquency and upb values
months = 12
monthArray = [lit(x) for x in range(0, 12)]
# explode on a small amount of data is actually slightly more efficient than a cross join
testDf = joinedDf \
.withColumn("month_y", explode(array(monthArray))) \
.select(
col("quarter"),
floor(((col("timestamp_year") * 12 + col("timestamp_month")) - 24000) / months).alias("josh_mody"),
floor(((col("timestamp_year") * 12 + col("timestamp_month")) - 24000 - col("month_y")) / months).alias("josh_mody_n"),
col("ever_30"),
col("ever_90"),
col("ever_180"),
col("delinquency_30"),
col("delinquency_90"),
col("delinquency_180"),
col("loan_id"),
col("month_y"),
col("delinquency_12"),
col("upb_12")) \
.groupBy("quarter", "loan_id", "josh_mody_n", "ever_30", "ever_90", "ever_180", "delinquency_30", "delinquency_90", "delinquency_180", "month_y") \
.agg(max("delinquency_12").alias("delinquency_12"), min("upb_12").alias("upb_12")) \
.withColumn("timestamp_year", floor((lit(24000) + (col("josh_mody_n") * lit(months)) + (col("month_y") - 1)) / lit(12))) \
.selectExpr('*', 'pmod(24000 + (josh_mody_n * {}) + month_y, 12) as timestamp_month_tmp'.format(months)) \
.withColumn("timestamp_month", when(col("timestamp_month_tmp") == lit(0), lit(12)).otherwise(col("timestamp_month_tmp"))) \
.withColumn("delinquency_12", ((col("delinquency_12") > 3).cast("int") + (col("upb_12") == 0).cast("int")).alias("delinquency_12")) \
.drop("timestamp_month_tmp", "josh_mody_n", "month_y")
return perf.withColumnRenamed("monthly_reporting_period_month", "timestamp_month") \
.withColumnRenamed("monthly_reporting_period_year", "timestamp_year") \
.join(testDf, ["quarter", "loan_id", "timestamp_year", "timestamp_month"], "left") \
.drop("timestamp_year", "timestamp_month")
def _create_acquisition(spark, acq):
nameMapping = spark.createDataFrame(_name_mapping, ["from_seller_name", "to_seller_name"])
return acq.join(nameMapping, col("seller_name") == col("from_seller_name"), "left") \
.drop("from_seller_name") \
.withColumn("old_name", col("seller_name")) \
.withColumn("seller_name", coalesce(col("to_seller_name"), col("seller_name"))) \
.drop("to_seller_name") \
.withColumn("orig_date", to_date(col("orig_date"), "MM/yyyy")) \
.withColumn("first_pay_date", to_date(col("first_pay_date"), "MM/yyyy")) \
def _gen_dictionary(etl_df, col_names):
cnt_table = etl_df.select(posexplode(array([col(i) for i in col_names])))\
.withColumnRenamed("pos", "column_id")\
.withColumnRenamed("col", "data")\
.filter("data is not null")\
.groupBy("column_id", "data")\
.count()
windowed = Window.partitionBy("column_id").orderBy(desc("count"))
return cnt_table.withColumn("id", row_number().over(windowed)).drop("count")
def _cast_string_columns_to_numeric(spark, input_df):
cached_dict_df = _gen_dictionary(input_df, cate_col_names).cache()
output_df = input_df
# Generate the final table with all columns being numeric.
for col_pos, col_name in enumerate(cate_col_names):
col_dict_df = cached_dict_df.filter(col("column_id") == col_pos)\
.drop("column_id")\
.withColumnRenamed("data", col_name)
output_df = output_df.join(broadcast(col_dict_df), col_name, "left")\
.drop(col_name)\
.withColumnRenamed("id", col_name)
return output_df
def run_mortgage(spark, perf, acq):
parsed_perf = _parse_dates(perf)
perf_deliqency = _create_perf_deliquency(spark, parsed_perf)
cleaned_acq = _create_acquisition(spark, acq)
df = perf_deliqency.join(cleaned_acq, ["loan_id", "quarter"], "inner")
test_quarters = ['2016Q1','2016Q2','2016Q3','2016Q4']
train_df = df.filter(~df.quarter.isin(test_quarters)).drop("quarter")
test_df = df.filter(df.quarter.isin(test_quarters)).drop("quarter")
casted_train_df = _cast_string_columns_to_numeric(spark, train_df)\
.select(all_col_names)\
.withColumn(label_col_name, when(col(label_col_name) > 0, 1).otherwise(0))\
.fillna(float(0))
casted_test_df = _cast_string_columns_to_numeric(spark, test_df)\
.select(all_col_names)\
.withColumn(label_col_name, when(col(label_col_name) > 0, 1).otherwise(0))\
.fillna(float(0))
return casted_train_df, casted_test_df
```
### Define Spark conf and Create Spark Session
For a detailed explanation of the Spark configuration options, see the Spark RAPIDS [config guide](https://nvidia.github.io/spark-rapids/docs/configs.html).
```
sc.stop()
conf = SparkConf().setAppName("MortgageETL-CPU")
conf.set("spark.executor.instances", "20")
conf.set("spark.executor.cores", "7") # spark.executor.cores times spark.executor.instances should equal total cores.
conf.set("spark.task.cpus", "1")
conf.set("spark.executor.memory", "36g")
conf.set("spark.locality.wait", "0s")
conf.set("spark.sql.files.maxPartitionBytes", "512m")
conf.set("spark.executor.resource.gpu.amount", "0")
conf.set("spark.task.resource.gpu.amount", "0")
conf.set("spark.plugins", " ")
conf.set("spark.sql.broadcastTimeout", "7200")
spark = SparkSession.builder \
.config(conf=conf) \
.getOrCreate()
sc = spark.sparkContext
```
### Define Data Input/Output location
```
orig_perf_path = 'gs://dataproc-nv-demo/mortgage_full/perf/*'
orig_acq_path = 'gs://dataproc-nv-demo/mortgage_full/acq/*'
train_path = 'gs://dataproc-nv-demo/mortgage_cpu/train/'
test_path = 'gs://dataproc-nv-demo/mortgage_cpu/test/'
tmp_perf_path = 'gs://dataproc-nv-demo/mortgage_parquet_cpu/perf/'
tmp_acq_path = 'gs://dataproc-nv-demo/mortgage_parquet_cpu/acq/'
```
### Read CSV data and Transcode to Parquet
```
# Let's transcode the data first
start = time.time()
# we want a few big files instead of lots of small files
spark.conf.set('spark.sql.files.maxPartitionBytes', '200G')
acq = read_acq_csv(spark, orig_acq_path)
acq.repartition(20).write.parquet(tmp_acq_path, mode='overwrite')
perf = read_perf_csv(spark, orig_perf_path)
perf.coalesce(80).write.parquet(tmp_perf_path, mode='overwrite')
end = time.time()
print(end - start)
```
### Execute ETL Code Defined in 1st Cell
```
# Now let's actually process the data
start = time.time()
spark.conf.set('spark.sql.shuffle.partitions', '160')
perf = spark.read.parquet(tmp_perf_path)
acq = spark.read.parquet(tmp_acq_path)
train_out, test_out = run_mortgage(spark, perf, acq)
train_out.write.parquet(train_path, mode='overwrite')
end = time.time()
print(end - start)
test_out.write.parquet(test_path, mode='overwrite')
end = time.time()
print(end - start)
```
### Print Physical Plan
```
train_out.explain()
```
### Univariate linear regression using gradient descent
```
import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets, linear_model
from sklearn.metrics import mean_squared_error, r2_score
%matplotlib inline
data_train = np.zeros((2,20))
data_train[0] = [4, 5, 5, 7, 8, 8, 9, 11, 11, 12, 13, 14, 16, 18, 19, 19, 21, 22, 25, 27] #x (input)
data_train[1] = [21, 24, 27, 30, 29, 31, 32, 33, 36, 37, 41, 37, 40, 39, 41, 42, 44, 45, 45, 48] #y (what we want to predict)
data_test = np.zeros((2,5))
data_test[0] = [40, 15, 19, 23, 6] #x (input)
data_test[1] = [61, 39, 43, 46, 26] #y (what we want to predict)
# Create linear regression object
regr = linear_model.LinearRegression()
# Train the model using the training sets
regr.fit(data_train[0].reshape(-1, 1), data_train[1].reshape(-1, 1))
# Make predictions using the testing set
data_test_pred = regr.predict(data_test[0].reshape(-1, 1))
# The coefficients
print(regr.coef_)
print(regr.intercept_)
# The mean squared error
print("Mean squared error: %.2f"
% mean_squared_error(data_test[1].reshape(-1, 1), data_test_pred))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % r2_score(data_test[1].reshape(-1, 1), data_test_pred))
# Plot outputs
plt.plot(data_train[0], data_train[1], 'bx')
plt.plot(data_test[0], data_test_pred, color='red', linewidth=1)
plt.ylabel('Y_train')
plt.xlabel('X_train')
plt.title('Training dataset')
plt.xticks(())
plt.yticks(())
plt.show()
import torch
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
X, y = load_iris(return_X_y=True)
print(X.shape)
X_1 = X[:,0].reshape(-1,1)
print(X_1.shape)
print(y.shape)
# Using only one feature
classifier1 = LogisticRegression(random_state=0).fit(X_1, y)
classifier1.predict(X_1[:2])
classifier1.predict_proba(X_1[:2])
classifier1.score(X_1, y)
# Using all features
classifier1 = LogisticRegression(random_state=0, solver='lbfgs', multi_class='multinomial').fit(X, y)
classifier1.predict(X[:2, :])
classifier1.predict_proba(X[:2, :])
classifier1.score(X, y)
# Load the diabetes dataset
diabetes = datasets.load_diabetes()
# Use only one feature
diabetes_X = diabetes.data[:, np.newaxis, 2]
# Split the data into training/testing sets
diabetes_X_train = diabetes_X[:-20]
diabetes_X_test = diabetes_X[-20:]
# Split the targets into training/testing sets
diabetes_y_train = diabetes.target[:-20]
diabetes_y_test = diabetes.target[-20:]
```
#### Dataset
```
data_train = np.zeros((2,20))
data_train[0] = [4, 5, 5, 7, 8, 8, 9, 11, 11, 12, 13, 14, 16, 18, 19, 19, 21, 22, 25, 27] #x (input)
data_train[1] = [21, 24, 27, 30, 29, 31, 32, 33, 36, 37, 41, 37, 40, 39, 41, 42, 44, 45, 45, 48] #y (what we want to predict)
plt.plot(data_train[0], data_train[1], 'bx')
plt.ylabel('Y_train')
plt.xlabel('X_train')
plt.title('Training dataset')
plt.show()
```
#### Implement prediction function
- Based on hypothesis h(x) = t0 + t1*x
```
def make_prediction(X, t0, t1):
y = (t1 * X) + t0
return y
```
#### Implement cost function
- Using standard mean squared error
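For reference (my note, not from the original notebook), the cost implemented below is the mean squared error over the $N$ training points:

$$J(\theta_0,\theta_1)=\frac{1}{N}\sum_{i=1}^{N}\bigl(y_i-(\theta_0+\theta_1 x_i)\bigr)^2$$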
```
def compute_cost(y, y_predicted):
squared_differences = [data**2 for data in (y-y_predicted)]
cost = sum(squared_differences) / float(len(y))
return cost
```
#### Implement gradient descent function
- For each epoch:
- Compute the predicted y values using the current t0 and t1 values
- Compute the cost function on the entire dataset
- Compute the gradients
- Update the current t0 and t1 values with gradient descent
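The gradients used in the update step above follow from differentiating the MSE cost (my derivation; it matches the `t0_grad` and `t1_grad` lines in the code below, with $\hat y_i$ the current prediction):

$$\frac{\partial J}{\partial \theta_0}=-\frac{2}{N}\sum_{i=1}^{N}\bigl(y_i-\hat y_i\bigr),\qquad
\frac{\partial J}{\partial \theta_1}=-\frac{2}{N}\sum_{i=1}^{N}x_i\bigl(y_i-\hat y_i\bigr)$$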
```
def gradient_descent(X, y, t0_current=0, t1_current=0, epochs=1000, learning_rate=0.0001):
cost_array = np.zeros((4,epochs))
for i in range(epochs):
y_current = make_prediction(X, t0_current, t1_current)
cost = compute_cost(y, y_current)
t1_grad = -2/float(len(y)) * sum(X * (y - y_current))
t0_grad = -2/float(len(y)) * sum(y - y_current)
t1_current = t1_current - (learning_rate * t1_grad)
t0_current = t0_current - (learning_rate * t0_grad)
cost_array[:,i] = [i, cost, t0_current, t1_current]
return t1_current, t0_current, cost, cost_array
```
#### Run the algorithm
```
[t1_current, t0_current, cost, cost_array] = gradient_descent(data_train[0], data_train[1], t0_current=0, t1_current=0, epochs=20000, learning_rate=0.001)
print "The is h(x) = t0 + t1*x with t0 = {0} and t1 = {1}.".format(t0_current, t1_current)
print "This solution has a cost of {0}.".format(cost)
```
#### Plot the hypothesis
```
plt.plot(data_train[0], data_train[1], 'bx')
plt.ylabel('Y_train')
plt.xlabel('X_train')
plt.title('Training dataset')
h = np.linspace(0, 30, 100)
plt.plot(h, t0_current+t1_current*h)
plt.show()
```
#### Plot the cost vs the number of epochs
- Useful to make sure that your algorithm is learning and the cost is being minimized
- We can observe that the algorithm starts to converge after 2500 epochs
```
plt.plot(cost_array[0], cost_array[1])
plt.ylabel('Cost')
plt.xlabel('epochs')
plt.title('Cost vs epochs')
plt.show()
```
#### Plot the evolution of the t0 param. vs the number of epochs
- We initialized the t0 param. to 0 here.
```
plt.plot(cost_array[0], cost_array[2])
plt.ylabel('t0')
plt.xlabel('epochs')
plt.title('t0 vs epochs')
plt.show()
```
#### Plot the evolution of the t1 param. vs the number of epochs
- We initialized the t1 param. to 0 here.
```
plt.plot(cost_array[0], cost_array[3])
plt.ylabel('t1')
plt.xlabel('epochs')
plt.title('t1 vs epochs')
plt.show()
```
# Adversarial-Robustness-Toolbox for scikit-learn AdaBoostClassifier
```
from sklearn.ensemble import AdaBoostClassifier
from sklearn.datasets import load_iris
import numpy as np
from matplotlib import pyplot as plt
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import ZooAttack
from art.utils import load_mnist
import warnings
warnings.filterwarnings('ignore')
```
## 1 Training scikit-learn AdaBoostClassifier and attacking with ART Zeroth Order Optimization attack
```
def get_adversarial_examples(x_train, y_train):
# Create and fit AdaBoostClassifier
model = AdaBoostClassifier()
model.fit(X=x_train, y=y_train)
    # Create ART classifier for scikit-learn AdaBoostClassifier
art_classifier = SklearnClassifier(model=model)
# Create ART Zeroth Order Optimization attack
zoo = ZooAttack(classifier=art_classifier, confidence=0.0, targeted=False, learning_rate=1e-1, max_iter=20,
binary_search_steps=10, initial_const=1e-3, abort_early=True, use_resize=False,
use_importance=False, nb_parallel=1, batch_size=1, variable_h=0.2)
# Generate adversarial samples with ART Zeroth Order Optimization attack
x_train_adv = zoo.generate(x_train)
return x_train_adv, model
```
## 1.1 Utility functions
```
def get_data(num_classes):
x_train, y_train = load_iris(return_X_y=True)
x_train = x_train[y_train < num_classes][:, [0, 1]]
y_train = y_train[y_train < num_classes]
x_train[:, 0][y_train == 0] *= 2
x_train[:, 1][y_train == 2] *= 2
x_train[:, 0][y_train == 0] -= 3
x_train[:, 1][y_train == 2] -= 2
x_train[:, 0] = (x_train[:, 0] - 4) / (9 - 4)
x_train[:, 1] = (x_train[:, 1] - 1) / (6 - 1)
return x_train, y_train
def plot_results(model, x_train, y_train, x_train_adv, num_classes):
fig, axs = plt.subplots(1, num_classes, figsize=(num_classes * 5, 5))
colors = ['orange', 'blue', 'green']
for i_class in range(num_classes):
# Plot difference vectors
for i in range(y_train[y_train == i_class].shape[0]):
x_1_0 = x_train[y_train == i_class][i, 0]
x_1_1 = x_train[y_train == i_class][i, 1]
x_2_0 = x_train_adv[y_train == i_class][i, 0]
x_2_1 = x_train_adv[y_train == i_class][i, 1]
if x_1_0 != x_2_0 or x_1_1 != x_2_1:
axs[i_class].plot([x_1_0, x_2_0], [x_1_1, x_2_1], c='black', zorder=1)
# Plot benign samples
for i_class_2 in range(num_classes):
axs[i_class].scatter(x_train[y_train == i_class_2][:, 0], x_train[y_train == i_class_2][:, 1], s=20,
zorder=2, c=colors[i_class_2])
axs[i_class].set_aspect('equal', adjustable='box')
# Show predicted probability as contour plot
h = .01
x_min, x_max = 0, 1
y_min, y_max = 0, 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z_proba = model.predict_proba(np.c_[xx.ravel(), yy.ravel()])
Z_proba = Z_proba[:, i_class].reshape(xx.shape)
im = axs[i_class].contourf(xx, yy, Z_proba, levels=[0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0],
vmin=0, vmax=1)
if i_class == num_classes - 1:
cax = fig.add_axes([0.95, 0.2, 0.025, 0.6])
plt.colorbar(im, ax=axs[i_class], cax=cax)
# Plot adversarial samples
for i in range(y_train[y_train == i_class].shape[0]):
x_1_0 = x_train[y_train == i_class][i, 0]
x_1_1 = x_train[y_train == i_class][i, 1]
x_2_0 = x_train_adv[y_train == i_class][i, 0]
x_2_1 = x_train_adv[y_train == i_class][i, 1]
if x_1_0 != x_2_0 or x_1_1 != x_2_1:
axs[i_class].scatter(x_2_0, x_2_1, zorder=2, c='red', marker='X')
axs[i_class].set_xlim((x_min, x_max))
axs[i_class].set_ylim((y_min, y_max))
axs[i_class].set_title('class ' + str(i_class))
axs[i_class].set_xlabel('feature 1')
axs[i_class].set_ylabel('feature 2')
```
# 2 Example: Iris dataset
### legend
- colored background: probability of class i
- orange circles: class 0
- blue circles: class 1
- green circles: class 2
- red crosses: adversarial samples for class i
```
num_classes = 2
x_train, y_train = get_data(num_classes=num_classes)
x_train_adv, model = get_adversarial_examples(x_train, y_train)
plot_results(model, x_train, y_train, x_train_adv, num_classes)
num_classes = 3
x_train, y_train = get_data(num_classes=num_classes)
x_train_adv, model = get_adversarial_examples(x_train, y_train)
plot_results(model, x_train, y_train, x_train_adv, num_classes)
```
# 3 Example: MNIST
## 3.1 Load and transform MNIST dataset
```
(x_train, y_train), (x_test, y_test), min_, max_ = load_mnist()
n_samples_train = x_train.shape[0]
n_features_train = x_train.shape[1] * x_train.shape[2] * x_train.shape[3]
n_samples_test = x_test.shape[0]
n_features_test = x_test.shape[1] * x_test.shape[2] * x_test.shape[3]
x_train = x_train.reshape(n_samples_train, n_features_train)
x_test = x_test.reshape(n_samples_test, n_features_test)
y_train = np.argmax(y_train, axis=1)
y_test = np.argmax(y_test, axis=1)
n_samples_max = 200
x_train = x_train[0:n_samples_max]
y_train = y_train[0:n_samples_max]
x_test = x_test[0:n_samples_max]
y_test = y_test[0:n_samples_max]
```
## 3.2 Train AdaBoostClassifier classifier
```
model = AdaBoostClassifier(base_estimator=None, n_estimators=50, learning_rate=0.1, algorithm='SAMME.R',
random_state=None)
model.fit(X=x_train, y=y_train)
```
## 3.3 Create and apply Zeroth Order Optimization Attack with ART
```
art_classifier = SklearnClassifier(model=model)
zoo = ZooAttack(classifier=art_classifier, confidence=0.0, targeted=False, learning_rate=1e-1, max_iter=30,
binary_search_steps=20, initial_const=1e-3, abort_early=True, use_resize=False,
use_importance=False, nb_parallel=10, batch_size=1, variable_h=0.25)
x_train_adv = zoo.generate(x_train)
x_test_adv = zoo.generate(x_test)
```
## 3.4 Evaluate AdaBoostClassifier on benign and adversarial samples
```
score = model.score(x_train, y_train)
print("Benign Training Score: %.4f" % score)
plt.matshow(x_train[0, :].reshape((28, 28)))
plt.clim(0, 1)
prediction = model.predict(x_train[0:1, :])[0]
print("Benign Training Predicted Label: %i" % prediction)
score = model.score(x_train_adv, y_train)
print("Adversarial Training Score: %.4f" % score)
plt.matshow(x_train_adv[0, :].reshape((28, 28)))
plt.clim(0, 1)
prediction = model.predict(x_train_adv[0:1, :])[0]
print("Adversarial Training Predicted Label: %i" % prediction)
score = model.score(x_test, y_test)
print("Benign Test Score: %.4f" % score)
plt.matshow(x_test[0, :].reshape((28, 28)))
plt.clim(0, 1)
prediction = model.predict(x_test[0:1, :])[0]
print("Benign Test Predicted Label: %i" % prediction)
score = model.score(x_test_adv, y_test)
print("Adversarial Test Score: %.4f" % score)
plt.matshow(x_test_adv[0, :].reshape((28, 28)))
plt.clim(0, 1)
prediction = model.predict(x_test_adv[0:1, :])[0]
print("Adversarial Test Predicted Label: %i" % prediction)
```
<a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/word_analogies_torch.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Solving word analogies using pre-trained word embeddings
Based on D2L 14.7
http://d2l.ai/chapter_natural-language-processing-pretraining/similarity-analogy.html
```
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(seed=1)
import math
import requests
import zipfile
import hashlib
import os
import random
import torch
from torch import nn
from torch.nn import functional as F
!mkdir figures # for saving plots
# Required functions
def download(name, cache_dir=os.path.join('..', 'data')):
"""Download a file inserted into DATA_HUB, return the local filename."""
assert name in DATA_HUB, f"{name} does not exist in {DATA_HUB}."
url, sha1_hash = DATA_HUB[name]
os.makedirs(cache_dir, exist_ok=True)
fname = os.path.join(cache_dir, url.split('/')[-1])
if os.path.exists(fname):
sha1 = hashlib.sha1()
with open(fname, 'rb') as f:
while True:
data = f.read(1048576)
if not data:
break
sha1.update(data)
if sha1.hexdigest() == sha1_hash:
return fname # Hit cache
print(f'Downloading {fname} from {url}...')
r = requests.get(url, stream=True, verify=True)
with open(fname, 'wb') as f:
f.write(r.content)
return fname
def download_extract(name, folder=None):
"""Download and extract a zip/tar file."""
fname = download(name)
base_dir = os.path.dirname(fname)
data_dir, ext = os.path.splitext(fname)
if ext == '.zip':
fp = zipfile.ZipFile(fname, 'r')
elif ext in ('.tar', '.gz'):
fp = tarfile.open(fname, 'r')
else:
assert False, 'Only zip/tar files can be extracted.'
fp.extractall(base_dir)
return os.path.join(base_dir, folder) if folder else data_dir
```
# Get pre-trained word embeddings
Pretrained embeddings taken from
GloVe website: https://nlp.stanford.edu/projects/glove/
fastText website: https://fasttext.cc/
```
DATA_HUB = dict()
DATA_URL = 'http://d2l-data.s3-accelerate.amazonaws.com/'
DATA_HUB['glove.6b.50d'] = (DATA_URL + 'glove.6B.50d.zip',
'0b8703943ccdb6eb788e6f091b8946e82231bc4d')
DATA_HUB['glove.6b.100d'] = (DATA_URL + 'glove.6B.100d.zip',
'cd43bfb07e44e6f27cbcc7bc9ae3d80284fdaf5a')
DATA_HUB['glove.42b.300d'] = (DATA_URL + 'glove.42B.300d.zip',
'b5116e234e9eb9076672cfeabf5469f3eec904fa')
DATA_HUB['wiki.en'] = (DATA_URL + 'wiki.en.zip',
'c1816da3821ae9f43899be655002f6c723e91b88')
class TokenEmbedding:
"""Token Embedding."""
def __init__(self, embedding_name):
self.idx_to_token, self.idx_to_vec = self._load_embedding(
embedding_name)
self.unknown_idx = 0
self.token_to_idx = {
token: idx for idx, token in enumerate(self.idx_to_token)}
def _load_embedding(self, embedding_name):
idx_to_token, idx_to_vec = ['<unk>'], []
# data_dir = d2l.download_extract(embedding_name)
data_dir = download_extract(embedding_name)
# GloVe website: https://nlp.stanford.edu/projects/glove/
# fastText website: https://fasttext.cc/
with open(os.path.join(data_dir, 'vec.txt'), 'r') as f:
for line in f:
elems = line.rstrip().split(' ')
token, elems = elems[0], [float(elem) for elem in elems[1:]]
# Skip header information, such as the top row in fastText
if len(elems) > 1:
idx_to_token.append(token)
idx_to_vec.append(elems)
idx_to_vec = [[0] * len(idx_to_vec[0])] + idx_to_vec
return idx_to_token, torch.tensor(idx_to_vec)
def __getitem__(self, tokens):
indices = [
self.token_to_idx.get(token, self.unknown_idx)
for token in tokens]
vecs = self.idx_to_vec[torch.tensor(indices)]
return vecs
def __len__(self):
return len(self.idx_to_token)
```
Get a 50-dimensional GloVe embedding with a vocabulary size of 400k.
```
glove_6b50d = TokenEmbedding('glove.6b.50d')
len(glove_6b50d)
```
Map from word to index and vice versa.
```
glove_6b50d.token_to_idx['beautiful'], glove_6b50d.idx_to_token[3367]
embedder = glove_6b50d
#embedder = TokenEmbedding('glove.6b.100d')
embedder.idx_to_vec.shape
```
# Finding most similar words
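The nearest-neighbor search below ranks words by cosine similarity (my note, matching the `knn` implementation; the small $10^{-9}$ term in the code exists only for numerical stability):

$$\cos(\mathbf{w},\mathbf{x})=\frac{\mathbf{w}^{\top}\mathbf{x}}{\lVert\mathbf{w}\rVert\,\lVert\mathbf{x}\rVert}$$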
```
def knn(W, x, k):
# The added 1e-9 is for numerical stability
cos = torch.mv(W, x.reshape(-1,)) / (
(torch.sqrt(torch.sum(W * W, axis=1) + 1e-9) * torch.sqrt((x * x).sum())) )
_, topk = torch.topk(cos, k=k)
return topk, [cos[int(i)] for i in topk]
def get_similar_tokens(query_token, k, embed):
topk, cos = knn(embed.idx_to_vec, embed[[query_token]], k + 1)
for i, c in zip(topk[1:], cos[1:]): # Remove input words
print(f'cosine sim={float(c):.3f}: {embed.idx_to_token[int(i)]}')
get_similar_tokens('man', 3, embedder)
get_similar_tokens('banana', 3, embedder)
```
# Word analogies
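For an analogy $a : b :: c : d$, the missing word $d$ is found by vector arithmetic (my note, matching `get_analogy` below): form $\mathrm{vec}(b)-\mathrm{vec}(a)+\mathrm{vec}(c)$ and return its nearest neighbor by cosine similarity, excluding $c$ itself:

$$d=\operatorname*{arg\,max}_{w}\;\cos\bigl(\mathrm{vec}(w),\;\mathrm{vec}(b)-\mathrm{vec}(a)+\mathrm{vec}(c)\bigr)$$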
```
# We slightly modify D2L code so it works on the man:woman:king:queen example
def get_analogy(token_a, token_b, token_c, embed):
vecs = embed[[token_a, token_b, token_c]]
x = vecs[1] - vecs[0] + vecs[2]
topk, cos = knn(embed.idx_to_vec, x, 10)
# remove word c from nearest neighbor
idx_c = embed.token_to_idx[token_c]
topk = list(topk.numpy())
topk.remove(idx_c)
return embed.idx_to_token[int(topk[0])]
get_analogy('man', 'woman', 'king', embedder)
get_analogy('man', 'woman', 'son', embedder)
get_analogy('beijing', 'china', 'tokyo', embedder)
```
<!--BOOK_INFORMATION-->
<a href="https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-opencv" target="_blank"><img align="left" src="data/cover.jpg" style="width: 76px; height: 100px; background: white; padding: 1px; border: 1px solid black; margin-right:10px;"></a>
*This notebook contains an excerpt from the book [Machine Learning for OpenCV](https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-opencv) by Michael Beyeler.
The code is released under the [MIT license](https://opensource.org/licenses/MIT),
and is available on [GitHub](https://github.com/mbeyeler/opencv-machine-learning).*
*Note that this excerpt contains only the raw code - the book is rich with additional explanations and illustrations.
If you find this content useful, please consider supporting the work by
[buying the book](https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-opencv)!*
<!--NAVIGATION-->
< [Compressing Color Spaces Using k-Means](08.02-Compressing-Color-Images-Using-k-Means.ipynb) | [Contents](../README.md) | [Implementing Agglomerative Hierarchical Clustering](08.04-Implementing-Agglomerative-Hierarchical-Clustering.ipynb) >
# Classifying handwritten digits using k-means
Although the last application was a pretty creative use of $k$-means, we can do better still.
We have previously discussed k-means in the context of unsupervised learning, where we
tried to discover some hidden structure in the data.
However, doesn't the same concept apply to most classification tasks? Let's say our task was
to classify handwritten digits. Don't most zeros look similar, if not the same? And don't all
zeros look categorically different from all possible ones? Isn't this exactly the kind of
"hidden structure" we set out to discover with unsupervised learning? Doesn't this mean we
could use clustering for classification as well?
Let's find out together. In this section, we will attempt to use k-means to try and classify
handwritten digits. In other words, we will try to identify similar digits without using the
original label information.
## Loading the dataset
From the earlier chapters, you might recall that scikit-learn provides a whole range of
handwritten digits via its `load_digits` utility function. The dataset consists of 1,797
samples with 64 features each, where each feature is the brightness of one pixel in
an 8 x 8 image:
```
from sklearn.datasets import load_digits
digits = load_digits()
digits.data.shape
```
## Running k-means
Setting up $k$-means works exactly the same as in the previous examples. We tell the
algorithm to perform at most 10 iterations and stop the process if our prediction of the
cluster centers does not improve within a distance of 1.0:
```
import cv2
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
flags = cv2.KMEANS_RANDOM_CENTERS
```
Then we apply $k$-means to the data as we did before. Since there are 10 different digits (0-9),
we tell the algorithm to look for 10 distinct clusters:
```
import numpy as np
compactness, clusters, centers = cv2.kmeans(digits.data.astype(np.float32), 10, None, criteria, 10, flags)
```
And done!
Similar to the $N \times 3$ matrix that represented different RGB colors, this time, the centers array
consists of $N \times 8 \times 8$ center images, where $N$ is the number of clusters. Therefore, if we want
to plot the centers, we have to reshape the `centers` matrix back into 8 x 8 images:
```
import matplotlib.pyplot as plt
plt.style.use('ggplot')
%matplotlib inline
fig, ax = plt.subplots(2, 5, figsize=(10, 4))
centers = centers.reshape(10, 8, 8)
for axi, center in zip(ax.flat, centers):
axi.set(xticks=[], yticks=[])
axi.imshow(center, interpolation='nearest', cmap=plt.cm.binary)
plt.savefig('digits.png')
```
Look familiar?
Remarkably, $k$-means was able to partition the digit images not just into any 10 random
clusters, but into the digits 0-9! In order to find out which images were grouped into which
clusters, we need to generate a labels vector as we know it from supervised learning
problems:
```
from scipy.stats import mode
labels = np.zeros_like(clusters.ravel())
for i in range(10):
mask = (clusters.ravel() == i)
labels[mask] = mode(digits.target[mask])[0]
```
Then we can calculate the performance of the algorithm using scikit-learn's
accuracy_score metric:
```
from sklearn.metrics import accuracy_score
accuracy_score(digits.target, labels)
```
Remarkably, $k$-means achieved 78.4% accuracy without knowing the first thing about the
labels of the original images!
We can gain more insights about what went wrong and how by looking at the **confusion
matrix**. The confusion matrix is a 2D matrix $C$, where every element $C_{i,j}$ is equal to the
number of observations known to be in group (or cluster) $i$, but predicted to be in group $j$.
Thus, all elements on the diagonal of the matrix represent data points that have been
correctly classified (that is, known to be in group $i$ and predicted to be in group $i$). Off-diagonal
elements show misclassifications.
In scikit-learn, creating a confusion matrix is essentially a one-liner:
```
from sklearn.metrics import confusion_matrix
confusion_matrix(digits.target, labels)
```
The confusion matrix tells us that $k$-means did a pretty good job at classifying data points
from the first nine classes; however, it confused all nines to be (mostly) threes. Still, this
result is pretty solid, given that the algorithm had no target labels to be trained on.
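
To make the nines-vs-threes confusion easier to spot, we can render the matrix as a heatmap. This is a minimal sketch (my addition, not part of the book's code) that assumes `digits` and `labels` from the cells above:

```
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix

# Rows are true digits, columns are the cluster-derived predictions
conf_mat = confusion_matrix(digits.target, labels)
plt.imshow(conf_mat, cmap=plt.cm.Blues)
plt.colorbar()
plt.xticks(range(10))
plt.yticks(range(10))
plt.xlabel('predicted label')
plt.ylabel('true label')
plt.show()
```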
<!--NAVIGATION-->
< [Compressing Color Spaces Using k-Means](08.02-Compressing-Color-Images-Using-k-Means.ipynb) | [Contents](../README.md) | [Implementing Agglomerative Hierarchical Clustering](08.04-Implementing-Agglomerative-Hierarchical-Clustering.ipynb) >
# Simple encoder-decoder translation over the b3 dataset
```
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchtext import data
import pandas as pd
import unicodedata
import string
import re
import random
import copy
from contra_qa.plots.functions import simple_step_plot
import matplotlib.pyplot as plt
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
from nltk.translate.bleu_score import sentence_bleu
%matplotlib inline
```
### Preparing data
```
df2 = pd.read_csv("data/boolean3_train.csv")
df2_test = pd.read_csv("data/boolean3_test.csv")
df2["text"] = df2["sentence1"] + df2["sentence2"]
df2_test["text"] = df2_test["sentence1"] + df2_test["sentence2"]
all_sentences = list(df2.text.values) + list(df2_test.text.values)
df2train = df2.iloc[:8500]
df2valid = df2.iloc[8500:]
df2train.tail()
SOS_token = 0
EOS_token = 1
class Lang:
def __init__(self, name):
self.name = name
self.word2index = {}
self.word2count = {}
self.index2word = {0: "SOS", 1: "EOS"}
self.n_words = 2 # Count SOS and EOS
def addSentence(self, sentence):
for word in sentence.split(' '):
self.addWord(word)
def addWord(self, word):
if word not in self.word2index:
self.word2index[word] = self.n_words
self.word2count[word] = 1
self.index2word[self.n_words] = word
self.n_words += 1
else:
self.word2count[word] += 1
# Turn a Unicode string to plain ASCII, thanks to
# http://stackoverflow.com/a/518232/2809427
def unicodeToAscii(s):
return ''.join(
c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn')
# Lowercase, trim, and remove non-letter characters
def normalizeString(s):
s = unicodeToAscii(s.lower().strip())
s = re.sub(r"([.!?])", r" \1", s)
s = re.sub(r"[^a-zA-Z.!?]+", r" ", s)
return s
example = "ddddda'''~~çãpoeéééééÈ'''#$$##@!@!@AAS@#12323fdf"
print("Before:", example)
print()
print("After:", normalizeString(example))
pairs_A = list(zip(list(df2train.sentence1.values), list(df2train.and_A.values)))
pairs_B = list(zip(list(df2train.sentence1.values), list(df2train.and_B.values)))
pairs_A = [(normalizeString(s1), normalizeString(s2)) for s1, s2 in pairs_A]
pairs_B = [(normalizeString(s1), normalizeString(s2)) for s1, s2 in pairs_B]
pairs_A_val = list(zip(list(df2valid.sentence1.values), list(df2valid.and_A.values)))
pairs_B_val = list(zip(list(df2valid.sentence1.values), list(df2valid.and_B.values)))
pairs_A_val = [(normalizeString(s1), normalizeString(s2)) for s1, s2 in pairs_A_val]
pairs_B_val = [(normalizeString(s1), normalizeString(s2)) for s1, s2 in pairs_B_val]
all_text_pairs = zip(all_sentences, all_sentences)
all_text_pairs = [(normalizeString(s1), normalizeString(s2)) for s1, s2 in all_text_pairs]
def readLangs(lang1, lang2, pairs, reverse=False):
# Reverse pairs, make Lang instances
if reverse:
pairs = [tuple(reversed(p)) for p in pairs]
input_lang = Lang(lang2)
output_lang = Lang(lang1)
else:
input_lang = Lang(lang1)
output_lang = Lang(lang2)
return input_lang, output_lang, pairs
MAX_LENGTH = 20
def filterPair(p):
cond1 = len(p[0].split(' ')) < MAX_LENGTH
cond2 = len(p[1].split(' ')) < MAX_LENGTH
return cond1 and cond2
def filterPairs(pairs):
return [pair for pair in pairs if filterPair(pair)]
def prepareData(lang1, lang2, pairs, reverse=False):
input_lang, output_lang, pairs = readLangs(lang1, lang2, pairs, reverse)
print("Read %s sentence pairs" % len(pairs))
pairs = filterPairs(pairs)
print("Trimmed to %s sentence pairs" % len(pairs))
print("Counting words...")
for pair in pairs:
input_lang.addSentence(pair[0])
output_lang.addSentence(pair[1])
print("Counted words:")
print(input_lang.name, input_lang.n_words)
print(output_lang.name, output_lang.n_words)
return input_lang, output_lang, pairs
_, _, training_pairs_A = prepareData("eng_enc",
"eng_dec",
pairs_A)
print()
input_lang, _, _ = prepareData("eng_enc",
"eng_dec",
all_text_pairs)
output_lang = copy.deepcopy(input_lang)
print()
_, _, valid_pairs_A = prepareData("eng_enc",
"eng_dec",
pairs_A_val)
_, _, training_pairs_B = prepareData("eng_enc",
"eng_dec",
pairs_B)
print()
_, _, valid_pairs_B = prepareData("eng_enc",
"eng_dec",
pairs_B_val)
```
### Sentences to tensors
```
example = random.choice(training_pairs_A)
print(example)
def indexesFromSentence(lang, sentence):
return [lang.word2index[word] for word in sentence.split(' ')]
indexesFromSentence(input_lang,example[0])
indexesFromSentence(output_lang, example[1])
def tensorFromSentence(lang, sentence):
indexes = indexesFromSentence(lang, sentence)
indexes.append(EOS_token)
return torch.tensor(indexes, dtype=torch.long, device=device).view(-1, 1)
input_sen = tensorFromSentence(input_lang,example[0])
output_sen = tensorFromSentence(output_lang, example[1])
print(input_sen)
print()
print(input_sen.shape)
print(input_sen.dtype)
print(output_sen)
print()
print(output_sen.shape)
print(output_sen.dtype)
def tensorsFromPair(pair):
input_tensor = tensorFromSentence(input_lang, pair[0])
target_tensor = tensorFromSentence(output_lang, pair[1])
return (input_tensor, target_tensor)
input_sen, output_sen = tensorsFromPair(example)
print("input\n")
print(input_sen)
print()
print(input_sen.shape)
print(input_sen.dtype)
print("\noutput\n")
print(output_sen)
print()
print(output_sen.shape)
print(output_sen.dtype)
class EncoderRNN(nn.Module):
def __init__(self, input_size, hidden_size):
super(EncoderRNN, self).__init__()
self.hidden_size = hidden_size
self.embedding = nn.Embedding(input_size, hidden_size)
self.gru = nn.GRU(hidden_size, hidden_size)
def forward(self, input, hidden):
embedded = self.embedding(input).view(1, 1, -1)
output = embedded
output, hidden = self.gru(output, hidden)
return output, hidden
def initHidden(self):
return torch.zeros(1, 1, self.hidden_size, device=device)
hidden_size = 10
eng_enc_v_size = input_lang.n_words
eng_dec_v_size = output_lang.n_words
encoder = EncoderRNN(eng_enc_v_size, hidden_size)
h0 = encoder.initHidden()
print("input_sen:", input_sen.shape, input_sen.dtype)
print("h0:", h0.shape, h0.dtype)
max_length = MAX_LENGTH
encoder_outputs = torch.zeros(max_length,
encoder.hidden_size,
device=device)
input_length = input_sen.size(0)
for ei in range(input_length):
output, hidden_enc = encoder(input_sen[ei], h0)
h0 = hidden_enc
encoder_outputs[ei] = output[0, 0]
print("output:", output.shape, output.dtype)
print("hidden_enc:", hidden_enc.shape, hidden_enc.dtype)
class DecoderRNN(nn.Module):
def __init__(self, hidden_size, output_size):
super(DecoderRNN, self).__init__()
self.hidden_size = hidden_size
self.embedding = nn.Embedding(output_size, hidden_size)
self.gru = nn.GRU(hidden_size, hidden_size)
self.out = nn.Linear(hidden_size, output_size)
self.softmax = nn.LogSoftmax(dim=1)
def forward(self, input, hidden):
output = self.embedding(input).view(1, 1, -1)
output = F.relu(output)
output, hidden = self.gru(output, hidden)
output = self.softmax(self.out(output[0]))
return output, hidden
def initHidden(self):
return torch.zeros(1, 1, self.hidden_size, device=device)
decoder = DecoderRNN(hidden_size, eng_dec_v_size)
decoder_input = torch.tensor([[SOS_token]], device=device)
decoder_hidden = hidden_enc
target_length = output_sen.size(0)
for di in range(target_length):
decoder_output, decoder_hidden = decoder(decoder_input, decoder_hidden)
decoder_input = output_sen[di] # Teacher forcing
print("decoder_output:", decoder_output.shape, decoder_output.dtype)
print()
print("decoder_hidden:", decoder_hidden.shape, decoder_hidden.dtype)
```
## Calculate loss over each token of the target language
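The training loop below accumulates `NLLLoss` over each target token and divides by the target length, i.e. (my note, matching the code) it minimizes the average per-token negative log-likelihood:

$$\mathcal{L}(x,y)=\frac{1}{T}\sum_{t=1}^{T}-\log p_{\theta}\bigl(y_t \mid y_{<t},\,x\bigr)$$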
```
learning_rate = 0.2
encoder_optimizer = torch.optim.SGD(encoder.parameters(), lr=learning_rate)
decoder_optimizer = torch.optim.SGD(decoder.parameters(), lr=learning_rate)
criterion = nn.NLLLoss()
def train(input_tensor,
target_tensor,
encoder,
decoder,
encoder_optimizer,
decoder_optimizer,
criterion,
max_length,
teacher_forcing_ratio=0.5):
encoder_hidden = encoder.initHidden()
encoder_optimizer.zero_grad()
decoder_optimizer.zero_grad()
input_length = input_tensor.size(0)
target_length = target_tensor.size(0)
encoder_outputs = torch.zeros(max_length,
encoder.hidden_size,
device=device)
loss = 0
for ei in range(input_length):
encoder_output, encoder_hidden = encoder(
input_tensor[ei], encoder_hidden)
encoder_outputs[ei] = encoder_output[0, 0]
decoder_input = torch.tensor([[SOS_token]], device=device)
decoder_hidden = encoder_hidden
use_teacher_forcing = True
if not random.random() < teacher_forcing_ratio:
use_teacher_forcing = False
if use_teacher_forcing:
# Teacher forcing: Feed the target as the next input
for di in range(target_length):
decoder_output, decoder_hidden = decoder(
decoder_input, decoder_hidden)
loss += criterion(decoder_output, target_tensor[di])
decoder_input = target_tensor[di] # Teacher forcing
else:
# Without teacher forcing: use its own predictions as the next input
for di in range(target_length):
decoder_output, decoder_hidden = decoder(
decoder_input, decoder_hidden)
_, topone = decoder_output.topk(1)
decoder_input = topone.squeeze().detach() # detach from history as input
loss += criterion(decoder_output, target_tensor[di])
if decoder_input.item() == EOS_token:
break
loss.backward()
encoder_optimizer.step()
decoder_optimizer.step()
return loss.item() / target_length
def get_loss(input_tensor,
target_tensor,
encoder,
decoder,
criterion,
max_length):
encoder_hidden = encoder.initHidden()
input_length = input_tensor.size(0)
target_length = target_tensor.size(0)
encoder_outputs = torch.zeros(max_length,
encoder.hidden_size,
device=device)
loss = 0
for ei in range(input_length):
encoder_output, encoder_hidden = encoder(
input_tensor[ei], encoder_hidden)
encoder_outputs[ei] = encoder_output[0, 0]
decoder_input = torch.tensor([[SOS_token]], device=device)
decoder_hidden = encoder_hidden
for di in range(target_length):
decoder_output, decoder_hidden = decoder(
decoder_input, decoder_hidden)
_, topone = decoder_output.topk(1)
decoder_input = topone.squeeze().detach() # detach from history as input
loss += criterion(decoder_output, target_tensor[di])
if decoder_input.item() == EOS_token:
break
return loss.item() / target_length
```
Test get loss
```
valid_pairs = [tensorsFromPair(pair) for pair in valid_pairs_A]
valid_loss = []
for t in valid_pairs:
input_sen, output_sen = t
loss = get_loss(input_sen,
output_sen,
encoder,
decoder,
criterion,
MAX_LENGTH)
valid_loss.append(loss)
print("mean loss", np.mean(valid_loss))
import time
import math
def asMinutes(s):
m = math.floor(s / 60)
s -= m * 60
return '%dm %ds' % (m, s)
def timeSince(since):
now = time.time()
s = now - since
return '%s' % asMinutes(s)
```
Test train
```
n_iters = 1000
training_pairs_little = [tensorsFromPair(random.choice(training_pairs_A)) for i in range(n_iters)]
losses = []
start = time.time()
for t in training_pairs_little:
input_sen, output_sen = t
loss = train(input_sen,
output_sen,
encoder,
decoder,
encoder_optimizer,
decoder_optimizer,
criterion,
max_length=MAX_LENGTH)
losses.append(loss)
print(timeSince(start))
simple_step_plot([losses],
"loss",
"loss example ({} pair of sentences only)".format(n_iters),
"loss_example.png",
figsize=(10,3))
def trainIters(encoder,
decoder,
n_iters,
pairs,
valid_pairs,
encoder_path,
decoder_path,
batch_size=32,
status_every=100,
learning_rate=0.01,
teacher_forcing_ratio=0.5):
plot_losses = []
old = 0
start = time.time()
all_loss = []
valid_loss = float("inf")
encoder_optimizer = torch.optim.SGD(encoder.parameters(), lr=learning_rate)
decoder_optimizer = torch.optim.SGD(decoder.parameters(), lr=learning_rate)
criterion = nn.NLLLoss()
training_pairs = [tensorsFromPair(random.choice(pairs))
for i in range(n_iters)]
for i, t in enumerate(training_pairs):
input_sen, output_sen = t
loss = train(input_sen,
output_sen,
encoder,
decoder,
encoder_optimizer,
decoder_optimizer,
criterion,
max_length=MAX_LENGTH,
teacher_forcing_ratio=teacher_forcing_ratio)
plot_losses.append(loss)
if i % status_every == 0 and i != 0:
valid_batch = [tensorsFromPair(random.choice(valid_pairs))
for i in range(batch_size)]
batch_loss = 0
for t in valid_batch:
input_sen, output_sen = t
batch_loss += get_loss(input_sen,
output_sen,
encoder,
decoder,
criterion,
MAX_LENGTH)
current_valid_loss = batch_loss / batch_size
if current_valid_loss < valid_loss:
valid_loss = current_valid_loss
torch.save(encoder.state_dict(), encoder_path)
torch.save(decoder.state_dict(), decoder_path)
print("mean training loss = {:.2f}".format(np.mean(plot_losses)))
print("mean valid loss = {:.2f}".format(current_valid_loss))
print("time in {} steps:".format(status_every), timeSince(start))
print()
# simple_step_plot([plot_losses],
# "loss",
# "loss plot (from {} to {})".format(old, i),
# "loss_example.png",
# figsize=(10, 3))
all_loss += plot_losses
plot_losses = []
old = i
start = time.time()
simple_step_plot([all_loss],
"loss",
"loss over training" ,
"loss_example.png",
figsize=(15, 3))
```
## translating
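Decoding here is greedy (my note, matching the `translate` function below): at each step the single most likely token is fed back in as the next input, until `<EOS>` is produced or the length limit is reached:

$$\hat y_t=\operatorname*{arg\,max}_{w}\;p_{\theta}\bigl(w \mid \hat y_{<t},\,x\bigr)$$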
```
def translate(encoder,
decoder,
sentence,
max_length=MAX_LENGTH):
with torch.no_grad():
input_tensor = tensorFromSentence(input_lang, sentence)
input_length = input_tensor.size()[0]
encoder_hidden = encoder.initHidden()
encoder_outputs = torch.zeros(
max_length, encoder.hidden_size, device=device)
for ei in range(input_length):
encoder_output, encoder_hidden = encoder(input_tensor[ei],
encoder_hidden)
encoder_outputs[ei] += encoder_output[0, 0]
decoder_input = torch.tensor([[SOS_token]], device=device) # SOS
decoder_hidden = encoder_hidden
decoded_words = []
for di in range(max_length):
decoder_output, decoder_hidden = decoder(decoder_input, decoder_hidden)
_, topone = decoder_output.data.topk(1)
if topone.item() == EOS_token:
decoded_words.append('<EOS>')
break
else:
decoded_words.append(output_lang.index2word[topone.item()])
decoder_input = topone.squeeze().detach()
return " ".join(decoded_words)
```
## Translation of an untrained model
```
encoder = EncoderRNN(eng_enc_v_size, hidden_size)
decoder = DecoderRNN(hidden_size, eng_dec_v_size)
np.random.shuffle(training_pairs_A)
for t in training_pairs_A[0:3]:
print("input_sentence : " + t[0])
neural_translation = translate(encoder,
decoder,
t[0],
max_length=MAX_LENGTH)
print("neural translation : " + neural_translation)
reference = t[1] + ' <EOS>'
print("reference translation : " + reference)
reference = reference.split(" ")
candidate = neural_translation.split(" ")
score = sentence_bleu([reference], candidate)
print("blue score = {:.2f}".format(score))
print()
```
## Training some models and observing their translations
```
def save_translation(pairs, encoder, decoder, max_length, out_path):
with open(out_path, "w") as file:
file.write("source,candidate,reference,blue,accuracy\n")
for tuple_ in pairs:
source, reference = tuple_
candidate = translate(encoder,
decoder,
source,
max_length=max_length)
reference = reference + ' <EOS>'
blue = sentence_bleu([reference.split(" ")], candidate.split(" "))
if blue >= 0.95:
acc = 1
else:
acc = 0
line = source + ","
line += candidate + ","
line += reference + ","
line += "{:.3f},".format(blue)
line += "{}\n".format(acc)
file.write(line)
```
Test save_translation
```
save_translation(training_pairs_A[0:3],
encoder,
decoder,
MAX_LENGTH,
"temp.csv")
```
### Training 1
```
hidden_size = 500
encoder = EncoderRNN(eng_enc_v_size, hidden_size)
decoder = DecoderRNN(hidden_size, eng_dec_v_size)
trainIters(encoder=encoder,
decoder=decoder,
n_iters=5000,
pairs=training_pairs_A,
valid_pairs=valid_pairs_A,
encoder_path="b3_encoder1.pkl",
decoder_path="b3_decoder1.pkl",
status_every=200,
learning_rate=0.02,
teacher_forcing_ratio=0.2)
save_translation(training_pairs_A,
encoder,
decoder,
MAX_LENGTH,
"b3_training1.csv")
df_results = pd.read_csv("b3_training1.csv")
acc = np.mean(df_results.accuracy.values)
blue = np.mean(df_results.blue.values)
print("mean blue score over training data = {:.3f}".format(blue))
print("mean acc over training data = {:.3f}".format(acc))
```
### Training 2
```
hidden_size = 500
encoder = EncoderRNN(eng_enc_v_size, hidden_size)
decoder = DecoderRNN(hidden_size, eng_dec_v_size)
trainIters(encoder=encoder,
decoder=decoder,
n_iters=5000,
pairs=training_pairs_B,
valid_pairs=valid_pairs_B,
encoder_path="b3_encoder2.pkl",
decoder_path="b3_decoder2.pkl",
status_every=200,
learning_rate=0.02,
teacher_forcing_ratio=0.5)
save_translation(training_pairs_A,
encoder,
decoder,
MAX_LENGTH,
"b3_training2.csv")
df_results = pd.read_csv("b3_training2.csv")
acc = np.mean(df_results.accuracy.values)
blue = np.mean(df_results.blue.values)
print("mean blue score over training data = {:.3f}".format(blue))
print("mean acc over training data = {:.3f}".format(acc))
```
### Evaluating the trained models
### and A
```
hidden_size = 500
encoder = EncoderRNN(eng_enc_v_size, hidden_size)
decoder = DecoderRNN(hidden_size, eng_dec_v_size)
encoder.load_state_dict(torch.load("b3_encoder1.pkl"))
decoder.load_state_dict(torch.load("b3_decoder1.pkl"))
save_translation(training_pairs_A,
encoder,
decoder,
MAX_LENGTH,
"b3_training1.csv")
df_results = pd.read_csv("b3_training1.csv")
acc = np.mean(df_results.accuracy.values)
blue = np.mean(df_results.blue.values)
print("mean blue score over training data = {:.3f}".format(blue))
print("mean acc over training data = {:.3f}".format(acc))
save_translation(valid_pairs_A,
encoder,
decoder,
MAX_LENGTH,
"b3_valid1.csv")
df_results = pd.read_csv("b3_valid1.csv")
acc = np.mean(df_results.accuracy.values)
blue = np.mean(df_results.blue.values)
print("mean blue score over valid data = {:.3f}".format(blue))
print("mean acc over valid data = {:.3f}".format(acc))
```
### and B
```
hidden_size = 500
encoder = EncoderRNN(eng_enc_v_size, hidden_size)
decoder = DecoderRNN(hidden_size, eng_dec_v_size)
encoder.load_state_dict(torch.load("b3_encoder2.pkl"))
decoder.load_state_dict(torch.load("b3_decoder2.pkl"))
save_translation(training_pairs_B,
encoder,
decoder,
MAX_LENGTH,
"b3_training2.csv")
df_results = pd.read_csv("b3_training2.csv")
acc = np.mean(df_results.accuracy.values)
bleu = np.mean(df_results.bleu.values)
print("mean BLEU score over training data = {:.3f}".format(bleu))
print("mean acc over training data = {:.3f}".format(acc))
save_translation(valid_pairs_B,
encoder,
decoder,
MAX_LENGTH,
"b3_valid2.csv")
df_results = pd.read_csv("b3_valid2.csv")
acc = np.mean(df_results.accuracy.values)
bleu = np.mean(df_results.bleu.values)
print("mean BLEU score over valid data = {:.3f}".format(bleu))
print("mean acc over valid data = {:.3f}".format(acc))
```
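To compare the two runs at a glance, one can aggregate the four CSV files written above (a small sketch; it assumes the files exist in the working directory and uses the column names from `save_translation`):
```
import pandas as pd

files = {"model A / train": "b3_training1.csv",
         "model A / valid": "b3_valid1.csv",
         "model B / train": "b3_training2.csv",
         "model B / valid": "b3_valid2.csv"}
for name, path in files.items():
    df_r = pd.read_csv(path)
    print("{:<16s} BLEU = {:.3f}  acc = {:.3f}".format(
        name, df_r.bleu.mean(), df_r.accuracy.mean()))
```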
# Simple Naive Bayes Classifier
## T1. Load a dataset
The following code loads a dataset consisting of text messages and spam-ham labels.
```
from typing import List, Tuple, Dict, Iterable, Set
from collections import defaultdict
import re
import math
import pandas as pd
url = 'https://raw.githubusercontent.com/mlee-pnu/IDS/main/spam_dataset.csv'
df = pd.read_csv(url)
# TODOs
hams = df['Category'].value_counts()["ham"]
spams = df['Category'].value_counts()["spam"]
print(df['Category'].value_counts())
```
## T2. Spam filter for individual words
We first define a function ***tokenize()*** that converts a given text into a set of words.
Using the function, we now try to count the frequency of each word in each class (spam and ham).
Complete the following code and answer the following questions:
```
def tokenize(text: str) -> Set[str]:
    text = text.lower()
    all_words = re.findall("[a-z0-9']+", text)
    return set(all_words)

tokens: Set[str] = set()
token_spam_counts: Dict[str, int] = defaultdict(int)
token_ham_counts: Dict[str, int] = defaultdict(int)
spam = df[df.Category == 'spam']
ham = df[df.Category == 'ham']
for msg in spam['Message'].to_list():
    for token in tokenize(msg):
        tokens.add(token)
        token_spam_counts[token] += 1
for msg in ham['Message'].to_list():
    for token in tokenize(msg):
        tokens.add(token)
        token_ham_counts[token] += 1

# TODOs
word = "free"
n_word_spam = token_spam_counts[word]  # number of spam messages containing the word
n_word_ham = token_ham_counts[word]    # number of ham messages containing the word

p_spam = spam['Message'].count() / df['Message'].count()  # P(spam)
p_ham = ham['Message'].count() / df['Message'].count()    # P(ham)

p_word_given_spam = n_word_spam / spam['Message'].count()  # P(word|spam)
p_word_given_ham = n_word_ham / ham['Message'].count()     # P(word|ham)

# Bayes' rule: P(spam|word) = P(word|spam)P(spam) / P(word);
# with these empirical estimates it reduces to a simple count ratio.
n_word = n_word_spam + n_word_ham
p_spam_given_word = n_word_spam / n_word  # P(spam|word)
p_ham_given_word = n_word_ham / n_word    # P(ham|word)
print(p_spam_given_word, p_ham_given_word)
```
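The same computation generalizes to any word, so it is convenient to wrap it in a small helper (a sketch built on the count dictionaries above; the probe words are arbitrary examples):
```
def p_spam_given_word(word: str) -> float:
    # empirical P(spam | word); defaultdict returns 0 for unseen words
    n_spam = token_spam_counts[word]
    n_ham = token_ham_counts[word]
    if n_spam + n_ham == 0:
        return 0.5  # no evidence either way
    return n_spam / (n_spam + n_ham)

for w in ["free", "win", "meeting"]:
    print(w, "->", round(p_spam_given_word(w), 3))
```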
## T3. Spam filter that combines words: Naive Bayes
You received a text message "just do it" from an unknown sender.
Complete the function ***predict()*** that outputs the probability of the message being spam and the predicted label of the message.
```
text = "just do it"
# TODOs
def predict(text: str):
    k = 0.0  # smoothing factor (none for this task; see T4 for a smoothed version)
    p_spam = spams / (hams + spams)  # prior P(spam)
    p_ham = hams / (hams + spams)    # prior P(ham)
    log_spam = log_ham = 0.0
    text_tokens = tokenize(text)
    for token in tokens:
        # per-class word probabilities
        p_word_given_spam = (token_spam_counts[token] + k) / (spams + 2*k)  # P(word|spam)
        p_word_given_ham = (token_ham_counts[token] + k) / (hams + 2*k)    # P(word|ham)
        # Bernoulli Naive Bayes: words present in the message contribute p,
        # absent vocabulary words contribute (1 - p)
        if token in text_tokens:
            log_spam += math.log(p_word_given_spam)
            log_ham += math.log(p_word_given_ham)
        else:
            log_spam += math.log(1.0 - p_word_given_spam)
            log_ham += math.log(1.0 - p_word_given_ham)
    p_if_spam = math.exp(log_spam + math.log(p_spam))
    p_if_ham = math.exp(log_ham + math.log(p_ham))
    prob = p_if_spam / (p_if_spam + p_if_ham)
    label = "spam" if prob > 0.5 else "ham"
    return prob, label

print(predict(text))
```
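Note that `predict()` is a Bernoulli Naive Bayes: every vocabulary word contributes to the score, including absent words through `log(1 - p)`. A multinomial-style variant that only scores the words actually present in the message could look roughly as follows (a sketch; for simplicity it reuses the per-document denominators from above with Laplace smoothing, whereas a proper multinomial model would normalize by total word counts per class):
```
def predict_multinomial(text: str, k: float = 1.0):
    # start from the log-priors
    log_spam = math.log(spams / (spams + hams))
    log_ham = math.log(hams / (spams + hams))
    for token in tokenize(text):
        # Laplace-smoothed per-class word probabilities
        log_spam += math.log((token_spam_counts[token] + k) / (spams + 2*k))
        log_ham += math.log((token_ham_counts[token] + k) / (hams + 2*k))
    prob = math.exp(log_spam) / (math.exp(log_spam) + math.exp(log_ham))
    return prob, "spam" if prob > 0.5 else "ham"

print(predict_multinomial("just do it"))
```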
## T4. Smoothing method
You again received two text messages from unknown senders.
Complete the function ***spamFilter()*** that classifies a given message.
You may want to apply a smoothing method for this task.
```
textA = "reward! download your free ticket from our website www.pnu.edu"
textB = "call me and get your money back"

# TODOs
def spamFilter(text: str):
    k = 1.0  # Laplace smoothing factor: avoids zero probabilities for unseen words
    p_spam = spams / (hams + spams)  # prior P(spam)
    p_ham = hams / (hams + spams)    # prior P(ham)
    log_spam = log_ham = 0.0
    text_tokens = tokenize(text)
    for token in tokens:
        # smoothed per-class word probabilities
        p_word_given_spam = (token_spam_counts[token] + k) / (spams + 2*k)  # P(word|spam)
        p_word_given_ham = (token_ham_counts[token] + k) / (hams + 2*k)    # P(word|ham)
        if token in text_tokens:
            log_spam += math.log(p_word_given_spam)
            log_ham += math.log(p_word_given_ham)
        else:
            log_spam += math.log(1.0 - p_word_given_spam)
            log_ham += math.log(1.0 - p_word_given_ham)
    p_if_spam = math.exp(log_spam + math.log(p_spam))
    p_if_ham = math.exp(log_ham + math.log(p_ham))
    prob = p_if_spam / (p_if_spam + p_if_ham)
    label = "spam" if prob > 0.5 else "ham"
    return label, prob

print(spamFilter(textA))
print(spamFilter(textB))
```
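As a quick sanity check, one can run the filter over the whole dataset and measure accuracy (a sketch; it evaluates on the training data itself, so the number is optimistic, and the full-vocabulary loop makes it slow over several thousand messages):
```
correct = 0
for msg, cat in zip(df.Message, df.Category):
    label, _ = spamFilter(msg)
    correct += (label == cat)
print("accuracy on the full dataset = {:.3f}".format(correct / len(df)))
```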